Planck Length


agrays...@gmail.com

Jan 6, 2019, 2:39:25 AM
to Everything List
What is the argument for the claim that we cannot, in principle, measure any length smaller than Planck length? TIA, AG

Brent Meeker

Jan 6, 2019, 2:53:52 AM
to everyth...@googlegroups.com
To measure small things you need comparably short wavelengths.  If you
make a photon with a wavelength so short it can measure the Planck
length it will have so much mass-energy that it will fold spacetime
around it and become a black hole...so you won't be able to use it to
measure anything.

Brent

John Clark

Jan 6, 2019, 9:59:39 AM
to everyth...@googlegroups.com
There is a related concept, the Planck Mass, that also involves the 3 most fundamental constants in nature: the speed of light, the Planck constant, and the gravitational constant. If you take the Planck energy (c^5*h/(2*PI*G))^1/2 and confine it in a box one Planck length (G*h/(2*PI*c^3))^1/2 on a side it will turn into a Black Hole. To find the Planck Mass we use E=mc^2 and divide the Planck Energy by c^2. The Planck Mass works out to be 0.02 milligrams, about the mass of a single grain of salt; nothing less massive than the Planck Mass can form a Black Hole regardless of how much you compress it. Some, such as Roger Penrose, think this marks the boundary between the quantum realm and the realm of classical physics, but most think that's an oversimplification.

 John K Clark   

agrays...@gmail.com

Jan 6, 2019, 4:56:58 PM
to Everything List


On Sunday, January 6, 2019 at 7:53:52 AM UTC, Brent wrote:
To measure small things you need comparably short wavelengths.  If you
make a photon with a wavelength so short it can measure the Planck
length it will have so much mass-energy that it will fold spacetime
around it and become a black hole...so you won't be able to use it to
measure anything.

Brent

TY. That's clear enough. But there's a related question I was unable to explain to a friend recently. Suppose we have a small spherical cork floating on a lake, and we introduce a wave disturbance. If the wavelength is much larger than the diameter of the sphere, it will just bob up and down as the wave passes. But if the wavelength is comparable to the diameter, the wave will be partially reflected. What is a good *physical* argument for the existence of the reflected wave, tantamount to a detection of the cork? I am at a loss to offer a physical explanation. TIA, AG

Brent Meeker

Jan 6, 2019, 6:39:03 PM
to everyth...@googlegroups.com


On 1/6/2019 1:56 PM, agrays...@gmail.com wrote:


On Sunday, January 6, 2019 at 7:53:52 AM UTC, Brent wrote:
To measure small things you need comparably short wavelengths.  If you
make a photon with a wavelength so short it can measure the Planck
length it will have so much mass-energy that it will fold spacetime
around it and become a black hole...so you won't be able to use it to
measure anything.

Brent

TY. That's clear enough. But there's a related question I was unable to explain to a friend recently. Suppose we have a small spherical cork floating on a lake, and we introduce a wave disturbance. If the wavelength is much larger than the diameter of the sphere, it will just bob up and down as the wave passes. But if the wavelength is comparable to the diameter, the wave will be partially reflected. What is a good *physical* argument for the existence of the reflected wave, tantamount to a detection of the cork? I am at a loss to offer a physical explanation. TIA, AG

When the wavelength is on the order of the cork dimension or smaller the cork can't react to the wave as if it were just part of the water. Because of its extent it cannot move with the water at all points, so there are pressure gradients around the cork which become the source of scattered ripples.

Brent

agrays...@gmail.com

Jan 7, 2019, 8:03:17 AM
to Everything List


On Sunday, January 6, 2019 at 2:59:39 PM UTC, John Clark wrote:
There is a related concept, the Planck Mass, that also involves the 3 most fundamental constants in nature: the speed of light, the Planck constant, and the gravitational constant. If you take the Planck energy (c^5*h/(2*PI*G))^1/2 and confine it in a box one Planck length (G*h/(2*PI*c^3))^1/2 on a side it will turn into a Black Hole. To find the Planck Mass we use E=mc^2 and divide the Planck Energy by c^2. The Planck Mass works out to be 0.02 milligrams, about the mass of a single grain of salt; nothing less massive than the Planck Mass can form a Black Hole regardless of how much you compress it. Some, such as Roger Penrose, think this marks the boundary between the quantum realm and the realm of classical physics, but most think that's an oversimplification.

 John K Clark 

How does one calculate the Planck length from the fundamental constants G, h, and c, and having calculated it, how does one show that measuring a length that small with photons of approximately that wavelength would result in a black hole? TIA, AG
 

agrays...@gmail.com

Jan 7, 2019, 8:11:03 AM
to Everything List
Thank you, but I am unable to intuit the physicality of those pressure gradients and their wavelength dependence. I think I need to look up how scattering amplitudes are calculated to see the wavelength dependence of the scattering. I don't recall it being done in my classical or quantum physics courses, a long long time ago, in a galaxy far far away. AG

John Clark

Jan 7, 2019, 4:25:16 PM
to everyth...@googlegroups.com
On Mon, Jan 7, 2019 at 8:03 AM <agrays...@gmail.com> wrote:

> How does one calculate the Planck length from the fundamental constants G, h, and c, and having calculated it, how does one show that measuring a length that small with photons of approximately that wavelength would result in a black hole? TIA, AG
 
In any wave the speed of the wave is wavelength times frequency, and according to Planck E = h*frequency, so E = c*h/wavelength. Thus the smaller the wavelength the greater the energy. According to Einstein energy is just another form of mass (E = mc^2), so at some point the wavelength is so small and the photon is so energetic (aka massive) that the escape velocity is greater than the speed of light and the object becomes a Black Hole.
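To put rough numbers on this (a quick sketch with rounded constants; factor-of-2 conventions are ignored, so only the orders of magnitude matter):

```python
# rounded physical constants (SI units); an order-of-magnitude sketch only
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s

wavelength = 1.6e-35        # one Planck length, in meters
E = c * h / wavelength      # photon energy: E = h*frequency = c*h/wavelength
m = E / c**2                # equivalent mass from E = mc^2
r_grav = G * m / c**2       # gravitational radius scale of that mass-energy

# the photon's own gravitational radius exceeds the length it is probing,
# so it collapses the measurement region into a black hole
print(E, m, r_grav, r_grav > wavelength)
```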

Or you can look at it another way: we know from Heisenberg that to determine the position of a particle more precisely with light you have to use a smaller wavelength, and there is something called the "Compton wavelength" (Lc); to pin down the position of a particle of mass M to within one Compton wavelength would require light of enough energy to create another particle of that mass. The formula for the Compton wavelength is Lc = h/(2*PI*M*c).

Schwarzschild told us that the radius of a Black Hole (Rs), that is to say where the escape velocity is the speed of light, is Rs = 2GM/c^2; in this order-of-magnitude argument the factor of 2 is conventionally dropped, leaving GM/c^2. At some mass Lc will equal that radius, and that mass is the Planck mass; that Black Hole will have the radius of the Planck Length, 1.6*10^-35 meters.

Then if you do a little algebra:
GM/c^2 = h/(2*PI*M*c)
GM = h*c/(2*PI*M)
G*M^2 = h*c/(2*PI)
M^2 = h*c/(2*PI*G)
M = (h*c/(2*PI*G))^1/2    and that is the formula for the Planck Mass; it's 0.02 milligrams.

And the Planck Length turns out to be (G*h/(2*PI*c^3))^1/2, and the Planck time is the time it takes light to travel the Planck length.

The Planck Temperature Tp is sort of the counterpoint to Absolute Zero. Tp is as hot as things can get, because the black-body radiation given off by things at temperature Tp has a wavelength equal to the Planck Length, the distance light can move in the Planck Time of about 5*10^-44 seconds. The formula for the Planck temperature is Tp = Mp*c^2/k, where Mp is the Planck Mass and k is Boltzmann's constant, and it works out to be 1.4*10^32 degrees Kelvin. Beyond that point both Quantum Mechanics and General Relativity break down and nobody understands what if anything is going on.
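All of these quantities can be checked directly from G, h, c and k (a numerical sketch with rounded constants, so expect agreement only to two or three digits):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
h = 6.626e-34      # Planck constant, J s
c = 2.998e8        # speed of light, m/s
k = 1.381e-23      # Boltzmann constant, J/K
hbar = h / (2 * math.pi)

m_p = math.sqrt(hbar * c / G)      # Planck mass:        ~2.18e-8 kg (~0.02 mg)
l_p = math.sqrt(hbar * G / c**3)   # Planck length:      ~1.6e-35 m
t_p = l_p / c                      # Planck time:        ~5.4e-44 s
T_p = m_p * c**2 / k               # Planck temperature: ~1.4e32 K

# the defining coincidence: at the Planck mass the reduced Compton
# wavelength hbar/(m*c) equals the GM/c^2 radius used in the derivation
compton = hbar / (m_p * c)
radius = G * m_p / c**2
print(m_p, l_p, t_p, T_p, compton, radius)
```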

The surface temperature of the sun is 5.7*10^3 degrees Kelvin, so if it were 2.46*10^28 times hotter it would be at the Planck Temperature, and because radiant energy is proportional to T^4 the sun would be 3.67*10^113 times brighter. At that temperature, to equal the sun's brightness the surface area would have to be reduced by a factor of 3.67*10^113; the surface area of a sphere is proportional to the radius squared, so you'd have to reduce the sun's radius by (3.67*10^113)^1/2, and that is 6.05*10^56. The sun's radius is 6.95*10^8 meters, and 6.95*10^8 / 6.05*10^56 is 1.15*10^-48 meters.

That means a sphere at the Planck Temperature with a radius 10 thousand billion times SMALLER than the Planck Length would be as bright as the sun, but as far as we know nothing can be that small. If the radius were 10^13 times longer it would be as small as things can get, and the object would be (10^13)^2 = 10^26 times as bright as the sun. I'm just speculating, but perhaps that's the luminosity of the Big Bang; I say that because that's how bright things would be if the smallest thing we think can exist was as hot as we think things can get.
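The arithmetic chain above can be replayed step by step (a sketch using the post's rounded inputs, so agreement is only to a couple of digits):

```python
import math

T_planck = 1.4e32     # Planck temperature, K
T_sun = 5.7e3         # solar surface temperature, K
R_sun = 6.95e8        # solar radius, m
l_planck = 1.6e-35    # Planck length, m

hotter = T_planck / T_sun       # ~2.46e28 times hotter
brighter = hotter**4            # radiant power goes as T^4: ~3.7e113
shrink = math.sqrt(brighter)    # area goes as radius^2, so shrink the radius by the sqrt
r = R_sun / shrink              # ~1.15e-48 m
print(r, l_planck / r)          # the Planck length is ~1.4e13 times larger than r
```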

John K Clark

Lawrence Crowell

Jan 9, 2019, 10:42:59 AM
to Everything List
This is the basic argument. The Compton wavelength, or equivalently the de Broglie wavelength with v = c, is set equal to the Schwarzschild radius; that is how to derive the Planck length. The argument that nothing smaller exists just means the Heisenberg uncertainty principle can't isolate something smaller than its wavelength, a Fourier-transform version of the Nyquist frequency, and for general relativity anything smaller than a black hole is not observable. There is then some odd equivalency between black hole physics or general relativity and quantum physics. This means one is not able to isolate a quantum bit in some region smaller than a Planck area, or volume. The event horizon of a black hole is then a system of Planck-area pixels, or units of area. The Bekenstein formula is that the entropy of a black hole is

S = kA/(4ℓ_p^2)

for ℓ_p = sqrt{Għ/c^3} the Planck length. The area of the black hole horizon is A = 4πr^2 with r = 2GM/c^2. This Schwarzschild horizon area is then some integer multiple of the Planck area: A = 4Nℓ_p^2, and we find S = Nk. It is an equipartition result.
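To make the pixel picture concrete (an illustrative sketch, not from the post: a solar-mass Schwarzschild hole plugged into the formula above, with rounded constants):

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
hbar = 1.055e-34   # J s
k = 1.381e-23      # J/K

def horizon_pixels(M):
    """Number N of 4*l_p^2 horizon cells, so that S = N*k."""
    l_p2 = hbar * G / c**3        # Planck area l_p^2
    r_s = 2 * G * M / c**2        # Schwarzschild radius r = 2GM/c^2
    A = 4 * math.pi * r_s**2      # horizon area
    return A / (4 * l_p2)

M_sun = 1.989e30                  # kg
N = horizon_pixels(M_sun)
S = N * k                         # Bekenstein entropy, J/K
print(N, S)                       # N is ~1e77 for a solar-mass black hole
```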

LC

agrays...@gmail.com

Jan 17, 2019, 3:22:29 AM
to Everything List
Later I'll post some questions I have about your derivation of the Planck length, but for now here's a philosophical question: is there any difference between the claim that space is discrete and the claim, or conjecture, that we cannot in principle measure a length shorter than the Planck length?
TIA, AG

Philip Thrift

Jan 17, 2019, 5:22:35 AM
to Everything List
There are claims (theories, e.g. an LQG theory of space, essentially that "space is discrete") and measurements (data collected from instruments). There is no fundamental regime for matching claims and measurements: just whatever the scientific community ends up liking, in the end.

What you stated are two claims: that space is discrete, and that we cannot measure a length shorter than the Planck length. Both claims are subject to whatever measurements are recorded. These two claims appear to be close, but I think there is wiggle room for them to be different.

- pt

Bruno Marchal

Jan 17, 2019, 7:31:06 AM
to everyth...@googlegroups.com
That is a very good question. I have no answer. I don't think physicists have an answer either, and I do think that this requires the solution of the "quantum gravity" or "quantum space-time" problem.
With loop-gravity theory, I would say that the continuum is eventually replaced by something discrete, but not so with string theory, for example. With Mechanism, there are arguments that something must stay "continuous", but it might be only the distribution of probability (the real-complex amplitude).

Bruno





--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To post to this group, send email to everyth...@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.

Brent Meeker

Jan 17, 2019, 1:45:31 PM
to everyth...@googlegroups.com


On 1/17/2019 12:22 AM, agrays...@gmail.com wrote:
Later I'll post some questions I have about your derivation of the Planck length, but for now here's a philosophical question: is there any difference between the claim that space is discrete and the claim, or conjecture, that we cannot in principle measure a length shorter than the Planck length?
TIA, AG

The theory that predicts there is a shortest measured interval assumes a continuum.  There's no logical contradiction in this. But physicists tend to have a positivist attitude and think that a theory that assumes things, like arbitrarily short intervals, might be better expressed and simpler in some way that avoids those assumptions.  This attitude does not assume the mathematics itself is the reality, but only a description of reality; so there can be different descriptions of the same reality.

Brent

Philip Thrift

Jan 17, 2019, 3:02:06 PM
to Everything List
A theory that does this assumes a continuous mathematics.
But that doesn't mean every theory has to.

As Max Tegmark's little lecture to physicists says:

    Our challenge as physicists is to discover ... infinity-free equations.


Unless he is wrong in his premise, of course!

- pt
 

Bruno Marchal

Jan 18, 2019, 8:36:34 AM
to everyth...@googlegroups.com
That assumes non-mechanism, and thus bigger infinities. Tegmark is right: we cannot assume infinity at the ontological level (just the finite numbers 0, s(0), s(s(0)), …). But the physical reality is phenomenological, and requires an infinite domain of indetermination, giving some "observable" an infinite range. The best candidate could be the Graham-Preskill frequency operator (which they use more or less rigorously to derive the Born rule from some "many-worlds" interpretation of QM).

Bruno





Philip Thrift

Jan 18, 2019, 9:44:05 AM
to Everything List


On Friday, January 18, 2019 at 7:36:34 AM UTC-6, Bruno Marchal wrote:



That assumes non-mechanism, and thus bigger infinities. Tegmark is right: we cannot assume infinity at the ontological level (just the finite numbers 0, s(0), s(s(0)), …). But the physical reality is phenomenological, and requires an infinite domain of indetermination, giving some "observable" an infinite range. The best candidate could be the Graham-Preskill frequency operator (which they use more or less rigorously to derive the Born rule from some "many-worlds" interpretation of QM).

Bruno





I think it is possible some of this can be approached with what is referred to as higher-type computing, which concerns the characterization of the sets that can be exhaustively searched by an algorithm, in the sense of Turing, in finite time, as those that are topologically compact [1]: infinite sets that can be completely inspected in finite time in an algorithmic way, which perhaps defies intuition.

[1] Exhaustible sets in higher-type computation
[2] A Haskell monad for infinite search in finite time

from Martin Escardo's page

 - pt

Lawrence Crowell

Jan 18, 2019, 7:42:59 PM
to Everything List
The Planck length is just the smallest length beyond which you can isolate a quantum bit. Remember, it is the length at which the Compton wavelength of a black hole equals its Schwarzschild radius. It is a bit similar to the Nyquist frequency in engineering: in order to measure the frequency of a rotating system you must take pictures at a rate at least double that frequency. Similarly, to measure the frequency of an EM wave you need a wave with Fourier modes at 2 or more times the frequency you want to measure. The black hole is in a sense a fundamental cut-off in the time scale, or in a reciprocal manner the energy, at which one can sample space to find qubits.
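The sampling analogy can be illustrated directly (a small sketch, not from the post): a sinusoid sampled below twice its frequency is indistinguishable, sample for sample, from a lower-frequency alias.

```python
import math

fs = 4.0   # sampling rate, Hz: below the 6 Hz Nyquist rate for a 3 Hz signal
samples_3hz   = [math.sin(2 * math.pi * 3 * n / fs) for n in range(16)]
samples_alias = [math.sin(2 * math.pi * (3 - fs) * n / fs) for n in range(16)]

# the 3 Hz signal and its (3 - 4) = -1 Hz alias give identical samples,
# so undersampling cannot tell the two frequencies apart
err = max(abs(x - y) for x, y in zip(samples_3hz, samples_alias))
print(err)
```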

The levels of confusion over this are enormous. It does not tell us that spacetime is somehow sliced and diced into briquettes or pieces. It does not tell us that the quantum energy of some fields can't be far larger than the Planck energy, or equivalently the wavelength much smaller. This would be analogous to a resonance state, and there is no reason there can't be such a thing in quantum gravity. The Planck scale would suggest this sort of state may decay into a sub-Planckian energy. Further, it is plausible that quantum gravity, beyond what appears as a linearized weak-field approximation similar to the QED of photon bunched pairs, may only exist at most an order of magnitude above the Planck scale anyway. A holographic screen is then a sort of beam splitter at the quantum-classical divide.

LC

Bruno Marchal

Jan 19, 2019, 3:36:23 AM
to everyth...@googlegroups.com
That is the constructive move. With mechanism, this is given by S4Grz1, and/or by typing the combinators. It corresponds to the first person. Tegmark seems to oscillate between third and first person views, but when taking mechanism seriously *in the cognitive science* (and not in physics), we have to take both points of view, and derive their relations from self-reference. As I said, the 1p/3p relation is more subtle than the bird/frog change of scale.

You might try to explain the Haskell monad for infinite search in finite time. Mechanism explains this from the first person point of view, but there it is not seen as being something algorithmic.

Bruno

Philip Thrift

Jan 19, 2019, 5:36:22 AM
to Everything List
The key to the higher-type computing approach

    from Infinite sets that admit fast exhaustive search


is to relate a certain kind of computing to topology

   exhaustible sets are to compact sets as 
   computable functions are to continuous maps

There is one example in the above paper [code below]  (I haven't run any of his code).


It should really be called something like topological computing.

Programs that are like continuous maps have the property that even though they apparently deal with infinite objects, because these objects are (in a computationally defined way) topologically compact, their computing time is finite (and maybe even efficient).



[code from paper]
type Cantor = N -> Bit
foreveryC :: (Cantor -> Bool) -> Bool
equalC :: (Cantor -> N) -> (Cantor -> N) -> Bool
equalC f g = foreveryC (\a -> f a == g a)

f,g,h :: Cantor -> N
f a = a(10*a(3^80)+100*a(4^80)+1000*a(5^80))
g a = a(10*a(3^80)+100*a(4^80)+1000*a(6^80))
h a = if a(4^80) == 0 then a j else a(100+j)
    where i = if a(5^80) == 0 then 0 else 1000
          j = if a(3^80) == 1 then 10+i else i


The queries “equalC f g” and “equalC f h” answer
False and True respectively, in less than 3s 
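For comparison, here is a minimal Python rendering of the same "seemingly impossible" exhaustive search over the Cantor space of infinite 0/1 sequences (a sketch of Escardó's algorithm, not code from the paper; the names are mine, and it terminates only because each predicate inspects finitely many bits):

```python
def prepend(bit, a):
    # the sequence bit, a(0), a(1), ...
    return lambda n: bit if n == 0 else a(n - 1)

def find(p):
    # lazily build a sequence satisfying p if any sequence does;
    # otherwise some sequence is still returned, and p rejects it
    memo = []
    def a(n):
        if not memo:
            if forsome(lambda b: p(prepend(0, b))):
                memo.append((0, find(lambda b: p(prepend(0, b)))))
            else:
                memo.append((1, find(lambda b: p(prepend(1, b)))))
        bit, tail = memo[0]
        return bit if n == 0 else tail(n - 1)
    return a

def forsome(p):
    return p(find(p))

def forevery(p):
    return not forsome(lambda a: not p(a))

# three total functions Cantor -> int, each reading only bits 1 and 2
f = lambda a: 10 * a(2) + a(1)
g = lambda a: a(1) + 10 * a(2)
h = lambda a: 10 * a(1) + a(2)

print(forevery(lambda a: f(a) == g(a)))   # True:  f and g agree on every sequence
print(forevery(lambda a: f(a) == h(a)))   # False: they differ whenever a(1) != a(2)
```

The compactness of the Cantor space is what makes `forevery` total here despite it quantifying over uncountably many sequences.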




- pt

Bruno Marchal

Jan 20, 2019, 10:16:01 AM
to everyth...@googlegroups.com
That makes some sense. It corroborates what Brent said. To "see" beyond the Planck resolution, we need so much energy that we would create a black hole, and lose any available information. This does not mean that a shorter length is not possible in principle, just that we cannot make any practical sense of it.




The levels of confusion over this are enormous. It does not tell us that spacetime is somehow sliced and diced into briquets or pieces.

I agree. Besides, this might depend heavily on the solution of the quantum gravity problem. Loop gravity, as far as I understand it, does seem to impose some granularity on space-time. Superstrings do not, apparently.




It does not tell us that quantum energy of some fields can't be far larger than the Planck energy, or equivalently the wavelength much smaller.

OK.


This would be analogous to a resonance state, and there is no reason there can't be such a thing in quantum gravity. The Planck scale would suggest this sort of state may decay into a sub-Planckian energy.  Further, it is plausible that quantum gravity beyond what appears as a linearized weak field approximation similar to the QED of photon bunched pairs may only exist at most an order of magnitude larger than the Planck scale anyway. A holographic screen is then a sort of beam splitter at the quantum-classical divide.

This is a bit less clear to me, due to my incompetence to be sure. If you have some reference or link, that would help, but it is not urgent. I have not yet found time to study the Holographic principle of Susskind, but I have followed informal expositions given by him in some videos. A difficult subject, probably more so for a mathematical logician.

Bruno





LC

Bruno Marchal

Jan 20, 2019, 10:18:23 AM
to everyth...@googlegroups.com
As I said, a sort of topological intuition arises from the modes []p & p (p sigma_1), and quantum topologies appear there too, but also in []p & <>t & p ([] = Gödel's beweisbar, <> = ~[]~).

Bruno







Lawrence Crowell

Jan 20, 2019, 6:17:50 PM
to Everything List
I think we talked a bit on this list about hyper-Turing machines. These are conditions set up by various spacetimes where a Cauchy horizon makes an infinite computation accessible to a local observer. A nonhalting computation can have its output read by such an observer. These spacetimes are Malament-Hogarth spacetimes. The Planck scale may then be a way quantum gravity imposes a fundamental limit on what an observer can measure.

If one is to think of computation according to halting one needs to think in terms of nilpotent operators. For a group G with elements g, these act on vectors v so that gv = v'. These vectors can be states in a Hilbert space or fermionic spinors. The group elements are generated by algebraic operators A so that g = e^{iA}. Now if we have the nilpotent situation where Av = 0 without A or v being zero, then gv ≈ (1 + iA)v = v.

A time-ordered product of fields, often used in path integrals, is a sequence of operators similar to g, and we may then have that g_1g_2g_3 … g_n is a way that a system interacts. We might then have some condition that at g_m for m < n the set of group operations all return the same value, so the group has a nilpotent condition on its operators. This would then bear some analogy to the idea of a halted computation.

The question of whether there are nonhalting conditions is then most likely relevant to the spacetime physics of quantum fields. If we have a black hole of mass M it then has temperature T = 1/(8πGM) in natural units. Suppose this sits in a spacetime with a background of the same temperature. We might be tempted to say there is equilibrium, which is a sort of halted development. However, if the black hole emits a photon by Hawking radiation of mass-energy δm, so that M → M - δm, it is evident its temperature increases. Conversely, if it absorbs a photon from the thermal background then M → M + δm and its temperature decreases. This will then put the black hole in a state where it is now more likely to quantum evaporate or to grow unbounded by absorbing background photons.
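In full units this is the Hawking formula T = ħc³/(8πGMk), which reduces to 1/(8πGM) in natural units; the negative heat capacity is easy to check numerically (a sketch with rounded constants):

```python
import math

G = 6.674e-11; c = 2.998e8; hbar = 1.055e-34; k = 1.381e-23

def hawking_T(M):
    """Hawking temperature of a Schwarzschild black hole of mass M (kg), in K."""
    return hbar * c**3 / (8 * math.pi * G * M * k)

M_sun = 1.989e30
T = hawking_T(M_sun)    # ~6e-8 K for a solar-mass hole

# negative heat capacity: emitting energy makes the hole hotter and
# absorbing energy makes it colder, so thermal equilibrium is unstable
dm = 1e20
print(T, hawking_T(M_sun - dm) > T > hawking_T(M_sun + dm))
```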

This might then be a situation of nonhalting, and with gravitation or quantum gravity the moduli space is nonHausdorff, with orbits of gauge-equivalent potentials or moduli that are not bounded. We might then consider quantum gravitation as an arena where the quantum computation of its states is nonhalting, or might it be entirely uncomputable. The inability to isolate a qubit in a region smaller than the Planck scale may simply mean that no local observer can read the output of an ideal hyper-Turing machine in a Malament-Hogarth spacetime.


The levels of confusion over this are enormous. It does not tell us that spacetime is somehow sliced and diced into briquets or pieces.

I agree. Besides, this might depend heavily on the solution of the quantum gravity problem. Loop gravity, as far as I understand it, does seem to impose some granularity on space-time. Superstring do not, apparently.



String theory does some other things that may not be right as well. The compactification of spaces with dimensions in addition to 3-space plus time has certain implications, which do not seem to be borne out.
 
 


It does not tell us that quantum energy of some fields can't be far larger than the Planck energy, or equivalently the wavelength much smaller.

OK.


This would be analogous to a resonance state, and there is no reason there can't be such a thing in quantum gravity. The Planck scale would suggest this sort of state may decay into a sub-Planckian energy.  Further, it is plausible that quantum gravity beyond what appears as a linearized weak field approximation similar to the QED of photon bunched pairs may only exist at most an order of magnitude larger than the Planck scale anyway. A holographic screen is then a sort of beam splitter at the quantum-classical divide.

This is a bit less clear to me, due to my incompetence to be sure. If you have some reference or link, that would help, but it is not urgent. I have not yet found time to study the Holographic principle of Susskind, but I have followed informal expositions given by him in some videos. A difficult subject, probably more so for a mathematical logician.

Bruno


This last part involves some deep physics on how the holographic screen is in entangled states with Hawking radiation. 

LC 

Bruno Marchal

Jan 21, 2019, 6:09:50 AM
to everyth...@googlegroups.com
In a physical reality? But once we assume mechanism, we cannot make that assumption. Halting and non-halting computation is a very solid notion which does not depend on the physical reality, nor on any choice of the universal complete theory that we presuppose. We still have to assume one Turing universal system, but both theology and physics are independent of which universal system we start with. I usually use either arithmetic, or the combinators, or a universal diophantine polynomial.
With mechanism, the physical laws are not fundamental, but are explained “Turing-thropically”, using the logics of self-reference of Gödel, Löb, Solovay. 
To test empirically the digital mechanist hypothesis (in the cognitive science) we have to compare the physics deducible by introspection by a Turing machine with the physics observed. Thanks to QM, it fits up to now. But we are light years away from justifying string theory, or even classical physics. The goal is not to change physics, but to get the metaphysics right (with respect to that mechanist assumption and the mind-body problem). The notion of computation is the most solid epistemological notion, as with Church's thesis it admits a purely mathematical, even purely arithmetic, definition. Analysis and physics are ways the numbers see themselves when taking their first person indetermination in arithmetic into account.



is then most likely relevant to the spacetime physics of quantum fields. If we have a black hole of mass M it then has temperature T = 1/(8πGM). Suppose this sits in a spacetime with a background of the same temperature. We might be tempted to say there is equilibrium, which is a sort of halted development. However, if the black hole emits a photon by Hawking radiation of mass-energy δm so M → M - δm it is evident its temperature increases. Conversely if it absorbs a photon from the thermal background then M → M + δm and its temperature decreases.

I am not sure I understand this.



This will then put the black hole in a state where it is now more likely to quantum evaporate or to grow unbounded by absorbing background photons.

This might then be a situation of nonhalting,


The problem of the existence of infinite computation in the physical universe is an open problem in arithmetic. Arithmetic contains all non-halting computations, but it is unclear if the physical universe has to be finite or not. The first person indeterminacy suggests a priori many infinities, including continua, but the highly counter-intuitive nature of self-reference suggests caution in drawing conclusions too rapidly. With mechanism, a part of our past is determined by our (many) futures.




and with gravitation or quantum gravity the moduli space is nonHausdorff

That could be interesting. The topological semantics of the theology (G and G*) are nonHausdorff too.
Could be a coincidence, of course, as physics should be in the intensional variants of G*.




with orbits of gauge equivalent potentials or moduli that are not bounded. We might then consider quantum gravitation as an arena where the quantum computation of its states are nonhalting, or might they be entirely uncomputable. The inability to isolate a qubit in a region smaller may simply mean that no local observer can read the output of an ideal hyper-Turing machine from an HM spacetime.

OK, I think. That would make Mechanism wrong. That is testable, but the evidence favours mechanism.







The levels of confusion over this are enormous. It does not tell us that spacetime is somehow sliced and diced into briquets or pieces.

I agree. Besides, this might depend heavily on the solution of the quantum gravity problem. Loop gravity, as far as I understand it, does seem to impose some granularity on space-time. Superstrings do not, apparently.



String theory does some other things that may not be right as well. The compactification of spaces with dimensions in addition to 3-space plus time has certain implications, which do not seem to be borne out.

I cannot really judge this. I can agree that this is a bit the ugly part of that theory (I mean the compactififed dimension), but that is not an argument, and taste can differ ...





 
 


It does not tell us that quantum energy of some fields can't be far larger than the Planck energy, or equivalently the wavelength much smaller.

OK.


This would be analogous to a resonance state, and there is no reason there can't be such a thing in quantum gravity. The Planck scale would suggest this sort of state may decay into a sub-Planckian energy.  Further, it is plausible that quantum gravity beyond what appears as a linearized weak field approximation similar to the QED of photon bunched pairs may only exist at most an order of magnitude larger than the Planck scale anyway. A holographic screen is then a sort of beam splitter at the quantum-classical divide.

This is a bit less clear to me, due to my incompetence to be sure. If you have some reference or link, please share it, but it is not urgent. I have not yet found time to study Susskind's holographic principle, but I have followed informal expositions he has given in some videos. A difficult subject, probably more so for a mathematical logician.

Bruno


This last part involves some deep physics on how the holographic screen is in entangled states with Hawking radiation. 

That is interesting. Note that with mechanism, we know "for sure” that the ultimate reality (independent of us, the Löbian universal machines) has to be non-dimensional (as arithmetic and elementary computer science are). 

Lawrence Crowell

unread,
Jan 21, 2019, 7:19:07 PM1/21/19
to Everything List
A black hole that loses mass by Hawking radiation becomes a little hotter. A black hole that absorbs a quantum becomes a bit colder. As a result there is no equilibrium condition.
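This follows from the Hawking temperature T = ħc³/(8πGMk_B), which scales as 1/M: a black hole has negative heat capacity, so losing mass raises its temperature and radiation accelerates rather than settling to equilibrium. A quick numerical sketch (approximate constant values):

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.380649e-23      # Boltzmann constant, J/K

def hawking_temperature(mass_kg: float) -> float:
    """Hawking temperature T = hbar*c^3 / (8*pi*G*M*k_B)."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

M_sun = 1.989e30  # kg
T1 = hawking_temperature(M_sun)
T2 = hawking_temperature(M_sun / 2)  # after radiating away half its mass

# Negative heat capacity: the lighter black hole is hotter.
assert T2 > T1
print(f"T(M)   = {T1:.3e} K")
print(f"T(M/2) = {T2:.3e} K")
```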

LC

Philip Thrift

unread,
Jan 21, 2019, 7:49:12 PM1/21/19
to Everything List
One of the oddest things is when physicists use the language of (various) theories of physics to express what can or cannot be the case. It's just a language, and probably a wrong one.

There is a sense in which the Church/Turing thesis is true: All our languages are Turing in their syntax and grammar. What they refer to is another matter (pun intended).

- pt

Bruno Marchal

unread,
Jan 23, 2019, 6:52:01 AM1/23/19
to everyth...@googlegroups.com
They refer to the set of computable functions, or to the universal machine which understands that language. But not all languages are Turing universal. Only the unrestricted (type-0) grammars in the Chomsky hierarchy are Turing universal. Simple languages, like the “regular” ones, are typically not Turing universal. Bounded-loop formalisms cannot be either.

But the notion of language is ambiguous with respect to computability, and that is why I prefer to avoid that expression and always talk about theories (set of beliefs) or machine (recursively enumerable set of beliefs), which avoids ambiguity. 
For example, is “predicate calculus” Turing universal? We can say yes, given that the programming language PROLOG (obviously Turing universal) is a tiny subset of predicate logic. But we can say no, if we look at predicate logic as a theory. A Prolog program is then an extension of that theory, not something proved in predicate calculus.
Thus, I can make sense of your remark. Even the language with only one symbol {I}, and the rules that “I” is a wff, and if x is a wff, then Ix is too, can be said to be Turing universal, as each program can be coded by a number, which can be coded by a finite sequence of I's. But of course, that makes the notion of “universality” empty, as far as languages are concerned. 
Seen as a theory, predicate calculus is notoriously not universal. Even predicate calculus plus the natural numbers with the law of addition (Presburger arithmetic) is not universal. Or take RA with its seven axioms: take any axiom out of it and you get a complete-able theory, which thus cannot be Turing complete.
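The gap between a regular language and full computation can be made concrete: a fixed regular expression has only bounded memory, so it cannot recognize balanced parentheses of arbitrary nesting depth, while a program with an unbounded counter can. A small sketch (the regex pattern and helper names are illustrative):

```python
import re

def balanced_regex(s: str) -> bool:
    # A fixed regular expression can only track nesting up to a bounded
    # depth (here 2): regular languages have finite memory.
    return re.fullmatch(r'(\((\(\))*\))*', s) is not None

def balanced_counter(s: str) -> bool:
    # An unbounded counter (general computation) handles any depth.
    # Assumes the string contains only '(' and ')'.
    depth = 0
    for ch in s:
        depth += 1 if ch == '(' else -1
        if depth < 0:
            return False
    return depth == 0

assert balanced_regex('(())')        # within the regex's depth bound: fine
assert balanced_counter('(((())))')  # arbitrary depth: fine
assert not balanced_regex('(((())))')  # depth 4 exceeds the regex's bound
```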

Bruno






- pt

Philip Thrift

unread,
Jan 23, 2019, 1:01:51 PM1/23/19
to Everything List
Here's an example of a kind of "non-digital" language:

More Analog Computing Is on the Way


The door on this new generation of analog computer programming is definitely open. Last month, at the Association for Computing Machinery’s (ACM) conference on Programming Language Design and Implementation, a paper was presented that described a compiler that uses a text based, high-level, abstraction language to generate the necessary low-level circuit wiring that defines the physical analog computing implementation. This research was done at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Dartmouth College. The main focus of their investigation was to improve the simulation of biological systems. 


Configuration Synthesis for Programmable Analog Devices with Arco

Programmable analog devices have emerged as a powerful computing substrate for performing complex neuromorphic and cytomorphic computations. We present Arco, a new solver that, given a dynamical system specification in the form of a set of differential equations, generates physically realizable configurations for programmable analog devices that are algebraically equivalent to the specified system. On a set of benchmarks from the biological domain, Arco generates configurations with 35 to 534 connections and 28 to 326 components in 1 to 54 minutes.
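The input format the abstract describes, a set of differential equations, is easy to illustrate digitally. A minimal sketch of the kind of biological dynamical system such a tool targets, integrated by forward Euler (the Lotka-Volterra-style model and all rate constants here are illustrative, not taken from the paper):

```python
# Forward-Euler integration of a toy two-variable dynamical system
# (a Lotka-Volterra-style predator/prey model, the kind of biological
# ODE system a compiler like Arco maps to analog hardware).
# All parameter values are illustrative.

def simulate(x0: float, y0: float, dt: float = 0.001, steps: int = 10000):
    a, b, c, d = 1.1, 0.4, 0.4, 0.1   # illustrative rate constants
    x, y = x0, y0
    for _ in range(steps):
        dx = a * x - b * x * y        # prey growth minus predation
        dy = d * x * y - c * y        # predator growth minus death
        x += dt * dx
        y += dt * dy
    return x, y

x, y = simulate(10.0, 5.0)            # integrate to t = 10
print(f"x = {x:.3f}, y = {y:.3f}")
```

An analog device would realize the same two coupled equations as physical circuit dynamics rather than as a discrete-time loop.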


- pt
 

Lawrence Crowell

unread,
Jan 24, 2019, 6:54:46 AM1/24/19
to Everything List
My point is that in physics what might be called a halting condition is an attractor point or limit cycle. Equilibrium is the terminal point in the evolution of some system; think of Landauer's original paper on thermodynamics and information. The quantum field theory of black holes has no equilibrium condition. Now if the black hole runs away with Hawking radiation it will “explode” in a burst of gamma rays and other quanta. A Turing machine that does not halt can also be said to burn itself out; anyone who has programmed in assembler knows there were loops you could put a machine into that might do damage. 

Sorry for being slow on this. I forgot to get flu shots this year and I have been hit with a real doozy of a flu. Since Sunday night until yesterday I was horribly ill, and only now am beginning to feel normal. Get the shots, you really do not want this flu!

LC

Bruno Marchal

unread,
Jan 24, 2019, 8:14:15 AM1/24/19
to everyth...@googlegroups.com
Interesting.

Yet, that does not violate Church's thesis, even if it is very useful FAPP. But such computations arise in arithmetic, either directly or through an infinite sequence of approximations. If all decimals of the analog phenomenon need to be taken into account, then we are out of my working hypothesis, and even evolution theory becomes wrong, as evolution and life become sequences of miracles. But the goal of the authors here is not to learn anything in metaphysics, just to build efficacious machines. In that case mechanism explains the plausible necessity of such moves, including quantum computations (which also do not violate Church's thesis).

Bruno

Bruno Marchal

unread,
Jan 24, 2019, 8:20:03 AM1/24/19
to everyth...@googlegroups.com
Take care!

An interesting video which sheds a bit of light (for me at least) is the following talk by Susskind, although I have some problem with the notion of “surface of a photon”, to be sure: 


BTW, a rather nice (but long) introduction to GR is given here:

Philip Thrift

unread,
Jan 24, 2019, 9:19:34 AM1/24/19
to Everything List
I don't believe in (or know what are) miracles, although a real hypercomputer (one you could give any statement of arithmetic, e.g. Goldbach's conjecture, and it would check through all the infinitely many integers and tell you "true" or "false" within the hour) would be basically a miracle. But I do think that substrate matters.
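The contrast is easy to make concrete: verifying Goldbach's conjecture up to any finite bound is ordinary Turing computation; only closing the infinite quantifier would need the "miraculous" hypercomputer. A small sketch:

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test (fine for small n)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_holds(n: int) -> bool:
    """Check that even n >= 4 is a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

# A Turing machine can verify any finite prefix of the conjecture...
assert all(goldbach_holds(n) for n in range(4, 1000, 2))
# ...but only a hypercomputer could check all infinitely many even numbers.
```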

Hence in the PLTOS view (program, language, transformer/compiler, object, substrate), substrate can't be eliminated from the semantics of a program. In other words, in real programming, there are no such things as substrate-independent programs.


But matter is a mystery (this I've learned from Galen Strawson), so I do think there are mysteries.

- pt

Philip Thrift

unread,
Jan 24, 2019, 9:59:42 AM1/24/19
to Everything List
I used to think that there could be true hypercomputation (what are called super-Turing machines) in nature, but now I think that there is no such thing (though anything remains possible, of course).

But the idea of substrate-independent Turing machines is incomplete.

I shouldn't say this (it will jinx me!) but I've never gotten a flu shot and I haven't gotten the flu in over 40 years.

But I hope the flu program doesn't start running in / affect my substrate!

- pt

Lawrence Crowell

unread,
Jan 24, 2019, 1:57:00 PM1/24/19
to Everything List
I hate to pop your bubble here, but a few years ago at a New Year's party a person whose cancer had gone into remission made the statement that she never got colds or flus. A doctor I know was there and responded that not getting these sicknesses is a risk factor for cancer! The woman died last summer with the return of her non-Hodgkin's lymphoma. 

Hyper-Turing computations or results are not accessible to local observers.

LC

Philip Thrift

unread,
Jan 24, 2019, 3:03:10 PM1/24/19
to Everything List
What about the interviews of people over 100 who say they've never had a cold or the flu? 

And where are these hyper-Turing processes occurring?

- pt

Lawrence Crowell

unread,
Jan 25, 2019, 5:48:38 AM1/25/19
to Everything List
Hypercomputations run into extreme energy or frequency, so the conclusion is that they occur in black holes or at trans-Planckian scales we can't observe. In a sense it is a sort of renormalization, treated as a p-adic regularization of quantum gravity.

When it comes to cold and flu I am just echoing what I was told. You would have to research this out more extensively.

LC

Philip Thrift

unread,
Jan 25, 2019, 7:06:09 AM1/25/19
to Everything List
I think "hypercomputing" is not needed in the quantum space (LQG) model of black holes (the recent Penn State, LSU model).

As for the flu, I'm afraid researching it might jinx me. :)

- pt

Bruno Marchal

unread,
Jan 25, 2019, 7:27:44 AM1/25/19
to everyth...@googlegroups.com
Because you assume some primary substrate. And then you need, coherently, to assume non-mechanism. No problem, but the current evidence favours Mechanism, and there has never been any evidence for substrate. Adding substrate to the picture makes the mind-body problem almost insoluble, at least without invoking some precise non-computationalist theory of mind. I start from the computationalist theory of mind, show that we have to derive a phenomenology of matter in a special (self-referentially based) manner, and nature seems to confirm this. The illusion of matter is easier to explain once we have a theory of consciousness than it is to derive a theory of consciousness from some notion of substrate (which is usually conceived as being inert).
We are working in different theories. You might think about a way to motivate your ontological commitment to some primitive substance. The books in physics do not provide such motivation, as they do not address the mind-body problem (even if Everett's quantum mechanics already looks like a solution to the mechanist mind-body problem).

Bruno






But matter is a mystery (this I've learned from Galen Strawson), so I do think there are mysteries.

- pt

Bruno Marchal

unread,
Jan 25, 2019, 7:31:22 AM1/25/19
to everyth...@googlegroups.com
That is not obvious. Have you understood the first person indeterminacy (step 3 in the sane04 paper)? This imposes a sort of random oracle on the first person associated with the infinitely many computations going through their actual state. It is hard in that case to prove that non-computable processes are not available, even to a local observer. But I agree that except for quantum randomness there is not much evidence for more than this.

Philip Thrift

unread,
Jan 25, 2019, 8:53:02 AM1/25/19
to Everything List
Just to note that the "substrate" terminology is used in computing (as above):

  Programmable analog devices have emerged as a powerful computing substrate
  for performing complex neuromorphic and cytomorphic computations.  

It's a word combined with "computing" like love and marriage.

- pt

John Clark

unread,
Jan 25, 2019, 11:14:24 AM1/25/19
to everyth...@googlegroups.com
On Thu, Jan 24, 2019 at 9:59 AM Philip Thrift <cloud...@gmail.com> wrote:

> I shouldn't say (if will jinx me!) but I've never gotten a flu shot and I haven't gotten the flu in over 40 years.

It's odd: no invention in human history has saved more lives than vaccines, and yet from the day the first one was invented in 1796 there has been unscientific resistance against its use; it is the reason Polio was not eliminated from the face of the Earth decades ago and the reason there is currently a measles outbreak in anti-vaccine hotspots in the USA. And no less a person than the president has spread the ridiculous rumor that vaccinations cause autism.

  John K Clark
 

Philip Thrift

unread,
Jan 25, 2019, 2:36:57 PM1/25/19
to Everything List
I had the usual childhood diseases (measles and mumps). Salk announced the successful test of his polio vaccine  the month I was born. ("Salk went on CBS radio to report a successful test on a small group of adults and children on 26 March 1953; two days later, the results were published in JAMA." - Wikipedia)

When I was 21 (in college) I had an extremely bad flu and I was in a college infirmary bed for 3 or 4 days. I think I had some minor flus in my 20s, but since age 30 I can't recall anything that kept me out of work. And after 40 I can't remember any flu. Simple colds, yes.

I should be getting the yearly flu shot (or shots?).


- pt

Lawrence Crowell

unread,
Jan 26, 2019, 3:25:59 PM1/26/19
to Everything List
LQG of course breaks Lorentz symmetry near the Planck scale. The finite elements have reduced diffeomorphic symmetry, which buries away any such problems. The numerical simulations you reference are a typical case of computer science: input variables in, output variables out. LQG has a hard renormalization UV cutoff that breaks the symmetry of the field. 

LC
 

Lawrence Crowell

unread,
Jan 26, 2019, 3:36:34 PM1/26/19
to Everything List
On Friday, January 25, 2019 at 10:14:24 AM UTC-6, John Clark wrote:
I simply forgot to get flu shots this year. I have gotten two cases of the flu. Last week's was a fairly ordinary flu that was annoying. The flu I had this week was horrible. It hit me Sunday evening and within 10 minutes I was staggering. I had a 102 F fever and had these bizarre delirium dreams. The fever ended on Tuesday, but then the prolonged part began. Influenza can cause the lungs to fill with fluid; my left lung was filled and this hurt a lot. I felt as if I had been impaled through the left chest. It took until today to finally expectorate that out. There are some vestiges of this, but it is largely gone. I am completely exhausted.

LC
 

Philip Thrift

unread,
Jan 26, 2019, 5:54:58 PM1/26/19
to Everything List
In LQG, or quantum space models in general, the Lorentz group [ https://en.wikipedia.org/wiki/Lorentz_group ] would be replaced by a different mathematics. 

All of the mathematics of conventional physics has to be "quantized" all the way down.

- pt


 

Philip Thrift

unread,
Jan 26, 2019, 6:01:02 PM1/26/19
to Everything List
It's a subject worth exploring of course:



Discrete Lorentz symmetry and discrete time translational symmetry

(Submitted on 1 Aug 2017 (v1), last revised 19 Feb 2018 (this version, v2))
The Lorentz symmetry and the space and time translational symmetry are fundamental symmetries of nature. Crystals are the manifestation of the continuous space translational symmetry being spontaneously broken into a discrete one. We argue that, following the space translational symmetry, the continuous Lorentz symmetry should also be broken into a discrete one, which further implies that the continuous time translational symmetry is broken into a discrete one. We deduce all the possible discrete Lorentz and discrete time translational symmetries in 1+1-dimensional spacetime, and show how to build a field theory or a lattice field theory that has these symmetries.

- pt 

Lawrence Crowell

unread,
Jan 27, 2019, 8:19:05 AM1/27/19
to Everything List
The spinorial Lorentz group for (½, 0)⊕(0, ½) is SL(2, C). This being SL(2, C) = SL(2, R)×SL(2, R), there is a modular subgroup of SL(2, R): the linear fractional transformations SL(2, Z) ⊂ SL(2, R). This defines a set of equivalent orbits or paths. This is a discrete Lorentz symmetry for gauge- or coordinate-condition-equivalent moduli. 

It is not commonly thought that this is what spacetime is near the Planck scale, unless, I suppose, you are an LQG maven. It is connected with orbits of strings, with Teichmüller spaces of dimension 6g - 6, and so forth. 
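The modular subgroup mentioned here can be checked concretely: SL(2, Z) matrices have integer entries and determinant 1, and act on the upper half-plane by linear fractional (Möbius) transformations, preserving Im(z) > 0 since Im(z') = Im(z)/|cz + d|². A small sketch using the two standard generators:

```python
# The modular group SL(2, Z) acts on the upper half-plane by Mobius
# transformations z -> (a z + b)/(c z + d). A quick check with the
# two standard generators S and T.

def mobius(m, z):
    (a, b), (c, d) = m
    return (a * z + b) / (c * z + d)

S = ((0, -1), (1, 0))   # z -> -1/z
T = ((1, 1), (0, 1))    # z -> z + 1

z = 0.5 + 2.0j          # a point in the upper half-plane
for m in (S, T):
    (a, b), (c, d) = m
    assert a * d - b * c == 1      # determinant 1: element of SL(2, Z)
    assert mobius(m, z).imag > 0   # the upper half-plane is preserved

print(mobius(S, z), mobius(T, z))
```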

LC

Philip Thrift

unread,
Jan 27, 2019, 2:09:05 PM1/27/19
to Everything List
I can't say where theoretical physics will be decades from now (or what new experiments and astronomical data will reveal), but it is dubious that continuous mathematical models will still be in place at the fundamental (general relativity) level.

- pt

Philip Thrift

unread,
Jan 28, 2019, 12:39:48 AM1/28/19
to Everything List

When it comes to space, though, there can be no “smaller,” because size itself is a spatial concept. The building blocks cannot presume space if they are to explain it. They must have neither size nor location; they are everywhere, spanning the entire universe, and nowhere, impossible to point to. What would it mean for things not to have positions? Where would they be? “When we talk about emergent space-time, it must come out of some framework that is very far from what we’re familiar with,” Nima Arkani-Hamed says.


Within Western philosophy, the realm beyond space has traditionally been considered a realm beyond physics — the plane of God’s existence in Christian theology. In the early eighteenth century, Gottfried Leibniz’s “monads” — which he imagined to be the primitive elements of the universe — existed, like God, outside space and time. His theory was a step toward emergent space-time, but it was still metaphysical, with only a vague connection to the world of concrete things. If physicists are to succeed in explaining space as emergent, they must claim the concept of spacelessness as their own.

Einstein foresaw these difficulties. “Perhaps... we must also give up, by principle, the space-time continuum,” he wrote. “It is not unimaginable that human ingenuity will some day find methods which will make it possible to proceed along such a path. At the present time, however, such a program looks like an attempt to breathe in empty space.”

John Wheeler, the renowned gravity theorist, speculated that space-time is built out of “pregeometry,” but admitted that this was nothing more than “an idea for an idea.” Even someone as irrepressible as Arkani-Hamed has had his doubts: “These problems are very hard. They’re outside our usual language for talking about them.”

What keeps Arkani-Hamed going is that he and his colleagues have found just the sort of methods Einstein said they’d have to — ways to describe physics in the absence of space, to breathe in the vacuum. He has put these efforts into historical perspective: “For 2,000-plus years, people asked about the deep nature of space and time, but they were premature. We’ve finally arrived at the epoch where you can pose the questions and hope for some meaningful answer.”



- pt 

Bruno Marchal

unread,
Jan 28, 2019, 7:27:37 AM1/28/19
to everyth...@googlegroups.com
But those analog computations, although they could be very useful in practice, do not change the consequences of the theory, unless you claim that they provide us with a method to compute new functions, which would violate the Church-Turing thesis (which I doubt very much). Keep in mind that with mechanism, the physical reality has analog parts, which might or might not be used by our bodies, although there is no evidence (that I know of) for this. I follow the idea of not adding any hypothesis to a theory unless there is strong evidence for it.

Bruno

Philip Thrift

unread,
Jan 28, 2019, 9:07:00 AM1/28/19
to Everything List
"Analog" computing is a bit odd. Assuming real numbers (as commonly defined in math) do not exist in nature (which I don't think they do), there is nothing in the Church-Turing sense of semantics that the analog computer computes differently. But in other semantics (taking physical aspects into account, power consumption for example) the analog substrate does matter.

- pt

Bruno Marchal

unread,
Jan 29, 2019, 6:30:18 AM1/29/19
to everyth...@googlegroups.com
Linear logic handles the notion of resource, and a machine cannot distinguish a relative substrate as defined in some ontologically primary matter/substrate setting from one defined in arithmetic. Of course, by invoking an ontological commitment, you can doubt any theory. Maybe cars are driven by invisible horses, and thermodynamics is a fake theory.

As I try to solve the mind-body problem in the Mechanist frame, I cannot use any ontological commitment other than the term of some arbitrary but fixed universal system. 

You assume some God, but that makes everything more complex, without evidence for doing so, except naive physical realism, which does not work with Mechanism.

Philip Thrift

unread,
Jan 29, 2019, 9:03:03 AM1/29/19
to Everything List
There is no mind|body problem.
Only a language|body problem.


- pt 

Bruno Marchal

unread,
Jan 30, 2019, 6:45:34 AM1/30/19
to everyth...@googlegroups.com
With mechanism, we can identify body, words, and numbers, and it is a pure third person notion, but mind has a first person part (indeed called the soul or personal consciousness) which is pure 1p. The mind-body problem consists in linking those two things without magic or ontological commitment. The solution suggested by Theaetetus in Plato was refuted by Socrates (in Plato), but incompleteness refutes Socrates' argument and rehabilitates Theaetetus' idea (the soul or first person knower is the true believer).
You can compare this with the semantic problem for language/body. To associate a semantics with a program or machine is related to the problem of associating a mind or a meaning with a body or a code. The problem is virtually the same: once a theory/body is “rich enough”, its semantics escapes it and becomes multiple. Rich theories have many non-isomorphic models/semantics, a bit like any computational state is supported by infinitely many computational situations, and some indeterminacy has to be taken into account.

Bruno




Philip Thrift

unread,
Jan 30, 2019, 5:14:39 PM1/30/19
to Everything List
Epicurus was born about the time Plato died. His "atomism" had atoms for consciousness (mind) that were mixed with the bodily atoms. Modern science rejected that concept, until the recent revival of (material) panpsychism produced an updated version of it.

- pt

 

Bruno Marchal

unread,
Jan 31, 2019, 7:28:14 AM1/31/19
to everyth...@googlegroups.com
Unfortunately this explains neither what the atoms are and where they come from, nor what consciousness is and where it comes from. Mechanism explains this entirely, up to the testability of all its consequences, which, as everywhere in fundamental science, needs perpetual doubt and constant verification and re-verification. 

If the theories S4Grz1, Z1*, X1* violate nature, then we will have some evidence for non-mechanism, and thus for primitive matter. But assuming primitive matter a priori seems like wanting not to understand the problem, or hiding it under an ontological commitment, as materialists have done for 1500 years, if not right since Aristotle.

Bruno

Philip Thrift

unread,
Jan 31, 2019, 9:40:49 AM1/31/19
to Everything List
On "where do atoms come from" I guess any physicist  you meet today has as good (or bad) an answer as any, in their way of thinking, anyway.

On consciousness: 

In a micropsychist* approach, the lowest-level psychical properties could appear in the form of their own material subatomic entities, like quarks —  quirks? :) —  in current physical theories. Thus human-level consciousness is "constituted" from lower-level material entities possessing lower-level psychical features.


According to constitutive micropsychism, the smallest parts of my brain have very basic forms of consciousness, and the consciousness of my brain as a whole is in some sense made up from the consciousness of its parts. This is the form of panpsychism that suffers most acutely from the combination problem, which we will explore below. However, if it can be made sense of, constitutive micropsychism promises an elegant and parsimonious view of nature, with all the richness of nature accounted for in terms of facts at the micro-level.


- pt 

Bruno Marchal

unread,
Feb 1, 2019, 8:19:00 AM2/1/19
to everyth...@googlegroups.com
On 31 Jan 2019, at 15:40, Philip Thrift <cloud...@gmail.com> wrote:



On Thursday, January 31, 2019 at 6:28:14 AM UTC-6, Bruno Marchal wrote:

On 30 Jan 2019, at 23:14, Philip Thrift <cloud...@gmail.com> wrote:



On Wednesday, January 30, 2019 at 5:45:34 AM UTC-6, Bruno Marchal wrote:

As I try to solve the mind-body problem in the Mechanist frame, I cannot use any ontological commitment other than the term of some arbitrary but fixed universal system. 

You assume some God, but that makes everything more complex, without evidences why to do so, except naive physical realism, but that does not work with Mechanism.

Bruno




There is no mind|body problem.
Only a language|body problem.


With mechanism, we can identify body, words, numbers, and it is a pure third person notion, but mind has a first person part (indeed called the soul or the personal consciousness) which is pure 1p. The mind body problem consists in linking, without magic or ontological commitment those two things. The solution suggested by Theaetetus in Plato, has been refuted by Socrates (in Plato) but incompleteness refutes Socrates argument, and rehabilitates Theatetus’idea (the soul or the first person knower is the true-believer).
You can compare this with the semantic problem for language/body. To associate a semantic to a program or machine is related to the problem of associating a mind or a meaning to a body or to a code. The problem is virtually the same: once a theory/body is “rich enough”, its semantics escapes it and get multiple. Rich theories have many non isomorphic models/semantics, a bit like any computational state is supported by infinitely many computational situation, and some indeterminacy has to be taken into account.

Bruno



Epicurus was born about the time Plato died. His "atomism" had atoms for consciousness (mind) that were mixed with the bodily atoms. Modern science rejected that concept, until the recent revival of (material) panpsychism has a updated version of it.


Unfortunately this does not explain neither what the atoms and where they comes from, nor what is consciousness and where it comes from. Mechanism explains this entirely, up to the testability of all its consequences, which, like every where in fundamental science, needs a perpetual doubt and constant verification and re-verification. 

If the theory S4Grz1, Z1*, X1* violate nature, then we will have some evidence for no-mechanism, and thus for primitive matter. But assuming primitive matter a priori seems like wanting to not understand the problem, or hiding it under ontological commitment, like materialists do since 1500 years, if not right since Aristotle.

Bruno




On "where do atoms come from" I guess any physicist  you meet today has as good (or bad) an answer as any, in their way of thinking, anyway.

They usually assume a primary physical reality. They make the physical universe into a (non-personal) god. But that explains nothing, even if it is very interesting as physics. Physicists are just NOT metaphysicists, except very bad ones on the week-end or after retirement.

An explanation of X must not assume X, or, if it does, the recursion employed must be entirely justified too.





On consciousness: 

In a micropsychist* approach, the lowest-level psychical properties could appear in the form of their own material subatomic entities, like quarks —  quirks? :) —  in current physical theories. Thus human-level consciousness is "constituted" from lower-level material entities possessing lower-level psychical features.

I don’t see an atom of explanation of consciousness here. That seems just a more sophisticated way to hide the problem under the rug of microphysics, without addressing any of the questions raised by philosophers of mind or cognitive scientists. If you dig in that direction, both matter and consciousness become only more obscure. 





According to constitutive micropsychism, the smallest parts of my brain have very basic forms of consciousness, and the consciousness of my brain as a whole is in some sense made up from the consciousness of its parts. This is the form of panpsychism that suffers most acutely from the combination problem, which we will explore below. However, if it can be made sense of, constitutive micropsychism promises an elegant and parsimonious view of nature, with all the richness of nature accounted for in terms of facts at the micro-level.

I am skeptical this can work, and of course, it is incompatible with Digital Mechanism. This one explains consciousness in the most standard theological way (Theaetetus), and it explains matter in an entirely new way, as number hallucination, which provably exists in arithmetic (once we bet the brain is Turing emulable).

Bruno





Philip Thrift

unread,
Feb 1, 2019, 8:52:00 AM2/1/19
to Everything List
In any case, one of the "micropsychists"  has a new paper just out:


"According to the fusion view ... when micro- or protoconscious entities come together in the right way, they fuse or 'blend' together to form a single unified consciousness. ..."

Is Consciousness Intrinsic? A Problem for the Integrated Information Theory
Hedda Hassel Mørch
Journal of Consciousness Studies 26 (1-2):133-162(30) (2019)


Abstract
The Integrated Information Theory of consciousness (IIT) claims that consciousness is identical to maximal integrated information, or maximal Φ. One objection to IIT is based on what may be called the intrinsicality problem: consciousness is an intrinsic property, but maximal Φ is an extrinsic property; therefore, they cannot be identical. In this paper, I show that this problem is not unique to IIT, but rather derives from a trilemma that confronts almost any theory of consciousness. Given most theories of consciousness, the following three claims are inconsistent. INTRINSICALITY: Consciousness is intrinsic. NON-OVERLAP: Conscious systems do not overlap with other conscious systems (a la Unger’s problem of the many). REDUCTIONISM: Consciousness is constituted by more fundamental properties (as per standard versions of physicalism and Russellian monism). In view of this, I will consider whether rejecting INTRINSICALITY is necessarily less plausible than rejecting NON-OVERLAP or REDUCTIONISM. I will also consider whether IIT is necessarily committed to rejecting INTRINSICALITY or whether it could also accept solutions that reject NON-OVERLAP or REDUCTIONISM instead. I will suggest that the best option for IIT may be a solution that rejects REDUCTIONISM rather than INTRINSICALITY or NON-OVERLAP.


- pt

Bruno Marchal

unread,
Feb 1, 2019, 10:41:00 AM2/1/19
to everyth...@googlegroups.com
This is weird. All programs are maximally integrated information, I would say. But I doubt they are all conscious. At least, from the abstract, the author is aware of the error consisting in identifying a first person notion with a third person notion (she uses intrinsic and extrinsic for this). 

I agree with intrinsicality, but “non-overlap” seems to use the identity thesis, which is inconsistent with mechanism; and reductionism is simply false with Mechanism. The price to pay, which is also the wonderful gift, is that physics becomes reducible to digital machine theology, which is a subbranch of arithmetic. 

Physicalism is simply refuted when we postulate that the brain/body is Turing emulable. Consciousness is not attached to any particular computational history, but to some set of relative histories having some measure implied by the logic of the observable mode of the universal machine.

The people here are blinded by their belief in some primary physical universe, but until there is some evidence for this, it is only a useless complication.

Bruno  

Philip Thrift

unread,
Feb 1, 2019, 1:16:01 PM2/1/19
to Everything List
The difference between the informationists (IIT - integrated information theory - that consciousness is a property of sufficiently [large] complex networks of [purely] information  processing units) and the micropsychists (that both psychical and physical properties reside to some degrees in all levels of matter) is vast. This paper by Mørch points to a path to "fuse" the two approaches. That makes it interesting to the readership of Journal of Consciousness Studies.


What I do find useless in (philosophy of) science is the language of reduction and emergence. They are really "unscientific" terms.

- pt




Brent Meeker

unread,
Feb 1, 2019, 2:54:15 PM2/1/19
to everyth...@googlegroups.com


On 2/1/2019 5:52 AM, Philip Thrift wrote:
In any case, one of the "micropsychists"  has a new paper just out:


"According to the fusion view ... when micro- or protoconscious entities come together in the right way, they fuse or 'blend' together to form a single unified consciousness. ..."

Is Consciousness Intrinsic? A Problem for the Integrated Information Theory
Hedda Hassel Mørch
Journal of Consciousness Studies 26 (1-2):133-162(30) (2019)


Abstract
The Integrated Information Theory of consciousness (IIT) claims that consciousness is identical to maximal integrated information, or maximal Φ. One objection to IIT is based on what may be called the intrinsicality problem: consciousness is an intrinsic property, but maximal Φ is an extrinsic property; therefore, they cannot be identical.

A more cogent objection is that it attributes lots of consciousness to a Vandermonde matrix:

https://www.scottaaronson.com/blog/?p=1799

Brent

Philip Thrift

unread,
Feb 2, 2019, 1:58:14 AM2/2/19
to Everything List


On Friday, February 1, 2019 at 1:54:15 PM UTC-6, Brent wrote:


On 2/1/2019 5:52 AM, Philip Thrift wrote:
In any case, one of the "micropsychists"  has a new paper just out:


"According to the fusion view ... when micro- or protoconscious entities come together in the right way, they fuse or 'blend' together to form a single unified consciousness. ..."

Is Consciousness Intrinsic? A Problem for the Integrated Information Theory
Hedda Hassel Mørch
Journal of Consciousness Studies 26 (1-2):133-162(30) (2019)


Abstract
The Integrated Information Theory of consciousness (IIT) claims that consciousness is identical to maximal integrated information, or maximal Φ. One objection to IIT is based on what may be called the intrinsicality problem: consciousness is an intrinsic property, but maximal Φ is an extrinsic property; therefore, they cannot be identical.

A more cogent objection is that it attributes lots of consciousness to a Vandermonde matrix:

https://www.scottaaronson.com/blog/?p=1799

Brent




Scott Aaronson wrote this about 5 years ago. I haven't looked to see if he has anything new.

Regarding informationism vs. panpsychism, he only addresses the former.

I’ve just conjured into my imagination beings whose Φ-values are a thousand, nay a trillion times larger than humans’, yet who are also philosophical zombies: entities that there’s nothing that it’s like to be.  

That of course panpsychists agree with.

He proceeds:

Let S=F_p, where p is some prime sufficiently larger than n, and let V be an n×n Vandermonde matrix over F_p—that is, a matrix whose (i,j) entry equals i^(j-1) (mod p).  Then let f:S^n→S^n be the update function defined by f(x)=Vx. 

He concludes: the fact that Integrated Information Theory is wrong—demonstrably wrong, for reasons that go to its core—puts it in something like the top 2% of all mathematical theories of consciousness ever proposed.
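Aaronson's construction is straightforward to write down. A direct sketch of the matrix he defines (the small values of n and p here are my choices for illustration, not from the post):

```python
# n x n Vandermonde matrix over F_p: entry (i, j) is i^(j-1) mod p,
# with i and j running from 1 to n. The update rule is f(x) = Vx over F_p.
def vandermonde(n, p):
    return [[pow(i, j - 1, p) for j in range(1, n + 1)]
            for i in range(1, n + 1)]

def f(V, x, p):
    # matrix-vector product reduced mod p
    return [sum(a * b for a, b in zip(row, x)) % p for row in V]

n, p = 4, 11                   # p prime, "sufficiently larger than n"
V = vandermonde(n, p)
y = f(V, [1, 2, 3, 4], p)      # one update step
```

Since p exceeds n, the base points 1..n are distinct mod p, so V is invertible and the update is a permutation of the state space — which is what lets Aaronson argue the system has huge Φ while doing nothing cognitively interesting.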


Now here is where panpsychists diverge from this way of thinking: Everything Scott wrote above involves ultimately computing with numerical entities as the "atoms" (so to speak) of what the "computer" is computing with. What the panpsychists are saying is that it is not numerical entities (numericals: Ns) at all that are at the base of the computing, but experiential entities (experientials: Es). Es are as basic (ontologically) as Ns.

Defining what Es are is the fundamental problem for panpsychists (vs. numerists, or informationists).


- pt

 

Philip Thrift

unread,
Feb 2, 2019, 2:20:16 AM2/2/19
to Everything List
The commenters to Scott's post seem to try to get into this with ψ-properties vs. Φ-properties.

- pt

Brent Meeker

unread,
Feb 2, 2019, 1:48:00 PM2/2/19
to everyth...@googlegroups.com
Yes, Scott's analysis assumes that consciousness is characterized by some kind of computation...as does Tononi.  But he observes that whatever your theory of consciousness is it needs to at least roughly agree as to who and what is conscious.  A theory that says a large Vandermonde matrix is conscious fails that test.

But to introduce experiential atoms is just words.  It doesn't explain anything.  Where do your experiential atoms go when you are unconscious?  when you die?  How do they interact with non-experiential atoms?  Are experiential atoms necessary for intelligence?

Brent

Philip Thrift

unread,
Feb 2, 2019, 2:04:18 PM2/2/19
to Everything List
So do I. It's called experience processing (vs. information processing). 



 
But he observes that whatever your theory of consciousness is it needs to at least roughly agree as to who and what is conscious.  A theory that says a large Vandermonde matrix is conscious fails that test.

But to introduce experiential atoms is just words.  It doesn't explain anything.  Where do your experiential atoms go when you are unconscious?  when you die?  How do they interact with non-experiential atoms?  Are experiential atoms necessary for intelligence?

Brent


All good questions.

"Experiential atoms" are sort of " just words" - like "atoms" was just a word to the Atomists of ancient Greece. And then a lot of people ignored them. Hopefully things will go better in this round of history.

- pt


Brent Meeker

unread,
Feb 2, 2019, 3:56:45 PM2/2/19
to everyth...@googlegroups.com


On 2/2/2019 11:04 AM, Philip Thrift wrote:
>
> "Experiential atoms" are sort of " just words" - like "atoms" was just
> a word to the Atomists of ancient Greece. And then a lot of people
> ignored them. Hopefully things will go better in this round of history.
>
> - pt

Democritus didn't just define atoms as "the uncuttable".  He built a
theory on it.  Atoms had hook-and-loop interactions that explained the
coherence and interaction of bodies.  They flowed downward explaining
gravity.  They swerved explaining interactions.  I don't see any
explanatory or predictive power in experiential atoms.  It's easy to see
how anesthetics may work by blocking ion channels in neurons.  No
experiential atoms need be considered.

Brent

Philip Thrift

unread,
Feb 2, 2019, 6:36:50 PM2/2/19
to Everything List
It was Epicurus who wrote of psychical (in addition to physical) atoms:


- pt
 

Bruno Marchal

unread,
Feb 4, 2019, 11:06:45 AM2/4/19
to everyth...@googlegroups.com
There is no non-scientific term. There is possible misuse of some terms, and that is poorly scientific.

But it makes sense to say that the theory of heat has been reduced to the theory of kinetic energy of particles, or that chemistry has been reduced to quantum mechanics. A reduction can be made precise in terms of the representation of one theory in another, and there are many examples, including the Mechanist reduction of physics to the machine’s theology (itself reducible to computer science, itself reducible to elementary arithmetic). 

For “emergence”, in my own work, you can use “definable” instead. Prime numbers and computations emerge from the number relations in the sense that you can define such notions from the given axioms and logic, without adding any new axioms.

Now, when you put the experience into a particle, I no longer know what a particle is, nor what you mean by experience. It seems to me only confusing, and to build on the mystery instead of trying to solve it.

Philip Thrift

unread,
Feb 4, 2019, 1:09:34 PM2/4/19
to Everything List
As I have said, I am language-oriented. What this means is that I say that science (from that perspective) is a collection of domain-specific languages - general relativity, particle physics, chemistry, microbiology, cellular biology, neurobiology, psychology, sociology, ... - however one wants to carve them up (they are all human inventions anyway). The terms 'reduction', 'emergence' are really about how expressions (aka theories) in one domain language relate to (can compile to, translate to, can be defined in terms of) another domain language, rather than some teleological, causal relation.

But languages have semantics, including the "what" they are about. In the case of an experience processing language, there would be some fundamental "atoms" or "units" of experientiality, like  ψbits.

- pt 

Bruno Marchal

unread,
Feb 8, 2019, 6:53:01 PM2/8/19
to everyth...@googlegroups.com
As I have said, I am language-oriented. What this means is that I say that science (from that perspective) is a collection of domain-specific languages - general relativity, particle physics, chemistry, microbiology, cellular biology, neurobiology, psychology, sociology,  ,…

They all use English. The theories differ but sometimes can be related; for example, chemistry is in principle reducible to quantum mechanics, with electrons playing a preponderant role. Yet high-level chemistry will develop higher-level tools not always easily reducible to quantum physics. 
For the mind body problem, with mechanism, we have the choice of choosing any language, and any Turing complete theory. The machine theology (G*), which should include physics, is theory independent. The physical reality is phi_i independent.







- however one wants to carve them up (they are all human inventions anyway).


“Brain” is a human invention, but the brain itself is more an invention of nature. With mechanism, nature is eventually a result of “consciousness selection or projection”: a result of sharable first person indeterminacies, from all “relative computational states existing in the sigma_1 arithmetic”.



The terms 'reduction', 'emergence' are really about how expressions (aka theories) in one domain language relate to (can compile to, translate to, can be defined in terms of) another domain language, rather than some teleological, causal relation.

No problem with this. But the representations have to be faithful, and proved to be so when used. 




But languages have semantics, including the "what" they are about.

Yes. Languages and theories have semantics. That is what mathematical logic is all about: proof theory, model theory, and the relation between proofs and models, where a model is usually a mathematical structure verifying the statements of the theory.





In the case of an experience processing language, there would be some fundamental "atoms" or "units" of experientiality, like  ψbits.


Experience is usually private and non-provable. But when machines introspect themselves they get reasons to believe in statements which are true from their perspective but non-provable.

A unit of experience does not make sense to me, to be honest. Subjective experiences do not admit third person descriptions at all, although they do admit meta-pointers to them, thanks to our Mechanist admission of the invariance of consciousness for some digital transformations.

Consciousness is not material. It is indexical, relational, and the attribute of some higher order “hero” or person. Persons are conscious, not things. I tend to believe that bacteria are already conscious, but that their consciousness is not much more differentiated than the universal consciousness of their environment. It is an altered state of consciousness, quite unlike the usual mundane one, which refers to long and complex paths. With mechanism there might be reason to expect us to be very rare in the physical reality.

Consciousness is primitively the knowledge of our existence, but it is not definable, nor provable, yet indubitable. All (Löbian) universal machines already know that. Consciousness is not really just consistency, but the semantics, or truth, of that consistency. The hero gets that something is happening.

Philip Thrift

unread,
Feb 9, 2019, 4:22:25 AM2/9/19
to Everything List


On Friday, February 8, 2019 at 5:53:01 PM UTC-6, Bruno Marchal wrote:

On 4 Feb 2019, at 19:09, Philip Thrift <cloud...@gmail.com> wrote:


As I have said, I am language-oriented. What this means is that I say that science (from that perspective) is a collection of domain-specific languages - general relativity, particle physics, chemistry, microbiology, cellular biology, neurobiology, psychology, sociology,  ,…

They all use English. The theories differ but sometimes can be related, like chemistry is in principle reducible to quantum mechanics, with electron playing a preponderant role. Yet, high level chemistry will develop higher level tools not always easily reducible to quantum physics. 
For the mind body problem, with mechanism, we have the choice of choosing any language, and any Turing complete theory. The machine theology (G*), which should include physics, is theory independent. The physical reality is phi_i independent.





There is English. But there is also a collection of mathematical language "dialects", like "Lagrangian":

This Is What The Standard Model of Physics Actually Looks Like

"The Lagrangian is a fancy way of writing an equation to determine the state of a changing system and explain the maximum possible energy the system can maintain ... Despite appearances, the Lagrangian is one of the easiest and most compact ways of presenting the theory."



Suppose there is a conference Languages for the Mind-Body Problem, including

G*
EMPL⁺ 

The irony to me is that there would be people talking about languages which could refer to themselves, at a conference presenting those languages.

Experiential Modalities Programming Language 

 




- however one wants to carve them up (they are all human inventions anyway).


“Brain” is a human invention, but the brain itself is more an invention of nature. With mechanism, nature is eventually a result of “consciousness selection or projection”: a result of sharable first person indeterminacies, from all “relative computational states existing in the sigma_1 arithmetic”.



The terms 'reduction', 'emergence' are really about how expressions (aka theories) in one domain language relate to (can compile to, translate to, can be defined in terms of) another domain language, rather than some teleological, causal relation.

No problem with this. But the representations have to be faithful, and proved to be so when used. 




But languages have semantics, including the "what" they are about.

Yes. Languages and theories have semantics. That is what mathematical logic is all about. Proof theory, Model theory, and the relation between proofs and model, where a model is usually a mathematical structure verifying the statements of the theory.




Even though the terms "model", "interpretation", "domain of discourse" etc. are used  in mathematical logic [ https://en.wikipedia.org/wiki/True_arithmetic : "The domain of discourse is the set N  of natural numbers..This structure is known as the standard model or intended interpretation of first-order arithmetic."], I've thought more recently of using substrate instead.

 


In the case of an experience processing language, there would be some fundamental "atoms" or "units" of experientiality, like  ψbits.


Experience is usually private and non-provable. But when machines introspect themselves they get reasons to believe in statements which are true from their perspective but non-provable.

A unit of experience does not make sense to me, to be honest. Subjective experiences do not admit third person descriptions at all, although they do admit meta-pointers to them, thanks to our Mechanist admission of the invariance of consciousness for some digital transformations.

Consciousness is not material. It is indexical, relational, and the attribute of some higher order “hero” or person. Persons are conscious, not things. I tend to believe that bacteria are already conscious, but that their consciousness is not much more differentiated than the universal consciousness of their environment. It is an altered state of consciousness, quite unlike the usual mundane one, which refers to long and complex paths. With mechanism there might be reason to expect us to be very rare in the physical reality.

Consciousness is primitively the knowledge of our existence, but it is not definable, nor provable, yet indubitable. All (Löbian) universal machines already know that. Consciousness is not really just consistency, but the semantics, or truth, of that consistency. The hero gets that something is happening.

Bruno



On the "units of experience", that's the concern of the micropsychism literature. I wrote something yesterday on this in the context of John Archibald Wheeler's "it from bit":

- pt

Bruno Marchal

unread,
Feb 13, 2019, 11:17:57 PM2/13/19
to everyth...@googlegroups.com
On 9 Feb 2019, at 10:22, Philip Thrift <cloud...@gmail.com> wrote:



On Friday, February 8, 2019 at 5:53:01 PM UTC-6, Bruno Marchal wrote:

On 4 Feb 2019, at 19:09, Philip Thrift <cloud...@gmail.com> wrote:


As I have said, I am language-oriented. What this means is that I say that science (from that perspective) is a collection of domain-specific languages - general relativity, particle physics, chemistry, microbiology, cellular biology, neurobiology, psychology, sociology,  ,…

They all use English. The theories differ but sometimes can be related, like chemistry is in principle reducible to quantum mechanics, with electron playing a preponderant role. Yet, high level chemistry will develop higher level tools not always easily reducible to quantum physics. 
For the mind body problem, with mechanism, we have the choice of choosing any language, and any Turing complete theory. The machine theology (G*), which should include physics, is theory independent. The physical reality is phi_i independent.





There is English. But there is also a collection of mathematical language "dialects", like "Lagrangian":

This Is What The Standard Model of Physics Actually Looks Like

"The Lagrangian is a fancy way of writing an equation to determine the state of a changing system and explain the maximum possible energy the system can maintain ... Despite appearances, the Lagrangian is one of the easiest and most compact ways of presenting the theory.”

That is technical language. It is just natural language with some technical terms added to it. Yes, a Lagrangian contains a lot of information, but it is open whether the setting is classical or quantum, which changes the interpretation problem a lot.








Suppose there is a conference Languages for the Mind-Body Problem, including

G*
EMPL⁺ 


G* is a theory, not a language. G* is the same whatever classical ontological (Turing-complete) theory you take. (Even if you add infinity axioms, or super-Turing elements).




The irony to me is that there are people talking about those languages which could refer to themselves at a conference presenting those languages.

Experiential Modalities Programming Language 

With mechanism, experiential modalities are given by the variants of provability using “& p” in the definition, like []p & p, or []p & <>t & p. That “& p” makes them qualitative and undefinable by the machine concerned, but a rich consistent machine can study the complete theology (at the propositional level) of a simpler machine that she knows/believes to be sound (or just consistent).
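For readers unfamiliar with the notation, the variants can be tabulated (standard provability-logic notation; the labels are the ones usually attached to these modes in this framework):

```latex
% \Box p : "p is provable";  \Diamond\top : consistency
\begin{align*}
\Box p                              &\qquad \text{believable} \\
\Box p \wedge p                     &\qquad \text{knowable} \\
\Box p \wedge \Diamond\top          &\qquad \text{observable} \\
\Box p \wedge \Diamond\top \wedge p &\qquad \text{sensible}
\end{align*}
```

The “& p” conjunct is what cannot be expressed by the machine about itself, which is why these modes come out qualitative.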





 




- however one wants to carve them up (they are all human inventions anyway).


“Brain” is a human invention, but the brain itself is more an invention of nature. With mechanism, nature is eventually a result of “consciousness selection or projection”: a result of sharable first person indeterminacies, from all “relative computational states existing in the sigma_1 arithmetic”.



The terms 'reduction', 'emergence' are really about how expressions (aka theories) in one domain language relate to (can compile to, translate to, can be defined in terms of) another domain language, rather than some teleological, causal relation.

No problem with this. But the representations have to be faithful, and proved to be so when used. 




But languages have semantics, including the "what" they are about.

Yes. Languages and theories have semantics. That is what mathematical logic is all about. Proof theory, Model theory, and the relation between proofs and model, where a model is usually a mathematical structure verifying the statements of the theory.




Even though the terms "model", "interpretation", "domain of discourse" etc. are used  in mathematical logic [ https://en.wikipedia.org/wiki/True_arithmetic : "The domain of discourse is the set N  of natural numbers..This structure is known as the standard model or intended interpretation of first-order arithmetic."], I've thought more recently of using substrate instead.


Hmm… that would increase the probability of repeating a mistake already made by the early Pythagoreans: believing that arithmeticalism (only numbers) entails that there are things made of numbers. But Mechanism is more idealistic; the only “non-number-theoretical things” are only dreamed by numbers, through the computations mimicking them correctly in arithmetic (which exist by the digital mechanist assumption).



 


In the case of an experience processing language, there would be some fundamental "atoms" or "units" of experientiality, like  ψbits.


Experience is usually private and non-provable. But when machines introspect themselves they get reasons to believe in statements which are true from their perspective but non-provable.

A unit of experience does not make sense to me, to be honest. Subjective experiences do not admit third person descriptions at all, although they do admit meta-pointers to them, thanks to our Mechanist admission of the invariance of consciousness for some digital transformations.

Consciousness is not material. It is indexical, relational, and the attribute of some higher order “hero” or person. Persons are conscious, not things. I tend to believe that bacteria are already conscious, but that their consciousness is not much more differentiated than the universal consciousness of their environment. It is an altered state of consciousness, quite unlike the usual mundane one, which refers to long and complex paths. With mechanism there might be reason to expect us to be very rare in the physical reality.

Consciousness is primitively the knowledge of our existence, but it is not definable, nor provable, yet indubitable. All (Löbian) universal machines already know that. Consciousness is not really just consistency, but the semantics, or truth, of that consistency. The hero gets that something is happening.

Bruno



On the "units of experience", that's the concern of the micropsychism literature. I wrote something yesterday on this in the context of John Archibald Wheeler's "it from bit":

Please sum it up here. It is an important issue. Thank you (in advance).

Philip Thrift

unread,
Feb 14, 2019, 4:00:07 AM2/14/19
to Everything List
What are the "units of experience" is sort of the basic problem for the panpsychical paradigm. 

I am adding 

       References   (What are the units of experience?)



We are familiar with the basic units of conventional (informational) computing: 0s, 1s, (SKI) combinators, now qubits, those kinds of things, but what are the basic units of experience processing?
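For concreteness, the combinator "units" mentioned above fit in a few lines (a toy illustration of mine, not from the post):

```python
# SKI combinators as Python closures. S and K alone suffice as the
# "atoms" of computation; the identity I is derived as S K K.
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x       # K discards its second argument
I = S(K)(K)                     # identity, derived rather than primitive
```

Two fixed combinators are enough for universal computation, which is part of why they serve as a standard example of minimal computational units.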

This is a new subject, and I don't have a Ph.D. in theoretical psychology, which may or may not help.

- pt


Alan Grayson

unread,
Apr 25, 2020, 6:33:08 PM4/25/20
to Everything List


On Sunday, January 6, 2019 at 12:53:52 AM UTC-7, Brent wrote:
To measure small things you need comparably short wavelengths.  If you
make a photon with a wavelength so short it can measure the Planck
length it will have so much mass-energy that it will fold spacetime
around it and become a black hole...so you won't be able to use it to
measure anything.

Brent

I understand the BH issue. But suppose we want to measure the diameter of a proton and use photons of large wavelength, say radio frequency. If we're looking for a shadow on a screen, why won't the large wavelength leave a discernible shadow of the proton? Or is it the back scattering we look for? Same question; that is, why must the impinging wavelength be of comparable length to measure a physical object of the same approximate length? TIA, AG 

On 1/5/2019 11:39 PM, agrays...@gmail.com wrote:
> What is the argument for the claim that we cannot, in principle,
> measure any length smaller than Planck length? TIA, AG
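The numbers behind the quoted black-hole argument are easy to check. A back-of-the-envelope sketch (standard constants; the code and round numbers are mine, not from the thread):

```python
import math

# Standard constants (SI, approximate)
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J*s
c    = 2.998e8     # speed of light, m/s

# Planck length: sqrt(G*hbar/c^3), about 1.6e-35 m
l_p = math.sqrt(G * hbar / c**3)

# Energy of a photon whose reduced wavelength equals one Planck length
E = hbar * c / l_p                 # ~2e9 J (the Planck energy)

# Schwarzschild radius of that much mass-energy
r_s = 2 * G * (E / c**2) / c**2
```

The Schwarzschild radius comes out at roughly twice the Planck length, which is the sense in which a Planck-wavelength photon "folds spacetime around it" before it can resolve anything.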

Brent Meeker

unread,
Apr 25, 2020, 7:36:01 PM4/25/20
to everyth...@googlegroups.com


On 4/25/2020 3:33 PM, Alan Grayson wrote:


On Sunday, January 6, 2019 at 12:53:52 AM UTC-7, Brent wrote:
To measure small things you need comparably short wavelengths.  If you
make a photon with a wavelength so short it can measure the Planck
length it will have so much mass-energy that it will fold spacetime
around it and become a black hole...so you won't be able to use it to
measure anything.

Brent

I understand the BH issue. But suppose we want to measure the diameter of a proton and use photons of large wave length, say of radio frequency. If we're looking for a shadow on a screen, why won't the large wavelength leave a discernible shadow of the proton? Or is it the back scattering we look for? Same question; that is, why must the impinging wavelength be of comparable length to measure a physical object of the same approximate length? TIA, AG

If you use a wavelength that is not shorter than the dimension you're measuring, your resolution is just the wavelength.  The waves refract around the object so you can't resolve edges.
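For scale, the wavelength condition can be put into numbers (an order-of-magnitude sketch; the constants are standard, the round figures are mine, not from the thread):

```python
h  = 6.626e-34    # Planck constant, J*s
c  = 2.998e8      # speed of light, m/s
eV = 1.602e-19    # joules per electron-volt

d_proton = 1.7e-15   # m, roughly the proton diameter

# Energy of a photon whose wavelength equals the proton diameter: E = hc/lambda
E_GeV = h * c / d_proton / eV / 1e9
# ~0.7 GeV, comparable to the proton's rest energy (~0.94 GeV), whereas a
# radio photon (wavelength ~ metres) is ~15 orders of magnitude too long.
```

So probing a proton with radio waves fails not for lack of intensity but because the diffraction limit pins the resolution to the wavelength.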

Brent

Alan Grayson

unread,
Apr 25, 2020, 8:25:21 PM4/25/20
to Everything List
That's what I was thinking; you get diffraction at the edges, which are then not well defined. But suppose you use a short enough wavelength to measure the diameter of a proton. How can we get an actual measurement, given the tiny diameter? How is it done? AG 

Brent Meeker

unread,
Apr 25, 2020, 9:33:31 PM4/25/20
to everyth...@googlegroups.com
The latest methods are not very different from what you imagined as "measuring the shadow cast", except they shoot electrons, not photons, at the protons.

https://www.sciencedirect.com/science/article/pii/S0370269303015387?via%3Dihub

https://arxiv.org/pdf/nucl-th/0508037.pdf

They're measuring a correction term in the scattering cross-section that depends on the size of the charge distribution in the proton.
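The correction term can be sketched with the textbook low-q² expansion of the proton's electric form factor (the relation is the standard one; the numbers here are illustrative and not taken from the linked papers):

```python
# G_E(q^2) ~ 1 - q^2 * <r^2> / 6 for small momentum transfer q^2,
# so the deviation of the measured cross-section from the point-charge
# value at small q^2 gives the rms charge radius sqrt(<r^2>).
r_rms  = 0.84e-15     # m, approximate proton rms charge radius
hbar_c = 0.1973e-15   # GeV*m, for unit conversion

def G_E(q2_GeV2):
    """Electric form factor at momentum transfer q^2 (in GeV^2), leading order."""
    q2_SI = q2_GeV2 / hbar_c**2      # convert GeV^2 -> 1/m^2
    return 1.0 - q2_SI * r_rms**2 / 6.0
```

At q² = 0 the form factor is exactly 1 (a point charge); the slope away from 1 at small q² is what encodes the proton's size.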

Brent
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.

Gunn Quznetsov

unread,
Apr 25, 2020, 11:16:51 PM4/25/20
to everyth...@googlegroups.com
Dear Dr. Alan Grayson ,

No SUSY, No AXION, No WIMP, No HIGGS, No BIG BANG...

Please, read it:

Regards,
Dr. Gunn




Alan Grayson

unread,
Apr 25, 2020, 11:27:53 PM4/25/20
to Everything List


On Saturday, April 25, 2020 at 9:16:51 PM UTC-6, Gunn Quznetsov wrote:
Dear Dr. Alan Grayson ,

No SUSY, No AXION, No WIMP, No HIGGS, No BIG BANG...

Please, read it:

Regards,
Dr. Gunn

I've asked you this before, but you didn't reply. If no Big Bang, what's your interpretation of the CMBR (which is generally accepted as evidence of the Big Bang)? AG 



On Sunday, April 26, 2020, 01:33:12 AM GMT+3, Alan Grayson <agrays...@gmail.com> wrote:




On Sunday, January 6, 2019 at 12:53:52 AM UTC-7, Brent wrote:
To measure small things you need comparably short wavelengths.  If you
make a photon with a wavelength so short it can measure the Planck
length it will have so much mass-energy that it will fold spacetime
around it and become a black hole...so you won't be able to use it to
measure anything.

Brent

I understand the BH issue. But suppose we want to measure the diameter of a proton and use photons of large wave length, say of radio frequency. If we're looking for a shadow on a screen, why won't the large wavelength leave a discernible shadow of the proton? Or is it the back scattering we look for? Same question; that is, why must the impinging wavelength be of comparable length to measure a physical object of the same approximate length? TIA, AG 

On 1/5/2019 11:39 PM, agrays...@gmail.com wrote:
> What is the argument for the claim that we cannot, in principle,
> measure any length smaller than Planck length? TIA, AG
