Entropy and information


Evgenii Rudnyi

Feb 26, 2012, 8:58:45 AM
to everyth...@googlegroups.com
I have written a summary for the discussion in the subject:

http://blog.rudnyi.ru/2012/02/entropy-and-information.html

No doubt, this is my personal viewpoint. If you see that I have missed
something, please let me know.

Evgenii

Jason Resch

Feb 26, 2012, 12:47:00 PM
to everyth...@googlegroups.com
Evgenii,

Very nice summary of the history of the terms in physics and
information theory. I think it is valuable to have centralized this
information.

While I was reading your discussion of thermodynamic entropy reaching
zero at 0K a thought occurred to me which can restore meaning and
comparability between the terms:

Thermodynamic entropy refers only to the entropy (uncertainty)
associated with particle velocities.

Thus at 0K thermodynamic entropy is 0 and 0 bits of information are
required to describe the relative velocities of all the particles at 0K.

Memory devices do not increase their storage capacity as they become
hotter because no memory device today encodes information in terms of
particle velocities.

Under this view it is apparent that thermodynamic entropy is not the
same thing as entropy or information in general; it is a qualified term.
However, it seems thermodynamic entropy and "thermodynamic
information" could perhaps be considered synonymous.
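
This correspondence can be made quantitative with the standard
conversion S / (k_B ln 2), which gives the number of bits needed to pin
down the exact microstate. A minimal Python sketch (an illustration,
using the textbook standard molar entropy of liquid water as input):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def entropy_to_bits(S):
    """Bits needed to specify the exact microstate, given entropy S in J/K."""
    return S / (k_B * math.log(2))

# Standard molar entropy of liquid water at 298 K, about 70 J/(mol K):
print(f"{entropy_to_bits(69.9):.2e} bits per mole")  # ~7.3e+24 bits

# At 0 K a perfect crystal has S = 0, hence 0 bits:
print(entropy_to_bits(0.0))  # 0.0
```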

Jason

> --
> You received this message because you are subscribed to the Google
> Groups "Everything List" group.
> To post to this group, send email to everyth...@googlegroups.com.
> To unsubscribe from this group, send email to everything-li...@googlegroups.com
> .
> For more options, visit this group at http://groups.google.com/group/everything-list?hl=en
> .
>

meekerdb

Feb 26, 2012, 6:13:38 PM
to everyth...@googlegroups.com
I think you are ignoring the conceptual unification provided by information theory and statistical mechanics.  JANAF tables only consider the thermodynamic entropy, which is a special case in which the macroscopic variables are temperature and pressure.  You can't look up the entropy of magnetization in the JANAF tables.  Yet magnetization of small domains is how information is stored on hard disks, cf. David MacKay's book "Information Theory, Inference, and Learning Algorithms" chapter 31. 

Did you actually read E. T. Jaynes 1957 paper in which he introduced the idea of basing entropy in statistical mechanics (which you also seem to dislike) on information?  He wrote "The mere fact that the same mathematical expression -SUM[p_i log(p_i)] occurs in both statistical mechanics and in information theory does not in itself establish a connection between these fields.  This can be done only by finding new viewpoints from which the thermodynamic entropy and information-theory entropy appear as the same concept."  Then he goes on to show how the principle of maximum entropy can be used to derive statistical mechanics.  That it *can* be done in some other way, and was historically as you assert, is not to the point.  As an example of how the information view of statistical mechanics extends its application he calculates how much the spins of protons in water would be polarized by rotating the water at 36,000rpm.  It seems you are merely objecting to "new viewpoints" on the grounds that you can see all that you want to see from the old viewpoint.

Your quotation of Arnheim, from his book on the theory of entropy in art, just shows his confusion.  The Shannon information, which is greatest when the system is most disordered in some sense, does not imply that the most disordered message contains the greatest information.  The Shannon information is that information we receive when the *potential messages* are most disordered.  It's a property of an ensemble or a channel, not of a particular message.
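
The distinction can be seen directly in the formula: H depends only on
the probability distribution over the potential messages, not on any one
message. A short Python sketch, with made-up distributions for
illustration:

```python
import math

def shannon_entropy(probs):
    """H = -sum p_i log2(p_i), in bits; a property of the distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A uniform ensemble of 8 potential messages carries 3 bits,
# no matter which particular message is sent:
print(shannon_entropy([1/8] * 8))  # 3.0

# A highly skewed ensemble carries far less, although each individual
# message looks no different as a string of symbols:
print(round(shannon_entropy([0.97, 0.01, 0.01, 0.01]), 2))  # 0.24
```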

Brent

Alberto G.Corona

Feb 27, 2012, 6:16:45 AM
to Everything List

Perhaps a more basic, and more pertinent, question related to
entropy and information in the context of this list is the relation
between computability, living beings, the arrow of time and entropy.

What the paper (http://qi.ethz.ch/edu/qisemFS10/papers/
81_Bennett_Thermodynamics_of_computation.pdf) that initiated the
discussion suggests is that, in practical terms, a driving force that
prevents random reversal is necessary to execute practical
computations; this driving force implies dissipation of energy and
thus an increase of entropy. This is so because most if not all
practical computations are exponentially branched (Fig. 10).

And here come the living beings. As the paper says in the
introduction, living beings perform computations at the molecular
level and, it must be said, at the neural level. Therefore, given the
above, life must proceed from less to more entropy, and this
defines the arrow of time.

Besides, while the paper concentrates on what happens inside a
computation, some concepts can be used to elucidate what happens in
the interaction of a living being with its surrounding reality. Reality
behaves like a form of ballistic computer at the microscopic level,
with elementary particles ruled by the forces of nature instead
of elastic macroscopic collisions. At the macroscopic level,
however, there is a destruction of information and irreversibility.

However, in the direction of entropy dissipation it is possible to
perform calculations in order to predict the future at the macroscopic
level. That is a critical function of living beings. An extreme example
of the difference between macro and micro computation is to "predict"
the distribution of water in a water collector after rain. It is not
necessary to know the position and velocity of every water molecule,
not even the position and velocity of each drop of water. It is this
erasure of information, performed by the increase of entropy at the
macroscopic level (and which indeed is the reason for the very concept
of a macro-state in statistical mechanics), that permits economically
feasible computations. Since computation is expensive, and the process
of discovery of the world by living beings through natural selection
very slow (the aggregation of complexity and sophistication by natural
selection is on the order of magnitude of the age of the universe:
thousands of millions of years), the macroscopic laws of nature must be
simple enough, and there must be a privileged direction of easy
computation, for life to exist.

The fact that the time for the evolution of intelligent life and the
age of the Universe are of the same magnitude means that this universe
is constrained to the maximum discoverable-by-evolution complexity in
the computationally privileged direction of the arrow of time.

This is my brief presentation about this:

https://docs.google.com/present/view?id=dd5rm7qq_142d8djhvc8

This is my previous post in this group about entropy, the arrow of time
and life:

http://www.mail-archive.com/everyth...@googlegroups.com/msg15696.html

Stephen P. King

Feb 27, 2012, 7:54:37 AM
to everyth...@googlegroups.com
Dear Alberto,

    One brief comment. In your Google paper you wrote, among other interesting things, "But life and natural selection demands a mathematical universe somehow". Could it be that this is just another implication of the MMH idea? If the physical implementation of computation acts as a selective pressure on the multiverse, then it makes sense that we would find ourselves in a universe that is representable in terms of Boolean algebras, with their nice and well-behaved laws of bivalence (A or not-A), etc.

    Very interesting ideas.

Onward!

Stephen

Bruno Marchal

Feb 27, 2012, 11:14:16 AM
to everyth...@googlegroups.com

On 27 Feb 2012, at 12:16, Alberto G.Corona wrote:

>
> Perhaps a more basic, and more pertinent, question related to
> entropy and information in the context of this list is the relation
> between computability, living beings, the arrow of time and entropy.
>
> What the paper (http://qi.ethz.ch/edu/qisemFS10/papers/
> 81_Bennett_Thermodynamics_of_computation.pdf) that initiated the
> discussion suggests is that, in practical terms, a driving force that
> prevents random reversal is necessary to execute practical
> computations; this driving force implies dissipation of energy and
> thus an increase of entropy. This is so because most if not all
> practical computations are exponentially branched (Fig. 10).


But all computations can be done in a non-branched way.
There exist reversible universal machines, and indeed a quantum
computer is such a machine.
See Bennett 1973 (Logical reversibility of computation, IBM J. Res.
Dev., 17, 525).
Energy is dissipated only when information is erased (Landauer's
principle), and, as the logicians already knew (Hao Wang, for example),
computations can be done without erasing any information.
Bennett convinced me also that we can extract work from (Shannon)
information, and that energy might be a local measure of information
content.
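
The Landauer bound in question is k_B T ln 2 joules per erased bit; a
quick Python sketch of the number:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit(bits_erased, T):
    """Minimum dissipation (J) for erasing `bits_erased` bits at temperature T (K)."""
    return bits_erased * k_B * T * math.log(2)

# Erasing one bit at room temperature (300 K):
print(f"{landauer_limit(1, 300):.2e} J")  # ~2.87e-21 J

# Reversible steps, which erase nothing, have no such lower bound.
```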

>
> And here come the living beings. As the paper says in the
> introduction, living beings perform computations at the molecular
> level and, it must be said, at the neural level.

I am OK with this, but of course we don't know that.


> Therefore, given the
> above, life must proceed from less to more entropy, and this
> defines the arrow of time.
>
> Besides, while the paper concentrates on what happens inside a
> computation, some concepts can be used to elucidate what happens in
> the interaction of a living being with its surrounding reality.
> Reality behaves like a form of ballistic computer at the microscopic
> level, with elementary particles ruled by the forces of nature instead
> of elastic macroscopic collisions. At the macroscopic level,
> however, there is a destruction of information and irreversibility.

OK.


>
> However, in the direction of entropy dissipation it is possible to
> perform calculations in order to predict the future at the macroscopic
> level. That is a critical function of living beings. An extreme example
> of the difference between macro and micro computation is to "predict"
> the distribution of water in a water collector after rain. It is not
> necessary to know the position and velocity of every water molecule,
> not even the position and velocity of each drop of water. It is this
> erasure of information, performed by the increase of entropy at the
> macroscopic level (and which indeed is the reason for the very concept
> of a macro-state in statistical mechanics), that permits economically
> feasible computations. Since computation is expensive, and the process
> of discovery of the world by living beings through natural selection
> very slow (the aggregation of complexity and sophistication by natural
> selection is on the order of magnitude of the age of the universe:
> thousands of millions of years), the macroscopic laws of nature must be
> simple enough, and there must be a privileged direction of easy
> computation, for life to exist.

OK.


>
> The fact that the time for the evolution of intelligent life and the
> age of the Universe are of the same magnitude means that this universe
> is constrained to the maximum discoverable-by-evolution complexity in
> the computationally privileged direction of the arrow of time.

OK, in terms of "this branch of the multiverse".

Bruno

http://iridia.ulb.ac.be/~marchal/

Evgenii Rudnyi

Feb 27, 2012, 1:59:36 PM
to everyth...@googlegroups.com
On 27.02.2012 00:13 meekerdb said the following:

> On 2/26/2012 5:58 AM, Evgenii Rudnyi wrote:
>> I have written a summary for the discussion in the subject:
>>
>> http://blog.rudnyi.ru/2012/02/entropy-and-information.html
>>
>> No doubt, this is my personal viewpoint. If you see that I have
>> missed something, please let me know.
>
> I think you are ignoring the conceptual unification provided by
> information theory and statistical mechanics. JANAF tables only
> consider the thermodynamic entropy, which is a special case in which
> the macroscopic variables are temperature and pressure. You can't
> look up the entropy of magnetization in the JANAF tables.

I do not get your point. JANAF Tables have been created to solve a
particular problem. If you need changes in concentration, surface
effects, or magnetization effects, you have to extend the JANAF Tables,
and this has been done to solve particular problems. Experimental
thermodynamics is not limited to the JANAF Tables. For example, the
databases in Thermocalc already include the dependence on concentration.

> Yet
> magnetization of small domains is how information is stored on hard
> disks, cf. David MacKay's book "Information Theory, Inference, and
> Learning Algorithms" chapter 31.

Do you mean that when we consider magnetization, the entropy becomes
subjective and context-dependent, and that it will finally be filled
with information?

> Did you actually read E. T. Jaynes 1957 paper in which he introduced
> the idea of basing entropy in statistical mechanics (which you also
> seem to dislike) on information? He wrote "The mere fact that the
> same mathematical expression -SUM[p_i log(p_i)] occurs in both
> statistical mechanics and in information theory does not in itself
> establish a connection between these fields. This can be done only by
> finding new viewpoints from which the thermodynamic entropy and

> information-theory entropy appear as the same /concept/." Then he

I have missed this quote; I have to add it. In general, the first of
Jaynes's papers is reasonable in a way. I wanted to understand it
better, as I like maximum likelihood; I have been using it in my own
research a lot. However, when I read the following in Jaynes's second
paper (two quotes below), I gave up.

"With such an interpretation the expression 'irreversible process'
represents a semantic confusion; it is not the physical process that is
irreversible, but rather our ability to follow it. The second law of
thermodynamics then becomes merely the statement that although our
information as to the state of a system may be lost in a variety of
ways, the only way in which it can be gained is by carrying out further
measurements."

"It is important to realize that the tendency of entropy to increase is
not a consequence of the laws of physics as such, ... An entropy
increase may occur unavoidably, due to our incomplete knowledge of the
forces acting on a system, or it may be an entirely voluntary act on
our part."

This I do not understand. Do you agree with these two quotes? If yes,
could you please explain what he means?

> goes on to show how the principle of maximum entropy can be used to
> derive statistical mechanics. That it *can* be done in some other
> way, and was historically as you assert, is not to the point. As an
> example of how the information view of statistical mechanics extends
> its application he calculates how much the spins of protons in water
> would be polarized by rotating the water at 36,000rpm. It seems you
> are merely objecting to "new viewpoints" on the grounds that you can

> see all that you /want/ to see from the old viewpoint.

> Your quotation of Arnheim, from his book on the theory of entropy in
> art, just shows his confusion. The Shannon information, which is
> greatest when the system is most disordered in some sense, does not
> imply that the most disordered message contains the greatest
> information. The Shannon information is that information we receive
> when the *potential messages* are most disordered. It's a property of
> an ensemble or a channel, not of a particular message.

It is not a confusion on Arnheim's part. His book is quite good. To this
end, let me quote the second sentence of your message.

> I think you are ignoring the conceptual unification provided by
> information theory and statistical mechanics.

You see, I would love to understand the conceptual unification. To this
end, I have created many simple problems to understand this better.
Unfortunately you do not want to discuss them; you just say general
words, but you do not want to apply them to my simple practical
problems. Hence it is hard for me to understand you.

If we speak about confusion, just one example. You say that the higher
the temperature, the more information the system has. Yet engineers seem
to be unwilling to employ this knowledge in practice. Why is that? Why
do engineers seem not to be impressed by the conceptual unification?

Evgenii

> Brent
>

meekerdb

Feb 27, 2012, 2:43:35 PM
to everyth...@googlegroups.com
On 2/27/2012 10:59 AM, Evgenii Rudnyi wrote:
> On 27.02.2012 00:13 meekerdb said the following:
>> On 2/26/2012 5:58 AM, Evgenii Rudnyi wrote:
>>> I have written a summary for the discussion in the subject:
>>>
>>> http://blog.rudnyi.ru/2012/02/entropy-and-information.html
>>>
>>> No doubt, this is my personal viewpoint. If you see that I have
>>> missed something, please let me know.
>>
>> I think you are ignoring the conceptual unification provided by
>> information theory and statistical mechanics. JANAF tables only
>> consider the thermodynamic entropy, which is a special case in which
>> the macroscopic variables are temperature and pressure. You can't
>> look up the entropy of magnetization in the JANAF tables.
>
> I do not get your point. JANAF Tables have been created to solve a particular problem.
> If you need changes in concentration, surface effects, or magnetization effects, you have
> to extend the JANAF Tables, and this has been done to solve particular problems.
> Experimental thermodynamics is not limited to the JANAF Tables. For example, the databases
> in Thermocalc already include the dependence on concentration.

And you don't get my point. Of course all forms of entropy can be measured and tabulated,
but the information theory viewpoint shows how they are unified by the same concept.

>
>> Yet
>> magnetization of small domains is how information is stored on hard
>> disks, cf. David MacKay's book "Information Theory, Inference, and
>> Learning Algorithms" chapter 31.
>
> Do you mean that when we consider magnetization, the entropy becomes subjective and
> context-dependent, and that it will finally be filled with information?

It is context dependent in that we consider the magnetization. What does the JANAF table
assume about the magnetization of the materials it tabulates?

>
>> Did you actually read E. T. Jaynes 1957 paper in which he introduced
>> the idea of basing entropy in statistical mechanics (which you also
>> seem to dislike) on information? He wrote "The mere fact that the
>> same mathematical expression -SUM[p_i log(p_i)] occurs in both
>> statistical mechanics and in information theory does not in itself
>> establish a connection between these fields. This can be done only by
>> finding new viewpoints from which the thermodynamic entropy and
>> information-theory entropy appear as the same /concept/." Then he
>
> I have missed this quote; I have to add it. In general, the first of Jaynes's papers is
> reasonable in a way. I wanted to understand it better, as I like maximum likelihood; I
> have been using it in my own research a lot. However, when I read the following in
> Jaynes's second paper (two quotes below), I gave up.
>
> "With such an interpretation the expression 'irreversible process' represents a semantic
> confusion; it is not the physical process that is irreversible, but rather our ability
> to follow it. The second law of thermodynamics then becomes merely the statement that
> although our information as to the state of a system may be lost in a variety of ways,
> the only way in which it can be gained is by carrying out further measurements."
>
> "It is important to realize that the tendency of entropy to increase is
> not a consequence of the laws of physics as such, ... An entropy increase may occur
> unavoidably, due to our incomplete knowledge of the forces acting on a system, or it may
> be an entirely voluntary act on our part."
>
> This I do not understand. Do you agree with these two quotes? If yes, could you please
> explain what he means?

Yes. The physical processes are not irreversible. The fundamental physical laws are time
reversible. The free-expansion of a gas is *statistically* irreversible because we cannot
follow the individual molecules and their correlations, so when we consider only the
macroscopic variables of pressure, density, temperature,... it seems irreversible. In
very simple systems we might be able to actually follow the microscopic evolution of the
state, but we can choose to ignore it and calculate the entropy increase as though this
information were lost. Whether and how the information is lost is the crux of the
measurement problem in QM. Almost everyone on this list assumes Everett's multiple worlds
interpretation in which the information is not lost but is divided up among different
continuations of the observer.
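
The free-expansion example can be made concrete: for an ideal gas the
entropy increase on expanding from V1 to V2 is dS = nR ln(V2/V1), even
though the microscopic dynamics are time-reversible. A Python sketch:

```python
import math

R = 8.314462618     # gas constant, J/(mol K)
k_B = 1.380649e-23  # Boltzmann constant, J/K

def free_expansion_dS(n_moles, V1, V2):
    """Entropy increase (J/K) when an ideal gas freely expands from V1 to V2."""
    return n_moles * R * math.log(V2 / V1)

dS = free_expansion_dS(1.0, 1.0, 2.0)  # one mole, volume doubled
print(round(dS, 2))  # 5.76 J/K

# The same quantity read as lost information: one bit per molecule
# ("which half of the container is it in?"), roughly Avogadro's number:
print(f"{dS / (k_B * math.log(2)):.2e} bits")  # ~6.02e+23 bits
```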

>
>> goes on to show how the principle of maximum entropy can be used to
>> derive statistical mechanics. That it *can* be done in some other
>> way, and was historically as you assert, is not to the point. As an
>> example of how the information view of statistical mechanics extends
>> its application he calculates how much the spins of protons in water
>> would be polarized by rotating the water at 36,000rpm. It seems you
>> are merely objecting to "new viewpoints" on the grounds that you can
>> see all that you /want/ to see from the old viewpoint.
>
>> Your quotation of Arnheim, from his book on the theory of entropy in
>> art, just shows his confusion. The Shannon information, which is
>> greatest when the system is most disordered in some sense, does not
>> imply that the most disordered message contains the greatest
>> information. The Shannon information is that information we receive
>> when the *potential messages* are most disordered. It's a property of
>> an ensemble or a channel, not of a particular message.
>
> It is not a confusion of Arnheim. His book is quite good.

Then why doesn't he know that Shannon's information does not refer to particular messages?

> To this end, let me quote your second sentence in your message.
>
> > I think you are ignoring the conceptual unification provided by
> > information theory and statistical mechanics.
>
> You see, I would love to understand the conceptual unification.

I doubt this. If you really loved to understand it, there is lots of
material online, as well as good books which Russell and I have
suggested.

> To this end, I have created many simple problems to understand this better.
> Unfortunately you do not want to discuss them; you just say general words, but you do
> not want to apply them to my simple practical problems. Hence it is hard for me to
> understand you.
>
> If we speak about confusion, just one example. You say that the higher the temperature,
> the more information the system has. Yet engineers seem to be unwilling to employ this
> knowledge in practice. Why is that? Why do engineers seem not to be impressed by the
> conceptual unification?

I'm not an expert on this subject, and it has been forty years since I
studied statistical mechanics, which is why I prefer to refer you to
experts. Engineers are generally not impressed with conceptual
unification; they are interested in what can be most easily and reliably
applied. RF engineers generally don't care that EM waves are really
photons. Structural engineers don't care about interatomic forces; they
just look up yield strength in tables. Engineers are not at all
concerned with 'in principle' processes which can only be realized in
carefully contrived laboratory experiments. But finding unifying
principles is the job of physicists.

Brent

Evgenii Rudnyi

Feb 28, 2012, 2:08:44 PM
to everyth...@googlegroups.com
Dear Brent,

First, thank you for your time. Your answers, as well as the answers of
other participants, have helped me to understand the opposite viewpoint
better and to organize my thoughts.

You are right that the best approach is to study a good book, but first,
I already have a stack of books to read, and second, reading a book
alone is usually boring. I much prefer to clear up a question in a
discussion. It is more enjoyable.

What I will probably do is read more about the historical development of
Maxwell's demon that John has mentioned. Somehow I have missed it.

I believe it is normal that we have different opinions, but I hope that
my emails have helped you to understand the opposite viewpoint better.

As for engineers and physicists, I do not know. Let us take, for
example, a landfill. There are too many of them in modern society, and
engineers are trying to find a solution to reuse the waste. An open
question: will your statement that physical processes are not
irreversible help engineers to find a better solution?

Finally the JANAF Tables assume that the magnetic field is zero. If it
is not, then one has to add a corresponding term.

Best wishes,

Evgenii

On 27.02.2012 20:43 meekerdb said the following:

Evgenii Rudnyi

Feb 28, 2012, 3:31:33 PM
to everyth...@googlegroups.com
Alberto,

I am a thermodynamicist, and I do not know exactly what information and
computation are. You have written that living beings perform
computations. Several questions in this respect:

Are computations limited to living beings only?

Does a bacterium perform computations as well?

If yes, then what is the difference between a ballcock in a toilet and a
bacterium (provided we exclude reproduction from consideration)?

Evgenii

On 27.02.2012 12:16 Alberto G.Corona said the following:

Alberto G.Corona

Feb 28, 2012, 8:20:30 PM
to Everything List
Dear Stephen,

A thing that I often ask myself concerning the MMH is the question of
what is mathematical and what is not. The set of real numbers is a
mathematical structure, but so is the set of real numbers plus the
point (1,1) in the plane. The set of randomly chosen numbers {1.4,
3.4, .34, 3} is too, because it can be described with the same
descriptive language of math. But the first of these structures has
properties and the others do not. The first can be infinite but can be
described with a single equation, while the last must be described
extensively. At least some random universes (the finite ones) can be
described extensively with the tools of mathematics, but they don't
count in the intuitive sense as mathematical.

What is usually considered genuinely mathematical is a structure
that can be described briefly. Also it must have good properties:
operations, symmetries or isomorphisms with other structures, so that
the structure can be navigated and related to other structures, and the
knowledge can be reused. These structures have a low Kolmogorov
complexity, so they can be "navigated" with low computing resources.

So the demand of computation in each living being forces us to admit
that universes that are too random or too simple, with non-linear or
discontinuous macroscopic laws, have no complex spatio-temporal
volutes (which may be the aspect of life as looked at from outside our
four-dimensional universe). The macroscopic laws are the macroscopic
effects of the underlying mathematical structures with which our
universe is isomorphic (or identical).

And our very notion of what is intuitively considered mathematical,
"something general, simple and powerful enough", has the hallmark of
scarcity of computational resources. (And the absence of contradictions
fits in the notion of simplicity, because exceptions to rules have to be
memorized and dealt with extensively, one by one.)

Perhaps it is not only that way; it may even be that the absence of
contradictions (the main rule of simplicity) or, in computational
terms, the rule of low Kolmogorov complexity _creates_ the
mathematics itself. That is, it may be, for example, that Boolean logic
is what it is not because it is consistent, simple and beautiful, but
because it is the shortest logic in terms of the length of the
description of its operations, and this is the reason why we perceive
it as simple, beautiful and consistent.
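
The contrast between briefly describable structures and random ones can
be illustrated with compressed length, a crude upper-bound proxy for
Kolmogorov complexity; the data below are arbitrary examples:

```python
import random
import zlib

# A structure generated by a short rule, repeated:
regular = bytes(range(256)) * 40

# (Pseudo-)random data of the same length, essentially incompressible:
random.seed(0)
rand = bytes(random.randrange(256) for _ in range(len(regular)))

# Compressed size approximates (an upper bound on) description length:
print(len(zlib.compress(regular)))  # small: the generating rule is short
print(len(zlib.compress(rand)))     # close to the raw 10240 bytes
```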
> Dear Albert,
>
>      One brief comment. In your Google paper you wrote, among other
> interesting things, "But life and natural selection demands a
> mathematical universe
> <https://docs.google.com/Doc?docid=0AW-x2MmiuA32ZGQ1cm03cXFfMTk4YzR4cn...>somehow".

Bruno Marchal

Feb 29, 2012, 5:20:21 AM
to everyth...@googlegroups.com

On 29 Feb 2012, at 02:20, Alberto G.Corona wrote (to Stephen):


> A thing that I often ask myself concerning MMH is the question about
> what is mathematical and what is not?. The set of real numbers is a
> mathematical structure, but also the set of real numbers plus the
> point (1,1) in the plane is.

Sure. Now, with comp, that mathematical structure is more easily
handled in the "mind" of the universal machine. For the ontology we
can use arithmetic, on which everyone agrees. It is absolutely
undecidable that there is more than that (with the comp assumption).
So for the math, comp invites us to assume only what is called "the
sharable part of intuitionist and classical mathematics".

> The set of randomly chosen numbers {1.4,
> 3.4, .34, 3} is too, because it can be described with the same
> descriptive language of math. But the first of these structures has
> properties and the others do not. The first can be infinite but can be
> described with a single equation, while the last must be described
> extensively. At least some random universes (the finite ones) can be
> described extensively with the tools of mathematics, but they don't
> count in the intuitive sense as mathematical.

Why? If they can be finitely described, then I don't see why they
would be non mathematical.


>
> What is usually considered genuinely mathematical is a structure
> that can be described briefly.

Not at all. In classical math any particular real number is
mathematically real, even if it cannot be described briefly. Chaitin's
Omega cannot be described briefly, even if we can define it briefly.

> Also it must have good properties:
> operations, symmetries or isomorphisms with other structures, so that
> the structure can be navigated and related to other structures, and the
> knowledge can be reused. These structures have a low Kolmogorov
> complexity, so they can be "navigated" with low computing resources.

But they are a tiny part of bigger mathematical structures. That's why
we use big mathematical universes, like models of ZF, or category
theory.


>
> So the demand of computation in each living being forces us to admit
> that universes that are too random or too simple, with non-linear or
> discontinuous macroscopic laws, have no complex spatio-temporal
> volutes (which may be the aspect of life as looked at from outside our
> four-dimensional universe). The macroscopic laws are the macroscopic
> effects of the underlying mathematical structures with which our
> universe is isomorphic (or identical).

We need both, if only to make that very reasoning precise. Even with
comp, such math is better seen as epistemological than ontological.

>
> And our very notion of what is intuitively considered mathematical,
> "something general, simple and powerful enough", has the hallmark of
> scarcity of computational resources. (And the absence of contradictions
> fits in the notion of simplicity, because exceptions to rules have to
> be memorized and dealt with extensively, one by one.)
>
> Perhaps it is not only that way; it may even be that the absence of
> contradictions (the main rule of simplicity) or, in computational
> terms, the rule of low Kolmogorov complexity _creates_ the
> mathematics itself.

Precisely not. Kolmogorov complexity is too shallow, and lacks the
needed redundancy, depth, etc. to allow a reasonable solution to the
comp measure problem.


> That is, it may be, for example, that Boolean logic
> is what it is not because it is consistent, simple and beautiful, but
> because it is the shortest logic in terms of the length of the
> description of its operations, and this is the reason why we perceive
> it as simple, beautiful and consistent.

It is not the shortest logic. It has the simplest semantics, at the
propositional level. Combinator logic is far simpler conceptually,
but has far more complex semantics.

But the main problem of the MMH is that nobody can define what it is,
and it is a priori too big to have a notion of first person
indeterminacy. Comp puts *much* order into this, and needs no more math
than arithmetic, or elementary mathematical computer science, at the
ontological level. Tegmark seems unaware of the whole foundation-of-math
progress made by the logicians.

Bruno


> .
>> Dear Albert,
>>
>> One brief comment. In your Google paper you wrote, among other
>> interesting things, "But life and natural selection demands a
>> mathematical universe
>> <https://docs.google.com/Doc?docid=0AW-x2MmiuA32ZGQ1cm03cXFfMTk4YzR4cn...
>> >somehow".
>> Could it be that this is just another implication of the MMH idea? If
>> the physical implementation of computation acts as a selective
>> pressure
>> on the multiverse, then it makes sense that we would find ourselves
>> in a
>> universe that is representable in terms of Boolean algebras with
>> their
>> nice and well behaved laws of bivalence (a or not-A), etc.
>>
>> Very interesting ideas.
>>
>> Onward!
>>
>> Stephen
>

> --
> You received this message because you are subscribed to the Google
> Groups "Everything List" group.
> To post to this group, send email to everyth...@googlegroups.com.
> To unsubscribe from this group, send email to everything-li...@googlegroups.com
> .
> For more options, visit this group at http://groups.google.com/group/everything-list?hl=en
> .
>

http://iridia.ulb.ac.be/~marchal/

Alberto G.Corona

Feb 29, 2012, 9:47:26 AM
to Everything List


On 29 feb, 11:20, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 29 Feb 2012, at 02:20, Alberto G.Corona wrote (to Stephen):
>
> > A thing that I often ask myself concerning MMH is  the question about
> > what is mathematical and what is not?. The set of real numbers is a
> > mathematical structure, but also the set of real numbers plus the
> > point (1,1) in the plane is.
>
> Sure. Now, with comp, that mathematical structure is more easily
> handled in the "mind" of the universal machine. For the ontology we
> can use arithmetic, on which everyone agree. It is absolutely
> undecidable that there is more than that (with the comp assumption).
> So for the math, comp invite to assume only what is called "the
> sharable part of intuitionist and classical mathematics.
>
I do not think of computations in terms of "minds of universal
machines" in the abstract sense, but in terms of the needs of
computability of living beings.

> > The set of randomly chosen numbers { 1,4
> > 3,4,.34, 3}  is because it can be described with the same descriptive
> > language of math. But the first of these structures have properties
> > and the others do not. The first can be infinite but can be described
> > with a single equation while the last   must be described
> > extensively. . At least some random universes (the finite ones) can be
> > described extensively, with the tools of mathematics but they don´t
> > count in the intuitive sense as mathematical.
>
> Why? If they can be finitely described, then I don't see why they
> would be non mathematical.
>
It is not mathematical in the intuitive sense, just as the list of the
points of a randomly folded paper is not. That more restrictive,
intuitive sense is what I use here.
>
>
> >  What is usually considered  genuinely mathematical is any structure,
> > that can be described briefly.
>
> Not at all. In classical math any particular real number is
> mathematically real, even if it cannot be described briefly. Chaitin's
> Omega cannot be described briefly, even if we can defined it briefly.
>
A real number, in the sense I said above, is not mathematical. In fact
there is no mathematical theory about particular real numbers; the set
of all the real numbers, on the contrary, is.

> > Also it must have good properties ,
> > operations, symmetries or isomorphisms with other structures so the
> > structure can be navigated and related with other structures and the
> > knowledge can be reused.   These structures have a low kolmogorov
> > complexity, so they can be "navigated" with low computing resources.
>
> But they are a tiny part of bigger mathematical structures. That's why
> we use big mathematical universe, like the model of ZF, or Category
> theory.

If maths is all that can be described finitely, then of course you
are right. But I'm intuitively sure that the ones that are interesting
can be defined briefly, using an evolutionary sense of what is
interesting.

>
>
>
> > So the demand of computation in each living being forces to admit
> >  that  universes too random or too simple, wiith no lineal or
> >  discontinuous macroscopic laws have no  complex spatio-temporal
> > volutes (that may be the aspect of life as looked from outside of our
> > four-dimensional universe).  The macroscopic laws are the macroscopic
> > effects of the underlying mathematical structures with which our
> > universe is isomorphic (or identical).
>
> We need both, if only to make precise that very reasoning. Even in
> comp, despite such kind of math is better seen as epistemological than
> ontological.
>
There is a hole in the transition from certain mathematical
properties in macroscopic laws to simple mathematical theories of
everything: the fact that a strange but relatively simple
mathematical structure (M-theory) includes islands of macroscopic laws
that are warm for life. I do not see the necessity of this greed for
reduction. The macroscopic laws could reign in a Hubble sphere,
sustained by a giant on top of a turtle swimming in an ocean.
>
>
> > And our very notion of what is intuitively considered mathematical:
> > "something  general simple and powerful enough"    has the hallmark of
> > scarcity of computation resources. (And absence of contradictions fits
> > in the notion of simplicity, because exception to rules have to be
> > memorized and dealt with extensively, one by one)
>
> > Perhaps not only is that way but even may be that  the absence of
> > contradictions ( the main rule of simplicity) or -in computationa
> > terms- the rule of  low kolmogorov complexity  _creates_ itself the
> > mathematics.
>
> Precisely not. Kolmogorov complexity is to shallow, and lacks the
> needed redundancy, depth, etc. to allow reasonable solution to the
> comp measure problem.
>
I cannot grasp from your terse definitions what the comp measure
problem is. What I know is that Kolmogorov complexity is critical
for life, if living beings compute inputs to create appropriate
outputs for survival. And they do it.
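Kolmogorov complexity itself is uncomputable, but a standard computable proxy is the length of a compressed description. A rough sketch of the point (the function and the data here are mine, purely for illustration): a structure that can be "defined briefly" compresses far better than a random one of comparable size.

```python
import random
import zlib

def compressed_length(data: bytes) -> int:
    """Length after zlib compression: a crude, computable upper bound
    on Kolmogorov complexity (which is itself uncomputable)."""
    return len(zlib.compress(data, 9))

# A briefly-describable structure: the first 1000 even numbers.
regular = ",".join(str(2 * n) for n in range(1000)).encode()

# A random-looking structure of comparable size.
rng = random.Random(42)
noise = ",".join(str(rng.randrange(10**6)) for _ in range(1000)).encode()

# The regular sequence compresses far better: low Kolmogorov complexity
# is what makes a structure cheap to "navigate" with few resources.
print(compressed_length(regular), compressed_length(noise))
```

The gap between the two numbers is the illustration: the "good" structures in the sense above are the ones whose compressed description stays short.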

> > That is, for example, may be that the boolean logic for
> > example, is what it is not because it is consistent simpleand it´s
> > beatiful,   but because it is the shortest logic in terms of the
> > lenght of the description of its operations, and this is the reason
> > because we perceive it as simple and beatiful and consistent.
>
> It is not the shortest logic. It has the simplest semantics, at the
> propositional level. Combinators logic is far simpler conceptually,
> but have even more complex semantically.
>
I meant the shortest binary logic. I mean that any structure with
contradictions has a longer description than one without them, so
the first is harder to discover and harder to deal with.

Alberto G.Corona

Feb 29, 2012, 9:58:43 AM
to Everything List
Of course they compute. Even a plant must read the inputs of
temperature, humidity and sun radiation to decide when spring may
arrive, to launch the program of leaf growing and flower blossoming.
This computation exists because, before that, the plants discovered
cycles in the weather by random mutations and natural selection, so
the plants with this computation could better optimize water and
nutrient resources and outgrow those that did not. Without a
predictable linear environment, computation, and thus evolution and
life, is impossible.
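The plant's decision can be caricatured as a tiny input-to-output program. The thresholds and the rule below are invented for illustration, not botany; the point is only that sensing inputs and launching a program is algorithmic in form.

```python
def spring_has_arrived(temp_c: float, humidity: float, daylight_h: float) -> bool:
    """Toy decision rule with made-up thresholds; a real plant
    integrates such signals biochemically, not numerically."""
    return temp_c > 12.0 and humidity > 0.4 and daylight_h > 11.0

def plant_program(temp_c: float, humidity: float, daylight_h: float) -> str:
    # The evolved rule maps sensed inputs to the launched "program".
    if spring_has_arrived(temp_c, humidity, daylight_h):
        return "grow leaves and blossom"
    return "stay dormant"

print(plant_program(15.0, 0.6, 12.5))  # spring-like readings
print(plant_program(2.0, 0.8, 9.0))    # winter-like readings
```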

Alberto
> >http://www.mail-archive.com/everyth...@googlegroups.com/msg15696...

Bruno Marchal

Feb 29, 2012, 12:35:31 PM2/29/12
to everyth...@googlegroups.com
On 29 Feb 2012, at 15:47, Alberto G.Corona wrote:



On 29 feb, 11:20, Bruno Marchal <marc...@ulb.ac.be> wrote:
On 29 Feb 2012, at 02:20, Alberto G.Corona wrote (to Stephen):

A thing that I often ask myself concerning MMH is  the question about
what is mathematical and what is not?. The set of real numbers is a
mathematical structure, but also the set of real numbers plus the
point (1,1) in the plane is.

Sure. Now, with comp, that mathematical structure is more easily
handled in the "mind" of the universal machine. For the ontology we
can use arithmetic, on which everyone agree. It is absolutely
undecidable that there is more than that (with the comp assumption).
So for the math, comp invite to assume only what is called "the
sharable part of intuitionist and classical mathematics.

I do not think of computations in terms of "minds of universal
machines" in the abstract sense, but in terms of the needs of
computability of living beings.

I am not sure I understand what you mean by that.
What is your goal?

The goal by default here is to build, or isolate (by reasoning from ideas that we can share) a theory of everything (a toe).
And by toe, most of us means a theory unifying the known forces, without eliminating the person and consciousness.

The list advocates that 'everything' is simpler than 'something'. But this leads to a measure problem. 

It happens that the comp hypothesis gives crucial constraints on that measure problem.




The set of randomly chosen numbers { 1,4
3,4,.34, 3}  is because it can be described with the same descriptive
language of math. But the first of these structures have properties
and the others do not. The first can be infinite but can be described
with a single equation while the last   must be described
extensively. . At least some random universes (the finite ones) can be
described extensively, with the tools of mathematics but they don´t
count in the intuitive sense as mathematical.

Why? If they can be finitely described, then I don't see why they
would be non mathematical.

It is not mathematical in the intuitive sense, just as the list of the
points of a randomly folded paper is not. That more restrictive,
intuitive sense is what I use here.

Ah?
OK.





 What is usually considered  genuinely mathematical is any structure,
that can be described briefly.

Not at all. In classical math any particular real number is
mathematically real, even if it cannot be described briefly. Chaitin's
Omega cannot be described briefly, even if we can defined it briefly.

A real number, in the sense I said above, is not mathematical. In fact
there is no mathematical theory about particular real numbers; the set
of all the real numbers, on the contrary, is.

OK. Even for Peano Arithmetic, in fact. Basically, because a dovetailer on the reals is an arithmetical object.
It looks like you define math by the "separable part of math" on which everybody agrees. Me too, as far as ontology is concerned. But I can't prevent the finite numbers from seeing infinities everywhere!




Also it must have good properties ,
operations, symmetries or isomorphisms with other structures so the
structure can be navigated and related with other structures and the
knowledge can be reused.   These structures have a low kolmogorov
complexity, so they can be "navigated" with low computing resources.

But they are a tiny part of bigger mathematical structures. That's why
we use big mathematical universe, like the model of ZF, or Category
theory.

If maths is all that can be described finitelly, then of course  you
are right. but I´m intuitively sure that the ones that are interesting
can be defined  briefly,  using an evolutuionary sense of what is
interesting.

I agree with you. The little numbers are the real stars :)

But the fact is that quickly, *some* rather little numbers have behaviors which we can't explain without referring to big numbers or even infinities. A Diophantine polynomial of degree 4, with 54 variables, perhaps fewer, is already Turing universal. There are programs which do not halt, but you will need quite elaborate transfinite mathematics to prove it is the case.







So the demand of computation in each living being forces to admit
 that  universes too random or too simple, wiith no lineal or
 discontinuous macroscopic laws have no  complex spatio-temporal
volutes (that may be the aspect of life as looked from outside of our
four-dimensional universe).  The macroscopic laws are the macroscopic
effects of the underlying mathematical structures with which our
universe is isomorphic (or identical).

We need both, if only to make precise that very reasoning. Even in
comp, despite such kind of math is better seen as epistemological than
ontological.

There is a hole in the transition from  certain mathematical
properties in macroscopic laws to simple mathematical theories of
everything .  

Sure. Especially since, if we start from the observations, all theories are infinite extrapolations from finite samples of data.



The fact that strange, but relatively simple
mathematical structure (M theory)  

If you call that simple, even relatively ...



include islands of macroscopic laws
that are warm for life.

With comp, such a picture is false. If we take it seriously, it leads to a reductionism so strong that it eliminates consciousness and persons. It is contrary to the facts, if you agree that you are conscious <here-and-now>.

With the computationalist hypothesis, based on an invariance principle for consciousness (yes doctor), we see that we have to justify the M-theory, or whatever correctly describes the physical reality, from a theory of consciousness (itself justifiable by the machine, for its justifiable part).



I do not see the necessity of this greed for reduction. The
macroscopic laws could reign in a Hubble sphere, sustained by a giant
on top of a turtle swimming in an ocean.

It is an open, but soluble problem. If this is correct (which I doubt) then the Hubble sphere sustained by a giant on top of a turtle swimming in an ocean (of what?) has to be derived from logic, numbers, addition and multiplication only.
That's the point.







And our very notion of what is intuitively considered mathematical:
"something  general simple and powerful enough"    has the hallmark of
scarcity of computation resources. (And absence of contradictions fits
in the notion of simplicity, because exception to rules have to be
memorized and dealt with extensively, one by one)

Perhaps not only is that way but even may be that  the absence of
contradictions ( the main rule of simplicity) or -in computationa
terms- the rule of  low kolmogorov complexity  _creates_ itself the
mathematics.

Precisely not. Kolmogorov complexity is to shallow, and lacks the
needed redundancy, depth, etc. to allow reasonable solution to the
comp measure problem.

I cannot grasp from your terse definitions what the comp measure
problem is.

Do you understand the notion of first person indeterminacy? Have you read:

It is a deductive reduction of the mind-body problem into a body problem in arithmetic. It gives the shape of the conceptual solution, and toe. 

Comp has the advantage of having already its science, computer science, which makes it possible to translate a problem of philosophy-theology into precise technical terms.

The shape of the conceptual solution can be shown closer to Plato's theology, than Aristotle's theology, used by 5/5 atheists, 4/5 of the Abrahamic religion, 1/5 by the mystics, and large part of some eastern religion.



What I know is that Kolmogorov complexity is critical
for life, if living beings compute inputs to create appropriate
outputs for survival. And they do it.

Yes, it can have many applications, but it is very rough, and computer science provides many more notions of complexity, and of reducibility. Brent cited a paper by Calude showing specifically this, notably.

Kolmogorov complexity might be the key to the measure problem, but few people have succeeded in using it to make progress. It might play some role in the selection of some particular dovetailer, but it can't work, being non-computable and depending on a constant. I don't know. I'm afraid that the possible role for Kolmogorov complexity will have to be derived, not assumed. Or you might find an alternative formulation of comp.






That is, for example, may be that the boolean logic for
example, is what it is not because it is consistent simpleand it´s
beatiful,   but because it is the shortest logic in terms of the
lenght of the description of its operations, and this is the reason
because we perceive it as simple and beatiful and consistent.

It is not the shortest logic. It has the simplest semantics, at the
propositional level. Combinators logic is far simpler conceptually,
but have even more complex semantically.

I meant the shortest binary logic.

Classical logic is not the shortest binary logic, in terms of the length of its possible formal descriptions.



I mean that any structure with
contradictions has a longer description than one without them,

?
No logic gets contradictions, with the notable exception of the paraconsistent logics.
Intuitionist logic is a consistent (free of contradiction) weakening of classical logic. Quantum logic too. 
Note also that the term logic is vague. Strictly speaking I don't need logic at the ontological level. I put it here for reasons of simplicity.



so the first is harder to discover and harder to deal with.

You're preaching to the choir. Classical logic is the one with the simplest semantics. It is the common jewel of both Plato and Aristotle, and with comp, it forces us to distinguish proof from truth. Which I think is the essence of science and religion (not of human academies and churches, alas).

It looks we agree on some things: the importance of classical logic. The need to restrict ourselves to the separable part of math.

Now, if you are interested in the mind-body problem, you might understand, with some work, that the comp hypothesis reduces the problem into a body problem, which takes the form of a relative measure problem on computations. It shows that the physical laws have an arithmetical origin, or a theoretical computer science origin.

I don't pretend that comp is true, just that it leads to that kind of reversal for the global toe. It can be considered as strongly reductionist ontologically, but it can be shown to prevent any normative or complete theory for the persons and any other universal numbers.

There might also be a flaw in the argument. Don't worry at all if you find one. UDA1-7 is "easy". 

Step 8 is too concise in sane04, and you can find here a more detailed explanation:


The list was already discussing the "measure problem" related naturally with the everything-type of theories. Comp adds a lot, notably by making important the distinction between first person and third person view, which is the key for the mind-body problem.

The second part (of sane04) is technical and thus more difficult without the study of Mendelson, Boolos, or Davis 65 (the original paper of Gödel, Church, Kleene, Post, Rosser). But you don't need that to understand the comp mind-body problem, and the comp "reversal". 

Information and complexity are important concepts, but there are many other one, and in fine, it depends on what we are interested to solve or clarify.

Bruno

Evgenii Rudnyi

Feb 29, 2012, 3:08:51 PM2/29/12
to everyth...@googlegroups.com
How would you define "compute" in the sentence "a bacteria computes"? Is
this similar to what happens within a computer?

Evgenii


On 29.02.2012 15:58 Alberto G.Corona said the following:

Alberto G.Corona

Feb 29, 2012, 7:59:44 PM2/29/12
to Everything List


On 29 feb, 18:35, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 29 Feb 2012, at 15:47, Alberto G.Corona wrote:
>
>
>
>
> > On 29 feb, 11:20, Bruno Marchal <marc...@ulb.ac.be> wrote:
> >> On 29 Feb 2012, at 02:20, Alberto G.Corona wrote (to Stephen):
>
> >>> A thing that I often ask myself concerning MMH is  the question
> >>> about
> >>> what is mathematical and what is not?. The set of real numbers is a
> >>> mathematical structure, but also the set of real numbers plus the
> >>> point (1,1) in the plane is.
>
> >> Sure. Now, with comp, that mathematical structure is more easily
> >> handled in the "mind" of the universal machine. For the ontology we
> >> can use arithmetic, on which everyone agree. It is absolutely
> >> undecidable that there is more than that (with the comp assumption).
> >> So for the math, comp invite to assume only what is called "the
> >> sharable part of intuitionist and classical mathematics.
>
> > I do not thing in computations in terms of "minds of universal
> > machines" in the abstract sense but in terms of the needs of
> > computability of living beings.
>
> I am not sure I understand what you mean by that.
> What is your goal?
>
> The goal by default here is to build, or isolate (by reasoning from
> ideas that we can share) a theory of everything (a toe).
> And by toe, most of us means a theory unifying the known forces,
> without eliminating the person and consciousness.
>
My goal is the same. I start from the same COMP premises, but I do
not see why the whole model of the universe has to be restricted to
being computable. I start from the idea of whatever model of a
universe that can locally evolve computers. A mathematical continuous
structure with an infinitely small substitution measure, and thus non-
computable, can evolve computers. Well, not just computers, but problem-
adaptive systems, clearly separated from the environment, that respond
to external environment situations in order to preserve the internal
structures, to reproduce, and so on.
That is not a problem as long as Diophantine polynomials don't usurp
the role of Boolean logic in our universe, and transfinite
mathematics doesn't claim a role in the second law of Newton. ;)
> read:http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHALAbstract...
>
> It is a deductive reduction of the mind-body problem into a body
> problem in arithmetic. It gives the shape of the conceptual solution,
> and toe.
>
> Comp has the advantage of having already its science, computer
> science, which makes it possible to translate a problem of philosophy-
> theology in precise technical terms.
>
> The shape of the conceptual solution can be shown closer to Plato's
> theology, than Aristotle's theology, used by 5/5 atheists, 4/5 of the
> Abrahamic religion, 1/5 by the mystics, and large part of some eastern
> religion.
>
> > What i know is that, kolmogorov complexity is critical
> > for life. if living beings compute inputs to create appropriate
> > outputs for survival. And they do it.
>
> Yes, it can have many application, but it is very rough, and computer
> science provides many more notion of complexity, and of reducibility.
> Brent cited a paper by Calude showing specifically this, notably.
>
> Kolmogorov complexity might be the key of the measure problem, but few
> people have succeeded of using it to progress. It might play some role
> in the selection of some particular dovetailer, but it can't work, by
> being non computable, and depending on constant. I don't know. I'm
> afraid that the possible role for Kolmogorov complexity will have to
> be derived, not assumed. or you might find an alternative formulation
> of comp.

As I said above, I do not see why a model of the universe as a whole
has to be restricted to the requirement of simulation. I see (local)
and macroscopic computability as an "anthropic" requirement of life,
but nothing more.
>
>
>
> >>> That is, for example, may be that the boolean logic for
> >>> example, is what it is not because it is consistent simpleand it´s
> >>> beatiful,   but because it is the shortest logic in terms of the
> >>> lenght of the description of its operations, and this is the reason
> >>> because we perceive it as simple and beatiful and consistent.
>
> >> It is not the shortest logic. It has the simplest semantics, at the
> >> propositional level. Combinators logic is far simpler conceptually,
> >> but have even more complex semantically.
>
> > I meant the sortest binary logic.
>
> Classical logic is not the shorter binary logic. In term of the length
> of its possible formal descriptions.
>
> > I mean that any structure with
> > contradictions has longer description than the one without them.,
>
> ?
> None logic get contradictions, with the notable exception of the
> paraconsistant logics.
> Intuitionist logic is a consistent (free of contradiction) weakening
> of classical logic. Quantum logic too.
> Note also that the term logic is vague. Strictly speaking I don't need
> logic at the ..

I can define a set and arbitrary operations with contradictions. I
can say True AND True is False half of the time and True the other
half, depending on a third stochastic Boolean variable that flips
according to some criteria. I can define multiplication of numbers
in weird ways so that I break the symmetric and distributive
properties in certain cases, and so on. All of them can be defined
algorithmically or mathematically. In the broader sense, these
structures will be mathematical, but with larger Kolmogorov complexity
than the good ones (and useless).
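Such a contradictory connective can indeed be written down algorithmically. A sketch (the names and the 50% criterion are just the ones described above) makes the description-length point visible: the classical AND is one line, while the "weird" one needs extra machinery for its stochastic exception.

```python
import random

def classical_and(a: bool, b: bool) -> bool:
    return a and b  # the short, exception-free structure

def weird_and(a: bool, b: bool, rng: random.Random) -> bool:
    """The contradictory connective described above: on True AND True
    it answers False half of the time, depending on a third
    stochastic Boolean variable."""
    if a and b:
        return rng.random() < 0.5  # the stochastic exception
    return False

rng = random.Random(0)
answers = {weird_and(True, True, rng) for _ in range(100)}
print(answers)  # both True and False occur: the "law" contradicts itself
```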
>

Alberto G.Corona

Feb 29, 2012, 8:13:11 PM2/29/12
to Everything List
Hi Evgenii,




Any biological activity involves many chemical reactions that produce
intermediate results. These reactions involve molecules whose
structures are coded in DNA, transcribed, and built by RNA. The
produced protein responds to some need of the bacterium as a result of
an internal or external event. Perhaps some food has been engulfed in
the cytoplasm and there is a need for protein-breaking enzymes.

If you see the sequence of reactions on a piece of paper, it has the
form of an algorithm. It does not matter that it is executed in
parallel, with several steps executed at different times in
different locations; this does not change the algorithmic nature
of the process, with its memorized set of instructions, the input,
the execution machinery, and the output produced.
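As a sketch of that reading (the molecule names and the lookup are invented for illustration; real gene regulation is vastly more intricate), the paragraph's "memorized instructions, input, machinery, output" maps naturally onto code:

```python
def bacterial_response(engulfed_food: str, genome: dict[str, str]) -> str:
    """Caricature of the reaction sequence: an input event selects a
    DNA-encoded instruction, which is expressed into an enzyme that
    yields the output. Names are invented for illustration."""
    # "Transcription": look up the instruction coded in the genome.
    enzyme = genome.get(engulfed_food)
    if enzyme is None:
        return "no response encoded"
    # "Translation and catalysis": execute the instruction.
    return f"express {enzyme} to digest {engulfed_food}"

# A toy "memorized set of instructions".
toy_genome = {"protein": "protease", "lactose": "beta-galactosidase"}
print(bacterial_response("protein", toy_genome))
print(bacterial_response("plastic", toy_genome))
```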

Alberto

Bruno Marchal

Mar 1, 2012, 4:32:05 AM
to everyth...@googlegroups.com
Indeed. The COMP premises make this impossible. If "I am a machine", then

1) the appearance of the universe existence and structure is derivable entirely from comp/arithmetic, in a precise way.
2) we can already deduce many things about the physical universe, notably that it is NOT computable.

Don't confuse comp "I can survive with a digital brain in the physical reality" with digital physics (the physical universe is Turing emulable). The second implies the first, but the first implies the negation of the second, so digital physics is completely contradictory by itself. With or without comp, digital physics leads to a contradiction, and so is false.




I start from the idea of whatever model of a
universe that can locally evolve computers. A mathematical continuous
structure with an infinitely small substitution measure, and thus non-
computable, can evolve computers. Well, not just computers,

It has to be like that with comp. It is part of the consequences of comp.



but problem
adaptive systems, clearly separated from the environment, that respond
to external environment situations in order to preserve the internal
structures, to reproduce and so on.

No problem.


The list advocates that 'everything' is simpler than 'something'. But
this leads to a measure problem.

It happens that the comp hypothesis gives crucial constraints on that
measure problem.




I agree with you. The little numbers are the real stars :)

But the fact is that quickly, *some* rather little numbers have
behaviors which we can't explain without referring to big numbers or
even infinities. A diophantine polynomial of degree 4, with 54
variables, perhaps less, is already Turing universal. There are
programs which does not halt, but you will need quite elaborate
transfinite mathematics to prove it is the case.

That is not a problem as long as Diophantine polynomials don't usurp
the role of Boolean logic in our universe, and transfinite
mathematics doesn't claim a role in the second law of Newton. ;)

There is no primitive physical universe, with comp. The physical universe is what numbers can observe from inside arithmetic. You might read the proof (UDA). It is not entirely obvious.









Kolmogorov complexity might be the key of the measure problem, but few
people have succeeded of using it to progress. It might play some role
in the selection of some particular dovetailer, but it can't work, by
being non computable, and depending on constant. I don't know. I'm
afraid that the possible role for Kolmogorov complexity will have to
be derived, not assumed. or you might find an alternative formulation
of comp.

As I said above I do not see why a model  of the universe as a whole
has to be restricted to the requirement of simulation.

But if you accept comp, the physical universe cannot be emulated digitally at all. It is only a projective view from inside. To predict any observable events, you have to make an infinite sum that NO computers can ever do. To understand this you have to understand the first person indeterminacy, and some invariance lemma for first person (conscious) views.



I see  (local)
and macroscopic computability as an "antropic" requirement of Life,
but not more.

The problem is that your consciousness is distributed uniformly on all computations. You need to take that into account. I am afraid that you are using an identity thesis like brain-mind which is incompatible with the comp theory.







That is, for example, may be that the boolean logic for
example, is what it is not because it is consistent simpleand it´s
beatiful,   but because it is the shortest logic in terms of the
lenght of the description of its operations, and this is the reason
because we perceive it as simple and beatiful and consistent.

It is not the shortest logic. It has the simplest semantics, at the
propositional level. Combinators logic is far simpler conceptually,
but have even more complex semantically.

I meant the sortest binary logic.

Classical logic is not the shorter binary logic. In term of the length
of its possible formal descriptions.

I mean that any structure with
contradictions has longer description than the one without them.,

?
None logic get contradictions, with the notable exception of the
paraconsistant logics.
Intuitionist logic is a consistent (free of contradiction) weakening
of classical logic. Quantum logic too.
Note also that the term logic is vague. Strictly speaking I don't need
logic at the ..

I can define a set and arbitrary operations with contradictions.

I have no idea what a set with a contradiction is. A contradiction is a proposition in a theory. A set is not a theory, but a mathematical object.



I can say True AND True is False half of the time and True the other
half, depending on a third stochastic boolean variable that flips
according to some criterion. I can define multiplication of numbers
in weird ways so that I break the symmetric and distributive
properties in certain cases, and so on. All of them can be defined
algorithmically or mathematically. In the broader sense, these
structures will be mathematical, but with larger Kolmogorov complexity
than the good ones (and useless).
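Alberto's "stochastic AND" can be written down directly. The sketch below is my own illustration (the alternating counter standing in for the "third stochastic boolean variable" is a hypothetical choice, made deterministic so runs are reproducible); it shows why such a connective needs a longer description than ordinary AND: its four-row truth table no longer determines it, so hidden state must be described as well.

```python
import itertools

def boolean_and(a, b):
    # Ordinary AND: fully determined by a four-row truth table.
    return a and b

# A deterministic stand-in for the "third stochastic boolean variable".
_flips = itertools.cycle([True, False])

def weird_and(a, b):
    # "True AND True is False half of the time and True the other half."
    if a and b:
        return next(_flips)
    return False

print([boolean_and(True, True) for _ in range(4)])  # [True, True, True, True]
print([weird_and(True, True) for _ in range(4)])    # [True, False, True, False]
```

The extra machinery (the counter and its flipping rule) is exactly the longer description, i.e. the larger Kolmogorov complexity, that the thread is pointing at.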


But if you assume comp, you can no longer impose a simplicity criterion; you have to derive it from the measure problem. Read the papers or the archive. There is no doubt that simplicity is at play, and you can assume it for a while, but it might contradict comp, for big numbers and theories do play a role, and it is up to us to understand why they don't interfere so much. Open problem.

Bruno


Evgenii Rudnyi

unread,
Mar 1, 2012, 2:28:11 PM
to everyth...@googlegroups.com
Alberto,

Chemical reactions take place not only in the cell. Let us take for
example the famous Belousov-Zhabotinsky reaction (a nonlinear chemical
oscillator; several reactions take place in this process
simultaneously). Could we say that a reactor with this reaction also
computes?

Evgenii

On 01.03.2012 02:13 Alberto G.Corona said the following:

Stephen P. King

unread,
Mar 2, 2012, 6:32:50 PM
to everyth...@googlegroups.com
On 2/28/2012 8:20 PM, Alberto G.Corona wrote:
Dear Stephen,

A thing that I often ask myself concerning the MMH is the question of
what is mathematical and what is not. The set of real numbers is a
mathematical structure, but so is the set of real numbers plus the
point (1,1) in the plane. The set of randomly chosen numbers {1.4,
3.4, .34, 3} is too, because it can be described with the same
descriptive language of math. But the first of these structures has
properties and the others do not. The first can be infinite but can be
described with a single equation, while the last must be described
extensively. At least some random universes (the finite ones) can be
described extensively, with the tools of mathematics, but they don't
count in the intuitive sense as mathematical.

    Dear Alberto,

    I distinguish between the existential and the essential aspects such that this question is not problematic. Let me elaborate. By Existence I mean the necessary possibility of the entity. By Essence I mean the collection of properties that are its identity. Existence is only contingent on whether or not said existence is self-consistent; in other words, if an entity's essence is such that it contradicts the possibility of its existence, then it cannot exist; otherwise entities exist, but nothing beyond the tautological laws of identity ("A is A") and Unicity can be said to follow from that bare existence, and we consider such "laws" only after we reach the stage of epistemology.
    Essence, in the sense of properties, seems to require a spectrum of stratification wherein properties can be associated and categories, modalities and aspects defined for them. It is this latter case of Essence that you seem to be considering in your discussion of the difference between the set of Real numbers and some set of randomly chosen numbers, since the former is defined as a complete whole by the set (or Category) theoretical definition of the Reals, while the latter is contingent on a description that must capture some particular collection; hence it is Unicity that matters, i.e. the "wholeness" of the set.
    I would venture to guess that the latter case of your examples always involves particular members of an example of the former case; e.g. the set of randomly chosen numbers that you mentioned is a subset of the set of Real numbers. Do there exist sets (or Categories) that are "whole" yet require the specification of every one of their members separately, such that no finite description can capture their essence? I am not sure; I am only guessing here. One thing that we need to recall is that we are, by appearances, finite and can only apprehend finite details and properties. Is this limitation the result of necessity or contingency?
    Whatever the case, we should be careful not to draw conclusions about the inherent aspects of mathematical objects from our individual ability to conceive of them. For example, I have a form of dyslexia that makes the mental manipulation of symbolic reasoning extremely difficult; I make up for this by reasoning in terms of more visual and proprioceptive senses and thus can understand mathematical entities very well. Given this disability, I might claim that, since I cannot understand the particular symbolic representations, I am a bit dubious of their existence or meaningfulness. Of course this is a rather absurd example, but I have often found that many claims by even eminent mathematicians boil down to a similar situation. Many of the claims against the existence of infinities fall under this situation.



 What is usually considered genuinely mathematical is any structure
that can be described briefly. Also it must have good properties,
operations, symmetries or isomorphisms with other structures, so the
structure can be navigated and related with other structures and the
knowledge can be reused. These structures have a low Kolmogorov
complexity, so they can be "navigated" with low computing resources.
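The "low Kolmogorov complexity" point can be illustrated with compression as a crude proxy (a rough sketch of the standard argument, not something from the thread; compressed length is only an upper bound on Kolmogorov complexity): a sequence generated by a short rule compresses well, while a random sequence of the same length must essentially be listed element by element.

```python
import random
import zlib

n = 10_000
# A "lawful" structure: generated by a one-line rule.
lawful = bytes(i % 7 for i in range(n))
# A "random universe": no rule shorter than the listing itself.
random.seed(0)
lawless = bytes(random.randrange(256) for _ in range(n))

for name, data in [("lawful", lawful), ("lawless", lawless)]:
    print(name, len(zlib.compress(data, 9)))
```

Running this shows the rule-generated sequence shrinking to a few dozen bytes while the random one stays close to its original 10,000 bytes, mirroring the equation-vs-extensive-listing contrast above.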

    So you are saying that finite describability is a prerequisite for an entity to be mathematical? What is the least upper bound on this limit, and what would necessitate it? Does this imply that mathematics is constrained to some set of objects that only sapient entities can manipulate, in such a way that those manipulations are describable exactly in terms of a finite list or algorithm? Does this not seem a bit anthropocentric? But my question is more about the general direction and implication of your reasoning and is not meant to imply anything in particular. I have often wondered about many of the debates that go on between mathematicians, and I wonder if we are all missing something deeper in our quest.
    For example, why is it that there are multiple and different set theories whose axioms are so radically different? Witness the way that one set theory may assume that the continuum hypothesis is true while another assumes that it is false. This arbitrariness would seem to indicate that mathematics is more like a game that minds play, where all that matters is that all the "moves" are consistent with the "rules". But what if this is just a periphery symptom, an indication of something else, where all we are thinking of is the bounding surface of the concepts?


So the demand of computation in each living being forces us to admit
that universes too random or too simple, with no linear or
discontinuous macroscopic laws, have no complex spatio-temporal
volutes (which may be the aspect of life as seen from outside of our
four-dimensional universe). The macroscopic laws are the macroscopic
effects of the underlying mathematical structures with which our
universe is isomorphic (or identical).

    But why must what we do be reducible to some definable set of procedures? Is there not a kind of prejudice in that idea, that all that we can know and experience must follow some definable set of rules? Could it be that what is describable and delimited to follow a set of rules is the content of our knowledge, whereas the processes of the world are inscrutable on their own? It is only after we sapient and intercommunicating beings have evolved concepts and explanations that there is something we can identify as being, for example, "random" or "simple" or "complex" or "spatio-temporal" or ... or some finite combination thereof.



And our very notion of what is intuitively considered mathematical,
"something general, simple and powerful enough", has the hallmark of
scarcity of computation resources. (And absence of contradictions fits
in the notion of simplicity, because exceptions to rules have to be
memorized and dealt with extensively, one by one.)

    I like this attention that you are focusing on "scarcity of resources". Are you considering that it is a situation that occurred due to pre-existing conditions, or is it more the result of an optimization process? For example, a tiger has stripes and large teeth and other features because those features just happened to be the ones that "won the competition" for ensuring the survival of more tigers, rather than a set of features expressed by some random occurrence. I have pointed out an article by Stephen Wolfram that discusses how most systems in Nature express behaviors and complexities such that the best possible computational simulation of those systems, given physically possible resource availability, is the actual evolution of those systems themselves. Could it be that a physical system, in a real way, is "the best possible computational simulation" of that particular system in that particular world? This would act as a natural mapping between the category of possible physical systems and the category of computations, in the sense that any computation is ultimately a transformation of information such that the generation of a simulation of some kind of process occurs.



Perhaps it is not only that way; it may even be that the absence of
contradictions (the main rule of simplicity) or, in computational
terms, the rule of low Kolmogorov complexity _creates_ the
mathematics itself. That is, for example, it may be that boolean logic
is what it is not because it is consistent, simple and beautiful, but
because it is the shortest logic in terms of the length of the
description of its operations, and this is why we perceive it as
simple, beautiful and consistent.

    I believe that the absence of contradictions is an imposed rule of a sort, since it is only necessary to have logical non-contradiction to reproduce (copy) a given structure. I argue that this is the case because there is no a priori logical reason why a logical system based on a particular set of axioms should be ontologically preferred. The set {0,1} may be a small set of possible variations that can be associated, but why not {i, 1} or {Real Numbers} or {Complex Numbers}? We must be careful that we do not conflate the particular means by which we actually do think with the Nature of Reality itself. One thing we have been taught by Nature in the most forceful way is that Nature does not respect any preference of framing, coordinate system, or basis. Why would it necessarily prefer a particular logical system?
    To communicate about a structure would fall under this no-contradiction rule, because to communicate coherently and effectively one must have, at some point in the communicative scheme, a means to generate a copy of the referent of the message. The so-called very weak anthropic principle states that observers can only observe themselves in worlds that are not contradictory with their existence, and we can argue many implications of this principle, but note that it places an observer and a world into a mutually defining role such that they stand or fall together as concepts. Is there a principle that applies to individual entities, so that we can consider what similar principle applies to any object, say an electron or a massive black hole?
    Could it be that Boolean logic appears to us as the most simple, beautiful and consistent type of logic because the act of observation itself is constrained to meet the kind of optimization terms that you are pointing toward? I would also point out that the topological dual of Boolean algebras has the same appearance as what we consider when we think of the idea of "atoms in a void". Is this a mere coincidence? Could the logical way we observe the world define, in a very real way, exactly the kind of things that we observe? Do we usually observe, via our eyes and touch, objects composed of collections of points merely because we are, predominantly, observing the world in a position basis?

Onward!

Stephen

Alberto G.Corona

unread,
Mar 4, 2012, 4:10:30 PM
to Everything List
I have a severe cold. I cannot even calculate what 27/2 is (no
kidding). So, for lack of worthy arguments and to avoid spreading the
virus in this group, I will stay away for a week at least ;-)

On 3 mar, 00:32, "Stephen P. King" <stephe...@charter.net> wrote:
> On 2/28/2012 8:20 PM, Alberto G.Corona wrote:
>
> > Dear Stephen,
>
> > A thing that I often ask myself concerning MMH is  the question about
> > what is mathematical and what is not?. The set of real numbers is a
> > mathematical structure, but also the set of real numbers plus the
> > point (1,1) in the plane is. The set of randomly chosen numbers { 1,4
> > 3,4,.34, 3}  is because it can be described with the same descriptive
> > language of math. But the first of these structures have properties
> > and the others do not. The first can be infinite but can be described
> > with a single equation while the last   must be described
> > extensively. . At least some random universes (the finite ones) can be
> > described extensively, with the tools of mathematics but they don't
> > count in the intuitive sense as mathematical.
>
>      Dear Alberto,
>
>      I distinguish between the existential and the essential aspects
> such that this question is not problematic. Let me elaborate. By
> Existence I mean the necessary possibility of the entity. By Essence I
> mean the collection of properties that are its identity. Existence is
> only contingent on whether or not said existence is self-consistent, in
> other words, if an entity's essence is such that it contradicts the
> possibility of its existence, then it cannot exist; otherwise entities
> exist, but nothing beyond the tautological laws of identity - "A is A"
> and Unicity <http://www.thefreedictionary.com/Unicity> - can be said to
> > example, is what it is not because it is consistent, simple and it's
> > beautiful, but because it is the shortest logic in terms of the
> > length of the description of its operations, and this is why
> > we perceive it as simple, beautiful and consistent.
> > .
>
>      I believe that the absence of contradictions is an imposed rule of
> a sort since it is only necessary to have logical non-contradiction to
> reproduce (copy) a given structure. I argue that this is the case
> because there is not a priori logical reason why a logical system based
> on a particular set of axioms should be ontologically prefered. The set
> of {0,1} maybe be a small set of possible variations that can be
> associated but why not { i, 1) or {Real Numbers} or {Complex Numbers}?
> We must be careful that we do not conflate the particular means by which
> we actually do think with the Nature of Reality itself. One thing we
> have been taught by Nature in the most forceful way is that Nature does
> not respect any preference of framing, coordinate system, or basis. Why
> would it necessarily prefer a particular logical system?
>      To communicate about a structure would fall under this
> no-contradiction rule because to communicate coherently and effectively
> one must have, at some point in the communicative scheme, a means to
> generate a copy of the referent of the message. The so-called very weak
> anthropic principle states that observers can only observe themselves in ...
>