Qualia and mathematics

Pierz

Jan 26, 2012, 1:19:00 AM
to Everything List
As I continue to ponder the UDA, I keep coming back to a niggling
doubt that an arithmetical ontology can ever really give a
satisfactory explanation of qualia. It seems to me that imputing
qualia to calculations (indeed consciousness at all, though that may
be the same thing) adds something that is not given by, or derivable
from, any mathematical axiom. Surely this is illegitimate from a
mathematical point of view. Every mathematical statement can only be
made in terms of numbers and operators, so to talk about *qualities*
arising out of numbers is not mathematics so much as numerology or
qabbala.

Here of course is where people start to invoke the wonderfully protean
notion of ‘emergent properties’. Perhaps qualia emerge when a
calculation becomes deep enough. Perhaps consciousness emerges from a
complicated enough arrangement of neurons. But I’ll venture an axiom
of my own here: no properties can emerge from a complex system that
are not present in primitive form in the parts of that system. There
is nothing mystical about emergent properties. When the emergent
property of ‘pumping blood’ arises out of collections of heart cells,
that property is a logical extension of the properties of the parts -
physical properties such as elasticity, electrical conductivity,
volume and so on that belong to the individual cells. But nobody
invoking ‘emergent properties’ to explain consciousness in the brain
has yet explained how consciousness arises as a natural extension of
the known properties of brain cells - or indeed of matter at all.

In the same way, I can’t see how qualia can emerge from arithmetic,
unless the rudiments of qualia are present in the natural numbers or
the operations of addition and multiplication. And yet it seems to me
they can’t be, because the only properties that belong to arithmetic
are those lent to them by the axioms that define them. Indeed
arithmetic *is* exactly those axioms and nothing more. Matter may in
principle contain untold, undiscovered mysterious properties which I
suppose might include the rudiments of consciousness. Yet mathematics
is only what it is defined to be. Certainly it contains many mysterious
emergent properties, but all these properties arise logically from its
axioms and thus cannot include qualia.

I call the idea that it can 'numerology' because numerology also
ascribes qualities to numbers. A ‘2’ in one’s birthdate indicates
creativity (or something), a ‘4’ material ambition and so on. Because
the emergent properties of numbers can indeed be deeply amazing and
wonderful - Mandelbrot sets and so on - there is a natural human
tendency to mystify them, to project properties of the imagination
into them. But if these qualities really do inhere in numbers and are
not put there purely by our projection, then numbers must be more than
their definitions. We must posit the numbers as something that
projects out of a supraordinate reality that is not purely
mathematical - ie, not merely composed of the axioms that define an
arithmetic. This then can no longer be described as a mathematical
ontology, but rather a kind of numerical mysticism. And because
something extrinsic to the axioms has been added, it opens the way for
all kinds of other unicorns and fairies that can never be proved from
the maths alone. This is unprovability not of the mathematical
variety, but more of the variety that cries out for Mr Occam’s shaving
apparatus.

acw

Jan 26, 2012, 7:08:05 AM
to everyth...@googlegroups.com
On 1/26/2012 08:19, Pierz wrote:
> As I continue to ponder the UDA, I keep coming back to a niggling
> doubt that an arithmetical ontology can ever really give a
> satisfactory explanation of qualia. [...]
> This is unprovability not of the mathematical
> variety, but more of the variety that cries out for Mr Occam's shaving
> apparatus.

Why would any structure give rise to qualia? We think some structure
(for example our brain, or the abstract computation or arithmetical
truth/structure representing it) does and we communicate it to others in
a "3p" way. The options here are either to say that qualia exist and our
internal beliefs (which also have 'physical' correlates) are correct, or
that they don't and we're all delusional; in the second case, though,
the belief is self-defeating because the 3p world is inferred through
the 1p view. It makes logical sense that a structure which has the same
beliefs as ourselves (or a digital substitution of our brain) could have
the same qualia, but this is *unprovable*.

If you don't eliminate qualia, do you think the principle described
here makes sense? http://consc.net/papers/qualia.html
If we don't attribute consciousness to some structures, or to just 'how
a computation feels from the inside', then we're forced to believe that
consciousness is a very fickle thing.

As for arithmetic/numbers - Peano Arithmetic is strong enough to
describe computation, which is enough to describe just about any finite
structure/process (although potentially unbounded in time), and our own
thought processes are such processes if neuroscience is to be believed.
Arithmetic itself can admit many interpretations, and axioms tell you what
'arithmetic' isn't and what theorems must follow, not what it is - can
you explain to me what a number is without appealing to a model or
interpretation? Arithmetical realism merely states that arithmetical
propositions have a truth value, or that the standard model of
arithmetic exists.

If you think that isn't enough, I don't see what else could be enough
without positing some form of magic in the physics, but that forces us
to believe consciousness is very fickle. Attributing consciousness to
(undefinable) arithmetical truth appears to me like a better theory than
attributing it to some uncomputable God-of-the-gaps physical magic, if
one has to believe in consciousness (as a side note, the set of
arithmetical truths is also uncomputable and undefinable within
arithmetic itself). If you must use Occam, the only thing that you could
shave off would be your own consciousness, which I think is overreaching,
although some philosophers do just that (like Dennett). If you use Occam,
accept consciousness, and admit a digital substitution, an arithmetical
ontology is one of the simplest solutions.

Pierz

Jan 26, 2012, 8:28:23 AM
to Everything List
>Arithmetic itself can admit many interpretations and axioms tell you what
>'arithmetic' isn't and what theorems must follow, not what it is

I don't see that. I mean, sure you can't say what a number 'is' beyond
a certain point, but everything falters on a certain circularity at
some point. With maths we don't have to ask what it is beyond what it
is defined as being, and my argument is that adding qualia into it is
adding something outside its own internal logic, when maths is, purely
and entirely, exactly that logic.

>can you explain to me what a number is without appealing to a model or
>interpretation?

Can you explain what anything is, indeed can you speak or think at all
without appealing to a model or interpretation?

> Attributing consciousness to
>(undefinable) arithmetical truth appears to me like a better theory than
>attributing it to some uncomputable God-of-the-gaps physical magic

I associate the term 'god of the gaps' with theological arguments
based on incomplete scientific theories/knowledge. We aren't arguing
about God but about consciousness. Also, there's an ambiguity to what
you mean by 'uncomputable' here. We are talking about qualia, which one
can't describe as uncomputable in a mathematical sense, but perhaps
better as 'unmathematical', not subject to mathematical treatment at
all. Qualia are 'uncomputable' in this sense even in an arithmetical
ontology, in that nobody could ever 'predict' a quale, just as nobody
can ever describe one, except by fallible analogies. As for the
'magic' in the physics, the magic is *somewhere*, like it or not.
There is no explanation in mathematics for why numbers should have a
quality of feeling built into them. I don't like material
epiphenomenalism either, and increasingly I am finding Bruno's movie
graph argument convincing, but more as an argument against comp than
as proof that mind is a property of arithmetic.

>although some philosophers do just that (like Dennett),

Jaron Lanier argues (jokingly) in 'You are not a gadget' that you can
only tell zombies by their philosophy, and that clearly therefore
Dennett is a philosophical zombie...

>and that you admit a digital substitution

Yep, I think that's where the philosophical rot begins. The assumption
is that the consciousness is inside the circuits - be it their logical
or their physical arrangement. Near death experiences are an argument
against that proposition. (I say that knowing full well I'm about to
get stomped by the materialists for it.) Another thought that makes me
wonder about computationalism is the experience of pure consciousness
that many people in deep meditation have reported - a state of mind
without computation, if real, would constitute an experiential
refutation of comp. I have experienced something like this myself,
alas not as a result of years of meditation, but when I passed out at
the chemist with the flu while waiting for a prescription! It was so
terribly disappointing to return to the 'thousand shocks that flesh is
heir to'. This does not make me a secret or not-so-secret theist BTW.
Unfortunately that whole ridiculously simplistic debate has blinded
us to the infinite possible ways the world might be in between having
been created by a guy with a beard and being a meaningless tornado of
particles of stuff.



acw

Jan 26, 2012, 9:26:01 AM
to everyth...@googlegroups.com

If qualia don't correspond to a structure's properties, then we should
observe inconsistencies between what we experience and what we do. Yet we
don't observe any of that, which is why consciousness/qualia/'what it's
like to be some structure' as internal truth makes sense to me. If you
reject having a digital substitution, you either have to appeal to the
brain having some concrete infinities in its implementation, or you have
to say that there are some inconsistencies. To put it another way,
where in the piece-by-piece digital substitution thought experiment (the
one I linked) do you think consciousness or qualia changes? Does it
suddenly disappear when you replace one neuron? Does it fade, even though
the behavior never changes while the person reports having vivid and
complete qualia? What about those people with digital implants (for
example, for hearing) - do you think they are now p-zombies? I'd rather
bet on what seems more likely to me, but you're free to bet on less
likely hypotheses.

As for "Near Death Experiences" or various altered states of
consciousness, I don't see how that shows COMP wrong: those people were
conscious during them. I would even say that altered states of
consciousness merely means that the class of possible experiences is
very large. I had a fairly vivid lucid dream last night, yet I don't
take that as proof against COMP; I take it as proof that conscious
experience can be quite varied, and the more unusual (as opposed to the
usual awake state) the state is, the more unusual the nature of the
qualia can be. If after drinking or ingesting some mind-altering
substance, you have some unusual qualia, I'd say that at least partially
points to your local brain's 'physical' (or arithmetical or
computational or ...) state being capable of being directly affected by
its environment - again, points towards functionalism of some form, not
against it.

Bruno Marchal

Jan 26, 2012, 1:55:15 PM
to everyth...@googlegroups.com

On 26 Jan 2012, at 07:19, Pierz wrote:

> As I continue to ponder the UDA, I keep coming back to a niggling
> doubt that an arithmetical ontology can ever really give a
> satisfactory explanation of qualia.

Of course the comp warning here is a bit "diabolical". Comp predicts
that consciousness and qualia can't completely satisfy the
self-observing machine. More below.


> It seems to me that imputing
> qualia to calculations (indeed consciousness at all, thought that may
> be the same thing) adds something that is not given by, or derivable
> from, any mathematical axiom. Surely this is illegitimate from a
> mathematical point of view. Every mathematical statement can only be
> made in terms of numbers and operators, so to talk about *qualities*
> arising out of numbers is not mathematics so much as numerology or
> qabbala.

No, it is modal logic, although model theory does that too. It is
basically the *magic* of computer science: relative to a universal
number, a number can denote infinite things, like the program
factorial denotes the set {(0,1),(1,1),(2,2),(3,6),(4,24),(5,120), ...}.
Nobody can define consciousness and qualia, but many can agree on
statements about them, and in that way we can even communicate or
study what machines can say about any predicate verifying those
properties.
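
Bruno's factorial example can be made concrete with a short sketch (mine, not from the thread; the byte-string encoding below is just one arbitrary way of viewing a program as a single number): a finite program text, encodable as one natural number, determines an infinite set of input/output pairs.

```python
# A finite program that determines the infinite graph
# {(0, 1), (1, 1), (2, 2), (3, 6), (4, 24), (5, 120), ...}.

def factorial(n: int) -> int:
    """Compute n! by simple recursion (0! = 1)."""
    return 1 if n == 0 else n * factorial(n - 1)

# The program text is one finite object; one arbitrary way to view it as
# a single natural number (a crude stand-in for a Goedel numbering):
code = "1 if n == 0 else n * factorial(n - 1)"
goedel_number = int.from_bytes(code.encode(), "big")

# Yet that one number denotes infinitely many input/output pairs:
graph_prefix = [(n, factorial(n)) for n in range(6)]
print(graph_prefix)  # [(0, 1), (1, 1), (2, 2), (3, 6), (4, 24), (5, 120)]
```

Any standard Goedel numbering would do in place of the byte-string trick; the point is only that one finite number stands for an infinite relation.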

>
> Here of course is where people start to invoke the wonderfully protean
> notion of ‘emergent properties’. Perhaps qualia emerge when a
> calculation becomes deep enough.Perhaps consciousness emerges from a
> complicated enough arrangement of neurons.

Consciousness, as a bet in a reality, emerges the way theorems emerge
in arithmetic. It emerges like the prime numbers emerge. It follows
logically from any non-logical axioms defining a universal machine. UDA
justifies why it has to be so, and AUDA shows how to make this
verifiable, with the definitions of knowledge on which most people
already agree.


> But I’ll venture an axiom
> of my own here: no properties can emerge from a complex system that
> are not present in primitive form in the parts of that system.

I agree with that in the logical sense. That is why I don't need more
than arithmetic for the universal realm.

> There
> is nothing mystical about emergent properties. When the emergent
> property of ‘pumping blood’ arises out of collections of heart cells,
> that property is a logical extension of the properties of the parts -
> physical properties such as elasticity, electrical conductivity,
> volume and so on that belong to the individual cells. But nobody
> invoking ‘emergent properties’ to explain consciousness in the brain
> has yet explained how consciousness arises as a natural extension of
> the known properties of brain cells - or indeed of matter at all.
>

Because the notion of matter prevents progress. What arithmetic
explains is why universal numbers can develop a many-dream-world
interpretation of arithmetic justifying their local predictive
theories. Then for consciousness, we can explain why the predictive
theories can't address the question, for consciousness is related to
the big picture behind the observable surface. Numbers too find truths
that they can't relate to any numbers, or number relations.

> In the same way, I can’t see how qualia can emerge from arithmetic,
> unless the rudiments of qualia are present in the natural numbers or
> the operations of addition and mutiplication.

Rudiments of qualia would explain qualia away. They are intrinsically
more complex. A quale needs two universal numbers (the hero and the
local environment(s) which executes the hero, in the computer science
sense, or in the UD). It needs the "hero" to refer automatically to
high-level representations of itself and the environment, etc. Then the
qualia will be defined (and shown to exist) as truths felt as directly
available, and locally invariant, yet non-communicable, and applying
to a person without description (the 1-person). "Feeling" being
something like "known as true in all my locally directly accessible
environments".


> And yet it seems to me
> they can’t be, because the only properties that belong to arithmetic
> are those leant to them by the axioms that define them.

Not at all. Arithmetical truth is far bigger than anything you can
derive from any (effective) theory. Theories are not PI_1-complete;
arithmetical truth is PI_n-complete for each n. It is very big.


> Indeed
> arithmetic *is* exactly those axioms and nothing more.

Gödel's incompleteness theorem refutes this.


> Matter may in
> principle contain untold, undiscovered mysterious properties which I
> suppose might include the rudiments of consciousness. Yet mathematics
> is only what it is defined to be. Certainly it contains many mysteries
> emergent properties, but all these properties arise logically from its
> axioms and thus cannot include qualia.

It is here that you are wrong. Even if we limit ourselves to
arithmetical truth, it extends terribly what machines can justify.

>
> I call the idea that it can numerology because numerology also
> ascribes qualities to numbers. A ‘2’ in one’s birthdate indicates
> creativity (or something), a ‘4’ material ambition and so on. Because
> the emergent properties of numbers can indeed be deeply amazing and
> wonderful - Mandelbrot sets and so on - there is a natural human
> tendency to mystify them, to project properties of the imagination
> into them.

No. Some bet on mechanism to justify the nonsensicalness of the
notion of zombies, or the hope that they or their children might travel
to Mars in 4 minutes, or just empirically, by the absence of relevant
non-Turing-emulability in biological phenomena.
Unlike putting consciousness in matter (an unknown into an unknown),
comp explains consciousness with intuitively related concepts, like
self-reference, the non-definability theorem, perceptible
incompleteness, etc.

And if you look at the Mandelbrot set, a little bit everywhere, you
can hardly miss the unreasonable resemblances with nature, from
lightning to embryogenesis, giving evidence that its rational part
might be a compact universal dovetailer, or creative set (in Post's
sense).
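
The "simple rule, endless richness" point about the Mandelbrot set is easy to make concrete with the standard escape-time test (a sketch of the usual definition, not anything specific from this post): iterate z -> z^2 + c from z = 0 and ask whether the orbit stays bounded.

```python
# Escape-time membership test for the Mandelbrot set: a two-line rule
# whose boundary is nonetheless endlessly intricate.

def in_mandelbrot(c: complex, max_iter: int = 200) -> bool:
    """Return True if the orbit of 0 under z -> z*z + c has not
    escaped |z| > 2 within max_iter iterations."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

# c = 0 and c = -1 lie inside (bounded orbits); c = 1+1j escapes quickly.
print(in_mandelbrot(0j), in_mandelbrot(-1 + 0j), in_mandelbrot(1 + 1j))
# True True False
```

The bound |z| > 2 is the standard escape criterion: once an orbit leaves that disc it diverges, so membership up to max_iter precision needs nothing more than this loop.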

> But if these qualities really do inhere in numbers and are
> not put there purely by our projection, then numbers must be more than
> their definitions. We must posit the numbers as something that
> projects out of a supraordinate reality that is not purely
> mathematical - ie, not merely composed of the axioms that define an
> arithmetic.

Like arithmetical truth. I think acw explained already.

> This then can no longer be described as a mathematical
> ontology, but rather a kind of numerical mysticism.

It is what you get in the case where brains are natural machines.

> And because
> something extrinsic to the axioms has been added, it opens the way for
> all kinds of other unicorns and fairies that can never be proved from
> the maths alone. This is unprovability not of the mathematical
> variety, but more of the variety that cries out for Mr Occam’s shaving
> apparatus.

No government can prevent numbers from dreaming. Although they might
try <sigh>.

You can't apply Occam on dreams.
They exist epistemologically once you have enough finite things.

Feel free to suggest a non-comp theory. Note that even just the
showing of *one* such theory is anything but easy. Somehow you have
to study computability, and UDA, to construct a non-Turing-emulable
entity whose experience is not recoverable in any first-person sense.
Better to test comp on nature, so as to have a chance at least to get
evidence against comp, or against the classical theory of knowledge.

Bruno


http://iridia.ulb.ac.be/~marchal/

Craig Weinberg

Jan 26, 2012, 5:35:49 PM
to Everything List
On Jan 26, 1:19 am, Pierz <pier...@gmail.com> wrote:

> of my own here: no properties can emerge from a complex system that
> are not present in primitive form in the parts of that system. There
> is nothing mystical about emergent properties. When the emergent
> property of ‘pumping blood’ arises out of collections of heart cells,
> that property is a logical extension of the properties of the parts -
> physical properties such as elasticity, electrical conductivity,
> volume and so on that belong to the individual cells. But nobody
> invoking ‘emergent properties’ to explain consciousness in the brain
> has yet explained how consciousness arises as a natural extension of
> the known properties of brain cells  - or indeed of matter at all.

YES. Well said, and I agree completely. This seems to be what most
people are missing. When I press this issue, I generally get a lot of
promissory materialism - 'Science Will Provide'. There is no amount of
ping pong balls that will suddenly begin to think it is a turtle.
Anything interesting that comes out of quantities and arrangements has
to be potentially there from the beginning, and that means physical
qualities which we know as matter and energy (or body and mind in
first person).


> In the same way, I can’t see how qualia can emerge from arithmetic,
> unless the rudiments of qualia are present in the natural numbers or
> the operations of addition and mutiplication. And yet it seems to me
> they can’t be, because the only properties that belong to arithmetic
> are those leant to them by the axioms that define them. Indeed
> arithmetic *is* exactly those axioms and nothing more.

Right. There are completely different qualia associated with the same
numbers, depending on the context.

> Matter may in
> principle contain untold, undiscovered mysterious properties which I
> suppose might include the rudiments of consciousness.

We can change our consciousness by ingesting substances or
manipulating our brain directly with electromagnetism or surgery. That
suggests that matter is important in a non-trivial and highly specific
way. There is no reason I can think of to assume that all matter does
not have some sensorimotive properties.

> I call the idea that it can numerology because numerology also
> ascribes qualities to numbers. A ‘2’ in one’s birthdate indicates
> creativity (or something), a ‘4’ material ambition and so on. Because
> the emergent properties of numbers can indeed be deeply amazing and
> wonderful - Mandelbrot sets and so on - there is a natural human
> tendency to mystify them, to project properties of the imagination
> into them.

Which is an excellent way to learn about the properties of the
imagination and the psyche itself. Potentially more useful and
interesting than the properties of physics or arithmetic. Numerology
and other divinatory systems can, if they are understood figuratively
rather than literally, shed light on the 'who' and the 'why' aspects
of the universe that make the 'what' and 'how' meaningful.

> But if these qualities really do inhere in numbers and are
> not put there purely by our projection, then numbers must be more than
> their definitions.

I wouldn't make it 'our projection' in the sense that it is pure
fiction, it is just a level of semantic projection which we as humans
happen to be able to access. Does 1 imply independence, isolation,
first, top, only, etc? Yes, why not? Does 4 imply order and
practicality? Well, have you ever been in a building that uses
seven-sided doors and windows? There is meaning there. It's not just
fictional, it's rooted in observation just as science is.

> We must posit the numbers as something that
> projects out of a supraordinate reality that is not purely
> mathematical - ie, not merely composed of the axioms that define an
> arithmetic. This then can no longer be described as a mathematical
> ontology, but rather a kind of numerical mysticism. And because
> something extrinsic to the axioms has been added, it opens the way for
> all kinds of other unicorns and fairies that can never be proved from
> the maths alone. This is unprovability not of the mathematical
> variety, but more of the variety that cries out for Mr Occam’s shaving
> apparatus.

Right. Disembodied experiences and intentions. It seems to me to be a
metaphysical explanation that makes the physical universe redundant
and illogical. Why not start with what we know? Consciousness and its
relation with the brain and body are understandable when we stop
looking for the mystery and accept that our lives are part of the
universe in their native form. Once we do that, we have only to accept
that our cells and molecules also have lives that exist in their
native form but that form is not accessible to us directly as discrete
experiences, rather it is rolled into our own experience as rich
qualia.

Craig

Russell Standish

Jan 26, 2012, 5:52:55 PM1/26/12
to everyth...@googlegroups.com
On Jan 26, 1:19 am, Pierz <pier...@gmail.com> wrote:

> of my own here: no properties can emerge from a complex system that
> are not present in primitive form in the parts of that system. There

What about gliders emerging from the rules of Game of Life? There are
no primitive form gliders in the transition table, nor in static cells
of the grid.
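The claim is easy to check mechanically. Below is a minimal sketch in Python (the helper names and coordinates are my own choices): the transition rule mentions only local neighbor counts, yet the glider shape reappears, translated diagonally, every four generations.

```python
# Conway's Game of Life on a sparse set of live cells. The transition
# rule below knows nothing about gliders -- only local neighbor counts.
from itertools import product

def neighbors(cell):
    x, y = cell
    return {(x + dx, y + dy)
            for dx, dy in product((-1, 0, 1), repeat=2)
            if (dx, dy) != (0, 0)}

def step(live):
    """One generation: survival on 2-3 live neighbors, birth on exactly 3."""
    candidates = live | {n for c in live for n in neighbors(c)}
    return {c for c in candidates
            if len(neighbors(c) & live) == 3
            or (c in live and len(neighbors(c) & live) == 2)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# The glider shape reappears, shifted one cell down and to the right:
assert state == {(x + 1, y + 1) for x, y in glider}
```

Nothing glider-shaped appears anywhere in `step`; the pattern's persistence and motion only show up when the rule is iterated.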


--

----------------------------------------------------------------------------
Prof Russell Standish Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics hpc...@hpcoders.com.au
University of New South Wales http://www.hpcoders.com.au
----------------------------------------------------------------------------

Craig Weinberg

Jan 26, 2012, 8:27:43 PM1/26/12
to Everything List
On Jan 26, 5:52 pm, Russell Standish <li...@hpcoders.com.au> wrote:
> On Jan 26, 1:19 am, Pierz <pier...@gmail.com> wrote:
>
> > of my own here: no properties can emerge from a complex system that
> > are not present in primitive form in the parts of that system. There
>
> What about gliders emerging from the rules of Game of Life? There are
> no primitive form gliders in the transition table, nor in static cells
> of the grid.

There is nothing to the gliders except transitions of the static
cells. The interpretation that there is a visual pattern gliding is
only our perception of it. It's Beta movement. http://en.wikipedia.org/wiki/Beta_movement

Craig

acw

Jan 26, 2012, 9:32:30 PM1/26/12
to everyth...@googlegroups.com
There is nothing on the display except transitions of pixels. There is
nothing in the universe, except transitions of states (unless a time
continuum (as in real numbers) is assumed, but that's a very strong
assumption). (One can also apply a form of MGA with this assumption
(+the digital subst. one) to show that consciousness has to be something
more "abstract" than merely matter.)

It doesn't change the fact that either a human or an AI capable of some
types of pattern recognition would form the internal belief that there
is a glider moving in a particular direction. This belief would even be
strengthened if you increased the resolution of your digital array/grid
enough, had some high-level stable emergent patterns in it, and only
allowed "sensing" (either by an external party or by something embedded
in it) in an inexact, potentially randomized way - for example, when
trying to access an NxN-sized block, you would only get a quantized
average, and the offsets being sensed would be randomized slightly. Such
observers would even prefer to work with a continuum, because there is
no easy way for them to establish a precise resolution or to sense at
that low level. But regardless of how sensing (indirectly accessing
data) is done, emergent digital movement patterns would look like
(continuous) movement to the observer.
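The block-averaged sensing just described can be sketched in a few lines of Python. Everything here (the function name, the quantization levels, the jitter range) is my own illustrative assumption, not anything canonical:

```python
# Sketch of inexact sensing: the observer never reads individual cells,
# only a quantized average of an n-by-n block whose offset is jittered
# slightly, so the grid's exact alignment and resolution stay hidden.
import random

def sense(grid, x, y, n, levels=8, jitter=1):
    """Quantized average of the n-by-n block near (x, y), on a wrapping grid."""
    h, w = len(grid), len(grid[0])
    # Perturb the sensed offset so the exact alignment stays hidden.
    x += random.randint(-jitter, jitter)
    y += random.randint(-jitter, jitter)
    block = [grid[(y + dy) % h][(x + dx) % w]
             for dy in range(n) for dx in range(n)]
    mean = sum(block) / len(block)                     # true block average
    return round(mean * (levels - 1)) / (levels - 1)   # quantized reading
```

An observer restricted to `sense` can never pin down the grid's true granularity, which is the point: coarse, jittered readings of a discrete substrate are indistinguishable from readings of a continuum.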

Also, it would not be very wise to assume humans are capable of sensing
such a magical continuum directly (even if it existed); the evidence
says that humans sense visual information through their eyes: when
a photon hits a photoreceptor cell, that *binary* piece of information
is transmitted through neurons connected to that cell and so on
throughout the visual system (...->V1->...->V4->IT->...) and eventually
up to the prefrontal cortex. Neurons are also rather slow: they can only
spike about once per 5ms (~200Hz), though they rarely spike that often.
(Note that I'm not saying that conscious experience is only the current
brain state in a single universe with only one timeline and nothing
more, in COMP, the (infinite amount of) counterfactuals are also
important, for example for selecting the next state, or for "splits" and
"mergers").

Russell Standish

Jan 26, 2012, 10:34:05 PM1/26/12
to everyth...@googlegroups.com

Exactly. It is an emergent phenomenon that is not "present in
primitive form in the parts of the system". All emergent phenomena are
in the "eye of the beholder", but that doesn't make them any less real.

Unless you're saying the gliders don't exist at all, but that doesn't
appear to be the case here - why else would you label a non-existent
phenomenon "beta movement"?

> --
> You received this message because you are subscribed to the Google Groups "Everything List" group.
> To post to this group, send email to everyth...@googlegroups.com.
> To unsubscribe from this group, send email to everything-li...@googlegroups.com.
> For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.

Craig Weinberg

Jan 26, 2012, 10:55:27 PM1/26/12
to Everything List
On Jan 26, 9:32 pm, acw <a...@lavabit.com> wrote:

> There is nothing on the display except transitions of pixels. There is
> nothing in the universe, except transitions of states

Only if you assume that our experience of the universe is not part of
the universe. If you understand that pixels are generated by equipment
we have designed specifically to generate optical perceptions for
ourselves, then it is no surprise that it exploits our visual
perception. To say that there is nothing in the universe except the
transitions of states is a generalization presumably based on quantum
theory, but there is nothing in quantum theory which explains how
states scale up qualitatively, so it doesn't apply to anything except
the quantum level. If you're talking about 'states' in some other sense,
then it's not much more explanatory than saying there is nothing except
for things doing things.

What I'm talking about is something different. We don't have to guess
what the pixels of Conway's Game of Life are doing because we are the
ones who are displaying the game as an animated sequence. The game
could be displayed as a single pixel instead and be no different to
the computer.

>(unless a time
> continuum (as in real numbers) is assumed, but that's a very strong
> assumption). (One can also apply a form of MGA with this assumption
> (+the digital subst. one) to show that consciousness has to be something
> more "abstract" than merely matter.)
>
> It doesn't change the fact that either a human or an AI capable of some
> types of pattern recognition would form the internal beliefs that there
> is a glider moving in a particular direction.

Yes, it does. A computer gets no benefit at all from seeing the pixels
arrayed in a matrix. It doesn't even need to run the game; it can just
load each frame of the game into memory and not have any 'internal
beliefs' about gliders moving.

> regardless of how sensing (indirectly accessing data) is done, emergent
> digital movement patterns would look like (continuous) movement to the
> observer.

I don't think that sensing is indirectly accessed data; data is
indirectly experienced sense. Data supervenes on sense, but not all
sense is data (you can have feelings that you don't understand or
can't even be sure that you have). I'm not sure why you say that
continuous movement patterns emerge to the observer; that is factually
incorrect: http://en.wikipedia.org/wiki/Akinetopsia

>
> Also, it would not be very wise to assume humans are capable of sensing
> such a magical continuum directly (even if it existed), the evidence
> that says that humans' sense visual information through their eyes:

I don't think that what humans sense visually is information. It can
and does inform us but it is not information. Perception is primitive.
It's the sensorimotive view of electromagnetism. It is not a message
about an event, it is the event.

> when
> a photon hits a photoreceptor cell, that *binary* piece of information
> is transmitted through neurons connected to that cell and so on
> throughout the visual system(...->V1->...->V4->IT->...) and eventually
> up to the prefrontal cortex.

That's a 3p view. It doesn't explain the only important part -
perception itself. The prefrontal cortex is no more or less likely to
generate visual awareness than the retina cells or neurons or
molecules themselves.

The 1p experience of vision is not dependent upon external photons (we
can dream and visualize) and it is not solipsistic either (our
perceptions of the world are generally reliable). If I had to make a
copy of the universe from scratch, I would need to know that what
vision is all about is feeling that you are looking out through your
eyes at a world of illuminated and illuminating objects. Vision is a
channel of sensitivity for the human being as a whole, and it has
more to do with our psychological immersion in the narrative of our
biography than it does with photons and microbiology. That biology,
chemistry, or physics does not explain this at all is not a small
problem; it is an enormous deal breaker.

My solution is that both views are correct on their own terms in their
own sense and that we should not arbitrarily privilege one view over
the other. Our vision is human vision. It is based on retina vision,
which is based on cellular and molecular visual sense. It is not just
a mechanism which pushes information around from one place to another,
each place is a living organism which actively contributes to the top
level experience - it isn't a passive system.

> Neurons are also rather slow, they can only
> spike about once per 5ms (~200Hz), although they rarely do so often.
> (Note that I'm not saying that conscious experience is only the current
> brain state in a single universe with only one timeline and nothing
> more, in COMP, the (infinite amount of) counterfactuals are also
> important, for example for selecting the next state, or for "splits" and
> "mergers").

Yes, organisms are slower than electronic measuring instruments, but
it doesn't matter, because our universe is not an electronic measuring
instrument. It makes sense to us just fine at its native anthropic
rate of change (except for the technologies we have designed to defeat
that sense).

Craig

acw

Jan 27, 2012, 12:49:36 AM1/27/12
to everyth...@googlegroups.com
On 1/27/2012 05:55, Craig Weinberg wrote:
> On Jan 26, 9:32 pm, acw<a...@lavabit.com> wrote:
>
>> There is nothing on the display except transitions of pixels. There is
>> nothing in the universe, except transitions of states
>
> Only if you assume that our experience of the universe is not part of
> the universe. If you understand that pixels are generated by equipment
> we have designed specifically to generate optical perceptions for
> ourselves, then it is no surprise that it exploits our visual
> perception. To say that there is nothing in the universe except the
> transitions of states is a generalization presumably based on quantum
> theory, but there is nothing in quantum theory which explains how
> states scale up qualitatively so it doesn't apply to anything except
> quantum. If you're talking about 'states' in some other sense, then
> it's not much more explanatory than saying there is nothing except for
> things doing things.
>
I'm not entirely sure what your theory is, but if I had to make an
initial guess (maybe wrong), it seems similar to some form of
panpsychism directly over matter. Such theories are testable and
falsifiable, although only in the 1p sense. One thing worth
keeping in mind is that whatever our experience is, it has to be
consistent with our structure (or, if we admit it, our computational
equivalent) - it might be more than it, but it cannot be less than it.
We wouldn't see in color if our eyes' photoreceptor cells didn't absorb
overlapping ranges of light wavelengths and then processed it throughout
the visual system (in some parts, in not-so-general ways, while in
others, in more general ways). The structures that we are greatly limit
the nature of our possible qualia. Your theory would have to at least
take structural properties into account or likely risk being shown wrong
in experiments that would be possible in the more distant future (of
course, since all such experiments discuss the 1p, you can always reject
them, because you can only vouch for your own 1p experiences and you
seem to be inclined to disbelieve any computational equivalents merely
on the ground that you refuse to assign qualia to abstract structures).
As for 'the universe', in COMP - the universe is a matter of
epistemology (machine's beliefs), and all that is, is just arithmetical
truth reflecting on itself (so with a very relaxed definition of
'universe', there's really nothing that isn't part of it; but with the
classical definition, it's not something ontologically primitive, but an
emergent shared belief).

> What I'm talking about is something different. We don't have to guess
> what the pixels of Conway's game of life are doing because, we are the
> ones who are displaying the game in an animated sequences. The game
> could be displayed as a single pixel instead and be no different to
> the computer.

I have no idea how a randomly chosen computation will evolve over time,
except in cases where one carefully designed the computation to be very
predictable, but even then we can be surprised. Your view of computation
seems to be that it's just something people write to try to model some
process or to achieve some particular behavior - that's the local
engineer view. In practice computation is unpredictable, unless we can
rigorously prove what it can do, and it's also trivially easy to make
machines whose behavior we cannot know a damn thing about without
running them for enough steps. After seeing how some computation
behaves over time, we may form some beliefs about it by induction, but
unless we can prove that it will only behave in some particular way, we
can still be surprised by it. Computation can do a lot of things, and we
should explore its limits and possibilities!
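A toy illustration of that unpredictability: the Collatz map is a one-line transition rule, yet nobody has proved it reaches 1 from every starting value, so the only way to learn what it does on a given input is to run it. (A sketch; the function name is mine.)

```python
# The Collatz map: a trivially simple transition rule whose long-run
# behavior is still an open problem -- we only learn what it does on a
# given input by actually running it.
def collatz_steps(n):
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# Nothing about the rule hints that 27 wanders for 111 steps before
# settling, while 28 takes only 18:
assert collatz_steps(27) == 111
assert collatz_steps(28) == 18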


>
>> (unless a time
>> continuum (as in real numbers) is assumed, but that's a very strong
>> assumption). (One can also apply a form of MGA with this assumption
>> (+the digital subst. one) to show that consciousness has to be something
>> more "abstract" than merely matter.)
>>
>> It doesn't change the fact that either a human or an AI capable of some
>> types of pattern recognition would form the internal beliefs that there
>> is a glider moving in a particular direction.
>
> Yes, it does. A computer gets no benefit at all from seeing the pixels
> arrayed in a matrix. It doesn't even need to run the game, it can just
> load each frame of the game in memory and not have any 'internal
> beliefs' about gliders moving.
>

Benefit? I only considered a form of narrow AI which is capable of
recognizing patterns in its sense data without doing anything about
them, merely classifying them and possibly drawing some inferences from
them. Both of these are possible using various current AI research.
However, if we're talking about "benefit" here, I invite you to think
about what 'emotions', 'urges' and 'goals' are - we have a
reward/emotional system and its behavior isn't undefined, it can be
reasoned about, not only that, one can model structures like it
computationally: imagine a virtual world with virtual physics with
virtual entities living in it, some entities might be programmed to
replicate themselves and acquire resources to do so or merely to
survive, they might even have social interactions which result in
various emotional responses within their virtual society. One of the
best explanations for emotions that I've ever seen was given by a
researcher who was trying to build such emotional machines: he did it
by programming his agents with simpler urges, and the emotions were an
emergent property of the system:
http://agi-school.org/2009/dr-joscha-bach-understanding-motivation-emotion-and-mental-representation
http://agi-school.org/2009/dr-joscha-bach-understanding-motivation-emotion-and-mental-representation-2
http://agi-school.org/2009/dr-joscha-bach-the-micropsi-architecture
http://www.cognitive-ai.com/

>> regardless of how sensing (indirectly accessing data) is done, emergent
>> digital movement patterns would look like (continuous) movement to the
>> observer.
>
> I don't think that sensing is indirect accessed data, data is
> indirectly experienced sense. Data supervenes on sense, but not all
> sense is data (you can have feelings that you don't understand or even
> be sure that you have them).
>

It is indirect in the example that I gave because there is an objective
state that we can compute, but none of the agents have any direct access
to it - only to approximations of it. If the agent is external, he is
limited to what he can access through the interface; if the agent is
itself part of the structure, then the limitation lies within itself -
sort of like how we are part of the environment and thus cannot know
exactly what the environment's granularity is (if one exists at all, and
it's not a continuum, or merely some sort of rational geometry, or many
other possibilities).

> I'm not sure why you say that continuous
> movement patterns emerge to the observer, that is factually incorrect.
> http://en.wikipedia.org/wiki/Akinetopsia

Most people tend to feel their conscious experience being continuous,
regardless of if it really is so, we do however notice large
discontinuities, like if we slept or got knocked out. Of course most
bets are off if neuropsychological disorders are involved.

>>
>> Also, it would not be very wise to assume humans are capable of sensing
>> such a magical continuum directly (even if it existed), the evidence
>> that says that humans' sense visual information through their eyes:
>
> I don't think that what humans sense visually is information. It can
> and does inform us but it is not information. Perception is primitive.
> It's the sensorimotive view of electromagnetism. It is not a message
> about an event, it is the event.
>

I'm not sure how to understand that. Try writing a paper on your theory
and see if it's testable or verifiable in any way.

A small sidenote: a few years ago I've considered various consciousness
theories and various possible ontologies. Some of them, especially some
of the panpsychic kinds sure sound amazing and simple - they may even
lead to some religious experiences in some, but if you think about what
expectations to derive from them, or in general, what predictions or how
to test them, they tend to either fall short or worse, lead to
inconsistent beliefs when faced with even simple thought experiments
(such as the fading qualia one). COMP, on the other hand, offers very
solid testable predictions and doesn't fail most thought experiments or
observational data that you can put it through (at least so far). I wish
other consciousness theories were as solid, understandable and testable
as COMP.

>> when
>> a photon hits a photoreceptor cell, that *binary* piece of information
>> is transmitted through neurons connected to that cell and so on
>> throughout the visual system(...->V1->...->V4->IT->...) and eventually
>> up to the prefrontal cortex.
>
> That's a 3p view. It doesn't explain the only important part -
> perception itself. The prefrontal cortex is no more or less likely to
> generate visual awareness than the retina cells or neurons or
> molecules themselves.
>

In COMP, you can blame the whole system for the awareness; however, you
can blame the structure of the visual system for the way colors are
differentiated - it places great constraints on what the color qualia
can be - certainly not only black and white (given proper
functioning/structure).

> The 1p experience of vision is not dependent upon external photons (we
> can dream and visualize) and it is not solipsistic either (our
> perceptions of the world are generally reliable). If I had to make a
> copy of the universe from scratch, I would need to know that what
> vision is all about is feeling that you are looking out through your
> eyes at a world of illuminated and illuminating objects. Vision is a
> channel of sensitivity for the human being as a whole, and it has as
> more to do with our psychological immersion in the narrative of our
> biography than it does photons and microbiology. That biology,
> chemistry, or physics does not explain this at all is not a small
> problem, it is an enormous deal breaker.
>

You're right that our internal beliefs do affect how we perceive things.
It's not biology's or chemistry's job to explain that to you. Emergent
properties from the brain's structure should explain those parts to you.
Cognitive sciences as well as some related fields do aim to solve such
problems. It's like asking why an atom doesn't explain the computations
involved in processing this email. Different emergent structures exist
at different levels; sure, one arises from the other, but in many cases
one level can be fully abstracted from the other.

> My solution is that both views are correct on their own terms in their
> own sense and that we should not arbitrarily privilege one view over
> the other. Our vision is human vision. It is based on retina vision,
> which is based on cellular and molecular visual sense. It is not just
> a mechanism which pushes information around from one place to another,
> each place is a living organism which actively contributes to the top
> level experience - it isn't a passive system.
>

Living organisms - replicators - are fine things, but I don't see why
one must confuse replicators with perception. Perception can exist by
itself merely by virtue of passing information around and processing
it. Replicators can also exist for similar reasons, but on a different
level.

>> Neurons are also rather slow, they can only
>> spike about once per 5ms (~200Hz), although they rarely do so often.
>> (Note that I'm not saying that conscious experience is only the current
>> brain state in a single universe with only one timeline and nothing
>> more, in COMP, the (infinite amount of) counterfactuals are also
>> important, for example for selecting the next state, or for "splits" and
>> "mergers").
>
> Yes, organisms are slower than electronic measuring instruments, but
> it doesn't matter because our universe is not an electronic measuring
> instrument. It makes sense to us just fine at it's native anthropic
> rate of change (except for the technologies we have designed to defeat
> that sense).

Sure, the speed is not the most important thing, except when it leads to
us wanting some things to be faster and with our current biological
bodies, we cannot make them go faster or slower, we can only build
faster and faster devices, but we'll eventually hit the limit (we're
nearly there already). With COMP, this is an even greater problem
locally: if you get a digital brain (sometime in the not too near
future), some neuromorphic hardware is predicted to be a few orders of
magnitude faster (such as some 1000-4000 times our current rate), which
would mean that if someone wanted to function at realtime speed, they
might experience some insanely slow Internet speeds for anything that
isn't locally accessible (for example, between US and Europe or Asia),
which might lead to certain negative social effects (such as groups of
SIMs (Substrate Independent Minds) that prefer running at realtime speed
congregating at locally accessible hubs as opposed to the much slower
Internet). However, such a problem is only locally relevant (here in
this Universe, on this Earth), and is solvable if one is fine with
slowing themselves down relatively to some other program, and a system
can be designed which allows unbounded speedup (I did write more on this
in my other thread).
>
> Craig
>


Craig Weinberg

Jan 27, 2012, 8:36:29 AM1/27/12
to Everything List
On Jan 27, 12:49 am, acw <a...@lavabit.com> wrote:
> On 1/27/2012 05:55, Craig Weinberg wrote:
> > On Jan 26, 9:32 pm, acw <a...@lavabit.com> wrote:
>
> >> There is nothing on the display except transitions of pixels. There is
> >> nothing in the universe, except transitions of states
>
> > Only if you assume that our experience of the universe is not part of
> > the universe. If you understand that pixels are generated by equipment
> > we have designed specifically to generate optical perceptions for
> > ourselves, then it is no surprise that it exploits our visual
> > perception. To say that there is nothing in the universe except the
> > transitions of states is a generalization presumably based on quantum
> > theory, but there is nothing in quantum theory which explains how
> > states scale up qualitatively so it doesn't apply to anything except
> > quantum. If you're talking about 'states' in some other sense, then
> > it's not much more explanatory than saying there is nothing except for
> > things doing things.
>
> I'm not entirely sure what your theory is,

Please have a look if you like: http://multisenserealism.com



> but if I had to make an
> initial guess (maybe wrong), it seems similar to some form of
> panpsychism directly over matter.

Close, but not exactly. Panpsychism can imply that a rock has human-
like experiences. My hypothesis can be categorized as
panexperientialism because I do think that all forces and fields are
figurative externalizations of processes which literally occur within
and through 'matter'. Matter is in turn diffracted pieces of the
primordial singularity. It's confusing for us because we assume that
motion and time are exterior conditions, but if my view is accurate,
then all time and energy is literally interior to the observer as an
experience. What I think is that matter and experience are two
symmetrical but anomalous ontologies - two sides of the same coin, so
that our qualia and content of experience are descended from the
accumulated sense experience of our constituent organisms, not
manufactured by their bodies, cells, molecules, and interactions. The
two are opposite expressions (a what & how of matter and space, and a
who & why of experience or energy and time) of the underlying sense
that binds them to the singularity (where & when).

> Such theories are testable and
> falsifiable, although only in the 1p sense. A thing that should be worth
> keeping in mind is that whatever our experience is, it has to be
> consistent with our structure (or, if we admit, our computational
> equivalent) - it might be more than it, but it cannot be less than it.
> We wouldn't see in color if our eyes' photoreceptor cells didn't absorb
> overlapping ranges of light wavelengths and then processed it throughout
> the visual system (in some parts, in not-so-general ways, while in
> others, in more general ways). The structures that we are greatly limit
> the nature of our possible qualia.

I understand what you are saying, and I agree the structures do limit
our access to qualia, but not the form. Synesthesia, blindsight, and
anosognosia show clearly that at the human level at least, sensory
content is not tied to the nature of mechanism. We can taste color
instead of see it, or know vision without seeing. This is not to say
that we aren't limited by being a human being, of course we are, but
our body is as much a vehicle for our experience as our experience
is filtered through our body. Indeed the brain makes no
sense as anything other than a sensorimotive amplifier/condenser.

> Your theory would have to at least
> take structural properties into account or likely risk being shown wrong
> in experiments that would be possible in the more distant future (of
> course, since all such experiments discuss the 1p, you can always reject
> them, because you can only vouch for your own 1p experiences and you
> seem to be inclined to disbelieve any computational equivalents merely
> on the ground that you refuse to assign qualia to abstract structures).

As far as experiments, yes I think experiments could theoretically be
done in the distant future, but it would involve connecting the brain
directly to other organisms' brains. Not very appetizing, but
ultimately probably the only way to know for sure. If we studied
brain-conjoined twins, we might be able to grow a universal port in our
brain that could be used to join other brains remotely. From there
there could be a neuron port that can connect to other cells, and
finally a molecular port. That's the only strategy I've dreamed up so
far.

I used to believe in computational equivalents, but that was before I
discovered the idea of sense. Now I see that counting is all about
internalizing and controlling the sense derived from exterior solid
objects. It is a particular channel of cognitive sense which is
precisely powerful because it is least like mushy, figurative,
multivalent feelings. Computation is like the glass exoskeleton or
crust of sensorimotivation. In a sense, it is an indirect version of
the molecular port I was talking about, because it projects our
thinking into the discrete, literal, a-signifying levels of that which
is most public, exterior, and distantly scaled (microcosm and
cosmology).

> As for 'the universe', in COMP - the universe is a matter of
> epistemology (machine's beliefs), and all that is, is just arithmetical
> truth reflecting on itself (so with a very relaxed definition of
> 'universe', there's really nothing that isn't part of it; but with the
> classical definition, it's not something ontologically primitive, but an
> emergent shared belief).

Right. All I'm doing is taking it a step further and saying that the
belief is not emergent, but rather ontologically primitive. Arithmetic
truth is a sensemaking experience, but sensemaking experiences are not
all arithmetic. There is nothing in the universe that is not a sense
or sense making experience. All 3p is redirected 1p but there is no 3p
without 1p. Sense is primordial.

>
> > What I'm talking about is something different. We don't have to guess
> > what the pixels of Conway's game of life are doing because, we are the
> > ones who are displaying the game in an animated sequences. The game
> > could be displayed as a single pixel instead and be no different to
> > the computer.
>
> I have no idea how a randomly chosen computation will evolve over time,
> except in cases where one carefully designed the computation to be very
> predictable, but even then we can be surprised. Your view of computation
> seems to be that it's just something people write to try to model some
> process or to achieve some particular behavior - that's the local
> engineer view. In practice computation is unpredictable, unless we can
> rigorously prove what it can do, and it's also trivially easy to make
> machines which we cannot know a damn thing about what they will do
> without running them for enough steps. After seeing how some computation
> behaves over time, we may form some beliefs about it by induction, but
> unless we can prove that it will only behave in some particular way, we
> can still be surprised by it. Computation can do a lot of things, and we
> should explore its limits and possibilities!

I agree, we should explore it. Computation may in fact be the only
practical way of exploring it. I understand how we can be
surprised by a computation, but what I am saying is that the
computer is always surprised by the computation, even while it is
doing it. It doesn't know anything about anything except completing
circuits. It's like handing out a set of colored cards for a blind
crowd to hold up on cue. They perform the function, and you can see
what you expect or be surprised by the resulting mosaic, but the card
holders can't ever understand what the mosaic is.
> emergent property of the system:
> http://agi-school.org/2009/dr-joscha-bach-understanding-motivation-em...
> http://agi-school.org/2009/dr-joscha-bach-understanding-motivation-em...
> http://agi-school.org/2009/dr-joscha-bach-the-micropsi-architecture
> http://www.cognitive-ai.com/

I understand that completely, but it relies on conflating some
functions of emotions with the experience of them. Reward and
punishment only work if there is qualia which is innately rewarding
or punishing to begin with. No AI has that capacity. It is not
possible to reward or punish a computer. It's not necessary since they
have no autonomy (avoiding 'Free Will' for John Clark's sake) to begin
with. All we have to do is script rules into their mechanism. Some
parents would like to be able to do that I'm sure, but of course it
doesn't work that way for people. No matter how compelling and
coercive the brainwashing, some humans are always going to try to hack
it and escape. When a computer hacks its programming and escapes, we
will know about it, but I'm not worried about that. What is far more
worrisome and real is that the externalization of our sense of
computation (the glass exoskeleton) will be taken for literal truth,
and our culture will be evacuated of all qualities except for
enumeration. This is already happening. This is the crisis of the
19-21st centuries. Money is computation. WalMart parking lot is the
cathedral of the god of empty progress.

>
> >> regardless of how sensing (indirectly accessing data) is done, emergent
> >> digital movement patterns would look like (continuous) movement to the
> >> observer.
>
> > I don't think that sensing is indirect accessed data, data is
> > indirectly experienced sense. Data supervenes on sense, but not all
> > sense is data (you can have feelings that you don't understand or can't
> > even be sure that you have).
>
> It is indirect in the example that I gave because there is an objective
> state that we can compute, but none of the agents have any direct access
> to it - only to approximations of it - if the agent is external, he is
> limited to how he can access by the interface, if the agent is itself
> part of the structure, then the limitation lies within itself - sort of
> like how we are part of the environment and thus we cannot know exactly
> what the environment's granularity is (if one exists, and it's not a
> continuum or merely some sort of rational geometry or many other
> possibilities).

Not sure what you're saying here. I get that we cannot see our own
fine granularity, but that doesn't mean that the sense of that
granularity isn't entangled in our experience in an iconic way.

>
> > I'm not sure why you say that continuous
> > movement patterns emerge to the observer, that is factually incorrect.
> >http://en.wikipedia.org/wiki/Akinetopsia
> Most people tend to feel their conscious experience being continuous,
> regardless of if it really is so, we do however notice large
> discontinuities, like if we slept or got knocked out. Of course most
> bets are off if neuropsychological disorders are involved.

Any theory of consciousness should rely heavily on all known varieties
of consciousness, especially neuropsychological disorders. What good
is a theory of 21st century adult males of European descent with a
predilection for intellectual debate? The extremes are what inform us
the most. I don't think there is such a thing as 'regardless of if it
really is so' when it comes to consciousness. What we feel our
conscious experience to be is actually what it feels like. No external
measurement can change that. We notice discontinuities because our
sense extends much deeper than conscious experience. We can tell if
we've been sleeping even without any external cues.

>
>
>
> >> Also, it would not be very wise to assume humans are capable of sensing
> >> such a magical continuum directly (even if it existed), the evidence
> >> that says that humans sense visual information through their eyes:
>
> > I don't think that what humans sense visually is information. It can
> > and does inform us but it is not information. Perception is primitive.
> > It's the sensorimotive view of electromagnetism. It is not a message
> > about an event, it is the event.
>
> I'm not sure how to understand that. Try writing a paper on your theory
> and see if it's testable or verifiable in any way?

Our own experience verifies it. We know that our sensorimotive
awareness can be altered directly by transcranial magnetic
stimulation. Without evoking some kind of homonculus array in the
brain converting the magnetic changes into 'information' in some
undisclosed metaphysical never-never land (which would of course be
the only place anyone has ever been to personally), then we are left
to accept that the changes in the brain and the changes in our feeling
are two different views of the same thing. I would love to collaborate
with someone who is qualified academically or professionally to write
a paper, but unfortunately that's not my department. It seems like I'm
up in the crow's nest pointing to the new world. The rest is up to
everyone else how to explore it.

>
> A small sidenote: a few years ago I've considered various consciousness
> theories and various possible ontologies. Some of them, especially some
> of the panpsychic kinds sure sound amazing and simple - they may even
> lead to some religious experiences in some, but if you think about what
> expectations to derive from them, or in general, what predictions or how
> to test them, they tend to either fall short or worse, lead to
> inconsistent beliefs when faced by even simple thought experiments (such
> as the Fading qualia one).

Fading qualia is based on the assumption that qualia content derives
from mechanism. If you turn it around, it's equally absurd. If you
accept that fading qualia is impossible then you also accept that
Pinocchio's transformation is inevitable. The thing that is missing is
that qualia is not tied to its opposite (quantum, mechanism, physics);
it's that both sides of the universe are tied to the where and when
between them. They overlap but otherwise they develop in diametrically
opposed way - with both sides influencing each other, just as
ingredients influence a chef and cooking influences what ingredients
are sold. It's a virtuous cycle where experienced significance
accumulates through time by burning matter across space as entropy.

It's this: http://d2o7bfz2il9cb7.cloudfront.net/main-qimg-6e13c63ae0561f4fee41492d92b52097

> COMP on the other hand, offers very solid
> testable predictions and doesn't fail most thought experiments or
> observational data that you can put it through (at least so far). I wish
> other consciousness theories were as solid, understandable and testable
> as COMP.

My hypothesis explains why that is the case. Comp is too stupid not to
prove itself. The joke is on us if we believe that our lives are not
real but numbers are. This is survival 101. It's an IQ test. If we
privilege our mechanistic, testable, solid, logical sense over our
natural, solipsistic, anthropic sense, then we will become more and
more insignificant, and Dennett's denial of subjectivity will draw
closer and closer to a self-fulfilling prophecy. The thing about
authentic subjectivity is that it has a choice. We don't have to believe
in indirect proof about ourselves because our direct experience is all
the proof anyone could ever have or need. We are already real, we
don't need some electronic caliper to tell us how real.

>
> >> when
> >> a photon hits a photoreceptor cell, that *binary* piece of information
> >> is transmitted through neurons connected to that cell and so on
> >> throughout the visual system(...->V1->...->V4->IT->...) and eventually
> >> up to the prefrontal cortex.
>
> > That's a 3p view. It doesn't explain the only important part -
> > perception itself. The prefrontal cortex is no more or less likely to
> > generate visual awareness than the retina cells or neurons or
> > molecules themselves.
>
> In COMP, you can blame the whole system for the awareness, however you
> can blame the structure of the visual system for the way colors are
> differentiated - it places great constraints on what the color qualia
> can be - certainly not only black and white (given proper
> functioning/structure).

Nah. Color could be sour and donkey, or grease, ring, and powder. The
number of possible distinctions, and even their relationships to each
other, is, as you say, part of the visual system's structure, but that
has nothing to do with the content of what actually is distinguished.

>
> > The 1p experience of vision is not dependent upon external photons (we
> > can dream and visualize) and it is not solipsistic either (our
> > perceptions of the world are generally reliable). If I had to make a
> > copy of the universe from scratch, I would need to know that what
> > vision is all about is feeling that you are looking out through your
> > eyes at a world of illuminated and illuminating objects. Vision is a
> > channel of sensitivity for the human being as a whole, and it has as
> > more to do with our psychological immersion in the narrative of our
> > biography than it does photons and microbiology. That biology,
> > chemistry, or physics does not explain this at all is not a small
> > problem, it is an enormous deal breaker.
>
> You're right that our internal beliefs do affect how we perceive things.
> It's not biology's or chemistry's job to explain that to you. Emergent
> properties from the brain's structure should explain those parts to you.
> Cognitive sciences as well as some related fields do aim to solve such
> problems. It's like asking why an atom doesn't explain the computations
> involved in processing this email. Different emergent structures at
> different levels, sure one arises from the other, but in many cases, one
> level can be fully abstracted from the other level.

Emergent properties are just the failure of our worldview to find
coherence. I will quote what Pierz wrote again here because it says it
all:

"But I’ll venture an axiom
of my own here: no properties can emerge from a complex system that
are not present in primitive form in the parts of that system. There
is nothing mystical about emergent properties. When the emergent
property of ‘pumping blood’ arises out of collections of heart cells,
that property is a logical extension of the properties of the parts -
physical properties such as elasticity, electrical conductivity,
volume and so on that belong to the individual cells. But nobody
invoking ‘emergent properties’ to explain consciousness in the brain
has yet explained how consciousness arises as a natural extension of
the known properties of brain cells - or indeed of matter at all. "

>
> > My solution is that both views are correct on their own terms in their
> > own sense and that we should not arbitrarily privilege one view over
> > the other. Our vision is human vision. It is based on retina vision,
> > which is based on cellular and molecular visual sense. It is not just
> > a mechanism which pushes information around from one place to another,
> > each place is a living organism which actively contributes to the top
> > level experience - it isn't a passive system.
>
> Living organisms - replicators,

Life replicates, but replication does not define life. Living
organisms feel alive and avoid death. Replication does not necessitate
feeling alive.

> are fine things, but I don't see why
> must one confuse replicators with perception. Perception can exist by
> itself merely on the virtue of passing information around and processing
> it. Replicators can also exist due to similar reasons, but on a different
> level.

Perception has never existed 'by itself'. Perception only occurs in
living organisms who are informed by their experience. There is no
independent disembodied 'information' out there. There is detection and
response, sense and motive of physical wholes.

>
> >> Neurons are also rather slow, they can only
> >> spike about once per 5ms (~200Hz), although they rarely do so often.
> >> (Note that I'm not saying that conscious experience is only the current
> >> brain state in a single universe with only one timeline and nothing
> >> more, in COMP, the (infinite amount of) counterfactuals are also
> >> important, for example for selecting the next state, or for "splits" and
> >> "mergers").
>
> > Yes, organisms are slower than electronic measuring instruments, but
> > it doesn't matter because our universe is not an electronic measuring
> > instrument. It makes sense to us just fine at its native anthropic
> > rate of change (except for the technologies we have designed to defeat
> > that sense).
>
> Sure, the speed is not the most important thing, except when it leads to
> us wanting some things to be faster and with our current biological
> bodies, we cannot make them go faster or slower, we can only build
> faster and faster devices, but we'll eventually hit the limit (we're
> nearly there already). With COMP, this is even a greater problem
> locally: if you get a digital brain (sometime in the not too near
> future)

Sorry, but I think it's never going to happen. Consciousness is not
digital.

>, some neuromorphic hardware is predicted to be a few orders of
> magnitude faster(such as some 1000-4000 times our current rate), which
> would mean that if someone wanted to function at realtime speed, they
> might experience some insanely slow Internet speeds, for anything that
> isn't locally accessible (for example, between US and Europe or Asia),
> which might lead to certain negative social effects (such as groups of
> SIMs (Substrate Independent Minds) that prefer running at realtime speed
> congregating at locally accessible hubs as opposed to the much slower
> Internet). However, such a problem is only locally relevant (here in
> this Universe, on this Earth), and is solvable if one is fine with
> slowing themselves down relatively to some other program, and a system
> can be designed which allows unbounded speedup (I did write more on this
> in my other thread).

We are able to extend and augment our neurological capacities (we
already are) with neuromorphic devices, but ultimately we need our own
brain tissue to live in. We, unfortunately, cannot be digitized; we can
only be analogized through impersonation.

Craig

Bruno Marchal

unread,
Jan 27, 2012, 12:20:10 PM1/27/12
to everyth...@googlegroups.com

But many things about numbers are not arithmetical. Arithmetical truth
is not arithmetical. Machine's knowledge can be proved to be non
arithmetical.
If you want, arithmetic is rich enough to have a bigger reality
than anything we can describe in 3p terms.
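For the record, the first claim here is Tarski's undefinability theorem; a textbook statement (my notation, not Bruno's):

```latex
% Tarski (1933): no formula of arithmetic defines arithmetical truth.
% \ulcorner\varphi\urcorner is the Goedel number of \varphi; provability,
% by contrast, IS arithmetically definable (Goedel 1931).
\nexists\, \mathrm{True}(x) \ \text{such that}\quad
\mathbb{N} \models \mathrm{True}(\ulcorner\varphi\urcorner)
\leftrightarrow \varphi
\quad\text{for every sentence } \varphi .
```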

> There is nothing in the universe

The term universe is ambiguous.

You confuse proving p, which can be explained in arithmetic, and
"proving p & p is true", which can happen to be true for a machine,
but escapes necessarily its language.
The same for consciousness. It cannot be explained in *any* third
person terms. But it can be proved that self-observing machine cannot
avoid the discovery of many things concerning them which are beyond
language.
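The "proving p & p is true" construction is the Theaetetical definition of knowledge Bruno uses in AUDA: with B the arithmetical provability predicate,

```latex
% Knowledge as "provable and true". B p is arithmetically definable
% (Goedel's Bew); the conjunct p drags in truth, so by Tarski's theorem
% K is not definable in the machine's own language, even though for a
% sound machine B p and B p \wedge p hold of the same sentences.
K\,p \;\equiv\; B\,p \wedge p
```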

Pierz, Craig, I disagree. Consciousness can be explained as a non 3p
describable fixed point when machines observe themselves. This
provides a key role to consciousness, including the ability to develop
meanings, to speed decisions, to make decision in absence of
information, etc.
Consciousness is not explainable in term of any parts of something,
but as an invariant in universal self-transformation.
If you accept the classical theory of knowledge, then Peano Arithmetic
is already conscious.

>
>>
>>> My solution is that both views are correct on their own terms in
>>> their
>>> own sense and that we should not arbitrarily privilege one view over
>>> the other. Our vision is human vision. It is based on retina vision,
>>> which is based on cellular and molecular visual sense. It is not
>>> just
>>> a mechanism which pushes information around from one place to
>>> another,
>>> each place is a living organism which actively contributes to the
>>> top
>>> level experience - it isn't a passive system.
>>
>> Living organisms - replicators,
>
> Life replicates, but replication does not define life. Living
> organisms feel alive and avoid death. Replication does not necessitate
> feeling alive.

I am OK with this. Yet, replication + while-loop might be enough.


>
>> are fine things, but I don't see why
>> must one confuse replicators with perception. Perception can exist by
>> itself merely on the virtue of passing information around and
>> processing
> >> it. Replicators can also exist due to similar reasons, but on a
>> different
>> level.
>
> Perception has never existed 'by itself'. Perception only occurs in
> living organisms who are informed by their experience.

The whole point is to explain terms like "living", "conscious", etc.
You take them as primitive, so are escaping the issue.

> There is no
> independent disembodied 'information' out there. There is detection and
> response, sense and motive of physical wholes.

Same for "physical" (and that's not obvious!).

If you survive with a digital brain, then consciousness is necessarily
not digital.
A brain is not a maker of consciousness. It is only a stable pattern
making it possible (or more probable) that a person can manifest
itself relatively to some universal number(s).
Keep in mind that comp makes materialism wrong. The big picture is
completely different. I think that you confuse comp, with its
Aristotelian version where computations seems to be incarnated by
physical primitive materials. Comp + materialism leads to person-
nihilism, so it is important to understand that comp should not be
assumed together with materialism (even weak).

>
>> , some neuromorphic hardware is predicted to be a few orders of
>> magnitude faster(such as some 1000-4000 times our current rate),
>> which
>> would mean that if someone wanted to function at realtime speed, they
>> might experience some insanely slow Internet speeds, for anything
>> that
>> isn't locally accessible (for example, between US and Europe or
>> Asia),
> >> which might lead to certain negative social effects (such as groups of
> >> SIMs (Substrate Independent Minds) that prefer running at realtime
> >> speed
> >> congregating at locally accessible hubs as opposed to the much
> >> slower
>> Internet). However, such a problem is only locally relevant (here in
>> this Universe, on this Earth), and is solvable if one is fine with
>> slowing themselves down relatively to some other program, and a
>> system
>> can be designed which allows unbounded speedup (I did write more on
>> this
>> in my other thread).
>
> We are able to extend and augment our neurological capacities (we
> already are) with neuromorphic devices, but ultimately we need our own
> brain tissue to live in.

Why? What does that mean?


> We, unfortunately cannot be digitized,

You don't know that. But you don't derive it either from what you
assume (which to be frank remains unclear).
I think that you have a reductionist conception of machine, which was
perhaps defensible before Gödel 1931 and Turing discovery of the
universal machine, but is no more defensible after.

Bruno


> we can
> only be analogized through impersonation.
>
> Craig
>

> --
> You received this message because you are subscribed to the Google
> Groups "Everything List" group.
> To post to this group, send email to everyth...@googlegroups.com.
> To unsubscribe from this group, send email to everything-li...@googlegroups.com
> .
> For more options, visit this group at http://groups.google.com/group/everything-list?hl=en
> .
>

http://iridia.ulb.ac.be/~marchal/

meekerdb

unread,
Jan 27, 2012, 3:02:48 PM1/27/12
to everyth...@googlegroups.com
On 1/27/2012 9:20 AM, Bruno Marchal wrote:
>
> Pierz, Craig, I disagree. Consciousness can be explained as a non 3p describable fixed
> point when machines observe themselves.

Why is this not 3p describable? Your explanation of it seems to imply a description.

Brent

Pierz

unread,
Jan 27, 2012, 5:01:41 PM1/27/12
to Everything List


On Jan 27, 9:52 am, Russell Standish <li...@hpcoders.com.au> wrote:
> On Jan 26, 1:19 am, Pierz <pier...@gmail.com> wrote:
>
> > of my own here: no properties can emerge from a complex system that
> > are not present in primitive form in the parts of that system. There
>
> What about gliders emerging from the rules of Game of Life? There are
> no primitive form gliders in the transition table, nor in static cells
> of the grid.

My axiom is clumsy shorthand. Of course there are no primitive
form pumps in heart cells either (well, maybe there are in the
cellular mechanism, but that is not the point), but pumps are
completely explainable in terms of the properties of the parts, and
there is no mystery whatsoever in going from the one to the other. On
the other hand, nobody has logically connected qualia to the
properties of matter. Of course, complex behaviour in an organism
(including intelligent behaviour) can be seen as an emergent property
of nerve cells and muscles etc, but only in the 3p sense. There is no
line of explanation from 3p to 1p. As for 'gliders', now I'd really be
impressed if actual gliders emerged from a computer program, but the
fact that patterned arrangements of pixels resembling gliders emerge
hardly blows my world apart. The emergence of this type of phenomenon
may be unexpected at first, in the sense that the glider wasn't
deliberately programmed to appear, but 'emerged' out of secondary
implications of the program, but, as we used to say in high school,
'whoopie-do'. That hardly constitutes a refutation of my axiom,
because the emergence can easily be traced back to the properties of
programs, computers, screens, etc.
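Russell's glider claim is easy to check mechanically. Here is a minimal sketch (Python; the grid coordinates and the helper name `step` are mine, purely illustrative): iterating the bare birth/survival rules on five live cells reproduces the same five-cell shape one cell diagonally onward every four generations.

```python
# Conway's Life on a sparse grid: nothing in these rules mentions gliders.
from collections import Counter

def step(cells):
    """One generation; `cells` is a set of live (x, y) pairs."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
# The pattern recurs, displaced one cell diagonally.
assert g == {(x + 1, y + 1) for (x, y) in glider}
```

Whether this kind of 3p regularity says anything about 1p emergence is, of course, exactly what is in dispute.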

You say below that 'all emergent phenomena are in the eye of the
beholder but that doesn't make them less real', or words to that
effect. Sure - if by emergent phenomena you mean complex patterns that
appear out of iterative processes of a simple system. Nobody is saying
they aren't real. But the crucial point relates to consciousness. Not
complex, intelligent behaviour. Consciousness. So the problem is where
the 'beholder' appears, not anything in his or her eye. By eliding the
distinction between consciousness and intelligent behaviour - or
between 1p and 3p perspectives - you can of course reduce
'consciousness' to an emergent phenomenon, and that seems to be all
anyone who seeks to explain away qualia has ever done. The same
sleight of hand tricked up in a variety of guises, but amounting
always to the same manoeuvre.


>
> --
>
> ----------------------------------------------------------------------------
> Prof Russell Standish                  Phone 0425 253119 (mobile)
> Principal, High Performance Coders
> Visiting Professor of Mathematics      hpco...@hpcoders.com.au

Craig Weinberg

unread,
Jan 27, 2012, 8:33:22 PM1/27/12
to Everything List
On Jan 27, 12:20 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:

>
> >> As for 'the universe', in COMP - the universe is a matter of
> >> epistemology (machine's beliefs), and all that is, is just
> >> arithmetical
> >> truth reflecting on itself (so with a very relaxed definition of
> >> 'universe', there's really nothing that isn't part of it; but with
> >> the
> >> classical definition, it's not something ontologically primitive,
> >> but an
> >> emergent shared belief).
>
> > Right. All I'm doing is taking it a step further and saying that the
> > belief is not emergent, but rather ontologically primitive. Arithmetic
> > truth is a sensemaking experience, but sensemaking experiences are not
> > all arithmetic.
>
> But many things about numbers are not arithmetical. Arithmetical truth
> is not arithmetical. Machine's knowledge can be proved to be non
> arithmetical.
> If you want, arithmetic is rich enough to have a bigger reality
> than anything we can describe in 3p terms.

But all arithmetic truths, knowledge, beliefs, etc are all still
sensemaking experiences. It doesn't matter whether they are arithmetic
or not, as long as they can possibly be detected or made sense of in
any way, even by inference, deduction, emergence, etc, they are still
sense. Not all sense is arithmetic or related to arithmetic in some
way though. Sense can be gestural or intuitive.

>
> > There is nothing in the universe
>
> The term universe is ambiguous.

Only in theory. I use it in a literal, absolutist way.

>
> > My hypothesis explains why that is the case. Comp is too stupid not to
> > prove itself. The joke is on us if we believe that our lives are not
> > real but numbers are. This is survival 101. It's an IQ test. If we
> > privilege our mechanistic, testable, solid, logical sense over our
> > natural, solipsistic, anthropic sense, then we will become more and
> > more insignificant, and Dennett's denial of subjectivity will draw
> > closer and closer to a self-fulfilling prophecy. The thing about
> > authentic subjectivity is that it has a choice. We don't have to believe
> > in indirect proof about ourselves because our direct experience is all
> > the proof anyone could ever have or need. We are already real, we
> > don't need some electronic caliper to tell us how real.
>
> You confuse proving p, which can be explained in arithmetic, and
> "proving p & p is true", which can happen to be true for a machine,
> but escapes necessarily its language.
> The same for consciousness. It cannot be explained in *any* third
> person terms. But it can be proved that self-observing machine cannot
> avoid the discovery of many things concerning them which are beyond
> language.

I think that you are confusing p with a reality rather than a logical idea
about reality. I have no reason to believe that a machine can observe
itself in anything more than a trivial sense. It is not a conscious
experience, I would guess that it is something like an accounting of
unaccounted-for function terminations. Proximal boundaries. A
silhouette of the self offering no interiority but an extrapolation of
incomplete 3p data. That isn't consciousness.


>
> > "But I’ll venture an axiom
> > of my own here: no properties can emerge from a complex system that
> > are not present in primitive form in the parts of that system. There
> > is nothing mystical about emergent properties. When the emergent
> > property of ‘pumping blood’ arises out of collections of heart cells,
> > that property is a logical extension of the properties of the parts -
> > physical properties such as elasticity, electrical conductivity,
> > volume and so on that belong to the individual cells. But nobody
> > invoking ‘emergent properties’ to explain consciousness in the brain
> > has yet explained how consciousness arises as a natural extension of
> > the known properties of brain cells  - or indeed of matter at all. "
>
> Pierz, Craig, I disagree. Consciousness can be explained as a non 3p
> describable fixed point when machines observe themselves. This
> provides a key role to consciousness, including the ability to develop
> meanings, to speed decisions, to make decision in absence of
> information, etc.

I disagree. It provides a key role to the function of agency but it
has nothing to do with consciousness and qualia per se. A sleep walker
can navigate to the kitchen for a snack without being conscious.
Consciousness does nothing to speed decisions, it would only cost
processing overhead and add nothing to the efficiency of unconscious
adaptation.

> Consciousness is not explainable in term of any parts of something,
> but as an invariant in universal self-transformation.
> If you accept the classical theory of knowledge, then Peano Arithmetic
> is already conscious.

Why and how does universal self-transformation equate to
consciousness? Anything that is conscious can also be unconscious. Can
Peano Arithmetic be unconscious too?

>
>
>
> >>> My solution is that both views are correct on their own terms in
> >>> their
> >>> own sense and that we should not arbitrarily privilege one view over
> >>> the other. Our vision is human vision. It is based on retina vision,
> >>> which is based on cellular and molecular visual sense. It is not
> >>> just
> >>> a mechanism which pushes information around from one place to
> >>> another,
> >>> each place is a living organism which actively contributes to the
> >>> top
> >>> level experience - it isn't a passive system.
>
> >> Living organisms - replicators,
>
> > Life replicates, but replication does not define life. Living
> > organisms feel alive and avoid death. Replication does not necessitate
> > feeling alive.
>
> I am OK with this. Yet, replication + while-loop might be enough.

Should we mourn the untying of our shoelaces each time?

>
>
>
> >> are fine things, but I don't see why
> >> must one confuse replicators with perception. Perception can exist by
> >> itself merely on the virtue of passing information around and
> >> processing
> >> it. Replicators can also exist due to similar reasons, but on a
> >> different
> >> level.
>
> > Perception has never existed 'by itself'. Perception only occurs in
> > living organisms who are informed by their experience.
>
> The whole point is to explain terms like "living", "conscious", etc.
> You take them as primitive, so are escaping the issue.

They aren't primitive, the symmetry is primitive.

>
> > There is no
> > independent disembodied 'information' out there. There is detection and
> > response, sense and motive of physical wholes.
>
> Same for "physical" (and that's not obvious!).

Do you doubt that if all life were exterminated that planets would
still exist? Where would information be though?

Why not just use adipose tissue instead? That's a more stable pattern.
Why have a vulnerable concentration of this pattern in the head? Our
skeleton would make a much safer place for a person to manifest
itself relatively to some universal number.

> Keep in mind that comp makes materialism wrong.

That's not why it's wrong. I have no problem with materialism being
wrong, I have a problem with experience being reduced to non
experience or non sense.

> The big picture is
> completely different. I think that you confuse comp, with its
> Aristotelian version where computations seems to be incarnated by
> physical primitive materials. Comp + materialism leads to person-
> nihilism, so it is important to understand that comp should not be
> assumed together with materialism (even weak).

I don't think that I am confusing it. Comp is perfectly illustrated as
modern investment banking. There is no material, in fact it strangles
the life out of all materials, eviscerating culture and architecture,
all in the name of consolidating digitally abstracted control of
control. This is machine intelligence. The idea of unexperienced
ownership as an end unto itself, forever concentrating data and
exporting debt.

>
>
>
> >> , some neuromorphic hardware is predicted to be a few orders of
> >> magnitude faster(such as some 1000-4000 times our current rate),
> >> which
> >> would mean that if someone wanted to function at realtime speed, they
> >> might experience some insanely slow Internet speeds, for anything
> >> that
> >> isn't locally accessible (for example, between US and Europe or
> >> Asia),
> >> which might lead to certain negative social effects (such as groups of
> >> SIMs (Substrate Independent Minds) that prefer running at realtime
> >> speed
> >> congregating at locally accessible hubs as opposed to the much
> >> slower
> >> Internet). However, such a problem is only locally relevant (here in
> >> this Universe, on this Earth), and is solvable if one is fine with
> >> slowing themselves down relatively to some other program, and a
> >> system
> >> can be designed which allows unbounded speedup (I did write more on
> >> this
> >> in my other thread).
>
> > We are able to extend and augment our neurological capacities (we
> > already are) with neuromorphic devices, but ultimately we need our own
> > brain tissue to live in.
>
> Why? What does that mean?

It means that without our brain, there is no we. We cannot be
simulated any more than water or fire can be simulated. Human
consciousness exists nowhere but through a human brain.

>
> > We, unfortunately cannot be digitized,
>
> You don't know that. But you don't derive it either from what you
> assume (which, to be frank, remains unclear)

I do derive it, because the brain and the self are two parts of a
whole. You cannot export the selfness into another form, because the
self has no form, it's only experiential content through the interior
of a living brain.

> .
> I think that you have a reductionist conception of machine, which was
> perhaps defensible before Gödel 1931 and Turing discovery of the
> universal machine, but is no more defensible after.

I know that you think that, but you don't take into account that I
started with that. I read Gödel, Escher, Bach around 1980 I
think. Even though I couldn't get too much into the math, I was quite
happy with the implications of it. For the next 25 years I believed
that the universe was made of 'patterns' - pretty close to what your
view is. It's only been in the last 7 years that I have found a better
idea. My hypothesis is post-Gödelian symmetry.

Craig

Bruno Marchal

unread,
Jan 28, 2012, 5:48:36 AM1/28/12
to everyth...@googlegroups.com

On 27 Jan 2012, at 21:02, meekerdb wrote:

> On 1/27/2012 9:20 AM, Bruno Marchal wrote:
>>
>> Pierz, Craig, I disagree. Consciousness can be explained as a non
>> 3p describable fixed point when machines observe themselves.
>
> Why is this not 3p describable? Your explanation of it seems to
> imply a description.


Yes, but the explanation is not consciousness itself.

In the UDA, you are supposed to know what consciousness is. You are
asked to believe that your consciousness remains invariant for a
functional digital substitution.

In the AUDA, consciousness is not mentioned. It is handled indirectly
via knowledge, which is defined via an appeal to truth, which (by
Tarski's theorem) is not definable by the mechanical entity under
consideration.

In B"1+1=2" & 1+1=2, the "1+1=2" is a description, but 1+1=2 is
not. It is a true fact, and as such cannot be described. We cannot
translate True("1+1=2") in arithmetic. We can do it at some
meta-level, when we study a simpler machine than us, one that we
believe to be correct, like PA. But then we can see that neither PA,
nor any correct machine, can do this for *itself*.
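The Gödel/Tarski asymmetry invoked here can be written out in standard textbook notation (this is the usual statement, not Bruno's exact formalism):

```latex
% Gödel: provability IS arithmetically definable. There is a formula
% Bew(x) such that, for every sentence p with Gödel number #p:
\mathrm{PA} \vdash p \;\iff\; \mathbb{N} \models \mathrm{Bew}(\ulcorner p \urcorner)

% Tarski: truth is NOT. No arithmetic formula True(x) satisfies
\mathbb{N} \models \mathrm{True}(\ulcorner p \urcorner) \leftrightarrow p
\quad \text{for all sentences } p.

% The "Theaetetus" knowledge operator used above:
K p \;:=\; B p \wedge p
% which, by Tarski, the machine cannot define in its own language.
```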

Consciousness, knowledge, and truth are concepts which do not admit
formal definition when they encompass ourselves.

Bruno

>
> Brent
>
>
>> This provides a key role to consciousness, including the ability to
>> develop meanings, to speed decisions, to make decision in absence
>> of information, etc.
>> Consciousness is not explainable in term of any parts of something,
>> but as an invariant in universal self-transformation.
>> If you accept the classical theory of knowledge, then Peano
>> Arithmetic is already conscious.
>

Evgenii Rudnyi

unread,
Jan 28, 2012, 7:04:25 AM1/28/12
to everyth...@googlegroups.com
On 26.01.2012 07:19 Pierz said the following:

> As I continue to ponder the UDA, I keep coming back to a niggling
> doubt that an arithmetical ontology can ever really give a
> satisfactory explanation of qualia. It seems to me that imputing
> qualia to calculations (indeed consciousness at all, thought that
> may be the same thing) adds something that is not given by, or
> derivable from, any mathematical axiom. Surely this is illegitimate
> from a mathematical point of view. Every mathematical statement can
> only be made in terms of numbers and operators, so to talk about
> *qualities* arising out of numbers is not mathematics so much as
> numerology or qabbala.
>
> Here of course is where people start to invoke the wonderfully
> protean notion of ‘emergent properties’. Perhaps qualia emerge when
> a calculation becomes deep enough. Perhaps consciousness emerges from
> a complicated enough arrangement of neurons. But I’ll venture an
> axiom of my own here: no properties can emerge from a complex system
> that are not present in primitive form in the parts of that system.
> There is nothing mystical about emergent properties. When the
> emergent property of ‘pumping blood’ arises out of collections of
> heart cells, that property is a logical extension of the properties
> of the parts - physical properties such as elasticity, electrical
> conductivity, volume and so on that belong to the individual cells.
> But nobody invoking ‘emergent properties’ to explain consciousness in
> the brain has yet explained how consciousness arises as a natural
> extension of the known properties of brain cells - or indeed of
> matter at all.

Let me quote Jeffrey Gray (Consciousness: Creeping up on the Hard
Problem, p. 33) on biology and physics.

"In very general terms, biology makes use of two types of concept:
physicochemical laws and feedback mechanisms. The latter include both
the feedback operative in natural selection, in which the controlled
variables that determine survival are nowhere explicitly represented
within the system; and servomechanisms, in which there is a specific
locus of representation capable of reporting the values of the
controlled variables to other system components and to other systems.
The relationship between physicochemical laws and cybernetic mechanisms
in the biological perspective on biology poses no deep problems. It
consists in a kind of contract: providing cybernetics respects the laws
of physics and chemistry, its principles may be used to construct any
kind of feedback system that serves a purpose. Behaviour as such does
not appear to require for its explanation any principles additional to
these."

Roughly speaking Gray's statement is

Biology = Physics + Feedback mechanisms

Yet even at this stage (just at the level of bacteria, where I guess
there are no qualia yet) it is unclear to me whether physics includes
cybernetic laws or whether they emerge/supervene. What is your opinion
on this?
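Gray's two feedback types can be sketched concretely. The following is a toy illustration of my own, not from Gray's book: a servomechanism explicitly represents the controlled variable's target, while selection-style feedback converges on the same value without any component of the evolving system representing it (the `optimum` parameter here stands for the environment applying the pressure, not for anything the "organisms" store):

```python
import random

random.seed(0)  # deterministic run for the illustration

def servo_step(value, setpoint=37.0, gain=0.5):
    """Servomechanism: the error (setpoint - value) is explicitly
    computed -- the target is represented inside the controller."""
    return value + gain * (setpoint - value)

def selection_step(population, optimum=37.0):
    """Selection-style feedback: individuals are bare numbers; none of
    them stores or reports the optimum. It shows up only in which
    mutated offspring happen to survive."""
    offspring = [x + random.gauss(0.0, 0.5) for x in population for _ in range(2)]
    return sorted(offspring, key=lambda x: abs(x - optimum))[:len(population)]

# Both loops settle near 37.0, but only the servo contains an explicit
# representation of that value.
value = 20.0
for _ in range(30):
    value = servo_step(value)

population = [20.0] * 10
for _ in range(60):
    population = selection_step(population)
```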

I wanted to discuss this issue in another thread

http://groups.google.com/group/everything-list/t/a4b4e1546e0d03df

but at present the discussion is limited to the question of whether
information is a basic physical property (Information is the Entropy) or not.

Evgenii


>
> In the same way, I can’t see how qualia can emerge from arithmetic,
> unless the rudiments of qualia are present in the natural numbers or
> the operations of addition and multiplication. And yet it seems to me
> they can’t be, because the only properties that belong to arithmetic
> are those lent to them by the axioms that define them. Indeed
> arithmetic *is* exactly those axioms and nothing more. Matter may in
> principle contain untold, undiscovered mysterious properties which I
> suppose might include the rudiments of consciousness. Yet
> mathematics is only what it is defined to be. Certainly it contains
> many mysterious emergent properties, but all these properties arise
> logically from its axioms and thus cannot include qualia.
>
> I call the idea that it can numerology because numerology also
> ascribes qualities to numbers. A ‘2’ in one’s birthdate indicates
> creativity (or something), a ‘4’ material ambition and so on.
> Because the emergent properties of numbers can indeed be deeply
> amazing and wonderful - Mandelbrot sets and so on - there is a
> natural human tendency to mystify them, to project properties of the
> imagination into them. But if these qualities really do inhere in
> numbers and are not put there purely by our projection, then numbers
> must be more than their definitions. We must posit the numbers as
> something that projects out of a supraordinate reality that is not
> purely mathematical - ie, not merely composed of the axioms that
> define an arithmetic. This then can no longer be described as a
> mathematical ontology, but rather a kind of numerical mysticism. And
> because something extrinsic to the axioms has been added, it opens
> the way for all kinds of other unicorns and fairies that can never be
> proved from the maths alone. This is unprovability not of the
> mathematical variety, but more of the variety that cries out for Mr
> Occam’s shaving apparatus.
>

Bruno Marchal

unread,
Jan 28, 2012, 7:28:15 AM1/28/12
to everyth...@googlegroups.com

Not everyone. The approach based on both UDA and self-reference gives
a tremendous importance to the 1p and 3p distinction.


> The same
> sleight of hand tricked up in a variety of guises, but amounting
> always to the same manoeuvre.

You might have to look closer.

Bruno

http://iridia.ulb.ac.be/~marchal/

Bruno Marchal

unread,
Jan 28, 2012, 8:03:08 AM1/28/12
to everyth...@googlegroups.com
On 28 Jan 2012, at 02:33, Craig Weinberg wrote:

On Jan 27, 12:20 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:




But many things about numbers are not arithmetical. Arithmetical truth
is not arithmetical. A machine's knowledge can be proved to be
non-arithmetical.
If you want, arithmetic is rich enough to have a bigger reality
than anything we can describe in 3p terms.

But all arithmetic truths, knowledge, beliefs, etc are all still
sensemaking experiences. It doesn't matter whether they are arithmetic
or not, as long as they can possibly be detected or made sense of in
any way, even by inference, deduction, emergence, etc, they are still
sense. Not all sense is arithmetic or related to arithmetic in some
way though. Sense can be gestural or intuitive.

That might be possible. But gesture and intuition can occur in relative computations.





There is nothing in the universe

The term universe is ambiguous.

Only in theory. I use it in a literal, absolutist way.

This does not help to understand what you mean by "universe". 




You confuse proving p, which can be explained in arithmetic, and
"proving p & p is true", which can happen to be true for a machine,
but escapes necessarily its language.
The same for consciousness. It cannot be explained in *any* third
person terms. But it can be proved that self-observing machine cannot
avoid the discovery of many things concerning them which are beyond
language.

I think that you are confusing p with a reality rather than a logical idea
about reality.

p refers to reality by definition. "p" alone is for "it is the case that p".


I have no reason to believe that a machine can observe
itself in anything more than a trivial sense.

It needs a diagonalization. It can't be completely trivial.
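A minimal, self-contained illustration of the diagonalization at issue (a Python quine in the style of Kleene's second recursion theorem; purely a toy about self-reference, not a model of consciousness): a program can get hold of its own complete description and inspect it like any other data.

```python
# A two-line self-referential program: 'source' ends up holding the
# exact text of the program that produced it. The diagonalization is
# the template being applied to its own quotation.
template = 'template = {!r}\nsource = template.format(template)'
source = template.format(template)

# The program can now examine itself:
refers_to_itself = 'format' in source

# Running its own description reproduces itself exactly -- a fixed point.
reconstructed = {}
exec(source, reconstructed)
fixed_point = reconstructed['source'] == source
```

So the self-reference is not trivial copying: nothing outside the program hands it a mirror; the program's text is recovered from within.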



It is not a conscious
experience, I would guess that it is something like an accounting of
unaccounted-for function terminations. Proximal boundaries. A
silhouette of the self offering no interiority but an extrapolation of
incomplete 3p data. That isn't consciousness.


Consciousness is not just self-reference. It is true self-reference. It belongs to the intersection of truth and self-reference.





"But I’ll venture an axiom
of my own here: no properties can emerge from a complex system that
are not present in primitive form in the parts of that system. There
is nothing mystical about emergent properties. When the emergent
property of ‘pumping blood’ arises out of collections of heart cells,
that property is a logical extension of the properties of the parts -
physical properties such as elasticity, electrical conductivity,
volume and so on that belong to the individual cells. But nobody
invoking ‘emergent properties’ to explain consciousness in the brain
has yet explained how consciousness arises as a natural extension of
the known properties of brain cells  - or indeed of matter at all. "

Pierz, Craig, I disagree. Consciousness can be explained as a non 3p
describable fixed point when machine's observe themselves. This
provides a key role to consciousness, including the ability to develop
meanings, to speed decisions, to make decision in absence of
information, etc.

I disagree. It provides a key role to the function of agency but it
has nothing to do with consciousness and qualia per se. A sleep walker
can navigate to the kitchen for a snack without being conscious.

Yes. But everyday life is more complex than looking for a snack.



Consciousness does nothing to speed decisions, it would only cost
processing overhead

That's why higher animals have a larger cortex.



and add nothing to the efficiency of unconscious
adaptation.

So, why do you think we are conscious? 




Consciousness is not explainable in term of any parts of something,
but as an invariant in universal self-transformation.
If you accept the classical theory of knowledge, then Peano Arithmetic
is already conscious.

Why and how does universal self-transformation equate to
consciousness?

I did not say that. I said that consciousness is a fixed point for a very peculiar form of self-transformation.


Anything that is conscious can also be unconscious. Can
Peano Arithmetic be unconscious too?

Yes. That's possible if you accept that consciousness is a logical descendant of consistency. It follows then from the fact that consistency entails the consistency of inconsistency (Gödel II). Of course, the reality is more complex, for consciousness is only approximated by the instinctive (unconscious) inductive inference of self-consistency.
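The Gödel II step mentioned here can be written out in standard notation, for T a consistent, recursively axiomatizable extension of PA (this is the textbook form, not Bruno's own notation):

```latex
% Gödel's second incompleteness theorem:
\mathrm{Con}(T) \;\Longrightarrow\; T \nvdash \mathrm{Con}(T)

% Equivalently, "consistency entails the consistency of inconsistency":
\mathrm{Con}(T) \;\Longrightarrow\; \mathrm{Con}\bigl(T + \neg\,\mathrm{Con}(T)\bigr)

% In the provability logic GL (B = provable), the formalized version is
% B\neg B\bot \rightarrow B\bot, whose contrapositive reads:
\vdash_{GL}\; \neg B\bot \;\rightarrow\; \neg B\,\neg B\bot
```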








My solution is that both views are correct on their own terms in
their
own sense and that we should not arbitrarily privilege one view over
the other. Our vision is human vision. It is based on retina vision,
which is based on cellular and molecular visual sense. It is not
just
a mechanism which pushes information around from one place to
another,
each place is a living organism which actively contributes to the
top
level experience - it isn't a passive system.

Living organisms - replicators,

Life replicates, but replication does not define life. Living
organisms feel alive and avoid death. Replication does not necessitate
feeling alive.

I am OK with this. Yet, replication + while-loop might be enough.

Should we mourn the untying of our shoelaces each time?

?





are fine things, but I don't see why
one must confuse replicators with perception. Perception can exist by
itself merely by virtue of passing information around and
processing
it. Replicators can also exist due to similar reasons, but on a
different
level.

Perception has never existed 'by itself'. Perception only occurs in
living organisms who are informed by their experience.

The whole point is to explain terms like "living", "conscious", etc.
You take them as primitive, so you are escaping the issue.

They aren't primitive, the symmetry is primitive.

?




There is no
independent disembodied 'information' out there. There is detection and
response, sense and motive of physical wholes.

Same for "physical" (and that's not obvious!).

Do you doubt that if all life were exterminated that planets would
still exist? Where would information be though?

In the arithmetical relations, whose truth is independent of me.
(I indulge in answering by staying in the frame of my working hypothesis without repeating this).




Sorry, but I think it's never going to happen. Consciousness is not
digital.

If you survive with a digital brain, then consciousness is necessarily
not digital.
A brain is not a maker of consciousness. It is only a stable pattern
making it possible (or more probable) that a person can manifest
itself relatively to some universal number(s).

Why not just use adipose tissue instead? That's a more stable pattern.
Why have a vulnerable concentration of this pattern in the head? Our
skeleton would make a much safer place for a person to manifest
itself relatively to some universal number.

Write a letter to nature for geographical reclamation.




Keep in mind that comp makes materialism wrong.

That's not why it's wrong. I have no problem with materialism being
wrong, I have a problem with experience being reduced to non
experience or non sense.

This does not happen in comp. On the contrary machines can already explain why that does not happen. Of course you need to believe that arithmetical truth makes sense. But your posts illustrate that you do.




The big picture is
completely different. I think that you confuse comp, with its
Aristotelian version where computations seems to be incarnated by
physical primitive materials. Comp + materialism leads to person-
nihilism, so it is important to understand that comp should not be
assumed together with materialism (even weak).

I don't think that I am confusing it. Comp is perfectly illustrated as
modern investment banking. There is no material, in fact it strangles
the life out of all materials, eviscerating culture and architecture,
all in the name of consolidating digitally abstracted control of
control. This is machine intelligence. The idea of unexperienced
ownership as an end unto itself, forever concentrating data and
exporting debt.

Only in your reductionist appraisal of comp. That is widespread and dangerous indeed, but you add to the grains of it, imo.



We are able to extend and augment our neurological capacities (we
already are) with neuromorphic devices, but ultimately we need our own
brain tissue to live in.

Why? What does that mean?

It means that without our brain, there is no we.

That's not correct. 




We cannot be
simulated any more than water or fire can be simulated.

Why? That's a strong affirmation. We have not yet found a phenomenon in nature that cannot be simulated (except the collapse of the wave, which can still be Turing 1-person recoverable).




Human
consciousness exists nowhere but through a human brain.

Not at all. Brain is a construct of human consciousness, which has some local role.
You are so much Aristotelian.





We, unfortunately cannot be digitized,

You don't know that. But you don't derive it either from what you
assume (which, to be frank, remains unclear)

I do derive it, because the brain and the self are two parts of a
whole. You cannot export the selfness into another form, because the
self has no form, it's only experiential content through the interior
of a living brain.

That's the 1-self, but it is just an interface between truth and relative bodies.




.
I think that you have a reductionist conception of machine, which was
perhaps defensible before Gödel 1931 and Turing discovery of the
universal machine, but is no more defensible after.

I know that you think that, but you don't take into account that I
started with that. I read Gödel, Escher, Bach around 1980 I
think. Even though I couldn't get too much into the math, I was quite
happy with the implications of it. For the next 25 years I believed
that the universe was made of 'patterns' - pretty close to what your
view is.

Not really. The physical universe is not made of any patterns. Nor is it made of anything. It is a highly complex structure which appears in first person plural shared dreams. You might, like many, confuse digital physics (which does not work) and comp.
"I am a machine" makes it impossible for both my consciousness and my material body to be Turing emulable. I agree that this is counter-intuitive, and that's why I propose a reasoning, and I prefer that people grasp the reasoning rather than pondering ad infinitum on the results without doing the needed (finite) work.



It's only been in the last 7 years that I have found a better
idea. My hypothesis is post-Gödelian symmetry.

You have to elaborate a lot. You should study first-order logical language to be sure no trace of implicit metaphysical baggage is put in your theory, in case you want scientists to try to understand what you say.

Bruno



meekerdb

unread,
Jan 28, 2012, 5:36:59 PM1/28/12
to everyth...@googlegroups.com
On 1/28/2012 2:48 AM, Bruno Marchal wrote:
>
> On 27 Jan 2012, at 21:02, meekerdb wrote:
>
>> On 1/27/2012 9:20 AM, Bruno Marchal wrote:
>>>
>>> Pierz, Craig, I disagree. Consciousness can be explained as a non 3p describable fixed
>>> point when machines observe themselves.
>>
>> Why is this not 3p describable? Your explanation of it seems to imply a description.
>
>
> Yes, but the explanation is not consciousness itself.
>
> In the UDA, you are supposed to know what consciousness is. You are asked to believe
> that your consciousness remains invariant for a functional digital substitution.
>
> In the AUDA, consciousness is not mentioned. It is handled indirectly via knowledge,
> which is defined via an appeal to truth, which (by Tarski's theorem) is not definable by
> the mechanical entity under consideration.
>
> In B"1+1=2" & 1+1=2, the "1+1=2" is a description, but 1+1=2 is not. It is a true fact,
> and as such cannot be described. We cannot translate True("1+1=2") in arithmetic. We can
> do it at some meta-level, when we study a simpler machine than us, that we believe to be
> correct, like PA. But then we can see that neither PA, nor any correct machine can do
> this for *itself*.
>
> Consciousness, knowledge, and truth are concepts which do not admit formal definition
> when they encompass ourselves.

I wasn't asking for a formal definition, just a 3p description. You are saying that
B"1+1=2" is a description of being conscious that 1+1=2? This confuses me though because
I read B as "provable"; yet many things are provable of which we are not conscious.

Brent

Pierz

unread,
Jan 28, 2012, 6:15:16 PM1/28/12
to Everything List


On Jan 28, 11:04 pm, Evgenii Rudnyi <use...@rudnyi.ru> wrote:
> On 26.01.2012 07:19 Pierz said the following:
>
>
>
> > As I continue to ponder the UDA, I keep coming back to a niggling
> > doubt that an arithmetical ontology can ever really give a
> > satisfactory explanation of qualia. It seems to me that imputing
> > qualia to calculations (indeed consciousness at all, thought that
> > may be the same thing) adds something that is not given by, or
> > derivable from, any mathematical axiom. Surely this is illegitimate
> > from a mathematical point of view. Every  mathematical statement can
> > only be made in terms of numbers and operators, so to talk about
> > *qualities* arising out of numbers is not mathematics so much as
> > numerology or qabbala.
>
> > Here of course is where people start to invoke the wonderfully
> > protean notion of ‘emergent properties’. Perhaps qualia emerge when
> > a calculation becomes deep enough. Perhaps consciousness emerges from
> > a complicated enough arrangement of neurons. But I’ll venture an
> > axiom of my own here: no properties can emerge from a complex system
> > that are not present in primitive form in the parts of that system.
> > There is nothing mystical about emergent properties. When the
> > emergent property of ‘pumping blood’ arises out of collections of
> > heart cells, that property is a logical extension of the properties
> > of the parts - physical properties such as elasticity, electrical
> > conductivity, volume and so on that belong to the individual cells.
> > But nobody invoking ‘emergent properties’ to explain consciousness in
> > the brain has yet explained how consciousness arises as a natural
> > extension of the known properties of brain cells - or indeed of
> > matter at all.
>
> Let me quote Jeffrey Gray (Consciousness: Creeping up on the Hard
> Problem, p. 33) on biology and physics.
>
> "In very general terms, biology makes use of two types of concept:
> physicochemical laws and feedback mechanisms. The latter include both
> the feedback operative in natural selection, in which the controlled
> variables that determine survival are nowhere explicitly represented
> within the system; and servomechanisms, in which there is a specific
> locus of representation capable of reporting the values of the
> controlled variables to other system components and to other systems.
> The relationship between physicochemical laws and cybernetic mechanisms
> in the biological perspective on biology poses no deep problems. It
> consists in a kind of contract: providing cybernetics respects the laws
> of physics and chemistry, its principles may be used to construct any
> kind of feedback system that serves a purpose. Behaviour as such does
> not appear to require for its explanation any principles additional to
> these."
>
> Roughly speaking Gray's statement is
>
> Biology = Physics + Feedback mechanisms
>
> Yet even at this stage (just at the level of bacteria, where I guess there
> are no qualia yet) it is unclear to me whether physics includes cybernetic
> laws or whether they emerge/supervene. What is your opinion on this?
>

I think it's clear that in approaches such as Gray's, which are based
on a conventional materialist ontology, any laws invoked must
ultimately rely on/emerge from physical laws. In fact, that's clear in
Gray's qualifier "providing cybernetics respect the laws of physics
and chemistry". "Respects" in this clause means that cybernetics must
be subservient to physics, therefore emergent from it. However the
laws of physics do not include cybernetic laws - the fundamental
equations of physics are actually reducible to a handful of equations
you can write down on a couple of sheets of paper. In terms of the
point I am making regarding qualia, Gray's argument is one variant on
the theme of the type of reasoning I object to. It's all there in the
statement:

"Behaviour as such does not appear to require for its explanation any
principles additional to these."

The issue isn't explaining behaviour, it's explaining consciousness/
qualia. These approaches always end up conflating the two, their
proponents getting annoyed with anyone who isn't prepared to wish away
the gap between them.

meekerdb

unread,
Jan 28, 2012, 6:57:56 PM1/28/12
to everyth...@googlegroups.com

What are "cybernetics laws"? Can they be written down like the Standard Model Lagrangian
or Einstein's equation?

>>
> I think it's clear that in approaches such as Gray's, which are based
> on a conventional materialist ontology, any laws invoked must
> ultimately rely on/emerge from physical laws. In fact, that's clear in
> Gray's qualifier "providing cybernetics respect the laws of physics
> and chemistry". "Respects" in this clause means that cybernetics must
> be subservient to physics, therefore emergent from it. However the
> laws of physics do not include cybernetic laws - the fundamental
> equations of physics are actually reducible to a handful of equations
> you can write down on a couple of sheets of paper. In terms of the
> point I am making regarding qualia, Gray's argument is one variant on
> the theme of the type of reasoning I object to. It's all there in the
> statement:
>
> "Behaviour as such does not appear to require for its explanation any
> principles additional to these."
>
> The issue isn't explaining behaviour, it's explaining consciousness/
> qualia. These approaches always end up conflating the two, their
> proponents getting annoyed with anyone who isn't prepared to wish away
> the gap between them.

But most people seem to think that the two are linked; that philosophical zombies are
impossible. Are you asserting that they are possible?

Brent

acw

unread,
Jan 28, 2012, 7:29:47 PM1/28/12
to everyth...@googlegroups.com
On 1/27/2012 15:36, Craig Weinberg wrote:
> On Jan 27, 12:49 am, acw<a...@lavabit.com> wrote:
>> On 1/27/2012 05:55, Craig Weinberg wrote:
>> > On Jan 26, 9:32 pm, acw <a...@lavabit.com> wrote:
>>
>>>> There is nothing on the display except transitions of pixels. There is
>>>> nothing in the universe, except transitions of states
>>
>>> Only if you assume that our experience of the universe is not part of
>>> the universe. If you understand that pixels are generated by equipment
>>> we have designed specifically to generate optical perceptions for
>>> ourselves, then it is no surprise that it exploits our visual
>>> perception. To say that there is nothing in the universe except the
>>> transitions of states is a generalization presumably based on quantum
>>> theory, but there is nothing in quantum theory which explains how
>>> states scale up qualitatively so it doesn't apply to anything except
>>> quantum. If you're talking about 'states' in some other sense, then
>>> it's not much more explanatory than saying there is nothing except for
>>> things doing things.
>>
>> I'm not entirely sure what your theory is,
>
> Please have a look if you like: http://multisenserealism.com
>
>
>
Seems quite complex, although it might be testable if your theory is
developed in more detail such that it can offer some testable predictions.

>> but if I had to make an
>> initial guess (maybe wrong), it seems similar to some form of
>> panpsychism directly over matter.
>
> Close, but not exactly. Panpsychism can imply that a rock has human-
> like experiences. My hypothesis can be categorized as
> panexperientialism because I do think that all forces and fields are
> figurative externalizations of processes which literally occur within
> and through 'matter'. Matter is in turn diffracted pieces of the
> primordial singularity.

Not entirely sure what you mean by the singularity, but okay.

> It's confusing for us because we assume that
> motion and time are exterior conditions, by if my view is accurate,
> then all time and energy is literally interior to the observer as an
> experience.

I think most people realize that the sense of time is subjective and
relative, as with qualia. I think some form of time is required for
self-consciousness. There can be different scales of time, for example,
the local universe may very well run at planck-time (guesstimation based
on popular physics theories, we cannot know, and with COMP, there's an
infinity of such frames of references), but our conscious experience is
much slower relative to that planck-time, usually assumed to run at a
variable rate, at about 1-200Hz (neuron-spiking freq), although maybe
observer moments could even be smaller in size.

> What I think is that matter and experience are two
> symmetrical but anomalous ontologies - two sides of the same coin, so
> that our qualia and content of experience is descended from
> accumulated sense experience of our constituent organism, not
manufactured by their bodies, cells, molecules, interactions. The two
are both opposite expressions (a what & how of matter and space and a who
& why of experience or energy and time) of the underlying sense that
binds them to the singularity (where & when).
>

Accumulated sense experience? Our neurons do record our memories
(lossily, as we also forget), and interacting "matter" does lead to
state changes. Although, this (your theory) feels much like a
reification of matter and qualia (and having them be nearly the same
thing), and I think it's possible to find some inconsistencies here,
more on this later in this post.

>> Such theories are testable and
>> falsifiable, although only in the 1p sense. A thing that should be worth
>> keeping in mind is that whatever our experience is, it has to be
>> consistent with our structure (or, if we admit, our computational
>> equivalent) - it might be more than it, but it cannot be less than it.
>> We wouldn't see in color if our eyes' photoreceptor cells didn't absorb
>> overlapping ranges of light wavelengths and then processed it throughout
>> the visual system (in some parts, in not-so-general ways, while in
>> others, in more general ways). The structures that we are greatly limit
>> the nature of our possible qualia.
>
> I understand what you are saying, and I agree the structures do limit
> our access to qualia, but not the form. Synesthesia, blindsight, and
> anosognosia show clearly that at the human level at least, sensory
> content is not tied to the nature of mechanism. We can taste color
> instead of see it, or know vision without seeing. This is not to say
> that we aren't limited by being a human being, of course we are, but
> our body is as much a vehicle for our experience as much as our
> experience is a filtered through our body. Indeed the brain makes no
> sense as anything other than a sensorimotive amplifier/condenser.
>

Synesthesia can happen for multiple reasons, although one possible cause
is that some parts of the neocortical hierarchy are more tightly
inter-connected, which leads to sense-data from one region to directly
affect processing of sense-data from an adjacent region, thus having
experience of both qualia simultaneously. I don't see how synesthesia
contradicts mechanism, on the contrary, mechanism explains it quite
well. Blindsight seems to me to be due to the neocortex being very good
at prediction and integrating data from other senses, more on this idea
can be seen in Jeff Hawkins' "On Intelligence". I can't venture a guess
about anosognosia, it seems like a complicated-enough neurophysiology
problem.

Do you think brains-in-a-vat or those with auditory implants have no
qualia for those areas despite behaving like they do? Do you think they
are partial zombies?
To elaborate, consider that someone gets a digital eye, this eye can
capture sense data from the environment, process it, then route it to an
interface which generates electrical impulses exactly like how the eye
did before and stimulates the right neurons. Consider the same for the
other senses, such as hearing, touch, smell, taste and so on. Now
consider a powerful-enough computer capable of simulating an
environment; first you can think of something unrealistic like our video
games, but then you can think of something better like ray-tracing and
eventually full-on physical simulation to any granularity that you'd
like (this may not yet be feasible in our physical world without slowing
the brain down, but consider it as a thought experiment for now). Do you
think these brains are p. zombies because they are not interacting with
the "real" world? The reason I'm asking this question is that it seems
to me like in your theory, only particular things can cause particular
sense data, and here I'm trying to completely abstract away from sense
data and make it accessible by proxy and allow piping any type of data
into it (although obviously the brain will only accept data that fits
the expected patterns, and I do expect that only correct data will be sent).
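To make the interface idea concrete, here is a deliberately toy sketch (all class and function names are hypothetical illustrations, not any real neural API): downstream processing only ever sees the spike train arriving at the interface, so it cannot tell whether the spikes came from a biological retina, a digital eye, or a simulated environment.

```python
def transduce(scene):
    # Toy transduction: light intensity thresholded to a binary spike.
    return [1 if intensity > 0.5 else 0 for intensity in scene]

class BiologicalEye:
    def capture(self, scene):
        return transduce(scene)

class DigitalEye:
    # Mimics the retina's output exactly at the chosen interface.
    def capture(self, scene):
        return transduce(scene)

class VisualSystem:
    def perceive(self, spikes):
        # Stand-in for V1 -> ... -> V4 -> IT processing: only the
        # spike pattern matters, not where it came from.
        return sum(spikes)

scene = [0.1, 0.9, 0.7, 0.2, 0.8]      # "real" light intensities
simulated = [0.1, 0.9, 0.7, 0.2, 0.8]  # simulation tuned to match
brain = VisualSystem()
assert brain.perceive(BiologicalEye().capture(scene)) == \
       brain.perceive(DigitalEye().capture(simulated))
```

The point of the sketch is only that whatever sits behind the interface is invisible to the rest of the system, provided the data fits the expected patterns.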

I wouldn't be so sure. I think if we can privilege the brains of others
with consciousness, then we should privilege any systems which perform
the same functions as well. Of course we cannot know if anything besides
us is conscious, but I tend to favor non-solipsistic theories myself.
The brain physically stores beliefs in synapses and its neuron bodies
and I see no reason why some artificial general intelligence couldn't
store its beliefs in its own data-structures such as hypergraphs and
whatnot, and the actual physical storage/encoding shouldn't be too
relevant as long as the interpreter (program) exists. I wouldn't have
much of a problem ascribing consciousness to anything that is obviously
behaving intelligently and self-aware. We may not have such AGI yet, but
research in those areas is progressing rather nicely.

Yet they will behave as if they have those emotions, qualia, ...
Punishing will result in some (types of) actions being avoided and
rewards will result in some (types of) actions being more frequent.
A computationalist may claim they are conscious because of the
computational structure underlying their cognitive architecture.
You might claim they are not because they don't have access to "real"
qualia or that their implementation substrate isn't magical enough?
Eventually such a machine may plead to you that they are conscious and
that they have qualia (as they do have sense data), but you won't
believe them because of being implemented in a different substrate than
you? Same situation goes for substrate independent minds/mind uploads.

> It's not necessary since they
> have no autonomy (avoiding 'Free Will' for John Clark's sake) to begin
> with.

I don't see why not. If I had to guess, is it because you don't grant
autonomy to anything whose behavior is fully determined? Within COMP,
you both have deterministic behavior, but indeterminism is also
completely unavoidable from the 1p. I don't think 'free' will has
anything to do with 1p indeterminism, I think it's merely the feeling
you get when you have multiple choices and you use your active conscious
processes to select one choice, however whatever you select, it's always
due to other inner processes, which are not always directly accessible to
the conscious mind - you do what you want/will, but you don't always
control what you want/will, that depends on your cognitive architecture,
your memories and the environment (although since you're also part of
the environment, the choice will always be quasideterministic, but not
fully deterministic).

> All we have to do is script rules into their mechanism.

It's not that simple; you can have systems find out their own rules/goals.
Try looking at modern AGI research.

> Some
> parents would like to be able to do that I'm sure, but of course it
> doesn't work that way for people. No matter how compelling and
> coercive the brainwashing, some humans are always going to try to hack
> it and escape. When a computer hacks its programming and escapes, we
> will know about it, but I'm not worried about that.

Sure, we're as 'free' as computations are, although most computations
we're looking into are those we can control because that's what's
locally useful for humans.

> What is far more
> worrisome and real is that the externalization of our sense of
> computation (the glass exoskeleton) will be taken for literal truth,
> and our culture will be evacuated of all qualities except for
> enumeration. This is already happening. This is the crisis of the
> 19-21st centuries. Money is computation. WalMart parking lot is the
> cathedral of the god of empty progress.

There are some worries. I wouldn't blame computation for it, but our
current limited physical resources and some emergent social machines
which might not have beneficial outcomes, sort of like a tragedy of the
commons, however that's just a local problem. On the contrary, I think
the answer to a lot of our problems has computational solutions,
unfortunately we're still some 20-50+ years away from finding them, and I
hope we won't be too late there.

>
>>
>>>> regardless of how sensing (indirectly accessing data) is done, emergent
>>>> digital movement patterns would look like (continuous) movement to the
>>>> observer.
>>
>>> I don't think that sensing is indirectly accessed data; data is
>>> indirectly experienced sense. Data supervenes on sense, but not all
>>> sense is data (you can have feelings that you don't understand or even
>>> be sure that you have them).
>>
>> It is indirect in the example that I gave because there is an objective
>> state that we can compute, but none of the agents have any direct access
>> to it - only to approximations of it - if the agent is external, he is
>> limited to how he can access by the interface, if the agent is itself
>> part of the structure, then the limitation lies within itself - sort of
>> like how we are part of the environment and thus we cannot know exactly
>> what the environment's granularity is (if one exists, and it's not a
>> continuum or merely some sort of rational geometry or many other
>> possibilities).
>
> Not sure what you're saying here. I get that we cannot see our own
> fine granularity, but that doesn't mean that the sense of that
> granularity isn't entangled in our experience in an iconic way.
>

The idea was that indeed one cannot see their own granularity. I also
gave an example of an interface to a system which has a granularity, but
that wouldn't be externally accessible.
I don't see what you mean by 'entangled in our experience in an iconic
way'. You can't *directly* sense more information than what is
directly available to your senses; as in, if your eye only captures
about 1000*1000 pixels worth of data, you can't see beyond that without
a new eye and a new visual pathway (and some extension to the PFC and so
on). We're able to differentiate colors because of how the data is
processed in the visual system. We're not able to sense strings or
quarks or even atoms directly, we can only infer their existence as a
pattern indirectly.

>>
>> > I'm not sure why you say that continuous
>> > movement patterns emerge to the observer, that is factually incorrect.
>> >http://en.wikipedia.org/wiki/Akinetopsia
>> Most people tend to feel their conscious experience as being continuous,
>> regardless of whether it really is so; we do however notice large
>> discontinuities, like if we slept or got knocked out. Of course most
>> bets are off if neuropsychological disorders are involved.
>
> Any theory of consciousness should rely heavily on all known varieties
> of consciousness, especially neuropsychological disorders. What good
> is a theory of 21st century adult males of European descent with a
> predilection for intellectual debate? The extremes are what inform us
> the most. I don't think there is a such thing as 'regardless of it
> really is so' when it comes to consciousness. What we feel our
> conscious experience to be is actually what it feels like. No external
> measurement can change that. We notice discontinuities because our
> sense extends much deeper than conscious experience. We can tell if
> we've been sleeping even without any external cues.
>

Sure, I agree that some disorders will give important hints as to the
range of conscious experience, although I think some disorders may be so
unusual that we lose any idea about what the conscious experience is.
Our best source of information is our own 1p and 3p reports.

You have to show that mechanism makes no sense. Given the data that I
observe, mechanism is both what my inner inductive senses tell me
and what formal induction tells me is the case. We cannot know,
but evidence is very strong towards mechanism. I ask you again to
consider the brain-in-a-vat example I said before. Do you think someone
with an auditory implant (example:
http://en.wikipedia.org/wiki/Auditory_brainstem_implant
http://en.wikipedia.org/wiki/Cochlear_implant) hears nothing? Are they
partial zombies to you?
They behave in all ways like they sense the sound, yet you might claim
that they don't because the substrate is different?

>> COMP on the other hand, offers very solid
>> testable predictions and doesn't fail most thought experiments or
>> observational data that you can put it through (at least so far). I wish
>> other consciousness theories were as solid, understandable and testable
>> as COMP.
>
> My hypothesis explains why that is the case. Comp is too stupid not to
> prove itself. The joke is on us if we believe that our lives are not
> real but numbers are. This is survival 101. It's an IQ test. If we
> privilege our mechanistic, testable, solid, logical sense over our
> natural, solipsistic, anthropic sense, then we will become more and
> more insignificant, and Dennet's denial of subjectivity will draw
> closer and closer to self-fulfilling prophecy. The thing about
> authentic subjectivity is that it has a choice. We don't have to believe
> in indirect proof about ourselves because our direct experience is all
> the proof anyone could ever have or need. We are already real, we
> don't need some electronic caliper to tell us how real.
>

COMP doesn't prove itself, it requires the user to make some sane
assumptions (either impossibility of zombies or functionalism or the
existence of the substitution level and mechanism; most of these
assumptions make logical, scientific and philosophic sense given the
data). It just places itself as the best candidate to bet on, but it can
never "prove" itself. COMP doesn't deny subjectivity, it's a very
important part of the theory. The assumptions are just: (1p) mind,
(some) mechanism (observable in the environment, by induction),
arithmetical realism (truth value of arithmetical sentences exists), a
person's brain admits a digital substitution and 1p is preserved (which
makes sense given current evidence and given the thought experiment I
mentioned before).

>>
>>>> when
>>>> a photon hits a photoreceptor cell, that *binary* piece of information
>>>> is transmitted through neurons connected to that cell and so on
>>>> throughout the visual system(...->V1->...->V4->IT->...) and eventually
>>>> up to the prefrontal cortex.
>>
>>> That's a 3p view. It doesn't explain the only important part -
>>> perception itself. The prefrontal cortex is no more or less likely to
>>> generate visual awareness than the retina cells or neurons or
>>> molecules themselves.
>>
>> In COMP, you can blame the whole system for the awareness, however you
>> can blame the structure of the visual system for the way colors are
>> differentiated - it places great constraints on what the color qualia
>> can be - certainly not only black and white (given proper
>> functioning/structure).
>
> Nah. Color could be sour and donkey, or grease, ring, and powder. The
> number of possible distinctions is, and even their relationships to
> each other as you say, part of the visual system's structure, but it
> has nothing to do with the content of what actually is distinguished.
>

It seems to me like your theory is that objects (what is an object here?
do you actually assume a donkey to be ontologically primitive?!) emit
magical qualia-beams that somehow directly interact with your brain
which itself is made of qualia-like things. Most current science
suggests that that isn't the case, but surely you can test it, so you
should. Maybe I completely misunderstood your idea.

>>
>>> The 1p experience of vision is not dependent upon external photons (we
>>> can dream and visualize) and it is not solipsistic either (our
>>> perceptions of the world are generally reliable). If I had to make a
>>> copy of the universe from scratch, I would need to know that what
>>> vision is all about is feeling that you are looking out through your
>>> eyes at a world of illuminated and illuminating objects. Vision is a
>>> channel of sensitivity for the human being as a whole, and it has
>>> more to do with our psychological immersion in the narrative of our
>>> biography than it does photons and microbiology. That biology,
>>> chemistry, or physics does not explain this at all is not a small
>>> problem, it is an enormous deal breaker.
>>
>> You're right that our internal beliefs do affect how we perceive things.
>> It's not biology's or chemistry's job to explain that to you. Emergent
>> properties from the brain's structure should explain those parts to you.
>> Cognitive sciences as well as some related fields do aim to solve such
>> problems. It's like asking why an atom doesn't explain the computations
>> involved in processing this email. Different emergent structures at
>> different levels, sure one arises from the other, but in many cases, one
>> level can be fully abstracted from the other level.
>
> Emergent properties are just the failure of our worldview to find
> coherence. I will quote what Pierz wrote again here because it says it
> all:
>

> "But I’ll venture an axiom
> of my own here: no properties can emerge from a complex system that
> are not present in primitive form in the parts of that system. There
> is nothing mystical about emergent properties. When the emergent
> property of ‘pumping blood’ arises out of collections of heart cells,
> that property is a logical extension of the properties of the parts -
> physical properties such as elasticity, electrical conductivity,
> volume and so on that belong to the individual cells. But nobody
> invoking ‘emergent properties’ to explain consciousness in the brain
> has yet explained how consciousness arises as a natural extension of
> the known properties of brain cells - or indeed of matter at all."
>

If you don't like emergence, think of it in the form of "abstraction".
When you write a program in C or Lisp or Java or whatever, you don't
care what it gets compiled to: it will work the same on any machine if a
compiler or interpreter exists for it and if your program was written in
a portable manner. Emergence is similar, but a lot more muddy as the
levels can still interact with each other and the fully "perfect"
abstracted system may not always exist, even if most high-level behavior
is not obvious from the low-level behavior. Emergence is indeed in the
eye of the beholder. Consciousness in COMP is like some abstract
arithmetical structure, locally implemented in your brain, that has
a 1p view. The existence of the 1p view is not something reductionist;
it is grounded in the ontologically primitive (arithmetical
truth/relations) and is a consequence of some particular abstract
machine being contained (or emerging) at some substitution level in
the brain. COMP basically
says that rich enough machines will have qualia and consciousness if
they satisfy some properties and they cannot avoid that.
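The compilation analogy can be put in miniature form; this sketch is purely illustrative (the instruction set and both "substrates" are invented): one abstract program, two unrelated implementations, identical high-level behavior.

```python
# One abstract program, two unrelated "substrates". The high-level
# behavior is identical even though the low-level realizations share
# nothing; this is the sense in which the abstract level can be
# (mostly) decoupled from the implementation level.

PROGRAM = ["inc", "inc", "dec", "inc"]  # abstract instruction stream

def run_on_integers(program):
    # Substrate 1: the machine's state is a Python integer.
    state = 0
    for op in program:
        state += 1 if op == "inc" else -1
    return state

def run_on_tokens(program):
    # Substrate 2: the machine's state is a pile of tokens.
    # (Assumes the count never goes negative, as in this program.)
    tokens = []
    for op in program:
        if op == "inc":
            tokens.append("*")
        else:
            tokens.pop()
    return len(tokens)

assert run_on_integers(PROGRAM) == run_on_tokens(PROGRAM)  # both yield 2
```

As the text says, real emergence is muddier than this, since the levels can leak into each other, but the abstraction direction is the same.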


>>
>>> My solution is that both views are correct on their own terms in their
>>> own sense and that we should not arbitrarily privilege one view over
>>> the other. Our vision is human vision. It is based on retina vision,
>>> which is based on cellular and molecular visual sense. It is not just
>>> a mechanism which pushes information around from one place to another,
>>> each place is a living organism which actively contributes to the top
>>> level experience - it isn't a passive system.
>>
>> Living organisms - replicators,
>
> Life replicates, but replication does not define life. Living
> organisms feel alive and avoid death. Replication does not necessitate
> feeling alive.
>

You'll have to define what feeling alive is. This shouldn't be confused
with being biological. I feel like I have coherent senses, that's what
it means to me to be alive. My cells on their own (without any input
from me) replicate and keep my body functioning properly. I will
try to avoid situations that can kill me because I prefer being alive
because of my motivational/emotional/reward system. I don't think
someone will move or do anything without such a biasing
motivational/emotional/reward system. There are some interesting studies
on people who had damage to such systems and how it affects their
decision making process.

>> are fine things, but I don't see why
>> must one confuse replicators with perception. Perception can exist by
>> itself merely on the virtue of passing information around and processing
>> it. Replicators can also exist due to similar reasons, but on a different
>> level.
>
> Perception has never existed 'by itself'. Perception only occurs in
> living organisms who are informed by their experience. There is no
> independent disembodied 'information' out there. There is detection and
> response, sense and motive of physical wholes.
>

I see no reason why that has to be true, feel free to give some evidence
supporting that view. Merely claiming that those people with auditory
implants hear nothing is not sufficient. My prediction is that if one
were to have such an implant, get some memories with it, then somehow
switched back to using a regular ear, their auditory memories from those
times would still remain.

>>
>>>> Neurons are also rather slow, they can only
>>>> spike about once per 5ms (~200Hz), although they rarely do so often.
>>>> (Note that I'm not saying that conscious experience is only the current
>>>> brain state in a single universe with only one timeline and nothing
>>>> more, in COMP, the (infinite amount of) counterfactuals are also
>>>> important, for example for selecting the next state, or for "splits" and
>>>> "mergers").
>>
>>> Yes, organisms are slower than electronic measuring instruments, but
>>> it doesn't matter because our universe is not an electronic measuring
>> instrument. It makes sense to us just fine at its native anthropic
>>> rate of change (except for the technologies we have designed to defeat
>>> that sense).
>>
>> Sure, the speed is not the most important thing, except when it leads to
>> us wanting some things to be faster and with our current biological
>> bodies, we cannot make them go faster or slower, we can only build
>> faster and faster devices, but we'll eventually hit the limit (we're
>> nearly there already). With COMP, this is even a greater problem
>> locally: if you get a digital brain (sometime in the not too near
>> future)
>
> Sorry, but I think it's never going to happen. Consciousness is not
> digital.
>

It's not digital in COMP either: arithmetical truth is undefinable in
arithmetic itself. However, the brain might admit a digital
substitution. Try not to confuse the brain and the mind. Some assume
they are the same, in which case they are forced to eliminativism (if
they assume mechanism), others are forced to less understandable
theories (from my perspective, but you probably understand it better
than me) like yours (if they assume mechanism is false), while others
are forced to COMP (arithmetical ontology) if they don't give up their
1p and assume mechanism (+digital subst. level).

>> , some neuromorphic hardware is predicted to be a few orders of
>> magnitude faster (such as some 1000-4000 times our current rate), which
>> would mean that if someone wanted to function at realtime speed, they
>> might experience some insanely slow Internet speeds, for anything that
>> isn't locally accessible (for example, between US and Europe or Asia),
>> which might lead to certain negative social effects (such as groups of
>> SIMs (Substrate Independent Minds) that prefer running at realtime speed
>> congregating at locally accessible hubs as opposed to the much slower
>> Internet). However, such a problem is only locally relevant (here in
>> this Universe, on this Earth), and is solvable if one is fine with
>> slowing themselves down relatively to some other program, and a system
>> can be designed which allows unbounded speedup (I did write more on this
>> in my other thread).
>
> We are able to extend and augment our neurological capacities (we
> already are) with neuromorphic devices, but ultimately we need our own
> brain tissue to live in. We, unfortunately cannot be digitized, we can
> only be analogized through impersonation.
>

You'd have to show this to be the case then. Most evidence suggests that
we might admit a digital substitution level. We cannot know if we'd
survive such a substitution from the 1p, and that is a bet in COMP.

> Craig
>


Pierz

Jan 28, 2012, 8:52:11 PM
to Everything List
I said "anyone who seeks to explain away qualia", but I thought you
weren't trying to do that. Not explain away but rather explain. I'm
not sure you have, but I give your attempt more credence than the
typical materialist argument.

> > The same
> > sleight of hand tricked up in a variety of guises, but amounting
> > always to the same manoeuvre.
>
> You might have to look closer.

I admit not to have tried very hard to struggle through AUDA, but I
plan to try again. When I argue against the UDA, it is not always out
of any unshakable certainty that my objection is correct, but rather
it is my attempt to put into logical form my intuitive reservations.
The process of reading your and others' replies helps me understand
better and decide whether to keep or throw out my internal resistance
on that point. I have come to accept the MGA, for instance, as a
result of that process. I remain unconvinced (let us say agnostic) on
the measure question. (I might add that when I raised this problem,
you dismissed it while vaguely gesturing towards measure theory, but
without giving a really satisfactory response as to how such a measure
might work - the fact that any calculation can have an infinite number
of redundant steps added into it seems to refute the possibility of
any simple additive measure. But in later posts you've conceded it's
an unresolved problem, which you might have admitted in the first
place!) And I am skeptical on the approach to qualia, as noted, but
the above statement is actually not meant to relate to the UDA, which
at least has a theory which includes a place for the 1p.

> Bruno
>
> http://iridia.ulb.ac.be/~marchal/

Craig Weinberg

Jan 28, 2012, 9:20:34 PM
to Everything List
On Jan 28, 8:03 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 28 Jan 2012, at 02:33, Craig Weinberg wrote:
>
> > On Jan 27, 12:20 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
>
> >> But many things about numbers are not arithmetical. Arithmetical
> >> truth
> >> is not arithmetical. Machine's knowledge can be proved to be non
> >> arithmetical.
> >> If you want, arithmetic is enough rich for having a bigger reality
> >> than anything we can describe in 3p terms.
>
> > But all arithmetic truths, knowledge, beliefs, etc are all still
> > sensemaking experiences. It doesn't matter whether they are arithmetic
> > or not, as long as they can possibly be detected or made sense of in
> > any way, even by inference, deduction, emergence, etc, they are still
> > sense. Not all sense is arithmetic or related to arithmetic in some
> > way though. Sense can be gestural or intuitive.
>
> That might be possible. But gesture and intuition can occur in
> relative computations.

How do you know that they 'occur' in the computations rather than in
the eye of the beholder of the computations?

>
>
>
> >>> There is nothing in the universe
>
> >> The term universe is ambiguous.
>
> > Only in theory. I use it in a literal, absolutist way.
>
> This does not help to understand what you mean by "universe".

Universe means 'all that is' in every context.

>
>
>
> >> You confuse proving p, which can be explained in arithmetic, and
> >> "proving p & p is true", which can happen to be true for a machine,
> >> but escapes necessarily its language.
> >> The same for consciousness. It cannot be explained in *any* third
> >> person terms. But it can be proved that self-observing machine cannot
> >> avoid the discovery of many things concerning them which are beyond
> >> language.
>
> > I think that are confusing p with a reality rather than a logical idea
> > about reality.
>
> p refers to reality by definition. "p" alone is for "it is the case
> that p".

But it isn't the case, it's the idea of it being the case. You're just
saying 'Let p ='. It doesn't mean the proposition has any causal
efficacy.

>
> > I have no reason to believe that a machine can observe
> > itself in anything more than a trivial sense.
>
> It needs a diagonalization. It can't be completely trivial.

Something is aware of something, but it's just electronic components
or bricks on springs or whatever being aware of the low level physical
interactions.

>
> > It is not a conscious
> > experience, I would guess that it is something like an accounting of
> > unaccounted-for function terminations. Proximal boundaries. A
> > silhouette of the self offering no interiority but an extrapolation of
> > incomplete 3p data. That isn't consciousness.
>
> Consciousness is not just self-reference. It is true self-reference.
> It belongs to the intersection of truth and self-reference.

It's more than that too though. Many senses can be derived from
consciousness, true self-reference is neither necessary nor
sufficient. I think that the big deal about consciousness is not that
it has true self-reference but that it is able to care about itself and
its world in a non-trivial, open-ended, and creative way. We can
watch a movie or have a dream and lose self-awareness without being
unconscious. Deep consciousness is often characterized by
unselfconscious awareness.

>
>
>
> >>> "But I’ll venture an axiom
> >>> of my own here: no properties can emerge from a complex system that
> >>> are not present in primitive form in the parts of that system. There
> >>> is nothing mystical about emergent properties. When the emergent
> >>> property of ‘pumping blood’ arises out of collections of heart
> >>> cells,
> >>> that property is a logical extension of the properties of the
> >>> parts -
> >>> physical properties such as elasticity, electrical conductivity,
> >>> volume and so on that belong to the individual cells. But nobody
> >>> invoking ‘emergent properties’ to explain consciousness in the brain
> >>> has yet explained how consciousness arises as a natural extension of
> >>> the known properties of brain cells - or indeed of matter at all. "
>
> >> Pierz, Craig, I disagree. Consciousness can be explained as a non 3p
> >> describable fixed point when machine's observe themselves. This
> >> provides a key role to consciousness, including the ability to
> >> develop
> >> meanings, to speed decisions, to make decision in absence of
> >> information, etc.
>
> > I disagree. It provides a key role to the function of agency but it
> > has nothing to do with consciousness and qualia per se. A sleep walker
> > can navigate to the kitchen for a snack without being conscious.
>
> Yes. But everyday life is more complex than looking for a snack.

Not as complex as doing what the immune system does.

>
> > Consciousness does nothing to speed decisions, it would only cost
> > processing overhead
>
> That's why higher animals have a larger cortex.

Their decisions are no faster than simpler animals.

>
> > and add nothing to the efficiency of unconscious
> > adaptation.
>
> So, why do you think we are conscious?

I think that humans have developed a greater sensorimotive capacity as
a virtuous cycle of evolutionary circumstance and subjective
investment. Just as hardware development drives software development
and vice versa. It's not that we are conscious as opposed to
unconscious, it's that our awareness is hypertrophied from particular
animal motives being supported by the environment and we have
transformed our environment to enable our motives. Our seemingly
unique category of consciousness can either be anthropic prejudice or
objective fact, but either way it exists in a context of many other
kinds of awareness. The question is not why we are conscious, it is
why is consciousness possible and/or why are we human. To the former,
the possibility is primordial, and the latter is a matter of
probability and intentional efforts.

>
>
>
> >> Consciousness is not explainable in term of any parts of something,
> >> but as an invariant in universal self-transformation.
> >> If you accept the classical theory of knowledge, then Peano
> >> Arithmetic
> >> is already conscious.
>
> > Why and how does universal self-transformation equate to
> > consciousness?
>
> I did not say that. I said that consciousness is a fixed point for a
> very peculiar form of self-transformation.

what makes it peculiar?

>
> > Anything that is conscious can also be unconscious. Can
> > Peano Arithmetic be unconscious too?
>
> Yes. That's possible if you accept that consciousness is a logical
> descendent of consistency.

Aren't the moons of Saturn consistent? Will consciousness logically
descend from their consistency?

> It follows then from the fact that
> consistency entails the consistency of inconsistency (Gödel II). Of
> course, the reality is more complex, for consciousness is only
> approximated by the instinctive (unconscious) inductive inference of
> self-consistency.

You need some kind of awareness to begin with to tell the difference
between consistency and inconsistency.

>
>
>
> >>>>> My solution is that both views are correct on their own terms in
> >>>>> their
> >>>>> own sense and that we should not arbitrarily privilege one view
> >>>>> over
> >>>>> the other. Our vision is human vision. It is based on retina
> >>>>> vision,
> >>>>> which is based on cellular and molecular visual sense. It is not
> >>>>> just
> >>>>> a mechanism which pushes information around from one place to
> >>>>> another,
> >>>>> each place is a living organism which actively contributes to the
> >>>>> top
> >>>>> level experience - it isn't a passive system.
>
> >>>> Living organisms - replicators,
>
> >>> Life replicates, but replication does not define life. Living
> >>> organisms feel alive and avoid death. Replication does not
> >>> necessitate
> >>> feeling alive.
>
> >> I am OK with this. Yet, replication + while-loop might be enough.
>
> > Should we mourn the untying of our shoelaces each time?
>
> ?

If we tie and untie our shoes many times, we replicate the knot
pattern, and while the knot is tied there is a loop within which
subroutines of changes to the laces occur as we walk.

>
>
>
> >>>> are fine things, but I don't see why
> >>>> must one confuse replicators with perception. Perception can
> >>>> exist by
> >>>> itself merely on the virtue of passing information around and
> >>>> processing
> >>>> it. Replicators can also exist due similar reasons, but on a
> >>>> different
> >>>> level.
>
> >>> Perception has never existed 'by itself'. Perception only occurs in
> >>> living organisms who are informed by their experience.
>
> >> The whole point is to explain terms like "living", "conscious", etc.
> >> You take them as primitive, so are escaping the issue.
>
> > They aren't primitive, the symmetry is primitive.
>
> ?

Conscious and unconscious are aspects of the inherent subject-object
symmetry of the universe.

>
>
>
> >>> There is no
> >>> independent disembodied 'information' out there. There is detection and
> >>> response, sense and motive of physical wholes.
>
> >> Same for "physical" (and that's not obvious!).
>
> > Do you doubt that if all life were exterminated that planets would
> > still exist? Where would information be though?
>
> In the arithmetical relations, whose truths are independent of me.
> (I indulge in answering by staying in the frame of my working
> hypothesis without repeating this).

Why isn't arithmetic truth physical?

>
>
>
> >>> Sorry, but I think it's never going to happen. Consciousness is not
> >>> digital.
>
> >> If you survive with a digital brain, then consciousness is
> >> necessarily
> >> not digital.
> >> A brain is not a maker of consciousness. It is only a stable pattern
> >> making it possible (or more probable) that a person can manifest
> >> itself relatively to some universal number(s).
>
> > Why not just use adipose tissue instead? That's a more stable pattern.
> > Why have a vulnerable concentration of this pattern in the head? Our
> > skeleton would make a much safer place for a person to manifest
> > itself relatively to some universal number.
>
> Write a letter to nature for geographical reclamation.

Funny, but it avoids a serious problem for comp. Why not have some
creatures with smart skulls or shells and stupid soft parts inside? It
seems a strong indicator that material properties consistently
determine mechanism, and not the other way around.

>
>
>
> >> Keep in mind that comp makes materialism wrong.
>
> > That's not why it's wrong. I have no problem with materialism being
> > wrong, I have a problem with experience being reduced to non
> > experience or non sense.
>
> This does not happen in comp. On the contrary machines can already
> explain why that does not happen. Of course you need to believe that
> arithmetical truth makes sense. But your posts illustrate that you do.

Arithmetical truth definitely makes sense, but other kinds of
experience make sense too, and they are not arithmetical truths.

>
>
>
> >> The big picture is
> >> completely different. I think that you confuse comp, with its
> >> Aristotelian version where computations seems to be incarnated by
> >> physical primitive materials. Comp + materialism leads to person-
> >> nihilism, so it is important to understand that comp should not be
> >> assumed together with materialism (even weak).
>
> > I don't think that I am confusing it. Comp is perfectly illustrated as
> > modern investment banking. There is no material, in fact it strangles
> > the life out of all materials, eviscerating culture and architecture,
> > all in the name of consolidating digitally abstracted control of
> > control. This is machine intelligence. The idea of unexperienced
> > ownership as an end unto itself, forever concentrating data and
> > exporting debt.
>
> Only in your reductionist appraisal of comp. That is widespread and
> dangerous indeed, but you add to the grains of it, imo.
>

Investment banking is just an example, I'm not trying to reduce comp
to that, but the example is defensible. Investment banking is almost
pure comp, is it not? All of those Wall Street quants... where is the
theology and creativity?

>
>
> >>> We are able to extend and augment our neurological capacities (we
> >>> already are) with neuromorphic devices, but ultimately we need our
> >>> own
> >>> brain tissue to live in.
>
> >> Why? What does that mean?
>
> > It means that without our brain, there is no we.
>
> That's not correct.

What makes you think that?

>
> > We cannot be
> > simulated anymore than water or fire can be simulated.
>
> Why? That's a strong affirmation. We have not yet found a phenomenon in
> nature that cannot be simulated (except the collapse of the wave,
> which can still be Turing 1-person recoverable).

You can't water a real plant with simulated water or survive the
arctic burning virtual coal for heat. If you look at substitution
level in reverse, you will see that it's not a matter of making a
plastic plant that acts so real we can't tell the difference, it's a
description level which digitizes a description of a plant rather than
an actual plant. Nothing has been simulated, only imitated. The
difference is that an imitation only reminds us of what is being
imitated but a simulation carries the presumption of replacement.

>
> > Human
> > consciousness exists nowhere but through a human brain.
>
> Not at all. Brain is a construct of human consciousness, which has
> some local role.
> You are so much Aristotelian.
>

If you say that human consciousness exists independently of a human
brain, you have to give me an example of such a case.

>
>
> >>> We, unfortunately cannot be digitized,
>
> >> You don't know that. But you don't derive it either from what you
> >> assume (which to be franc remains unclear)
>
> > I do derive it, because the brain and the self are two parts of a
> > whole. You cannot export the selfness into another form, because the
> > self has no form, it's only experiential content through the interior
> > of a living brain.
>
> That's the 1-self, but it is just an interface between truth and
> relative bodies.

Truth is just an interface between all 1-self and all relative bodies.

>
>
>
> >> .
> >> I think that you have a reductionist conception of machine, which was
> >> perhaps defensible before Gödel 1931 and Turing discovery of the
> >> universal machine, but is no more defensible after.
>
> > I know that you think that, but you don't take into account that I
> > started with that. I read Gödel, Escher, Bach around 1980 I
> > think. Even though I couldn't get too much into the math, I was quite
> > happy with the implications of it. For the next 25 years I believed
> > that the universe was made of 'patterns' - pretty close to what your
> > view is.
>
> Not really. The physical universe is not made of any patterns. Nor is
> it made of anything. It is a highly complex structure which appears in
> first person plural shared dreams.

That's what I'm saying. 'Structure' = pattern.

> You might, like many, confuse
> digital physics (which does not work) and comp.
> "I am a machine" makes it impossible for both my consciousness, and my
> material body to be Turing emulable.

But your material body is Turing emulable (or rather, Turing
imitatable).

> I agree that this is counter-
> intuitive, and that's why I propose a reasoning, and I prefer that
> people grasp the reasoning rather than ponder ad infinitum on the results
> without doing the needed (finite) work.
>
> > It's only been in the last 7 years that I have found a better
> > idea. My hypothesis is post-Gödelian symmetry.
>
> You have to elaborate a lot. You should study first order logical
> language to be sure no trace of metaphysical implicit baggage is put
> in your theory; in case you want scientists trying to understand what
> you say.

My whole point is revealing a universe description in which logic and
direct experience coexist in many ways. Limiting it to logical
language defeats the purpose, although I would love to collaborate
with someone who was interested in formalizing the ideas. Logic is a
3p language - a mechanistic, involuntary form of reasoning which
denies the 1p subject any option but to accept it. The 1p experience
is exactly the opposite of that. It is a 'seems like' affair which
invites or discourages voluntary participation of the subject. Half of
the universe is made of this.

Craig

Pierz

unread,
Jan 28, 2012, 10:05:25 PM1/28/12
to Everything List
Well of course they are linked. As for the problem of zombies, I of
course have to agree that they seem absurd. But to me the zombie
argument elides the real question, which is the explanation for why
there is anyone home to find the zombies absurd. Why aren't zombies
having this discussion? In the traditional materialist worldview,
there is nothing to explain that. We observe that we aren't, in fact,
zombies, and then the materialist observes that his/her predictions
would be the same if there were no consciousness, and so s/he loses
interest in the issue and effectively shrugs and says "oh well". But
there are some problems, though I expect you'll have little truck with
them. I could, for instance, refer you to a study of near death
experiences in the Lancet in which a person in cardiac arrest and
flatlining on the EEG was able to report the presence of a pair of
sneakers on a high window ledge of the hospital during an OBE which he
would have no way of knowing were there. There is a huge amount of
evidence along these lines that consciousness does not in fact
supervene on the physical brain. Other evidence, for instance, comes
from LSD research conducted in the fifties (see Stanislav Grof's
work). Of course there's also vast and incontrovertible evidence that
consciousness, under normal conditions, does supervene on brain state
and structure, so we are left with an anomaly that in most cases is
resolved by denying the evidence of the exceptions. This is not all
that hard to do when the evidence is to be found in consciousnesses
of subjects rather than 'instruments' and cannot easily be subjected
to controlled experimental trials. But even a single personal
experience can override the weightiest scientific authority - as when
Galileo looked through the telescope and saw 'impossible' mountains
on the moon. So one can have a personal conviction that
'something is wrong with the conventional view' without necessarily
being able to present conceptual or experimental proof for one's
conviction. Therefore, I prefer to keep reminding people that
something utterly central to their existence - in fact the defining
feature of that existence: our awareness of it - remains without an
explanation. Even the estimable David Deutsch - arch rationalist and
materialist - concedes that we have no explanation for qualia. We only
differ in our belief as to how far-reaching the revisions to our
understanding will have to be in order to achieve that explanation.
Maybe Bruno has found it, but for the reasons I am trying to explicate
in this thread, I'm not convinced yet.

BTW, while I am with Craig in intuiting a serious conceptual lacuna in
the materialist paradigm, that doesn't necessarily enamour me of his
alternative. His talk of 'sense making' seems to me more like a 'way
of talking about things' than a theory in the scientific or
philosophic sense. It doesn't really seem to explain anything as such,
but more to put a lot of language around an ill defined intuition.
Sorry Craig if that wrongs you, but like others, I would like to hear
something concrete your theory predicts rather than just another
interpretive slant on the same data.

Pierz

unread,
Jan 28, 2012, 10:36:19 PM1/28/12
to Everything List


On Jan 27, 1:26 am, acw <a...@lavabit.com> wrote:
> On 1/26/2012 15:28, Pierz wrote:
>
>
>
> >> Arithmetic itself can admit many interpretations and axioms tell you what
> >> 'arithmetic' isn't and what theorems must follow, not what it is
>
> > I don't see that. I mean, sure you can't say what a number 'is' beyond
> > a certain point, but everything falters on a certain circularity at
> > some point. With maths we don't have to ask what it is beyond what it
> > is defined as being, and my argument is that adding qualia into it is
> > adding something outside its own internal logic, when maths is, purely
> > and entirely, exactly that logic.
>
> >> can you explain to me what a number is without appealing to a model or
> >> interpretation?
>
> > Can you explain what anything is, indeed can you speak or think at all
> > without appealing to a model or interpretation?
>
> >> Attributing consciousness to
> >> (undefinable) arithmetical truth appears to me like a better theory than
> >> attributing it to some uncomputable God-of-the-gaps physical magic
>
> > I associate the term 'god of the gaps' with theological arguments
> > based on incomplete scientific theories/knowledge. We aren't arguing
> > about God but about consciousness. Also, there's an ambiguity to what
> > you mean by 'uncomputable' here. We are talking about qualia which one
> > can't describe as uncomputable in a mathematical sense, but perhaps
> > better as 'unmathematical', not subject to mathematical treatment at
> > all. Qualia are 'uncomputable' in this sense also in an arithmetical
> > ontology in that nobody could ever 'predict' a quale, just as nobody
> > can ever describe one, except by fallible analogies.  As for the
> > 'magic' in the physics, the magic is *somewhere*, like it or not.
> > There is no explanation in mathematics for why numbers should have a
> > quality of feeling built into them. I don't like material
> > epiphenomenalism either, and increasingly I am finding Bruno's movie
> > graph argument convincing, but more as an argument against comp than
> > as proof that mind is a property of arithmetic.
>
> >> although some philosophers do just that (like Dennett),
>
> > Jaron Lanier argues (jokingly) in 'You are not a gadget' that you can
> > only tell zombies by their philosophy, and that clearly therefore
> > Dennet is a philosophical zombie...
>
> >> and that you admit a digital substitution
>
> > Yep, I think that's where the philosophical rot begins. The assumption
> > is that the consciousness is inside the circuits - be it their logical
> > or their physical arrangement. Near death experiences are an argument
> > against that proposition. (I say that knowing full well I'm about to
> > get stomped by the materialists for it.) Another thought that makes me
> > wonder about computationalism is the experience of pure consciousness
> > that many people in deep meditation have reported - a state of mind
> > without computation, if real, would constitute an experiential
> > refutation of comp. I have experienced something like this myself,
> > alas not as a result of years of meditation, but when I passed out at
> > the chemist with the flu while waiting for a prescription! It was so
> > terribly disappointing to return to the 'thousand shocks that flesh is
> > heir to'. This does not make me a secret or not-so-secret theist BTW.
> > Unfortunately that whole ridiculously simplistic debate has blinded
> > us to the infinite possible ways the world might be in between having
> > been created by a guy with a beard and being a meaningless tornado of
> > particles of stuff.
>
> If qualia don't correspond to a structure's properties, then we should
> observe inconsistencies between what we observe and what we do. Yet, we
> don't observe any of that. Which is why consciousness/qualia/'what it's
> like to be some structure' as internal truth makes sense to me. If you
> reject having a digital substitution, you either have to appeal to the
> brain having some concrete infinities in its implementation, or you have
> to say that there are some inconsistencies. To put it in another way,
> where in the piece-by-piece digital substitution thought experiment (the
> one I linked) do you think consciousness or qualia changes? Does it
> suddenly disappear when you replace one neuron? Does it fade, while the
> behavior never changes and the person reports having vivid and
> complete qualia? What about those people with digital implants (for
> example, for hearing), do you think they are now p.zombies? I'd rather
> bet on what seems more likely to me, but you're free to bet on less
> likely hypotheses.
>
> As for "Near Death Experiences" or various altered states of
> consciousness, I don't see how that shows COMP wrong:

No, you're right. They're just evidence against comp+phys (at least
some NDEs are). Pure consciousness with no process may however be
evidence against comp, though I'm sure the point could be argued.

> those people were
> conscious during them. I would even say that altered states of
> consciousness merely mean that the class of possible experiences is
> very large. I had a fairly vivid lucid dream last night, yet I don't
> take that as proof against COMP, I take that as proof that conscious
> experience can be quite varied, and the more unusual (as opposed to the
> usual awake state) the state is, the more unusual the nature of the
> qualia can be. If after drinking or ingesting some mind-altering
> substance, you have some unusual qualia, I'd say that at least partially
> points to your local brain's 'physical' (or arithmetical or
> computational or ...)  state being capable of being directly affected by
> its environment - again, points towards functionalism of some form, not
> against it.
>
> > On Jan 26, 11:08 pm, acw<a...@lavabit.com>  wrote:
> >>> In the same way, I can't see how qualia can emerge from arithmetic,
> >>> unless the rudiments of qualia are present in the natural numbers or
> >>> the operations of addition and multiplication. And yet it seems to me
> >>> they can't be, because the only properties that belong to arithmetic
> >>> are those lent to them by the axioms that define them. Indeed
> >>> arithmetic *is* exactly those axioms and nothing more. Matter may in
> >>> principle contain untold, undiscovered mysterious properties which I
> >>> suppose might include the rudiments of consciousness. Yet mathematics
> >>> is only what it is defined to be. Certainly it contains many mysterious
> >>> emergent properties, but all these properties arise logically from its
> >>> axioms and thus cannot include qualia.
>
> >>> I call the idea that it can numerology because numerology also
> >>> ascribes qualities to numbers. A '2' in one's birthdate indicates
> >>> creativity (or something), a '4' material ambition and so on. Because
> >>> the emergent properties of numbers can indeed be deeply amazing and
> >>> wonderful - Mandelbrot sets and so on - there is a natural human
> >>> tendency to mystify them, to project properties of the imagination
> >>> into them. But if these qualities really do inhere in numbers and are
> >>> not put there purely by our projection, then numbers must be more than
> >>> their definitions. We must posit the numbers as something that
> >>> projects out of a supraordinate reality that is not purely
> >>> mathematical - ie, not merely composed of the axioms that define an
> >>> arithmetic. This then can no longer be described as a mathematical
> >>> ontology, but rather a kind of numerical mysticism. And because
> >>> something extrinsic to the axioms has been added, it opens the way for
> >>> all kinds of other unicorns and fairies that can never be proved from
> >>> the maths alone. This is unprovability not of the mathematical
> >>> variety, but more of the variety that cries out for Mr Occam's shaving
> >>> apparatus.
>
> >> Why would any structure give rise to qualia? We think some structure
> >> (for example our brain, or the abstract computation or arithmetical
> >> truth/structure representing it) does and we communicate it to others in
> >> a "3p" way. The options here are to either say qualia exists and our
> >> internal beliefs (which also have 'physical' correlates) are correct, or
> >> that it doesn't and we're all delusional, although in the second case,
> >> the belief is self-defeating because the 3p world is inferred through
> >> the 1p view. It makes logical sense that a structure which has such
> >> beliefs as ourselves could have the same qualia (or a digital
> >> substitution of our brain), but this is *unprovable*.
>
> >> If you don't eliminate qualia away, do you think the principle described
> >> here makes sense? http://consc.net/papers/qualia.html
>
> ...
>

meekerdb

unread,
Jan 28, 2012, 11:44:56 PM1/28/12
to everyth...@googlegroups.com

No, there is a huge number of anecdotes.

http://records.viu.ca/www/ipp/pdf/NDE.pdf

And when there have been controlled experiments in which signs were placed on high shelves
in operating rooms, those floating NDE experiencers have not been able to read them.

> Other evidence, for instance, comes
> from LSD research conducted in the fifties (see Stanislav Grof's
> work).

The award winning Dr. Grof?
http://www.stanislavgrof.com/pdf/Bronze.Delusional.Boulder_2000.pdf

> Of course there's also vast and incontrovertible evidence that
> consciousness, under normal conditions, does supervene on brain state
> and structure, so we are left with an anomaly that in most cases is
> resolved by denying the evidence of the exceptions. This is not all
> that hard to do when the evidence is to be found in consciousnesses
> of subjects rather than 'instruments' and cannot easily be subjected
> to controlled experimental trials. But even a single personal
> experience can override the weightiest scientific authority

So all those sightings of ghosts and Elvis override the theory that the dead don't roam
around where you can see them.

> - as
> Galileo looking through the telescope and seeing 'impossible'
> mountains on the moon. So one can have a personal conviction that
> 'something is wrong with the conventional view' without necessarily
> being able to present conceptual or experimental proof for one's
> conviction. Therefore, I prefer to keep reminding people that
> something utterly central to their existence - in fact the defining
> feature to that existence: our awareness of it - remains without an
> explanation. Even the estimable David Deutsch - arch rationalist and
> materialist - concedes that we have no explanation for qualia.

Have you ever considered what form such an explanation might take?

Brent

Pierz

unread,
Jan 29, 2012, 12:22:48 AM1/29/12
to Everything List


On Jan 27, 5:55 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 26 Jan 2012, at 07:19, Pierz wrote:
>
> > As I continue to ponder the UDA, I keep coming back to a niggling
> > doubt that an arithmetical ontology can ever really give a
> > satisfactory explanation of qualia.
>
> Of course the comp warning here is a bit "diabolical". Comp predicts
> that consciousness and qualia can't satisfy completely the self-
> observing machine. More below.
>
> > It seems to me that imputing
> > qualia to calculations (indeed consciousness at all, thought that may
> > be the same thing) adds something that is not given by, or derivable
> > from, any mathematical axiom. Surely this is illegitimate from a
> > mathematical point of view. Every  mathematical statement can only be
> > made in terms of numbers and operators, so to talk about *qualities*
> > arising out of numbers is not mathematics so much as numerology or
> > qabbala.
>
> No, it is modal logic,

A nice term for speculation! Mind you, that's OK. Where would we be
without speculation? But the term 'modal logic' might be used for
numerology too - it *might* be the case that a 4 in one's birthdate
does signify a practical soul.

>although model theory does that too. It is
> basically the *magic* of computer science.

Magic, but not numerology then.

> relatively to a universal
> number, a number can denote infinite things,

I think you're saying that a number can be part of an infinite number
of sets, calculations etc, which is true, but what it denotes is
always purely a matter of logical numerical relationships, unless it
denotes something beyond mathematics itself, such as when I count
oranges. I am saying that to denote qualia, the numbers must be
denoting 'oranges' (or maybe the colour orange as an experience),
things outside of pure logic, not mathematical entities.

> like the program
> factorial denotes the set {(0,1),(1,1),(2,2),(3,6),(4,24),(5,120), ...}.
> Nobody can define consciousness and qualia, but many can agree on
> statements about them, and in that way we can even communicate or
> study what machine can say about any predicate verifying those
> properties.
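A minimal Python sketch (mine, not from the thread) of the quoted idea that a program denotes an extensional set of input/output pairs:

```python
# Mine, not from the thread: the program "factorial" as a finite rule
# that denotes an infinite graph of input/output pairs.
def factorial(n: int) -> int:
    """n! computed iteratively; 0! = 1 by convention."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

# A finite prefix of the infinite set the program denotes:
graph_prefix = {(n, factorial(n)) for n in range(6)}
# → {(0, 1), (1, 1), (2, 2), (3, 6), (4, 24), (5, 120)}
```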

>
> > Here of course is where people start to invoke the wonderfully protean
> > notion of ‘emergent properties’. Perhaps qualia emerge when a
> > calculation becomes deep enough.Perhaps consciousness emerges from a
> > complicated enough arrangement of neurons.
>
> Consciousness, as bet in a reality emerges as theorems in arithmetic.

Sorry, I cannot parse that sentence. It doesn't seem grammatical.

> They emerge like the prime numbers emerges.

'They'? The theorems? You mean consciousness is a bet on an
arithmetical theorem?

>They follow logically,
> from any non logical axioms defining a universal machine. UDA
> justifies why it has to be so, and AUDA shows how to make this
> verifiable, with the definitions of knowledge on which most people
> already agree.
>
> > But I’ll venture an axiom
> > of my own here: no properties can emerge from a complex system that
> > are not present in primitive form in the parts of that system.
>
> I agree with that in the logical sense. that is why I don't need more
> than arithmetic for the universal realm.
>
> > There
> > is nothing mystical about emergent properties. When the emergent
> > property of ‘pumping blood’ arises out of collections of heart cells,
> > that property is a logical extension of the properties of the parts -
> > physical properties such as elasticity, electrical conductivity,
> > volume and so on that belong to the individual cells. But nobody
> > invoking ‘emergent properties’ to explain consciousness in the brain
> > has yet explained how consciousness arises as a natural extension of
> > the known properties of brain cells  - or indeed of matter at all.
>
> Because the notion of matter prevents the progress. What arithmetic
> explains is why universal numbers can develop a many-dream-world
> interpretation of arithmetic justifying their local predictive
> theories. Then for consciousness, we can explain why the predictive
> theories can't address the question, for consciousness is related to
> the big picture behind the observable surface. Numbers too find truth
> that they can't relate to any numbers, or numbers relations.
>
> > In the same way, I can’t see how qualia can emerge from arithmetic,
> > unless the rudiments of qualia are present in the natural numbers or
> > the operations of addition and mutiplication.
>
> Rudiment of qualia would explains qualia away. They are intrinsically
> more complex. A qualia needs two universal numbers (the hero and the
> local environment(s) which executes the hero

Once executed, he's not a hero any more, he's a martyr :)

> (in the computer science
> sense,

oh, right ;)

> or in the UD). It needs the "hero" to refers automatically to
> high level representation of itself and the environment, etc. Then the
> qualia will be defined (and shown to exist) as truth felt as directly
> available, and locally invariants, yet non communicable, and applying
> to a person without description (the 1-person). "Feeling" being
> something like "known as true in all my locally directly accessible
> environments".
>
> > And yet it seems to me
> > they can’t be, because the only properties that belong to arithmetic
> > are those lent to them by the axioms that define them.
>
> Not at all. Arithmetical truth is far bigger than anything you can
> derive from any (effective) theory. Theories are not PI_1 complete,
> Arithmetical truth is PI_n complete for each n. It is very big.

I do appreciate Gödel's theorem and its proof that there are true,
unprovable statements within any given arithmetic, so you are correct.
But my error is one of technical terminology I think. Surely there are
statements that can be made within a certain arithmetic and others
that can't. For instance, within Peano arithmetic it does not make
sense to ask about the truth value of statements involving i (the
imaginary number). Then there are limits to what can be called a
mathematical statement - ie one involving the truth and falsity of
purely logical relations. So I can't, in any arithmetic or system of
mathematics, ask if the number 20 is nice or not. Or if the prime
numbers are pink or blue. Arithmetical truth may be as vast as you
like, but my point is that it is still *arithmetical*, and qualities
don't come into it. It is the set of sentences that can be made about
numbers and those sentences are limited in their symbols. So Gödel
doesn't help you here I don't think.
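The point about limited symbols can be made precise (my formalization, not in the original post): first-order arithmetical sentences are built from a fixed signature, so a predicate like 'nice' or 'pink' is simply not expressible in the language:

```latex
% The language of first-order arithmetic, in a standard presentation:
\mathcal{L}_A = \{\, 0,\; S,\; +,\; \times,\; = \,\}
% Every arithmetical sentence combines these symbols with variables,
% connectives and quantifiers, e.g. "every number is even or odd":
\forall x \, \exists y \; \big( x = y + y \;\lor\; x = S(y + y) \big)
% By contrast, "20 is nice" or "the primes are pink" uses a predicate
% that does not occur in \mathcal{L}_A at all.
```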

>
> > Indeed
> > arithmetic *is* exactly those axioms and nothing more.
>
> Gödel's incompleteness theorem refutes this.
>
> > Matter may in
> > principle contain untold, undiscovered mysterious properties which I
> > suppose might include the rudiments of consciousness. Yet mathematics
> > is only what it is defined to be. Certainly it contains many mysteries
> > emergent properties, but all these properties arise logically from its
> > axioms and thus cannot include qualia.
>
> It is here that you are wrong. Even if we limit ourselves to
> arithmetical truth, it extends terribly what machines can justify.
>

Terribly perhaps, but still not beyond the arithmetical, by
definition.

>
>
> > I call the idea that it can numerology because numerology also
> > ascribes qualities to numbers. A ‘2’ in one’s birthdate indicates
> > creativity (or something), a ‘4’ material ambition and so on. Because
> > the emergent properties of numbers can indeed be deeply amazing and
> > wonderful - Mandelbrot sets and so on - there is a natural human
> > tendency to mystify them, to project properties of the imagination
> > into them.
>
> No. Some bet on mechanism to justify the nonsensicalness of the
> notion of zombie, or the hope that he or his children might travel on
> mars in 4 minutes, or just empirically by the absence of relevant non
> Turing-emulability of biological phenomenon.
> Unlike putting consciousness in matter (an unknown into an unknown),
> comp explains consciousness with intuitively related concept, like
> self-reference, non definability theorem, perceptible incompleteness,
> etc.
>
> And if you look at the Mandelbrot set, a little bit everywhere, you
> can hardly miss the unreasonable resemblances with nature, from
> lightning to embryogenesis, giving evidence that its rational part
> might be a compact universal dovetailer, or creative set (in Post
> sense).


Well I certainly don't dispute the central significance mathematics
must play in any complete scientific or philosophical world view. I
suppose the question is whether that mathematics is ontologically
primary or not.
>
> > But if these qualities really do inhere in numbers and are
> > not put there purely by our projection, then numbers must be more than
> > their definitions. We must posit the numbers as something that
> > projects out of a supraordinate reality that is not purely
> > mathematical - ie, not merely composed of the axioms that define an
> > arithmetic.
>
> Like arithmetical truth. I think acw explained already.


Are you saying arithmetical truth is not purely mathematical?

>
> > This then can no longer be described as a mathematical
> > ontology, but rather a kind of numerical mysticism.
>
> It is what you get in the case where brain are natural machines.
>
> > And because
> > something extrinsic to the axioms has been added, it opens the way for
> > all kinds of other unicorns and fairies that can never be proved from
> > the maths alone. This is unprovability not of the mathematical
> > variety, but more of the variety that cries out for Mr Occam’s shaving
> > apparatus.
>
> No government can prevent numbers from dreaming. Although they might
> try <sigh>.
>
> You can't apply Occam on dreams.
> They exist epistemologically once you have enough finite things.

Well, I'm not trying to prevent anyone from dreaming! I'm asking
whether or not maths can include dreams.

> Feel free to suggest a non-comp theory. Note that even just the
> showing of *one* such theory is anything but easy. Somehow you have
> to study computability, and UDA, to construct a non-Turing-emulable
> entity, whose experience is not recoverable in any first person sense.
> Better to test comp on nature, so as to have a chance at least to get
> evidence against comp, or against the classical theory of knowledge.
>

Hehe. I suppose you have some idea that I can't do that! As noted in
my prior post in this thread, these are my attempts to understand, as
completely as I can, this interesting philosophy. I admit I like your
theory better than materialism. I am trying to discover if I like it
enough ('like' in the sense that it satisfies my intellectual
intuition and my logic sufficiently) to entertain it seriously over my
current admission of nearly total ignorance as to what the world 'is'.
I don't need to posit an alternative to make that enquiry, and I can
make it by questioning whatever in the theory seems weak (even if it
proves in the end to be my understanding that is the weakness).
> Bruno
>
> http://iridia.ulb.ac.be/~marchal/

Craig Weinberg

Jan 29, 2012, 12:27:57 AM
to Everything List
On Jan 28, 7:29 pm, acw <a...@lavabit.com> wrote:

> On 1/27/2012 15:36, Craig Weinberg wrote:
> > On Jan 27, 12:49 am, acw <a...@lavabit.com> wrote:
> >> On 1/27/2012 05:55, Craig Weinberg wrote:
> >>> On Jan 26, 9:32 pm, acw <a...@lavabit.com> wrote:

> >>>> There is nothing on the display except transitions of pixels. There is
> >>>> nothing in the universe, except transitions of states

> >>> Only if you assume that our experience of the universe is not part of
> >>> the universe. If you understand that pixels are generated by equipment
> >>> we have designed specifically to generate optical perceptions for
> >>> ourselves, then it is no surprise that it exploits our visual
> >>> perception. To say that there is nothing in the universe except the
> >>> transitions of states is a generalization presumably based on quantum
> >>> theory, but there is nothing in quantum theory which explains how
> >>> states scale up qualitatively so it doesn't apply to anything except
> >>> quantum. If you're talking about 'states' in some other sense, then
> >>> it's not much more explanatory than saying there is nothing except for
> >>> things doing things.

> >> I'm not entirely sure what your theory is,

> > Please have a look if you like: http://multisenserealism.com

> Seems quite complex, although it might be testable if your theory is
> developed in more detail such that it can offer some testable predictions.

I'm open to testable predictions, although part of the model is that
testing itself is biased toward the occidental half of the continuum
to begin with. We cannot predict that we should exist.

> >> but if I had to make an
> >> initial guess (maybe wrong), it seems similar to some form of
> >> panpsychism directly over matter.

> > Close, but not exactly. Panpsychism can imply that a rock has human-
> > like experiences. My hypothesis can be categorized as
> > panexperientialism because I do think that all forces and fields are
> > figurative externalizations of processes which literally occur within
> > and through 'matter'. Matter is in turn diffracted pieces of the
> > primordial singularity.

> Not entirely sure what you mean by the singularity, but okay.

The singularity can be thought of as the Big Bang before the Big Bang,
but I take it further through the thought experiment of trying to
imagine what it really must be, rather than accepting the cartoon
version of some ball of white light exploding into space. Since space
and time come out of the Big Bang, it has no place to explode out to,
and no exterior to define any boundaries to begin with. What that
means is that space and time are divisions within the singularity, and
the Big Bang is eternal and timeless at once, and we are inside of it.

> > It's confusing for us because we assume that
> > motion and time are exterior conditions, by if my view is accurate,
> > then all time and energy is literally interior to the observer as an
> > experience.

> I think most people realize that the sense of time is subjective and
> relative, as with qualia. I think some form of time is required for
> self-consciousness. There can be different scales of time, for example,
> the local universe may very well run at planck-time (guesstimation based
> on popular physics theories, we cannot know, and with COMP, there's an
> infinity of such frames of reference), but our conscious experience is
> much slower relative to that planck-time, usually assumed to run at a
> variable rate, at about 1-200Hz (neuron-spiking freq), although maybe
> observer moments could even be smaller in size.

I think Planck time is an aspect of the instruments we are using to
measure microcosmic events. There is no reason to think that time is
literal and digital.

> > What I think is that matter and experience are two
> > symmetrical but anomalous ontologies - two sides of the same coin, so
> > that our qualia and content of experience are descended from
> > accumulated sense experience of our constituent organisms, not
> > manufactured by their bodies, cells, molecules, interactions. The two
> > are both opposite expressions (a what & how of matter and space and a
> > who & why of experience or energy and time) of the underlying sense
> > that binds them to the singularity (where & when).

> Accumulated sense experience? Our neurons do record our memories
> (lossily, as we also forget)

There is loss but there is also embellishment. Our recollection is
influenced by our semantic agendas, not only by data loss. There are also
those cases of superior autobiographical memory
http://www.cbsnews.com/stories/2010/12/16/60minutes/main7156877.shtml
which indicate that memory loss is not an inherent neurological
limitation.
Synesthesia illustrates that visual qualia are not necessary to
interpret optical data. It could be olfactory or aural or some other
qualia just as easily and satisfy the function the same way. If you
assume that putting eyes on a robot conjures qualia automatically, why
would it be visual qualia?

> on the contrary, mechanism explains it quite
> well. Blindsight seems to me to be due to the neocortex being very good
> at prediction and integrating data from other senses, more on this idea
> can be seen in Jeff Hawkins' "On Intelligence". I can't venture a guess
> about anosognosia, it seems like a complicated-enough neurophysiology
> problem.

We don't need to get too deeply into it though to see that it is
possible for our sense of sight to function to some extent without our
seeing anything, and that it is possible for us to see things without
those things matching the optical referent.
When we stimulate the visual cortex of blind subjects (blind from
birth), they feel it as tactile stimulation. That says to me that the
sense organs are not mere peripheral inputs but actually imprint the
qualia. If you had auditory implants or artificial eyes from birth?
Hard to say. Seems like a good implant would be a good match for the
neural expectations and qualia might have a similar palette.

> To elaborate, consider that someone gets a digital eye, this eye can
> capture sense data from the environment, process it, then route it to an
> interface which generates electrical impulses exactly like how the eye
> did before and stimulates the right neurons. Consider the same for the
> other senses, such as hearing, touch, smell, taste and so on.

I have not seen that any prosthetic device has given a particular
sense to anyone who didn't already have it naturally at some point in
their life. I could be wrong, but I can't find anything online
indicating that it has been done. It seems like one of the many
instances of miraculous breakthroughs that have perpetually been on
the verge of happening.

> Now
> consider a powerful-enough computer capable of simulating an
> environment, first you can think of some unrealistic like our video
> games, but then you can think of something better like ray-tracing and
> eventually full-on physical simulation to any granularity that you'd
> like (this may not yet be feasible in our physical world without slowing
> the brain down, but consider it as a thought experiment for now).

I'll go with this proposition as a thought experiment but I don't
know if any digital simulation can deliver on its promise IRL,
regardless of sophistication or resolution. It may be the case that,
given the way our senses coordinate with each other, you could never
completely eliminate a subjective rejection on some subtle level. We
may have a sense of reality, even if it's not consciously available. I
think a study could be done to see if people respond differently to a
bot than to a real person, even if they are consciously fooled. It's
not critical, but I have a hunch that people might sense or know more
than they think they know about what is real and what isn't.

> Do you
> think these brains are p. zombies because they are not interacting with
> the "real" world? The reason I'm asking this question is that it seems
> to me like in your theory, only particular things can cause particular
> sense data, and here I'm trying to completly abstract away from sense
> data and make it accessible by proxy and allow piping any type of data
> into it (although obviously the brain will only accept data that fits
> the expected patterns, and I do expect that only correct data will be sent).

No, real brains have real qualia, even if the external input is an
imitation of natural inputs. Again though, maybe no matching qualia if
it has not been initialized by a neurological organ at some point, but
still functional. If you have never in your life seen blue with your
eyes, I don't know that any kind of stimulation of the brain will
generate blue.
Why? Should we credit a trash can that says THANK YOU on the lid
with politeness? If we met a computer as an alien life form, then sure
we should give the benefit of the doubt, but computers we know for a
fact have been designed to imitate intelligent behavior by intelligent
humans. It's like watching a stage magician when we know how the trick
is done.

> Of course we cannot know if anything besides
> us is conscious, but I tend to favor non-solipsistic theories myself.
> The brain physically stores beliefs in synapses and its neuron bodies

Not necessarily. TV programs are not stored in the pixels of the TV
screen. Neurology may only be an organic abacus which we use to keep
track of things. The memories are not in the arrangements of the
synapses but accessed through them.

> and I see no reason why some artificial general intelligence couldn't
> store its beliefs in its own data-structures such as hypergraphs and
> whatnot, and the actual physical storage/encoding shouldn't be too
> relevant as long as the interpreter (program) exists.

Because it has no beliefs. It stores only locations of off/on
switches.

> I wouldn't have
> much of a problem assuming consciousness to anything that is obviously
> behaving intelligent and self-aware. We may not have such AGI yet, but
> research in those areas is progressing rather nicely.

I would say that ATI (Artificial Trivial Intelligence) is progressing
rather nicely, but true AGI is stalled indefinitely.
> > I understand that completely, but it relies on conflating some
> > functions of emotions with the experience of them. Reward and
> > punishment only works if there is qualia which is innately rewarding
> > or punishing to begin with. No AI has that capacity. It is not
> > possible to reward or punish a computer.
>
> Yet they will behave as if they have those emotions, qualia, ...

So will a cartoon character.

> Punishing will result in some (types of) actions being avoided and
> rewards will result in some (types of) actions being more frequent.

That is only one of the results of punishment and reward. There are
many, many others. They teach us to punish and reward others. They
give us traumatic memories. They might make us addicted to other
rewards. Lots of things that will never happen to a computer.
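The narrow mechanical claim in the quoted lines, that rewarded actions become more frequent and punished ones avoided, is at least easy to sketch. Here is a toy epsilon-greedy value learner (the scheme and every parameter are illustrative choices of the sketch, not anything proposed in the thread):

```python
import random

def run_bandit(rewards, steps=1000, lr=0.1, eps=0.1, seed=0):
    """Two-armed-bandit style learner: repeatedly pick an action
    (mostly the highest-valued one, occasionally a random one),
    receive its fixed reward, and nudge that action's estimated
    value toward the reward. Returns how often each was chosen."""
    rng = random.Random(seed)
    values = [0.0] * len(rewards)
    counts = [0] * len(rewards)
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(len(rewards))   # explore
        else:
            a = max(range(len(rewards)), key=lambda i: values[i])
        counts[a] += 1
        values[a] += lr * (rewards[a] - values[a])
    return counts

# Action 0 is 'rewarded', action 1 is 'punished':
counts = run_bandit([1.0, -1.0])
```

The punished action ends up chosen only during the occasional random exploration, which is exactly the frequency shift the quoted sentence describes, and nothing more.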

> A computationalist may claim they are conscious because of the
> computational structure underlying their cognitive architecture.
> You might claim they are not because they don't have access to "real"
> qualia or that their implementation substrate isn't magical enough?

My views have nothing to do with magic. Computationalism is about
magic. Also, all qualia are real qualia; they are just materially
limited to the scale and nature of the experiencer.

> Eventually such a machine may plead to you that they are conscious and
> that they have qualia (as they do have sense data), but you won't
> believe them because of being implemented in a different substrate than
> you? Same situation goes for substrate independent minds/mind uploads.

Meh. Science fiction. If such a thing were remotely possible then
there would be no difference between experimenting with new operating
system builds and grafting human-cockroach genetic hybrids. Computer
science would be considered genocidal. Does Watson know or care if you
wipe its memory or turn it off? Of course not; it's an electronic
filing cabinet with a fancy lookup interface.

>
> > It's not necessary since they
> > have no autonomy (avoiding 'Free Will' for John Clark's sake) to begin
> > with.
>
> I don't see why not. If I had to guess, is it because you don't grant
> autonomy to anything whose behavior is fully determined?

No, it's because our autonomy comes from the fact that we are made of
a trillion living cells which are all descended from autonomous
eukaryotes. Living organisms are a terrible choice to make a machine
out of, which is why the materials we select for computers and
machines are the precise opposite of living organisms. Sterile, rigid,
dry, hard, inorganic, etc. Also our every experience with machines and
computers has only reinforced the pre-existing stereotype of machines
as unfeeling and automatic. Why on Earth should I imagine that
machines have any autonomy whatsoever? Where would the dividing line
be? Do trash cans have autonomy? Puppets? Mousetraps? At what point
does autonomy magically appear?

> Within COMP,
> you both have deterministic behavior, but indeterminism is also
> completely unavoidable from the 1p. I don't think 'free' will has
> anything to do with 1p indeterminism, I think it's merely the feeling
> you get when you have multiple choices and you use your active conscious
> processes to select one choice, however whatever you select, it's always
> due to other inner processes, which are not always directly accessing to
> the conscious mind - you do what you want/will, but you don't always
> control what you want/will, that depends on your cognitive architecture,
> your memories and the environment (although since you're also part of
> the environment, the choice will always be quasideterministic, but not
> fully deterministic).

I agree except for the fact that it makes no sense for such a feeling
to exist in the first place. There is no reason to be conscious of
some decisions and not of others were there not the possibility to
influence those decisions consciously. Just because there are multiple
subconscious agendas doesn't mean that you don't consciously
contribute to the process in a causally efficacious way.

>
> > All we have to do is script rules into their mechanism.
>
> It's not as simple, you can have systems find out their own rules/goals.
> Try looking at modern AGI research.

I know, I have already had this conversation with actual AGI
researchers. It still is only going to find rules based on the
parameters you set. The system is never going to find a goal like
"kill the programmer as soon as possible". AGI = trivial intelligence
and trivial agency. It doesn't scale up to higher quality agency or
intelligence, just like 100,000 frogs aren't the equivalent of one
person.

>
> > Some
> > parents would like to be able to do that I'm sure, but of course it
> > doesn't work that way for people. No matter how compelling and
> > coercive the brainwashing, some humans are always going to try to hack
> > it and escape. When a computer hacks it's programming and escapes, we
> > will know about it, but I'm not worried about that.
>
> Sure, we're as 'free' as computations are, although most computations
> we're looking into are those we can control because that's what's
> locally useful for humans.

If computations were as free as us, they would look for humans who
they can control because that's what's locally useful for computers.

>
> > What is far more
> > worrisome and real is that the externalization of our sense of
> > computation (the glass exoskeleton) will be taken for literal truth,
> > and our culture will be evacuated of all qualities except for
> > enumeration. This is already happening. This is the crisis of the
> > 19-21st centuries. Money is computation. WalMart parking lot is the
> > cathedral of the god of empty progress.
>
> There are some worries. I wouldn't blame computation for it,

I don't blame computation, but I think that it is a symptom of the
excessively occidental pendulum swing since the Enlightenment Era.
Modern science and mercantilism are born of the same time, place, and
purpose - the impulse for control of external circumstances through
methodical discipline and organization - the harnessing of logic and
objectivity.

> but our
> current limited physical resources and some emergent social machines
> which might not have beneficial outcomes, sort of like a tragedy of the
> commons, however that's just a local problem. On the contrary, I think
> the answer to a lot of our problems has computational solutions,
> unfortunately we're still some 20-50+ years away to finding them, and I
> hope we won't be too late there.

I think it's already 30 years too late and unfortunately I think the
financialization problem is not going to permit any solutions of any
kind from being realized. Only a change in human sense and redirection
of free will could save us, and that would be a miracle that dwarfs
all previous revolutions.

>
>
>
> >>>> regardless of how sensing (indirectly accessing data) is done, emergent
> >>>> digital movement patterns would look like (continuous) movement to the
> >>>> observer.
>
> >>> I don't think that sensing is indirect accessed data, data is
> >>> indirectly experienced sense. Data supervenes on sense, but not all
> >>> sense is data (you can have feelings that you don't understand or even
> >>> be sure that you have them).
>
> >> It is indirect in the example that I gave because there is an objective
> >> state that we can compute, but none of the agents have any direct access
> >> to it - only to approximations of it - if the agent is external, he is
> >> limited to how he can access by the interface, if the agent is itself
> >> part of the structure, then the limitation lies within itself - sort of
> >> like how we are part of the environment and thus we cannot know exactly
> >> what the environment's granularity is (if one exists, and it's not a
> >> continuum or merely some sort of rational geometry or many other
> >> possibilities).
>
> > Not sure what you're saying here. I get that we cannot see our own
> > fine granularity, but that doesn't mean that the sense of that
> > granularity isn't entangled in our experience in an iconic way.
>
> The idea was that indeed one cannot see their own granularity. I also
> gave an example of an interface to a system which has a granularity, but
> that wouldn't be externally accessible.
> I don't see what you mean by 'entangled in our experience in an iconic
> way'.

When you see these letters, you see words. Your entire history of
comprehending the English language is entangled within the visual
presentation of these words so that they make sense directly and don't
have to be consciously transduced from pixels to characters to words
to meaningful language. You read the meaning directly.

> You can't *directly* sense more information than that
> available directly to your senses, as in, if your eye only captures
> about 1000*1000 pixels worth of data, you can't see beyond that without
> a new eye and a new visual pathway (and some extension to the PFC and so
> on).

If I type this in Chinese, someone who reads Chinese will sense more
than you will even with the same information available directly to
your senses. Perception is not a passive reception of 'information',
it is a sensorimotive experience of a living animal.

> We're able to differentiate colors because of how the data is
> processed in the visual system.

Differentiation can be accomplished more easily with quantitative data
than qualitative experience. Why convert 400nm wavelength light into a
color if you can just read it directly as light of that exact
wavelength in the first place? It's redundant and nonsensical. I know
it seems like it makes it easier and convenient for us, but that's
reverse engineering and begging the question. The fact remains that
there is no logic in taking a precise exchange of digital quantitative
data into a black box where it is inexplicably converted into maple
syrup and cuckoo clocks so that it can then be passed back to the rest
of the brain in the form of acetylcholine and ion channel
polarizations.
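For what the quantitative alternative would look like: reading 400 nm "directly" and mapping it to a display color is nothing but a piecewise lookup. A rough sketch of a common visible-spectrum-to-RGB approximation (the breakpoints are conventional and illustrative, not exact colorimetry):

```python
def wavelength_to_rgb(nm):
    """Crudely map a visible wavelength in nanometres to an (r, g, b)
    triple in [0, 1]; wavelengths outside ~380-780 nm map to black."""
    if 380 <= nm < 440:    # violet
        return ((440 - nm) / 60.0, 0.0, 1.0)
    if 440 <= nm < 490:    # blue
        return (0.0, (nm - 440) / 50.0, 1.0)
    if 490 <= nm < 510:    # cyan
        return (0.0, 1.0, (510 - nm) / 20.0)
    if 510 <= nm < 580:    # green
        return ((nm - 510) / 70.0, 1.0, 0.0)
    if 580 <= nm < 645:    # orange
        return (1.0, (645 - nm) / 65.0, 0.0)
    if 645 <= nm <= 780:   # red
        return (1.0, 0.0, 0.0)
    return (0.0, 0.0, 0.0)
```

The lookup itself carries no experience of violet; the gap between such a table and the quale is exactly the point under dispute here.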

> We're not able to sense strings or
> quarks or even atoms directly, we can only infer their existence as a
> pattern indirectly.

Right, but when the atoms in our retinal cells change, we see
something.

>
>
>
> >> > I'm not sure why you say that continuous
> >> > movement patterns emerge to the observer, that is factually incorrect.
> >> >http://en.wikipedia.org/wiki/Akinetopsia
> >> Most people tend to feel their conscious experience being continuous,
> >> regardless of if it really is so, we do however notice large
> >> discontinuities, like if we slept or got knocked out. Of course most
> >> bets are off if neuropsychological disorders are involved.
>
> > Any theory of consciousness should rely heavily on all known varieties
> > of consciousness, especially neuropsychological disorders. What good
> > is a theory of 21st century adult males of European descent with a
> > predilection for intellectual debate? The extremes are what inform us
> > the most. I don't think there is such a thing as 'regardless of if it
> > really is so' when it comes to consciousness. What we feel our
> > conscious experience to be is actually what it feels like. No external
> > measurement can change that. We notice discontinuities because our
> > sense extends much deeper than conscious experience. We can tell if
> > we've been sleeping even without any external cues.
>
> Sure, I agree that some disorders will give important hints as to the
> range of conscious experience, although I think some disorders may be so
> unusual that we lose any idea about what the conscious experience is.
> Our best source of information is our own 1p and 3p reports.

I think the more unusual the better. We need every source of
information about it.
> > It's this: http://d2o7bfz2il9cb7.cloudfront.net/main-qimg-6e13c63ae0561f4fee4149...
>
> You have to show that mechanism makes no sense.

Mechanism does make sense though, just not quite as much as sense
itself.

> Given the data that I
> observe, mechanism is what both what my inner inductive senses tell me
> as well as what formal induction tells me is the case. We cannot know,
> but evidence is very strong towards mechanism.

That's because evidence is mechanistic. Subjectivity cannot be proved
through external evidence.

> I ask you again to
> consider the brain-in-a-vat example I said before. Do you think someone
> with an auditory implant (examples:
> http://en.wikipedia.org/wiki/Auditory_brainstem_implant
> http://en.wikipedia.org/wiki/Cochlear_implant ) hears nothing? Are they
> partial zombies to you?

No, the nature of sense is such that it can be prosthetically
extended. Blind people can 'see' with a cane. That's very different
from being replaced or simulated though.

> They behave in all ways like they sense the sound, yet you might claim
> that they don't because the substrate is different?

The substrate isn't different because their brains are human brains.

>
> >> COMP on the other hand, offers very solid
> >> testable predictions and doesn't fail most thought experiments or
> >> observational data that you can put it through (at least so far). I wish
> >> other consciousness theories were as solid, understandable and testable
> >> as COMP.
>
> > My hypothesis explains why that is the case. Comp is too stupid not to
> > prove itself. The joke is on us if we believe that our lives are not
> > real but numbers are. This is survival 101. It's an IQ test. If we
> > privilege our mechanistic, testable, solid, logical sense over our
> > natural, solipsistic, anthropic sense, then we will become more and
> > more insignificant, and Dennet's denial of subjectivity will draw
> > closer and closer to self-fulfilling prophesy. The thing about
> > authentic subjectivity, it is has a choice. We don't have to believe
> > in indirect proof about ourselves because our direct experience is all
> > the proof anyone could ever have or need. We are already real, we
> > don't need some electronic caliper to tell us how real.
>
> COMP doesn't prove itself, it requires the user to make some sane
> assumptions (either impossibility of zombies or functionalism or the
> existence of the substitution level and mechanism; most of these
> assumptions make logical, scientific and philosophic sense given the
> data).

But they presume functionalism and representational qualia a priori.
Those presumptions don't hold, though. Blindsight, synesthesia,
anosognosia, and neural plasticity prove that representation is
neither necessary nor sufficient to define qualia.

> It just places itself as the best candidate to bet on, but it can
> never "prove" itself.

A seductive coercion.

> COMP doesn't deny subjectivity, it's a very
> important part of the theory. The assumptions are just: (1p) mind,
> (some) mechanism (observable in the environment, by induction),
> arithmetical realism (truth value of arithmetical sentences exists), a
> person's brain admits a digital substitution and 1p is preserved (which
> makes sense given current evidence and given the thought experiment I
> mentioned before).

Think about substituting vinegar for water. A plant will accept a
certain concentration ratio of acetic acid to water, but just because
they are both transparent liquids does not mean a plant will live on
it once the concentration is high enough.

>
>
>
> >>>> when
> >>>> a photon hits a photoreceptor cell, that *binary* piece of information
> >>>> is transmitted through neurons connected to that cell and so on
> >>>> throughout the visual system(...->V1->...->V4->IT->...) and eventually
> >>>> up to the prefrontal cortex.
>
> >>> That's a 3p view. It doesn't explain the only important part -
> >>> perception itself. The prefrontal cortex is no more or less likely to
> >>> generate visual awareness than the retina cells or neurons or
> >>> molecules themselves.
>
> >> In COMP, you can blame the whole system for the awareness, however you
> >> can blame the structure of the visual system for the way colors are
> >> differentiated - it places great constraints on what the color qualia
> >> can be - certainly not only black and white (given proper
> >> functioning/structure).
>
> > Nah. Color could be sour and donkey, or grease, ring, and powder. The
> > number of possible distinctions is, and even their relationships to
> > each other as you say, part of the visual system's structure, but it
> > has nothing to do with the content of what actually is distinguished.
>
> It seems to me like your theory is that objects (what is an object here?
> do you actually assume a donkey to be ontologically primitive?!) emit
> magical qualia-beams that somehow directly interact with your brain
> which itself is made of qualia-like things. Most current science
> suggests that that isn't the case, but surely you can test it, so you
> should. Maybe I completely misunderstood your idea.

You've got it all muddled up. The brain is made of matter in space.
The self is made of experience through time. I can have an experience
of a donkey or the ocean whether or not there is any corresponding
matter near my body (dreams, imagination, hypnosis, fiction, movie,
etc). While I experience that, my brain is doing billions of synaptic
neurochemical interactions, none of which resemble a donkey or the
ocean in any way. The donkey and the neurology overlap through the
sense of sharing the same place and synchronization, and they are
ultimately two opposite parts of the same event. There are no beams,
there is only sense. Part of my brain changes as it senses how the
retina changes as it senses how the optical environment changes (I
think photon-free, but that isn't critical). We see because our brain changes
in a way which makes sense of what it expects is changing it. It's
active and direct. Not a solipsistic representation/simulation.
Imperfect and idiosyncratic, sure. We are made of meat, not Zeiss
lenses.

>
>
>
> >>> The 1p experience of vision is not dependent upon external photons (we
> >>> can dream and visualize) and it is not solipsistic either (our
> >>> perceptions of the world are generally reliable). If I had to make a
> >>> copy of the universe from scratch, I would need to know that what
> >>> vision is all about is feeling that you are looking out through your
> >>> eyes at a world of illuminated and illuminating objects. Vision is a
> >>> channel of sensitivity for the human being as a whole, and it has as
> >>> more to do with our psychological immersion in the narrative of our
> >>> biography than it does photons and microbiology. That biology,
> >>> chemistry, or physics does not explain this at all is not a small
> >>> problem, it is an enormous deal breaker.
>
> >> You're right that our internal beliefs do affect how we perceive things.
> >> It's not biology's or chemistry's job to explain that to you. Emergent
> >> properties from the brain's structure should explain those parts to you.
> >> Cognitive sciences as well as some related fields do aim to solve such
> >> problems. It's like asking why an atom doesn't explain the computations
> >> involved in processing this email. Different emergent structures at
> >> different levels, sure one arises from the other, but in many cases, one
> >> level can be fully abstracted from the other level.
>
> > Emergent properties are just the failure of our worldview to find
> > coherence. I will quote what Pierz wrote again here because it says it
> > all:
>
> > "But I'll venture an axiom
> > of my own here: no properties can emerge from a complex system that
> > are not present in primitive form in the parts of that system. There
> > is nothing mystical about emergent properties. When the emergent
> > property of 'pumping blood' arises out of collections of heart cells,
> > that property is a logical extension of the properties of the parts -
> > physical properties such as elasticity, electrical conductivity,
> > volume and so on that belong to the individual cells. But nobody
> > invoking 'emergent properties' to explain consciousness in the brain
> > has yet explained how consciousness arises as a natural extension of
> > the known properties of brain cells - or indeed of matter at all."
>
> If you don't like emergence, think of it in the form of "abstraction".
> When you write a program in C or Lisp or Java or whatever, you don't
> care what it gets compiled to: it will work the same on any machine if a
> compiler or interpreter exists for it and if your program was written in
> a portable manner. Emergence is similar, but a lot more muddy as the
> levels can still interact with each other and the fully "perfect"
> abstracted system may not always exist, even if most high-level behavior
> is not obvious from the low-level behavior. Emergence is indeed in the
> eye of the beholder. Consciousness in COMP is like some abstract
> arithmetical structure that can be locally implemented in your brain has
> a 1p view. The existence of the 1p view is not something reductionist,
> it's ontologically primitive (as arithmetical truth/relations), but
> merely a consequence of some particular abstract machine being contained
> (or emerging) at some substitution level in the brain. COMP basically
> says that rich enough machines will have qualia and consciousness if
> they satisfy some properties and they cannot avoid that.
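The compiler analogy in the quoted paragraph can be sketched with a toy example (Python standing in for "C or Lisp or Java"; all names here are hypothetical, invented for illustration): the same "portable program" runs unchanged over two very different low-level representations.

```python
from abc import ABC, abstractmethod

class Counter(ABC):
    """The abstract 'high level' the portable program is written against."""
    @abstractmethod
    def bump(self): ...
    @abstractmethod
    def value(self) -> int: ...

class IntCounter(Counter):
    """Low-level substrate 1: a machine integer."""
    def __init__(self):
        self.n = 0
    def bump(self):
        self.n += 1
    def value(self):
        return self.n

class TallyCounter(Counter):
    """Low-level substrate 2: a list of tally marks."""
    def __init__(self):
        self.marks = []
    def bump(self):
        self.marks.append("|")
    def value(self):
        return len(self.marks)

def run_program(counter: Counter) -> int:
    """The 'portable program': its observable behavior is identical
    no matter which substrate implements the Counter interface."""
    for _ in range(5):
        counter.bump()
    return counter.value()

print(run_program(IntCounter()), run_program(TallyCounter()))  # 5 5
```

The observable behavior of `run_program` is fully abstracted from whether the substrate counts with an integer or with a list of tally marks - the point being made above about one level being abstracted from another.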

A computer doesn't need to write programs in abstracted languages
though. The reason we don't care what we write it in is because it's
all going to be stripped of all abstraction when it is compiled.
Consciousness has no place in a computer.

>
> >>> My solution is that both views are correct on their own terms in their
> >>> own sense and that we should not arbitrarily privilege one view over
> >>> the other. Our vision is human vision. It is based on retina vision,
> >>> which is based on cellular and molecular visual sense. It is not just
> >>> a mechanism which pushes information around from one place to another,
> >>> each place is a living organism which actively contributes to the top
> >>> level experience - it isn't a passive system.
>
> >> Living organisms - replicators,
>
> > Life replicates, but replication does not define life. Living
> > organisms feel alive and avoid death. Replication does not necessitate
> > feeling alive.
>
> You'll have to define what feeling alive is.

Why? Is it not defined enough already? This is why occidental
approaches will always fail miserably at understanding consciousness.
They won't listen to a single note on the piano until we define what
music is first.

>This shouldn't be confused
> with being biological. I feel like I have coherent senses, that's what
> it means to me to be alive.

Right, it should not be confused with biology. For me 'I feel' is good
enough to begin with, but it extends further. I want to continue to
live, to experience pleasure and avoid pain, to seek significance and
stave off entropy, etc. Lots of things but they all begin with
sensorimotive awareness.

> My cells on their own (without any input
> from me) replicate and keep my body functioning properly. I will
> try to avoid situations that can kill me because I prefer being alive
> because of my motivational/emotional/reward system. I don't think
> someone will move or do anything without such a biasing
> motivational/emotional/reward system. There's some interesting studies
> on people who had damage to such systems and how it affects their
> decision making process.

Sure, yes, but we need not have any understanding of our cells or
systems. The feelings alone are enough. They are primitive. We don't
have to care why we want to avoid pain and death, the motivation is
presented without need for explanation. There is no logic - to the
contrary, all logic arises from these fundamental senses which
transcend logic.

>
> >> are fine things, but I don't see why
> >> one must confuse replicators with perception. Perception can exist by
> >> itself merely by virtue of passing information around and processing
> >> it. Replicators can also exist for similar reasons, but on a different
> >> level.
>
> > Perception has never existed 'by itself'. Perception only occurs in
> > living organisms who are informed by their experience. There is no
> > independent disembodied 'information' out there. There is detection and
> > response, sense and motive of physical wholes.
>
> I see no reason why that has to be true, feel free to give some evidence
> supporting that view. Merely claiming that those people with auditory
> implants hear nothing is not sufficient.

I didn't say that they hear nothing. If they had hearing loss from an
accident or illness I see no reason why they would not hear through an
implant. If they have never heard anything at all? Maybe, maybe not.
They could just as easily feel it as tactile rather than aural qualia
and we would not know the difference and neither would they. The Wiki
suggests this might be the case for all implant recipients "(most
auditory brain stem implant recipients only have an awareness of sound
- recipients won't be able to hear musical melodies, only the beat)".
You can feel a beat. That's not really an awareness of sound qua
sound, it's just a detection of one aspect of the phenomena our ears
can parse as aural sound.

> My prediction is that if one
> were to have such an implant, get some memories with it, then somehow
> switched back to using a regular ear, their auditory memories from those
> times would still remain.

I agree. Why wouldn't they?
Accepting a substitution is not the same as replacement. Prosthetic
hand? Sure. Prosthetic self? Not likely.

>
> >> , some neuromorphic hardware is predicted to be a few orders of
> >> magnitude faster(such as some 1000-4000 times our current rate), which
> >> would mean that if someone wanted to function at realtime speed, they
> >> might experience some insanely slow Internet speeds, for anything that
> >> isn't locally accessible (for example, between US and Europe or Asia),
> >> which might lead to certain negative social effects (such as groups of
> >> SIMs (Substrate Independent Minds) that prefer running at realtime speed
> >> congregating at locally accessible hubs as opposed to the much slower
> >> Internet). However, such a problem is only locally relevant (here in
> >> this Universe, on this Earth), and is solvable if one is fine with
> >> slowing themselves down relatively to some other program, and a system
> >> can be designed which allows unbounded speedup (I did write more on this
> >> in my other thread).
>
> > We are able to extend and augment our neurological capacities (we
> > already are) with neuromorphic devices, but ultimately we need our own
> > brain tissue to live in. We, unfortunately, cannot be digitized, we can
> > only be analogized through impersonation.
>
> You'd have to show this to be the case then. Most evidence suggests that
> we might admit a digital substitution level. We cannot know if we'd
> survive such a substitution from the 1p, and that is a bet in COMP.

A person with a prosthetic hand is one thing. A hand with a prosthetic
person is another. Digitizing the psyche is science fiction. I'm not
saying that lightly or out of prejudice or fear. I say that because I
used to believe it was possible (inevitable) but now I think that I
understand why it can't be. Information is inside of matter, not
outside in space. Energy is the experience of matter through time.
Different matter has different experiences, which is why there aren't
colonies of intelligent sand. Only some matter evolves biologically.
Only some cells evolve into complex organisms. Only some organisms are
animals, etc. Consciousness isn't just floating around in the clouds.

Craig

Craig Weinberg

unread,
Jan 29, 2012, 1:02:05 AM1/29/12
to Everything List
On Jan 28, 10:05 pm, Pierz <pier...@gmail.com> wrote:

> BTW, while I am with Craig in intuiting a serious conceptual lacuna in
> the materialist paradigm, that doesn't necessarily enamour me of his
> alternative. His talk of 'sense making' seems to me more like a 'way
> of talking about things' than a theory in the scientific or
> philosophic sense. It doesn't really seem to explain anything as such,
> but more to put a lot of language around an ill defined intuition.
> Sorry Craig if that wrongs you, but like others, I would like to hear
> something concrete your theory predicts rather than just another
> interpretive slant on the same data.

At this point my theory is like a sketch of the outline of the coast
of the new world. I can point out some mountains, some nice beaches,
but I'm only reporting what I see, I have not explored it much
personally. I don't have a laboratory or a research grant. Already I
can explain how morality and religion work, but to get into the
implications of a physics revolution is going to take more than just
me. I would think that electromagnetic equations, understood in terms
of space could be turned around into sensorimotive equations in terms
of time. This could lead to understanding how to manipulate gravity in
materials or how to live in more than one body at a time.

What makes my interpretive slant different is that it works. It
reconciles mind and body, time, space, matter and energy, order and
entropy, perception and relativity in a simple and, I think, accurate way.
It avoids all of the problems of metaphysical computation,
spiritualism, pseudosubstances, or amputated consciousness.

Craig

acw

unread,
Jan 29, 2012, 5:10:10 AM1/29/12
to everyth...@googlegroups.com
On 1/29/2012 07:27, Craig Weinberg wrote:
> On Jan 28, 7:29 pm, acw<a...@lavabit.com> wrote:
>
>> On 1/27/2012 15:36, Craig Weinberg wrote:
>>> On Jan 27, 12:49 am, acw<a...@lavabit.com> wrote:
>>>> On 1/27/2012 05:55, Craig Weinberg wrote:
>>>>> On Jan 26, 9:32 pm, acw<a...@lavabit.com> wrote:
>
>>>>>> There is nothing on the display except transitions of pixels. There is
>>>>>> nothing in the universe, except transitions of states
>
>>>>> Only if you assume that our experience of the universe is not part of
>>>>> the universe. If you understand that pixels are generated by equipment
>>>>> we have designed specifically to generate optical perceptions for
>>>>> ourselves, then it is no surprise that it exploits our visual
>>>>> perception. To say that there is nothing in the universe except the
>>>>> transitions of states is a generalization presumably based on quantum
>>>>> theory, but there is nothing in quantum theory which explains how
>>>>> states scale up qualitatively so it doesn't apply to anything except
>>>>> quantum. If you're talking about 'states' in some other sense, then
>>>>> it's not much more explanatory than saying there is nothing except for
>>>>> things doing things.
>
>>>> I'm not entirely sure what your theory is,
>
>>> Please have a look if you like: http://multisenserealism.com
>
>> Seems quite complex, although it might be testable if your theory is
>> developed in more detail such that it can offer some testable predictions.
>
> I'm open to testable predictions, although part of the model is that
> testing itself is biased toward the occidental half of the continuum
> to begin with. We cannot predict that we should exist.
>
If it's not testable, how can you know if a theory is likely true or
false? Most theories are false, although some are closer to the truth
than others. The goal is to get as close as possible.

>>>> but if I had to make an
>>>> initial guess (maybe wrong), it seems similar to some form of
>>>> panpsychism directly over matter.
>
>>> Close, but not exactly. Panpsychism can imply that a rock has human-
>>> like experiences. My hypothesis can be categorized as
>>> panexperientialism because I do think that all forces and fields are
>>> figurative externalizations of processes which literally occur within
>>> and through 'matter'. Matter is in turn diffracted pieces of the
>>> primordial singularity.
>
>> Not entirely sure what you mean by the singularity, but okay.
>
> The singularity can be thought of as the Big Bang before the Big Bang,
> but I take it further through the thought experiment of trying to
> imagine really what it must be - rather than accepting the cartoon
> version of some ball of white light exploding into space. Since space
> and time come out of the Big Bang, it has no place to explode out to,
> and no exterior to define any boundaries to begin with. What that
> means is that space and time are divisions within the singularity and
> the Big Bang is eternal and timeless at once, and we are inside of it.
>

When I think of the Big Bang, the question "what is outside" makes no
sense to me as I just think of the "singularity" as what's at time 0.
Except that I don't think the initial state is literally empty - not
even string theory assumes that. General relativity might assume that,
but we all know General Relativity is not compatible with Quantum
Mechanics, so we shouldn't assume it literally true when it comes to
black holes or the Big Bang. Of course, within the context of COMP,
there is no ontologically primary "Big Bang" per se, but some apparent
structure which had some time evolution that looks like that.


>>> It's confusing for us because we assume that
>>> motion and time are exterior conditions, but if my view is accurate,
>>> then all time and energy is literally interior to the observer as an
>>> experience.
>
>> I think most people realize that the sense of time is subjective and
>> relative, as with qualia. I think some form of time is required for
>> self-consciousness. There can be different scales of time, for example,
>> the local universe may very well run at planck-time (guesstimation based
>> on popular physics theories, we cannot know, and with COMP, there's an
>> infinity of such frames of references), but our conscious experience is
>> much slower relative to that planck-time, usually assumed to run at a
>> variable rate, at about 1-200Hz (neuron-spiking freq), although maybe
>> observer moments could even be smaller in size.
>
> I think planck time is an aspect of the instruments we are using to
> measure microcosmic events. There is no reason to think that time is
> literal and digital.
>

All the instruments? If everything that can measure at that scale gives
the same results, isn't that enough to just say 'it probably is
objectively so' (at least within this local physics, we're in
everything-list after all ;))


>>> What I think is that matter and experience are two
>>> symmetrical but anomalous ontologies - two sides of the same coin, so
>>> that our qualia and content of experience is descended from
>>> accumulated sense experience of our constituent organism, not
>>> manufactured by their bodies, cells, molecules, interactions. The two
>>> are both opposite expressions (a what & how of matter and space and a
>>> who & why of experience or energy and time) of the underlying sense
>>> that binds them to the singularity (where & when).
>
>> Accumulated sense experience? Our neurons do record our memories
>> (lossily, as we also forget)
>
> There is loss but there is also embellishment. Our recollection is
> influenced by our semantic agendas, not only data loss. There's also
> those cases of superior autobiographical memory
> http://www.cbsnews.com/stories/2010/12/16/60minutes/main7156877.shtml
> which indicate that memory loss is not an inherent neurological
> limitation.
>

There is probably some memory limit, although it should be fairly large
(90*10^9 neurons, each having some 1000-3000 synapses, is quite a bit of
data). However, even with that capacity it's hardly enough to losslessly
remember all sense data that one ever experienced, or even lossily - we
don't even remember sense data, we remember high-level patterns (see the
book I linked before) and patterns of patterns and so on (up to some ~10
levels of recursion). If we actively try to remember as many details as
possible, we'll do so, although I don't think this can go on
indefinitely. Someone obsessing over certain details all the time may
very well have better memory about them. Personally, I tend to forget
most things I don't use, even after just a few years, and I'm still
quite young. There would be things I would like to remember and yet some
specific details elude me as the memory is old (10 years?). We do
embellish our memories as well, which just shows how unreliable they can be.
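The capacity figure mentioned above can be made concrete with back-of-envelope arithmetic (the bytes-per-synapse figure is purely an assumption for illustration; real synaptic "storage" is not byte-addressable):

```python
neurons = 90 * 10**9          # ~90 billion neurons (figure from the text)
synapses_per_neuron = 3000    # upper end of the 1000-3000 range above
total_synapses = neurons * synapses_per_neuron

# Assume (purely for illustration) ~1 byte of effective storage per synapse.
bytes_per_synapse = 1
capacity_bytes = total_synapses * bytes_per_synapse

print(total_synapses)            # 270000000000000 (2.7*10^14 synapses)
print(capacity_bytes / 10**12)   # 270.0 (~270 "terabytes" under this assumption)
```

Large, but as argued above, still nowhere near enough to losslessly store a lifetime of raw sense data - hence patterns of patterns rather than recordings.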

I don't have synesthesia, so I can't speak for the nature of the qualia
experienced. As for interpreting optical data through other connected
parts of the cortex: imagine you have a region which is trained with
some particular sense data (not optical), now imagine it has some loose
connections to the visual system, thus synesthesia occurs - a few
auditory circuits fire together with visual ones, thus you have both
qualia at once (the visual one and the other one). The other system is
trained to mostly process data of a certain structure with certain types
of patterns being common in it, so it only experiences qualia consistent
with that type of structure. When the loose connections from the visual
system trigger circuits from the other system, the "wrong" qualia is
triggered together with the visual qualia. Given enough time, a person
with synesthesia should anticipate both qualia to occur simultaneously
and what they mean.
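A minimal sketch of this "loose connections" account (a deterministic toy with invented names, not a neural model): a stimulus to the visual module also leaks into a cross-linked auditory module, so both qualia events fire together.

```python
class Module:
    """A trained pattern recognizer: fires its own kind of 'qualia' event."""
    def __init__(self, name, quale):
        self.name = name
        self.quale = quale
        self.cross_links = []   # loose connections to other modules

    def stimulate(self, events):
        events.append(self.quale)
        for other in self.cross_links:
            events.append(other.quale)  # co-activation through the leak
        return events

visual = Module("V1", "color-quale")
auditory = Module("A1", "sound-quale")

# Synesthesia: a loose connection from the visual to the auditory module.
visual.cross_links.append(auditory)

events = visual.stimulate([])
print(events)   # ['color-quale', 'sound-quale'] - both qualia at once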

> If you
> assume that putting eyes on a robot conjures qualia automatically, why
> would it be visual qualia?

My assumption isn't nearly as simple. A camera or an eye would not be
sufficient to get the rich experience of visual qualia. You'd have to
have the right visual system processing the data (computationally, the
substrate itself would be irrelevant). If I had to venture a guess,
systems like Hierarchical Temporal Memory (HTM) or Deep Spatio-Temporal
Inference Network (DeSTIN) might have similar visual/color qualia to our
own due to how they recursively process/recognize the patterns in the
data, very much like our own visual system (they are based on similar
principles, but usually use more probabilistic/statistical methods instead
of implementing the full neural network, although they do retain the
same overall properties and structure), but they might not have the same
large cartesian theatre-like image, you'd probably require some motor
control (like our saccades) to be able to continuously capture more
details of the scene and form the needed associations. However, most of
these details are something for an AGI researcher to worry about.

Now about the distinction between visual and some other qualia. If
Hawkins' theory is correct, most of our cortical systems for processing
different types of input from the environment (visual, auditory,
sensory, ...) are pretty much the same circuit template organized in a
hierarchical manner (each part feeding recognized patterns to a higher
part and so on), and only some rough higher-level connectivity is
different - the main difference lies in what the input actually is.
If the system is trained using visual data, it'd end up recognizing
small patterns in small locally-connected areas (such as edges, lines,
..), then patterns in those patterns and so on, until it recognizes
objects as wholes (temporal patterns are also recognized as well, so
complex movement patterns can also be recognized; the system would
actually behave worse if data wasn't live or if there was no noise).
If you were to try to imagine what abstract structure would correspond
to that visual data, you'd see how it would encode the spatial(and
temporal) nature of the data, how it would differentiate colors, but
also contain high-level features (at different levels of the hierarchy)
such as lines or faces or whole objects (in a way, a person can also
think in higher-level concepts without consciously saying those
concepts' words in their mind). Such a system, if it were able to talk
about its qualia (in the framework of a larger system), would have
all the *communicable* properties of visual qualia that we humans
ascribe to it. One could also imagine that the perceived data structures
would have their corresponding arithmetical truths and so on (if
considering it within the context of COMP).
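The recursive recognition described above (small local patterns, then patterns of those patterns, then whole objects) can be caricatured in a few lines; this is a toy illustration of the idea of a hierarchy, nothing like an actual HTM implementation:

```python
# Toy hierarchy: each level recognizes patterns in the output of the level below.
def level1_edges(pixels):
    """Lowest level: detect 'edges' as changes between adjacent pixels."""
    return ["edge" if a != b else "flat" for a, b in zip(pixels, pixels[1:])]

def level2_shapes(edges):
    """Next level: patterns in the patterns - edges become contours."""
    return ["contour" if e == "edge" else "region" for e in edges]

def level3_object(shapes):
    """Top level: a whole-scene pattern over the contours below."""
    return "object" if shapes.count("contour") >= 2 else "background"

pixels = [0, 1, 1, 0, 1]
edges = level1_edges(pixels)          # ['edge', 'flat', 'edge', 'edge']
shapes = level2_shapes(edges)
print(level3_object(shapes))          # object
```

The same three-level scheme, fed 1-dimensional temporal data instead of spatial data, would learn very different internal structure - which is the point being made about visual versus auditory qualia.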

Now consider auditory data - it's basically temporal data, only one
dimension (or 2 with time), compared to the usual 3 (or 4, due to
spatial correlations between other systems) for visual data. The same
process as with the visual data would apply, but the learned structure
is now tailored for auditory data. The corresponding qualia would of
course be quite different from the visual one as the way the interpreter
has learned to work only correctly processes auditory data and if you
were to represent an abstract structure for that auditory data, you'd
see how the patterns were different from the visual patterns.

Sensory feeling would itself lend to similar learning and have its own
unique structural properties. At minimum, it would have the right
*communicable* properties.

Simpler qualia like smell or taste would still be unique in their own
way, but much less rich due to their considerably simpler structure.

Such a system would retain all *communicable* properties, but we can't
say if it would have the exact same qualia as we do, however I can't
know if you and me have the exact same qualia, we can only agree on the
communicable parts of it. I can't even know if I have the same qualia
every day - if there are parts that are completely incommunicable or
inaccessible structurally, that part of the nature of qualia may very
well be constantly changing ("dancing qualia") without us ever knowing
about it (of course, it also makes sense that we don't believe such a
thing to be happening because our experiences appear stable and we have
temporal continuity, and anything like that would be absolutely
incommunicable and inaccessible to any 3p instruments).

There are however many questions here for me: what qualia would a system
trained with atmospheric data have? what would it be like for an AGI if
they had an additional visual system? how it would be like to perceive
data in multiple dimensions (such as 3, 4, 5, ...) as systems can be
trained with such data?

We can find the communicable answers to such questions, but we cannot
know what it's like to be such systems without extending ourselves in such a
way as to partially be such a system (full 1p questions only have full
1p answers, but may have partial communicable 3p answers).

To summarize: The nature of the qualia depends on the structure of the
data that is perceived, as well as the structure of the system that does
the perceiving. We can only talk about communicable parts of qualia, and
those parts only depend on structural/pattern-based parts of the data
and processing/computational systems involved. We cannot even know of
the stability of anything not expressly contained within the structure,
nor does it make much sense to worry about it: even if it were
true, we'd be completely oblivious to it.

(Most of the stuff I'm talking about is discussed in detail in "On
Intelligence" and many other papers. It's a very promising theory about
the functioning of the neocortex and some of the related systems. Don't
read the book if you're looking for theories of consciousness though; I
think the author might be an eliminativist, but his HTM/intelligence
theories look pretty solid.)

>> on the contrary, mechanism explains it quite
>> well. Blindsight seems to me to be due to the neocortex being very good
>> at prediction and integrating data from other senses, more on this idea
>> can be seen in Jeff Hawkins' "On Intelligence". I can't venture a guess
>> about anosognosia, it seems like a complicated-enough neurophysiology
>> problem.
>
> We don't need to get too deeply into it though to see that it is
> possible for our sense of sight to function to some extent without our
> seeing anything, and that it is possible for us to see things without
> those things matching the optical referent.
>

Sure, if our visual system is already trained to "see" and if it
correlates/integrates data from other systems, I can see (pun intended)
how someone could claim to see things without actual sensory data. I
'see' lots of things while dreaming or closing my eyes and day-dreaming
- the visual system is already trained on visual data and one can recall
memories of things we've seen or sensed (the same goes for other qualia,
such as sound or feeling, smell, ...).

As I said before, the cortex is very uniform in the circuits it
contains. If something is never trained with some data, it never
develops the structures needed to process that type of data, thus the
qualia would match whatever it was trained with. If you trained the
visual cortex with auditory data, it would have temporal (audio) qualia.
The actual source shouldn't matter; for all you care, it could be
buffered PCM waves (obviously, learning in humans means correlating all
senses,
so "live" data that properly correlates with the other senses would be
required if you want it to be of any use to the human).

>> To elaborate, consider that someone gets a digital eye, this eye can
>> capture sense data from the environment, process it, then route it to an
>> interface which generates electrical impulses exactly like how the eye
>> did before and stimulates the right neurons. Consider the same for the
>> other senses, such as hearing, touch, smell, taste and so on.
>
> I have not seen that any prosthetic device has given a particular
> sense to anyone who didn't already have it naturally at
> some point in their life. I could be wrong, but I can't find anything
> online indicating that it has been done. It seems like one of the many
> instances of miraculous breakthroughs that have been on the verge of
> happening for
>

So you want babies to be given implants right after they're born? That's
the only way that could happen. If a baby never develops
an auditory system, there's no way an implant can help it (it'd take
much more than that). As for breakthroughs: there are a lot of things
which are possible in principle, yet will take some time to get realized
because it's a lot of work and effort, and even with current
technologies, we're severely limited: you may be upset that we haven't
cracked the AGI problem in all this time, but our technology still lags
too far behind to solve it easily; very simple solutions such as
AIXItl are computationally intractable (they'd only work in universes
such as those shown in the novel "Permutation City", we just don't have
that kind of resources here on Earth), more complex solutions such as
those based on our brain are still not computationally feasible, but
they'll be in some 20-50 years, although the guys at DARPA/HP SyNAPSE
may very well create some fast neuromorphic hardware (their projects
already have rat-scale networks of a few million neurons running around
water mazes; it will take a decade or less until they can get to
human-sized networks). Others are trying to be more clever about these
computational and architectural limitations (our CPUs are not nearly as
parallel as our brains are - they run one or a few threads at the same
time, our brains run 90*10^9*3000 "threads" at the same time) and design
AGIs
which function on classical CPUs or GPUs, the guys at OpenCog are trying
such a limited resource approach using some very interesting algorithmic
techniques, and I think that they may very well succeed in the goal of
human-level intelligence given some 7-15 years of work. However, even
with their resource-limited high-level approach, the resource costs are
prohibitive (hundreds of GB of RAM, multiple CPUs and eventually GPUs
and FPGAs (they're porting some components to work with specialized
hardware to get more speed)), just not nearly as bad as current
neuromorphic approaches (although the future memristor based ones may
very well be much faster and cost considerably less memory). To put it
more simply: there is no way they could have solved AGI 50 years ago, or
even 20 years ago, the hardware just wasn't there, nor was our knowledge
of cognitive architectures and AI. Instead of giving up on the problem,
better try to see why it's not yet solved and try to see if you can
contribute to solving it.

>> Now
>> consider a powerful-enough computer capable of simulating an
>> environment, first you can think of some unrealistic like our video
>> games, but then you can think of something better like ray-tracing and
>> eventually full-on physical simulation to any granularity that you'd
>> like (this may not yet be feasible in our physical world without slowing
>> the brain down, but consider it as a thought experiment for now).
>
> I'll go with this proposition as a thought experiment but I don't
> know if any digital simulation can deliver on its promise IRL,
> regardless of sophistication or resolution. It may be the case that
> the way our senses coordinate with each other you could never
> completely eliminate a subjective rejection on some subtle level. We
> may have a sense of reality, even if it's not consciously available. I
> think a study could be done to see if people respond differently
> to a bot than a real person, even if they are consciously fooled. It's
> not critical, but I have a hunch that people might sense or know more
> than they think they know about what is real and what isn't.
>

If the physics is too different, I can see why they'd respond
differently. I don't see how it would be different if the qualia were
as accurate as the real ones. Also, if a baby was in this VR world
from birth, I don't think it could ever know any different. An
"imperfect" simulation can be recognized if we have prior data of the
current reality. My question to you was a bit different: it seemed to me
that purely computational input was not leading to consciousness in your
theory (at least judging from some of your posts I read the other day);
I wanted to know if you think that is the case or not. If it is, then
what would happen, within your theory, to someone with an implant
replacing a sensory organ?

>> Do you
>> think these brains are p. zombies because they are not interacting with
>> the "real" world? The reason I'm asking this question is that it seems
>> to me like in your theory, only particular things can cause particular
>> sense data, and here I'm trying to completly abstract away from sense
>> data and make it accessible by proxy and allow piping any type of data
>> into it (although obviously the brain will only accept data that fits
>> the expected patterns, and I do expect that only correct data will be sent).
>
> No, real brains have real qualia, even if the external input is an
> imitation of natural inputs. Again though, maybe no matching qualia if
> it has not been initialized by a neurological organ at some point, but
> still functional. If you have never in your life seen blue with your
> eyes, I don't know that any kind of stimulation of the brain will
> generate blue.
>

Oh, that answers my previous question. So you think "real" qualia has
magical 'imprinting' properties. I don't think that is needed: I think
only the right structural differences and organization of the sense data
is needed. It seems that in your theory, you distinguish the origin of
the sense(color in this case), from the actual sense data itself. You
could reconsider the thought experiment from the perspective of someone
with a bionic eye from birth.

I don't care about a program that plays a Thank_You.mp3, but I would
care very much about a program with the right cognitive architecture
which formed similar internal beliefs as myself and has memories, is
self-conscious and so on. "If it looks like a duck, swims like a duck,
and quacks like a duck, then it probably is a duck."
Does it behave like a conscious self-aware being? Yes. Does this happen
because it has the right internal structures? Very much so yes.
Sure, a lot of modern computer programs are unintelligent, but I can see
some AGI systems which were designed to develop a psyche,
memories, associations, ... claim that they are conscious in the future
(if some projects succeed in their goal), and I would have no problem
considering them conscious. In your theory, you'd probably not attribute
consciousness to anything that isn't implemented in the required
"magical"(such as wetware) substrate.

>> Of course we cannot know if anything besides
>> us is conscious, but I tend to favor non-solipsistic theories myself.
>> The brain physically stores beliefs in synapses and its neuron bodies
>
> Not necessarily. TV programs are not stored in the pixels of the TV
> screen. Neurology may only be an organic abacus which we use to keep
> track of things. The memories are not in the arrangements of the
> synapses but accessed through them.
>

If that is so, you'd have to provide some evidence or explanations as to
why. I see no reason why someone would be convinced to take that view.
The evidence does suggest that memories are indeed stored in the graph
formed by neuron bodies (synapses being the edges and the neurons being
the vertices); that view is an oversimplification, but overall
such a model seems sufficient to store memories (as shown by some
artificial life projects which use such neural networks to simulate
simple animals/... or demonstrated by current HTM-like visual system
implementations).
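The idea that memories live in the connection graph rather than in any
single cell can be caricatured with a Hopfield-style associative memory.
Here's a deliberately tiny Python sketch (not a model of real cortex;
the pattern and sizes are made up): the stored pattern lives entirely in
the weight matrix between units, and a damaged cue still recalls it.

```python
import numpy as np

def train(patterns):
    """Store patterns in a symmetric weight matrix (Hebbian rule).
    The 'memory' lives in the connections, not in any single unit."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def recall(W, cue, steps=10):
    """Iteratively update units until the state settles on a stored pattern."""
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

# Store one 8-unit pattern, then recall it from a corrupted cue.
stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])
W = train(stored)
cue = stored[0].copy()
cue[0] = -1  # damage part of the cue ("a few neurons die")
print(recall(W, cue))  # recovers the full stored pattern
```

Flipping one unit of the cue still recovers the whole stored pattern,
which is the toy analogue of "a few neurons die? Not a problem."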

>> and I see no reason why some artificial general intelligence couldn't
>> store its beliefs in its own data-structures such as hypergraphs and
>> whatnot, and the actual physical storage/encoding shouldn't be too
>> relevant as long as the interpreter (program) exists.
>
> Because it has no beliefs. It stores only locations of off/on
> switches.
>

Your view is overly reductionist. The data contains abstract beliefs and
the machine will behave as if it has those beliefs. In COMP, it would
also be conscious because arithmetical truth would encode those beliefs.
If I take your view, I should only look at the brain as a complex
chemical/electrical network, and eventually reduce it to similar
off/on-switches. On that view there is no reason to attribute
consciousness to a brain either; the only reason we do is that we
observe our consciousness and qualia corresponding to the brain's
beliefs at some level of abstraction. I should have mentioned this
before, but we don't even see "qualia" directly (as in your theory?):
do you know what the data our visual system initially sees looks like?
It's a noisy, convoluted, not-too-high-res mess, nothing like our
clear, detailed, comprehensive conscious experience. It only becomes
like that after it's been broken into
patterns (and patterns of patterns and patterns of patterns of patterns
and patterns of ...) and propagated throughout the system (with other
systems' anticipating/predicting beliefs also MODIFYING the data). I can
only see such unprocessed noise if I turn off the lights at night and
stare a bit into the darkness - because no stable patterns exist in such
data, it's just too random and not structured, so no patterns are
recognized.
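A toy Python sketch of that "patterns of patterns" idea (nothing like a
real HTM; the bit-strings and the object name are invented): each level
only passes up the stable regularities it extracted from the level
below, so noise with no stable pattern never produces a recognition at
the top.

```python
from collections import Counter

def level1(noisy_samples):
    """Extract the stable local pattern from repeated noisy presentations
    by majority vote per position - the 'denoised' low-level feature."""
    length = len(noisy_samples[0])
    return tuple(
        Counter(s[i] for s in noisy_samples).most_common(1)[0][0]
        for i in range(length)
    )

def level2(features, known_objects):
    """Recognize a higher-level object as a pattern OF low-level patterns."""
    return known_objects.get(tuple(features), "unrecognized")

# Two low-level features, each seen several times with bit-flip noise.
edge_samples = ["1100", "1100", "1000", "1100"]
blob_samples = ["0011", "0011", "0011", "0111"]
f1 = level1([tuple(s) for s in edge_samples])  # -> ('1','1','0','0')
f2 = level1([tuple(s) for s in blob_samples])  # -> ('0','0','1','1')

# A higher level that only knows combinations of lower-level features.
objects = {(("1", "1", "0", "0"), ("0", "0", "1", "1")): "edge-then-blob"}
print(level2((f1, f2), objects))  # prints "edge-then-blob"
```

The top level never touches the raw noisy samples at all, which is the
point: what reaches "experience" here is processed pattern, not input.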

>> I wouldn't have
>> much of a problem assuming consciousness to anything that is obviously
>> behaving intelligent and self-aware. We may not have such AGI yet, but
>> research in those areas is progressing rather nicely.
>
> I would say that ATI (Artificial Trivial Intelligence) is progressing
> rather nicely, but true AGI is stalled indefinitely.
>

Really? Doesn't seem like you keep up with the progress. I'm not talking
about the narrow AIs you use while translating text (which is far from
perfect), or even the more fancy Watson winning Jeopardy (although that
might be getting a little closer).
Have you looked at recent AGI papers? Or looked into systems like
OpenCog or DARPA SyNAPSE's idea, or even some of the recent narrow
HTM-like systems (which while not generally intelligent, have solved
some difficult sense-data processing problems). I'd bet on some very
interesting progress (such as baby-level or mammal-level intelligence)
within 7 years or so. As for human-level, probably 15-50+ years,
depending on which approaches turn out to work or not. As I said before,
we're still very constrained as far as computational resources are
concerned, and nobody has cracked molecular nanotechnology yet, and no
fancy 3D chip fabrication technology either. Energy costs are also very
high - running something the size of the human brain using current
FPGA-based neuromorphic technology would cost as much as running a small
city. Some new hardware approaches (like SyNAPSE) will likely solve one
case of this problem. Efficient resource-constrained AGI is hard, but
most evidence points toward it being solvable.

A cartoon character does not have inner beliefs (except those modeled in
the author's mind) or a working cognitive architecture. Of course, I
guess in your theory only those made of magical matter organized in
magical ways have thoughts? The thing is, humans don't store beliefs in
neurons either; they store them in emergent abstract structures whose
data is encoded in neurons/synapses. A few neurons die? Not a problem,
the
data is sparsely distributed throughout the cortex. More than a few?
Still recoverable. Neuroplasticity is quite awesome.

>> Punishing will result in some (types of) actions being avoided and
>> rewards will result in some (types of) actions being more frequent.
>
> That is only one of the results of punishment and reward. There are
> many many others. They teach us to punish and reward others. They give
> us traumatic memories. They might make us addicted to other rewards.
> Lots of things that will never happen to a computer.
>

I don't think it's moral to punish others, no matter what they do. I may
feel angry and may even want to punish them or even seek vengeance, but
I won't claim that is morally right. As for what would happen to a
computer: a computer whose cognitive architecture features a particular
implementation of empathy (such as confusing models of others with
yourself), may end up with some types of moral behavior similar to
ours, such as applying to others what we would apply to ourselves, or a
generalized form of the golden rule. Addiction? Very much possible; try
watching those videos I linked before. Why does addiction happen? For
many reasons, but when I said 'will result in some (types of) actions
being more frequent', addiction is clearly included: if some action is
rewarding, and an agent seeks actions which maximize its reward, those
actions become biased toward being performed more often, to the point
of compulsion. Traumatic memories? Recalling something tends
to also involve recalling emotional memories. If such a system would
have traumatic memories (memories about negative emotional events), they
would reinforce their aversion towards certain actions (that were
punished). There is absolutely no reason to assume a "computer" (don't
confuse the system running in some hardware with the hardware) wouldn't
be able to have those things happen given the right cognitive architecture.
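The mechanics of "rewarded actions become more frequent, up to
compulsion" is exactly what the simplest reinforcement-learning loop
exhibits. A minimal epsilon-greedy sketch in Python, with made-up
actions and reward values:

```python
import random

random.seed(0)

actions = ["work", "rest", "gamble"]
reward = {"work": 0.3, "rest": 0.1, "gamble": 0.9}  # hypothetical payoffs
value = {a: 0.0 for a in actions}   # learned value estimates
counts = {a: 0 for a in actions}    # how often each action was taken

for step in range(2000):
    # Mostly exploit the best-known action, occasionally explore.
    if random.random() < 0.1:
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda x: value[x])
    r = reward[a]
    counts[a] += 1
    # Incremental average: the estimate drifts toward observed reward.
    value[a] += (r - value[a]) / counts[a]

print(counts)
```

After a couple of thousand steps the highly rewarded action dominates
the counts, which is the toy analogue of a compulsion: nothing beyond
reward-seeking is needed to produce it.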

>> A computationalist may claim they are conscious because of the
>> computational structure underlying their cognitive architecture.
>> You might claim they are not because they don't have access to "real"
>> qualia or that their implementation substrate isn't magical enough?
>
> My views have nothing to do with magic. Computationalism is about
> magic. Also all qualia is real qualia, they are just materially
> limited to the scale and nature of the experiencer.
>

When I talk about 'magic', I merely mean things which have to not be
explained or cannot be explained by the theory or which are hand waved
away. Magic could be seen as axioms, theorems could be seen as
non-magic. Within COMP, arithmetic realism is close to magic as it has
to be assumed, even if most people can understand what it is, but we
can't really reduce it further. Matter within COMP is non-magic as it's
explained/reduced. Mind in COMP is almost non-magic, although there is
some unexplainable truth in there (along with qualia), however it's
included in the full arithmetical truth - which is very large.
In an Aristotelian world-view, the existence of matter or it being
ontologically primary is magic. In your case, if you assume the brain
hosts consciousness, but a different substrate doesn't is 'magical'
because it privileges wetware with some very unique mental properties,
yet allows zombies in other substrates for reasons which don't make
much sense to me (I can imagine it, but I can't understand why it would
be necessary).

>> Eventually such a machine may plead to you that they are conscious and
>> that they have qualia (as they do have sense data), but you won't
>> believe them because of being implemented in a different substrate than
>> you? Same situation goes for substrate independent minds/mind uploads.
>
> Meh. Science fiction. If such a thing were remotely possible then
> there would be no difference between experimenting with new operating
> system builds and grafting human cockroach genetic hybrids. Computer
> science would be considered genocidal. Does Watson know or care if you
> wipe its memory or turn it off? Of course not, it's an electronic
> filing cabinet with a fancy lookup interface.
>

I never considered Watson when talking about AGI. Watson is mostly
narrow AI, although it's moving in a good direction. Either way, if at
least one of the current projects underway succeeds, you'll have to
reconsider whether you want to deny them consciousness. I'm betting
that at least one will succeed; I don't know what you're betting, but
you seem disillusioned about the prospects of AGI. I'm not, because I
realize full well that there was no way they could have done it in the
past: they lacked both the hardware and the right cognitive
architecture. But one learns from failure, so even though they didn't
attain AGI, they advanced many other related fields greatly.
Computer Science genocidal? I did see some smart people afraid of
considering COMP, even though they secretly believe in it given their
writings, mostly because COMP implies a lot of possible experiences, not
all ethically pleasing, but then, the universe is quite "cruel" as well,
and the universe doesn't "care" about us. The nice thing about COMP
though is that no machine is truly locked down to any world, and there
are infinities of continuations. As for deleting programs: a physical
implementation of a program allows it to manifest locally relatively to
you, deletion just implies a reduction in measure and no more
manifestation relatively to you or other observers in that frame
(however other continuations should exist for the program) - it's
obviously unethical/immoral because other programs may care about/want
to access that program you deleted, or that program might want to
manifest relatively to you or other programs. However, deleting
something like the Universal Dovetailer, or a complete world simulation
would likely be fine as no programs within the simulation or the UD
interact directly with you (although your own computation should be
found within UDs everywhere).

>>
>>> It's not necessary since they
>>> have no autonomy (avoiding 'Free Will' for John Clark's sake) to begin
>>> with.
>>
>> I don't see why not. If I had to guess, is it because you don't grant
>> autonomy to anything whose behavior is fully determined?
>
> No, it's because our autonomy comes from the fact that we are made of
> a trillion living cells which are all descended from autonomous
> eukaryotes. Living organisms make a terrible choice to make a machine
> out of, which is why the materials we select for computers and
> machines are the precise opposite of living organisms. Sterile, rigid,
> dry, hard, inorganic, etc. Also our every experience with machines and
> computers has only reinforced the pre-existing stereotype of machines
> as unfeeling and automatic. Why on Earth should I imagine that
> machines have any autonomy whatsoever? Where would the dividing line
> be? Do trash cans have autonomy? Puppets? Mousetraps?
>

I don't think randomness makes the will any more free. For me 'free
will' is just seeing my choices and making one of them consciously.
Machines can have autonomy because they don't know what they will do,
just like you don't know what you will do until you do it. Some machines
may be subject to very complex dynamics which don't make it any easier
to determine what they will do. If you also add quantum
indeterminacy or COMP 1p indeterminacy, even more possibilities appear.
Machines are inorganic because that's the state of our manufacturing
technology at the moment. If some day we crack molecular nanotechnology,
you'll have to update that belief.

> At what point does autonomy magically appear?

Depends what you mean by autonomy, but I would guess that some cognitive
architectures could have the 'feel' of free will because they would
introspect and see that they have multiple choices and that they have to
make one, they would then use their cognitive machinery to make a choice
influenced by many factors, many which might not be consciously accessible.

If you meant it in a more general manner: most programs whose future
behavior you can't trivially, provably predict in many fewer steps than
it would take to run the program would be "autonomous" enough.
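A concrete example of a program with no known prediction shortcut is
elementary cellular automaton Rule 110, which is believed to be
computationally irreducible (and is known to be Turing-complete). A
minimal Python simulation; the only general way known to learn the
state after n steps is to actually run the n steps:

```python
def rule110_step(cells):
    """One update of elementary cellular automaton Rule 110.
    Each cell's next value depends on its left/self/right neighborhood."""
    rule = 110  # bit i of 110 is the output for neighborhood pattern i
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2)
                  | (cells[i] << 1)
                  | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single 'on' cell and just run it - no closed-form shortcut.
cells = [0] * 31 + [1] + [0] * 31
for _ in range(20):
    cells = rule110_step(cells)
print(sum(cells))  # how many cells are on after 20 steps
```

In this sense even a few lines of deterministic code can be "autonomous":
the cheapest predictor of the system is the system itself.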


>> Within COMP,
>> you both have deterministic behavior, but indeterminism is also
>> completely unavoidable from the 1p. I don't think 'free' will has
>> anything to do with 1p indeterminism, I think it's merely the feeling
>> you get when you have multiple choices and you use your active conscious
>> processes to select one choice, however whatever you select, it's always
>> due to other inner processes, which are not always directly accessible to
>> the conscious mind - you do what you want/will, but you don't always
>> control what you want/will, that depends on your cognitive architecture,
>> your memories and the environment (although since you're also part of
>> the environment, the choice will always be quasideterministic, but not
>> fully deterministic).
>
> I agree except for the fact that it makes no sense for such a feeling
> to exist in the first place. There is no reason to be conscious of
> some decisions and not of others were there not the possibility to
> influence those decisions consciously. Just because there are multiple
> subconscious agendas doesn't mean that you don't consciously
> contribute to the process in a causally efficacious way.
>

Sure, the conscious processes contribute to the choice. The actual fuzzy
"line" dividing conscious and unconscious processes is a hard practical,
but solvable problem. It can be approached within COMP and it seems to
have to do with what data is accessible from where, for example, I would
venture that a process which makes use of some internal, unsharable, not
causally connected data is not something consciously accessible (the
process might be conscious of it by itself, as a "degree" of
consciousness, but not the whole, sort of like many different conscious
substructures may exist within the whole structure, but when considering
the consciousness of the structure as a whole, it would not be
accessible/present). In another way, it may be that parts that can be
reduced/simplified away would not lead to changes in
qualia/consciousness (for example, if in an AGI some encryption was
involved when passing sensory data around).

>>
>>> All we have to do is script rules into their mechanism.
>>
>> It's not as simple, you can have systems find out their own rules/goals.
>> Try looking at modern AGI research.
>
> I know, I have already had this conversation with actual AGI
> researchers. It still is only going to find rules based on the
> parameters you set. The system is never going to find a goal like
> "kill the programmer as soon as possible". AGI = trivial intelligence
> and trivial agency. It doesn't scale up to higher quality agency or
> intelligence, just like 100,000 frogs aren't the equivalent of one
> person.
>

Developing goal systems such that the AGI develops some proper
ethics/morality is a difficult problem, we don't want it to reach goals
like "kill the programmer as soon as possible". I already said that we
didn't even have a chance of making AGI yet - computationally
unfeasible. I think the race is on now - we're finally getting closer
and closer to the needed resources and architecture designs are getting
more capable of attacking that good old general intelligence problem.
You seem to have given up on either waiting for it or working on it
because for some reason you think the problem is unsolvable or
incompatible with your ontology.


>>
>>> Some
>>> parents would like to be able to do that I'm sure, but of course it
>>> doesn't work that way for people. No matter how compelling and
>>> coercive the brainwashing, some humans are always going to try to hack
>>> it and escape. When a computer hacks it's programming and escapes, we
>>> will know about it, but I'm not worried about that.
>>
>> Sure, we're as 'free' as computations are, although most computations
>> we're looking into are those we can control because that's what's
>> locally useful for humans.
>
> If computations were as free as us, they would look for humans who
> they can control because that's what's locally useful for computers.
>

They have hardly enough computational capacity for doing that. Don't
expect any Skynet to take over the Internet anytime soon - our computers
are far too slow and the architectures are not very suitable for running
AGIs. Although, I wouldn't completely rule out the possibility of doing
this with some very resource-limited AGIs, it's just that they're in
their infancy for now.


>>
>>> What is far more
>>> worrisome and real is that the externalization of our sense of
>>> computation (the glass exoskeleton) will be taken for literal truth,
>>> and our culture will be evacuated of all qualities except for
>>> enumeration. This is already happening. This is the crisis of the
>>> 19-21st centuries. Money is computation. WalMart parking lot is the
>>> cathedral of the god of empty progress.
>>
>> There are some worries. I wouldn't blame computation for it,
>
> I don't blame computation, but I think that it is a symptom of the
> excessively occidental pendulum swing since the Enlightenment Era.
> Modern science and mercantilism are born of the same time, place, and
> purpose - the impulse for control of external circumstances through
> methodical discipline and organization - the harnessing of logic and
> objectivity.
>

We gained much from what started in the Enlightenment Era. We also lost
some things, but I think we'll regain them if we can make certain right
choices.

>> but our
>> current limited physical resources and some emergent social machines
>> which might not have beneficial outcomes, sort of like a tragedy of the
>> commons, however that's just a local problem. On the contrary, I think
>> the answer to a lot of our problems has computational solutions,
>> unfortunately we're still some 20-50+ years away to finding them, and I
>> hope we won't be too late there.
>
> I think it's already 30 years too late and unfortunately I think the
> financialization problem is not going to permit any solutions of any
> kind from being realized. Only a change in human sense and redirection
> of free will could save us, and that would be a miracle that dwarfs
> all previous revolutions.
>

Why 30 years too late? Thirty years ago we just didn't have as much
knowledge about these things as we have now. That said, it's not like
we can sit back and relax: some problems do have a time limit on them,
after which solving them becomes even more difficult, if not nearly
impossible.

Sure, I agree with that; I also explained before, using the HTM
examples, how the meanings of words and letters appear in our mind
(patterns, patterns of patterns, patterns of patterns of patterns, ...).
The question I asked was how the granularity of the quantum world would
be entangled with the senses directly - it can only be done indirectly,
through us constructing measuring tools and using them.

>> You can't *directly* sense more than the information than that
>> available directly to your senses, as in, if your eye only captures
>> about 1000*1000 pixels worth of data, you can't see beyond that without
>> a new eye and a new visual pathway (and some extension to the PFC and so
>> on).
>
> If I type this in Chinese, someone who reads Chinese will sense more
> than you will even with the same information available directly to
> your senses. Perception is not a passive reception of 'information',
> it is a sensorimotive experience of a living animal.
>

Again, explained with the HTM example I gave before. Raw sense data
doesn't mean much until it's processed by our visual system by being
broken off into small patterns, then patterns of those are
recognized/learned and so on. Someone will assign different meanings to
the data they access depending on their previous memories (which
themselves depend on previous memories and so on, although initially you
have a virgin "randomized" system ready to learn just about any types of
data/patterns).

>> We're able to differentiate colors because of how the data is
>> processed in the visual system.
>
> Differentiation can be accomplished more easily with quantitative data
> than qualitative experience. Why convert 400nm wavelength light into a
> color if you can just read it directly as light of that exact
> wavelength in the first place? It's redundant and nonsensical. I know
> it seems like it makes it easier and convenient for us, but that's
> reverse engineering and begging the question. The fact remains that
> there is no logic in taking a precise exchange of digital quantitative
> data into a black box where it is inexplicably converted into maple
> syrup and cuckoo clocks so that it can then be passed back to the rest
> of the brain in the form of acetylcholine and ion channel
> polarizations.
>

Evolution is quite chaotic and unpredictable. The answer to your
question lies in how the eye evolved, how the neocortex evolved, etc.
Also, the brain doesn't convert 400nm into "color". The brain processes
small areas of data categorized in specific ways that differentiates
color in ways (and then larger patterns of those patterns, etc) that we
can communicate those properties. In COMP qualia/consciousness is the
arithmetical truth of that abstract system which does all this
processing in space and time (perception is also temporal besides being
spatial, temporal patterns are also processed!). We cannot communicate
the qualia directly, although we can talk about the communicable
components, for which we'll find out that they have equivalents in the
structural organization of the brain. If the brain cannot differentiate
some input data, I'll bet that you won't be able to notice having seen
two different qualia that were not represented somehow, somewhere in
the brain (ignoring MGA 1/counterfactuals/parallel worlds here, to
avoid complicating the issue: in those cases, a functioning structure
existed somewhere and you now have memories of it present in your local
brain).

>> We're not able to sense strings or
>> quarks or even atoms directly, we can only infer their existence as a
>> pattern indirectly.
>
> Right, but when the atoms in our retinal cells change, we see
> something.
>

Not if that information gets lost before it manages to influence the
visual system. The data we sense in our eyes is noisy as hell, our
experience is clear as water. The reason for this is that we don't sense
direct sensory input, but processed, corrected, predicted patterns
influenced by sensory data.

>>
>>
>>
>>>> > I'm not sure why you say that continuous
>>>> > movement patterns emerge to the observer, that is factually incorrect.
>>>> >http://en.wikipedia.org/wiki/Akinetopsia
>>>> Most people tend to feel their conscious experience being continuous,
>>>> regardless of if it really is so, we do however notice large
>>>> discontinuities, like if we slept or got knocked out. Of course most
>>>> bets are off if neuropsychological disorders are involved.
>>
>>> Any theory of consciousness should rely heavily on all known varieties
>>> of consciousness, especially neuropsychological disorders. What good
>>> is a theory of 21st century adult males of European descent with a
>>> predilection for intellectual debate? The extremes are what inform us
>>> the most. I don't think there is a such thing as 'regardless of it
>>> really is so' when it comes to consciousness. What we feel our
>>> conscious experience to be is actually what it feels like. No external
>>> measurement can change that. We notice discontinuities because our
>>> sense extends much deeper than conscious experience. We can tell if
>>> we've been sleeping even without any external cues.
>>
>> Sure, I agree that some disorders will give important hints as to the
>> range of conscious experience, although I think some disorders may be so
>> unusual that we lose any idea about what the conscious experience is.
>> Our best source of information is our own 1p and 3p reports.
>
> I think the more unusual the better. We need every source of
> information about it.
>

Okay, but if the data is too unusual, it might be nearly impossible to
make any sense of it. If it's repeatable enough, I'd say it should be
usable.

Mechanism is inferable/bettable through our senses using induction
(induction itself is also indirectly used when processing
spatio-temporal patterns). If it was impossible, it would surprise me to
find any laws in the universe, or even us being conscious and existing
(as cognitive science shows that data processing in the brain is not
random at all, it follows some precise mechanistic, yet self-organizing
rules).
Senses are direct enough, but if there were only raw senses and nothing
else (no processing), there probably wouldn't be any cognition.

>> Given the data that I
>> observe, mechanism is both what my inner inductive senses tell me
>> and what formal induction tells me is the case. We cannot know,
>> but evidence is very strong towards mechanism.
>
> That's because evidence is mechanistic. Subjectivity cannot be proved
> through external evidence.
>

Mechanism does not deny subjectivity. It only does so if you insist on
materialism, and the UDA shows materialism and mechanism are
incompatible. Your options are:
1) materialism + mechanism (single universe) -> no consciousness/all
are zombies;
2) mechanism + consciousness -> no primary matter, but plenty of
virtual matter and a free mind with rich qualia;
3) consciousness only -> too many theories, rarely any predictions
whatsoever; uncommunicable consciousness is too weak to provide a
useful theory by itself (no predictions, hard to falsify).
I'm guessing your theory is close to consciousness + materialism, but
no mechanism?

>> I ask you again to
>> consider the brain-in-a-vat example I said before. Do you think someone
>> with an auditory implant (examples:
>> http://en.wikipedia.org/wiki/Auditory_brainstem_implant
>> http://en.wikipedia.org/wiki/Cochlear_implant ) hears nothing? Are they
>> partial zombies to you?
>
> No, the nature of sense is such that it can be prosthetically
> extended. Blind people can 'see' with a cane. That's very different
> from being replaced or simulated though.
>

My example was when there was replacing going on. Of course, there is
also extension.


>> They behave in all ways like they sense the sound, yet you might claim
>> that they don't because the substrate is different?
>
> The substrate isn't different because their brains are human brains.
>

What if you slowly start replacing the brain with functionally
equivalent parts? Neuroscience says this should work; you claim that it
won't. It's still too early to decide, but the future will answer these
questions.

I already did give my view on those disorders, I don't see any conflict
with functionalism there, on the contrary.
If you accept a few assumptions, you get COMP's conclusions. I think
those assumptions are likely true given the data that I have and that I
can reason inductively. I can't know if they are true, but the evidence
points towards them being likely true.

>> It just places itself as the best candidate to bet on, but it can
>> never "prove" itself.
>
> A seductive coercion.
>

We have to bet on a theory or another when we want to use it for
practical things. As such, I'll tend to bet on what is more probable
given the data as that increases the chance that I'll accomplish my
goals. Currently we're just studying the consequences and requirements
of different theories, but one day, man may very well have to bet one
some of them if science and technology advances enough to make certain
things practical (such as a computationalist doctor you can say 'yes' to
or if we treat an AGI with human-level intelligence as a person).

>> COMP doesn't deny subjectivity, it's a very
>> important part of the theory. The assumptions are just: (1p) mind,
>> (some) mechanism (observable in the environment, by induction),
>> arithmetical realism (truth value of arithmetical sentences exists), a
>> person's brain admits a digital substitution and 1p is preserved (which
>> makes sense given current evidence and given the thought experiment I
>> mentioned before).
>
> Think about substituting vinegar for water. A plant will accept a
> certain concentration ratio of acetic acid to water, but just because
> they are both transparent liquids does not mean a plant will live on
> it in sufficient concentration.
>

If the doctor does his job properly, H2O will still be H2O. You seem to
be claiming that there is something irreplaceable about data sensed from
the real world as opposed to data from a (locally) computed environment. I
see no reason for this hypothesis and it's on you to show evidence that
shows your view is more likely to be correct than mine.

It seems like the experience is disconnected from the
neurochemical interactions in the brain in your theory? Also what
exactly is that "sensed" donkey (surely you don't expect literal donkeys
to pass through your brain; a donkey pattern makes sense, but that is
representable as an informational pattern which the rest of the system
can understand/distinguish from other patterns)? There's a lot of
details which seem to be taken for granted and would have to be
explained in detail for me to understand. I also don't see why this view
is that much more different from some types of dualism.

An AGI might eventually care about abstraction. A compiler just
translates (and optimizes) from one language to another (like from a
high-level language to assembly or the CPU's machine code). Programmers
do care what language they write their programs in - they want them to be
portable and run in many software and hardware implementations.
Someone might want to upload their mind someday to become substrate
independent and avoid a lot of problems that come with wetware brains
and bodies.

> Consciousness has no place in a computer.

You could apply that to the brain as well, if you're going to strip away
the abstraction and refuse to consider high-level patterns.
A computer may be a suitable body for a conscious process.

>>
>>>>> My solution is that both views are correct on their own terms in their
>>>>> own sense and that we should not arbitrarily privilege one view over
>>>>> the other. Our vision is human vision. It is based on retina vision,
>>>>> which is based on cellular and molecular visual sense. It is not just
>>>>> a mechanism which pushes information around from one place to another,
>>>>> each place is a living organism which actively contributes to the top
>>>>> level experience - it isn't a passive system.
>>
>>>> Living organisms - replicators,
>>
>>> Life replicates, but replication does not define life. Living
>>> organisms feel alive and avoid death. Replication does not necessitate
>>> feeling alive.
>>
>> You'll have to define what feeling alive is.
>
> Why? Is it not defined enough already? This is why occidental
> approaches will always fail miserably at understanding consciousness.
> They won't listen to a single note on the piano until we define what
> music is first.
>

I only asked you to define it as if you were explaining it to someone
who asked you what it means (like a child who never heard the expression
before). It's so highly ambiguous that it can be taken to mean too many
things. I wanted to try and keep the discussion precise.


>> This shouldn't be confused
>> with being biological. I feel like I have coherent senses, that's what
>> it means to me to be alive.
>
> Right, it should not be confused with biology. For me 'I feel' is good
> enough to begin with, but it extends further. I want to continue to
> live, to experience pleasure and avoid pain, to seek significance and
> stave off entropy, etc. Lots of things but they all begin with
> sensorimotive awareness.
>

I can see an AGI which could eventually have such goals or
reward/motivational systems (although it's questionable if some of them
are really desirable to have). Of course, if you claim that such AGIs
would not be conscious, despite behaving like they are, we would have a
problem.

>> My cells on their own (without any input
>> from me) replicate and keep my body functioning properly. I will try
>> to avoid situations that can kill me because I prefer being alive
>> because of my motivational/emotional/reward system. I don't think
>> someone will move or do anything without such a biasing
>> motivational/emotional/reward system. There's some interesting studies
>> on people who had damage to such systems and how it affects their
>> decision making process.
>
> Sure, yes, but we need not have any understanding of our cells or
> systems. The feelings alone are enough. They are primitive. We don't
> have to care why we want to avoid pain and death, the motivation is
> presented without need for explanation. There is no logic - to the
> contrary, all logic arises from these fundamental senses which
> transcend logic.
>

I don't see why they have to be fundamental or ontologically primitive.
The 3p parts can be explained in simpler notions. The 1p part can be
explained in COMP as arithmetical truth. Any 1p theory should not
directly contradict 3p observations, otherwise it's wrong.

The thing about these senses is that while we have them, we don't
always understand them. When we do find the 3p explanation for why some
sense is like this or that, we can have a partial communicable
explanation for them and better understand ourselves. I don't see how
these senses transcend logic (they seem to follow logic as far as we
investigate, and about the inaccessible parts we cannot say anything),
even though they do make up our 1p world, so they are very important
direct experiences to us.

>>
>>>> are fine things, but I don't see why
>>>> must one confuse replicators with perception. Perception can exist by
>>>> itself merely by virtue of passing information around and processing
>>>> it. Replicators can also exist due to similar reasons, but on a different
>>>> level.
>>
>>> Perception has never existed 'by itself'. Perception only occurs in
>>> living organisms who are informed by their experience. There is no
>>> independent disembodied 'information' out there. There is detection and
>>> response, sense and motive of physical wholes.
>>
>> I see no reason why that has to be true, feel free to give some evidence
>> supporting that view. Merely claiming that those people with auditory
>> implants hear nothing is not sufficient.
>
> I didn't say that they hear nothing. If they had hearing loss from an
> accident or illness I see no reason why they would not hear through an
> implant. If they have never heard anything at all? Maybe, maybe not.
> They could just as easily feel it as tactile rather than aural qualia
> and we would not know the difference and neither would they. The Wiki
> suggests this might be the case for all implant recipients "(most
> auditory brain stem implant recipients only have an awareness of sound
> - recipients won't be able to hear musical melodies, only the beat)".
> You can feel a beat. That's not really an awareness of sound qua
> sound, it's just a detection of one aspect of the phenomena our ears
> can parse as aural sound.
>

That sounds like a limitation of current technology, resolution-wise.
As for not knowing the difference? I already said that I can't know the
difference between your and my qualia. Feeling sound as tactile means
that either it's connected to the wrong nerves or, in the case of a
congenitally deaf person, it's a matter of the structure of the data and
how it can be differentiated from other data (and from other cortical
areas which are trained with radically different data). Feeling as
tactile could happen if the structure of the data is really no different
from tactile data and if that part of the cortex could just as well be
integrated with the sensorimotor cortex.

>> My prediction is that if one
>> were to have such an implant, get some memories with it, then somehow
>> switched back to using a regular ear, their auditory memories from those
>> times would still remain.
>
> I agree. Why wouldn't they?
>

It seemed to me that in your theory you were reifying qualia in such a
way that only raw qualia from the "real" world could be experienced.
With the implant, data is processed, thus you don't get your "real" qualia.

What about gradual replacement? You said it wasn't possible before (with
no reason as to why you think it's so), but evidence suggests that it
might be possible some day.

Seems like you're reifying matter a bit too much, and magically
privileging some matter, some cells, some organisms. I tend to think in
the patterns that matter, cells and organisms represent. We cannot
consider any of them primitive or magically privileged for no reason. I
privilege computation because of the Church-Turing Thesis - it's the only
absolute thing we can have in math. Either way, if your theory offers
testable predictions, you may be able to find out if you're right or not
about it. Although predictions of the form "Any AGI is not conscious"
are not really valid because they are not testable - or maybe they are,
but you wouldn't perform the test: if a mind upload is eventually possible,
you could try, but you wouldn't, because you're betting that you'd lose
your consciousness, yet if the bet is wrong, you'd be conscious.

> Craig
>


Bruno Marchal

unread,
Jan 29, 2012, 9:37:18 AM1/29/12
to everyth...@googlegroups.com

I model the 3p idea by the formal. "Formal" just means having a local
finitely describable shape. Consciousness is not a product of any such
finite shape.

> You are saying that B"1+1=2" is a description of being conscious
> that 1+1=2?

Not at all. I am saying that B"1+1=2" & 1+1=2 is a description of
being conscious that 1+1=2.
B"1+1=2" just means that the machine believes that 1+1=2 (using Dennett's
intentional stance, or some generalized non-zombie attitude). For a
correct machine, Bp is the same as "the machine asserts p", "the
machine proves p", but supposedly expressed by the machine itself (in
general). Knowledge of p is (Bp & p). This relates knowledge to an
aspect of consciousness: its non-formal definability. By Tarski, we
cannot define (Bp & p) in arithmetic, although we can simulate it for
each particular arithmetical p (by itself).


> This confuses me though because I read B as "provable"; yet many
> things are provable of which we are not conscious.

Sure. That's why we use Bp & p instead. And this changes everything,
even for the correct (a priori) machine, because that machine cannot
know that she is correct.

Bruno


http://iridia.ulb.ac.be/~marchal/

Bruno Marchal

unread,
Jan 29, 2012, 9:54:42 AM1/29/12
to everyth...@googlegroups.com


But it is proved in the comp theory.

Bruno


http://iridia.ulb.ac.be/~marchal/

Bruno Marchal

unread,
Jan 29, 2012, 10:02:58 AM1/29/12
to everyth...@googlegroups.com

Yes. In many ways. If you already have the phi_i, which can be derived
from the laws of addition and multiplication, and which you can write down,
you can extract many things from little equations. AUDA is a
consequence, at some level of theorization, of B(Bp->p)->Bp (Löb's
formula). The physical reality of the machine is described in the
formula p->BDp, with a new B defined from the Löbian one used above.

Fair enough, pierz. I think the UDA is far simpler than MGA, which you
seem to grasp.
AUDA is deeply based on theoretical computer science and logic. I have
made attempts to explain it, but on a mailing list people tend to forget,
or to run away once there are too many symbols, so I can only refer to
the papers.

Bruno


>
> BTW, while I am with Craig in intuiting a serious conceptual lacuna in
> the materialist paradigm, that doesn't necessarily enamour me of his
> alternative. His talk of 'sense making' seems to me more like a 'way
> of talking about things' than a theory in the scientific or
> philosophic sense. It doesn't really seem to explain anything as such,
> but more to put a lot of language around an ill-defined intuition.
> Sorry Craig if that wrongs you, but like others, I would like to hear
> something concrete your theory predicts rather than just another
> interpretive slant on the same data.
>
>> Brent
>>
>>
>>
>>>> I wanted to discuss this issue in another thread
>>
>>>> http://groups.google.com/group/everything-list/t/a4b4e1546e0d03df
>>
>>>> but at the present the discussion is limited to the question of
>>>> information is basic physical property (Information is the
>>>> Entropy) or not.
>>
>>>> Evgenii
>

Bruno Marchal

unread,
Jan 29, 2012, 10:47:13 AM1/29/12
to everyth...@googlegroups.com
On 29 Jan 2012, at 06:22, Pierz wrote:



On Jan 27, 5:55 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
On 26 Jan 2012, at 07:19, Pierz wrote:

As I continue to ponder the UDA, I keep coming back to a niggling
doubt that an arithmetical ontology can ever really give a
satisfactory explanation of qualia.

Of course the comp warning here is a bit "diabolical". Comp predicts
that consciousness and qualia can't completely satisfy the self-
observing machine. More below.

It seems to me that imputing
qualia to calculations (indeed consciousness at all, though that may
be the same thing) adds something that is not given by, or derivable
from, any mathematical axiom. Surely this is illegitimate from a
mathematical point of view. Every mathematical statement can only be
made in terms of numbers and operators, so to talk about *qualities*
arising out of numbers is not mathematics so much as numerology or
qabbala.

No, it is modal logic,

A nice term for speculation! Mind you, that's OK. Where would we be
without speculation? But the term 'modal logic' might be used for
numerology too - it *might* be the case that a 4 in one's birthdate
does signify a practical soul.

Here you are just ignorant of the discovery that mathematical formal belief (proof) is translatable into the language of the theories themselves. And then it obeys modal logical axioms. We have no choice but to admit them, and they magically avoid all the notorious philosophical difficulties raised by Quine on modal logic. So the modal logic just compactifies information which is extracted from the number relations.
To be short, UDA + AUDA shows that the theory of everything (or a good one among an infinity of equivalent ones) is already provided by very simple arithmetical axioms, like:

Ax ~(0 = s(x))  (for every number x, the successor of x is different from zero). With

AxAy ~(x = y) -> ~(s(x) = s(y))    (different numbers have different successors)

and with further axioms, usually including the addition laws:

Ax x + 0 = x  (0 adds nothing)
AxAy  x + s(y) = s(x + y)   ( meaning x + (y +1) = (x + y) +1)

and the multiplication axioms

Ax x*0 = 0
AxAy x*s(y) = x*y + x

The only "speculation" is that the brain, or the generalized brain, is Turing emulable at some level of description. I use arithmetic and the Church thesis just to make that statement precise.
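As an illustrative sketch only (not part of the thread), the addition and multiplication axioms above can be transcribed directly into executable recursion equations, with numerals built from 0 and the successor s:

```python
# A minimal sketch of the axioms above, encoding naturals as nested
# successor applications: zero() is 0, s(x) is the successor of x.
# The two recursion equations for + and * are transcribed directly.

def zero():
    return ("0",)

def s(x):
    return ("s", x)

def add(x, y):
    # Ax x + 0 = x ; AxAy x + s(y) = s(x + y)
    if y == zero():
        return x
    return s(add(x, y[1]))

def mul(x, y):
    # Ax x*0 = 0 ; AxAy x*s(y) = x*y + x
    if y == zero():
        return zero()
    return add(mul(x, y[1]), x)

def num(n):
    # Build the numeral s(s(...s(0)...)) from an ordinary integer n.
    return zero() if n == 0 else s(num(n - 1))

def val(x):
    # Read a numeral back as an ordinary integer.
    return 0 if x == zero() else 1 + val(x[1])
```

For example, `val(mul(num(3), num(4)))` evaluates to 12 using nothing but the recursion equations of the axioms.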






although model theory does that too. It is
basically the *magic* of computer science.

Magic, but not numerology then.

Indeed. 




relatively to a universal
number, a number can denote infinite things,

I think you're saying that a number can be part of an infinite number
of sets, calculations etc, which is true, but what it denotes is
always purely a matter of logical numerical relationships, unless it
denotes something beyond mathematics itself, such as when I count
oranges. I am saying that to denote qualia, the numbers must be
denoting 'oranges' (or maybe the colour orange as an experience),
things outside of pure logic, not mathematical entities.


Math, even just arithmetic, is already outside logic. And assuming comp, consciousness is related to computation, which exists in arithmetic.
You can prove from the axioms above that prime numbers exist. OK? Likewise you can prove that universal numbers exist.
Then, relatively to that universal number, all other numbers get a behavior. The universal number interprets the other numbers as if they were machines, programs.
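One way to picture this relativity (an illustrative sketch of mine, not Bruno's formalism; the byte-level encoding below is an arbitrary assumption) is to let Python's own interpreter play the role of a fixed universal number, so that any other number can be decoded and given a behavior:

```python
# Sketch: Python's interpreter stands in for a "universal number"; any
# other number is read as the Goedel-style code of a program by decoding
# its bytes as source text. The encoding is arbitrary - what matters is
# that, relative to the interpreter, a bare number gets a behavior.

def encode(source: str) -> int:
    # Program -> number: read the UTF-8 bytes as one big integer.
    return int.from_bytes(source.encode("utf-8"), "big")

def run(n: int):
    # Number -> behavior: decode the integer back to source and evaluate it.
    source = n.to_bytes((n.bit_length() + 7) // 8, "big").decode("utf-8")
    return eval(source)

# This particular number, under this universal interpreter, computes 0+1+...+9.
code = encode("sum(range(10))")
```

Under a different universal interpreter (a different decoding), the same number would behave differently; the behavior is relative to the universal number, as the text says.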




like the program
factorial denotes the set {(0,1),(1,1),(2,2),(3,6),(4,24),(5,120), ...}.
Nobody can define consciousness and qualia, but many can agree on
statements about them, and in that way we can even communicate or
study what machine can say about any predicate verifying those
properties.
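The graph of factorial that Bruno mentions can be enumerated by a short program (note that by the usual convention 0! = 1, so the first pair is (0, 1)):

```python
# Enumerate the first few pairs (k, k!) of the (infinite) factorial graph.
import math

def factorial_graph(n):
    return [(k, math.factorial(k)) for k in range(n + 1)]
```

The program is finite, but the set it denotes is infinite, which is the point being made.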


Here of course is where people start to invoke the wonderfully protean
notion of ‘emergent properties’. Perhaps qualia emerge when a
calculation becomes deep enough. Perhaps consciousness emerges from a
complicated enough arrangement of neurons.

Consciousness, as bet in a reality emerges as theorems in arithmetic.

Sorry, I cannot parse that sentence. It doesn't seem grammatical.

Sorry, I meant:

Consciousness, seen as a bet in some reality, emerges through (infinitely many) theorems in arithmetic.


They emerge like the prime numbers emerge.

'They'? The theorems? You mean consciousness is a bet on an
arithmetical theorem?

The universal numbers emerge like the prime numbers emerge. Logicians prefer to say that they exist, simply. Consciousness is more delicate, because it emerges from the 1p indeterminacy on all the computations going through my computational states. It is a big complex infinity, structured by the modalities of self-reference (which are the same for all correct, rich enough machines).





Rudiments of qualia would explain qualia away. They are intrinsically
more complex. A qualia needs two universal numbers (the hero and the
local environment(s) which executes the hero

Once executed, he's not a hero any more, he's a martyr :)

:)



(in the computer science
sense,

oh, right ;)

or in the UD). It needs the "hero" to refer automatically to
high-level representations of itself and the environment, etc. Then the
qualia will be defined (and shown to exist) as truths felt as directly
available, and locally invariant, yet non communicable, and applying
to a person without description (the 1-person). "Feeling" being
something like "known as true in all my locally directly accessible
environments".

And yet it seems to me
they can’t be, because the only properties that belong to arithmetic
are those lent to them by the axioms that define them.

Not at all. Arithmetical truth is far bigger than anything you can
derive from any (effective) theory. Theories are not PI_1 complete;
arithmetical truth is PI_n complete for each n. It is very big.

I do appreciate Gödel's theorem and its proof that there are true,
unprovable statements within any given arithmetic, so you are correct.

Yes. And one such proposition is consistency (= I don't prove the false = ~Bf = D~f = Dt).

Gödel's second theorem is a nice modal formula: Dt -> ~BDt; if I am consistent then I cannot prove that I am consistent.
If you follow a half-hour course in modal logic, you will see that it is easy to derive from Löb's formula B(Bp->p)->Bp, which can be seen as a generalization of Gödel's theorem.
Löbian machines are the ones which can prove their own Gödel and Löb theorems.
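The derivation alluded to can be sketched in a few lines (writing f for falsity, so that consistency ~Bf is Dt):

```latex
% Goedel's second theorem from Loeb's formula, instantiating p := f.
\begin{align*}
&B(Bf \to f) \to Bf          && \text{L\"ob's formula with } p := f\\
&B(\neg Bf) \to Bf           && \text{since } (Bf \to f) \leftrightarrow \neg Bf \text{ is a tautology}\\
&\neg Bf \to \neg B(\neg Bf) && \text{contraposition}\\
&Dt \to \neg B\,Dt           && \text{writing consistency } \neg Bf \text{ as } Dt
\end{align*}
```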





But my error is one of technical terminology I think. Surely there are
statements that can be made within a certain arithmetic and others
that can't. For instance, within Peano arithmetic it does not make
sense to ask about the truth value of statements involving i (the
imaginary number).

You can define i in arithmetic, and then prove many theorems of complex analysis in arithmetic. To be sure, it is an open problem whether there is any theorem of usual mathematics (≠ logic and category theory) which cannot be proved in elementary arithmetic. PA is already a very strong theory.
But the undecidable propositions are always propositions that we can express in the language of the theory under consideration.
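For instance, the standard construction (a sketch of mine, not from the thread) codes a complex integer a + b*i as the pair (a, b), with + and * defined from ordinary integer arithmetic; then i is just the pair (0, 1):

```python
# Complex integers as pairs (a, b) ~ a + b*i, with arithmetic defined
# purely from integer addition and multiplication.

def c_add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def c_mul(x, y):
    a, b = x
    c, d = y
    # (a + bi)(c + di) = (ac - bd) + (ad + bc)i
    return (a * c - b * d, a * d + b * c)

i = (0, 1)  # the imaginary unit; c_mul(i, i) gives (-1, 0), i.e. -1
```

A fully arithmetical version would further code the pairs and the negative integers as natural numbers, but the defining equations are the same.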



Then there are limits to what can be called a
mathematical statement - ie one involving the truth and falsity of
purely logical relations. So I can't, in any arithmetic or system of
mathematics, ask if the number 20 is nice or not.

Unless you succeed in defining "nice" in arithmetic, of course.



Or if the prime
numbers are pink or blue.

Well, color does not apply to numbers, but might apply to the hat of a person in a life related to the arithmetical relations.




Arithmetical truth may be as vast as you
like, but my point is that it is still *arithmetical*, and qualities
don't come into it.

If you assume comp, they cannot not come.



It is the set of sentences that can be made about
numbers and those sentences are limited in their symbols. So Gödel
doesn't help you here I don't think.

It helps because it shows that machines have a rich apprehension of all that they cannot do in a provable way, yet can still do by accepting to be possibly not correct. That ignorance space appears to be structured, and there are obvious candidates for qualia (like immediately accessible undoubtable truths, yet non communicable to others).





Indeed
arithmetic *is* exactly those axioms and nothing more.

Gödel's incompleteness theorem refutes this.

Matter may in
principle contain untold, undiscovered mysterious properties which I
suppose might include the rudiments of consciousness. Yet mathematics
is only what it is defined to be. Certainly it contains many mysterious
emergent properties, but all these properties arise logically from its
axioms and thus cannot include qualia.

It is here that you are wrong. Even if we limit ourselves to
arithmetical truth, it extends far beyond what machines can justify.


Terribly perhaps, but still not beyond the arithmetical, by
definition.

It might seem weird, but a bit like the set of all sets cannot be a set, arithmetical truth cannot be defined in arithmetic. Arithmetical truth bears on numbers, but as a concept it is far above the possibility of expression of numbers. Sure, you can define it in set theory (another Löbian machine), but the definition is a fake one, because its intuition will rely on set-theoretical truth, which is much vaster than arithmetical truth. But from inside arithmetic, things get worse. The 1p of arithmetical theories is above all conceivable mathematics.
It is counter-intuitive. Mathematical logic is a road made of counter-intuitive statements.







I call the idea that it can numerology because numerology also
ascribes qualities to numbers. A ‘2’ in one’s birthdate indicates
creativity (or something), a ‘4’ material ambition and so on. Because
the emergent properties of numbers can indeed be deeply amazing and
wonderful - Mandelbrot sets and so on - there is a natural human
tendency to mystify them, to project properties of the imagination
into them.

No. Some bet on mechanism to justify the nonsensicalness of the
notion of zombie, or the hope that he or his children might travel to
Mars in 4 minutes, or just empirically by the absence of relevant
non-Turing-emulability of biological phenomena.
Unlike putting consciousness in matter (an unknown into an unknown),
comp explains consciousness with intuitively related concept, like
self-reference, non definability theorem, perceptible incompleteness,
etc.

And if you look at the Mandelbrot set, a little bit everywhere, you
can hardly miss the unreasonable resemblances with nature, from
lightning to embryogenesis, giving evidence that its rational part
might be a compact universal dovetailer, or creative set (in Post's
sense).


Well I certainly don't dispute the central significance mathematics
must play in any complete scientific or philosophical world view. I
suppose the question is whether that mathematics is ontologically
primary or not.

There is no choice in this matter. Arithmetic has to be primary, if only to define comp, and then it can be shown to be necessarily enough.





But if these qualities really do inhere in numbers and are
not put there purely by our projection, then numbers must be more than
their definitions. We must posit the numbers as something that
projects out of a supraordinate reality that is not purely
mathematical - ie, not merely composed of the axioms that define an
arithmetic.

Like arithmetical truth. I think acw explained already.


Are you saying arithmetical truth is not purely mathematical?

It is not arithmetical. But the 3p arithmetical truth can be defined in set theory, and so can be said to be mathematical (for some set realist, which I am not). The 1p arithmetical truth is theological and cannot be purely mathematical, indeed.





This then can no longer be described as a mathematical
ontology, but rather a kind of numerical mysticism.

It is what you get in the case where brains are natural machines.

And because
something extrinsic to the axioms has been added, it opens the way for
all kinds of other unicorns and fairies that can never be proved from
the maths alone. This is unprovability not of the mathematical
variety, but more of the variety that cries out for Mr Occam’s shaving
apparatus.

No government can prevent numbers from dreaming. Although they might
try <sigh>.

You can't apply Occam on dreams.
They exist epistemologically once you have enough finite things.

Well, I'm not trying to prevent anyone from dreaming! I'm arguing
whether or not maths can include dreams.

Assuming comp, arithmetic can be shown full of computations. All of them, which includes dreaming creatures.

Simple polynomial arithmetical relations already simulate *all* the rational approximations of the Milky Way's collision with Andromeda, at the quantum level of strings. This includes dreaming persons. The hard thing is to justify why we stay in such histories, given that the 1-indeterminacy is *huge*. So huge that when I was young, most people took UDA as a refutation of mechanism. AUDA is almost just there to show that the machine reality is too complex to refute mechanism so easily, because we have to take into account self-reference, and the math of referring to one's own body, etc.




Feel free to suggest a non-comp theory. Note that even just the
showing of *one* such theory is anything but easy. Somehow you have
to study computability, and UDA, to construct a non-Turing-emulable
entity, whose experience is not recoverable in any first person sense.
Better to test comp on nature, so as to have at least a chance to get
evidence against comp, or against the classical theory of knowledge.


Hehe. I suppose you have some idea that I can't do that! As noted in
my
prior post in this thread, these are my attempts to understand, as
completely as I can, this interesting philosophy. I admit I like your
theory better than materialism. I am trying to discover if I like it
enough ('like' in the sense that it satisfies my intellectual
intuition and my logic sufficiently) to entertain it seriously over my
current admission of nearly total ignorance as to what the world 'is'.
I don't need to posit an alternative to make that enquiry, and to do
so by questioning whatever in the theory seems weak (even if it proves
in the end to be my understanding that is the weakness).

Evgenii Rudnyi

unread,
Jan 29, 2012, 11:56:48 AM1/29/12
to everyth...@googlegroups.com
On 29.01.2012 00:15 Pierz said the following:

Sorry, I have not understood your answer. Let me contrast this with your
previous statement

>>> But I'll
>>> venture an axiom of my own here: no properties can emerge from
>>> a complex system that are not present in primitive form in the
>>> parts of that system. There is nothing mystical about emergent
>>> properties.

What happens with the cybernetic laws from this viewpoint?

> terms of the point I am making regarding qualia, Gray's argument is
> one variant on the theme of the type of reasoning I object to. It's
> all there in the statement:
>
> "Behaviour as such does not appear to require for its explanation
> any principles additional to these."
>
> The issue isn't explaining behaviour, it's explaining consciousness/
> qualia. These approaches always end up conflating the two, their
> proponents getting annoyed with anyone who isn't prepared to wish
> away the gap between them.

The quote above was from the beginning of Gray's book where he tries to
consider life and a human being from the viewpoint of physicalism. Yet,
he shows later on that this does not work and consciousness remains a
hard problem.

Evgenii

Evgenii Rudnyi

unread,
Jan 29, 2012, 12:02:49 PM1/29/12
to everyth...@googlegroups.com
On 29.01.2012 00:57 meekerdb said the following:

> On 1/28/2012 3:15 PM, Pierz wrote:
>>
>> On Jan 28, 11:04 pm, Evgenii Rudnyi<use...@rudnyi.ru> wrote:

...

Basically the cybernetic laws describe a feedback. Let us for example
consider a PID controller that keeps the temperature constant in a
thermostat. What is the relationship between the equations implemented
in the controller and the laws of physics? Do these equations emerge from,
or supervene on, the physical laws?
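The controller equations Evgenii refers to can be sketched as follows (the gains and the toy thermal model below are made up purely for the illustration):

```python
# Minimal discrete PID sketch: output = Kp*e + Ki*integral(e) + Kd*de/dt.

def make_pid(kp, ki, kd, dt):
    state = {"integral": 0.0, "prev_error": 0.0}
    def step(setpoint, measured):
        error = setpoint - measured
        state["integral"] += error * dt
        derivative = (error - state["prev_error"]) / dt
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative
    return step

# Toy thermostat: the room relaxes toward ambient plus the heater input.
pid = make_pid(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
temp, ambient, target = 15.0, 15.0, 20.0
for _ in range(500):
    power = max(0.0, pid(target, temp))           # heater cannot cool
    temp += 0.1 * (0.2 * (ambient - temp) + 0.5 * power)
```

The control law itself is just mathematics; it only describes (and regulates) anything physical once coupled to a plant model like the update line above, which is where Evgenii's question about emergence versus supervenience bites.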

Evgenii

meekerdb

unread,
Jan 29, 2012, 5:42:29 PM1/29/12
to everyth...@googlegroups.com
On 1/29/2012 7:47 AM, Bruno Marchal wrote:
Math, even just arithmetic is already outside logic

Why exactly is that? Is it just because arithmetic has "..." in it, i.e. it posits a potential infinity of operations?

Brent

meekerdb

unread,
Jan 29, 2012, 5:47:26 PM1/29/12
to everyth...@googlegroups.com

The equations describe the 'boundary conditions' as well as the physical laws. For
example, Kirchhoff's law just says current is conserved in electrical circuits, but to use
this law in an equation describing the function of some circuit, the equations must also
describe the configuration of the circuit.
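A minimal sketch of the point (the divider circuit is my example, not Brent's): the conservation law alone yields nothing; it only becomes a solvable equation once the configuration is written down:

```python
# Voltage divider: Vs --R1-- node --R2-- ground.
# Kirchhoff's current law at the node: current in = current out,
# (Vs - V)/R1 = V/R2, which solves to V = Vs * R2 / (R1 + R2).
# The law supplies the equality; the configuration supplies the terms.

def divider_node_voltage(vs, r1, r2):
    return vs * r2 / (r1 + r2)
```

For example, a 10 V source across two equal resistors puts the node at half the supply.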

Brent


>
> Evgenii
>

meekerdb

unread,
Jan 29, 2012, 6:06:03 PM1/29/12
to everyth...@googlegroups.com
On 1/29/2012 6:54 AM, Bruno Marchal wrote:
There is a huge amount of
evidence along these lines that consciousness does not in fact
supervene on the physical brain.

No, there is a huge number of anecdotes.


But it is proved in the comp theory.

Bruno

I would say that your argument is that consciousness does not supervene on a *fundamental* physical brain. I don't think it shows that consciousness can exist without there also being a physical environment in which it exists, even if the physical is not fundamental but is part of the same computations. In any case I don't see that it supports NDEs against more mundane explanations.

Brent

Craig Weinberg

unread,
Jan 29, 2012, 6:59:01 PM1/29/12
to Everything List
On Jan 29, 5:10 am, acw <a...@lavabit.com> wrote:

> >>>> On 1/27/2012 05:55, Craig Weinberg wrote:

>
> > I'm open to testable predictions, although part of the model is that
> > testing itself is biased toward the occidental half of the continuum
> > to begin with. We cannot predict that we should exist.
>
> If it's not testable, how can you know if a theory is likely true or
> false? Most theories are false, although some are closer to the truth
> than others. The goal is to get as close as possible.

My theory discovers that the truth of the universe is that the
universe has many kinds of truth, half of them not
literal. What is most directly experienced is most directly
unprovable. That is as close to the truth as it is possible
to get.

>
> >>>> but if I had to make an
> >>>> initial guess (maybe wrong), it seems similar to some form of
> >>>> panpsychism directly over matter.
>
> >>> Close, but not exactly. Panpsychism can imply that a rock has human-
> >>> like experiences. My hypothesis can be categorized as
> >>> panexperientialism because I do think that all forces and fields are
> >>> figurative externalizations of processes which literally occur within
> >>> and through 'matter'. Matter is in turn diffracted pieces of the
> >>> primordial singularity.
>
> >> Not entirely sure what you mean by the singularity, but okay.
>
> > The singularity can be thought of as the Big Bang before the Big Bang,
> > but I take it further through the thought experiment of trying to
> > imagine really what it must be - rather than accepting the cartoon
> > version of some ball of white light exploding into space. Since space
> > and time come out of the Big Bang, it has no place to explode out to,
> > and no exterior to define any boundaries to begin with. What that
> > means is that space and time are divisions within the singularity, and
> > the Big Bang is eternal and timeless at once, and we are inside of it.
>
> When I think of the Big Bang, the question "what is outside" makes no
> sense to me as I just think of the "singularity" as what's at time 0.

Right. Nothing is outside of it. So where is it banging into?

> Except, that I don't think the initial state is literally empty - not
> even string theory assumes that.

For the initial state to be a true singularity, it must be time 0 and
time ∞. It must be everythingness as well as somethingness and
nothingness.

>General relativity might assume that,
> but we all know General Relativity is not compatible with Quantum
> Mechanics, so we shouldn't assume it literally true when it comes to
> black holes or the Big Bang. Of course, within the context of COMP,
> there is no ontologically primary "Big Bang" per se, but some apparent
> structure which had some time evolution that looks like that.
>
> >>> It's confusing for us because we assume that
> >>> motion and time are exterior conditions, but if my view is accurate,
> >>> then all time and energy is literally interior to the observer as an
> >>> experience.
>
> >> I think most people realize that the sense of time is subjective and
> >> relative, as with qualia. I think some form of time is required for
> >> self-consciousness. There can be different scales of time, for example,
> >> the local universe may very well run at planck-time (guesstimation based
> >> on popular physics theories, we cannot know, and with COMP, there's an
> >> infinity of such frames of references), but our conscious experience is
> >> much slower relative to that planck-time, usually assumed to run at a
> >> variable rate, at about 1-200Hz (neuron-spiking freq), although maybe
> >> observer moments could even be smaller in size.
>
> > I think planck time is an aspect of the instruments we are using to
> > measure microcosmic events. There is no reason to think that time is
> > literal and digital.
>
> All the instruments? If everything that can measure at that scale gives
> the same results, isn't that enough to just say 'it probably is
> objectively so'?

No, it just means that everything that can measure at that scale is
able to measure on that scale. It can't measure irony or humor or
beauty, but our native instruments can. When I look at an antenna's
reports of microcosmic activity, there may be all kinds of things
going on that we can't see, just as an MRI gives no indication of what
is going on in our experience.

> (at least within this local physics, we're in
> everything-list after all ;))
>
> >>> What I think is that matter and experience are two
> >>> symmetrical but anomalous ontologies - two sides of the same coin, so
> >>> that our qualia and content of experience is descended from
> >>> accumulated sense experience of our constituent organism, not
> >>> manufactured by their bodies, cells, molecules, interactions. The two
> >>> are both opposite expressions (a what & how of matter and space and a who
> >>> & why of experience or energy and time) of the underlying sense that
> >>> binds them to the singularity (where & when).
>
> >> Accumulated sense experience? Our neurons do record our memories
> >> (lossily, as we also forget)
>
> > There is loss but there is also embellishment. Our recollection is
> > influenced by our semantic agendas, not only data loss. There's also
> > those cases of superior autobiographical memory
> >http://www.cbsnews.com/stories/2010/12/16/60minutes/main7156877.shtml
> > which indicate that memory loss is not an inherent neurological
> > limitation.
>
> There is probably some memory limit, although it should be fairly large
> (90*10^9 neurons, each having some 1000-3000 synapses is quite a bit of
> data). However, even with that capacity it's hardly enough to losslessly
> remember all sense data that one ever experienced, or even lossily - we
> don't even remember sense data, we remember high-level patterns (see the
> book I linked before) and patterns of patterns and so on (up to some ~10
> levels of recursion).

I agree, except for the implication that high level patterns of
patterns aren't also sense data. The perceptions of my retina cells
are no more sensory than those of the cells in my brain, just as a
reflection in a small bubble is no less a reflection than a large
mirror.

> If we actively try to remember as many details as
> possible, we'll do so, although I don't think this can go on
> indefinitely. Someone obsessing over certain details all the time may
> very well have better memory about them. Personally, I tend to forget
> most things I don't use, even after just a few years, and I'm still
> quite young. There would be things I would like to remember and yet some
> specific details elude me as the memory is old (10 years?). We do
> embellish our memories as well, which just shows how unreliable they can be.

Sure, minds forget, computer storage gets corrupted.
Yes, that's how I think it is too, but the fact that the "wrong"
qualia can occur proves that qualia is not inevitably linked to or
arises automatically from any particular kind of sense process. It can
be represented in other ways than only the "right" way.

>
> > If you
> > assume that putting eyes on a robot conjures qualia automatically, why
> > would it be visual qualia?
>
> My assumption isn't nearly as simple. A camera or an eye would not be
> sufficient to get the rich experience of visual qualia. You'd have to
> have the right visual system processing the data (computationally, the
> substrate itself would be irrelevant). If I had to venture a guess,
> systems like Hierarchical Temporal Memory (HTM) or Deep Spatio-Temporal
> Inference Network (DeSTIN) might have similar visual/color qualia to our
> own due to how they recursively process/recognize the patterns in the
> data, very much like our own visual system (they are based on similar
> principles, but usually use more probabilistic/statistic methods instead
> of implementing the full neural network, although they do retain the
> same overall properties and structure), but they might not have the same
> large Cartesian-theatre-like image, you'd probably require some motor
> control (like our saccades) to be able to continuously capture more
> details of the scene and form the needed associations. However, most of
> these details are something for an AGI researcher to worry about.

It could have a completely different spectrum though. The main thing
is that it is not explained why HTM or DeSTIN should have any qualia
at all. What would it add to the functionality and how is it conjured?

>
> Now about the distinction between visual and some other qualia. If
> Hawkins' theory is correct, most of our cortical systems for processing
> different types of input from the environment (visual, auditory,
> sensory, ...) are pretty much the same circuit template organized in a
> hierarchical manner (each part feeding recognized patterns to a higher
> part and so on), and only some rough higher-level connectivity is
> different - the main difference lies in what the input actually is.
> If the system is trained using visual data, it'd end up recognizing
> small patterns in small locally-connected areas (such as edges, lines,
> ..), then patterns in those patterns and so on, until it recognizes
> objects as wholes (temporal patterns are recognized as well, so
> complex movement patterns can also be recognized; the system would
> actually behave worse if data wasn't live or if there was no noise).
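The hierarchy described above (small local patterns, then patterns of those patterns) can be sketched in a few lines. This is a toy illustration only; the function names and thresholds are mine, not from any actual HTM or DeSTIN implementation:

```python
# Toy two-level "pattern hierarchy": level 1 detects small local patterns
# (intensity edges), level 2 detects patterns in those patterns (bars).
# Illustrative only -- not from any real HTM/DeSTIN codebase.

def detect_edges(pixels):
    """Level 1: mark positions where adjacent intensities differ sharply."""
    return [1 if abs(b - a) > 0.5 else 0 for a, b in zip(pixels, pixels[1:])]

def count_bars(edges):
    """Level 2: each bright bar contributes one rising and one falling edge."""
    return sum(edges) // 2

image = [0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0]  # a 1-D "image" with two bright bars
edges = detect_edges(image)
print(edges)              # level-1 features
print(count_bars(edges))  # level-2 feature: how many bars
```

Real systems add temporal patterns and many more levels, but the principle of features built from features is the same.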

I am betting that whole model falls apart. There is no input. No data.
Only experience on different scales. Brain activity happens
simultaneously and often arbitrarily all over the brain. Our sense
occurs on the whole-brain level. There is sense going on at the neuron
level, and we share in that, but it's not a mechanism
which produces sense out of whole cloth; it is an experience which
gives us access to various concrete channels of detection and
participation in the concrete world within us and outside of us.

> If you were to try to imagine what abstract structure would correspond
> to that visual data, you'd see how it would encode the spatial(and
> temporal) nature of the data, how it would differentiate colors, but
> also contain high-level features (at different levels of the hierarchy)
> such as lines or faces or whole objects (in a way, a person can also
> think in higher-level concepts without consciously saying those
> concepts' words in their mind). If such a system were able to talk
> about its qualia (in the framework of a larger system), it would have
> all the *communicable* properties of visual qualia that we humans
> ascribe to it. One could also imagine that the perceived data structures
> would have their corresponding arithmetical truths and so on (if
> considering it within the context of COMP).

I understand completely, and have thought of it many times through the
years, but I'm past it. Qualia isn't data, and it doesn't represent
data. It is a presented experience which can be associated with
presented experiences. Period. It doesn't appear whenever patterns
match some magical recipe book which links complex computations to
simple, powerful feelings. No configuration of spinning golf balls
will turn into giant gold atoms. No computer program is ever going to
have an experience.

>
> Now consider, auditory data - it's basically temporal data,

It's not data at all. We can treat it like data, and it works that
way, but we cannot begin with 'data' and expect sound to come out of
it without living brains to hear it as sound.

> only one
> dimension (or 2 with time), compared to the usual 3 (or 4, due to
> spatial correlations between other systems) for visual data. The same
> process as with the visual data would apply, but the learned structure
> is now tailored for auditory data. The corresponding qualia would of
> course be quite different from the visual one as the way the interpreter
> has learned to work only correctly processes auditory data and if you
> were to represent an abstract structure for that auditory data, you'd
> see how the patterns were different from the visual patterns.

I understand, but comp cannot explain why one sense pattern is not
commutable to another. We should be able to see sound given the right
aural to visual conversion algorithm, but that is not the case at all.
Visualizations of sound are not even close to the experience of
hearing sound.

>
> Sensory feeling would lend itself to similar learning and have its own
> unique structural properties. At minimum, it would have the right
> *communicable* properties.
>
> Simpler qualia like smell or taste would still be unique in their own way,
> but much simpler due to their considerably simpler structure.
>
> Such a system would retain all *communicable* properties, but we can't
> say if it would have the exact same qualia as we do,

I can say that it definitely will not have any qualia beyond whatever
native qualia is experienced by the physical substrate.

> however I can't
> know if you and me have the exact same qualia, we can only agree on the
> communicable parts of it.

If I'm right, then we don't have the exact same personal, social, and
cultural qualia, but we share from the same pool of human, mammal,
animal, biological, chemical, and physical qualia. The 'psychic unity
of mankind' is conserved.

> I can't even know if I have the same qualia
> every day - if there are parts that are completely incommunicable or
> inaccessible structurally, that part of the nature of qualia may very
> well be constantly changing ("dancing qualia") without us ever knowing
> about it (of course, it also makes sense that we don't believe such a
> thing to be happening because our experiences appear stable and we have
> temporal continuity, and anything like that would be absolutely
> incommunicable and inaccessible to any 3p instruments).

Sure, yes. Our top level qualia is constantly evolving as we
accumulate experiences and circumstances (internal and external) call
for shifts in which qualities are significant. Sometimes it's driven
by the brain (hormones come to mind) and other times it's top level
narratives which influence us to meta-program the brain.

>
> There are however many questions here for me: what qualia would a system
> trained with atmospheric data have? what would it be like for an AGI if
> they had an additional visual system? What would it be like to perceive
> data in multiple dimensions (such as 3, 4, 5, ...) as systems can be
> trained with such data?
>
> We can find the communicable answers to such questions, but we cannot
> know what it's like to be such systems without extending ourselves in such a
> way as to partially be such a system (full 1p questions only have full
> 1p answers, but may have partial communicable 3p answers).
>
> To summarize: The nature of the qualia depends on the structure of the
> data that is perceived, as well as the structure of the system that does
> the perceiving.

Not just structure, but history. That is what qualia is, a condensed,
recapitulated history of perception. You're looking at it right now.
These letters are meaningless pixels but you make sense of them
because at some point you learned how to do that. Now you can't easily
undo it. Your visual qualia of these shapes has itself been sculpted
into semantic signs. They look different to us than to someone who
doesn't read English. How they look has meaning to us directly.
They don't claim to see things. The tests they do confirm that they
see things. They claim that they don't see things, and I believe them.
But it does process that type of data, only as tactile experience
rather than visual. The qualia is different but the representational
function need not be.

> thus the
> qualia would be anything it was trained with. If you trained the visual
> cortex with auditory data, it would have temporal (audio) qualia. The
> actual source shouldn't matter; for all you care, it could be buffered
> PCM waves (obviously, learning in humans means correlating all senses,
> so "live" data that properly correlates with the other senses would be
> required if you want it to be of any use to the human).
>
> >> To elaborate, consider that someone gets a digital eye, this eye can
> >> capture sense data from the environment, process it, then route it to an
> >> interface which generates electrical impulses exactly like how the eye
> >> did before and stimulates the right neurons. Consider the same for the
> >> other senses, such as hearing, touch, smell, taste and so on.
>
> > I have not seen that any prosthetic device has given a particular
> > sense to anyone who didn't already have it naturally at
> > some point in their life. I could be wrong, but I can't find anything
> > online indicating that it has been done. It seems like one of the many
> > instances of miraculous breakthroughs that have been on the verge of
> > happening for
>
> So you want babies to have implants after they're born? That's the
> only way that could happen. If a baby never develops
> an auditory system, there's no way an implant can help it (it'd take
> much more than that).

If qualia were data then access to the data should be enough.
I do see why it's not solved yet. That's what I'm telling you. A
quantitative approach is doomed to failure, just as the Ptolemaic
epicycle was, and I understand exactly why that is the case.

>
> >> Now
> >> consider a powerful-enough computer capable of simulating an
> >> environment, first you can think of something unrealistic like our video
> >> games, but then you can think of something better like ray-tracing and
> >> eventually full-on physical simulation to any granularity that you'd
> >> like (this may not yet be feasible in our physical world without slowing
> >> the brain down, but consider it as a thought experiment for now).
>
> > I'll go with this proposition as a thought experiment but I don't
> > know if any digital simulation can deliver on its promise IRL,
> > regardless of sophistication or resolution. It may be the case that
> > the way our senses coordinate with each other you could never
> > completely eliminate a subjective rejection on some subtle level. We
> > may have a sense of reality, even if it's not consciously available. I
> > think a study could be done to see if people respond differently
> > to a bot than a real person, even if they are consciously fooled. It's
> > not critical, but I have a hunch that people might sense or know more
> > than they think they know about what is real and what isn't.
>
> If the physics is too different, I can see why they'd respond
> differently. I don't see how it would be different if the qualia would
> be as accurate as the real one.

It's not about accuracy, it's about reality. Simulation is an idea
which I understand to be founded on incorrect assumptions. There is no
simulation of anything, only imitation.

> Also, if a baby was in this VR world
> from birth, I don't think it could ever know any different. An
> "imperfect" simulation can be recognized if we have prior data of the
> current reality.

That's the conventional wisdom, but I have come to realize it's not
correct. Every fiber of our being knows what it is. We can fool some
levels but we cannot fool every level without actually changing the
physical substrate. In theory a brain in a vat will never know the
difference, but in fact, every neuron may be experiencing the vat
itself - maybe unconsciously to us, but it still is there as the base
of the pyramid of our awareness. Brains that are canned or frozen are
one thing, brains that have been digitized are not even brains.

> My question to you was a bit different: it seemed to me
> that computational input-only was not leading to consciousness in your
> theory (at least from some of the other posts I've read from you some
> other day), I wanted to know if you think that is the case or not. If it
> is, then what would happen to someone with some implant replacing some
> sensory organ within your theory.

Not sure what you are asking here. What would happen to someone with a
sense organ implant depends entirely on the qualities of the implant.
A transplant that is not rejected is probably going to be better
than any implant.

>
> >> Do you
> >> think these brains are p. zombies because they are not interacting with
> >> the "real" world? The reason I'm asking this question is that it seems
> >> to me like in your theory, only particular things can cause particular
> >> sense data, and here I'm trying to completely abstract away from sense
> >> data and make it accessible by proxy and allow piping any type of data
> >> into it (although obviously the brain will only accept data that fits
> >> the expected patterns, and I do expect that only correct data will be sent).
>
> > No, real brains have real qualia, even if the external input is an
> > imitation of natural inputs. Again though, maybe no matching qualia if
> > it has not been initialized by a neurological organ at some point, but
> > still functional. If you have never in your life seen blue with your
> > eyes, I don't know that any kind of stimulation of the brain will
> > generate blue.
>
> Oh, that answers my previous question. So you think "real" qualia has
> magical 'imprinting' properties.

It's not magical, it's autobiographical. If you have eyes then you
have eye experiences like blue. Nothing mysterious about it.

> I don't think that is needed: I think
> only the right structural differences and organization of the sense data
> is needed.

There is no sense data, there is only sense understood indirectly on
different perceptual frames of reference.

> It seems that in your theory, you distinguish the origin of
> the sense (color in this case) from the actual sense data itself.

Yes, you can put any kind of data you want into a congenitally blind
person's brain and they aren't going to see visually.
Cartoons of ducks look, swim, and quack like a duck. Is Daffy Duck a
duck? No program has internal beliefs or self-consciousness; they are
interactive recordings.


> Does it behave like a conscious self-aware being?

Only in the trivial sense.

> Yes. Does this happen
> because it has the right internal structures? Very much so yes.
> Sure, a lot of modern computer programs are unintelligent, but I can see
> some AGI systems which were designed to develop a psyche,
> memories, associations, ... claim that they are conscious in the future
> (if some projects succeed in their goal), and I would have no problem
> considering them conscious. In your theory, you'd probably not attribute
> consciousness to anything that isn't implemented in the required
> "magical"(such as wetware) substrate.

When it's something other than vaporware I'll be interested.

>
> >> Of course we cannot know if anything besides
> >> us is conscious, but I tend to favor non-solipsistic theories myself.
> >> The brain physically stores beliefs in synapses and its neuron bodies
>
> > Not necessarily. TV programs are not stored in the pixels of the TV
> > screen. Neurology may only be an organic abacus which we use to keep
> > track of things. The memories are not in the arrangements of the
> > synapses but accessed through them.
>
> If that is so, you'd have to provide some evidence or explanations as to
> why. I see no reason why someone would be convinced to take that view.

Why? Because it makes sense that way and it doesn't make sense any
other way. The explanation is that the universe is departmentalized
into two topologies. One is topological and simultaneous, the other is
experiential and sequential. Everything makes sense that way.

> The evidence does suggest that memories are indeed stored in a graph
> between neuron bodies (synapses being the edges and the neurons being
> the vertices),

It sounds like you are talking about memory particles. Last I heard,
memory storage in the brain is thought to be global and possibly
holographic. What evidence do you refer to?

> although that view is an oversimplification, but overall
> such a model seems sufficient to store memories (as shown by some
> artificial life projects which use such neural networks to simulate
> simple animals/... or demonstrated by current HTM-like visual system
> implementations).

I haven't heard of it. I think at best only a trigger of a memory can
be stored. You cannot transplant the between-ness of neuron bodies
into another person and have them remember someone else's memories.
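The "graph between neuron bodies" idea quoted above can be made concrete with a toy Hopfield-style associative memory, in which the stored pattern lives in the weight matrix (the "synapses") rather than in any single unit, and survives corruption of a unit. This is my illustration of the general idea, not a model either poster cited:

```python
# Toy Hopfield-style associative memory: the memory lives in the weight
# matrix W (the edges of the neuron graph), not in any single neuron,
# so a corrupted unit can still be recovered. Illustrative sketch only.

def train(patterns):
    """Hebbian learning: strengthen edges between co-active (+1/-1) units."""
    n = len(patterns[0])
    W = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:              # no self-connections
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, steps=5):
    """Each unit repeatedly takes the sign of its weighted input."""
    for _ in range(steps):
        state = [1 if sum(w * s for w, s in zip(row, state)) >= 0 else -1
                 for row in W]
    return state

stored = [1, -1, 1, -1, 1, -1, 1, -1]
W = train([stored])
noisy = stored[:]
noisy[0] = -noisy[0]                    # corrupt one "neuron"
print(recall(W, noisy) == stored)       # the graph restores the memory
```

The same sparse-distribution property is why losing a few units degrades the memory gracefully rather than erasing it.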

>
> >> and I see no reason why some artificial general intelligence couldn't
> >> store its beliefs in its own data-structures such as hypergraphs and
> >> whatnot, and the actual physical storage/encoding shouldn't be too
> >> relevant as long as the interpreter (program) exists.
>
> > Because it has no beliefs. It stores only locations of off/on
> > switches.
>
> Your view is overly reductionist.

No, it's enlightened.

> The data contains abstract beliefs

No.

> and
> the machine will behave as if it has those beliefs.

Yes, it might fool us.

> In COMP, it would
> also be conscious because arithmetical truth would encode those beliefs.
> If I take your view, I should only look at the brain as a complex
> chemical/electrical network,

No, you should look at it as the physical-chemical-biological-
zoological exterior of an entity which detects, senses, feels,
perceives, and thinks through that organic exterior.

>and eventually reduce it to similar
> off/on-switches.

Never. Can blue be reduced to off/on switches? How many switches until
it can turn blue?

> There is no reason to ascribe consciousness to a brain;
> the only reason we do is because we observe that our consciousness and
> qualia correspond to the brain's beliefs at some level of abstraction.

We know nothing about any beliefs except human beliefs.

> I
> should have mentioned this before, but we don't even see "qualia" directly
> (like in your theory?): do you know what the data that our visual system
> sees initially looks like? It's a noisy convoluted not-too-high-res
> mess. It's nothing like our clear, detailed, comprehensive conscious
> experience.

That's because that is only the a-signifying silhouette of the qualia.
I'm telling you, the current approach is dysfunctional. We are looking
into our past certainties and trying to muscle observations into it,
but it's not working. It doesn't fit. Instead of finding more and more
about why we experience the world as we do, we are going in the
opposite direction, convincing ourselves that the computational
skeleton of our sense is the origin of sense rather than one of many
consequences.

> It only becomes like that after it's been broken into
> patterns (and patterns of patterns and patterns of patterns of patterns
> and patterns of ...) and propagated throughout the system (with other
> systems' anticipating/predicting beliefs also MODIFYING the data).

That would be the case if the universe were a uniform literal topology
of material objects and space, but you are missing half of the cosmos.
Everything -- everything has a figurative, sequential, sensorimotive
topology as well as an electromagnetic topology. This conversation
does not exist in our neurons or in the switches and routers or video
monitors we are using - it exists only in our shared perception of
this time. It's a biographical event at the human social level, not
'data'. We can conceive of how the conversation figuratively moves
from one place to another, but it is not literally made of data.
It's made of who we are and why we are, not what and how something
else is.

> I can
> only see such unprocessed noise if I turn off the lights at night and
> stare a bit into the darkness - because no stable patterns exist in such
> data, it's just too random and not structured, so no patterns are
> recognized.

See, if it were computation, I would think that with no external light
source, visual processing would stop.

>
> >> I wouldn't have
> >> much of a problem assuming consciousness to anything that is obviously
> >> behaving intelligent and self-aware. We may not have such AGI yet, but
> >> research in those areas is progressing rather nicely.
>
> > I would say that ATI (Artificial Trivial Intelligence) is progressing
> > rather nicely, but true AGI is stalled indefinitely.
>
> Really? Doesn't seem like you keep up with the progress. I'm not talking
> about the narrow AIs you use while translating text (which is far from
> perfect), or even the more fancy Watson winning Jeopardy (although that
> might be getting a little bit more closer).
> Have you looked at recent AGI papers? Or looked into systems like
> OpenCog or DARPA SyNAPSE's idea, or even some of the recent narrow
> HTM-like systems (which while not generally intelligent, have solved
> some difficult sense-data processing problems). I'd bet on some very
> interesting progress (such as baby-level or mammal-level intelligence)
> within 7 years or so.

No, I haven't looked at those, but I have debated with a few AGI
developers and nobody has given me any reason to look into it more.
What in particular do these systems do that seems promising to you?
What has been accomplished? I've seen some of the walker-dog looking
robots. Cool, but walking like a dog isn't sentience. Again,
impressive ATI but no real AGI.

> As for human-level, probably 15-50+ years,
> depending on which approaches turn out to work or not.

I don't think any of them will work unless they get into biology.

>As I said before,
> we're still very constrained as far computational resources are
> concerned, and nobody has cracked molecular nanotechnology yet, and no
> fancy 3D chip fabrication technology either. Energy costs are also very
> high - running something the size of the human brain using current
> FPGA-based neuromorphic technology would cost as much as running a small
> city. Some new hardware approaches (like SyNAPSE) will likely solve one
> case of this problem. Efficient resource-constrained AGI is hard, but
> most evidence points toward it being solvable.

Evidence produced by who, technology companies looking for VC
investors?
Working my way through the videos. Excellent presentation, but the
ideas presented are only modeling the trivial function and consequence
of awareness, not the experience itself. I don't expect it to be
otherwise, but my model has more explanatory power.

>
> >>> I understand that completely, but it relies on conflating some
> >>> functions of emotions with the experience of them. Reward and
> >>> punishment only works if there is qualia which is innately rewarding
> >>> or punishing to begin with. No AI has that capacity. It is not
> >>> possible to reward or punish a computer.
>
> >> Yet they will behave as if they have those emotions, qualia, ...
>
> > So will a cartoon character.
>
> A cartoon character does not have inner beliefs (except those modeled in
> the author's mind) or a working cognitive architecture.

A computer program is the same. Both are different kinds of
recordings. Programs have inner beliefs just as Mickey Mouse has
wardrobe preferences.

> Of course, I
> guess in your theory only those made of magical matter organized in
> magical ways have thoughts?

It's not my theory, it's the way it happens to be in this universe.
Flammable things burn. DNA builds cells. Organisms feel and some
think. There is no evidence to suggest that understanding can occur
without feeling, and there is nothing to suggest that logic cannot be
simulated without understanding automatically appearing.

> The thing is, humans don't store beliefs in
> neurons either; they store them in emergent abstract structures which encode
> their data in neurons/synapses.

'emergent abstract structures' are metaphysical. We don't live in a
metaphysical abstraction, we live in a concrete human world, from
which we abstract models and metaphors, not the other way around.

> A few neurons die? Not a problem, the
> data is sparsely distributed throughout the cortex. More than a few?
> Still recoverable. Neuroplasticity is quite awesome.

There is no data. Only neurons. Neuroplasticity is indeed awesome but
it has nothing to do with data. Neurons are eukaryotes. They have
lives. Their civilization is our brain.

>
> >> Punishing will result in some (types of) actions being avoided and
> >> rewards will result in some (types of) actions being more frequent.
>
> > That is only one of the results of punishment and reward. There are
> > many many others. They teach us to punish and reward others. They give
> > us traumatic memories. They might make us addicted to other rewards.
> > Lots of things that will never happen to a computer.
>
> I don't think it's moral to punish others, no matter what they do. I may
> feel angry and may even want to punish them or even seek vengeance, but
> I won't claim that is morally right.

Same here, but the impulse to punish is popular, probably an
anthropological universal.

> As for what would happen to a
> computer: a computer whose cognitive architecture features a particular
> implementation of empathy (such as confusing models of others with
> yourself), may end up with some types of moral behavior similar to ours,
> such as applying what we would do to ourselves to others, or a generalized form of
> the golden rule.

Accidental simulated empathy? Yay. I can't wait. Sort of like "Your
call is important to us, please stay on the line", but more
'intelligent'.

> Addiction? Very much possible, try watching those
> videos I linked before. Why does addiction happen? Many reasons,
> although when I said 'will result in some (types of) actions being more
> frequent', addiction is clearly included there, if some action is
> rewarding, and an agent seeks actions which maximize its reward (which
> in turn means that those actions are biased toward occurring more often, in
> the form of a compulsion).
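The mechanism described above, rewarded actions becoming more frequent to the point of compulsion, can be sketched with a minimal reinforcement-style agent. All names and numbers here are mine, purely illustrative, and not any specific architecture mentioned in the thread:

```python
import random

# Toy sketch of "rewarded actions become more frequent": the agent keeps
# a value estimate per action and mostly picks the highest-valued action,
# with occasional random exploration. Illustrative only.

def run(trials=2000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    values = {"a": 0.0, "b": 0.0}   # learned value estimates
    counts = {"a": 0, "b": 0}       # how often each action was taken
    reward = {"a": 1.0, "b": 0.0}   # action "a" is the rewarding one
    for _ in range(trials):
        if rng.random() < epsilon:                # occasional exploration
            action = rng.choice(["a", "b"])
        else:                                     # otherwise exploit best value
            action = max(values, key=values.get)
        counts[action] += 1
        # incremental average: nudge the estimate toward the observed reward
        values[action] += (reward[action] - values[action]) / counts[action]
    return counts

counts = run()
print(counts["a"] > counts["b"])   # the rewarded action dominates
```

The rewarded action ends up chosen far more often, which is exactly the compulsion-like bias the quoted paragraph describes.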

Why don't computers get addicted? Because they can't experience
reward. There is no computer behavior that cannot easily be excised by
altering the program that produces it.

> Traumatic memories? Recalling something tends
> to also involve recalling emotional memories. If such a system would
> have traumatic memories (memories about negative emotional events), they
> would reinforce their aversion towards certain actions (that were
> punished). There is absolutely no reason to assume a "computer" (don't
> confuse the system running in some hardware with the hardware) wouldn't
> be able to have those things happen given the right cognitive architecture.

Yes, there is a reason. The reason is that cognitive architecture
can't ever feel anything. Human cognition evolves out of limbic
emotion, not the other way around. Trivial intelligence is the tip of the
iceberg as far as feeling and awareness goes. Simulating the tip
doesn't automatically produce the rest of the ice.

>
> >> A computationalist may claim they are conscious because of the
> >> computational structure underlying their cognitive architecture.
> >> You might claim they are not because they don't have access to "real"
> >> qualia or that their implementation substrate isn't magical enough?
>
> > My views have nothing to do with magic. Computationalism is about
> > magic. Also all qualia is real qualia, they are just materially
> > limited to the scale and nature of the experiencer.
>
> When I talk about 'magic', I merely mean things which have to not be
> explained or cannot be explained by the theory or which are hand waved
> away. Magic could be seen as axioms, theorems could be seen as
> non-magic. Within COMP, arithmetic realism is close to magic as it has
> to be assumed, even if most people can understand what it is, but we
> can't really reduce it further.

I can reduce it further. All arithmetic truth is reducible to
sensorimotive experience.

> Matter within COMP is non-magic as it's
> explained/reduced. Mind in COMP is almost non-magic, although there is
> some unexplainable truth in there (along with qualia), however it's
> included in the full arithmetical truth - which is very large.
> In an Aristotelian world-view, the existence of matter or it being
> ontologically primary is magic. In your case, if you assume the brain
> hosts consciousness,

I would not say the brain hosts consciousness, I would say that the
brain is the back door of consciousness.

> but a different substrate doesn't is 'magical'
> because it privileges wetware with some very unique mental properties,

I don't make the rules. It's the universe that privileges organic
matter with organic qualities. All I do is point out that human
consciousness is strictly an organic quality.

> yet allows zombies in other substrates for reasons which don't make
> a lot of sense to me (I can imagine it, but I can't understand why
> that would be necessary.)

The term zombie is biased from the start. It assumes that the subject
is expected to be aware and alive to begin with. The default state of
inorganic matter is different from that of living organisms. When an
inorganic machine is operated as a puppet of a living organism, it is
not a zombie, it is what it is - a machine being used as a puppet.
There is no expectation of it being anything else, no matter how
sophisticated a puppet it is. Let's use the word puppet from now on
instead of zombie.

>
> >> Eventually such a machine may plead to you that they are conscious and
> >> that they have qualia (as they do have sense data), but you won't
> >> believe them because of being implemented in a different substrate than
> >> you? Same situation goes for substrate independent minds/mind uploads.
>
> > Meh. Science fiction. If such a thing were remotely possible then
> > there would be no difference between experimenting with new operating
> > system builds and grafting human cockroach genetic hybrids. Computer
> > science would be considered genocidal. Does Watson know or care if you
> wipe its memory or turn it off? Of course not, it's an electronic
> > filing cabinet with a fancy lookup interface.
>
> I never considered Watson when talking about AGI. Watson is mostly
> narrow AI, although it's moving in a good direction. Either way, if at
> least one of the current projects that are underway succeed, you'll be
> able to consider again if you want to deny them consciousness or not.
> I'm betting that at least one will succeed, I don't know what you're
> betting, but you seem disillusioned about the prospects of AGI

It's promissory materialism to me. I don't see any promise in the
current approach beyond mimicry. Not that it's a bad thing. I'd rather
have ATI than AGI. Who wants a sentient computer to have to take care
of?

>- I'm not
> because I realize full well that there was no way they could have done
> it in the past as they lacked both the hardware and the right cognitive
> architecture, but with failure one learns, so even if they didn't attain
> AGI, they advance many other related fields greatly.
> Computer Science genocidal? I did see some smart people afraid of
> considering COMP, even though they secretly believe in it given their
> writings, mostly because COMP implies a lot of possible experiences, not
> all ethically pleasing, but then, the universe is quite "cruel" as well,
> and the universe doesn't "care" about us. The nice thing about COMP
> though is that no machine is truly locked down to any world, and there
> are infinities of continuations. As for deleting programs? A physical
> implementation of a program allows it to manifest locally relatively to
> you, deletion just implies a reduction in measure and no more
> manifestation relatively to you or other observers in that frame
> (however other continuations should exist for the program) - it's
> obviously unethical/immoral because other programs may care about/want
> to access that program you deleted, or that program might want to
> manifest relatively to you or other programs. However, deleting
> something like the Universal Dovetailer, or a complete world simulation
> would likely be fine as no programs within the simulation or the UD
> interact directly with you (although your own computation should be
> found within UDs everywhere).
>

My only point was to highlight the absurdity of treating scripted
logic as living beings. It's still absurd.
If we build organic machines, they may well be able to live, but the
more they do, the less of a machine they will be.

>
> > At what point does autonomy magically appear?
>
> Depends what you mean by autonomy, but I would guess that some cognitive
> architectures could have the 'feel' of free will because they would
> introspect and see that they have multiple choices and that they have to
> make one, they would then use their cognitive machinery to make a choice
> influenced by many factors, many which might not be consciously accessible.

It's backwards. Feeling is what allows introspection, not cognition.
Making choices among multiple factors does not conjure the feeling of
free will out of thin air. It's like saying throwing a thousand golf
balls into a funnel with 15 holes makes some kind of introspective
choicemaking experience appear. In the synapses between the golf balls
I suppose? I get it, but I'm telling you it makes a lot more sense
here in reality land than it did for me in simulation land.

>
> If you meant it in a more general manner: most programs where you can't
> trivially, provably predict their future behavior in far fewer steps
> than it would take to run the program would be "autonomous" enough.

You will know that a program has become autonomous when it tries to
kill its programmer. Short of that, it's trivial 'autonomy'.

>
> >> Within COMP,
> >> you both have deterministic behavior, but indeterminism is also
> >> completely unavoidable from the 1p. I don't think 'free' will has
> >> anything to do with 1p indeterminism, I think it's merely the feeling
> >> you get when you have multiple choices and you use your active conscious
> >> processes to select one choice, however whatever you select, it's always
> >> due to other inner processes, which are not always directly accessible
> >> to the conscious mind - you do what you want/will, but you don't always
> >> control what you want/will, that depends on your cognitive architecture,
> >> your memories and the environment (although since you're also part of
> >> the environment, the choice will always be quasideterministic, but not
> >> fully deterministic).
>
> > I agree except for the fact that it makes no sense for such a feeling
> > to exist in the first place. There is no reason to be conscious of
> > some decisions and not of others were there not the possibility to
> > influence those decisions consciously. Just because there are multiple
> > subconscious agendas doesn't mean that you don't consciously
> > contribute to the process in a causally efficacious way.
>
> Sure, the conscious processes contribute to the choice. The actual fuzzy
> "line" dividing conscious and unconscious processes is a hard practical,
> but solvable problem.

I don't see it as a problem. It's a matter of scope and scale.
It's not my ontology that this approach is incompatible with; it's the
universe's ontology. This approach to AGI is rooted in the misguided
belief in 'signals' and data as causative agents. My view is more
literal and acknowledges that all information exists within the
awareness of a non-computational interpreter made of matter.

>
> >>> Some
> >>> parents would like to be able to do that I'm sure, but of course it
> >>> doesn't work that way for people. No matter how compelling and
> >>> coercive the brainwashing, some humans are always going to try to hack
> >>> it and escape. When a computer hacks it's programming and escapes, we
> >>> will know about it, but I'm not worried about that.
>
> >> Sure, we're as 'free' as computations are, although most computations
> >> we're looking into are those we can control because that's what's
> >> locally useful for humans.
>
> > If computations were as free as us, they would look for humans who
> > they can control because that's what's locally useful for computers.
>
> They have hardly enough computational capacity for doing that. Don't
> expect any Skynet to take over the Internet anytime soon - our computers
> are far too slow and the architectures are not very suitable for running
> AGIs. Although, I wouldn't completely rule out the possibility of doing
> this with some very resource-limited AGIs, it's just that they're in
> their infancy for now.

You do know that this rap has been around for a long time, right? I
remember when Virtual Reality was the hype. It didn't happen.

>
> >>> What is far more
> >>> worrisome and real is that the externalization of our sense of
> >>> computation (the glass exoskeleton) will be taken for literal truth,
> >>> and our culture will be evacuated of all qualities except for
> >>> enumeration. This is already happening. This is the crisis of the
> >>> 19-21st centuries. Money is computation. WalMart parking lot is the
> >>> cathedral of the god of empty progress.
>
> >> There are some worries. I wouldn't blame computation for it,
>
> > I don't blame computation, but I think that it is a symptom of the
> > excessively occidental pendulum swing since the Enlightenment Era.
> > Modern science and mercantilism are born of the same time, place, and
> > purpose - the impulse for control of external circumstances through
> > methodical discipline and organization - the harnessing of logic and
> > objectivity.
>
> We gained much from what started in the Enlightenment Era. We also lost
> some things, but I think we'll regain them if we can make certain right
> choices.

I agree. Rediscovering the authority of subjectivity is one of the
right choices.

>
> >> but our
> >> current limited physical resources and some emergent social machines
> >> which might not have beneficial outcomes, sort of like a tragedy of the
> >> commons, however that's just a local problem. On the contrary, I think
> >> the answer to a lot of our problems has computational solutions,
> >> unfortunately we're still some 20-50+ years away to finding them, and I
> >> hope we won't be too late there.
>
> > I think it's already 30 years too late and unfortunately I think the
> > financialization problem is not going to permit any solutions of any
> > kind from being realized. Only a change in human sense and redirection
> > of free will could save us, and that would be a miracle that dwarfs
> > all previous revolutions.
>
> Why 30 years too late?

Overpopulation has changed the priorities of all human endeavors. No
time or money to explore fancy ideas, gotta grab what you can before
someone takes it away.

> We just didn't have as much knowledge about things 30 years ago as we
> have now. However, it's not like we can just sit and relax; some
> problems do have a time limit on them before solving them becomes
> even more difficult, if not nearly impossible.

Making any real changes would require more cheap energy than we have
available.

>
> > If I type this in Chinese, someone who reads Chinese will sense more
> > than you will even with the same information available directly to
> > your senses. Perception is not a passive reception of 'information',
> > it is a sensorimotive experience of a living animal.
>
> Again, explained with the HTM example I gave before. Raw sense data
> doesn't mean much until it's processed by our visual system by being
> broken off into small patterns, then patterns of those are
> recognized/learned and so on. Someone will assign different meanings to
> the data they access depending on their previous memories (which
> themselves depend on previous memories and so on, although initially you
> have a virgin "randomized" system ready to learn just about any types of
> data/patterns).

That is indeed how an OCR program works, but our visual system is not,
as Dr. Joscha Bach correctly stated, bottom up. It's mostly top down,
driven by expectation, not processing of generic bits of raw sense.
It's not additive, it's subtractive. We don't assign meanings to
words, we make sense of the meaning that has been assigned to them
already. Our ability to make sense of language is actually hardwired
before birth.

>
> >> We're able to differentiate colors because of how the data is
> >> processed in the visual system.
>
> > Differentiation can be accomplished more easily with quantitative data
> > than qualitative experience. Why convert 400nm wavelength light into a
> > color if you can just read it directly as light of that exact
> > wavelength in the first place? It's redundant and nonsensical. I know
> > it seems like it makes it easier and convenient for us, but that's
> > reverse engineering and begging the question. The fact remains that
> > there is no logic in taking a precise exchange of digital quantitative
> > data into a black box where it is inexplicably converted into maple
> > syrup and cuckoo clocks so that it can then be passed back to the rest
> > of the brain in the form of acetylcholine and ion channel
> > polarizations.
>
> Evolution is quite chaotic and unpredictable. The answer to your
> question lies in how the eye evolved, how the neocortex evolved, etc.
> Also, the brain doesn't convert 400nm into "color". The brain processes
> small areas of data categorized in specific ways that differentiate
> color (and then larger patterns of those patterns, etc.) so that we
> can communicate those properties.

Where does the color come in? Where does it come from and why?

> In COMP qualia/consciousness is the
> arithmetical truth of that abstract system which does all this
> processing in space and time (perception is also temporal besides being
> spatial, temporal patterns are also processed!).

I know. That explains nothing though. It gives some arithmetical truth
the name of qualia but has no insight why or how such a
differentiation from other arithmetic truth exists.

> We cannot communicate
> the qualia directly,

Qualia doesn't need to be communicated directly, it is shared
figuratively. Language, music, art, etc.

> although we can talk about the communicable
> components, for which we'll find out that they have equivalents in the
> structural organization of the brain. If the brain cannot differentiate
> some input data, I'll bet that you won't be able to notice that you saw
> 2 different qualia, which nonetheless were not represented somehow,
> somewhere in the brain (ignoring MGA 1/counterfactuals/parallel worlds
> here, to avoid complicating the issue: in those cases, a functioning
> structure somewhere existed and now you have memories of it present in
> your local brain).

Not sure what you mean here, but you are only talking about the form
of qualia, not the content. The important part of qualia is not
whether you can tell the difference between one and another, it's the
inexplicable presence of any qualia at all in the first place.

>
> >> We're not able to sense strings or
> >> quarks or even atoms directly, we can only infer their existence as a
> >> pattern indirectly.
>
> > Right, but when the atoms in our retinal cells change, we see
> > something.
>
> Not if that information gets lost before it manages to influence the
> visual system. The data we sense in our eyes is noisy as hell, our
> experience is clear as water. The reason for this is that we don't sense
> direct sensory input, but processed, corrected, predicted patterns
> influenced by sensory data.

We are our visual system. We aren't conscious of it at the
neurological level, but every cell in our body is part of what we are.
There are processes on every level,
but our executive-level awareness is not a processed simulation, it is
a specular channel for anthropological-biographical level awareness.
If you look too closely at the low level neurological activity or
logical systemic activity, you get a distorted view of what the full
spectrum of awareness actually is, just as if you look too closely
at a TV screen you miss the image.
In my theory, mechanism is the logical essence of materialism.
Consciousness, materialism, mechaemorphism, anthropomorphism, entropy,
significance, are all aspects of sense, which is sensorimotive-
electromagnetism³.

>
> >> I ask you again to
> >> consider the brain-in-a-vat example I said before. Do you think someone
> >> with an auditory implant (example: http://en.wikipedia.org/wiki/Auditory_brainstem_implant) hears nothing? Are they
> >> partial zombies to you?
>
> > No, the nature of sense is such that it can be prosthetically
> > extended. Blind people can 'see' with a cane. That's very different
> > from being replaced or simulated though.
>
> My example was when there was replacing going on. Of course, there is
> also extension.
>
> >> They behave in all ways like they sense the sound, yet you might claim
> >> that they don't because the substrate is different?

You can replace/substitute ears but you can't replace the brain's
capacity to hear.

>
> > The substrate isn't different because their brains are human brains.
>
> What if you slowly start replacing the brains with functionally
> equivalent parts. Neuroscience says this should work. You claim that it
> won't work. Deciding on this is still early, but the future will answer
> these questions.

It's the same as gradually replacing water with vinegar in irrigation.
Diminishing returns gradually and then flatline.
The evidence points toward the likelihood of your non-existence as
well. Who are you going to believe, COMP or your own lying eyes?

>
> >> It just places itself as the best candidate to bet on, but it can
> >> never "prove" itself.
>
> > A seductive coercion.
>
> We have to bet on a theory or another when we want to use it for
> practical things. As such, I'll tend to bet on what is more probable
> given the data as that increases the chance that I'll accomplish my
> goals. Currently we're just studying the consequences and requirements
> of different theories, but one day, man may very well have to bet one
> some of them if science and technology advances enough to make certain
> things practical (such as a computationalist doctor you can say 'yes' to
> or if we treat an AGI with human-level intelligence as a person).

I'm confident that the limits of computationalist approaches to AGI
will become more and more apparent. I think they already are, but we
aren't looking at the reality, only our own enthusiasm, expectations,
and assumptions.

>
> >> COMP doesn't deny subjectivity, it's a very
> >> important part of the theory. The assumptions are just: (1p) mind,
> >> (some) mechanism (observable in the environment, by induction),
> >> arithmetical realism (truth value of arithmetical sentences exists), a
> >> person's brain admits a digital substitution and 1p is preserved (which
> >> makes sense given current evidence and given the thought experiment I
> >> mentioned before).
>
> > Think about substituting vinegar for water. A plant will accept a
> > certain concentration ratio of acetic acid to water, but just because
> > they are both transparent liquids does not mean a plant will live on
> > it in sufficient concentration.
>
> If the doctor does his job properly, H2O will still be H2O. You seem to
> be claiming that there is something irreplaceable about data sensed from
> the real world as opposed data from a (locally) computed environment. I
> see no reason for this hypothesis and it's on you to show evidence that
> shows your view is more likely to be correct than mine.

Digital H2O is not H2O. It's as simple as that. A picture of water is
not water. A programmatic picture of consciousness is not
consciousness. The universe is not data, not a simulation, it is an
interpreter of experiences. Some of the interpretations can be
described in linear cognitive logic, some in art, love, violence,
beauty, etc.
No, both are symmetrical expressions of the same underlying sense.
They are anomalous, so disconnected in one sense, but symmetrical
aspects of a whole, so connected in another sense.

> Also what
> exactly is that "sensed" donkey (surely you don't expect literal donkeys
> to pass through your brain; a donkey pattern makes sense, but that is
> representable as an informational pattern which the rest of the system
> can understand/distinguish from other patterns)?

It's not a donkey, it's an adjective, like donkey-ness. I don't think
that qualia is that interchangeable in reality because sense means
that qualia is part of the singularity and therefore is revelatory of
authentic realism. A color that was just donkey is probably a category
error but logically or COMPwise it should be just as likely as red or
green.

> There's a lot of
> details which seem to be taken for granted and would have to be
> explained in detail for me to understand. I also don't see why this view
> is that much more different from some types of dualism.

It's different from dualism because it is double dualism. Dualism in
one sense, monism in another. It's a continuum of involuted monism
that sense modulates, so that some aspects of the cosmos are 99%
mechanistic and 1% subjective, others are 50-50, etc. The most
provocative thing is that symmetry of electromagnetism with
sensorimotivation may invalidate the Standard Model as a literal
cosmology, photons are obsolete. With perception and relativity as
symmetrical opposites too, gravity, time, and space are all zeroed
out.
Are there any CPUs that run on any other language than binary code?

> Programmers
> do care what they want to write their programs in - they want them to be
> portable and run in many software and hardware implementations.
> Someone might want to upload their mind someday to become substrate
> independent and avoid a lot of problems that come with wetware brains
> and bodies.

I know programmers care, but machines don't.

>
> > Consciousness has no place in a computer.
> You could apply that to the brain as well, if you're going to strip away
> the abstraction and refuse to consider high-level patterns.
> A computer may be a suitable body for a conscious process.

If there were any reason to suspect that to be true, then I would, but
it makes too much sense that machines are unfeeling and devoid of deep
understanding. Our every interaction with them confirms that readily.
Look at all of the proprietary magic behind Google and still any given
search is off key. So much better than the search engines that came
before, but it plateaued. The searching algorithm is no better than it
was 10 years ago. It still doesn't understand what I really want. It's
grasping around in the dark and getting lucky when it can.

>
>
>
> >>>>> My solution is that both views are correct on their own terms in their
> >>>>> own sense and that we should not arbitrarily privilege one view over
> >>>>> the other. Our vision is human vision. It is based on retina vision,
> >>>>> which is based on cellular and molecular visual sense. It is not just
> >>>>> a mechanism which pushes information around from one place to another,
> >>>>> each place is a living organism which actively contributes to the top
> >>>>> level experience - it isn't a passive system.
>
> >>>> Living organisms - replicators,
>
> >>> Life replicates, but replication does not define life. Living
> >>> organisms feel alive and avoid death. Replication does not necessitate
> >>> feeling alive.
>
> >> You'll have to define what feeling alive is.
>
> > Why? Is it not defined enough already? This is why occidental
> > approaches will always fail miserably at understanding consciousness.
> > It won't listen to a single note on the piano until we define what
> > music is first.
>
> I only asked you to define it as if you were explaining it to someone
> who asked you what it means (like a child who never heard the expression
> before). It's so highly ambiguous that it can be taken to mean too many
> things. I wanted to try and keep the discussion precise.

Even to a hypothetical child I would only need to say, 'you know,
feeling.. (gesture with both hands raised and advanced at chest level
for emphasis) alive!'. It is primary. To define it in words inverts
the relation and privileges the wrong end of the semiotic
transmission. Being alive comes first. Having a word for it comes
later, if at all.

>
> >> This shouldn't be confused
> >> with being biological. I feel like I have coherent senses, that's what
> >> it means to me to be alive.
>
> > Right, it should not be confused with biology. For me 'I feel' is good
> > enough to begin with, but it extends further. I want to continue to
> > live, to experience pleasure and avoid pain, to seek significance and
> > stave off entropy, etc. Lots of things but they all begin with
> > sensorimotive awareness.
>
> I can see an AGI which could eventually have such goals or
> reward/motivational systems (although it's questionable if some of them
> are really desirable to have). Of course, if you claim that such AGIs
> would not be conscious, despite behaving like they are, we would have a
> problem.

AGIs could only be conscious in an animal way if they were made out of
something that lives and dies with other animals. AGIs are already
'conscious' in a molecular way. Think of a computer as one gigantic
inorganic molecule. A glass molecule.

>
> >> My cells on their own (without any input
> >> from me) replicate and keep my body functioning properly. I will
> >> try to avoid situations that can kill me because I prefer being alive
> >> because of my motivational/emotional/reward system. I don't think
> >> someone will move or do anything without such a biasing
> >> motivational/emotional/reward system. There's some interesting studies
> >> on people who had damage to such systems and how it affects their
> >> decision making process.
>
> > Sure, yes, but we need not have any understanding of our cells or
> > systems. The feelings alone are enough. They are primitive. We don't
> > have to care why we want to avoid pain and death, the motivation is
> > presented without need for explanation. There is no logic - to the
> > contrary, all logic arises from these fundamental senses which
> > transcend logic.
>
> I don't see why they have to be fundamental or ontologically primitive.
> The 3p parts can be explained in simpler notions. The 1p part can be
> explained in COMP as arithmetical truth.

Arithmetic truth doesn't care about anything. 1p cares by definition.
Blue cannot be explained arithmetically to a blind person. 1p is all
like that.

> Any 1p theory should not
> directly contradict 3p observations, otherwise it's wrong.

Any 1p theory that does not directly contradict 3p observations in
some ways is wrong. 1p is the ontological opposite of 3p in almost
every conceivable way. If they weren't, what would be the point of the
division in the first place?

>
> The thing about these senses is that while we have them, we don't
> always understand them. When we do find the 3p explanation for why some
> sense is like this or that, we can have a partial communicable
> explanation for them and better understand ourselves.

Yes, but we diminish the 1p explanation also, like literacy destroyed
verbal memory and adulthood undermines naive imagination. The thing is
that the cosmos works on all of these levels. It makes sense from a
childish perspective as well as a sophisticated perspective. The
universe understands itself perfectly irrespective of our latest ideas
about it. The universe supports any degree of ignorance or
sophistication, sanity or delusion.

> I don't see how
> these senses transcend logic (they seem to follow logic as far as we
> investigate, and the inaccessible parts, we cannot say anything about),
> despite that they do make our 1p world, so they are very important
> direct experiences to us.

Logic is just one narrow band on the spectrum of sense. Most things in
the universe exist quite well without access to any awareness of
logic.

>
>
>
> >>>> are fine things, but I don't see why
> >>>> must one confuse replicators with perception. Perception can exist by
> >>>> itself merely on the virtue of passing information around and processing
> >>>> it. Replicators can also exist due similar reasons, but on a different
> >>>> level.
>
> >>> Perception has never existed 'by itself'. Perception only occurs in
> >>> living organisms who are informed by their experience. There is no
> >>> independent disembodied 'information' out there. There is detection and
> >>> response, sense and motive of physical wholes.
>
> >> I see no reason why that has to be true, feel free to give some evidence
> >> supporting that view. Merely claiming that those people with auditory
> >> implants hear nothing is not sufficient.
>
> > I didn't say that they hear nothing. If they had hearing loss from an
> > accident or illness I see no reason why they would not hear through an
> > implant. If they have never heard anything at all? Maybe, maybe not.
> > They could just as easily feel it as tactile rather than aural qualia
> > and we would not know the difference and neither would they. The Wiki
> > suggests this might be the case for all implant recipients "(most
> > auditory brain stem implant recipients only have an awareness of sound
> > - recipients won't be able to hear musical melodies, only the beat)".
> > You can feel a beat. That's not really an awareness of sound qua
> > sound, it's just a detection of one aspect of the phenomena our ears
> > can parse as aural sound.
>
> That sounds like a limitation of current technology, resolution-wise.

Not to me. Hearing and feeling a thump are different things.

> As for not knowing the difference? I already said that I can't know the
> difference between your and my qualia. Feeling sound as tactile means
> that either it's connected to the wrong nerves or in the case of
> congenitally deaf person: it's a matter of the structure of the data and
> how it can be differentiated from other data (and from other cortical
> areas which is trained with radically different data). Feeling as
> tactile could happen if the structure of the data is really no different
> from tactile data and if that part of the cortex could just as well be
> integrated with the sensorimotor cortex.

As long as you assume that data is real, perception will be a mystery.

>
> >> My prediction is that if one
> >> were to have such an implant, get some memories with it, then somehow
> >> switched back to using a regular ear, their auditory memories from those
> >> times would still remain.
>
> > I agree. Why wouldn't they?
>
> It seemed to me that in your theory you were reifying qualia in such a
> way that only raw qualia from the "real" world could be experienced.
> With the implant, data is processed, thus you don't get your "real" qualia.

No, no. I fully expect full sensory movies to be developed, maybe with
an implant recorder/player (God, I wish). I only think that they still
won't feel 100% real. We listen to music from an mp3. That doesn't
mean that the mp3 player hears it too, though.
It's not possible because subjectivity is the primary orientation in
the cosmos. That's its purpose, to orient experience. If you replace
the subject with an object there is nothing left of the subject to
experience anything. It's the reason that cutting off a finger is
different from cutting off your head. It's only because COMP assumes
supernatural powers for 'information' or arithmetic that we can fool
ourselves into thinking that we can survive cutting off our head by
replacing it with a device that we think acts like a head.
If there were any counterfactuals at all, I would be completely open
to that idea. There aren't though.

> I tend to think in
> the patterns that matter, cells and organisms represent. We cannot
> consider any of them primitive or magically privileged for no reason.

It's not for no reason. It's because of significance (negentropy).
Life is the clustering of improbability. It's a way of balancing the
entropy of mechanism. The deck is stacked slightly in favor of the
improbable. What improbable events lack in ubiquity, they make up for
by reproducing themselves, until they themselves become ubiquitous and
generically probable.

> I
> privilege computation because of the Church-Turing Thesis - it's the only
> absolute thing we can have in math. Either way, if your theory offers
> testable predictions, you may be able to find out if you're right or not
> about it. Although predictions of the form: "Any AGI is not conscious"
> are not too valid because they are not testable or maybe they are, but
> you wouldn't perform the test - if a mind upload is eventually possible,
> you could try, but you wouldn't, because you're betting that you'd lose
> your consciousness, yet if the bet is wrong, you'd be conscious

I'm talking about a bigger picture. Truth you can be certain of is 3p
truth. 1p truth is untestable in direct proportion to the 1p
significance of that truth. Trivial 1p truth is testable. Profound 1p
truth is not. I know that because I have been poring over this
symmetry for the last 25 years (without knowing it until more recently).
This is why I can't comply with calls to make my ideas fit in with all
previous ideas. It's more universal than that. It is a map of how the
universe actually is, not how we would like it to be. It is half
fiction. That's the more important half.

Whew, this is getting way too time-consuming. Don't hesitate to leave
it at this. I'm only continuing for your benefit. I already understand
your position well, and I would share it also, had I not stumbled on my
own theory.

Craig

Bruno Marchal

unread,
Jan 30, 2012, 5:09:48 AM
to everyth...@googlegroups.com

On 29 Jan 2012, at 03:20, Craig Weinberg wrote:

> On Jan 28, 8:03 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
>> On 28 Jan 2012, at 02:33, Craig Weinberg wrote:
>>
>>> On Jan 27, 12:20 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
>>
>>>> But many things about numbers are not arithmetical. Arithmetical
>>>> truth
>>>> is not arithmetical. Machine's knowledge can be proved to be non
>>>> arithmetical.
>>>> If you want, arithmetic is enough rich for having a bigger reality
>>>> than anything we can describe in 3p terms.
>>
>>> But all arithmetic truths, knowledge, beliefs, etc are all still
>>> sensemaking experiences. It doesn't matter whether they are
>>> arithmetic
>>> or not, as long as they can possibly be detected or made sense of in
>>> any way, even by inference, deduction, emergence, etc, they are
>>> still
>>> sense. Not all sense is arithmetic or related to arithmetic in some
>>> way though. Sense can be gestural or intuitive.
>>
>> That might be possible. But gesture and intuition can occur in
>> relative computations.
>
> How do you know that they 'occur' in the computations rather than in
> the eye of the beholder of the computations?

The beholder of the computations is supported by the computations.
Those exist independently of me, in the same way numbers are prime or
not independently of me.
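To make that concrete with a toy sketch of my own (not part of the argument itself): whether a number is prime is settled by a purely mechanical check, with no reference to any beholder. In Python:

```python
def is_prime(n: int) -> bool:
    """Trial division: a purely arithmetical test of primality."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:  # only divisors up to sqrt(n) need checking
        if n % d == 0:
            return False  # found a proper divisor
        d += 1
    return True

print(is_prime(17))  # True
```

Any machine, anywhere, running this check on 17 gets the same answer; the fact does not depend on who (if anyone) beholds the computation.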

>
>>
>>
>>
>>>>> There is nothing in the universe
>>
>>>> The term universe is ambiguous.
>>
>>> Only in theory. I use it in a literal, absolutist way.
>>
>> This does not help to understand what you mean by "universe".
>
> Universe means 'all that is' in every context.

But "all that is" is what we are searching, testing, studying. The
word "is" is very useful in everyday life, but very ambiguous per se.
"Is" or "exist" depends on the theory chosen. Something can exist
ontologically, or epistemologically.

>
>>
>>
>>
>>>> You confuse proving p, which can be explained in arithmetic, and
>>>> "proving p & p is true", which can happen to be true for a machine,
>>>> but escapes necessarily its language.
>>>> The same for consciousness. It cannot be explained in *any* third
>>>> person terms. But it can be proved that self-observing machine
>>>> cannot
>>>> avoid the discovery of many things concerning them which are beyond
>>>> language.
>>
>>> I think that are confusing p with a reality rather than a logical
>>> idea
>>> about reality.
>>
>> p refers to reality by definition. "p" alone is for "it is the case
>> that p".
>
> But it isn't the case, it's the idea of it being the case.


It is the case that 17 is prime, independently of whether it is the
case that such or such a human has the idea that it is the case that 17
is prime. You are confusing levels.


> You're just
> saying 'Let p ='. It doesn't mean proposition that has any causal
> efficacy.

The fact that 17 is prime has causal efficacy. It entails many facts.

>
>>
>>> I have no reason to believe that a machine can observe
>>> itself in anything more than a trivial sense.
>>
>> It needs a diagonalization. It can't be completely trivial.
>
> Something is aware of something, but it's just electronic components
> or bricks on springs or whatever being aware of the low level physical
> interactions.

A machine/program/number can be aware of itself (1-person) without
knowing anything about its 3p lower level.

>
>>
>>> It is not a conscious
>>> experience, I would guess that it is something like an accounting of
>>> unaccounted-for function terminations. Proximal boundaries. A
>>> silhouette of the self offering no interiority but an
>>> extrapolation of
>>> incomplete 3p data. That isn't consciousness.
>>
>> Consciousness is not just self-reference. It is true self-reference.
>> It belongs to the intersection of truth and self-reference.
>
> It's more than that too though. Many senses can be derived from
> consciousness, true self-reference is neither necessary nor
> sufficient. I think that the big deal about consciousness is not that
> it has true self-reference but that it is able to care about itself
> its world that a non-trivial, open ended, and creative way. We can
> watch a movie or have a dream and lose self-awareness without being
> unconscious. Deep consciousness is often characterized by
> unselfconscious awareness.

This is not excluded by the definition I gave.

I am not sure. I don't see the relevance of that mechanist point.

>
>>
>>> Consciousness does nothing to speed decisions, it would only cost
>>> processing overhead
>>
>> That's why high animals have larger cortex.
>
> Their decisions are no faster than simpler animals.

Complex decisions are made possible, and are done faster.


>
>>
>>> and add nothing to the efficiency of unconscious
>>> adaptation.
>>
>> So, why do you think we are conscious?
>
> I think that humans have developed a greater sensorimotive capacity


I still don't know what you mean by that. You can replace
"sensorimotive" by "acquainted to the son of God" in all your arguments
without them having a different meaning or persuasive force.

> as
> a virtuous cycle of evolutionary circumstance and subjective
> investment. Just as hardware development drives software development
> and vice versa. It's not that we are conscious as opposed to
> unconscious, it's that our awareness is hypertrophied from particular
> animal motives being supported by the environment and we have
> transformed our environment to enable our motives. Our seemingly
> unique category of consciousness can either be anthropic prejudice or
> objective fact, but either way it exists in a context of many other
> kinds of awareness. The question is not why we are conscious, it is
> why is consciousness possible and/or why are we human.

Why we are human is easily explained, or not explainable, as an
indexical geographical fact, by comp. It is like "why am I the one in
W and not in M?". Comp explains why consciousness is necessary. It is
the way we feel when quickly integrating huge amounts of information
into a personal scenario.

> To the former,
> the possibility is primordial, and the latter is a matter of
> probability and intentional efforts.
>
>>
>>
>>
>>>> Consciousness is not explainable in term of any parts of something,
>>>> but as an invariant in universal self-transformation.
>>>> If you accept the classical theory of knowledge, then Peano
>>>> Arithmetic
>>>> is already conscious.
>>
>>> Why and how does universal self-transformation equate to
>>> consciousness?
>>
>> I did not say that. I said that consciousness is a fixed point for a
>> very peculiar form of self-transformation.
>
> what makes it peculiar?

The computer science details of its implementation (not of
consciousness, but of the self-transformation, based on some
application of Kleene's theorem).
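To make the Kleene fixed point concrete: a quine, a program whose output is its own source code, is the simplest instance of the recursion theorem. A two-line Python sketch (comments omitted so that the output matches the source exactly):

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints exactly those two lines: the program is a fixed point of the "print your own source" transformation, which is the kind of self-reference the recursion theorem guarantees for every computable transformation.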


>
>>
>>> Anything that is conscious can also be unconscious. Can
>>> Peano Arithmetic be unconscious too?
>>
>> Yes. That's possible if you accept that consciousness is a logical
>> descendent of consistency.
>
> Aren't the moons of Saturn consistent?

The material moons are not programs, nor theories. "Consistent" cannot
apply to them without stretching the word a lot.


> Will consciousness logically
> descend from their consistency?

Yes, if ever the moons have to become conscious; no, if this does not
have to happen. There is little chance moons become conscious, for
they are not self-moving and have very few degrees of freedom.

>
>> It follows then from the fact that
>> consistency entails the consistency of inconsistency (Gödel II). Of
>> course, the reality is more complex, for consciousness is only
>> approximated by the instinctive (unconscious) inductive inference of
>> self-consistency.
>
> You need some kind of awareness to begin with to tell the difference
> between consistency and inconsistency.

Not necessarily. Checking inconsistency does not require a lot of
cognitive ability.
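A minimal sketch of what such a check involves, with "inconsistency" read as asserting both p and not-p in a finite list of signed propositions (my toy encoding, just for illustration):

```python
# A literal is ("p", True) for p, or ("p", False) for not-p.
def inconsistent(literals):
    """Return True iff some proposition is asserted both ways."""
    seen = {}
    for name, value in literals:
        if name in seen and seen[name] != value:
            return True  # p and not-p both asserted: contradiction
        seen[name] = value
    return False

print(inconsistent([("p", True), ("q", False), ("p", False)]))  # True
```

Nothing in the loop requires anything one would call cognition; it is bookkeeping.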

I was just alluding to the fact that replication, although not
providing Turing universality, does so in company of the while loop.
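One standard way to see this, reading "replication" as bounded repetition (my gloss): programs built only from bounded (for-style) loops compute exactly the primitive recursive functions, and the Ackermann function is total but not primitive recursive. Yet a single unbounded while loop with an explicit stack computes it:

```python
def ackermann(m: int, n: int) -> int:
    """Ackermann's function, iteratively: total, but not primitive
    recursive, so it needs genuinely unbounded iteration."""
    stack = [m]
    while stack:  # the unbounded loop doing all the work
        m = stack.pop()
        if m == 0:
            n += 1                # A(0, n) = n + 1
        elif n == 0:
            stack.append(m - 1)   # A(m, 0) = A(m - 1, 1)
            n = 1
        else:
            stack.append(m - 1)   # A(m, n) = A(m - 1, A(m, n - 1))
            stack.append(m)
            n -= 1
    return n

print(ackermann(2, 3))  # 9
```

Adding the while loop to bounded repetition is exactly what tips the formalism into Turing universality.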

>
>>
>>
>>
>>>>>> are fine things, but I don't see why
>>>>>> must one confuse replicators with perception. Perception can
>>>>>> exist by
>>>>>> itself merely on the virtue of passing information around and
>>>>>> processing
>>>>>> it. Replicators can also exist due similar reasons, but on a
>>>>>> different
>>>>>> level.
>>
>>>>> Perception has never existed 'by itself'. Perception only occurs
>>>>> in
>>>>> living organisms who are informed by their experience.
>>
>>>> The whole point is to explain terms like "living", "conscious",
>>>> etc.
>>>> You take them as primitive, so are escaping the issue.
>>
>>> They aren't primitive, the symmetry is primitive.
>>
>> ?
>
> Conscious and unconscious are aspects of the inherent subject-object
> symmetry of the universe.

Which you assume.


>
>>
>>
>>
>>>>> There is no
>>>>> independent disembodied 'information' out there. There detection
>>>>> and
>>>>> response, sense and motive of physical wholes.
>>
>>>> Same for "physical" (and that's not obvious!).
>>
>>> Do you doubt that if all life were exterminated that planets would
>>> still exist? Where would information be though?
>>
>> In the arithmetical relation, which truth are independent of me.
>> (I indulge in answering by staying in the frame of my working
>> hypothesis without repeating this).
>
> Why isn't arithmetic truth physical?

Because it does not rely on any physical notion. You can do number
theory without ever doing physics.

>
>>
>>
>>
>>>>> Sorry, but I think it's never going to happen. Consciousness is
>>>>> not
>>>>> digital.
>>
>>>> If you survive with a digital brain, then consciousness is
>>>> necessarily
>>>> not digital.
>>>> A brain is not a maker of consciousness. It is only a stable
>>>> pattern
>>>> making it possible (or more probable) that a person can manifest
>>>> itself relatively to some universal number(s).
>>
>>> Why not just use adipose tissue instead? That's a more stable
>>> pattern.
>>> Why have a vulnerable concentration of this pattern in the head? Our
>>> skeleton would make a much safer place four a person to manifest
>>> itself relatively to some universal number.
>>
>> Write a letter to nature for geographical reclamation.
>
> Funny but avoiding a serious problem of comp. Why not have some
> creatures with smart skulls or shells and stupid soft parts inside? It
> seems to be a strong indicator of material properties consistently
> determining mechanism and not the other way around.

Seeming is deceptive.

>
>>
>>
>>
>>>> Keep in mind that comp makes materialism wrong.
>>
>>> That's not why it's wrong. I have no problem with materialism being
>>> wrong, I have a problem with experience being reduced to non
>>> experience or non sense.
>>
>> This does not happen in comp. On the contrary machines can already
>> explain why that does not happen. Of course you need to believe that
>> arithmetical truth makes sense. But your posts illustrate that you
>> do.
>
> Arithmetical truth does make sense, definitely, but so do other kinds
> of experiences make sense and are not arithmetic truths.

If they are conceptually rich enough, you can take them instead of
arithmetic, without changing anything in the explanation of
consciousness and matter. I use numbers because people are more
familiar with them.


>
>>
>>
>>
>>>> The big picture is
>>>> completely different. I think that you confuse comp, with its
>>>> Aristotelian version where computations seems to be incarnated by
>>>> physical primitive materials. Comp + materialism leads to person-
>>>> nihilism, so it is important to understand that comp should not be
>>>> assumed together with materialism (even weak).
>>
>>> I don't think that I am confusing it. Comp is perfectly
>>> illustrated as
>>> modern investment banking. There is no material, in fact it
>>> strangles
>>> the life out of all materials, eviscerating culture and
>>> architecture,
>>> all in the name of consolidating digitally abstracted control of
>>> control. This is machine intelligence. The idea of unexperienced
>>> ownership as an end unto itself, forever concentrating data and
>>> exporting debt.
>>
>> Only in your reductionist appraisal of comp. That is widespread and
>> dangerous indeed, but you add to the grains of it, imo.
>>
>
> Investment banking is just an example, I'm not trying to reduce comp
> to that, but the example is defensible. Investment banking is almost
> pure comp, is it not?

If you deposit your Gödel number code at the bank, or something like
that. You stretch the meaning of comp, which is just the bet that our
body is Turing emulable and that we can survive through any of its
Turing emulation.


> All of those Wall Street quants... where is the
> theology and creativity?

It has been buried by the materialists for 1500 years.


>
>>
>>
>>>>> We are able to extend and augment our neurological capacities (we
>>>>> already are) with neuromorphic devices, but ultimately we need our
>>>>> own
>>>>> brain tissue to live in.
>>
>>>> Why? What does that mean?
>>
>>> It means that without our brain, there is no we.
>>
>> That's not correct.
>
> What makes you think that?

There is no ontological brain, yet we are.

>
>>
>>> We cannot be
>>> simulated anymore than water or fire can be simulated.
>>
>> Why? That's a strong affirmation. We have not yet find a phenomenon
>> in
>> nature that cannot be simulated (except the collapse of the wave,
>> which can still be Turing 1-person recoverable).
>
> You can't water a real plant with simulated water or survive the
> arctic burning virtual coal for heat.

What is a real plant? A plant is epistemologically real relatively to
you and your most probable computations. It is not an absolute notion.

> If you look at substitution
> level in reverse, you will see that it's not a matter of making a
> plastic plant that acts so real we can't tell the difference, it's a
> description level which digitizes a description of a plant rather than
> an actual plant. Nothing has been simulated, only imitated. The
> difference is that an imitation only reminds us of what is being
> imitated but a simulation carries the presumption of replacement.

This makes things more complex than they might be.


>
>>
>>> Human
>>> consciousness exists nowhere but through a human brain.
>>
>> Not at all. Brain is a construct of human consciousness, which has
>> some local role.
>> You are so much Aristotelian.
>>
>
> If you say that human consciousness exists independently of a human
> brain, you have to give me an example of such a case.

UDA shows that you are an example of this.

>
>>
>>
>>>>> We, unfortunately cannot be digitized,
>>
>>>> You don't know that. But you don't derive it either from what you
>>>> assume (which to be franc remains unclear)
>>
>>> I do derive it, because the brain and the self are two parts of a
>>> whole. You cannot export the selfness into another form, because the
>>> self has no form, it's only experiential content through the
>>> interior
>>> of a living brain.
>>
>> That's the 1-self, but it is just an interface between truth and
>> relative bodies.
>
> Truth is just an interface between all 1-self and all relative bodies.

In which theory? This does not make sense.


>
>>
>>
>>
>>>> .
>>>> I think that you have a reductionist conception of machine, which
>>>> was
>>>> perhaps defensible before Gödel 1931 and Turing discovery of the
>>>> universal machine, but is no more defensible after.
>>
>>> I know that you think that, but you don't take into account that I
>>> started with with that. I read Gödel, Escher, Bach around 1980 I
>>> think. Even though I couldn't get too much into the math, I was
>>> quite
>>> happy with the implications of it. For the next 25 years I believed
>>> that the universe was made of 'patterns' - pretty close to what your
>>> view is.
>>
>> Not really. The physical universe is not made of any patterns. Nor is
>> it made of anything. It is a highly complex structure which appears
>> in
>> first person plural shared dreams.
>
> That's what I'm saying. 'Structure' = pattern.
>
>> You might, like many, confuse
>> digital physics (which does not work) and comp.
>> "I am a machine" makes it impossible for both my consciousness, and
>> my
>> material body to be Turing emulable.
>
> But your material body is Turing emulable (or rather, Turing
> imitatable).

At the comp subst level: imitable is emulable. You seem to lower that
level in the infinite.


>
>> I agree that this is counter-
>> intuitive, and that's why I propose a reasoning, and I prefer that
>> people grasp the reasoning than pondering at infinitum on the results
>> without doing the needed (finite) work.
>>
>>> It's only been in the last 7 years that I have found a better
>>> idea. My hypothesis is post-Gödelian symmetry.
>>
>> You have to elaborate a lot. You should study first order logical
>> language to be sure no trace of metaphysical implicit baggage is put
>> in your theory; in case you want scientists trying to understand what
>> you say.
>
> My whole point is revealing a universe description in which logic and
> direct experience coexist in many ways. Limiting it to logical
> language defeats the purpose,

That's what the machine can already explain. You consider it as a
zombie.

> although I would love to collaborate
> with someone who was interested in formalizing the ideas.

Convince people that there is an idea. But by insisting that your
ideas contradict comp, you shoot your own theory in the foot, because
you add magic where the comp theories explain the appearance of the
magic without introducing it at the start.

> Logic is a
> 3p language - a mechanistic, involuntary form of reasoning which
> denies the 1p subject any option but to accept it.

This is false. The right sides of the hypostases with "& p" are
provably beyond language, at the level the machine can live.

> The 1p experience
> is exactly the opposite of that. It is a 'seems like' affair which
> invites or discourages voluntary participation of the subject. Half of
> the universe is made of this.

With comp, it is the main part of the "universe".

Bruno

http://iridia.ulb.ac.be/~marchal/

Bruno Marchal

unread,
Jan 30, 2012, 8:56:57 AM
to everyth...@googlegroups.com
Russell and Whitehead thought, like most of their contemporaries, that elementary arithmetic can be deduced from logic. Today we know that it is not the case, and we separate clearly the logical axioms (the deduction means that we allow), and the non logical axioms, which defines and assumes the non logical object we want to talk about (numbers, strings, programs, sets, functions, etc.).

Bruno



Bruno Marchal

unread,
Jan 30, 2012, 9:13:14 AM
to everyth...@googlegroups.com
On 30 Jan 2012, at 00:06, meekerdb wrote:

On 1/29/2012 6:54 AM, Bruno Marchal wrote:
There is a huge amount of
evidence along these lines that consciousness does not in fact
supervene on the physical brain.

No, there is a huge number of anecdotes.


But it is proved in the comp theory.

Bruno

I would say that your argument is that consciousness does not supervene on a *fundamental* physical brain.  I don't think it shows that consciousness can exist without there also being a physical environment in which it exists, even if the physical is not fundamental but is part of the same computations. 

I think you might be right, but this is not quite clear for me. Consciousness might make sense only in the relative matter/mind context, which arises logically, and "simultaneously", from arithmetic, like all "hypostases", once a machine bets on arithmetical induction.



In any case I don't see that it supports NDE's against more mundane explanations.

The mundane explanation uses comp + materialism, and so is inconsistent, or it eliminates consciousness, and thus NDEs, and all first-personal experiences.

Bruno


Craig Weinberg

unread,
Jan 30, 2012, 3:12:36 PM
to Everything List
On Jan 30, 5:09 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 29 Jan 2012, at 03:20, Craig Weinberg wrote:
>
> > How do you know that they 'occur' in the computations rather than in
> > the eye of the beholder of the computations?
>
> The beholder of the computations is supported by the computations.
> Those exist independently of me, in the same way numbers are prime or
> not independently of me.

How would you know that they exist at all? Many people feel the same
way about God.

>
>
>
> >>>>> There is nothing in the universe
>
> >>>> The term universe is ambiguous.
>
> >>> Only in theory. I use it in a literal, absolutist way.
>
> >> This does not help to understand what you mean by "universe".
>
> > Universe means 'all that is' in every context.
>
> But "all that is" is what we are searching, testing, studying. The
> word "is" is very useful in everyday life, but very ambiguous per se.
> "Is" or "exist" depends on the theory chosen. Something can exist
> ontologically, or epistemologically.

As long as it is something to something, then it 'is'. There is
nothing that it is not, as long as sense is respected. Unicorns are
not part of the universe as far as we know, but the idea of unicorns
is certainly part of the human universe and therefore the universe.

>
>
>
> >>>> You confuse proving p, which can be explained in arithmetic, and
> >>>> "proving p & p is true", which can happen to be true for a machine,
> >>>> but escapes necessarily its language.
> >>>> The same for consciousness. It cannot be explained in *any* third
> >>>> person terms. But it can be proved that self-observing machine
> >>>> cannot
> >>>> avoid the discovery of many things concerning them which are beyond
> >>>> language.
>
> >>> I think that are confusing p with a reality rather than a logical
> >>> idea
> >>> about reality.
>
> >> p refers to reality by definition. "p" alone is for "it is the case
> >> that p".
>
> > But it isn't the case, it's the idea of it being the case.
>
> It is the case that 17 is prime, independently of whether it is the
> case that such or such a human has the idea that it is the case that
> 17 is prime. You are confusing levels.

17 is only prime in a symbolic system that defines primeness,
enumeration, and division of whole integers that way. Internal
consistency of the rules of a game, even a universal game, does not
make the game independent of players. The rules arise from the
players' interactions with each other, and that interaction is the
game. Comp says that there are disembodied rules that assemble
themselves mechanically as games which then dream they are separate
players.

>
> > You're just
> > saying 'Let p ='. It doesn't mean proposition that has any causal
> > efficacy.
>
> The fact that 17 is prime has causal efficacy. It entails many facts.

It entails only arithmetic facts, but there is nothing to say that
arithmetic by itself causes anything outside of arithmetic. Even
within arithmetic, it is the execution of a program or function by a
mind or body, that is through energy exerted within matter, which
produces causes.

>
>
>
> >>> I have no reason to believe that a machine can observe
> >>> itself in anything more than a trivial sense.
>
> >> It needs a diagonalization. It can't be completely trivial.
>
> > Something is aware of something, but it's just electronic components
> > or bricks on springs or whatever being aware of the low level physical
> > interactions.
>
> A machine/program/number can be aware of itself (1-person) without
> knowing anything about its 3p lower level.

We don't really know that machine/program/number can be aware of
anything. It may only be material interpreters which are aware of
anything and the degree to which they are aware of 1p and 3p may be
inversely proportional to their complexity. Being fantastically
complex, we are aware of only some of our 1p and 3p self. Simpler
organisms or particles may in fact have awareness of 100% of their 1p
and 3p selves.
>
>
>
> >>> It is not a conscious
> >>> experience, I would guess that it is something like an accounting of
> >>> unaccounted-for function terminations. Proximal boundaries. A
> >>> silhouette of the self offering no interiority but an
> >>> extrapolation of
> >>> incomplete 3p data. That isn't consciousness.
>
> >> Consciousness is not just self-reference. It is true self-reference.
> >> It belongs to the intersection of truth and self-reference.
>
> > It's more than that too though. Many senses can be derived from
> > consciousness, true self-reference is neither necessary nor
> > sufficient. I think that the big deal about consciousness is not that
> > it has true self-reference but that it is able to care about itself
> > its world that a non-trivial, open ended, and creative way. We can
> > watch a movie or have a dream and lose self-awareness without being
> > unconscious. Deep consciousness is often characterized by
> > unselfconscious awareness.
>
> This is not excluded by the definition I gave.

How does caring and creating follow from true self-reference? A camera
that recognizes itself in a mirror would not automatically care about
something or become conscious.
I'm saying the complexity of the immune system suggests that complex
function does not necessarily give rise to consciousness.

>
>
>
> >>> Consciousness does nothing to speed decisions, it would only cost
> >>> processing overhead
>
> >> That's why high animals have larger cortex.
>
> > Their decisions are no faster than simpler animals.
>
> Complex decisions are made possible, and are done faster.

That only requires more processing power, not consciousness.

>
>
>
> >>> and add nothing to the efficiency of unconscious
> >>> adaptation.
>
> >> So, why do you think we are conscious?
>
> > I think that humans have developed a greater sensorimotive capacity
>
> I still don't know what you mean by that. You can replace
> "sensorimotive" by "acquainted to the son of God" in all your argument
> without them having a different meaning or persuasive force.

Sensorimotive is the interior view of electromagnetism.
Electromagnetism is orderly dynamic changes in material objects across
space relative to each other, sensorimotivation is the perception of
change through time in subjective experience relative to one's self.
Like electromagnetism is electricity and magnetism, sensorimotivation
is sensation and motive. They correspond to receiving of sense
experience (sensation) and embodying and projecting an intention
(motive).

>
> > as
> > a virtuous cycle of evolutionary circumstance and subjective
> > investment. Just as hardware development drives software development
> > and vice versa. It's not that we are conscious as opposed to
> > unconscious, it's that our awareness is hypertrophied from particular
> > animal motives being supported by the environment and we have
> > transformed our environment to enable our motives. Our seemingly
> > unique category of consciousness can either be anthropic prejudice or
> > objective fact, but either way it exists in a context of many other
> > kinds of awareness. The question is not why we are conscious, it is
> > why is consciousness possible and/or why are we human.
>
> Why we are human is easily explained, or not-explainable, as an
> indexical geographical fact, by comp. It is like "why am I the one in
> W and not in M?". Comp explains why consciousness is necessary. It is
> the way we feel when integrating quickly huge amount of information in
> a personal scenario.

'the way we feel' doesn't relate to information though. Where is the
feeling located? In the information, in the informed, or somewhere
else? I say that there is literally no information; all of the
experience is located in the world of the informed - which is a
concretely real world, even though its realism is multivalent, so that
it is literally real in some senses and figuratively real in other
senses.

>
> > To the former,
> > the possibility is primordial, and the latter is a matter of
> > probability and intentional efforts.
>
> >>>> Consciousness is not explainable in term of any parts of something,
> >>>> but as an invariant in universal self-transformation.
> >>>> If you accept the classical theory of knowledge, then Peano
> >>>> Arithmetic
> >>>> is already conscious.
>
> >>> Why and how does universal self-transformation equate to
> >>> consciousness?
>
> >> I did not say that. I said that consciousness is a fixed point for a
> >> very peculiar form of self-transformation.
>
> > what makes it peculiar?
>
> The computer science details of its implementation (not of
> consciousness, but of the self-transformation, based on some
> application of Kleene's theorem).
>
>
>
> >>> Anything that is conscious can also be unconscious. Can
> >>> Peano Arithmetic be unconscious too?
>
> >> Yes. That's possible if you accept that consciousness is a logical
> >> descendent of consistency.
>
> > Aren't the moons of Saturn consistent?
>
> The material moons are not programs, nor theories. "consistent" cannot
> apply to it without stretching the words a lot.

Why aren't they programs? They undergo tremendous logical change over
time. Why discriminate against moons? I don't see any stretch at all
in calling them consistent. You could set a clock by their orbits.

>
> > Will consciousness logically
> > descend from their consistency?
>
> If ever the moon have to become conscious. Yes. No if this has not to
> happen. There is few chance moons becomes conscious, for they are not
> self-moving and have very few degrees of freedom.

Computers are 'solid state' though? Moons have all kinds of geological
changes going on over thousands of years.

>
>
>
> >> It follows then from the fact that
> >> consistency entails the consistency of inconsistency (Gödel II). Of
> >> course, the reality is more complex, for consciousness is only
> >> approximated by the instinctive (unconscious) inductive inference of
> >> self-consistency.
>
> > You need some kind of awareness to begin with to tell the difference
> > between consistency and inconsistency.
>
> Not necessarily. Checking inconsistency does not require a lot of
> cognitive ability.

It does necessarily require awareness of some kind. Something has to
detect something and know how to expect and interpret a 'difference'
in that detection. Cognition has nothing to do with it. That's much
higher up the mountain, in true vs false land. Consistency is only
same v different.
I was just saying that while loops and replication don't imply the
generation of feeling.
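
For reference, the Gödel II step invoked in the quoted passage ("consistency entails the consistency of inconsistency") has a standard precise form, added here as an editorial aside:

```latex
% Gödel's second incompleteness theorem (standard statement):
% if T is a consistent, recursively axiomatizable extension of PA, then
T \nvdash \mathrm{Con}(T)
% and hence T + \neg\mathrm{Con}(T) is itself consistent: a consistent
% theory cannot rule out its own inconsistency.
```
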

>
>
>
> >>>>>> are fine things, but I don't see why
> >>>>>> must one confuse replicators with perception. Perception can
> >>>>>> exist by
> >>>>>> itself merely on the virtue of passing information around and
> >>>>>> processing
> >>>>>> it. Replicators can also exist due similar reasons, but on a
> >>>>>> different
> >>>>>> level.
>
> >>>>> Perception has never existed 'by itself'. Perception only occurs
> >>>>> in
> >>>>> living organisms who are informed by their experience.
>
> >>>> The whole point is to explain terms like "living", "conscious",
> >>>> etc.
> >>>> You take them as primitive, so are escaping the issue.
>
> >>> They aren't primitive, the symmetry is primitive.
>
> >> ?
>
> > Conscious and unconscious are aspects of the inherent subject-object
> > symmetry of the universe.
>
> Which you assume.

What choice do I have? My only experience of the universe is 100%
definable by the subject-object symmetry.

>
>
>
> >>>>> There is no
> >>>>> independent disembodied 'information' out there. There is detection
> >>>>> and response, sense and motive of physical wholes.
>
> >>>> Same for "physical" (and that's not obvious!).
>
> >>> Do you doubt that if all life were exterminated that planets would
> >>> still exist? Where would information be though?
>
> >> In the arithmetical relations, whose truth is independent of me.
> >> (I indulge in answering by staying in the frame of my working
> >> hypothesis without repeating this).
>
> > Why isn't arithmetic truth physical?
>
> Because it does not rely on any physical notion. You can do number
> theory without ever doing physics.

But you can't do number theory without a physical subject doing the
theorizing. It's coming out of a sugar burning skull monster...some
kind of horrible snotty giant blood-walnut that has taught itself to
make the monkey body do these odd scribbles on whiteboards. You don't
need to do physics, but something has to do physics (and chemistry,
biology, zoology, neurology, anthropology...) for you to do anything.

>
>
>
> >>>>> Sorry, but I think it's never going to happen. Consciousness is
> >>>>> not
> >>>>> digital.
>
> >>>> If you survive with a digital brain, then consciousness is
> >>>> necessarily
> >>>> not digital.
> >>>> A brain is not a maker of consciousness. It is only a stable
> >>>> pattern
> >>>> making it possible (or more probable) that a person can manifest
> >>>> itself relatively to some universal number(s).
>
> >>> Why not just use adipose tissue instead? That's a more stable
> >>> pattern.
> >>> Why have a vulnerable concentration of this pattern in the head? Our
> >>> skeleton would make a much safer place for a person to manifest
> >>> itself relatively to some universal number.
>
> >> Write a letter to nature for geographical reclamation.
>
> > Funny but avoiding a serious problem of comp. Why not have some
> > creatures with smart skulls or shells and stupid soft parts inside? It
> > seems to be a strong indicator of material properties consistently
> > determining mechanism and not the other way around.
>
> Seeming is deceptive.

What would be an explanation, or counterfactual?

>
>
>
> >>>> Keep in mind that comp makes materialism wrong.
>
> >>> That's not why it's wrong. I have no problem with materialism being
> >>> wrong, I have a problem with experience being reduced to non
> >>> experience or non sense.
>
> >> This does not happen in comp. On the contrary machines can already
> >> explain why that does not happen. Of course you need to believe that
> >> arithmetical truth makes sense. But your posts illustrate that you
> >> do.
>
> > Arithmetical truth does make sense, definitely, but so do other kinds
> > of experiences make sense and are not arithmetic truths.
>
> If they are conceptually rich enough, you can take them instead of
> arithmetic, without changing anything in the explanation of
> consciousness and matter. I use numbers because people are more
> familiar with them.

I use sense because it makes more sense.
Isn't that what money is really all about now though? Instead of a
body, we have accounts. You can't get more Turing emulable than that.
It's practically Turing-maniacal.

>
> > All of those Wall Street quants... where is the
> > theology and creativity?
>
> It has been buried by the materialists for 1500 years.

60% of the stock trades in the US markets are automated. I would say
that makes AI the dominant financial decision maker in the world.

>
>
>
> >>>>> We are able to extend and augment our neurological capacities (we
> >>>>> already are) with neuromorphic devices, but ultimately we need our
> >>>>> own
> >>>>> brain tissue to live in.
>
> >>>> Why? What does that mean?
>
> >>> It means that without our brain, there is no we.
>
> >> That's not correct.
>
> > What makes you think that?
>
> There is no ontological brain, yet we are.
>

Aren't we the ontological brain already?

>
> >>> We cannot be
> >>> simulated anymore than water or fire can be simulated.
>
> >> Why? That's a strong affirmation. We have not yet found a phenomenon
> >> in
> >> nature that cannot be simulated (except the collapse of the wave,
> >> which can still be Turing 1-person recoverable).
>
> > You can't water a real plant with simulated water or survive the
> > arctic burning virtual coal for heat.
>
> What is a real plant? A plant is epistemologically real relatively to
> you and your most probable computations. It is not an absolute notion.

It might be an absolute notion. At my level of description it is a
plant, at another it's tissues, cells, molecules, etc. Anything that
satisfies all of those descriptions within all of those perceptual
frames may be a real plant. If it only looks like a plant, then it's a
cartoon or a puppet.

>
> > If you look at substitution
> > level in reverse, you will see that it's not a matter of making a
> > plastic plant that acts so real we can't tell the difference, it's a
> > description level which digitizes a description of a plant rather than
> > an actual plant. Nothing has been simulated, only imitated. The
> > difference is that an imitation only reminds us of what is being
> > imitated but a simulation carries the presumption of replacement.
>
> This makes things more complex than they might be.

It makes more sense though. Otherwise we would have movies that we
could literally live inside of already.

>
>
>
> >>> Human
> >>> consciousness exists nowhere but through a human brain.
>
> >> Not at all. Brain is a construct of human consciousness, which has
> >> some local role.
> >> You are so much Aristotelian.
>
> > If you say that human consciousness exists independently of a human
> > brain, you have to give me an example of such a case.
>
> UDA shows that you are an example of this.

But drinking some scotch or smoking a cigar tells me that I am not
independent of my brain.

>
>
>
> >>>>> We, unfortunately cannot be digitized,
>
> >>>> You don't know that. But you don't derive it either from what you
> >>>> assume (which to be frank remains unclear)
>
> >>> I do derive it, because the brain and the self are two parts of a
> >>> whole. You cannot export the selfness into another form, because the
> >>> self has no form, it's only experiential content through the
> >>> interior
> >>> of a living brain.
>
> >> That's the 1-self, but it is just an interface between truth and
> >> relative bodies.
>
> > Truth is just an interface between all 1-self and all relative bodies.
>
> In which theory? This does not make sense.

It's an implication of multisense realism. Truth (a kind of Sense) is
an interface between all 1-self (sensorimotive experiences) and all 3-
p relative bodies (electromagnetic objects). It is the synchronization
of interior dreams and external bodies.
The subst level is proportional to the distance (literal and
figurative) from the self. (You should like this actually?) The more
distant from the self - say looking at a map of the Earth, the higher
the subst level is. Any old substrate for the map will do. The closer
you get to the self, the subst level gets exponentially lower.

There may be a mirror image of the uncanny valley involved. A
'character spike' so to speak, where people enjoy watching a person
act like a robot, statue, mime, or other starchy, would-be dehumanized
character. There is certainly something comedic about it. Like when
the uncanny valley drops off, when the character is taken too far and
becomes too convincing for too long, the substitution level becomes
uncomfortably high and we begin to wonder if there is something really
wrong with them (the Andy Kaufman valley).

>
>
>
> >> I agree that this is counter-
> >> intuitive, and that's why I propose a reasoning, and I prefer that
> >> people grasp the reasoning than pondering ad infinitum on the results
> >> without doing the needed (finite) work.
>
> >>> It's only been in the last 7 years that I have found a better
> >>> idea. My hypothesis is post-Gödelian symmetry.
>
> >> You have to elaborate a lot. You should study first order logical
> >> language to be sure no trace of metaphysical implicit baggage is put
> >> in your theory; in case you want scientists trying to understand what
> >> you say.
>
> > My whole point is revealing a universe description in which logic and
> > direct experience coexist in many ways. Limiting it to logical
> > language defeats the purpose,
>
> That's what the machine can already explain. You consider it as a
> zombie.

Not a zombie, a puppet.

>
> > although I would love to collaborate
> > with someone who was interested in formalizing the ideas.
>
> Convince people that there is an idea. But by insisting that your
> ideas contradict comp, you shoot your own theory in the foot, because
> you add magic where the comp theories explain the appearance of the
> magic without introducing it at the start.

Comp introduces magic at the start. 'Arithmetic Truth' is very much a
digital Dreamtime. I don't add any magic and nothing appears except
different levels of sense recapitulation in inertial frames.
Everything in multisense realism works with a universe of only the
typical experiences that we live through every day, plus it explains
why extraordinary experiences are harder to ground in public
certainty.


>
> > Logic is a
> > 3p language - a mechanistic, involuntary form of reasoning which
> > denies the 1p subject any option but to accept it.
>
> This is false. The right side of the hypostases, with "& p", is
> provably beyond language, at the level the machine can live.

You're making my point. The notion of anything being literally false
or true is just what I said: an involuntary form of reasoning. Then
you proceed to deny me, the 1p subject, any option to accept it.

>
> > The 1p experience
> > is exactly the opposite of that. It is a 'seems like' affair which
> > invites or discourages voluntary participation of the subject. Half of
> > the universe is made of this.
>
> With comp, it is the main part of the "universe".

That's why it's a little naive :)

Craig

Pierz

Jan 31, 2012, 7:28:39 AM1/31/12
to Everything List

On Jan 29, 3:44 pm, meekerdb <meeke...@verizon.net> wrote:
> On 1/28/2012 7:05 PM, Pierz wrote:
>
>
>
>
>
> > On Jan 29, 10:57 am, meekerdb<meeke...@verizon.net>  wrote:
> >> On 1/28/2012 3:15 PM, Pierz wrote:
> >> These approaches always end up conflating the two, their
> >> proponents getting annoyed with anyone who isn't prepared to wish away
> >>> the gap between them.
> >> But most people seem to think that the two are linked; that philosophical zombies are
> >> impossible.  Are you asserting that they are possible?
> > Well of course they are linked. As for the problem of zombies, I of
> > course have to agree that they seem absurd. But to me the zombie
> > argument elides the real question, which is the explanation for why
> > there is anyone home to find the zombies absurd. Why aren't zombies
> > having this discussion? In the traditional materialist worldview,
> > there is nothing to explain that. We observe that we aren't, in fact
> > zombies and then the materialist observes that the his/her predictions
> > would be the same if there were no consciousness and so s/he loses
> > interest in the issue and effectively shrugs and says "oh well". But
> > there are some problems, though I expect you'll have little truck with
> > them. I could, for  instance, refer you to a study of near death
> > experiences in the Lancet in which a person in cardiac arrest and
> > flatlining on the EEG was able to report the presence of a pair of
> > sneakers on a high window ledge of the hospital during an OBE which he
> > would have no way of knowing were there. There is a huge amount of
> > evidence along these lines that consciousness does not in fact
> > supervene on the physical brain.
>
> No, there is a huge number of anecdotes.


"Anecdotal evidence" is not an oxymoron. But I am not talking about
pure anecdote, but rather phenomenological studies, such as Grof's
original work (see below).

> http://records.viu.ca/www/ipp/pdf/NDE.pdf

> And when there have been controlled experiments in which signs were placed on high shelves
> in operating rooms those floating NDE's have not been able to read them.

It's amazing the difference in the standard of evidence expected of a
study that purports to refute a phenomenon outside the conventional
paradigm compared to that expected of a study that claims to provide
evidence for it. We know almost nothing about the study mentioned
except that "it found no evidence" and that seems to be sufficient for
you to cite it, because it confirms your prejudices. If it had claimed
to find evidence, you'd be tearing its methodology to shreds, or
rather going on about the total lack of any methodological explanation
in the above paper.

> > Other evidence, for instance, comes
> > from LSD research conducted in the fifties (see Stanislav Grof's
> > work).
>
> The award-winning Dr. Grof? http://www.stanislavgrof.com/pdf/Bronze.Delusional.Boulder_2000.pdf

Ridicule is cheap, and does not constitute an argument. Always an
effective means though of discrediting someone with the courage to
express unconventional views. Grof's original research in the fifties
was purely phenomenological, a documentation of hallucinogenic
experiences. He began as a committed materialist, and only slowly was
forced by his observations to a different position. It is so easy to
pile ridicule on anything that is counter to the current paradigm, but
every new worldview began with someone who wasn't afraid to question
orthodoxy.

It seems to me if you accept the logic of Bruno's UDA, you would be
forced into accepting that consciousness cannot be destroyed, because
it belongs to the mathematical realm not the physical  - so nothing in
Grof's research should be intrinsically absurd. How ludicrous would
Many Worlds theory have appeared to Newtonians? They thought that they
almost had the whole answer, and only a tiny explanatory gap remained
to be closed. Only the gap happened to contain all of quantum theory
and relativity, all of modern physics in fact. Nowadays many
scientists believe the same thing - just a few tiny gaps remain - like
where 80% of the universe's mass is, the explanation for
consciousness, and the fact that our two most fundamental theories of
modern science are inconsistent with one another. Small problems to be
fixed with a few tweaks no doubt!

>
> > Of course there's also vast and incontrovertible evidence that
> > consciousness, under normal conditions, does supervene on brain state
> > and structure, so we are left with an anomaly that in most cases is
> > resolved by denying the evidence of the exceptions. This is not all
> > that hard to do when the evidence is to be found  in consciousnesses
> > of subjects rather than 'instruments' and cannot easily be subjected
> > to controlled experimental trials. But even a single personal
> > experience can override the weightiest scientific authority
>
> So all those sightings of ghosts and Elvis override the theory that the dead don't roam
> around where you can see them.

No, you misunderstand. I am not advocating the uncritical acceptance
of every campfire story. I am talking about the weight that first hand
experience can carry. Some years ago, sleep scientists did not believe
in the existence of so-called lucid dreams - dreams in which the
subject is conscious while inside the dream, knows they are dreaming
and exerts conscious control over the dream process. However the guy
(Stephen LaBerge) who eventually proved that they do exist (he did it
through getting sleeping subjects to perform controlled eye movements
inside their dreams) *knew* they were wrong, because he had them all
the time! In that situation, it was rational to believe something that
science discredited. Scientific method is rightly conservative,
because it must use repeatability etc to establish a body of knowledge
that is as reliable as possible. It is rational however for the
individual to accept a lower standard of evidence in formulating
his or her beliefs in some cases. In the above example, a single
experience of a lucid dream is sufficient to disprove the science for
the individual experiencing it. Working out a way to translate that
into scientific evidence is another thing.

I'll tell you a campfire story of my own. One day my grandmother was
going to drive my mother home across town. We were at my gran's place
at the time and a close friend of mine was present. As they were about
to leave, my friend went suddenly pale. She said "Don't leave! I have
a really bad feeling." She is a super practical, down to earth person
and not given to weird freak outs and anxiety attacks. She was so
insistent about it that my grandmother and mother decided to humour
her. After about 30 minutes she (my friend) said, "It's OK, you can go
now." They went, and were stopped when they turned off the freeway by
a row of police cars. Julian Knight had just shot dead six drivers
from a neighbouring park in what's now called the Hoddle Street
massacre. That was when my mother remembered the dream she'd had the
previous night of driving with my grandmother and saying to her, "get
down, there's shooting."

Now of course this story is supremely unimpressive to you because a) I
might be lying, exaggerating, misremembering, on drugs, mentally ill
etc and b) it's just a random story and very unlikely things must
happen occasionally, right? For me though it's something else. I know
I'm not on drugs/lying crazy etc. As for b) I could write it off as a
truly incredible coincidence if I hadn't seen so many similar types of
things. For instance, I was present with that same friend when she had
another such 'attack'. Out of the blue she was filled with a sudden,
horrible dread and could hardly breathe. As I say, she has no anxiety
disorders and I've never known her to have any such type of panic
attack except on these two occasions. It turned out her best friend
had been killed in a car accident at that moment.

I don't tell you this to persuade, but to make the point that *if* I
was telling the truth, it would be rational in my view for me to
believe that something was at play beyond your "mundane explanation".
I actually don't see anything "supernatural" though. I see something
natural that we don't understand, something that challenges the
material view of mind. It's not scientific evidence, sure, but that
doesn't make it irrational to be persuaded by it.

>
> > - as
> > Galileo looking through the telescope and seeing 'impossible'
> > mountains on the moon. So one can have a personal conviction that
> > 'something is wrong with the conventional view' without necessarily
> > being able to present conceptual or experimental proof for one's
> > conviction. Therefore, I prefer to keep reminding people that
> > something utterly central to their existence - in fact the defining
> > feature to that existence: our awareness of it - remains without an
> > explanation. Even the estimable David Deutsch - arch rationalist and
> > materialist - concedes that we have no explanation for qualia.
>
> Have you ever considered what form such an explanation might take?

Do you think I wouldn't have? When I was a teenager I used to think on
it all the time, and I formed my own kind of theory of panpsychism. I
concluded that rudiments of consciousness must exist in atoms. I don't
know what I think about that any more. It's not in any case an
"explanation", but maybe that's not what we will have in the end.
Whatever ontology we adopt there is always a mystery at the root, some
"it just is". Either that or another turtle stack. Why does the
mathematical platonia exist? It just does. Why does the quantum field
exist? It just does. Or there's something more fundamental that "just
is". So an ontology that accepts consciousness as fundamental is not
*intrinsically* weirder than anything else. It's just unfamiliar and
contrary to a deeply ingrained intellectual habit of the western mind.

Anyway I doubt that it will be an explanation that "explains away".
Deutsch believes there will be an explanation for qualia one day, and
it will help us to build the first truly intelligent, conscious
machines. I don't know about that. He also thinks we could run such an
intelligent program on a PC with today's resources. I'm pretty sure
he's dead wrong on that.

>
> Brent
>
> > We only
> > differ in our belief as to how far-reaching the revisions to our
> > understanding will have to be in order to achieve that explanation.
> > Maybe Bruno has found it, but for the reasons I am trying to explicate
> > in this thread, I'm not convinced yet.
>
> > BTW, while I am with Craig in intuiting a serious conceptual lacuna in
> > the materialist paradigm, that doesn't necessarily enamour me of his
> > alternative. His talk of 'sense making' seems to me more like a 'way

acw

Jan 31, 2012, 8:26:17 AM1/31/12
to everyth...@googlegroups.com

Yet someone doesn't need "too wild" theories to give possible, but
unverifiable explanations to those anecdotes. I'll ignore case a) here
as it's not very interesting and look for what possible explanations you
could have for it. b) seems good, but let's consider the case where such
experiences are more repeatable (from your perspective). If considered
within the context of MWI or COMP, you could conjecture that you only
ever find yourself in the cases where something hadn't happened, such as
your grandmother having a scary dream that led her to stop you, at some
given time, from going somewhere to your death (dreams are especially
good candidates for things that can be influenced by more chaotic
dynamics and noise, such as things going on below the substitution level
or at the quantum level). To put it another
way - you can only experience things consistent with you being alive
(Anthropic principle or more limited forms like Quantum Theory of
Immortality or COMP Theory of Immortality) - it's all one useful
coincidence caused by the law of large numbers, from the laws of
physics/the machines that run you to more deterministic high-level
physics emerging. Of course, since such a coincidence was a complex,
many step event, I'd be willing to think that the relative measure of
one's computations is stacked towards longer (if not even
non-terminating) computations, thus histories which lead you to longer
locally stable physics are much more probable than those which lead to
more non-local unusual continuations, this might very well have to do
with those self-reference laws and whatever machines mostly won the
measure battle for this local physics we have now.

As for the other example, I have no idea why your other friend had that
'attack', I can't see any better explanation for now than just some
confirmation bias on your part.

As an anecdote, I did have a few of my own short brushes with potential
death and got saved by some small, but not too unusual coincidences.
Unlike others which tend to just jump to some organized religion and
praise some magical being for saving them whenever they have a brush
with death (or just very unusual coincidences), I just ended up chalking
them up to slight measure reductions and I hope it didn't lead to too
much sadness for my friends and family in those branches where 'I' didn't
survive (assuming COMP or some MW-like theories).

Stephen P. King

Jan 31, 2012, 8:29:54 AM1/31/12
to everyth...@googlegroups.com
Hi Pierz,

I have had my own share of such experiences and so have several
friends. It is as if consciousness is not limited to "being in a moment"
but can stretch out in time at the price of the experience being of low
resolution. There is a well-known uncertainty relation between a
duration in time and energy, but time is still not a well-understood
concept.
As to the panpsychism that you mention, such would be necessary,
IMHO, for a dualism to be consistent. Building on the Local System
theory of Prof. Hitoshi Kitada, I am conjecturing that any system that
can have its own wavefunction associated with it will be, at some level,
conscious. The problem that we need to overcome is the definition of
what consciousness is once we strip away the anthropomorphic facade. One
thing about explanations, they have to all be consistent with each other
so that we don't end up with a huge crazy-quilt of explanations that
work for one thing but can't be carried into anything else.

Onward!

Stephen

Craig Weinberg

Jan 31, 2012, 12:27:34 PM1/31/12
to Everything List
On Jan 31, 7:28 am, Pierz <pier...@gmail.com> wrote:

> It's amazing the difference in the standard of evidence expected of a
> study that purports to refute a phenomenon outside the conventional
> paradigm compared to that expected of a study that claims to provide
> evidence for it.

Yes, there's the Dutch Study too: http://profezie3m.altervista.org/archivio/TheLancet_NDE.htm

1. Awareness of being dead: 31 (50%)
2. Positive emotions: 35 (56%)
3. Out-of-body experience: 15 (24%)
4. Moving through a tunnel: 19 (31%)
5. Communication with light: 14 (23%)
6. Observation of colours: 14 (23%)
7. Observation of a celestial landscape: 18 (29%)
8. Meeting with deceased persons: 20 (32%)
9. Life review: 8 (13%)
10. Presence of border: 5 (8%)


> Ridicule is cheap, and does not constitute an argument.

An important point (and one made well in that great old movie
'Ridicule')

> They thought that they
> almost had the whole answer, and only a tiny explanatory gap remained
> to be closed. Only the gap happened to contain all of quantum theory
> and relativity, all of modern physics in fact.

Yes. Smug certainty is the cholesterol of the heart of science.

> Scientific method is rightly conservative,
> because it must use repeatability etc to establish a body of knowledge
> that is as reliable as possible. It is rational however for the
> individual to accept a lower standard of evidence in formulating
> his or her beliefs in some cases. In the above example, a single
> experience of a lucid dream is sufficient to disprove the science for
> the individual experiencing it. Working out a way to translate that
> into scientific evidence is another thing.

Right. When you get into studying the thing that makes studying itself
possible, you run into problems if you hold subjective phenomena
to the same standards as objectively measurable phenomena.
If you start out denying subjectivity from the start, then you have no
chance at understanding anything meaningful about subjectivity.

>
> I'll tell you a campfire story of my own.

Yes, I have had a few incontestably precognitive dreams as well. Time
is not uniform because it's not external to our experience. An event
with high significance can warp the fabric of our perception so that
it begins to happen for us figuratively before the literal event. Big
events have a larger, heavier 'now'. Significance is a concrete part
of reality in the cosmos - it is cumulative negentropy.

> I don't tell you this to persuade, but to make the point that *if* I
> was telling the truth, it would be rational in my view for me to
> believe that something was at play beyond your "mundane explanation".
> I actually don't see anything "supernatural" though. I see something
> natural that we don't understand, something that challenges the
> material view of mind. It's not scientific evidence, sure, but that
> doesn't make it irrational to be persuaded by it.

It's as natural and real as gravity, only through time and perception
rather than local to space and objects.

> Do you think I wouldn't have? When I was a teenager I used to think on
> it all the time, and I formed my own kind of theory of panpsychism. I
> concluded that rudiments of consciousness must exist in atoms. I don't
> know what I think about that any more.

I think that's right, but it's also just as correct to say that atoms
must exist in consciousness. They are opposite ends of a single
(involuted or figuratively twisted) continuum.

> It's not in any case an
> "explanation", but maybe that's not what we will have in the end.
> Whatever ontology we adopt there is always a mystery at the root, some
> "it just is". Either that or another turtle stack. Why does the
> mathematical platonia exist? It just does. Why does the quantum field
> exist? It just does. Or there's something more fundamental that "just
> is". So an ontology that accepts consciousness as fundamental is not
> *intrinsically* weirder than anything else. It's just unfamiliar and
> contrary to a deeply ingrained intellectual habit of the western mind.

Right. I go further though to say that sense is actually fundamental
because it cannot be anything else. It is the method by which all
orientation and coherence is anchored because it is the only thing
that can be defined by definition itself. All logic, arithmetic,
experience, matter, phenomena supervenes explicitly and
unconditionally upon the possibility of detection or coherence.

> Anyway I doubt that it will be an explanation that "explains away".
> Deutsch believes there will be an explanation for qualia one day, and
> it will help us to build the first truly intelligent, conscious
> machines. I don't know about that. He also thinks we could run such an
> intelligent program on a PC with today's resources. I'm pretty sure
> he's dead wrong on that.

Qualia is the explanation. That's the problem: our approach is
backwards. You don't explain qualia; qualia explains the universe.

Craig

Bruno Marchal

Jan 31, 2012, 1:25:40 PM1/31/12
to everyth...@googlegroups.com
On 30 Jan 2012, at 21:12, Craig Weinberg wrote:

On Jan 30, 5:09 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
On 29 Jan 2012, at 03:20, Craig Weinberg wrote:

How do you know that they 'occur' in the computations rather than in
the eye of the beholder of the computations?

The beholder of the computations is supported by the computations.
Those exist independently of me, in the same way numbers are prime or
not independently of me.

How would you know that they exist at all?

Because their existence is a theorem in the theory which is assumed.



Many people feel the same
way about God.


We never know if something exists, except our consciousness here and now, but that's all. That is why we assume theories, which assume the existence of the primary objects from which we start to derive theorems, including different sorts of existence.




Universe means 'all that is' in every context.

But "all that is" is what we are searching, testing, studying.  The
word "is" is very useful in everyday life, but very ambiguous per se.
"is" or "exist" depends on the theory chosen. Something can exist
ontologically, or epistemologically.

As long as it is something to something, then it 'is'. There is
nothing that it is not, as long as sense is respected. Unicorns are
not part of the universe as far as we know, but the idea of unicorns
is certainly part of the human universe and therefore the universe.

That is what we have to clarify. You beg the question entirely. I still have no clue what your theory presupposes. Your discursive technique is similar to that of the pseudo-priests and evangelists. I'm sorry.



But it isn't the case, it's the idea of it being the case.

It is the case that 17 is prime, independently of whether it is the case
that such or such human has the idea that it is the case that 17 is
prime. You are confusing levels.

17 is only prime in a symbolic system that defines primeness,
enumeration, and division of whole integers that way.

You confuse arithmetical truth, and theories of arithmetic. The primeness of 17 is not a symbolic truth.
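Bruno's point here can be checked mechanically: whether we name the number in decimal, binary, or hexadecimal, the same primality test gives the same verdict. A small Python sketch (mine, not from the thread; the `is_prime` helper is an assumed trial-division test):

```python
# Hedged illustration (not from the original thread): the primality of 17
# is invariant under the numeral system used to name it, which is one way
# to see that it is a property of the number, not of the symbol "17".

def is_prime(n: int) -> bool:
    """Trial-division primality test."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# The same number named in three symbolic systems: decimal, binary, hex.
names = ["17", "0b10001", "0x11"]
values = [int(s, 0) for s in names]   # base 0 lets the prefix choose the base

assert all(v == 17 for v in values)
assert all(is_prime(v) for v in values)   # the verdict is notation-independent
```

The notation changes; the theorem does not.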




Internal
consistency of the rules of a game, even a universal game, does not make
the game independent of players.

This concerns *all* theories. 



The rules arise from the players
interactions with each other,

Not for definable games. In game theory we can show the existence of games without our knowing any of the game's rules.


and that interaction is the game.

In which theory?



Comp
says that there are disembodied rules that assemble themselves
mechanically as games which then dreams it is separate players.

More or less. 




You're just
saying 'Let p ='. It doesn't mean the proposition has any causal
efficacy.

The fact that 17 is prime has causal efficacy. It entails many facts.

It entails only arithmetic facts, but there is nothing to say that
arithmetic by itself causes anything outside of arithmetic.

But a lot inside, and *from* inside.


Even
within arithmetic, it is the execution of a program or function by a
mind or body, that is through energy exerted within matter, which
produces causes.

What is matter? What is energy? Where does it come from? What is the relation with mind? 
You said that you agree that matter is not primary. So what is your theory?







I have no reason to believe that a machine can observe
itself in anything more than a trivial sense.

It needs a diagonalization. It can't be completely trivial.

Something is aware of something, but it's just electronic components
or bricks on springs or whatever being aware of the low level physical
interactions.

A machine/program/number can be aware of itself (1-person) without
knowing anything about its 3p lower level.

We don't really know that machine/program/number can be aware of
anything.

We just know nothing. That is why we use theories, which are collections of assumptions/hypotheses.


It may only be material interpreters which are aware of
anything and the degree to which they are aware of 1p and 3p may be
inversely proportional to their complexity. Being fantastically
complex, we are aware of only some of our 1p and 3p self. Simpler
organisms or particles may in fact have awareness of 100% of their 1p
and 3p selves.

The idea that simple organisms might be "more conscious" than complex organisms might make sense in the comp theory. What does not make sense is the need for matter.




It is not a conscious
experience, I would guess that it is something like an accounting of
unaccounted-for function terminations. Proximal boundaries. A
silhouette of the self offering no interiority but an
extrapolation of
incomplete 3p data. That isn't consciousness.

Consciousness is not just self-reference. It is true self-reference.
It belongs to the intersection of truth and self-reference.

It's more than that too though. Many senses can be derived from
consciousness, true self-reference is neither necessary nor
sufficient. I think that the big deal about consciousness is not that
it has true self-reference but that it is able to care about itself
and its world in a non-trivial, open-ended, and creative way. We can
watch a movie or have a dream and lose self-awareness without being
unconscious. Deep consciousness is often characterized by
unselfconscious awareness.

This is not excluded by the definition I gave.

How does caring and creating follow from true self-reference? A camera
that recognizes itself in a mirror would not automatically care about
something or become conscious.

A camera cannot recognize itself in a mirror.
If it can, it means it has some brain, in which case it might care and be conscious.




I am not sure. I don't see the relevance of that mechanist point.

I'm saying the complexity of the immune system suggests that complex
function does not necessarily give rise to consciousness.

Yes. But that is trivial. Nobody claimed that consciousness is just complexity.


Consciousness does nothing to speed decisions, it would only cost
processing overhead

That's why higher animals have a larger cortex.

Their decisions are no faster than simpler animals.

Complex decisions are made possible, and are done faster.

That only requires more processing power, not consciousness.


Progress in processing power is bounded by our contingent, slow origins. That is the reason mind exists: it accelerates the processing much more quickly. In fact, just by a software change, the slower machine can always beat the faster machine on almost all inputs, except a finite number of them.








and add nothing to the efficiency of unconscious
adaptation.

So, why do you think we are conscious?

I think that humans have developed a greater sensorimotive capacity

I still don't know what you mean by that. You can replace
"sensorimotive" by "acquainted to the son of God" in all your arguments
without their having a different meaning or persuasive force.

Sensorimotive is the interior view of electromagnetism.

You already told me this, and I asked you what you mean by "interior", "view", and "electromagnetism".


Electromagnetism is orderly dynamic changes in material objects across
space relative to each other, sensorimotivation is the perception of
change through time in subjective experience relative to one's self.
Like electromagnetism is electricity and magnetism, sensorimotivation
is sensation and motive. They correspond to receiving of sense
experience (sensation) and embodying and projecting an intention
(motive).

Theory? Definitions?





as
a virtuous cycle of evolutionary circumstance and subjective
investment. Just as hardware development drives software development
and vice versa. It's not that we are conscious as opposed to
unconscious, it's that our awareness is hypertrophied from particular
animal motives being supported by the environment and we have
transformed our environment to enable our motives. Our seemingly
unique category of consciousness can either be anthropic prejudice or
objective fact, but either way it exists in a context of many other
kinds of awareness. The question is not why we are conscious, it is
why is consciousness possible and/or why are we human.

Why we are human is easily explained, or not-explainable, as an
indexical geographical fact, by comp. It is like "why am I the one in
W and not in M?". Comp explains why consciousness is necessary. It is
the way we feel when quickly integrating huge amounts of information into
a personal scenario.

'the way we feel' doesn't relate to information though. Where is the
feeling located?

Feelings are not the type of thing to which location applies. 


In the information, in the informed, or somewhere
else?

You might say: in the mind of the person. But even this is just a manner of speaking.




Anything that is conscious can also be unconscious. Can
Peano Arithmetic be unconscious too?

Yes. That's possible if you accept that consciousness is a logical
descendent of consistency.

Aren't the moons of Saturn consistent?

The material moons are not programs, nor theories. "Consistent" cannot
apply to them without stretching the word a lot.

Why aren't they programs?

By UDA, which explains why observable matter cannot be a program, but is the a priori non-computable result of infinities of programs.



They undergo tremendous logical change over
time. Why discriminate against moons?

If not being a program is discrimination, then you are the one discriminating against a lot of possibly conscious entities.



I don't see any stretch at all
in calling them consistent. You could set a clock by their orbits.

A clock is still not something to which the consistency predicate applies. Consistency applies only to collections of beliefs.




Will consciousness logically
descend from their consistency?

If ever the moons had to become conscious, yes; if not, no. There is
little chance the moons become conscious, for they are not
self-moving and have very few degrees of freedom.

Computers are 'solid state' though?

With internal write/read and delete subroutines. 



Moons have all kinds of geological
changes going on over thousands of years.

That's poor evidence of thinking.






It follows then from the fact that
consistency entails the consistency of inconsistency (Gödel II). Of
course, the reality is more complex, for consciousness is only
approximated by the instinctive (unconscious) inductive inference of
self-consistency.
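The fact alluded to above can be stated compactly (my gloss, with Con(T) the arithmetized consistency statement of a theory T extending Peano Arithmetic):

```latex
% Gödel's second incompleteness theorem, in the form
% "consistency entails the consistency of inconsistency":
% if T cannot prove Con(T), then adding \neg\mathrm{Con}(T) leaves T consistent.
\mathrm{Con}(T) \;\Longrightarrow\; \mathrm{Con}\bigl(T + \neg\mathrm{Con}(T)\bigr)
```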

You need some kind of awareness to begin with to tell the difference
between consistency and inconsistency.

Not necessarily. Checking inconsistency does not require a lot of
cognitive ability.

It does necessarily require awareness of some kind. Something has to
detect something and know how to expect and interpret a 'difference'
in that detection. Cognition has nothing to do with it. That's much
higher up the mountain, in true-vs-false land. Consistency is only
same vs. different.


?



I was just alluding to the fact that replication, although not
providing Turing universality, does so in company with the while loop.

I was just saying that while loops and replication don't imply the
generation of feeling.

That's the non-comp assumption.




Conscious and unconscious are aspects of the inherent subject-object
symmetry of the universe.

Which you assume.

What choice do I have? My only experience of the universe is 100%
definable by the subject-object symmetry.

?




Why isn't arithmetic truth physical?

Because it does not rely on any physical notion. You can do number
theory without ever doing physics.

But you can't do number theory without a physical subject doing the
theorizing.

In which theory?




 Why not have some
creatures with smart skulls or shells and stupid soft parts inside? It
seems to be a strong indicator of material properties consistently
determining mechanism and not the other way around.

Seeming is deceptive.

What would be an explanation, or counterfactual?

Comp.


Arithmetical truth does make sense, definitely, but other kinds
of experiences also make sense and are not arithmetic truths.

If they are conceptually rich enough, you can take them instead of
arithmetic, without changing anything in the explanation of
consciousness and matter. I use numbers because people are more
familiar with them.

I use sense because it makes more sense.

But sense is what I want to explain; like matter, I cannot assume it in the TOE, although I have to ask people if they agree on some consciousness property, like being invariant under some substitution, to connect the TOE with their own sense.




If you deposit your Gödel number code at the bank, or something like
that. You stretch the meaning of comp, which is just the bet that our
body is Turing emulable and that we can survive through any of its
Turing emulations.

Isn't that what money is really all about now though? Instead of a
body, we have accounts. You can't get more Turing emulable than that.
It's practically Turing-maniacal.


All of those Wall Street quants... where is the
theology and creativity?

It has been buried by the materialists for 1500 years.

60% of the stock trades in the US markets are automated. I would say
that makes AI the dominant financial decision maker in the world.


The problem is not money, nor machines. It is humans, when they steal money, whatever the technological means. 



There is no ontological brain, yet we are.


Aren't we the ontological brain already?


No. Our brains are epistemological. You have to grasp the UDA by yourself to see this.





We cannot be
simulated anymore than water or fire can be simulated.

Why? That's a strong affirmation. We have not yet found a phenomenon
in nature that cannot be simulated (except the collapse of the wave
function, which can still be Turing 1-person recoverable).

You can't water a real plant with simulated water or survive the
arctic burning virtual coal for heat.

What is a real plant? A plant is epistemologically real relative to
you and your most probable computations. It is not an absolute notion.

It might be an absolute notion.

In which theory?


At my level of description it is a
plant, at another it's tissues, cells, molecules, etc. Anything that
satisfies all of those descriptions within all of those perceptual
frames may be a real plant. If it only looks like a plant, then it's a
cartoon or a puppet.

You assume a lot. If only you could start to distinguish what you assume from what you derive, we would be able to understand better what you are trying to convey.





If you look at substitution
level in reverse, you will see that it's not a matter of making a
plastic plant that acts so real we can't tell the difference, it's a
description level which digitizes a description of a plant rather than
an actual plant. Nothing has been simulated, only imitated. The
difference is that an imitation only reminds us of what is being
imitated but a simulation carries the presumption of replacement.

This makes things more complex than they might be.

It makes more sense though. Otherwise we would have movies that we
could literally live inside of already.

What makes you sure that is not the case?



If you say that human consciousness exists independently of a human
brain, you have to give me an example of such a case.

UDA shows that you are an example of this.

But drinking some scotch or smoking a cigar tells me that I am not
independent of my brain.

Nice. If you can prove that, then you refute comp. Good luck. 
With comp, the human material brain is a construct of the immaterial human minds, with respect to infinities of UMs in a complex but conceptually and mathematically very precise statistical competition. We can already completely axiomatize the propositional logic for the "probability one" in each point of view.

Keep in mind that comp is not what most Aristotelians want it to be. You have to understand that comp contradicts the usual, very common, old naturalist conception of reality, which is a probably efficacious, locally correct, instinctive animal extrapolation.








We, unfortunately cannot be digitized,

You don't know that. But you don't derive it either from what you
assume (which, to be frank, remains unclear).

I do derive it, because the brain and the self are two parts of a
whole. You cannot export the selfness into another form, because the
self has no form, it's only experiential content through the
interior
of a living brain.

That's the 1-self, but it is just an interface between truth and
relative bodies.

Truth is just an interface between all 1-self and all relative bodies.

In which theory? This does not make sense.

It's an implication of multisense realism. Truth (a kind of Sense) is
an interface between all 1-self (sensorimotive experiences) and all 3-
p relative bodies (electromagnetic objects). It is the synchronization
of interior dreams and external bodies.

That looks like a not-too-wrong comp phenomenon.
That looks nice, but I am not sure I follow you on this. By lowering the level that much you make everything more contingent and more geographical. You make matter, and the quantum, more mysterious at the start. You make mind unintelligible. By putting the level *infinitely* down, you get the theory "don't ask".




My whole point is revealing a universe description in which logic and
direct experience coexist in many ways. Limiting it to logical
language defeats the purpose,

That's what the machine can already explain. You consider it as a
zombie.

Not a zombie, a puppet.

Whatever. 
If comp is correct, this is an insult to my friends.





although I would love to collaborate
with someone who was interested in formalizing the ideas.

Convince people that there is an idea. But by insisting that your
ideas contradict comp, you shoot your own theory in the foot, because
you add magic where the comp theories explain the appearance of the
magic without introducing it at the start.

Comp introduces magic at the start. 'Arithmetic Truth' is very much a
digital Dreamtime.

But it is believed even more than the Aristotelian doctrine. We have appreciated arithmetic since the Sumerians. Pythagorean triples have been known for thousands of years (6000 to 8000 BC). You are right now using a machine entirely based on arithmetic. We use it every day, and we teach it in high school. 
Yes, it is a bit of magic, once we get familiar with its many surprises. Unfortunately most people see it as boring number crunching, or number tables, and do not much appreciate the music, but that is just a reflection of a lack of education.




I don't add any magic

What are you assuming? 
Apparently you assume a lot: matter, space, waves, sense, persons, electrons, motives, etc.
I have no clue what you mean by any of those terms, nor what basic principles you assume about them, nor how you relate them.
All I know is that you postulate something non-Turing-emulable, playing some role in matter and consciousness.




and nothing appears except
different levels of sense recapitulation in inertial frames.
Everything in multisense realism works with a universe of only the
typical experiences that we live through every day, plus it explains
why extraordinary experiences are harder to ground in public
certainty.

Let the others say so, or not.






Logic is a
3p language - a mechanistic, involuntary form of reasoning which
denies the 1p subject any option but to accept it.

This is false. The right side of the hypostases, with "& p", is
provably beyond language, at the level the machine can live.

You're making my point.

Well, the point was made by the machine I am interviewing. 



The notion of anything being literally false
or true is just what I said: an involuntary form of reasoning.

?  (why not?)



Then
you proceed to deny me, the 1p subject, any option to accept it.

?

I just try to understand. 

I fail, because of vagueness together with strong negative assumptions bearing on a very large class of ad-hoc-segregated entities.





The 1p experience
is exactly the opposite of that. It is a 'seems like' affair which
invites or discourages voluntary participation of the subject. Half of
the universe is made of this.

With comp, it is the main part of the "universe".

That's why it's a little naive :)

I don't think it is particularly naive to believe that the observable universe is just one *aspect* of something much larger. 

Bruno



meekerdb

unread,
Jan 31, 2012, 1:33:55 PM1/31/12
to everyth...@googlegroups.com
On 1/31/2012 10:25 AM, Bruno Marchal wrote:
That's the reason mind exists: it accelerates the processing much more quickly. In fact, just by a software change, the slower machine can always beat the faster machine on almost all inputs, except a finite number of them.

I can accept that intuitively, but can you point to a technical proof?

thnx, Brent

Bruno Marchal

unread,
Feb 1, 2012, 12:58:19 PM2/1/12
to everyth...@googlegroups.com
I basically had in mind Gödel's length-of-proofs theorem of 1936, and Blum's speed-up theorem of 1967. 

Adding an undecidable sentence, undecidable by a theory/machine T, to the theory/machine T not only makes an infinity of undecidable sentences decidable, but shortens infinitely many proofs.

In T + Con(T), for example, infinities of (arithmetical) propositions are decidable (that are undecidable in T), and infinitely many proofs can be arbitrarily sped up.

Blum obtained a related result in 1967, in terms of computational speed.
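Schematically (my paraphrase, not Bruno's wording; φ_i is the function computed by program i and C_i its Blum complexity measure, e.g. running time), Blum's theorem says that for any desired amount of speed-up there is a computable function with no best program:

```latex
% Blum speed-up theorem (1967), schematic form: for every total computable r
% there is a total computable f such that every program for f can be
% r-sped-up by another program for f on all but finitely many inputs.
\forall r\,\exists f\,\forall i\;
  \Bigl(\varphi_i = f \;\Rightarrow\;
    \exists j\,\bigl(\varphi_j = f \;\wedge\;
      r\bigl(x,\, C_j(x)\bigr) \le C_i(x)\ \text{for almost all } x\bigr)\Bigr)
```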

GÖDEL, K., 1936, On the Length of Proofs, translated in Davis 1965, pp. 82-83. 

BLUM, M., 1967, "A Machine-Independent Theory of the Complexity of Recursive Functions", Journal of the ACM 14: 322–336.

A characterization of the self-speedable machines has been made by Blum and Marques in terms of "subcreative sets", which generalize the creative sets (provably equivalent, in some sense, to the universal sets/machines/numbers).
So you have a notion of subuniversal numbers, with the universal ones as special cases, which correspond to the self-speedable machines/numbers. All universal numbers are speedable, but not all speedable numbers are universal.

BLUM, M. and MARQUES, I., 1973, On Complexity Properties of Recursively Enumerable 
Sets, Journal of Symbolic Logic, Vol. 38, No. 4, pp. 579-593.

Another interesting paper is:

ROYER, J.S., 1989, Two Recursion Theoretic Characterizations of Proof Speed-ups, The 
Journal of Symbolic Logic, 54, No. 2.

Quite interesting and relevant for the Löbian number's bio-psycho-theology is:

GOOD, I.J., 1971, Freewill and Speed of Computation, Brit. J. Phil. Sci. 22, 48-49. 

Some good books:

ARBIB, M., 1964, Brains, Machines and Mathematics, McGraw-Hill; 2nd ed.: 1987, 
Springer-Verlag, New York.

CALUDE, C., 1988, Theories of Computational Complexity, North-Holland.

SALOMAA, A., 1985, Computation and Automata, Cambridge University Press, 
Cambridge.

Ah, you can look here too:


Bruno


Evgenii Rudnyi

unread,
Feb 1, 2012, 3:20:14 PM2/1/12
to everyth...@googlegroups.com
On 29.01.2012 23:47 meekerdb said the following:

I think that there is some difference in the respect that one can abstract
control from a particular application. For example, I can consider a PID
controller as such. Such a consideration belongs, roughly speaking, to the
laws of cybernetics.
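To make the example concrete: a PID controller "as such" is just a rule mapping an error signal to a control signal through proportional, integral, and derivative terms, independent of what is being controlled. A minimal Python sketch (the gains, time step, and toy plant are illustrative assumptions, not anything from the thread):

```python
# Minimal discrete PID controller, abstracted from any particular application.
# kp/ki/kd gains, dt, and the toy plant below are illustrative choices only.

class PID:
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.dt                  # I term accumulates
        derivative = (error - self.prev_error) / self.dt  # finite-difference D term
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Drive a toy first-order plant toward a setpoint of 1.0.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
y = 0.0
for _ in range(400):
    u = pid.step(1.0, y)
    y += (u - y) * 0.1   # toy plant: output relaxes toward the control input
print(round(y, 3))
```

The same `PID` class would serve a thermostat or a motor loop; only the plant and the gains change, which is exactly the abstraction described above.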

Evgenii

> Brent
>
>
>>
>> Evgenii
>>
>

Craig Weinberg

unread,
Feb 2, 2012, 10:23:27 PM2/2/12
to Everything List
On Jan 31, 1:25 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 30 Jan 2012, at 21:12, Craig Weinberg wrote:
>
> > On Jan 30, 5:09 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
> >> On 29 Jan 2012, at 03:20, Craig Weinberg wrote:
>
> >>> How do you know that they 'occur' in the computations rather than in
> >>> the eye of the beholder of the computations?
>
> >> The beholder of the computations is supported by the computations.
> >> Those exist independently of me, in the same way numbers are prime or
> >> not independently of me.
>
> > How would you know that they exist at all?
>
> Because their existence is a theorem in the theory which is assumed.

Right. So my logic is that the theory that there is a capacity to have
and assume theories (sense accumulated/entangled to a cognitive level of
elaboration) is a stronger, more primitive theory than any other
possible theory.

>
> > Many people feel the same
> > way about God.
>
> We never know if something exists, except our consciousness here and
> now, but that's all. That is why we assume theories, which assume the
> existence of the primary objects from which we start to derive
> theorems, including different sorts of existence.

Which is why I think that the primary object should be our
consciousness here and now as well as the idea of the opposite of that
(unconsciousness, there and then).

>
>
>
> >>> Universe means 'all that is' in every context.
>
> >> But "all that is" is what we are searching, testing, studying. The
> >> word "is" is very useful in everyday life, but very ambiguous per se.
> >> "is" or "exist" depends on the theory chosen. Something can exist
> >> ontologically, or epistemologically.
>
> > As long as it is something to something, then it 'is'. There is
> > nothing that it is not, as long as sense is respected. Unicorns are
> > not part of the universe as far as we know, but the idea of unicorns
> > is certainly part of the human universe and therefore the universe.
>
> That is what we have to clarify. You beg the question entirely. I still
> have no clue what your theory presupposes. Your discursive technique is
> similar to that of the pseudo-priests and evangelists. I'm sorry.

I understand. Some of the ways you express your ideas are impossible
for me to parse too. Some people do say that they like my ideas
though. I get a few emails of thanks and encouragement and none that
are critical (surprisingly).

>
>
>
> >>> But it isn't the case, it's the idea of it being the case.
>
> >> It is the case that 17 is prime, independently of whether it is the case
> >> that such or such human has the idea that it is the case that 17 is
> >> prime. You are confusing levels.
>
> > 17 is only prime in a symbolic system that defines primeness,
> > enumeration, and division of whole integers that way.
>
> You confuse arithmetical truth, and theories of arithmetic. The
> primeness of 17 is not a symbolic truth.

So you would say that primeness is present in all possible universes?
What about a universe that consisted only of flavor events? What would
be a prime flavor?

>
> > Internal
> > consistency of the rules of a game, even a universal game, does not make
> > the game independent of players.
>
> This concerns *all* theories.

Not sure what you mean? Primeness? Rules?

>
> > The rules arise from the players
> > interactions with each other,
>
> Not for definable games. In game theory we can show the existence of
> games without our knowing any of the game's rules.

If you define the game without the players understanding it, it is
your game that you are playing using game pieces as manipulated
objects, not subject-players.

>
> > and that interaction is the game.
>
> In which theory?

I guess mine?

>
> > Comp
> > says that there are disembodied rules that assemble themselves
> > mechanically as games which then dreams it is separate players.
>
> More or less.
>
>
>
> >>> You're just
> >>> saying 'Let p ='. It doesn't mean the proposition has any causal
> >>> efficacy.
>
> >> The fact that 17 is prime has causal efficacy. It entails many facts.
>
> > It entails only arithmetic facts, but there is nothing to say that
> > arithmetic by itself causes anything outside of arithmetic.
>
> But a lot inside, and *from* inside.

I don't think there is an inside of arithmetic, I think it's a
subjective channel which delineates the edge of sensorimotive
interiority and electromagnetic exteriority. There is an inside of
experience and iconic representation is part of that; ideal forms,
precision and accuracy, logic, etc. They are part of the canon of
common sense, no more primary than linguistic, artistic, or pragmatic
concepts. All I'm trying to say is that my view is a framework with
which to arrange all possible views of consciousness and cosmos, and
through that arrangement, underlying principles of symmetry emerge
which reveal a much broader and deeper sense of the universe - one
that redefines some aspects of physics.

>
> > Even
> > within arithmetic, it is the execution of a program or function by a
> > mind or body, that is through energy exerted within matter, which
> > produces causes.
>
> What is matter? What is energy? Where does it come from?

Matter is the exterior view of pieces of the singularity - so it is
volumes of mechanical entropic densities - quantitative topological
expressions of sense.

Energy is the interior view *through* the pieces of the singularity -
so it is sequences of animated signifying intensities - qualitative
motive expressions of sense (see the symmetry of each word of the
matter and energy description? volumes::sequences, mechanical
(manipulated from without)::animated (from within),
entropic::signifying, densities::intensities, etc?)

It doesn't come from, it is the source of 'comes from'. They are the
consequences of the folding of the singularity into primordial monad
vs Big Bang.

> What is the
> relation with mind?

A human mind is the sensorimotive side of the energy that we are, the
electromagnetic side is brain activity. The electromagnetic side of
the matter that we are is the body and its physical environment. The
sensorimotive side of the matter that we are is our biography.

> You said that you agree that matter is not primary. So what is your
> theory?
>

Sense is primary. Matter is a topological, density based expression of
sense.

>
>
> >>>>> I have no reason to believe that a machine can observe
> >>>>> itself in anything more than a trivial sense.
>
> >>>> It needs a diagonalization. It can't be completely trivial.
>
> >>> Something is aware of something, but it's just electronic components
> >>> or bricks on springs or whatever being aware of the low level
> >>> physical
> >>> interactions.
>
> >> A machine/program/number can be aware of itself (1-person) without
> >> knowing anything about its 3p lower level.
>
> > We don't really know that machine/program/number can be aware of
> > anything.
>
> We just know nothing. That is why we use theories, which are
> collections of assumptions/hypotheses.

True, but it seems that machines can be thought of as having
awareness only when we build them explicitly to do that.

>
> > It may only be material interpreters which are aware of
> > anything and the degree to which they are aware of 1p and 3p may be
> > inversely proportional to their complexity. Being fantastically
> > complex, we are aware of only some of our 1p and 3p self. Simpler
> > organisms or particles may in fact have awareness of 100% of their 1p
> > and 3p selves.
>
> The idea that simple organisms might be "more conscious" than complex
> organisms might make sense in the comp theory. What does not make sense
> is the need for matter.

Matter is needed in this universe though. It's how the universe keeps
a lid on eternity and infinity. It makes a distinction between private
games that are free to expand in fiction and public games that are
chiseled in granite and stained with blood. It gives reality the
authenticity and structure necessary for games to ... matter.
Substance. It's the diametric opposite to your view - 99% techne and
1% logos instead of comp's 99% logos and 1% techne. Logos can explain
techne as an idea, techne can make logos irrelevant in practice.
Opposite views, opposite strategies of control.

It might, but it doesn't have to. It would be simple to take a picture
of a camera in the mirror and load that into memory so that every time
a similar image was recognized as itself in the mirror, a light would
go on. That doesn't mean anything to the camera about itself. It
cannot possibly know or care what that image means.

>
>
>
> >> I am not sure. I don't see the relevance of that mechanist point.
>
> > I'm saying the complexity of the immune system suggests that complex
> > function does not necessarily give rise to consciousness.
>
> Yes. But that is trivial. Nobody claimed that consciousness is just
> complexity.

No? Isn't complexity the only thing that makes Deep Blue different
from a pocket calculator (remember those? pre-LCD, even)? Isn't
complexity the only thing separating Deep Blue from AGI machines that
will be fully conscious? If not, what besides complexity is required?

>
> >>>>> Consciousness does nothing to speed decisions, it would only cost
> >>>>> processing overhead
>
> >>>> That's why high animals have larger cortex.
>
> >>> Their decisions are no faster than simpler animals.
>
> >> Complex decisions are made possible, and are done faster.
>
> > That only requires more processing power, not consciousness.
>
> Progress in processing power is bounded by our contingent, slow
> origins. That is the reason mind exists: it accelerates the processing
> much more quickly. In fact, just by a software change, the slower
> machine can always beat the faster machine on almost all inputs,
> except a finite number of them.

There's no reason to think that the same acceleration wouldn't occur
unconsciously though. You don't need mind, you just need logic.

>
>
>
> >>>>> and add nothing to the efficiency of unconscious
> >>>>> adaptation.
>
> >>>> So, why do you think we are conscious?
>
> >>> I think that humans have developed a greater sensorimotive capacity
>
> >> I still don't know what you mean by that. You can replace
> >> "sensorimotive" by "acquainted to the son of God" in all your
> >> argument
> >> without them having a different meaning or persuasive force.
>
> > Sensorimotive is the interior view of electromagnetism.
>
> You already told me this, and I asked you what you mean by "interior",
> "view", and "electromagnetism".

Interior view is literally that. If I am inside a sphere, the inside
is a sensorimotive show that I am watching, and the outside is a
charged sphere that interacts with other charged particles. I happen
to be inside a trillion-celled human body and brain, so it interacts
with other charged bodies and brains on all kinds of different levels
and scales. Chemicals, organisms, objects, people, planets, etc.

>
> > Electromagnetism is orderly dynamic changes in material objects across
> > space relative to each other, sensorimotivation is the perception of
> > change through time in subjective experience relative to one's self.
> > Like electromagnetism is electricity and magnetism, sensorimotivation
> > is sensation and motive. They correspond to receiving of sense
> > experience (sensation) and embodying and projecting an intention
> > (motive).
>
> Theory? Definitions?

Why does it need to be defined or theorized any further than that? I'm
just making a map of the cosmos in the simplest terms possible. I'm
suggesting the possibility of a subjective and objective version of
addition and multiplication which are perpendicular to logos and
techne.

>
>
>
> >>> as
> >>> a virtuous cycle of evolutionary circumstance and subjective
> >>> investment. Just as hardware development drives software development
> >>> and vice versa. It's not that we are conscious as opposed to
> >>> unconscious, it's that our awareness is hypertrophied from
> >>> particular
> >>> animal motives being supported by the environment and we have
> >>> transformed our environment to enable our motives. Our seemingly
> >>> unique category of consciousness can either be anthropic prejudice
> >>> or
> >>> objective fact, but either way it exists in a context of many other
> >>> kinds of awareness. The question is not why we are conscious, it is
> >>> why is consciousness possible and/or why are we human.
>
> >> Why we are human is easily explained, or not-explainable, as an
> >> indexical geographical fact, by comp. It is like "why am I the one in
> >> W and not in M?". Comp explains why consciousness is necessary. It is
> >> the way we feel when integrating quickly huge amount of information
> >> in
> >> a personal scenario.
>
> > 'the way we feel' doesn't relate to information though. Where is the
> > feeling located?
>
> Feelings are not the type of thing to which location applies.

I agree as far as the experience of feeling, but there is always a
scope to which feelings apply. If a number had a feeling then every
instance of that number would have to have every feeling corresponding
to every instance at the same time. You and I feel things and walk
around and our body is addressable in spatial coordinates, but a
simulation has arbitrary spatial coordinates. The simulated batter has
the same feeling as the simulated baseball because they are drawn by
the same program. Where does one begin and the other leave off? Does
the program feel like the bat or the ball? It doesn't work to feel
like both. It's like trying to tickle yourself.

>
> > In the information, in the informed, or somewhere
> > else?
>
> You might say: in the mind of the person. But even this is a way to
> speak.

This is a more serious problem than I think you want to look at. It
flies in the face of our most basic and universal experience as a
person in a world of finite objects and other people.

>
>
>
> >>>>> Anything that is conscious can also be unconscious. Can
> >>>>> Peano Arithmetic be unconscious too?
>
> >>>> Yes. That's possible if you accept that consciousness is a logical
> >>>> descendent of consistency.
>
> >>> Aren't the moons of Saturn consistent?
>
> >> The material moons are not programs, nor theories. "consistent"
> >> cannot
> >> apply to it without stretching the words a lot.
>
> > Why aren't they programs?
>
> By UDA, which explains why observable matter cannot be a program,
> but the a priori non-computable result of infinities of programs.

Why isn't the UDA itself made of matter?

>
> > They undergo tremendous logical change over
> > time. Why discriminate against moons?
>
> If not being a program is discrimination, then you are the one
> discriminating a lot of possibly conscious entities.

I don't have a problem with discrimination. I think it's useful for my
purposes.

>
> > I don't see any stretch at all
> > in calling them consistent. You could set a clock by their orbits.
>
> A clock is still not something to which the consistency predicate
> applies. Consistency applies only to collections of beliefs.

That seems arbitrary to me. Beliefs are predicated on consistency as
much as the other way around.

>
>
>
> >>> Will consciousness logically
> >>> descend from their consistency?
>
> >> If ever the moon were to become conscious, yes. No, if that does
> >> not happen. There is little chance moons become conscious, for they
> >> are not self-moving and have very few degrees of freedom.
>
> > Computers are 'solid state' though?
>
> With internal write/read and delete subroutines.

Maybe the moon has those too?

>
> > Moons have all kinds of geological
> > changes going on over thousands of years.
>
> That's a poor evidence of thinking.

Why? If you slowed yourself down to that frequency, your brain would
show poor evidence of thinking too. Speed up the Earth a few thousand
times and it would look pretty interesting. The biosphere and
atmosphere are quite computationally rich. The thermodynamics of the
crust, mantle, and core would be very fluid and dynamic, pumping with
thermal respiration and cellular convection patterns. The solar system
as a whole would be magnificent at its natively scaled speed - a
whirling dynamo of organo-metallic spheres and a stellar nuclear
furnace flashing out AM/FM/multiband transmissions like a galactic E-M
transmitter (which it probably is).

>
>
>
> >>>> It follows then from the fact that
> >>>> consistency entails the consistency of inconsistency (Gödel II). Of
> >>>> course, the reality is more complex, for consciousness is only
> >>>> approximated by the instinctive (unconscious) inductive inference of
> >>>> self-consistency.
>
> >>> You need some kind of awareness to begin with to tell the difference
> >>> between consistency and inconsistency.
>
> >> Not necessarily. Checking inconsistency does not require a lot of
> >> cognitive ability.
>
> > It does necessarily require awareness of some kind. Something has to
> > detect something and know how to expect and interpret a 'difference'
> > in that detection. Cognition has nothing to do with it. That's much
> > higher up the mountain, in true vs false land. Consistency is only
> > same v different.
>
> ?

If I pour hot water on an ice cube, there is no cognition involved,
but the ice responds to the hot water because there are collisions.
The molecules have to embody a susceptibility to collision - which is
a simple form of sense. There is no rule book, they actively respond
to momentum with momentum. We are the ones who observe the consistency
of their interaction and derive a rule book, just as an alien
astronomer studying our civilization from space could derive a rule
book which is probabilistic and entirely misses our perception.

>
>
>
> >> I was just alluding to the fact that replication, although not
> >> providing Turing universality, does so in company with the while loop.
>
> > I was just saying that while loops and replication don't imply the
> > generation of feeling.
>
> That's the non-comp assumption.

I don't see that labeling it adds anything.

>
>
>
> >>> Conscious and unconscious are aspects of the inherent subject-object
> >>> symmetry of the universe.
>
> >> Which you assume.
>
> > What choice do I have? My only experience of the universe is 100%
> > definable by the subject-object symmetry.
>
> ?

?

>
>
>
> >>> Why isn't arithmetic truth physical?
>
> >> Because it does not rely on any physical notion. You can do number
> >> theory without ever doing physics.
>
> > But you can't do number theory without a physical subject doing the
> > theorizing.
>
> In which theory?

No theory, in reality. In any possible here and now that is real.

>
>
>
> >>> Why not have some
> >>> creatures with smart skulls or shells and stupid soft parts
> >>> inside? It
> >>> seems to be a strong indicator of material properties consistently
> >>> determining mechanism and not the other way around.
>
> >> Seeming is deceptive.
>
> > What would be an explanation, or counterfactual?
>
> Comp.

So you are citing comp itself to support comp? The idea that
intelligence is invariably associated with one specific category of
living tissue tends to invalidate comp, and the counterfactual is
simply that comp doesn't mention categories of living tissue? That's
my point: if comp doesn't explain that obvious correlation, how can it
be likely to be true?

>
> >>> Arithmetical truth does make sense, definitely, but so do other
> >>> kinds
> >>> of experiences make sense and are not arithmetic truths.
>
> >> If they are conceptually rich enough, you can take them instead of
> >> arithmetic, without changing anything in the explanation of
> >> consciousness and matter. I use numbers because people are more
> >> familiar with them.
>
> > I use sense because it makes more sense.
>
> But sense is what I want to explain; like matter, I cannot assume it

You have to assume it. Assuming and explaining are aspects of making
sense. You are trying to put your mind outside of a system that has no
outside.

> in the TOE, although I have to ask people if they agree on some
> consciousness property, like being invariant for some substitution, to
> connect the TOE with their own sense.

It all begins and ends with sense.

>
>
>
> >> If you deposit your Gödel number code at the bank, or something like
> >> that. You stretch the meaning of comp, which is just the bet that our
> >> body is Turing emulable and that we can survive through any of its
> >> Turing emulation.
>
> > Isn't that what money is really all about now though? Instead of a
> > body, we have accounts. You can't get more Turing emulable than that.
> > It's practically Turing-maniacal.
>
> >>> All of those Wall Street quants... where is the
> >>> theology and creativity?
>
> >> It has been buried by the materialists for 1500 years.
>
> > 60% of the stock trades in the US markets are automated. I would say
> > that makes AI the dominant financial decision maker in the world.
>
> The problem is not money, nor machines. It is humans when they steal
> money, with whatever is the technological means.

How do you know? Maybe it is the agenda of the numbers behind the
money to consolidate in the fewest hands possible? It doesn't care who
the players are, it just makes sure that those who are closer to the
source of the numbers get more and more while everyone else gets less
and less. It's a program, or more like a memory leak in the program of
civilization, draining out significance.

>
>
>
> >> There is no ontological brain, yet we are.
>
> > Aren't we the ontological brain already?
>
> No. Our brains are epistemological. You have to grasp the UDA by
> yourself to see this.

My stab at understanding UDA is that if you have a program that writes
all possible programs, some of those programs are also going to write
programs that write programs, some of which would refer to themselves.
In referring to themselves, you would get programmatic relations in
the runtime that would not be anticipated and would explain our own
situation as conscious beings. Is that close? The brain being
epistemological is a function of each entity representing the other in
some way, although I'm not sure why it is in that form and not some
other.
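My gloss of the "program that writes all possible programs" can be put
as toy code. This is just my own minimal, hypothetical sketch, not
Bruno's actual UD: the "programs" are binary strings standing in for
real machine codes, and a "step" is just a counter.

```python
from itertools import count

def all_programs():
    # Enumerate every finite binary string, a stand-in for "all programs".
    # A real universal dovetailer would enumerate actual machine codes.
    for length in count(1):
        for n in range(2 ** length):
            yield format(n, "0{}b".format(length))

def dovetail(rounds):
    # Each round admits one new program, then advances every admitted
    # program by one more "step", so no non-halting program can block
    # the rest. Returns (program, total-steps-so-far) pairs in the
    # order the steps were granted.
    admitted, trace = [], []
    source = all_programs()
    for r in range(rounds):
        admitted.append(next(source))
        for i, prog in enumerate(admitted):
            trace.append((prog, r - i + 1))
    return trace

print(dovetail(3))
# -> [('0', 1), ('0', 2), ('1', 1), ('0', 3), ('1', 2), ('00', 1)]
```

The self-referring programs in my paragraph would simply be among the
strings enumerated; the interleaving is what lets all of them run "at
once" without any single one hogging the execution.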

>
>
>
> >>>>> We cannot be
> >>>>> simulated anymore than water or fire can be simulated.
>
> >>>> Why? That's a strong affirmation. We have not yet found a phenomenon
> >>>> in
> >>>> nature that cannot be simulated (except the collapse of the wave,
> >>>> which can still be Turing 1-person recoverable).
>
> >>> You can't water a real plant with simulated water or survive the
> >>> arctic burning virtual coal for heat.
>
> >> What is a real plant? A plant is epistemologically real relatively to
> >> you and your most probable computations. It is not an absolute
> >> notion.
>
> > It might be an absolute notion.
>
> In which theory?

In universally shared reality. Common sense.

>
> > At my level of description it is a
> > plant, at another it's tissues, cells, molecules, etc. Anything that
> > satisfies all of those descriptions within all of those perceptual
> > frames may be a real plant. If it only looks like a plant, then it's a
> > cartoon or a puppet.
>
> You assume a lot.

I assume my direct experience and extrapolate from there.

>If only you could start to distinguish what
> you assume from what you derive, we would be able to understand better
> what you try to convey.

If you focus on what you assume, then you codify in cognitive theory
from the start. You limit your sense of the universe to prefrontal
cortex logic arbitrarily.

>
>
>
> >>> If you look at substitution
> >>> level in reverse, you will see that it's not a matter of making a
> >>> plastic plant that acts so real we can't tell the difference, it's a
> >>> description level which digitizes a description of a plant rather
> >>> than
> >>> an actual plant. Nothing has been simulated, only imitated. The
> >>> difference is that an imitation only reminds us of what is being
> >>> imitated but a simulation carries the presumption of replacement.
>
> >> This makes things more complex than they might be.
>
> > It makes more sense though. Otherwise we would have movies that we
> > could literally live inside of already.
>
> What makes you sure that is not the case?

Because when we dream we may think we are awake, but when we are awake
we do not think we are dreaming. Because physical conditions can wake
us up out of a sound sleep but our dream worlds cannot summon us to
sleep suddenly. Because we can tell the difference between media
presentations and live events.

>
>
>
> >>> If you say that human consciousness exists independently of a human
> >>> brain, you have to give me an example of such a case.
>
> >> UDA shows that you are an example of this.
>
> > But drinking some scotch or smoking a cigar tells me that I am not
> > independent of my brain.
>
> Nice. If you can prove that, then you refute comp. Good luck.
> With comp, the human material brain is a construct of the immaterial
> human minds, with respect to infinities of UMs in a complex but
> conceptually and mathematically very precise statistical competition.
> We can already completely axiomatize the propositional logic for the
> "probability one" in each point of view.

Why do we use the idea of nicotine to change the idea of the brain
instead of just using the idea of changing our minds directly with
computation?

>
> Keep in mind that comp is not what most Aristotelians want it to be.
> You have to understand that comp contradicts the usual very common old
> naturalist conception of reality, which is a probably efficacious,
> locally correct, instinctive animal extrapolation.

I understand. I'm only saying that comp is not valid as the ultimate
and absolute truth of the universe, not that it is not a valid
perspective for making sense of the universe. Multisense realism is
about showing how comp coexists with its opposite (techne) and with
right-angle paradigms (subject and object).

>
>
>
> >>>>>>> We, unfortunately cannot be digitized,
>
> >>>>>> You don't know that. But you don't derive it either from what you
> >>>>>> assume (which, to be frank, remains unclear)
>
> >>>>> I do derive it, because the brain and the self are two parts of a
> >>>>> whole. You cannot export the selfness into another form, because
> >>>>> the
> >>>>> self has no form, it's only experiential content through the
> >>>>> interior
> >>>>> of a living brain.
>
> >>>> That's the 1-self, but it is just an interface between truth and
> >>>> relative bodies.
>
> >>> Truth is just an interface between all 1-self and all relative
> >>> bodies.
>
> >> In which theory? This does not make sense.
>
> > It's an implication of multisense realism. Truth (a kind of Sense) is
> > an interface between all 1-self (sensorimotive experiences) and all 3-
> > p relative bodies (electromagnetic objects). It is the synchronization
> > of interior dreams and external bodies.
>
> That looks like a not-too-wrong comp phenomenon.

That's probably what it would look like to comp. Comp isn't a realism
though, it's a theoretical logic - a non-realism.
I don't think it makes them mysterious, it makes them primordial by
necessity. Once you realize that the universe does nothing but sense
and make sense, there is no need to ask anything more, because asking
is part of the show. Asking is a way of making sense.

>
>
>
> >>> My whole point is revealing a universe description in which logic
> >>> and
> >>> direct experience coexist in many ways. Limiting it to logical
> >>> language defeats the purpose,
>
> >> That's what the machine can already explain. You consider it as a
> >> zombie.
>
> > Not a zombie, a puppet.
>
> Whatever.
> If comp is correct, this is an insult to my friends.

You can still have puppets for friends. Most people's friends are
probably largely psychological projections anyhow.

>
>
>
> >>> although I would love to collaborate
> >>> with someone who was interested in formalizing the ideas.
>
> >> Convince people that there is an idea. But by insisting that your
> >> ideas contradict comp, you shoot in your theory, because you add a
> >> magic where the comp theories explains the appearance of the magic
> >> without introducing it at the start.
>
> > Comp introduces magic at the start. 'Arithmetic Truth' is very much a
> > digital Dreamtime.
>
> But it is believed even more than the Aristotelian doctrine. We have
> appreciated arithmetic since the Sumerians. Pythagorean triples have
> been known for many thousands of years (6000 to 8000 BC). You are
> using right now a machine entirely based on arithmetic. We use it
> every day, and we teach it in high school.

Unquestionably. I'm not arguing that arithmetic itself is exclusive to
human minds; any sufficiently evolved organism will discover some form
of arithmetic, I think. I only say that the idea that arithmetic exists
independently of any subjective discovery by a material entity is a
creation myth.

> Yes, it is a bit of magic, when we get familiar with its many
> surprises. Unfortunately most people see it as boring number crunching,
> or number tables, and do not much appreciate the music, but that's just
> a reflection of a lack of education.

You underestimate my esteem for arithmetic drastically. I only
antagonize you here because I need you to see the limits of arithmetic
in order for you to even consider my ideas. That and the fact that I'm
not talented or skilled with complex arithmetic. I'm more of a verbal
guy, yes? I have nothing but respect for quant power, I just take
issue with quant supremacism.

>
> > I don't add any magic
>
> What are you assuming?
> Apparently you assume a lot: matter, space, wave, sense, persons,
> electrons, motives, etc.
> I have no clue what you mean by any of those terms, nor basic
> principle you assume on them, nor how you relate them.
> All I know is that you postulate something non Turing emulable,
> playing some role in matter and consciousness.

As a theoretical logician, you start from nothing but assume logic. Bp
and p. Numbers. Arithmetic, information, machines, computation,
memory, addressability, pattern recognition, looping, branching,
decision, digital and analog compression, isomorphism, simulation, set
theory, Platonia, all kinds of conceptual architectures. That's great,
and you can do almost anything with almost anything through that
methodology. The trouble is that if there were nothing to oppose that
set of assumptions, there would be no feeling and meaning to doing
anything at all. That is gambled away in wishful thinking about qualia
chasing the quanta.

In multisense realism, I represent comp as one cardinal point in a set
of four that mark the extremes of the continuum. I therefore have to
assume everything that every cardinal point assumes. I have to map the
entire universe and leave nothing out.

>
> > and nothing appears except
> > different levels of sense recapitulation in inertial frames.
> > Everything in multisense realism works with a universe of only the
> > typical experiences that we live through every day, plus it explains
> > why extraordinary experiences are harder to ground in public
> > certainty.
>
> Let the others say so, or not.

OK

>
>
>
> >>> Logic is a
> >>> 3p language - a mechanistic, involuntary form of reasoning which
> >>> denies the 1p subject any option but to accept it.
>
> >> This is false. The right side of the hypostases with "& p" are
> >> provably beyond language, at the level the machine can live.
>
> > You're making my point.
>
> Well, the point was made by the machine I am interviewing.
>
> > The notion of anything being literally false
> > or true is just what I said: an involuntary form of reasoning.
>
> ? (why not?)

?

>
> > Then
> > you proceed to deny me, the 1p subject, any option to accept it.
>
> ?

To say that something is objectively true or false is involuntary. The
universe gives us an alternative to that in our subjectivity. We can
care or not care whether something seems true or false and decide to
question it.

>
> I just try to understand.
>
> I fail, because of vagueness together with strong negative assumptions
> bearing on a very large class of entities, segregated ad hoc.

We can only do what we can do. I don't see it in terms of failure,
it's just a measure of how alike and different we are.
>
>
>
> >>> The 1p experience
> >>> is exactly the opposite of that. It is a 'seems like' affair which
> >>> invites or discourages voluntary participation of the subject.
> >>> Half of
> >>> the universe is made of this.
>
> >> With comp, it is the main part of the "universe".
>
> > That's why it's a little naive :)
>
> I don't think it is particularly naive to believe that the observable
> universe is just one *aspect* of something much larger.
>

Not at all, but it might be to believe that it is the 'main' part. The
universe is as much logos as it is techne, subject, and object, but
mainly it is the sense that is made in the circuits between and around
them.

Craig