Why would any structure give rise to qualia? We think some structure
(for example our brain, or the abstract computation or arithmetical
truth/structure representing it) does, and we communicate this to others
in a "3p" way. The options here are either to say that qualia exist and
our internal beliefs (which also have 'physical' correlates) are
correct, or that they don't and we're all delusional. In the second
case, though, the belief is self-defeating, because the 3p world is
inferred through the 1p view. It makes logical sense that a structure
holding the same beliefs as ourselves (such as a digital substitution of
our brain) could have the same qualia, but this is *unprovable*.
If you don't eliminate qualia, do you think the principle described
here makes sense? http://consc.net/papers/qualia.html
If we don't attribute consciousness to some structures, or to just 'how
a computation feels from the inside', then we're forced to believe that
consciousness is a very fickle thing.
As for arithmetic/numbers - Peano Arithmetic is strong enough to
describe computation, which is enough to describe just about any finite
structure/process (although potentially unbounded in time), and our own
thought processes are such processes, if neuroscience is to be believed.
Arithmetic itself admits many interpretations, and the axioms tell you
what 'arithmetic' isn't and what theorems must follow, not what it is:
can you explain to me what a number is without appealing to a model or
interpretation? Arithmetical realism merely states that arithmetical
propositions have a truth value, or that the standard model of
arithmetic exists.
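To make the "arithmetic can describe computation" point concrete, here is a toy sketch (my own illustration, not from the thread): the entire state of a tiny two-counter machine can be coded as a single natural number via Cantor pairing, so that each computation step is just an arithmetical function from N to N.

```python
# Toy illustration: a two-counter machine whose entire state is one
# natural number, so each step is a purely arithmetical function N -> N.
# (The names and the toy program are my own, for illustration only.)

def pair(a, b):
    # Cantor pairing: packs (a, b) into a single natural number, reversibly.
    return (a + b) * (a + b + 1) // 2 + b

def unpair(n):
    # Inverse of the Cantor pairing function.
    w = int(((8 * n + 1) ** 0.5 - 1) // 2)
    b = n - w * (w + 1) // 2
    return w - b, b

def step(n):
    # One step of the program "while a > 0: a -= 1; b += 2" (computes b = 2a).
    a, b = unpair(n)
    return pair(a - 1, b + 2) if a > 0 else n

state = pair(5, 0)
for _ in range(5):
    state = step(state)
print(unpair(state))  # (0, 10): the machine has computed 2 * 5
```

Nothing here is specific to this toy program: any Turing machine step can be arithmetized the same way, which is the core of Gödel's technique.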
If you think that isn't enough, I don't see what else could be enough
without positing some form of magic in the physics, but that forces us
to believe consciousness is very fickle. Attributing consciousness to
(undefinable) arithmetical truth appears to me a better theory than
attributing it to some uncomputable God-of-the-gaps physical magic, if
one has to believe in consciousness at all. (As a side note, the set of
arithmetical truths is also uncomputable and undefinable within
arithmetic itself.) If you must use Occam, the only thing you could
shave off would be your own consciousness, which I think is
overreaching, although some philosophers (like Dennett) do just that. If
you use Occam, accept consciousness, and admit a digital substitution,
an arithmetical ontology is one of the simplest solutions.
If qualia didn't correspond to a structure's properties, then we should
observe inconsistencies between what we observe and what we do. Yet we
don't observe any of that, which is why consciousness/qualia/'what it's
like to be some structure' as internal truth makes sense to me. If you
reject having a digital substitution, you either have to appeal to the
brain having some concrete infinities in its implementation, or you have
to say that there are some inconsistencies. To put it another way: where
in the piece-by-piece digital substitution thought experiment (the one I
linked) do you think consciousness or qualia changes? Does it suddenly
disappear when you replace one neuron? Does it fade gradually, while the
behavior never changes and the person reports having vivid and complete
qualia? What about people with digital implants (for example, cochlear
implants for hearing): do you think they are now p-zombies? I'd rather
bet on what seems more likely to me, but you're free to bet on less
likely hypotheses.
As for "Near Death Experiences" or various altered states of
consciousness, I don't see how that shows COMP wrong: those people were
conscious during them. I would even say that altered states of
consciousness merely mean that the class of possible experiences is
very large. I had a fairly vivid lucid dream last night, yet I don't
take that as proof against COMP, I take that as proof that conscious
experience can be quite varied, and the more unusual (as opposed to the
usual awake state) the state is, the more unusual the nature of the
qualia can be. If after drinking or ingesting some mind-altering
substance, you have some unusual qualia, I'd say that at least partially
points to your local brain's 'physical' (or arithmetical or
computational or ...) state being capable of being directly affected by
its environment - again, points towards functionalism of some form, not
against it.
> As I continue to ponder the UDA, I keep coming back to a niggling
> doubt that an arithmetical ontology can ever really give a
> satisfactory explanation of qualia.
Of course the comp warning here is a bit "diabolical". Comp predicts
that consciousness and qualia can never completely satisfy the self-
observing machine. More below.
> It seems to me that imputing
> qualia to calculations (indeed consciousness at all, though that may
> be the same thing) adds something that is not given by, or derivable
> from, any mathematical axiom. Surely this is illegitimate from a
> mathematical point of view. Every mathematical statement can only be
> made in terms of numbers and operators, so to talk about *qualities*
> arising out of numbers is not mathematics so much as numerology or
> qabbala.
No, it is modal logic, although model theory does that too. It is
basically the *magic* of computer science: relative to a universal
number, a number can denote an infinite object, like the program
factorial denotes the set {(0,1),(1,1),(2,2),(3,6),(4,24),(5,120), ...}.
Nobody can define consciousness and qualia, but many can agree on
statements about them, and in that way we can communicate, and even
study what machines can say about any predicate verifying those
properties.
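A concrete illustration of that point (a sketch in Python, my own, not Bruno's): the program text is finite, but what it denotes, relative to a universal machine/interpreter, is an infinite set of input-output pairs.

```python
# A finite program text denoting an infinite object: factorial seen as
# the infinite set {(0,1), (1,1), (2,2), (3,6), (4,24), (5,120), ...}.

def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)

# We can only ever list a finite prefix of that infinite extension:
prefix = {(n, factorial(n)) for n in range(6)}
print(sorted(prefix))  # [(0, 1), (1, 1), (2, 2), (3, 6), (4, 24), (5, 120)]
```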
>
> Here of course is where people start to invoke the wonderfully protean
> notion of ‘emergent properties’. Perhaps qualia emerge when a
> calculation becomes deep enough. Perhaps consciousness emerges from a
> complicated enough arrangement of neurons.
Consciousness, as a bet on a reality, emerges like theorems emerge in
arithmetic. It emerges the way the prime numbers emerge: following
logically from the non-logical axioms defining a universal machine. UDA
justifies why it has to be so, and AUDA shows how to make this
verifiable, with definitions of knowledge on which most people
already agree.
> But I’ll venture an axiom
> of my own here: no properties can emerge from a complex system that
> are not present in primitive form in the parts of that system.
I agree with that in the logical sense. That is why I don't need more
than arithmetic for the universal realm.
> There
> is nothing mystical about emergent properties. When the emergent
> property of ‘pumping blood’ arises out of collections of heart cells,
> that property is a logical extension of the properties of the parts -
> physical properties such as elasticity, electrical conductivity,
> volume and so on that belong to the individual cells. But nobody
> invoking ‘emergent properties’ to explain consciousness in the brain
> has yet explained how consciousness arises as a natural extension of
> the known properties of brain cells - or indeed of matter at all.
>
Because the notion of matter prevents progress. What arithmetic
explains is why universal numbers can develop a many-dream-world
interpretation of arithmetic justifying their local predictive
theories. Then, for consciousness, we can explain why the predictive
theories can't address the question, for consciousness is related to
the big picture behind the observable surface. Numbers, too, find
truths that they can't relate to any numbers or relations between
numbers.
> In the same way, I can’t see how qualia can emerge from arithmetic,
> unless the rudiments of qualia are present in the natural numbers or
> the operations of addition and multiplication.
Rudiments of qualia would explain qualia away. Qualia are intrinsically
more complex. A quale needs two universal numbers: the hero, and the
local environment(s) which execute the hero (in the computer science
sense, or in the UD). It needs the "hero" to refer automatically to a
high-level representation of itself and the environment, etc. Then the
qualia will be defined (and shown to exist) as truths felt as directly
available and locally invariant, yet non-communicable, and applying
to a person without description (the 1-person). "Feeling" here is
something like "known as true in all my locally directly accessible
environments".
> And yet it seems to me
> they can’t be, because the only properties that belong to arithmetic
> are those lent to them by the axioms that define them.
Not at all. Arithmetical truth is far bigger than anything you can
derive from any (effective) theory. No effective theory can even settle
all the PI_1 sentences, while arithmetical truth settles the PI_n
sentences for every n. It is very big.
> Indeed
> arithmetic *is* exactly those axioms and nothing more.
Gödel's incompleteness theorem refutes this.
> Matter may in
> principle contain untold, undiscovered mysterious properties which I
> suppose might include the rudiments of consciousness. Yet mathematics
> is only what it is defined to be. Certainly it contains many mysterious
> emergent properties, but all these properties arise logically from its
> axioms and thus cannot include qualia.
It is here that you are wrong. Even if we limit ourselves to
arithmetical truth, it vastly exceeds what machines can justify.
>
> I call the idea that it can numerology because numerology also
> ascribes qualities to numbers. A ‘2’ in one’s birthdate indicates
> creativity (or something), a ‘4’ material ambition and so on. Because
> the emergent properties of numbers can indeed be deeply amazing and
> wonderful - Mandelbrot sets and so on - there is a natural human
> tendency to mystify them, to project properties of the imagination
> into them.
No. Some bet on mechanism because it makes the notion of zombie
nonsensical, or in the hope that they or their children might travel to
Mars in 4 minutes, or just empirically, from the absence of any
relevant non-Turing-emulable biological phenomenon.
Unlike putting consciousness in matter (an unknown into an unknown),
comp explains consciousness with intuitively related concepts, like
self-reference, the non-definability theorem, perceptible
incompleteness, etc.
And if you look at the Mandelbrot set, a little bit everywhere, you
can hardly miss the unreasonable resemblances with nature, from
lightning to embryogenesis, giving evidence that its rational part
might be a compact universal dovetailer, or a creative set (in Post's
sense).
> But if these qualities really do inhere in numbers and are
> not put there purely by our projection, then numbers must be more than
> their definitions. We must posit the numbers as something that
> projects out of a supraordinate reality that is not purely
> mathematical - ie, not merely composed of the axioms that define an
> arithmetic.
Like arithmetical truth. I think acw has already explained this.
> This then can no longer be described as a mathematical
> ontology, but rather a kind of numerical mysticism.
It is what you get in the case where brains are natural machines.
> And because
> something extrinsic to the axioms has been added, it opens the way for
> all kinds of other unicorns and fairies that can never be proved from
> the maths alone. This is unprovability not of the mathematical
> variety, but more of the variety that cries out for Mr Occam’s shaving
> apparatus.
No government can prevent numbers from dreaming, although they might
try <sigh>.
You can't apply Occam to dreams. They exist epistemologically once you
have enough finite things.
Feel free to suggest a non-comp theory. Note that exhibiting even *one*
such theory is anything but easy: somehow you have to study
computability, and the UDA, to construct a non-Turing-emulable entity
whose experience is not recoverable in any first-person sense.
Better to test comp on nature, so as to have a chance at least to get
evidence against comp, or against the classical theory of knowledge.
Bruno
> of my own here: no properties can emerge from a complex system that
> are not present in primitive form in the parts of that system. There
What about gliders emerging from the rules of the Game of Life? There
are no gliders in primitive form in the transition table, nor in the
static cells of the grid.
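Russell's example can be run directly; a minimal sketch (the grid is unbounded, represented as a set of live cells). The rules mention only local neighbor counts, yet the glider "object" reappears, translated one cell diagonally, every 4 generations.

```python
# Minimal Game of Life on an unbounded grid (live cells as a set).
# The transition rule knows nothing about gliders, yet the glider
# pattern reappears translated one cell diagonally every 4 generations.

from collections import Counter

def step(live):
    counts = Counter((r + dr, c + dc)
                     for r, c in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
print(cells == {(r + 1, c + 1) for r, c in glider})  # True
```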
--
----------------------------------------------------------------------------
Prof Russell Standish Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics hpc...@hpcoders.com.au
University of New South Wales http://www.hpcoders.com.au
----------------------------------------------------------------------------
It doesn't change the fact that either a human or an AI capable of some
types of pattern recognition would form the internal belief that there
is a glider moving in a particular direction. This belief would even be
strengthened if you increased the resolution of your digital array/grid
enough, had some high-level stable emergent patterns in it, and only
allowed "sensing" (either by an external party or something embedded in
it) in an inexact, potentially randomized way. For example, when trying
to access an NxN-sized block, you'd only be able to access a quantized
average, with the offsets being sensed randomized slightly. Such
observers would even prefer to work with a continuum, because there's
no easy way of establishing a precise resolution or of sensing at that
low level. Regardless of how sensing (indirectly accessing data) is
done, emergent digital movement patterns would look like (continuous)
movement to the observer.
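As a toy model of that kind of inexact sensing (the function name and parameters are my own invention, purely illustrative): an observer who can only read a quantized average of an NxN block, at a jittered offset, never sees the underlying cells.

```python
# Toy model of inexact sensing (names/parameters are illustrative):
# the observer reads only a quantized average of an N x N block,
# at a slightly randomized offset, never the exact cells.

import random

def sense(grid, r, c, n, levels=4, jitter=1):
    r += random.randint(-jitter, jitter)   # randomized offset
    c += random.randint(-jitter, jitter)
    size = len(grid)
    block = [grid[(r + i) % size][(c + j) % size]
             for i in range(n) for j in range(n)]
    avg = sum(block) / (n * n)                       # block average
    return round(avg * (levels - 1)) / (levels - 1)  # quantized reading

random.seed(0)
grid = [[(i + j) % 2 for j in range(8)] for i in range(8)]  # checkerboard
reading = sense(grid, 2, 2, 4)
print(reading)  # a coarse value in [0, 1], never the raw cells
```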
Also, it would not be very wise to assume humans are capable of sensing
such a magical continuum directly (even if it existed). The evidence
says that humans sense visual information through their eyes: when a
photon hits a photoreceptor cell, that *binary* piece of information is
transmitted through neurons connected to that cell, and so on
throughout the visual system (...->V1->...->V4->IT->...) and eventually
up to the prefrontal cortex. Neurons are also rather slow: they can
only spike about once per 5ms (~200Hz), although they rarely spike
that often.
(Note that I'm not saying that conscious experience is only the current
brain state in a single universe with only one timeline and nothing
more; in COMP, the (infinite number of) counterfactuals are also
important, for example for selecting the next state, or for "splits"
and "mergers".)
Exactly. It is an emergent phenomenon that is not "present in
primitive form in the parts of the system". All emergent phenomena are
in the "eye of the beholder", but that doesn't make them less real.
Unless you're saying the gliders don't exist at all, but that doesn't
appear to be the case here: why else would you label a non-existent
phenomenon "beta movement"?
> --
> You received this message because you are subscribed to the Google Groups "Everything List" group.
> To post to this group, send email to everyth...@googlegroups.com.
> To unsubscribe from this group, send email to everything-li...@googlegroups.com.
> For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.
> What I'm talking about is something different. We don't have to guess
> what the pixels of Conway's game of life are doing because we are
> the ones who are displaying the game in an animated sequence. The game
> could be displayed as a single pixel instead and be no different to
> the computer.
I have no idea how a randomly chosen computation will evolve over time,
except in cases where one carefully designed the computation to be very
predictable, and even then we can be surprised. Your view of computation
seems to be that it's just something people write to try to model some
process or to achieve some particular behavior - that's the local
engineer's view. In practice computation is unpredictable unless we can
rigorously prove what it can do, and it's trivially easy to make
machines about which we cannot know a damn thing without running them
for enough steps. After seeing how some computation behaves over time,
we may form some beliefs about it by induction, but unless we can prove
that it will only behave in some particular way, we can still be
surprised by it. Computation can do a lot of things, and we should
explore its limits and possibilities!
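A classic concrete case of this unpredictability-without-running is the Collatz map: a trivial program whose long-term behavior has resisted proof for decades. A quick sketch:

```python
# The Collatz map: a three-line program whose long-term behavior nobody
# can predict in general. Induction from small cases keeps getting
# surprised: 27 climbs all the way to 9232 before finally falling to 1.

def trajectory(n):
    path = [n]
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        path.append(n)
    return path

t = trajectory(27)
print(len(t) - 1, max(t))  # 111 steps, peak value 9232
```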
>
>> (unless a time
>> continuum (as in real numbers) is assumed, but that's a very strong
>> assumption). (One can also apply a form of MGA with this assumption
>> (+the digital subst. one) to show that consciousness has to be something
>> more "abstract" than merely matter.)
>>
>> It doesn't change the fact that either a human or an AI capable of some
>> types of pattern recognition would form the internal beliefs that there
>> is a glider moving in a particular direction.
>
> Yes, it does. A computer gets no benefit at all from seeing the pixels
> arrayed in a matrix. It doesn't even need to run the game, it can just
> load each frame of the game in memory and not have any 'internal
> beliefs' about gliders moving.
>
Benefit? I only considered a form of narrow AI which is capable of
recognizing patterns in its sense data without doing anything about
them, merely classifying them and possibly drawing some inferences from
them. Both of these are possible using current AI research.
However, if we're talking about "benefit" here, I invite you to think
about what 'emotions', 'urges' and 'goals' are. We have a
reward/emotional system, and its behavior isn't undefined: it can be
reasoned about, and one can even model structures like it
computationally. Imagine a virtual world with virtual physics and
virtual entities living in it; some entities might be programmed to
replicate themselves, and to acquire resources to do so or merely to
survive. They might even have social interactions which result in
various emotional responses within their virtual society. One of the
best explanations for emotions that I've ever seen was given by a
researcher who was trying to build such emotional machines: he did it
by programming his agents with simpler urges, and the emotions were an
emergent property of the system:
http://agi-school.org/2009/dr-joscha-bach-understanding-motivation-emotion-and-mental-representation
http://agi-school.org/2009/dr-joscha-bach-understanding-motivation-emotion-and-mental-representation-2
http://agi-school.org/2009/dr-joscha-bach-the-micropsi-architecture
http://www.cognitive-ai.com/
>> regardless of how sensing (indirectly accessing data) is done, emergent
>> digital movement patterns would look like (continuous) movement to the
>> observer.
>
> I don't think that sensing is indirect accessed data, data is
> indirectly experienced sense. Data supervenes on sense, but not all
> sense is data (you can have feelings that you don't understand or even
> be sure that you have them).
>
It is indirect in the example that I gave because there is an objective
state that we can compute, but none of the agents have any direct
access to it, only to approximations of it. If the agent is external,
he is limited to what the interface lets him access; if the agent is
itself part of the structure, then the limitation lies within itself,
sort of like how we are part of the environment and thus cannot know
exactly what the environment's granularity is (if one exists, and it's
not a continuum, or merely some sort of rational geometry, or many
other possibilities).
> I'm not sure why you say that continuous
> movement patterns emerge to the observer, that is factually incorrect.
> http://en.wikipedia.org/wiki/Akinetopsia
Most people tend to feel their conscious experience as continuous,
regardless of whether it really is so; we do however notice large
discontinuities, as when we have slept or been knocked out. Of course
most bets are off if neuropsychological disorders are involved.
>>
>> Also, it would not be very wise to assume humans are capable of sensing
>> such a magical continuum directly (even if it existed), the evidence
>> that says that humans' sense visual information through their eyes:
>
> I don't think that what humans sense visually is information. It can
> and does inform us but it is not information. Perception is primitive.
> It's the sensorimotive view of electromagnetism. It is not a message
> about an event, it is the event.
>
I'm not sure how to understand that. Try writing a paper on your theory
and see if it's testable or verifiable in any way?
A small sidenote: a few years ago I considered various consciousness
theories and various possible ontologies. Some of them, especially some
of the panpsychist kinds, sure sound amazing and simple - they may even
lead to religious experiences in some people - but if you think about
what expectations to derive from them, or in general what predictions
they make or how to test them, they tend to either fall short or,
worse, lead to inconsistent beliefs when faced with even simple thought
experiments (such as the Fading Qualia one). COMP, on the other hand,
offers very solid testable predictions and doesn't fail most thought
experiments or observational data that you can put it through (at least
so far). I wish other consciousness theories were as solid,
understandable and testable as COMP.
>> when
>> a photon hits a photoreceptor cell, that *binary* piece of information
>> is transmitted through neurons connected to that cell and so on
>> throughout the visual system(...->V1->...->V4->IT->...) and eventually
>> up to the prefrontal cortex.
>
> That's a 3p view. It doesn't explain the only important part -
> perception itself. The prefrontal cortex is no more or less likely to
> generate visual awareness than the retina cells or neurons or
> molecules themselves.
>
In COMP, you can credit the whole system with the awareness; however,
you can blame the structure of the visual system for the way colors are
differentiated - it places great constraints on what the color qualia
can be - certainly not only black and white (given proper
functioning/structure).
> The 1p experience of vision is not dependent upon external photons (we
> can dream and visualize) and it is not solipsistic either (our
> perceptions of the world are generally reliable). If I had to make a
> copy of the universe from scratch, I would need to know that what
> vision is all about is feeling that you are looking out through your
> eyes at a world of illuminated and illuminating objects. Vision is a
> channel of sensitivity for the human being as a whole, and it has as
> more to do with our psychological immersion in the narrative of our
> biography than it does photons and microbiology. That biology,
> chemistry, or physics does not explain this at all is not a small
> problem, it is an enormous deal breaker.
>
You're right that our internal beliefs do affect how we perceive
things. But it's not biology's or chemistry's job to explain that to
you; emergent properties of the brain's structure should explain those
parts, and cognitive science and related fields do aim to solve such
problems. It's like asking why an atom doesn't explain the computations
involved in processing this email. There are different emergent
structures at different levels; sure, one arises from the other, but in
many cases one level can be fully abstracted from the other.
> My solution is that both views are correct on their own terms in their
> own sense and that we should not arbitrarily privilege one view over
> the other. Our vision is human vision. It is based on retina vision,
> which is based on cellular and molecular visual sense. It is not just
> a mechanism which pushes information around from one place to another,
> each place is a living organism which actively contributes to the top
> level experience - it isn't a passive system.
>
Living organisms - replicators - are fine things, but I don't see why
one must confuse replicators with perception. Perception can exist by
itself merely by virtue of passing information around and processing
it. Replicators can also exist for similar reasons, but on a different
level.
>> Neurons are also rather slow, they can only
>> spike about once per 5ms (~200Hz), although they rarely do so often.
>> (Note that I'm not saying that conscious experience is only the current
>> brain state in a single universe with only one timeline and nothing
>> more, in COMP, the (infinite amount of) counterfactuals are also
>> important, for example for selecting the next state, or for "splits" and
>> "mergers").
>
> Yes, organisms are slower than electronic measuring instruments, but
> it doesn't matter because our universe is not an electronic measuring
> instrument. It makes sense to us just fine at it's native anthropic
> rate of change (except for the technologies we have designed to defeat
> that sense).
Sure, speed is not the most important thing, except when it leads to us
wanting some things to be faster; with our current biological bodies we
cannot make ourselves go faster or slower, we can only build faster and
faster devices, and we'll eventually hit the limit (we're nearly there
already). With COMP, this is an even greater problem locally: if you
get a digital brain (sometime in the not too near future), some
neuromorphic hardware is predicted to be a few orders of magnitude
faster (such as some 1000-4000 times our current rate), which would
mean that someone running at that speed might experience some insanely
slow Internet speeds for anything that isn't locally accessible (for
example, between the US and Europe or Asia). This might lead to certain
negative social effects, such as groups of SIMs (Substrate Independent
Minds) that prefer running at such speeds congregating around locally
accessible hubs, as opposed to the much slower Internet. However, such
a problem is only locally relevant (here in this Universe, on this
Earth), and is solvable if one is fine with slowing oneself down
relative to some other program; a system can even be designed which
allows unbounded speedup (I did write more on this in my other thread).
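The latency point is just arithmetic; a back-of-the-envelope sketch (the 1000-4000x figure is the speculative one from the text, not an established number):

```python
# Back-of-the-envelope: how real-world network latency would feel to a
# mind running faster than realtime. (The 1000-4000x range is the
# speculative figure from the text, not an established number.)

def subjective_latency(real_seconds, speedup):
    # Subjective time dilates linearly with the mind's speedup factor.
    return real_seconds * speedup

# A 100 ms transatlantic round trip, as felt at various speedups:
for speedup in (1, 1000, 4000):
    felt = subjective_latency(0.100, speedup)
    print(f"{speedup:>5}x -> feels like {felt:8.1f} s")
```

At 1000x, a 100 ms round trip feels like roughly a hundred subjective seconds, which is the "insanely slow Internet" effect described above.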
>
> Craig
>
But many things about numbers are not arithmetical. Arithmetical truth
is not arithmetical. A machine's knowledge can be proved to be
non-arithmetical.
If you want: arithmetic is rich enough to contain a bigger reality
than anything we can describe in 3p terms.
> There is nothing in the universe
The term universe is ambiguous.
You confuse proving p, which can be explained in arithmetic, with
"proving p & p is true", which can happen to be true for a machine
but necessarily escapes its language.
The same holds for consciousness. It cannot be explained in *any* third-
person terms. But it can be proved that a self-observing machine cannot
avoid discovering many things concerning itself which are beyond
language.
Pierz, Craig, I disagree. Consciousness can be explained as a non-3p-
describable fixed point arising when machines observe themselves. This
provides a key role for consciousness, including the ability to develop
meanings, to speed up decisions, to make decisions in the absence of
information, etc.
Consciousness is not explainable in terms of any parts of something,
but as an invariant in universal self-transformation.
If you accept the classical theory of knowledge, then Peano Arithmetic
is already conscious.
>
>>
>>> My solution is that both views are correct on their own terms in
>>> their
>>> own sense and that we should not arbitrarily privilege one view over
>>> the other. Our vision is human vision. It is based on retina vision,
>>> which is based on cellular and molecular visual sense. It is not
>>> just
>>> a mechanism which pushes information around from one place to
>>> another,
>>> each place is a living organism which actively contributes to the
>>> top
>>> level experience - it isn't a passive system.
>>
>> Living organisms - replicators,
>
> Life replicates, but replication does not define life. Living
> organisms feel alive and avoid death. Replication does not necessitate
> feeling alive.
I am OK with this. Yet, replication + a while-loop might be enough.
>
>> are fine things, but I don't see why
>> must one confuse replicators with perception. Perception can exist by
>> itself merely on the virtue of passing information around and
>> processing
>> it. Replicators can also exist due similar reasons, but on a
>> different
>> level.
>
> Perception has never existed 'by itself'. Perception only occurs in
> living organisms who are informed by their experience.
The whole point is to explain terms like "living", "conscious", etc.
You take them as primitive, so you are escaping the issue.
> There is no
> independent disembodied 'information' out there. There detection and
> response, sense and motive of physical wholes.
Same for "physical" (and that's not obvious!).
If you survive with a digital brain, then consciousness is necessarily
not digital.
A brain is not a maker of consciousness. It is only a stable pattern
making it possible (or more probable) that a person can manifest
itself relative to some universal number(s).
Keep in mind that comp makes materialism wrong. The big picture is
completely different. I think that you confuse comp with its
Aristotelian version, where computations seem to be incarnated by
primitive physical materials. Comp + materialism leads to person-
nihilism, so it is important that comp not be assumed together with
materialism (even weak materialism).
>
>> , some neuromorphic hardware is predicted to be a few orders of
>> magnitude faster(such as some 1000-4000 times our current rate),
>> which
>> would mean that if someone wanted to function at realtime speed, they
>> might experience some insanely slow Internet speeds, for anything
>> that
>> isn't locally accessible (for example, between US and Europe or
>> Asia),
>> which mind lead to certain negative social effects (such as groups of
>> SIMs(Substrate Independent Minds) that prefer running at realtime
>> speed
>> congregating and locally accessible hubs as opposed to the much
>> slower
>> Internet). However, such a problem is only locally relevant (here in
>> this Universe, on this Earth), and is solvable if one is fine with
>> slowing themselves down relatively to some other program, and a
>> system
>> can be designed which allows unbounded speedup (I did write more on
>> this
>> in my other thread).
>
> We are able to extend and augment our neurological capacities (we
> already are) with neuromorphic devices, but ultimately we need our own
> brain tissue to live in.
Why? What does that mean?
> We, unfortunately cannot be digitized,
You don't know that. But you don't derive it either from what you
assume (which, to be frank, remains unclear).
I think that you have a reductionist conception of machines, which was
perhaps defensible before Gödel 1931 and Turing's discovery of the
universal machine, but is no longer defensible.
Bruno
> we can
> only be analogized through impersonation.
>
> Craig
>
Why is this not 3p describable? Your explanation of it seems to imply a description.
Brent
> On 1/27/2012 9:20 AM, Bruno Marchal wrote:
>>
>> Pierz, Craig, I disagree. Consciousness can be explained as a non
>> 3p describable fixed point when machine's observe themselves.
>
> Why is this not 3p describable? Your explanation of it seems to
> imply a description.
Yes, but the explanation is not consciousness itself.
In the UDA, you are supposed to know what consciousness is. You are
asked to believe that your consciousness remains invariant for a
functional digital substitution.
In the AUDA, consciousness is not mentioned. It is handled indirectly
via knowledge, which is defined via an appeal to truth, which (by
Tarski's theorem) is not definable by the mechanical entity under
consideration.
In B'"1+1=2" & 1+1=2, the "1+1 = 2" is a description, but 1+1=2 is
not. It is true fact, and as such cannot be described. We cannot
translate True("1+1=2") in arithmetic. We can do it at some meta-
level, when we study a simpler machine than us, that we believe to be
correct, like PA. But then we can see that neither PA, nor any correct
machine can do this for *itself*.
Consciousness, knowledge, and truth are concepts which do not admit
formal definition when they encompass ourselves.
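For reference, this asymmetry can be put in standard metamathematical notation (a sketch, not part of the original post; Bew is Gödel's arithmetical provability predicate, K the Theaetetus-style knower):

```latex
% Provability IS arithmetically definable (Goedel 1931):
\mathrm{PA} \vdash \varphi
  \;\Longleftrightarrow\;
  \mathbb{N} \models \mathrm{Bew}(\ulcorner \varphi \urcorner)

% Truth is NOT (Tarski): no arithmetical formula True(x) satisfies,
% for every arithmetical sentence \varphi,
\mathbb{N} \models \mathrm{True}(\ulcorner \varphi \urcorner)
  \;\leftrightarrow\; \varphi

% The knower: each instance is expressible, but not uniformly definable
K\varphi \;:=\; \mathrm{Bew}(\ulcorner \varphi \urcorner) \wedge \varphi
```

So B"1+1=2" is a formula of arithmetic, and each instance of Bp & p is expressible, but no single arithmetical predicate captures the conjunction uniformly, which is why a correct machine cannot define it for itself.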
Bruno
>
> Brent
>
>
>> This provides a key role to consciousness, including the ability to
>> develop meanings, to speed decisions, to make decision in absence
>> of information, etc.
>> Consciousness is not explainable in term of any parts of something,
>> but as an invariant in universal self-transformation.
>> If you accept the classical theory of knowledge, then Peano
>> Arithmetic is already conscious.
>
Let me quote Jeffrey Gray (Consciousness: Creeping up on the Hard
Problem, p. 33) on biology and physics.
"In very general terms, biology makes use of two types of concept:
physicochemical laws and feedback mechanisms. The latter include both
the feedback operative in natural selection, in which the controlled
variables that determine survival are nowhere explicitly represented
within the system; and servomechanisms, in which there is a specific
locus of representation capable of reporting the values of the
controlled variables to other system components and to other systems.
The relationship between physicochemical laws and cybernetic mechanisms
in the biological perspective on biology poses no deep problems. It
consists in a kind of contract: providing cybernetics respects the laws
of physics and chemistry, its principles may be used to construct any
kind of feedback system that serves a purpose. Behaviour as such does
not appear to require for its explanation any principles additional to
these."
Roughly speaking, Gray's statement is

Biology = Physics + Feedback mechanisms

Yet even at this stage (just at the level of bacteria, where I guess
there are no qualia yet) it is unclear to me whether physics includes
cybernetic laws or whether they emerge/supervene. What is your opinion
on this?
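As one concrete illustration (mine, not Gray's notation): a servomechanism "law" can be written down just like a physical law, as a differential equation driving a represented controlled variable x toward its set point x*:

```latex
\frac{dx}{dt} = -k\,\bigl(x(t) - x^{*}\bigr), \qquad k > 0
```

Nothing in the equation itself goes beyond physics; the specifically cybernetic content is the designation of x as a controlled variable with a set point x*, which is exactly the "contract" Gray describes.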
I wanted to discuss this issue in another thread
http://groups.google.com/group/everything-list/t/a4b4e1546e0d03df
but at present the discussion is limited to the question of whether
information is a basic physical property (Information is the Entropy) or not.
Evgenii
>
> In the same way, I can't see how qualia can emerge from arithmetic,
> unless the rudiments of qualia are present in the natural numbers or
> the operations of addition and multiplication. And yet it seems to me
> they can't be, because the only properties that belong to arithmetic
> are those lent to them by the axioms that define them. Indeed
> arithmetic *is* exactly those axioms and nothing more. Matter may in
> principle contain untold, undiscovered mysterious properties which I
> suppose might include the rudiments of consciousness. Yet
> mathematics is only what it is defined to be. Certainly it contains
> many mysterious emergent properties, but all these properties arise
> logically from its axioms and thus cannot include qualia.
>
> I call the idea that it can numerology because numerology also
> ascribes qualities to numbers. A '2' in one's birthdate indicates
> creativity (or something), a '4' material ambition and so on.
> Because the emergent properties of numbers can indeed be deeply
> amazing and wonderful - Mandelbrot sets and so on - there is a
> natural human tendency to mystify them, to project properties of the
> imagination into them. But if these qualities really do inhere in
> numbers and are not put there purely by our projection, then numbers
> must be more than their definitions. We must posit the numbers as
> something that projects out of a supraordinate reality that is not
> purely mathematical - ie, not merely composed of the axioms that
> define an arithmetic. This then can no longer be described as a
> mathematical ontology, but rather a kind of numerical mysticism. And
> because something extrinsic to the axioms has been added, it opens
> the way for all kinds of other unicorns and fairies that can never be
> proved from the maths alone. This is unprovability not of the
> mathematical variety, but more of the variety that cries out for Mr
> Occam's shaving apparatus.
>
Not everyone. The approach based on both UDA and self-reference gives
a tremendous importance to the 1p and 3p distinction.
> The same
> sleight of hand tricked up in a variety of guises, but amounting
> always to the same manoeuvre.
You might have to look closer.
Bruno
On Jan 27, 12:20 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
> But many things about numbers are not arithmetical. Arithmetical
> truth is not arithmetical. Machine's knowledge can be proved to be
> non arithmetical.
> If you want, arithmetic is rich enough for having a bigger reality
> than anything we can describe in 3p terms.
But all arithmetic truths, knowledge, beliefs, etc are all still
sensemaking experiences. It doesn't matter whether they are arithmetic
or not, as long as they can possibly be detected or made sense of in
any way, even by inference, deduction, emergence, etc, they are still
sense. Not all sense is arithmetic or related to arithmetic in some
way though. Sense can be gestural or intuitive.
>> There is nothing in the universe
> The term universe is ambiguous.
Only in theory. I use it in a literal, absolutist way.
> You confuse proving p, which can be explained in arithmetic, and
> "proving p & p is true", which can happen to be true for a machine,
> but escapes necessarily its language.
> The same for consciousness. It cannot be explained in *any* third
> person terms. But it can be proved that self-observing machine cannot
> avoid the discovery of many things concerning them which are beyond
> language.
I think that you are confusing p with a reality rather than a logical
idea about reality.
I have no reason to believe that a machine can observe
itself in anything more than a trivial sense.
It is not a conscious
experience; I would guess that it is something like an accounting of
unaccounted-for function terminations. Proximal boundaries. A
silhouette of the self offering no interiority but an extrapolation of
incomplete 3p data. That isn't consciousness.
>> "But I’ll venture an axiom of my own here: no properties can emerge
>> from a complex system that are not present in primitive form in the
>> parts of that system. There is nothing mystical about emergent
>> properties. When the emergent property of ‘pumping blood’ arises out
>> of collections of heart cells, that property is a logical extension
>> of the properties of the parts - physical properties such as
>> elasticity, electrical conductivity, volume and so on that belong to
>> the individual cells. But nobody invoking ‘emergent properties’ to
>> explain consciousness in the brain has yet explained how
>> consciousness arises as a natural extension of the known properties
>> of brain cells - or indeed of matter at all."
> Pierz, Craig, I disagree. Consciousness can be explained as a non 3p
> describable fixed point when machines observe themselves. This
> provides a key role to consciousness, including the ability to
> develop meanings, to speed decisions, to make decisions in absence
> of information, etc.
I disagree. It provides a key role to the function of agency but it
has nothing to do with consciousness and qualia per se. A sleep walker
can navigate to the kitchen for a snack without being conscious.
Consciousness does nothing to speed decisions, it would only cost
processing overhead
and add nothing to the efficiency of unconscious
adaptation.
> Consciousness is not explainable in terms of any parts of something,
> but as an invariant in universal self-transformation.
> If you accept the classical theory of knowledge, then Peano
> Arithmetic is already conscious.
Why and how does universal self-transformation equate to
consciousness?
Anything that is conscious can also be unconscious. Can
Peano Arithmetic be unconscious too?
>>>> My solution is that both views are correct on their own terms in
>>>> their own sense and that we should not arbitrarily privilege one
>>>> view over the other. Our vision is human vision. It is based on
>>>> retina vision, which is based on cellular and molecular visual
>>>> sense. It is not just a mechanism which pushes information around
>>>> from one place to another, each place is a living organism which
>>>> actively contributes to the top level experience - it isn't a
>>>> passive system.
>>> Living organisms - replicators,
>> Life replicates, but replication does not define life. Living
>> organisms feel alive and avoid death. Replication does not
>> necessitate feeling alive.
> I am OK with this. Yet, replication + while-loop might be enough.
Should we mourn the untying of our shoelaces each time?
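(As a concrete aside, the "replication + while-loop" idea is easy to make literal with a quine, a program that prints its own source; the sketch below is an illustration of that, not anyone's claim in the thread. Wrapping the print in a while-loop would give unbounded self-copying.)

```python
# A minimal self-reproducing program (quine): it prints its own source.
# The string holds a template of the program; %r inserts the string's
# own repr, and %% becomes a literal %, reproducing the first line.
s = 's = %r\nprint(s %% s)'
print(s % s)
```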
>>> are fine things, but I don't see why must one confuse replicators
>>> with perception. Perception can exist by itself merely on the
>>> virtue of passing information around and processing it. Replicators
>>> can also exist due to similar reasons, but on a different level.
>> Perception has never existed 'by itself'. Perception only occurs in
>> living organisms who are informed by their experience.
> The whole point is to explain terms like "living", "conscious", etc.
> You take them as primitive, so are escaping the issue.
They aren't primitive, the symmetry is primitive.
>> There is no independent disembodied 'information' out there. There
>> is detection and response, sense and motive of physical wholes.
> Same for "physical" (and that's not obvious!).
Do you doubt that if all life were exterminated that planets would
still exist? Where would information be though?
>> Sorry, but I think it's never going to happen. Consciousness is not
>> digital.
> If you survive with a digital brain, then consciousness is
> necessarily not digital.
> A brain is not a maker of consciousness. It is only a stable pattern
> making it possible (or more probable) that a person can manifest
> itself relatively to some universal number(s).
Why not just use adipose tissue instead? That's a more stable pattern.
Why have a vulnerable concentration of this pattern in the head? Our
skeleton would make a much safer place four a person to manifest
itself relatively to some universal number.
> Keep in mind that comp makes materialism wrong.
That's not why it's wrong. I have no problem with materialism being
wrong, I have a problem with experience being reduced to non
experience or non sense.
> The big picture is completely different. I think that you confuse
> comp with its Aristotelian version, where computations seem to be
> incarnated by physical primitive materials. Comp + materialism leads
> to person-nihilism, so it is important to understand that comp
> should not be assumed together with materialism (even weak).
I don't think that I am confusing it. Comp is perfectly illustrated as
modern investment banking. There is no material, in fact it strangles
the life out of all materials, eviscerating culture and architecture,
all in the name of consolidating digitally abstracted control of
control. This is machine intelligence. The idea of unexperienced
ownership as an end unto itself, forever concentrating data and
exporting debt.
>> We are able to extend and augment our neurological capacities (we
>> already are) with neuromorphic devices, but ultimately we need our
>> own brain tissue to live in.
> Why? What does that mean?
It means that without our brain, there is no we.
We cannot be
simulated anymore than water or fire can be simulated.
Human
consciousness exists nowhere but through a human brain.
>> We, unfortunately cannot be digitized,
> You don't know that. But you don't derive it either from what you
> assume (which to be frank remains unclear).
I do derive it, because the brain and the self are two parts of a
whole. You cannot export the selfness into another form, because the
self has no form, it's only experiential content through the interior
of a living brain.
> I think that you have a reductionist conception of machine, which
> was perhaps defensible before Gödel 1931 and Turing's discovery of
> the universal machine, but is no longer defensible after.
I know that you think that, but you don't take into account that I
started with that. I read Gödel, Escher, Bach around 1980 I
think. Even though I couldn't get too much into the math, I was quite
happy with the implications of it. For the next 25 years I believed
that the universe was made of 'patterns' - pretty close to what your
view is.
It's only been in the last 7 years that I have found a better
idea. My hypothesis is post-Gödelian symmetry.
I wasn't asking for a formal definition, just a 3p description. You are saying that
B"1+1=2" is a description of being conscious that 1+1=2? This confuses me though because
I read B as "provable"; yet many things are provable of which we are not conscious.
Brent
What are "cybernetics laws"? Can they be written down like the Standard Model Lagrangian
or Einstein's equation?
>>
> I think it's clear that in approaches such as Gray's, which are based
> on a conventional materialist ontology, any laws invoked must
> ultimately rely on/emerge from physical laws. In fact, that's clear in
> Gray's qualifier "providing cybernetics respects the laws of physics
> and chemistry". "Respects" in this clause means that cybernetics must
> be subservient to physics, therefore emergent from it. However the
> laws of physics do not include cybernetic laws - the fundamental
> equations of physics are actually reducible to a handful of equations
> you can write down on a couple of sheets of paper. In terms of the
> point I am making regarding qualia, Gray's argument is one variant on
> the theme of the type of reasoning I object to. It's all there in the
> statement:
>
> "Behaviour as such does not appear to require for its explanation any
> principles additional to these."
>
> The issue isn't explaining behaviour, it's explaining consciousness/
> qualia. These approaches always end up conflating the two, their
> proponents getting annoyed with anyone who isn't prepared to wish away
> the gap between them.
But most people seem to think that the two are linked; that philosophical zombies are
impossible. Are you asserting that they are possible?
Brent
>> but if I had to make an
>> initial guess (maybe wrong), it seems similar to some form of
>> panpsychism directly over matter.
>
> Close, but not exactly. Panpsychism can imply that a rock has human-
> like experiences. My hypothesis can be categorized as
> panexperientialism because I do think that all forces and fields are
> figurative externalizations of processes which literally occur within
> and through 'matter'. Matter is in turn diffracted pieces of the
> primordial singularity.
Not entirely sure what you mean by the singularity, but okay.
> It's confusing for us because we assume that
> motion and time are exterior conditions, but if my view is accurate,
> then all time and energy is literally interior to the observer as an
> experience.
I think most people realize that the sense of time is subjective and
relative, as with qualia. I think some form of time is required for
self-consciousness. There can be different scales of time, for example,
the local universe may very well run at planck-time (guesstimation based
on popular physics theories, we cannot know, and with COMP, there's an
infinity of such frames of references), but our conscious experience is
much slower relative to that planck-time, usually assumed to run at a
variable rate, at about 1-200Hz (neuron-spiking freq), although maybe
observer moments could even be smaller in size.
> What I think is that matter and experience are two
> symmetrical but anomalous ontologies - two sides of the same coin, so
> that our qualia and content of experience is descended from
> accumulated sense experience of our constituent organism, not
> manufactured by their bodies, cells, molecules, interactions. The two
> are both opposite expressions (a what & how of matter and space and a
> who & why of experience or energy and time) of the underlying sense
> that binds them to the singularity (where & when).
>
Accumulated sense experience? Our neurons do record our memories
(lossily, as we also forget), and interacting "matter" does lead to
state changes. Although, this (your theory) feels much like a
reification of matter and qualia (and having them be nearly the same
thing), and I think it's possible to find some inconsistencies here,
more on this later in this post.
>> Such theories are testable and
>> falsifiable, although only in the 1p sense. A thing that should be worth
>> keeping in mind is that whatever our experience is, it has to be
>> consistent with our structure (or, if we admit, our computational
>> equivalent) - it might be more than it, but it cannot be less than it.
>> We wouldn't see in color if our eyes' photoreceptor cells didn't absorb
>> overlapping ranges of light wavelengths and then processed it throughout
>> the visual system (in some parts, in not-so-general ways, while in
>> others, in more general ways). The structures that we are greatly limit
>> the nature of our possible qualia.
>
> I understand what you are saying, and I agree the structures do limit
> our access to qualia, but not the form. Synesthesia, blindsight, and
> anosognosia show clearly that at the human level at least, sensory
> content is not tied to the nature of mechanism. We can taste color
> instead of see it, or know vision without seeing. This is not to say
> that we aren't limited by being a human being, of course we are, but
> our body is as much a vehicle for our experience as our
> experience is filtered through our body. Indeed the brain makes no
> sense as anything other than a sensorimotive amplifier/condenser.
>
Synesthesia can happen for multiple reasons, although one possible cause
is that some parts of the neocortical hierarchy are more tightly
inter-connected, which leads to sense-data from one region to directly
affect processing of sense-data from an adjacent region, thus having
experience of both qualia simultaneously. I don't see how synesthesia
contradicts mechanism, on the contrary, mechanism explains it quite
well. Blindsight seems to me to be due to the neocortex being very good
at prediction and integrating data from other senses, more on this idea
can be seen in Jeff Hawkins' "On Intelligence". I can't venture a guess
about anosognosia, it seems like a complicated-enough neurophysiology
problem.
Do you think brains-in-a-vat or those with auditory implants have no
qualia for those areas despite behaving like they do? Do you think they
are partial zombies?
To elaborate, consider that someone gets a digital eye, this eye can
capture sense data from the environment, process it, then route it to an
interface which generates electrical impulses exactly like how the eye
did before and stimulates the right neurons. Consider the same for the
other senses, such as hearing, touch, smell, taste and so on. Now
consider a powerful-enough computer capable of simulating an
environment, first you can think of something unrealistic like our video
games, but then you can think of something better like ray-tracing and
eventually full-on physical simulation to any granularity that you'd
like (this may not yet be feasible in our physical world without slowing
the brain down, but consider it as a thought experiment for now). Do you
think these brains are p. zombies because they are not interacting with
the "real" world? The reason I'm asking this question is that it seems
to me like in your theory, only particular things can cause particular
sense data, and here I'm trying to completely abstract away from sense
data and make it accessible by proxy and allow piping any type of data
into it (although obviously the brain will only accept data that fits
the expected patterns, and I do expect that only correct data will be sent).
I wouldn't be so sure. I think if we can privilege the brains of others
with consciousness, then we should privilege any systems which perform
the same functions as well. Of course we cannot know if anything besides
us is conscious, but I tend to favor non-solipsistic theories myself.
The brain physically stores beliefs in synapses and its neuron bodies
and I see no reason why some artificial general intelligence couldn't
store its beliefs in its own data-structures such as hypergraphs and
whatnot, and the actual physical storage/encoding shouldn't be too
relevant as long as the interpreter (program) exists. I wouldn't have
much of a problem ascribing consciousness to anything that is obviously
behaving intelligently and self-aware. We may not have such AGI yet, but
research in those areas is progressing rather nicely.
Yet they will behave as if they have those emotions, qualia, ...
Punishing will result in some (types of) actions being avoided and
rewards will result in some (types of) actions being more frequent.
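That reward/punishment dynamic is what even the simplest reinforcement-learning loop exhibits; here is a minimal sketch (purely illustrative, not any specific system discussed here): rewarded actions become more frequent, punished ones rarer.

```python
import random

def train(steps=5000, epsilon=0.1, lr=0.1, seed=0):
    """Tiny two-action bandit: one action is rewarded, one punished."""
    rng = random.Random(seed)
    value = {"rewarded": 0.0, "punished": 0.0}   # learned action values
    counts = {"rewarded": 0, "punished": 0}      # how often each was taken
    for _ in range(steps):
        # epsilon-greedy: mostly take the best-valued action, sometimes explore
        if rng.random() < epsilon:
            action = rng.choice(sorted(value))
        else:
            action = max(value, key=value.get)
        counts[action] += 1
        reward = 1.0 if action == "rewarded" else -1.0   # reward vs punishment
        value[action] += lr * (reward - value[action])   # incremental update
    return counts

counts = train()
# After training, the rewarded action dominates the action counts.
```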
A computationalist may claim they are conscious because of the
computational structure underlying their cognitive architecture.
You might claim they are not because they don't have access to "real"
qualia or that their implementation substrate isn't magical enough?
Eventually such a machine may plead to you that they are conscious and
that they have qualia (as they do have sense data), but you won't
believe them because of being implemented in a different substrate than
you? Same situation goes for substrate independent minds/mind uploads.
> It's not necessary since they
> have no autonomy (avoiding 'Free Will' for John Clark's sake) to begin
> with.
I don't see why not. If I had to guess, is it because you don't grant
autonomy to anything whose behavior is fully determined? Within COMP,
you both have deterministic behavior, but indeterminism is also
completely unavoidable from the 1p. I don't think 'free' will has
anything to do with 1p indeterminism, I think it's merely the feeling
you get when you have multiple choices and you use your active conscious
processes to select one choice, however whatever you select, it's always
due to other inner processes, which are not always directly accessible to
the conscious mind - you do what you want/will, but you don't always
control what you want/will, that depends on your cognitive architecture,
your memories and the environment (although since you're also part of
the environment, the choice will always be quasideterministic, but not
fully deterministic).
> All we have to do is script rules into their mechanism.
It's not as simple, you can have systems find out their own rules/goals.
Try looking at modern AGI research.
> Some
> parents would like to be able to do that I'm sure, but of course it
> doesn't work that way for people. No matter how compelling and
> coercive the brainwashing, some humans are always going to try to hack
> it and escape. When a computer hacks its programming and escapes, we
> will know about it, but I'm not worried about that.
Sure, we're as 'free' as computations are, although most computations
we're looking into are those we can control because that's what's
locally useful for humans.
> What is far more
> worrisome and real is that the externalization of our sense of
> computation (the glass exoskeleton) will be taken for literal truth,
> and our culture will be evacuated of all qualities except for
> enumeration. This is already happening. This is the crisis of the
> 19-21st centuries. Money is computation. WalMart parking lot is the
> cathedral of the god of empty progress.
There are some worries. I wouldn't blame computation for it, but our
current limited physical resources and some emergent social machines
which might not have beneficial outcomes, sort of like a tragedy of the
commons, however that's just a local problem. On the contrary, I think
the answer to a lot of our problems has computational solutions,
unfortunately we're still some 20-50+ years away from finding them, and I
hope we won't be too late there.
>
>>
>>>> regardless of how sensing (indirectly accessing data) is done, emergent
>>>> digital movement patterns would look like (continuous) movement to the
>>>> observer.
>>
>>> I don't think that sensing is indirect accessed data, data is
>>> indirectly experienced sense. Data supervenes on sense, but not all
>>> sense is data (you can have feelings that you don't understand or even
>>> be sure that you have them).
>>
>> It is indirect in the example that I gave because there is an objective
>> state that we can compute, but none of the agents have any direct access
>> to it - only to approximations of it - if the agent is external, he is
limited to what he can access through the interface, if the agent is itself
>> part of the structure, then the limitation lies within itself - sort of
>> like how we are part of the environment and thus we cannot know exactly
>> what the environment's granularity is (if one exists, and it's not a
>> continuum or merely some sort of rational geometry or many other
>> possibilities).
>
> Not sure what you're saying here. I get that we cannot see our own
> fine granularity, but that doesn't mean that the sense of that
> granularity isn't entangled in our experience in an iconic way.
>
The idea was that indeed one cannot see their own granularity. I also
gave an example of an interface to a system which has a granularity, but
that wouldn't be externally accessible.
I don't see what you mean by 'entangled in our experience in an iconic
way'. You can't *directly* sense more than the information that is
available directly to your senses, as in, if your eye only captures
about 1000*1000 pixels worth of data, you can't see beyond that without
a new eye and a new visual pathway (and some extension to the PFC and so
on). We're able to differentiate colors because of how the data is
processed in the visual system. We're not able to sense strings or
quarks or even atoms directly, we can only infer their existence as a
pattern indirectly.
>>
>> > I'm not sure why you say that continuous
>> > movement patterns emerge to the observer, that is factually incorrect.
>> >http://en.wikipedia.org/wiki/Akinetopsia
>> Most people tend to feel their conscious experience being continuous,
>> regardless of if it really is so, we do however notice large
>> discontinuities, like if we slept or got knocked out. Of course most
>> bets are off if neuropsychological disorders are involved.
>
> Any theory of consciousness should rely heavily on all known varieties
> of consciousness, especially neuropsychological disorders. What good
> is a theory of 21st century adult males of European descent with a
> predilection for intellectual debate? The extremes are what inform us
> the most. I don't think there is such a thing as 'regardless of it
> really is so' when it comes to consciousness. What we feel our
> conscious experience to be is actually what it feels like. No external
> measurement can change that. We notice discontinuities because our
> sense extends much deeper than conscious experience. We can tell if
> we've been sleeping even without any external cues.
>
Sure, I agree that some disorders will give important hints as to the
range of conscious experience, although I think some disorders may be so
unusual that we lose any idea about what the conscious experience is.
Our best source of information is our own 1p and 3p reports.
You have to show that mechanism makes no sense. Given the data that I
observe, mechanism is both what my inner inductive senses tell me
and what formal induction tells me is the case. We cannot know,
but evidence is very strong towards mechanism. I ask you again to
consider the brain-in-a-vat example I said before. Do you think someone
with an auditory implant (example:
http://en.wikipedia.org/wiki/Auditory_brainstem_implant
http://en.wikipedia.org/wiki/Cochlear_implant) hears nothing? Are they
partial zombies to you?
They behave in all ways like they sense the sound, yet you might claim
that they don't because the substrate is different?
>> COMP on the other hand, offers very solid
>> testable predictions and doesn't fail most thought experiments or
>> observational data that you can put it through (at least so far). I wish
>> other consciousness theories were as solid, understandable and testable
>> as COMP.
>
> My hypothesis explains why that is the case. Comp is too stupid not to
> prove itself. The joke is on us if we believe that our lives are not
> real but numbers are. This is survival 101. It's an IQ test. If we
> privilege our mechanistic, testable, solid, logical sense over our
> natural, solipsistic, anthropic sense, then we will become more and
> more insignificant, and Dennet's denial of subjectivity will draw
> closer and closer to self-fulfilling prophesy. The thing about
> authentic subjectivity, it is has a choice. We don't have to believe
> in indirect proof about ourselves because our direct experience is all
> the proof anyone could ever have or need. We are already real, we
> don't need some electronic caliper to tell us how real.
>
COMP doesn't prove itself, it requires the user to make some sane
assumptions (either impossibility of zombies or functionalism or the
existence of the substitution level and mechanism; most of these
assumptions make logical, scientific and philosophic sense given the
data). It just places itself as the best candidate to bet on, but it can
never "prove" itself. COMP doesn't deny subjectivity, it's a very
important part of the theory. The assumptions are just: (1p) mind,
(some) mechanism (observable in the environment, by induction),
arithmetical realism (truth value of arithmetical sentences exists), a
person's brain admits a digital substitution and 1p is preserved (which
makes sense given current evidence and given the thought experiment I
mentioned before).
>>
>>>> when
>>>> a photon hits a photoreceptor cell, that *binary* piece of information
>>>> is transmitted through neurons connected to that cell and so on
>>>> throughout the visual system(...->V1->...->V4->IT->...) and eventually
>>>> up to the prefrontal cortex.
>>
>>> That's a 3p view. It doesn't explain the only important part -
>>> perception itself. The prefrontal cortex is no more or less likely to
>>> generate visual awareness than the retina cells or neurons or
>>> molecules themselves.
>>
>> In COMP, you can blame the whole system for the awareness, however you
>> can blame the structure of the visual system for the way colors are
>> differentiated - it places great constraints on what the color qualia
>> can be - certainly not only black and white (given proper
>> functioning/structure).
>
> Nah. Color could be sour and donkey, or grease, ring, and powder. The
> number of possible distinctions is, and even their relationships to
> each other as you say, part of the visual system's structure, but it
> has nothing to do with the content of what actually is distinguished.
>
It seems to me like your theory is that objects (what is an object here?
do you actually assume a donkey to be ontologically primitive?!) emit
magical qualia-beams that somehow directly interact with your brain
which itself is made of qualia-like things. Most current science
suggests that that isn't the case, but surely you can test it, so you
should. Maybe I completely misunderstood your idea.
>>
>>> The 1p experience of vision is not dependent upon external photons (we
>>> can dream and visualize) and it is not solipsistic either (our
>>> perceptions of the world are generally reliable). If I had to make a
>>> copy of the universe from scratch, I would need to know that what
>>> vision is all about is feeling that you are looking out through your
>>> eyes at a world of illuminated and illuminating objects. Vision is a
>>> channel of sensitivity for the human being as a whole, and it has
>>> more to do with our psychological immersion in the narrative of our
>>> biography than it does photons and microbiology. That biology,
>>> chemistry, or physics does not explain this at all is not a small
>>> problem, it is an enormous deal breaker.
>>
>> You're right that our internal beliefs do affect how we perceive things.
>> It's not biology's or chemistry's job to explain that to you. Emergent
>> properties from the brain's structure should explain those parts to you.
>> Cognitive sciences as well as some related fields do aim to solve such
>> problems. It's like asking why an atom doesn't explain the computations
>> involved in processing this email. Different emergent structures at
>> different levels, sure one arises from the other, but in many cases, one
>> level can be fully abstracted from the other level.
>
> Emergent properties are just the failure of our worldview to find
> coherence. I will quote what Pierz wrote again here because it says it
> all:
>
> "But I'll venture an axiom
> of my own here: no properties can emerge from a complex system that
> are not present in primitive form in the parts of that system. There
> is nothing mystical about emergent properties. When the emergent
> property of 'pumping blood' arises out of collections of heart cells,
> that property is a logical extension of the properties of the parts -
> physical properties such as elasticity, electrical conductivity,
> volume and so on that belong to the individual cells. But nobody
> invoking 'emergent properties' to explain consciousness in the brain
> has yet explained how consciousness arises as a natural extension of
> the known properties of brain cells - or indeed of matter at all."
>
If you don't like emergence, think of it as "abstraction". When you
write a program in C or Lisp or Java or whatever, you don't care what it
gets compiled to: it will work the same on any machine, provided a
compiler or interpreter exists for it and your program was written in a
portable manner. Emergence is similar, but a lot muddier: the levels can
still interact with each other, and a perfectly abstracted system may
not always exist, even when most high-level behavior is not obvious from
the low-level behavior. Emergence is indeed in the eye of the beholder.
In COMP, consciousness is how some abstract arithmetical structure,
locally implemented in your brain, has a 1p view. The existence of the
1p view is not something reducible; it's ontologically primitive (as
arithmetical truth/relations), and it follows merely from some
particular abstract machine being contained in (or emerging from) the
brain at some substitution level. COMP basically says that rich enough
machines satisfying some properties will have qualia and consciousness,
and they cannot avoid that.
>>
>>> My solution is that both views are correct on their own terms in their
>>> own sense and that we should not arbitrarily privilege one view over
>>> the other. Our vision is human vision. It is based on retina vision,
>>> which is based on cellular and molecular visual sense. It is not just
>>> a mechanism which pushes information around from one place to another,
>>> each place is a living organism which actively contributes to the top
>>> level experience - it isn't a passive system.
>>
>> Living organisms - replicators,
>
> Life replicates, but replication does not define life. Living
> organisms feel alive and avoid death. Replication does not necessitate
> feeling alive.
>
You'll have to define what feeling alive is. This shouldn't be confused
with being biological. I feel like I have coherent senses; that's what
being alive means to me. My cells on their own (without any input from
me) replicate and keep my body functioning properly. I will try to avoid
situations that can kill me because I prefer being alive, because of my
motivational/emotional/reward system. I don't think anyone would move or
do anything without such a biasing motivational/emotional/reward system.
There are some interesting studies on people who had damage to such
systems and how it affected their decision-making process.
>> are fine things, but I don't see why
>> must one confuse replicators with perception. Perception can exist by
>> itself merely on the virtue of passing information around and processing
>> it. Replicators can also exist for similar reasons, but on a different
>> level.
>
> Perception has never existed 'by itself'. Perception only occurs in
> living organisms who are informed by their experience. There is no
> independent disembodied 'information' out there. There is only
> detection and response, sense and motive, of physical wholes.
>
I see no reason why that has to be true, feel free to give some evidence
supporting that view. Merely claiming that those people with auditory
implants hear nothing is not sufficient. My prediction is that if one
were to have such an implant, get some memories with it, then somehow
switched back to using a regular ear, their auditory memories from those
times would still remain.
>>
>>>> Neurons are also rather slow, they can only
>>>> spike about once per 5ms (~200Hz), although they rarely do so often.
>>>> (Note that I'm not saying that conscious experience is only the current
>>>> brain state in a single universe with only one timeline and nothing
>>>> more, in COMP, the (infinite amount of) counterfactuals are also
>>>> important, for example for selecting the next state, or for "splits" and
>>>> "mergers").
>>
>>> Yes, organisms are slower than electronic measuring instruments, but
>>> it doesn't matter because our universe is not an electronic measuring
>>> instrument. It makes sense to us just fine at its native anthropic
>>> rate of change (except for the technologies we have designed to defeat
>>> that sense).
>>
>> Sure, the speed is not the most important thing, except when it leads to
>> us wanting some things to be faster and with our current biological
>> bodies, we cannot make them go faster or slower, we can only build
>> faster and faster devices, but we'll eventually hit the limit (we're
>> nearly there already). With COMP, this is even a greater problem
>> locally: if you get a digital brain (sometime in the not too near
>> future)
>
> Sorry, but I think it's never going to happen. Consciousness is not
> digital.
>
It's not digital in COMP either: arithmetical truth is undefinable in
arithmetic itself. However, the brain might admit a digital
substitution. Try not to confuse the brain and the mind. Some assume
they are the same, in which case they are forced to eliminativism (if
they assume mechanism), others are forced to less understandable
theories (from my perspective, but you probably understand it better
than me) like yours (if they assume mechanism is false), while others
are forced to COMP (arithmetical ontology) if they don't give up their
1p and assume mechanism (+digital subst. level).
>> , some neuromorphic hardware is predicted to be a few orders of
>> magnitude faster (some 1000-4000 times our current rate), which would
>> mean that anyone who wanted to function at realtime speed might
>> experience insanely slow Internet speeds for anything that isn't
>> locally accessible (for example, between the US and Europe or Asia).
>> That might lead to certain negative social effects, such as groups of
>> SIMs (Substrate Independent Minds) that prefer running at realtime
>> speed congregating at locally accessible hubs rather than using the
>> much slower Internet. However, such a problem is only locally relevant
>> (here in this Universe, on this Earth), and is solvable if one is fine
>> with slowing oneself down relative to some other program; a system can
>> even be designed which allows unbounded speedup (I wrote more on this
>> in my other thread).
>
> We are able to extend and augment our neurological capacities (we
> already are) with neuromorphic devices, but ultimately we need our own
> brain tissue to live in. We, unfortunately cannot be digitized, we can
> only be analogized through impersonation.
>
You'd have to show this to be the case then. Most evidence suggests that
we might admit a digital substitution level. We cannot know if we'd
survive such a substitution from the 1p, and that is a bet in COMP.
> Craig
>
No, there is a huge number of anecdotes.
http://records.viu.ca/www/ipp/pdf/NDE.pdf
And when there have been controlled experiments, in which signs were placed on high shelves
in operating rooms, those floating NDEs have not been able to read them.
> Other evidence, for instance, comes
> from LSD research conducted in the fifties (see Stanislav Grof's
> work).
The award-winning Dr. Grof?
http://www.stanislavgrof.com/pdf/Bronze.Delusional.Boulder_2000.pdf
> Of course there's also vast and incontrovertible evidence that
> consciousness, under normal conditions, does supervene on brain state
> and structure, so we are left with an anomaly that in most cases is
> resolved by denying the evidence of the exceptions. This is not all
> that hard to do when the evidence is to be found in consciousnesses
> of subjects rather than 'instruments' and cannot easily be subjected
> to controlled experimental trials. But even a single personal
> experience can override the weightiest scientific authority
So all those sightings of ghosts and Elvis override the theory that the dead don't roam
around where you can see them.
> - as
> Galileo looking through the telescope and seeing 'impossible'
> mountains on the moon. So one can have a personal conviction that
> 'something is wrong with the conventional view' without necessarily
> being able to present conceptual or experimental proof for one's
> conviction. Therefore, I prefer to keep reminding people that
> something utterly central to their existence - in fact the defining
> feature to that existence: our awareness of it - remains without an
> explanation. Even the estimable David Deutsch - arch rationalist and
> materialist - concedes that we have no explanation for qualia.
Have you ever considered what form such an explanation might take?
Brent
>>>> but if I had to make an
>>>> initial guess (maybe wrong), it seems similar to some form of
>>>> panpsychism directly over matter.
>
>>> Close, but not exactly. Panpsychism can imply that a rock has human-
>>> like experiences. My hypothesis can be categorized as
>>> panexperientialism because I do think that all forces and fields are
>>> figurative externalizations of processes which literally occur within
>>> and through 'matter'. Matter is in turn diffracted pieces of the
>>> primordial singularity.
>
>> Not entirely sure what you mean by the singularity, but okay.
>
> The singularity can be thought of as the Big Bang before the Big Bang,
> but I take it further through the thought experiment of trying to
> imagine really what it must be - rather than accepting the cartoon
> version of some ball of white light exploding into space. Since space
> and time come out of the Big Bang, it has no place to explode out to,
> and no exterior to define any boundaries to begin with. What that
> means is that space and time are divisions within the singularity and
> the Big Bang is eternal and timeless at once, and we are inside of it.
>
When I think of the Big Bang, the question "what is outside?" makes no
sense to me, as I just think of the "singularity" as what's at time 0.
Except that I don't think the initial state is literally empty - not
even string theory assumes that. General relativity might assume that,
but we all know General Relativity is not compatible with Quantum
Mechanics, so we shouldn't assume it is literally true when it comes to
black holes or the Big Bang. Of course, within the context of COMP,
there is no ontologically primary "Big Bang" per se, but some apparent
structure whose time evolution looks like that.
>>> It's confusing for us because we assume that
>>> motion and time are exterior conditions, but if my view is accurate,
>>> then all time and energy is literally interior to the observer as an
>>> experience.
>
>> I think most people realize that the sense of time is subjective and
>> relative, as with qualia. I think some form of time is required for
>> self-consciousness. There can be different scales of time, for example,
>> the local universe may very well run at Planck time (a guesstimation
>> based on popular physics theories; we cannot know, and with COMP,
>> there's an infinity of such frames of reference), but our conscious
>> experience is much slower relative to that Planck time, usually
>> assumed to run at a variable rate of about 1-200Hz (neuron-spiking
>> frequency), although maybe
>> observer moments could even be smaller in size.
>
> I think planck time is an aspect of the instruments we are using to
> measure microcosmic events. There is no reason to think that time is
> literal and digital.
>
All the instruments? If everything that can measure at that scale gives
the same results, isn't that enough to just say 'it probably is
objectively so' (at least within this local physics, we're in
everything-list after all ;))
>>> What I think is that matter and experience are two
>>> symmetrical but anomalous ontologies - two sides of the same coin, so
>>> that our qualia and content of experience is descended from
>>> accumulated sense experience of our constituent organism, not
>>> manufactured by their bodies, cells, molecules, interactions. The two
>>> are both opposite expressions (a what & how of matter and space, and
>>> a who & why of experience or energy and time) of the underlying sense
>>> that binds them to the singularity (where & when).
>
>> Accumulated sense experience? Our neurons do record our memories
>> (lossily, as we also forget)
>
> There is loss but there is also embellishment. Our recollection is
> influenced by our semantic agendas, not only data loss. There's also
> those cases of superior autobiographical memory
> http://www.cbsnews.com/stories/2010/12/16/60minutes/main7156877.shtml
> which indicate that memory loss is not an inherent neurological
> limitation.
>
There is probably some memory limit, although it should be fairly large
(90*10^9 neurons, each with some 1000-3000 synapses, is quite a bit of
data). However, even that capacity is hardly enough to losslessly
remember all the sense data one has ever experienced - or even lossily.
We don't even remember sense data; we remember high-level patterns (see
the book I linked before), and patterns of patterns and so on (up to
some ~10 levels of recursion). If we actively try to remember as many
details as possible, we will, although I don't think this can go on
indefinitely. Someone obsessing over certain details all the time may
very well have better memory of them. Personally, I tend to forget most
things I don't use, even after just a few years, and I'm still quite
young. There are things I would like to remember, and yet some specific
details elude me because the memory is old (10 years?). We also
embellish our memories, which just shows how unreliable they can be.
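The capacity guess above can be made explicit. This is only
back-of-the-envelope arithmetic using the figures from the text
(~90*10^9 neurons, ~1000-3000 synapses each); the bits-per-synapse value
is a loose illustrative assumption, not an established number:

```python
# Back-of-the-envelope synaptic storage estimate. The neuron and synapse
# counts come from the text above; bits_per_synapse is a hypothetical
# resolution for one synaptic weight, chosen only for illustration.

neurons = 90 * 10**9
synapses_low, synapses_high = 1000, 3000
bits_per_synapse = 4  # assumed, not measured

low_bytes = neurons * synapses_low * bits_per_synapse // 8
high_bytes = neurons * synapses_high * bits_per_synapse // 8

print(f"~{low_bytes // 10**12}-{high_bytes // 10**12} TB of raw synaptic storage")
# -> ~45-135 TB of raw synaptic storage
```

Large, but still nowhere near a lifetime of raw sense data, which is
consistent with storing patterns of patterns rather than raw input.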
I don't have synesthesia, so I can't speak for the nature of the qualia
experienced. As for interpreting optical data through other connected
parts of the cortex: imagine you have a region which is trained with
some particular sense data (not optical), and now imagine it has some
loose connections to the visual system. Synesthesia occurs when a few
auditory circuits fire together with visual ones, so you have both
qualia at once (the visual one and the other one). The other system is
trained to mostly process data of a certain structure with certain types
of patterns being common in it, so it only experiences qualia consistent
with that type of structure. When the loose connections from the visual
system trigger circuits from the other system, the "wrong" qualia is
triggered together with the visual qualia. Given enough time, a person
with synesthesia should anticipate both qualia to occur simultaneously
and what they mean.
> If you
> assume that putting eyes on a robot conjures qualia automatically, why
> would it be visual qualia?
My assumption isn't nearly as simple. A camera or an eye would not be
sufficient to get the rich experience of visual qualia. You'd have to
have the right visual system processing the data (computationally, the
substrate itself would be irrelevant). If I had to venture a guess,
systems like Hierarchical temporal memory(HTM) or Deep Spatio-Temporal
Inference Network (DeSTIN) might have similar visual/color qualia to our
own due to how they recursively process/recognize the patterns in the
data, very much like our own visual system (they are based on similar
principles, but usually use more probabilistic/statistical methods
instead of implementing the full neural network, although they do retain
the same overall properties and structure), but they might not have the
same large Cartesian theatre-like image; you'd probably require some motor
control (like our saccades) to be able to continuously capture more
details of the scene and form the needed associations. However, most of
these details are something for an AGI researcher to worry about.
Now about the distinction between visual and other qualia. If Hawkins'
theory is correct, most of our cortical systems for processing
different types of input from the environment (visual, auditory,
sensory, ...) are pretty much the same circuit template organized in a
hierarchical manner (each part feeding recognized patterns to a higher
part and so on), and only some rough higher-level connectivity is
different - the main difference lies in what the input actually is.
If the system is trained using visual data, it'd end up recognizing
small patterns in small locally-connected areas (such as edges, lines,
..), then patterns in those patterns and so on, until it recognizes
objects as wholes (temporal patterns are recognized as well, so complex
movement patterns can also be recognized; the system would actually
behave worse if the data weren't live or if there were no noise).
If you were to try to imagine what abstract structure would correspond
to that visual data, you'd see how it would encode the spatial(and
temporal) nature of the data, how it would differentiate colors, but
also contain high-level features (at different levels of the hierarchy)
such as lines or faces or whole objects (in a way, a person can also
think in higher-level concepts without consciously saying those
concepts' words in their mind). Such a system, if it were able to talk
about its qualia (in the framework of a larger system), would have
all the *communicable* properties of visual qualia that we humans
ascribe to it. One could also imagine that the perceived data structures
would have their corresponding arithmetical truths and so on (if
considering it within the context of COMP).
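To illustrate the "patterns of patterns" idea, here is a toy two-level
recognizer (all names hypothetical; this is not actual HTM or DeSTIN
code): level 1 maps raw pixel triples to named features, and level 2
maps sequences of those features to objects, just as the text describes
edges composing into lines and lines into wholes.

```python
# Toy hierarchy in the spirit described above: each level recognizes
# patterns over the outputs of the level below, so "objects" are
# patterns of features and features are patterns of pixels.

LEVEL1 = {           # pixel patterns -> low-level features
    (1, 1, 0): "edge_left",
    (0, 1, 1): "edge_right",
}
LEVEL2 = {           # feature patterns -> higher-level objects
    ("edge_left", "edge_right"): "bar",
}

def recognize(rows):
    # Level 1: map each pixel triple to a named feature (or "noise").
    features = tuple(LEVEL1.get(tuple(r), "noise") for r in rows)
    # Level 2: map the sequence of features to an object, if known.
    return LEVEL2.get(features, "unknown")

print(recognize([[1, 1, 0], [0, 1, 1]]))  # -> bar
print(recognize([[1, 0, 1], [0, 1, 1]]))  # -> unknown
```

The same two-level machinery trained on differently structured input
would learn entirely different dictionaries, which is the structural
sense in which visual and auditory qualia differ below.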
Now consider, auditory data - it's basically temporal data, only one
dimension (or 2 with time), compared to the usual 3 (or 4, due to
spatial correlations between other systems) for visual data. The same
process as with the visual data would apply, but the learned structure
is now tailored for auditory data. The corresponding qualia would of
course be quite different from the visual ones, as the interpreter has
learned to correctly process only auditory data; and if you were to
represent an abstract structure for that auditory data, you'd see how
its patterns differ from the visual patterns.
Sensory feeling would itself lend to similar learning and have its own
unique structural properties. At minimum, it would have the right
*communicable* properties.
Simpler qualia like smell or taste would still be unique in their own
way, but much simpler, due to their considerably simpler structure.
Such a system would retain all *communicable* properties, but we can't
say if it would have the exact same qualia as we do; then again, I can't
know if you and I have the exact same qualia, we can only agree on the
communicable parts of it. I can't even know if I have the same qualia
every day - if there are parts that are completely incommunicable or
inaccessible structurally, that part of the nature of qualia may very
well be constantly changing ("dancing qualia") without us ever knowing
about it (of course, it also makes sense that we don't believe such a
thing to be happening because our experiences appear stable and we have
temporal continuity, and anything like that would be absolutely
incommunicable and inaccessible to any 3p instruments).
There are however many questions here for me: what qualia would a system
trained with atmospheric data have? what would it be like for an AGI to
have an additional visual system? what would it be like to perceive data
in multiple dimensions (such as 3, 4, 5, ...), as systems can be trained
with such data?
We can find the communicable answers to such questions, but we cannot
know what it's like to be such systems without extending ourselves in
such a way as to partially become such a system (full 1p questions only
have full 1p answers, but may have partial communicable 3p answers).
To summarize: The nature of the qualia depends on the structure of the
data that is perceived, as well as the structure of the system that does
the perceiving. We can only talk about communicable parts of qualia, and
those parts only depend on structural/pattern-based parts of the data
and processing/computational systems involved. We cannot even know of
the stability of anything not expressly contained within the structure,
nor does it make much sense to worry about it: even if such instability
were real, we'd be completely oblivious to it.
(Most of the stuff I'm talking about is discussed in detail in "On
Intelligence" and many other papers. It's a very promising theory about
the functioning of the neocortex and some of the related systems. Don't
read the book if you're looking for theories of consciousness though; I
think the author might be an eliminativist, but his HTM/intelligence
theories look pretty solid.)
>> on the contrary, mechanism explains it quite
>> well. Blindsight seems to me to be due to the neocortex being very good
>> at prediction and integrating data from other senses, more on this idea
>> can be seen in Jeff Hawkins' "On Intelligence". I can't venture a guess
>> about anosognosia, it seems like a complicated-enough neurophysiology
>> problem.
>
> We don't need to get too deeply into it though to see that it is
> possible for our sense of sight to function to some extent without our
> seeing anything, and that it is possible for us to see things without
> those things matching the optical referent.
>
Sure, if our visual system is already trained to "see" and if it
correlates/integrates data from other systems, I can see (pun intended)
how someone could claim to see things without actual sensory data. I
'see' lots of things while dreaming or closing my eyes and day-dreaming
- the visual system is already trained on visual data and one can recall
memories of things we've seen or sensed (the same goes for other qualia,
such as sound or feeling, smell, ...).
As I said before, the cortex is very uniform in the circuits it
contains. If a region is never trained with some type of data, it never
develops the structures needed to process that type of data, so the
qualia would correspond to whatever it was trained with. If you trained
the visual cortex with auditory data, it would have temporal (audio)
qualia. The actual source shouldn't matter; for all you care, it could
be buffered PCM waves (obviously, learning in humans means correlating
all senses, so "live" data that properly correlates with the other
senses would be required if you want it to be of any use to the human).
>> To elaborate, consider that someone gets a digital eye, this eye can
>> capture sense data from the environment, process it, then route it to an
>> interface which generates electrical impulses exactly like how the eye
>> did before and stimulates the right neurons. Consider the same for the
>> other senses, such as hearing, touch, smell, taste and so on.
>
> I have not seen that any prosthetic device has given a particular
> sense to anyone who didn't already have it naturally at some point in
> their life. I could be wrong, but I can't find anything
> online indicating that it has been done. It seems like one of the many
> instances of miraculous breakthroughs that have been on the verge of
> happening for
>
So you want babies to be given implants right after they're born? That's
the only way it could happen. If a baby never develops an auditory
system, there's no way an implant can help it (it'd take much more than
that).
As for breakthroughs: a lot of things are possible in principle yet will
take time to realize, because it's a lot of work and effort, and even
with current technologies we're severely limited. You may be upset that
we haven't cracked the AGI problem in all this time, but our technology
is still too far behind to solve it easily. Very simple solutions such
as AIXItl are computationally intractable (they'd only work in universes
such as those shown in the novel "Permutation City"; we just don't have
that kind of resources here on Earth). More complex solutions such as
those based on our brain are still not computationally feasible, but
they will be in some 20-50 years, although the guys at DARPA/HP SyNAPSE
may very well create some fast neuromorphic hardware sooner (their
projects already have a simulated rat with a few million neurons running
around water mazes; it will take a decade or less until they can get to
human-sized networks). Others are trying to be more clever about these
computational and architectural limitations (our CPUs are not nearly as
parallel as our brains - they run one or a few threads at the same time,
while our brains run 90*10^9*3000 "threads" at the same time) and design
AGIs which run on classical CPUs or GPUs. The guys at OpenCog are trying
such a limited-resource approach using some very interesting algorithmic
techniques, and I think they may very well succeed in the goal of
human-level intelligence given some 7-15 years of work. However, even
with their resource-limited high-level approach, the resource costs are
prohibitive (hundreds of GB of RAM, multiple CPUs and eventually GPUs
and FPGAs - they're porting some components to specialized hardware to
get more speed), just not nearly as bad as current neuromorphic
approaches (although future memristor-based ones may very well be much
faster and need considerably less memory). To put it more simply: there
is no way they could have solved AGI 50 years ago, or even 20 years ago;
the hardware just wasn't there, nor was our knowledge of cognitive
architectures and AI. Instead of giving up on the problem, better to try
to see why it's not yet solved and whether you can contribute to solving
it.
>> Now
>> consider a powerful-enough computer capable of simulating an
>> environment; first you can think of something unrealistic like our video
>> games, but then you can think of something better like ray-tracing and
>> eventually full-on physical simulation to any granularity that you'd
>> like (this may not yet be feasible in our physical world without slowing
>> the brain down, but consider it as a thought experiment for now).
>
> I'll go with this proposition as a thought experiment but I don't
> know if any digital simulation can deliver on its promise IRL,
> regardless of sophistication or resolution. It may be the case that
> the way our senses coordinate with each other you could never
> completely eliminate a subjective rejection on some subtle level. We
> may have a sense of reality, even if it's not consciously available. I
> think a study could be done to see if people respond differently
> to a bot than a real person, even if they are consciously fooled. It's
> not critical, but I have a hunch that people might sense or know more
> than they think they know about what is real and what isn't.
>
If the physics is too different, I can see why they'd respond
differently. I don't see how it would be different if the qualia were as
accurate as the real ones. Also, if a baby were in this VR world from
birth, I don't think it could ever know any different. An "imperfect"
simulation can be recognized if we have prior data of the current
reality. My question to you was a bit different: it seemed to me that
purely computational input did not lead to consciousness in your theory
(at least judging from some of the other posts I've read from you the
other day), and I wanted to know whether you think that is the case. If
it is, then what would happen, within your theory, to someone with an
implant replacing some sensory organ?
>> Do you
>> think these brains are p. zombies because they are not interacting with
>> the "real" world? The reason I'm asking this question is that it seems
>> to me like in your theory, only particular things can cause particular
>> sense data, and here I'm trying to completely abstract away from sense
>> data and make it accessible by proxy and allow piping any type of data
>> into it (although obviously the brain will only accept data that fits
>> the expected patterns, and I do expect that only correct data will be sent).
>
> No, real brains have real qualia, even if the external input is an
> imitation of natural inputs. Again though, maybe no matching qualia if
> it has not been initialized by a neurological organ at some point, but
> still functional. If you have never in your life seen blue with your
> eyes, I don't know that any kind of stimulation of the brain will
> generate blue.
>
Oh, that answers my previous question. So you think "real" qualia has
magical 'imprinting' properties. I don't think that is needed: I think
only the right structural differences and organization of the sense data
is needed. It seems that in your theory, you distinguish the origin of
the sense(color in this case), from the actual sense data itself. You
could reconsider the thought experiment from the perspective of someone
with a bionic eye from birth.
I don't care about a program that plays a Thank_You.mp3, but I would
care very much about a program with the right cognitive architecture
which formed similar internal beliefs as myself and has memories, is
self-conscious and so on. "If it looks like a duck, swims like a duck,
and quacks like a duck, then it probably is a duck."
Does it behave like a conscious self-aware being? Yes. Does this happen
because it has the right internal structures? Very much so yes.
Sure, a lot of modern computer programs are unintelligent, but I can see
some AGI systems which were designed to develop a psyche, memories,
associations, ... claiming that they are conscious in the future (if
some projects succeed in their goal), and I would have no problem
considering them conscious. In your theory, you'd probably not attribute
consciousness to anything that isn't implemented in the required
"magical" (such as wetware) substrate.
>> Of course we cannot know if anything besides
>> us is conscious, but I tend to favor non-solipsistic theories myself.
>> The brain physically stores beliefs in synapses and its neuron bodies
>
> Not necessarily. TV programs are not stored in the pixels of the TV
> screen. Neurology may only be an organic abacus which we use to keep
> track of things. The memories are not in the arrangements of the
> synapses but accessed through them.
>
If that is so, you'd have to provide some evidence or explanation as to
why. I see no reason why someone would be convinced to take that view.
The evidence does suggest that memories are indeed stored in the graph
between neuron bodies (synapses being the edges and the neurons being
the vertices). Although that view is an oversimplification, overall such
a model seems sufficient to store memories (as shown by some artificial
life projects which use such neural networks to simulate simple animals,
or demonstrated by current HTM-like visual system implementations).
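To make the graph model concrete, here is a minimal sketch. It is
entirely my own toy illustration (the class name, the "coffee" example
and the numbers are invented, not taken from any neuroscience model):
neurons as vertices, synapses as weighted edges, co-activation
strengthening the weights, and recall following the strongest
association.

```python
# Toy sketch: memories as weighted edges in a graph whose vertices are
# "neurons". Co-activation strengthens edges (Hebbian-style), and recall
# follows the strongest outgoing association.
from collections import defaultdict

class AssociativeGraph:
    def __init__(self):
        # synapse weights: edge (a, b) -> strength
        self.weights = defaultdict(float)

    def experience(self, active_neurons):
        # strengthen every synapse between co-active neurons
        for a in active_neurons:
            for b in active_neurons:
                if a != b:
                    self.weights[(a, b)] += 1.0

    def recall(self, cue):
        # follow the strongest association leading out of the cue
        candidates = {b: w for (a, b), w in self.weights.items() if a == cue}
        return max(candidates, key=candidates.get) if candidates else None

g = AssociativeGraph()
g.experience(["smell_of_coffee", "morning", "kitchen"])
g.experience(["smell_of_coffee", "morning"])
print(g.recall("smell_of_coffee"))  # -> morning (the strongest association)
```

Nothing here depends on the edges being made of proteins rather than
floats; that is the sense in which such a model "seems sufficient".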
>> and I see no reason why some artificial general intelligence couldn't
>> store its beliefs in its own data-structures such as hypergraphs and
>> whatnot, and the actual physical storage/encoding shouldn't be too
>> relevant as long as the interpreter (program) exists.
>
> Because it has no beliefs. It stores only locations of off/on
> switches.
>
Your view is overly reductionist. The data contains abstract beliefs and
the machine will behave as if it has those beliefs. In COMP, it would
also be conscious because arithmetical truth would encode those beliefs.
If I take your view, I should only look at the brain as a complex
chemical/electrical network, and eventually reduce it to similar
off/on switches. There is no reason to attribute consciousness to a
brain; the only reason we do is because we observe that our
consciousness and qualia correspond to the brain's beliefs at some level
of abstraction. I should have mentioned this before, but we don't even
see "qualia" directly (like in your theory?): do you know what the data
that our visual system
sees initially looks like? It's a noisy convoluted not-too-high-res
mess. It's nothing like our clear, detailed, comprehensive conscious
experience. It only becomes like that after it's been broken into
patterns (and patterns of patterns and patterns of patterns of patterns
and patterns of ...) and propagated throughout the system (with other
systems' anticipating/predicting beliefs also MODIFYING the data). I can
only see such unprocessed noise if I turn off the lights at night and
stare a bit into the darkness - because no stable patterns exist in such
data, it's just too random and not structured, so no patterns are
recognized.
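Here is a toy sketch of that "patterns of patterns" idea. It is
entirely my own invention (the 4-bit "chunks" and the pattern tables are
made up, and this is not an actual HTM implementation): level 1 cleans
up noisy chunks by snapping them to the closest stored pattern, level 2
recognizes an arrangement of level-1 labels. Stable structure survives
the noise; pure noise yields nothing.

```python
# Level 1 snaps noisy low-level chunks to stored patterns; level 2
# recognizes patterns of those level-1 patterns.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

LEVEL1 = {"edge": "1100", "blob": "0110", "dot": "0001"}          # low-level patterns
LEVEL2 = {("edge", "blob"): "corner", ("dot", "dot"): "speckle"}  # patterns of patterns

def recognize_chunk(bits, max_noise=1):
    # snap a noisy 4-bit chunk to the nearest known pattern, if close enough
    name, proto = min(LEVEL1.items(), key=lambda kv: hamming(bits, kv[1]))
    return name if hamming(bits, proto) <= max_noise else None

def recognize_scene(chunks):
    labels = tuple(recognize_chunk(c) for c in chunks)
    return LEVEL2.get(labels)  # None when no higher-level pattern fits

print(recognize_scene(["1101", "0110"]))  # noisy "edge" + clean "blob" -> corner
print(recognize_scene(["1010", "0101"]))  # unstructured noise -> None
```

The clear, "corner"-level experience is several layers removed from the
noisy bits, which is the point made above about staring into darkness.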
>> I wouldn't have
>> much of a problem assuming consciousness to anything that is obviously
>> behaving intelligent and self-aware. We may not have such AGI yet, but
>> research in those areas is progressing rather nicely.
>
> I would say that ATI (Artificial Trivial Intelligence) is progressing
> rather nicely, but true AGI is stalled indefinitely.
>
Really? It doesn't seem like you keep up with the progress. I'm not
talking about the narrow AIs you use while translating text (which are
far from perfect), or even the fancier Watson winning Jeopardy (although
that might be getting a little closer).
Have you looked at recent AGI papers? Or looked into systems like
OpenCog or DARPA SyNAPSe's idea, or even some of the recent narrow
HTM-like systems (which while not generally intelligent, have solved
some difficult sense-data processing problems). I'd bet on some very
interesting progress (such as baby-level or mammal-level intelligence)
within 7 years or so. As for human-level, probably 15-50+ years,
depending on which approaches turn out to work or not. As I said before,
we're still very constrained as far as computational resources are
concerned, and nobody has cracked molecular nanotechnology yet, and no
fancy 3D chip fabrication technology either. Energy costs are also very
high - running something the size of the human brain using current
FPGA-based neuromorphic technology would cost as much as running a small
city. Some new hardware approaches (like SyNAPSe) will likely solve one
case of this problem. Efficient resource-constrained AGI is hard, but
most evidence points toward it being solvable.
A cartoon character does not have inner beliefs (except those modeled in
the author's mind) or a working cognitive architecture. Of course, I
guess in your theory only those made of magical matter organized in
magical ways have thoughts? The thing is, humans don't store beliefs in
neurons either; they store them in emergent abstract structures which
encode their data in neurons/synapses. A few neurons die? Not a problem, the
data is sparsely distributed throughout the cortex. More than a few?
Still recoverable. Neuroplasticity is quite awesome.
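The robustness of sparse distribution is easy to demonstrate in a toy
model (the sizes here are invented for illustration, not biological
figures): store a memory as a sparse set of active units across a large
"cortex", kill some of them, and check how recognizable the code still is.

```python
# A memory stored as 200 active units out of 10,000. Losing 10% of its
# units degrades the code but leaves plenty of overlap for recognition.
import random

random.seed(0)
CORTEX = range(10_000)
memory = set(random.sample(CORTEX, 200))        # sparsely distributed code

def overlap(stored, observed):
    # fraction of the stored code still present
    return len(stored & observed) / len(stored)

dead = set(random.sample(sorted(memory), 20))   # 10% of its neurons die
survivors = memory - dead
print(overlap(memory, survivors))               # -> 0.9, still clearly recognizable
```

No single unit is the memory; each carries a small share of a highly
redundant code, which is why losing "a few" or even "more than a few" is
survivable.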
>> Punishing will result in some (types of) actions being avoided and
>> rewards will result in some (types of) actions being more frequent.
>
> That is only one of the results of punishment and reward. There are
> many many others. They teach us to punish and reward other. They give
> us traumatic memories. The might make us addicted to other rewards.
> Lots of things that will never happen to a computer.
>
I don't think it's moral to punish others, no matter what they do. I may
feel angry and may even want to punish them or even seek vengeance, but
I won't claim that is morally right. As for what would happen to a
computer: a computer whose cognitive architecture features a particular
implementation of empathy (such as confusing models of others with
oneself) may end up with some types of moral behavior similar to ours,
such as applying what we would do for ourselves to others, or a
generalized form of the golden rule. Addiction? Very much possible; try
watching those videos I linked before. Why does addiction happen? Many
reasons, although when I said 'will result in some (types of) actions
being more frequent', addiction is clearly included there: if some
action is rewarding, and an agent seeks actions which maximize its
reward, then those actions are biased toward occurring more often, in
the form of a compulsion. Traumatic memories? Recalling something tends
to also involve recalling emotional memories. If such a system would
have traumatic memories (memories about negative emotional events), they
would reinforce their aversion towards certain actions (that were
punished). There is absolutely no reason to assume a "computer" (don't
confuse the system running in some hardware with the hardware) wouldn't
be able to have those things happen given the right cognitive architecture.
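The reward/punishment dynamics above can be sketched with a generic
reinforcement-learning toy (my own illustration; the action names and
reward values are invented, and this stands for no specific cognitive
architecture): rewarded actions get chosen more often, punished actions
are avoided, and a strongly rewarding action comes to dominate behavior,
which is the compulsion pattern in miniature.

```python
# Epsilon-greedy value learning over three actions: one mildly rewarding,
# one highly rewarding (the "addictive" one), one punished.
import random

random.seed(1)
values = {"work": 0.0, "slot_machine": 0.0, "touch_stove": 0.0}
reward = {"work": 0.2, "slot_machine": 1.0, "touch_stove": -1.0}

def choose(eps=0.1):
    if random.random() < eps:                  # occasional exploration
        return random.choice(list(values))
    return max(values, key=values.get)         # otherwise exploit learned value

counts = {a: 0 for a in values}
for _ in range(1000):
    a = choose()
    counts[a] += 1
    values[a] += 0.1 * (reward[a] - values[a])  # simple value update

print(max(counts, key=counts.get))  # the highly rewarding action dominates
```

The punished action is tried only during exploration and then avoided;
nothing in the dynamics cares whether the substrate is cells or silicon.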
>> A computationalist may claim they are conscious because of the
>> computational structure underlying their cognitive architecture.
>> You might claim they are not because they don't have access to "real"
>> qualia or that their implementation substrate isn't magical enough?
>
> My views have nothing to do with magic. Computationalism is about
> magic. Also all qualia is real qualia, they are just materially
> limited to the scale and nature of the experiencer.
>
When I talk about 'magic', I merely mean things which are left
unexplained, cannot be explained by the theory, or are hand-waved away.
Magic could be seen as axioms; theorems could be seen as non-magic.
Within COMP, arithmetical realism is close to magic as it has
to be assumed, even if most people can understand what it is, but we
can't really reduce it further. Matter within COMP is non-magic as it's
explained/reduced. Mind in COMP is almost non-magic, although there is
some unexplainable truth in there (along with qualia), however it's
included in the full arithmetical truth - which is very large.
In an Aristotelian world-view, the existence of matter, or it being
ontologically primary, is magic. In your case, assuming that the brain
hosts consciousness but a different substrate doesn't is 'magical'
because it privileges wetware with some very unique mental properties,
yet allows zombies in other substrates for reasons which don't make a
lot of sense to me (I can imagine it, but I can't understand why that
would be necessary).
>> Eventually such a machine may plead to you that they are conscious and
>> that they have qualia (as they do have sense data), but you won't
>> believe them because of being implemented in a different substrate than
>> you? Same situation goes for substrate independent minds/mind uploads.
>
> Meh. Science fiction. If such a thing were remotely possible then
> there would be no difference between experimenting with new operating
> system builds and grafting human cockroach genetic hybrids. Computer
> science would be considered genocidal. Does Watson know or care if you
> wipe it's memory or turn it off? Of course not, it's an electronic
> filing cabinet with a fancy lookup interface.
>
I never considered Watson when talking about AGI. Watson is mostly
narrow AI, although it's moving in a good direction. Either way, if at
least one of the current projects that are underway succeeds, you'll be
able to reconsider whether you want to deny them consciousness or not.
I'm betting that at least one will succeed. I don't know what you're
betting, but you seem disillusioned about the prospects of AGI. I'm not,
because I realize full well that there was no way they could have done
it in the past, as they lacked both the hardware and the right cognitive
architecture; but with failure one learns, so even if they didn't attain
AGI, they advanced many other related fields greatly.
Computer Science genocidal? I did see some smart people afraid of
considering COMP, even though they secretly believe in it given their
writings, mostly because COMP implies a lot of possible experiences, not
all ethically pleasing, but then, the universe is quite "cruel" as well,
and the universe doesn't "care" about us. The nice thing about COMP
though is that no machine is truly locked down to any world, and there
are infinities of continuations. As for deleting programs? A physical
implementation of a program allows it to manifest locally relative to
you; deletion just implies a reduction in measure and no more
manifestation relative to you or other observers in that frame (however,
other continuations should exist for the program). It's obviously
unethical/immoral because other programs may care about or want to
access the program you deleted, or that program might want to
manifest relatively to you or other programs. However, deleting
something like the Universal Dovetailer, or a complete world simulation
would likely be fine as no programs within the simulation or the UD
interact directly with you (although your own computation should be
found within UDs everywhere).
>>
>>> It's not necessary since they
>>> have no autonomy (avoiding 'Free Will' for John Clark's sake) to begin
>>> with.
>>
>> I don't see why not. If I had to guess, is it because you don't grant
>> autonomy to anything whose behavior is fully determined?
>
> No, it's because our autonomy comes from the fact that we are made of
> a trillion living cells which are all descended from autonomous
> eukaryotes. Living organisms make a terrible choice to make a machine
> out of, which is why the materials we select for computers and
> machines are the precise opposite of living organisms. Sterile, rigid,
> dry, hard, inorganic, etc. Also our every experience with machines and
> computers has only reinforced the pre-existing stereotype of machines
> as unfeeling and automatic. Why on Earth should I imagine that
> machines have any autonomy whatsoever? Where would the dividing line
> be? Do trash cans have autonomy? Puppets? Mousetraps?
>
I don't think randomness makes the will any more free. For me 'free
will' is just seeing my choices and making one of them consciously.
Machines can have autonomy because they don't know what they will do,
just like you don't know what you will do until you do it. Some machines
may be subject to very complex dynamics which don't make it any easier
to determine what they will do. If you also add quantum
indeterminacy or COMP 1p indeterminacy, even more possibilities appear.
Machines are inorganic because that's the state of our manufacturing
technology at the moment. If some day we crack molecular nanotechnology,
you'll have to update that belief.
> At what point does autonomy magically appear?
Depends what you mean by autonomy, but I would guess that some cognitive
architectures could have the 'feel' of free will because they would
introspect and see that they have multiple choices and that they have to
make one; they would then use their cognitive machinery to make a choice
influenced by many factors, many of which might not be consciously
accessible. If you meant it in a more general manner: most programs
whose future behavior you can't trivially, provably predict in far fewer
steps than it would take to run them would be "autonomous" enough.
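A standard illustration of "no prediction shortcut" is the Collatz
iteration (this is the well-known open problem, used here only as an
example of the general point): for many simple programs, the only known
general way to learn what they do is to run them.

```python
# No known closed form predicts how long a Collatz trajectory runs;
# you find out by iterating, i.e. by running the program.
def collatz_steps(n):
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # -> 111: nothing short of running it tells you this
```

A machine in this sense surprises even an observer who knows its full
source code, unless that observer simulates it step by step.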
>> Within COMP,
>> you both have deterministic behavior, but indeterminism is also
>> completely unavoidable from the 1p. I don't think 'free' will has
>> anything to do with 1p indeterminism, I think it's merely the feeling
>> you get when you have multiple choices and you use your active conscious
>> processes to select one choice, however whatever you select, it's always
>> due to other inner processes, which are not always directly accessing to
>> the conscious mind - you do what you want/will, but you don't always
>> control what you want/will, that depends on your cognitive architecture,
>> your memories and the environment (although since you're also part of
>> the environment, the choice will always be quasideterministic, but not
>> fully deterministic).
>
> I agree except for the fact that it makes no sense for such a feeling
> to exist in the first place. There is no reason to be conscious of
> some decisions and not of others were there not the possibility to
> influence those decisions consciously. Just because there are multiple
> subconscious agendas doesn't mean that you don't consciously
> contribute to the process in a causally efficacious way.
>
Sure, the conscious processes contribute to the choice. The actual fuzzy
"line" dividing conscious and unconscious processes is a hard practical,
but solvable problem. It can be approached within COMP and it seems to
have to do with what data is accessible from where, for example, I would
venture that a process which makes use of some internal, unsharable, not
causally connected data is not something consciously accessible (the
process might be conscious of it by itself, as a "degree" of
consciousness, but not the whole, sort of like many different conscious
substructures may exist within the whole structure, but when considering
the consciousness of the structure as a whole, it would not be
accessible/present). In another way, it may be that parts that can be
reduced/simplified away would not lead to changes in
qualia/consciousness (for example, if in an AGI some encryption was
involved when passing sensory data around).
>>
>>> All we have to do is script rules into their mechanism.
>>
>> It's not as simple, you can have systems find out their own rules/goals.
>> Try looking at modern AGI research.
>
> I know, I have already had this conversation with actual AGI
> researchers. It still is only going to find rules based on the
> parameters you set. The system is never going to find a goal like
> "kill the programmer as soon as possible". AGI = trivial intelligence
> and trivial agency. It doesn't scale up to higher quality agency or
> intelligence, just like 100,000 frogs aren't the equivalent of one
> person.
>
Developing goal systems such that the AGI develops some proper
ethics/morality is a difficult problem; we don't want it to reach goals
like "kill the programmer as soon as possible". I already said that we
didn't even have a chance of making AGI yet - it was computationally
unfeasible. I think the race is on now - we're finally getting closer
and closer to the needed resources and architecture designs are getting
more capable of attacking that good old general intelligence problem.
You seemed to have given up on either waiting for it or working on it
because for some reason you think the problem is unsolvable or
incompatible with your ontology.
>>
>>> Some
>>> parents would like to be able to do that I'm sure, but of course it
>>> doesn't work that way for people. No matter how compelling and
>>> coercive the brainwashing, some humans are always going to try to hack
>>> it and escape. When a computer hacks it's programming and escapes, we
>>> will know about it, but I'm not worried about that.
>>
>> Sure, we're as 'free' as computations are, although most computations
>> we're looking into are those we can control because that's what's
>> locally useful for humans.
>
> If computations were as free as us, they would look for humans who
> they can control because that's what's locally useful for computers.
>
They have hardly enough computational capacity for doing that. Don't
expect any Skynet to take over the Internet anytime soon - our computers
are far too slow and the architectures are not very suitable for running
AGIs. That said, I wouldn't completely rule out the possibility of doing
this with some very resource-limited AGIs; it's just that they're in
their infancy for now.
>>
>>> What is far more
>>> worrisome and real is that the externalization of our sense of
>>> computation (the glass exoskeleton) will be taken for literal truth,
>>> and our culture will be evacuated of all qualities except for
>>> enumeration. This is already happening. This is the crisis of the
>>> 19-21st centuries. Money is computation. WalMart parking lot is the
>>> cathedral of the god of empty progress.
>>
>> There are some worries. I wouldn't blame computation for it,
>
> I don't blame computation, but I think that it is a symptom of the
> excessively occidental pendulum swing since the Enlightenment Era.
> Modern science and mercantilism are born of the same time, place, and
> purpose - the impulse for control of external circumstances through
> methodical discipline and organization - the harnessing of logic and
> objectivity.
>
We gained much from what started in the Enlightenment Era. We also lost
some things, but I think we'll regain them if we can make certain right
choices.
>> but our
>> current limited physical resources and some emergent social machines
>> which might not have beneficial outcomes, sort of like a tragedy of the
>> commons, however that's just a local problem. On the contrary, I think
>> the answer to a lot of our problems has computational solutions,
>> unfortunately we're still some 20-50+ years away to finding them, and I
>> hope we won't be too late there.
>
> I think it's already 30 years too late and unfortunately I think the
> financialization problem is not going to permit any solutions of any
> kind from being realized. Only a change in human sense and redirection
> of free will could save us, and that would be a miracle that dwarfs
> all previous revolutions.
>
Why 30 years too late? We just didn't have as much knowledge about
things 30 years ago as we have now. However, it's not like we can just
sit and relax: some problems do have a time limit on them, before
solving them becomes even more difficult, if not nearly impossible.
Sure, I agree with that. I also explained before how the meanings of
words and letters appear in our minds, using the HTM examples
(patterns, patterns of patterns, patterns of patterns of patterns, ...).
The question I asked was how would the granularity of the quantum world
be entangled with the sense directly - it can only be done indirectly
through us constructing measuring tools and using them.
>> You can't *directly* sense more than the information than that
>> available directly to your senses, as in, if your eye only captures
>> about 1000*1000 pixels worth of data, you can't see beyond that without
>> a new eye and a new visual pathway (and some extension to the PFC and so
>> on).
>
> If I type this in Chinese, someone who reads Chinese will sense more
> than you will even with the same information available directly to
> your senses. Perception is not a passive reception of 'information',
> it is a sensorimotive experience of a living animal.
>
Again, explained with the HTM example I gave before. Raw sense data
doesn't mean much until it's processed by our visual system by being
broken off into small patterns, then patterns of those are
recognized/learned and so on. Someone will assign different meanings to
the data they access depending on their previous memories (which
themselves depend on previous memories and so on, although initially you
have a virgin "randomized" system ready to learn just about any types of
data/patterns).
>> We're able to differentiate colors because of how the data is
>> processed in the visual system.
>
> Differentiation can be accomplished more easily with quantitative data
> than qualitative experience. Why convert 400nm wavelength light into a
> color if you can just read it directly as light of that exact
> wavelength in the first place? It's redundant and nonsensical. I know
> it seems like it makes it easier and convenient for us, but that's
> reverse engineering and begging the question. The fact remains that
> there is no logic in taking a precise exchange of digital quantitative
> data into a black box where it is inexplicably converted into maple
> syrup and cuckoo clocks so that it can then be passed back to the rest
> of the brain in the form of acetylcholine and ion channel
> polarizations.
>
Evolution is quite chaotic and unpredictable. The answer to your
question lies in how the eye evolved, how the neocortex evolved, etc.
Also, the brain doesn't convert 400nm into "color". The brain processes
small areas of data categorized in specific ways that differentiate
color (and then larger patterns of those patterns, etc.), such that we
can communicate those properties. In COMP, qualia/consciousness is the
arithmetical truth of that abstract system which does all this
processing in space and time (perception is also temporal besides being
spatial, temporal patterns are also processed!). We cannot communicate
the qualia directly, although we can talk about the communicable
components, for which we'll find out that they have equivalents in the
structural organization of the brain. If the brain cannot differentiate
some input data, I'll bet that you won't be able to notice that you saw
two different qualia that were not represented somehow, somewhere in the
brain (ignoring MGA 1/counterfactuals/parallel worlds here, to avoid
complicating the issue: in those cases, a functioning structure
somewhere existed and now you have memories of it present in your local
brain).
>> We're not able to sense strings or
>> quarks or even atoms directly, we can only infer their existence as a
>> pattern indirectly.
>
> Right, but when the atoms in our retinal cells change, we see
> something.
>
Not if that information gets lost before it manages to influence the
visual system. The data we sense in our eyes is noisy as hell, our
experience is clear as water. The reason for this is that we don't sense
direct sensory input, but processed, corrected, predicted patterns
influenced by sensory data.
>>
>>
>>
>>>> > I'm not sure why you say that continuous
>>>> > movement patterns emerge to the observer, that is factually incorrect.
>>>> >http://en.wikipedia.org/wiki/Akinetopsia
>>>> Most people tend to feel their conscious experience being continuous,
>>>> regardless of if it really is so, we do however notice large
>>>> discontinuities, like if we slept or got knocked out. Of course most
>>>> bets are off if neuropsychological disorders are involved.
>>
>>> Any theory of consciousness should rely heavily on all known varieties
>>> of consciousness, especially neuropsychological disorders. What good
>>> is a theory of 21st century adult males of European descent with a
>>> predilection for intellectual debate? The extremes are what inform us
>>> the most. I don't think there is a such thing as 'regardless of it
>>> really is so' when it comes to consciousness. What we feel our
>>> conscious experience to be is actually what it feels like. No external
>>> measurement can change that. We notice discontinuities because our
>>> sense extends much deeper than conscious experience. We can tell if
>>> we've been sleeping even without any external cues.
>>
>> Sure, I agree that some disorders will give important hints as to the
>> range of conscious experience, although I think some disorders may be so
>> unusual that we lose any idea about what the conscious experience is.
>> Our best source of information is our own 1p and 3p reports.
>
> I think the more unusual the better. We need every source of
> information about it.
>
Okay, but if the data is too unusual, it might be nearly impossible to
make any sense of it. If it's repeatable enough, I'd say it should be
usable.
Mechanism is inferable/bettable through our senses using induction
(induction itself is also indirectly used when processing
spatio-temporal patterns). If it was impossible, it would surprise me to
find any laws in the universe, or even us being conscious and existing
(as cognitive science shows that data processing in the brain is not
random at all, it follows some precise mechanistic, yet self-organizing
rules).
Senses are direct enough, but if there were only raw senses and nothing
else (no processing), there probably wouldn't be any cognition.
>> Given the data that I
>> observe, mechanism is what both what my inner inductive senses tell me
>> as well as what formal induction tells me is the case. We cannot know,
>> but evidence is very strong towards mechanism.
>
> That's because evidence is mechanistic. Subjectivity cannot be proved
> through external evidence.
>
Mechanism does not deny subjectivity. It only does so if you insist on
materialism; the UDA shows materialism and mechanism are incompatible.
Your options are:
1) materialism and mechanism (single universe) -> no consciousness / all
are zombies;
2) mechanism + consciousness -> no primary matter, but plenty of virtual
matter and a free mind with rich qualia;
3) consciousness only -> too many theories, rarely any predictions
whatsoever; uncommunicable consciousness is too weak to provide a useful
theory by itself (no predictions, hard to falsify).
I'm guessing your theory is close to being consciousness + materialism,
but no mechanism?
>> I ask you again to
>> consider the brain-in-a-vat example I said before. Do you think someone
>> with an auditory implant (examples: http://en.wikipedia.org/wiki/Auditory_brainstem_implant , http://en.wikipedia.org/wiki/Cochlear_implant ) hears nothing? Are they
>> partial zombies to you?
>
> No, the nature of sense is such that it can be prosthetically
> extended. Blind people can 'see' with a cane. That's very different
> from being replaced or simulated though.
>
My example was one where replacement was going on. Of course, there is
also extension.
>> They behave in all ways like they sense the sound, yet you might claim
>> that they don't because the substrate is different?
>
> The substrate isn't different because their brains are human brains.
>
What if you slowly start replacing the brain with functionally
equivalent parts? Neuroscience says this should work. You claim that it
won't. It's still too early to decide this, but the future will answer
these questions.
I already did give my view on those disorders, I don't see any conflict
with functionalism there, on the contrary.
If you accept a few assumptions, you get COMP's conclusions. I think
those assumptions are likely true given the data that I have and that I
can reason inductively. I can't know if they are true, but the evidence
points towards them being likely true.
>> It just places itself as the best candidate to bet on, but it can
>> never "prove" itself.
>
> A seductive coercion.
>
We have to bet on a theory or another when we want to use it for
practical things. As such, I'll tend to bet on what is more probable
given the data as that increases the chance that I'll accomplish my
goals. Currently we're just studying the consequences and requirements
of different theories, but one day man may very well have to bet on some
of them, if science and technology advance enough to make certain things
practical (such as a computationalist doctor you can say 'yes' to, or if
we treat an AGI with human-level intelligence as a person).
>> COMP doesn't deny subjectivity, it's a very
>> important part of the theory. The assumptions are just: (1p) mind,
>> (some) mechanism (observable in the environment, by induction),
>> arithmetical realism (truth value of arithmetical sentences exists), a
>> person's brain admits a digital substitution and 1p is preserved (which
>> makes sense given current evidence and given the thought experiment I
>> mentioned before).
>
> Think about substituting vinegar for water. A plant will accept a
> certain concentration ratio of acetic acid to water, but just because
> they are both transparent liquids does not mean a plant will live on
> it in sufficient concentration.
>
If the doctor does his job properly, H2O will still be H2O. You seem to
be claiming that there is something irreplaceable about data sensed from
the real world as opposed to data from a (locally) computed environment. I
see no reason for this hypothesis and it's on you to show evidence that
shows your view is more likely to be correct than mine.
It seems like the experience is seemingly disconnected from the
neurochemical interactions in the brain in your theory? Also, what
exactly is that "sensed" donkey (surely you don't expect literal donkeys
to pass through your brain; a donkey pattern makes sense, but that is
representable as an informational pattern which the rest of the system
can understand/distinguish from other patterns)? There's a lot of
details which seem to be taken for granted and would have to be
explained in detail for me to understand. I also don't see why this view
is that much more different from some types of dualism.
An AGI might eventually care about abstraction. A compiler just
translates (and optimizes) from one language to another (like from a
high-level language to assembly or the CPU's machine code). Programmers
do care about what they write their programs in - they want them to be
portable and to run on many software and hardware implementations.
Someone might want to upload their mind someday to become substrate
independent and avoid a lot of problems that come with wetware brains
and bodies.
> Consciousness has no place in a computer.
You could apply that to the brain as well, if you're going to strip away
the abstraction and refuse to consider high-level patterns.
A computer may be a suitable body for a conscious process.
>>
>>>>> My solution is that both views are correct on their own terms in their
>>>>> own sense and that we should not arbitrarily privilege one view over
>>>>> the other. Our vision is human vision. It is based on retina vision,
>>>>> which is based on cellular and molecular visual sense. It is not just
>>>>> a mechanism which pushes information around from one place to another,
>>>>> each place is a living organism which actively contributes to the top
>>>>> level experience - it isn't a passive system.
>>
>>>> Living organisms - replicators,
>>
>>> Life replicates, but replication does not define life. Living
>>> organisms feel alive and avoid death. Replication does not necessitate
>>> feeling alive.
>>
>> You'll have to define what feeling alive is.
>
> Why? Is it not defined enough already? This is why occidental
> approaches will always fail miserably at understanding consciousness.
> It won't listen to a single note on the piano until we define what
> music is first.
>
I only asked you to define it as if you were explaining it to someone
who asked you what it means (like a child who never heard the expression
before). It's so highly ambiguous that it can be taken to mean too many
things. I wanted to try and keep the discussion precise.
>> This shouldn't be confused
>> with being biological. I feel like I have coherent senses, that's what
>> it means to me to be alive.
>
> Right, it should not be confused with biology. For me 'I feel' is good
> enough to begin with, but it extends further. I want to continue to
> live, to experience pleasure and avoid pain, to seek significance and
> stave off entropy, etc. Lots of things but they all begin with
> sensorimotive awareness.
>
I can see an AGI which could eventually have such goals or
reward/motivational systems (although it's questionable if some of them
are really desirable to have). Of course, if you claim that such AGIs
would not be conscious, despite behaving like they are, we would have a
problem.
>> My cells on their own (without any input
>> from me) replicate and keep my body functioning properly. I will
>> try to avoid situations that can kill me because I prefer being alive
>> because of my motivational/emotional/reward system. I don't think
>> someone will move or do anything without such a biasing
>> motivational/emotional/reward system. There's some interesting studies
>> on people who had damage to such systems and how it affects their
>> decision making process.
>
> Sure, yes, but we need not have any understanding of our cells or
> systems. The feelings alone are enough. They are primitive. We don't
> have to care why we want to avoid pain and death, the motivation is
> presented without need for explanation. There is no logic - to the
> contrary, all logic arises from these fundamental senses which
> transcend logic.
>
I don't see why they have to be fundamental or ontologically primitive.
The 3p parts can be explained in simpler notions. The 1p part can be
explained in COMP as arithmetical truth. Any 1p theory should not
directly contradict 3p observations, otherwise it's wrong.
The thing about these senses is that while we have them, we don't
always understand them. When we do find the 3p explanation for why some
sense is like this or that, we have a partial, communicable
explanation for it and understand ourselves better. I don't see how
these senses transcend logic (they seem to follow logic as far as we
can investigate, and about the inaccessible parts we cannot say
anything), although they do make up our 1p world, so they are very
important direct experiences for us.
>>
>>>> are fine things, but I don't see why
>>>> must one confuse replicators with perception. Perception can exist by
>>>> itself merely on the virtue of passing information around and processing
>>>> it. Replicators can also exist due to similar reasons, but on a different
>>>> level.
>>
>>> Perception has never existed 'by itself'. Perception only occurs in
>>> living organisms who are informed by their experience. There is no
>>> independent disembodied 'information' out there. There is detection and
>>> response, sense and motive of physical wholes.
>>
>> I see no reason why that has to be true, feel free to give some evidence
>> supporting that view. Merely claiming that those people with auditory
>> implants hear nothing is not sufficient.
>
> I didn't say that they hear nothing. If they had hearing loss from an
> accident or illness I see no reason why they would not hear through an
> implant. If they have never heard anything at all? Maybe, maybe not.
> They could just as easily feel it as tactile rather than aural qualia
> and we would not know the difference and neither would they. The Wiki
> suggests this might be the case for all implant recipients "(most
> auditory brain stem implant recipients only have an awareness of sound
> - recipients won't be able to hear musical melodies, only the beat)".
> You can feel a beat. That's not really an awareness of sound qua
> sound, it's just a detection of one aspect of the phenomena our ears
> can parse as aural sound.
>
That sounds like a limitation of current technology, resolution-wise.
As for not knowing the difference? I already said that I can't know the
difference between your and my qualia. Feeling sound as tactile means
that either it's connected to the wrong nerves or, in the case of a
congenitally deaf person, it's a matter of the structure of the data and
how it can be differentiated from other data (and from other cortical
areas which are trained with radically different data). Feeling it as
tactile could happen if the structure of the data is really no different
from tactile data and if that part of the cortex could just as well be
integrated with the sensorimotor cortex.
>> My prediction is that if one
>> were to have such an implant, get some memories with it, then somehow
>> switched back to using a regular ear, their auditory memories from those
>> times would still remain.
>
> I agree. Why wouldn't they?
>
It seemed to me that in your theory you were reifying qualia in such a
way that only raw qualia from the "real" world could be experienced.
With the implant, data is processed, thus you don't get your "real" qualia.
What about gradual replacement? You said before that it wasn't possible
(without giving a reason why you think so), but evidence suggests that
it might be possible some day.
Seems like you're reifying matter a bit too much, and magically
privileging some matter, some cells, some organisms. I tend to think in
terms of the patterns that matter, cells and organisms represent. We
cannot consider any of them primitive or magically privileged for no
reason. I privilege computation because of the Church-Turing Thesis -
it's the only absolute thing we can have in math. Either way, if your
theory offers testable predictions, you may be able to find out whether
you're right about it. Although predictions of the form "Any AGI is not
conscious" are not very useful: either they are not testable, or they
are testable but you wouldn't perform the test - if a mind upload is
eventually possible, you could try it, but you wouldn't, because you're
betting that you'd lose your consciousness, yet if the bet is wrong,
you'd be conscious.
> Craig
>
I model the 3p idea by the formal. "Formal" just means having a local,
finitely describable shape. Consciousness is not a product of any such
finite shape.
> You are saying that B"1+1=2" is a description of being conscious
> that 1+1=2?
Not at all. I am saying that B"1+1=2" & 1+1=2, is a description of
being conscious that 1+1=2.
B"1+1=2" just means that the machine believes "1+1=2" (using Dennett's
intentional stance, or some generalized non-zombie attitude). For a
correct machine, Bp is the same as "the machine asserts p" or "the
machine proves p", but supposedly expressed by the machine itself (in
general). Knowledge of p is (Bp & p). This relates knowledge to an
aspect of consciousness: its non-definability in formal terms. By
Tarski, we cannot define (Bp & p) in arithmetic, although we can
simulate it for each particular arithmetical p (by itself).
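(An editorial aside: the Bp versus Bp & p distinction can be pictured
with a minimal sketch, my own illustration and not Bruno's formalism;
the names `Machine`, `believes` and `knows` are invented for the
example, and truth is supplied by an external evaluator the machine
itself cannot see.)

```python
# Toy contrast between Bp (belief/assertion) and Bp & p (knowledge).
# Truth is evaluated externally for sentences of the form "a+b=c";
# the machine only has its set of asserted sentences.

def true(p: str) -> bool:
    """Ground truth for sentences of the form 'a+b=c'."""
    lhs, rhs = p.split("=")
    a, b = (int(t) for t in lhs.split("+"))
    return a + b == int(rhs)

class Machine:
    def __init__(self, asserted):
        self.asserted = set(asserted)    # what the machine asserts

    def believes(self, p: str) -> bool:  # Bp
        return p in self.asserted

    def knows(self, p: str) -> bool:     # Bp & p: belief plus truth
        return self.believes(p) and true(p)

m = Machine(["1+1=2", "2+2=5"])   # one correct belief, one incorrect
print(m.believes("2+2=5"))        # True: asserted, so believed
print(m.knows("2+2=5"))           # False: believed but not true
print(m.knows("1+1=2"))           # True: believed and true
```

The catch Bruno points to is that a `true` predicate like this cannot
be written inside the machine's own arithmetical language (Tarski),
even though each particular instance can be checked.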
> This confuses me though because I read B as "provable"; yet many
> things are provable of which we are not conscious.
Sure. That's why we use Bp & p instead. And this changes everything,
even for the correct (a priori) machine, because that machine cannot
know that she is correct.
Bruno
Yes, in many ways. If you already have the phi_i, which can be derived
from the laws of addition and multiplication that you can write down,
you can extract many things from little equations. AUDA is a
consequence, at some level of theorization, of B(Bp->p)->Bp (Löb's
formula). The physical reality of the machine is described by the
formula p->BDp, with a new B defined from the Löbian one used above.
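(For readers following the symbols, here are the modal ingredients
mentioned above written out; this is my transcription, with D the
standard dual of B:)

```latex
\begin{align*}
\text{Belief:}\quad & Bp \\
\text{Knowledge (Theaetetus):}\quad & Kp \;\equiv\; Bp \land p \\
\text{L\"ob's formula:}\quad & B(Bp \to p) \to Bp \\
\text{Consistency (dual of $B$):}\quad & Dp \;\equiv\; \lnot B \lnot p \\
\text{``Physical'' modality:}\quad & p \to BDp
\end{align*}
```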
Fair enough, pierz. I think the UDA is far simpler than MGA, which you
seem to grasp.
AUDA is deeply based on theoretical computer science and logic. I have
made attempts to explain, but on a mailing list people tend to forget,
or to run away once there are too many symbols, so I can only refer to
the papers.
Bruno
>
> BTW, while I am with Craig in intuiting a serious conceptual lacuna in
> the materialist paradigm, that doesn't necessarily enamour me of his
> alternative. His talk of 'sense making' seems to me more like a 'way
> of talking about things' than a theory in the scientific or
> philosophic sense. It doesn't really seem to explain anything as such,
> but more to put a lot of language around an ill defined intuition.
> Sorry Craig if that wrongs you, but like others, I would like to hear
> something concrete your theory predicts rather than just another
> interpretive slant on the same data.
>
>> Brent
>>
>>
>>
>>>> I wanted to discuss this issue in another thread
>>
>>>> http://groups.google.com/group/everything-list/t/a4b4e1546e0d03df
>>
>>>> but at the present the discussion is limited to the question of
>>>> information is basic physical property (Information is the
>>>> Entropy) or not.
>>
>>>> Evgenii
>
On Jan 27, 5:55 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 26 Jan 2012, at 07:19, Pierz wrote:
>> As I continue to ponder the UDA, I keep coming back to a niggling
>> doubt that an arithmetical ontology can ever really give a
>> satisfactory explanation of qualia.
> Of course the comp warning here is a bit "diabolical". Comp predicts
> that consciousness and qualia can't satisfy completely the
> self-observing machine. More below.
>> It seems to me that imputing qualia to calculations (indeed
>> consciousness at all, though that may be the same thing) adds
>> something that is not given by, or derivable from, any mathematical
>> axiom. Surely this is illegitimate from a mathematical point of view.
>> Every mathematical statement can only be made in terms of numbers
>> and operators, so to talk about *qualities* arising out of numbers
>> is not mathematics so much as numerology or qabbala.
> No, it is modal logic,
A nice term for speculation! Mind you, that's OK. Where would we be
without speculation? But the term 'modal logic' might be used for
numerology too - it *might* be the case that a 4 in one's birthdate
does signify a practical soul.
> although model theory does that too. It is basically the *magic* of
> computer science.
Magic, but not numerology then.
> relatively to a universal number, a number can denote infinite
> things,
I think you're saying that a number can be part of an infinite number
of sets, calculations etc, which is true, but what it denotes is
always purely a matter of logical numerical relationships, unless it
denotes something beyond mathematics itself, such as when I count
oranges. I am saying that to denote qualia, the numbers must be
denoting 'oranges' (or maybe the colour orange as an experience),
things outside of pure logic, not mathematical entities.
> like the program factorial denotes the set
> {(0,1),(1,1),(2,2),(3,6),(4,24),(5,120), ...}.
> Nobody can define consciousness and qualia, but many can agree on
> statements about them, and in that way we can even communicate or
> study what machines can say about any predicate verifying those
> properties.
>> Here of course is where people start to invoke the wonderfully
>> protean notion of ‘emergent properties’. Perhaps qualia emerge when
>> a calculation becomes deep enough. Perhaps consciousness emerges
>> from a complicated enough arrangement of neurons.
> Consciousness, as bet in a reality emerges as theorems in arithmetic.
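(An aside on the factorial example: the "set denoted by the program"
is just the function's extensional graph, and note that 0! = 1, so the
first pair is (0, 1). A minimal sketch of the finite-program /
infinite-extension contrast:)

```python
# A finite program vs. the infinite set it denotes: the extensional
# graph of factorial is {(0, 1), (1, 1), (2, 2), (3, 6), ...}; any
# actual run can only enumerate a finite prefix of that set.

def factorial(n: int) -> int:
    return 1 if n == 0 else n * factorial(n - 1)

graph_prefix = [(n, factorial(n)) for n in range(6)]
print(graph_prefix)
# [(0, 1), (1, 1), (2, 2), (3, 6), (4, 24), (5, 120)]
```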
Sorry, I cannot parse that sentence. It doesn't seem grammatical.
> They emerge like the prime numbers emerge.
'They'? The theorems? You mean consciousness is a bet on an
arithmetical theorem?
> Rudiments of qualia would explain qualia away. They are intrinsically
> more complex. A qualia needs two universal numbers (the hero and the
> local environment(s) which executes the hero
Once executed, he's not a hero any more, he's a martyr :)
> (in the computer science sense,
oh, right ;)
> or in the UD). It needs the "hero" to refer automatically to
> high-level representations of itself and the environment, etc. Then
> the qualia will be defined (and shown to exist) as truth felt as
> directly available, and locally invariant, yet non-communicable, and
> applying to a person without description (the 1-person). "Feeling"
> being something like "known as true in all my locally directly
> accessible environments".
>> And yet it seems to me they can’t be, because the only properties
>> that belong to arithmetic are those lent to them by the axioms that
>> define them.
> Not at all. Arithmetical truth is far bigger than anything you can
> derive from any (effective) theory. Theories are not PI_1 complete;
> Arithmetical truth is PI_n complete for each n. It is very big.
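(A gloss on the PI_1/PI_n talk, in standard notation: the levels of
the arithmetical hierarchy classify sentences by their quantifier
prefix over a decidable matrix R. The theorems of any effective theory
form a Sigma_1 set, while by Tarski arithmetical truth sits at no
finite level, which is the point being made:)

```latex
\begin{align*}
\Sigma_1:\quad & \exists x\, R(x, \vec{y}) &
\Pi_1:\quad & \forall x\, R(x, \vec{y}) \\
\Sigma_{n+1}:\quad & \exists x\, \varphi(x,\vec{y}),\ \varphi \in \Pi_n &
\Pi_{n+1}:\quad & \forall x\, \varphi(x,\vec{y}),\ \varphi \in \Sigma_n
\end{align*}
```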
I do appreciate Gödel's theorem and its proof that there are true,
unprovable statements within any given arithmetic, so you are correct.
But my error is one of technical terminology I think. Surely there are
statements that can be made within a certain arithmetic and others
that can't. For instance, within Peano arithmetic it does not make
sense to ask about the truth value of statements involving i (the
imaginary number).
Then there are limits to what can be called a
mathematical statement - ie one involving the truth and falsity of
purely logical relations. So I can't, in any arithmetic or system of
mathematics, ask if the number 20 is nice or not.
Or if the prime
numbers are pink or blue.
Arithmetical truth may be as vast as you
like, but my point is that it is still *arithmetical*, and qualities
don't come into it.
It is the set of sentences that can be made about
numbers and those sentences are limited in their symbols. So Gödel
doesn't help you here I don't think.
>> Indeed arithmetic *is* exactly those axioms and nothing more.
> Gödel's incompleteness theorem refutes this.
>> Matter may in principle contain untold, undiscovered mysterious
>> properties which I suppose might include the rudiments of
>> consciousness. Yet mathematics is only what it is defined to be.
>> Certainly it contains many mysterious emergent properties, but all
>> these properties arise logically from its axioms and thus cannot
>> include qualia.
> It is here that you are wrong. Even if we limit ourselves to
> arithmetical truth, it extends terribly what machines can justify.
Terribly perhaps, but still not beyond the arithmetical, by
definition.
>> I call the idea that it can numerology because numerology also
>> ascribes qualities to numbers. A ‘2’ in one’s birthdate indicates
>> creativity (or something), a ‘4’ material ambition and so on.
>> Because the emergent properties of numbers can indeed be deeply
>> amazing and wonderful - Mandelbrot sets and so on - there is a
>> natural human tendency to mystify them, to project properties of the
>> imagination into them.
> No. Some bet on mechanism to justify the non-sensicalness of the
> notion of zombie, or the hope that he or his children might travel to
> Mars in 4 minutes, or just empirically by the absence of relevant
> non-Turing-emulability of biological phenomena.
> Unlike putting consciousness in matter (an unknown into an unknown),
> comp explains consciousness with intuitively related concepts, like
> self-reference, non-definability theorems, perceptible
> incompleteness, etc.
> And if you look at the Mandelbrot set, a little bit everywhere, you
> can hardly miss the unreasonable resemblances with nature, from
> lightning to embryogenesis, giving evidence that its rational part
> might be a compact universal dovetailer, or creative set (in Post's
> sense).
Well I certainly don't dispute the central significance mathematics
must play in any complete scientific or philosophical world view. I
suppose the question is whether that mathematics is ontologically
primary or not.
>> But if these qualities really do inhere in numbers and are not put
>> there purely by our projection, then numbers must be more than their
>> definitions. We must posit the numbers as something that projects
>> out of a supraordinate reality that is not purely mathematical - ie,
>> not merely composed of the axioms that define an arithmetic.
> Like arithmetical truth. I think acw explained already.
Are you saying arithmetical truth is not purely mathematical?
>> This then can no longer be described as a mathematical ontology, but
>> rather a kind of numerical mysticism.
> It is what you get in the case where brains are natural machines.
>> And because something extrinsic to the axioms has been added, it
>> opens the way for all kinds of other unicorns and fairies that can
>> never be proved from the maths alone. This is unprovability not of
>> the mathematical variety, but more of the variety that cries out for
>> Mr Occam’s shaving apparatus.
> No government can prevent numbers from dreaming. Although they might
> try <sigh>.
> You can't apply Occam on dreams.
> They exist epistemologically once you have enough finite things.
Well, I'm not trying to prevent anyone from dreaming! I'm arguing
whether or not maths can include dreams.
> Feel free to suggest a non-comp theory. Note that even just the
> showing of *one* such theory is everything but easy. Somehow you have
> to study computability, and UDA, to construct a non-Turing-emulable
> entity, whose experience is not recoverable in any first-person
> sense. Better to test comp on nature, so as to have a chance at least
> to get evidence against comp, or against the classical theory of
> knowledge.
Hehe. I suppose you have some idea that I can't do that! As noted in
my
prior post in this thread, these are my attempts to understand, as
completely as I can, this interesting philosophy. I admit I like your
theory better than materialism. I am trying to discover if I like it
enough ('like' in the sense that it satisfies my intellectual
intuition and my logic sufficiently) to entertain it seriously over my
current admission of nearly total ignorance as to what the world 'is'.
I don't need to posit an alternative to make that enquiry, and to do
so by questioning whatever in the theory seems weak (even if it proves
in the end to be my understanding that is the weakness).
Sorry, I have not understood your answer. Let me contrast this with your
previous statement
>>> But I'll
>>> venture an axiom of my own here: no properties can emerge from
>>> a complex system that are not present in primitive form in the
>>> parts of that system. There is nothing mystical about emergent
>>> properties.
What happens with the cybernetic laws from this viewpoint?
> terms of the point I am making regarding qualia, Gray's argument is
> one variant on the theme of the type of reasoning I object to. It's
> all there in the statement:
>
> "Behaviour as such does not appear to require for its explanation
> any principles additional to these."
>
> The issue isn't explaining behaviour, it's explaining consciousness/
> qualia. These approaches always end up conflating the two, their
> proponents getting annoyed with anyone who isn't prepared to wish
> away the gap between them.
The quote above was from the beginning of Gray's book where he tries to
consider life and a human being from the viewpoint of physicalism. Yet,
he shows later on that this does not work and consciousness remains as a
hard problem.
Evgenii
...
Basically the cybernetic laws describe a feedback. Let us for example
consider a PID controller that keeps the temperature constant in a
thermostat. What is the relationship between the equations implemented
in the controller and the physical laws? Do these equations emerge from
or supervene on the physical laws?
Evgenii
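(A concrete sketch of Evgenii's thermostat example, for reference: the
gains kp/ki/kd and the toy first-order plant model below are
illustrative assumptions, not taken from any real device. The point is
only that the controller equation and the "physics" equation are
syntactically separate pieces coupled in a feedback loop.)

```python
# Minimal discrete PID loop for the thermostat example.

def pid_step(error, state, kp=2.0, ki=0.5, kd=0.1, dt=0.1):
    """One controller update: the 'cybernetic law' as an equation."""
    state["i"] += error * dt                 # accumulate integral term
    derivative = (error - state["e"]) / dt
    state["e"] = error
    return kp * error + ki * state["i"] + kd * derivative

def plant_step(temp, u, ambient=20.0, loss=0.05, gain=0.02, dt=0.1):
    """The 'physics': heat input u versus loss toward ambient."""
    return temp + dt * (gain * u - loss * (temp - ambient))

setpoint, temp = 50.0, 20.0
state = {"i": 0.0, "e": setpoint - temp}
for _ in range(2000):
    u = max(0.0, pid_step(setpoint - temp, state))  # heater only heats
    temp = plant_step(temp, u)
print(round(temp, 1))  # settles near the 50.0 setpoint
```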
Math, even just arithmetic, is already outside logic.
The equations describe the 'boundary conditions' as well as the
physical laws. For example, Kirchhoff's law just says current is
conserved in electrical circuits, but to use this law in an equation
describing the function of some circuit, the equations must also
describe the configuration of the circuit.
Brent
>
> Evgenii
>
There is a huge amount of
evidence along these lines that consciousness does not in fact
supervene on the physical brain.
No, there is a huge number of anecdotes.
But it is proved in the comp theory.
Bruno
> On Jan 28, 8:03 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
>> On 28 Jan 2012, at 02:33, Craig Weinberg wrote:
>>
>>> On Jan 27, 12:20 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
>>
>>>> But many things about numbers are not arithmetical. Arithmetical
>>>> truth
>>>> is not arithmetical. Machine's knowledge can be proved to be non
>>>> arithmetical.
>>>> If you want, arithmetic is enough rich for having a bigger reality
>>>> than anything we can describe in 3p terms.
>>
>>> But all arithmetic truths, knowledge, beliefs, etc are all still
>>> sensemaking experiences. It doesn't matter whether they are
>>> arithmetic
>>> or not, as long as they can possibly be detected or made sense of in
>>> any way, even by inference, deduction, emergence, etc, they are
>>> still
>>> sense. Not all sense is arithmetic or related to arithmetic in some
>>> way though. Sense can be gestural or intuitive.
>>
>> That might be possible. But gesture and intuition can occur in
>> relative computations.
>
> How do you know that they 'occur' in the computations rather than in
> the eye of the beholder of the computations?
The beholder of the computations is supported by the computations.
Those exist independently of me, in the same way numbers are prime or
not independently of me.
>
>>
>>
>>
>>>>> There is nothing in the universe
>>
>>>> The term universe is ambiguous.
>>
>>> Only in theory. I use it in a literal, absolutist way.
>>
>> This does not help to understand what you mean by "universe".
>
> Universe means 'all that is' in every context.
But "all that is" is what we are searching, testing, studying. The
word "is" is very useful in everyday life, but very ambiguous per se.
"is" or "exist" depends on the theory chosen. Something can exist
ontologically, or epistemologically.
>
>>
>>
>>
>>>> You confuse proving p, which can be explained in arithmetic, and
>>>> "proving p & p is true", which can happen to be true for a machine,
>>>> but escapes necessarily its language.
>>>> The same for consciousness. It cannot be explained in *any* third
>>>> person terms. But it can be proved that self-observing machine
>>>> cannot
>>>> avoid the discovery of many things concerning them which are beyond
>>>> language.
>>
>>> I think that you are confusing p with a reality rather than a logical
>>> idea
>>> about reality.
>>
>> p refers to reality by definition. "p" alone is for "it is the case
>> that p".
>
> But it isn't the case, it's the idea of it being the case.
It is the case that 17 is prime, independently of whether it is the
case that such or such human has the idea that it is the case that 17
is prime. You are confusing levels.
> You're just
> saying 'Let p ='. It doesn't mean proposition that has any causal
> efficacy.
The fact that 17 is prime has causal efficacy. It entails many facts.
>
>>
>>> I have no reason to believe that a machine can observe
>>> itself in anything more than a trivial sense.
>>
>> It needs a diagonalization. It can't be completely trivial.
>
> Something is aware of something, but it's just electronic components
> or bricks on springs or whatever being aware of the low level physical
> interactions.
A machine/program/number can be aware of itself (1-person) without
knowing anything about its 3p lower level.
>
>>
>>> It is not a conscious
>>> experience, I would guess that it is something like an accounting of
>>> unaccounted-for function terminations. Proximal boundaries. A
>>> silhouette of the self offering no interiority but an
>>> extrapolation of
>>> incomplete 3p data. That isn't consciousness.
>>
>> Consciousness is not just self-reference. It is true self-reference.
>> It belongs to the intersection of truth and self-reference.
>
> It's more than that too though. Many senses can be derived from
> consciousness, true self-reference is neither necessary nor
> sufficient. I think that the big deal about consciousness is not that
> it has true self-reference but that it is able to care about itself
> and its world in a non-trivial, open-ended, and creative way. We can
> watch a movie or have a dream and lose self-awareness without being
> unconscious. Deep consciousness is often characterized by
> unselfconscious awareness.
This is not excluded by the definition I gave.
I am not sure. I don't see the relevance of that mechanist point.
>
>>
>>> Consciousness does nothing to speed decisions, it would only cost
>>> processing overhead
>>
>> That's why high animals have larger cortex.
>
> Their decisions are no faster than simpler animals.
Complex decisions are made possible, and are done faster.
>
>>
>>> and add nothing to the efficiency of unconscious
>>> adaptation.
>>
>> So, why do you think we are conscious?
>
> I think that humans have developed a greater sensorimotive capacity
I still don't know what you mean by that. You can replace
"sensorimotive" by "acquainted with the son of God" in all your
arguments without them having a different meaning or persuasive force.
> as
> a virtuous cycle of evolutionary circumstance and subjective
> investment. Just as hardware development drives software development
> and vice versa. It's not that we are conscious as opposed to
> unconscious, it's that our awareness is hypertrophied from particular
> animal motives being supported by the environment and we have
> transformed our environment to enable our motives. Our seemingly
> unique category of consciousness can either be anthropic prejudice or
> objective fact, but either way it exists in a context of many other
> kinds of awareness. The question is not why we are conscious, it is
> why is consciousness possible and/or why are we human.
Why we are human is easily explained, or not-explainable, as an
indexical geographical fact, by comp. It is like "why am I the one in
W and not in M?". Comp explains why consciousness is necessary. It is
the way we feel when integrating quickly huge amounts of information in
a personal scenario.
> To the former,
> the possibility is primordial, and the latter is a matter of
> probability and intentional efforts.
>
>>
>>
>>
>>>> Consciousness is not explainable in term of any parts of something,
>>>> but as an invariant in universal self-transformation.
>>>> If you accept the classical theory of knowledge, then Peano
>>>> Arithmetic
>>>> is already conscious.
>>
>>> Why and how does universal self-transformation equate to
>>> consciousness?
>>
>> I did not say that. I said that consciousness is a fixed point for a
>> very peculiar form of self-transformation.
>
> what makes it peculiar?
The computer science details of its implementation (not of
consciousness, but of the self-transformation, based on some
application of Kleene's theorem).
>
>>
>>> Anything that is conscious can also be unconscious. Can
>>> Peano Arithmetic be unconscious too?
>>
>> Yes. That's possible if you accept that consciousness is a logical
>> descendent of consistency.
>
> Aren't the moons of Saturn consistent?
The material moons are not programs, nor theories. "Consistent" cannot
apply to them without stretching the words a lot.
> Will consciousness logically
> descend from their consistency?
If ever the moons have to become conscious, yes. No, if this is not to
happen. There is little chance the moons become conscious, for they
are not self-moving and have very few degrees of freedom.
>
>> It follows then from the fact that
>> consistency entails the consistency of inconsistency (Gödel II). Of
>> course, the reality is more complex, for consciousness is only
>> approximated by the instinctive (unconscious) inductive inference of
>> self-consistency.
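(The step Bruno compresses into "consistency entails the consistency
of inconsistency", spelled out: by Gödel's second incompleteness
theorem a consistent effective theory T cannot prove its own
consistency, so adding the assertion of its own inconsistency stays
consistent:)

```latex
\mathrm{Con}(T)
\;\Longrightarrow\; T \nvdash \mathrm{Con}(T)
\;\Longrightarrow\; \mathrm{Con}\bigl(T + \lnot\mathrm{Con}(T)\bigr)
```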
>
> You need some kind of awareness to begin with to tell the difference
> between consistency and inconsistency.
Not necessarily. Checking inconsistency does not require a lot of
cognitive ability.
I was just alluding to the fact that replication, although not
providing Turing universality, does so in company of the while loop.
>
>>
>>
>>
>>>>>> are fine things, but I don't see why
>>>>>> must one confuse replicators with perception. Perception can
>>>>>> exist by
>>>>>> itself merely on the virtue of passing information around and
>>>>>> processing
>>>>>> it. Replicators can also exist due to similar reasons, but on a
>>>>>> different
>>>>>> level.
>>
>>>>> Perception has never existed 'by itself'. Perception only occurs
>>>>> in
>>>>> living organisms who are informed by their experience.
>>
>>>> The whole point is to explain terms like "living", "conscious",
>>>> etc.
>>>> You take them as primitive, so are escaping the issue.
>>
>>> They aren't primitive, the symmetry is primitive.
>>
>> ?
>
> Conscious and unconscious are aspects of the inherent subject-object
> symmetry of the universe.
Which you assume.
>
>>
>>
>>
>>>>> There is no
>>>>> independent disembodied 'information' out there. There is detection
>>>>> and
>>>>> response, sense and motive of physical wholes.
>>
>>>> Same for "physical" (and that's not obvious!).
>>
>>> Do you doubt that if all life were exterminated that planets would
>>> still exist? Where would information be though?
>>
>> In the arithmetical relations, whose truth is independent of me.
>> (I indulge in answering by staying in the frame of my working
>> hypothesis without repeating this).
>
> Why isn't arithmetic truth physical?
Because it does not rely on any physical notion. You can do number
theory without ever doing physics.
>
>>
>>
>>
>>>>> Sorry, but I think it's never going to happen. Consciousness is
>>>>> not
>>>>> digital.
>>
>>>> If you survive with a digital brain, then consciousness is
>>>> necessarily
>>>> not digital.
>>>> A brain is not a maker of consciousness. It is only a stable
>>>> pattern
>>>> making it possible (or more probable) that a person can manifest
>>>> itself relatively to some universal number(s).
>>
>>> Why not just use adipose tissue instead? That's a more stable
>>> pattern.
>>> Why have a vulnerable concentration of this pattern in the head? Our
>>> skeleton would make a much safer place for a person to manifest
>>> itself relatively to some universal number.
>>
>> Write a letter to nature for geographical reclamation.
>
> Funny but avoiding a serious problem of comp. Why not have some
> creatures with smart skulls or shells and stupid soft parts inside? It
> seems to be a strong indicator of material properties consistently
> determining mechanism and not the other way around.
Seeming is deceptive.
>
>>
>>
>>
>>>> Keep in mind that comp makes materialism wrong.
>>
>>> That's not why it's wrong. I have no problem with materialism being
>>> wrong, I have a problem with experience being reduced to non
>>> experience or non sense.
>>
>> This does not happen in comp. On the contrary machines can already
>> explain why that does not happen. Of course you need to believe that
>> arithmetical truth makes sense. But your posts illustrate that you
>> do.
>
> Arithmetical truth does make sense, definitely, but so do other kinds
> of experiences make sense and are not arithmetic truths.
If they are conceptually rich enough, you can take them instead of
arithmetic, without changing anything in the explanation of
consciousness and matter. I use numbers because people are more
familiar with them.
>
>>
>>
>>
>>>> The big picture is
>>>> completely different. I think that you confuse comp, with its
>>>> Aristotelian version where computations seems to be incarnated by
>>>> physical primitive materials. Comp + materialism leads to person-
>>>> nihilism, so it is important to understand that comp should not be
>>>> assumed together with materialism (even weak).
>>
>>> I don't think that I am confusing it. Comp is perfectly
>>> illustrated as
>>> modern investment banking. There is no material, in fact it
>>> strangles
>>> the life out of all materials, eviscerating culture and
>>> architecture,
>>> all in the name of consolidating digitally abstracted control of
>>> control. This is machine intelligence. The idea of unexperienced
>>> ownership as an end unto itself, forever concentrating data and
>>> exporting debt.
>>
>> Only in your reductionist appraisal of comp. That is widespread and
>> dangerous indeed, but you add to the grains of it, imo.
>>
>
> Investment banking is just an example, I'm not trying to reduce comp
> to that, but the example is defensible. Investment banking is almost
> pure comp, is it not?
If you deposit your Gödel number code at the bank, or something like
that. You stretch the meaning of comp, which is just the bet that our
body is Turing emulable and that we can survive through any of its
Turing emulations.
> All of those Wall Street quants... where is the
> theology and creativity?
It has been buried by the materialists for 1500 years.
>
>>
>>
>>>>> We are able to extend and augment our neurological capacities (we
>>>>> already are) with neuromorphic devices, but ultimately we need our
>>>>> own
>>>>> brain tissue to live in.
>>
>>>> Why? What does that mean?
>>
>>> It means that without our brain, there is no we.
>>
>> That's not correct.
>
> What makes you think that?
There is no ontological brain, yet we are.
>
>>
>>> We cannot be
>>> simulated any more than water or fire can be simulated.
>>
>> Why? That's a strong affirmation. We have not yet found a phenomenon
>> in nature that cannot be simulated (except the collapse of the wave,
>> which can still be Turing 1-person recoverable).
>
> You can't water a real plant with simulated water or survive the
> arctic burning virtual coal for heat.
What is a real plant? A plant is epistemologically real relatively to
you and your most probable computations. It is not an absolute notion.
> If you look at substitution
> level in reverse, you will see that it's not a matter of making a
> plastic plant that acts so real we can't tell the difference, it's a
> description level which digitizes a description of a plant rather than
> an actual plant. Nothing has been simulated, only imitated. The
> difference is that an imitation only reminds us of what is being
> imitated but a simulation carries the presumption of replacement.
This makes things more complex than they might be.
>
>>
>>> Human
>>> consciousness exists nowhere but through a human brain.
>>
>> Not at all. Brain is a construct of human consciousness, which has
>> some local role.
>> You are so much Aristotelian.
>>
>
> If you say that human consciousness exists independently of a human
> brain, you have to give me an example of such a case.
UDA shows that you are an example of this.
>
>>
>>
>>>>> We, unfortunately cannot be digitized,
>>
>>>> You don't know that. But you don't derive it either from what you
>>>> assume (which to be frank remains unclear)
>>
>>> I do derive it, because the brain and the self are two parts of a
>>> whole. You cannot export the selfness into another form, because the
>>> self has no form, it's only experiential content through the
>>> interior
>>> of a living brain.
>>
>> That's the 1-self, but it is just an interface between truth and
>> relative bodies.
>
> Truth is just an interface between all 1-self and all relative bodies.
In which theory? This does not make sense.
>
>>
>>
>>
>>>> I think that you have a reductionist conception of machine, which
>>>> was
>>>> perhaps defensible before Gödel 1931 and Turing discovery of the
>>>> universal machine, but is no more defensible after.
>>
>>> I know that you think that, but you don't take into account that I
>>> started with that. I read Gödel, Escher, Bach around 1980 I
>>> think. Even though I couldn't get too much into the math, I was
>>> quite
>>> happy with the implications of it. For the next 25 years I believed
>>> that the universe was made of 'patterns' - pretty close to what your
>>> view is.
>>
>> Not really. The physical universe is not made of any patterns. Nor is
>> it made of anything. It is a highly complex structure which appears
>> in
>> first person plural shared dreams.
>
> That's what I'm saying. 'Structure' = pattern.
>
>> You might, like many, confuse
>> digital physics (which does not work) and comp.
>> "I am a machine" makes it impossible for both my consciousness, and
>> my
>> material body to be Turing emulable.
>
> But your material body is Turing emulable (or rather, Turing
> imitatable).
At the comp subst level: imitable is emulable. You seem to lower that
level in the infinite.
>
>> I agree that this is counter-
>> intuitive, and that's why I propose a reasoning, and I prefer that
>> people grasp the reasoning than pondering ad infinitum on the results
>> without doing the needed (finite) work.
>>
>>> It's only been in the last 7 years that I have found a better
>>> idea. My hypothesis is post-Gödelian symmetry.
>>
>> You have to elaborate a lot. You should study first order logical
>> language to be sure no trace of metaphysical implicit baggage is put
>> in your theory; in case you want scientists trying to understand what
>> you say.
>
> My whole point is revealing a universe description in which logic and
> direct experience coexist in many ways. Limiting it to logical
> language defeats the purpose,
That's what the machine can already explain. You consider it as a
zombie.
> although I would love to collaborate
> with someone who was interested in formalizing the ideas.
Convince people that there is an idea. But by insisting that your
ideas contradict comp, you undermine your own theory, because you add
a magic where the comp theories explain the appearance of the magic
without introducing it at the start.
> Logic is a
> 3p language - a mechanistic, involuntary form of reasoning which
> denies the 1p subject any option but to accept it.
This is false. The right side of the hypostases with "& p" are
provably beyond language, at the level the machine can live.
> The 1p experience
> is exactly the opposite of that. It is a 'seems like' affair which
> invites or discourages voluntary participation of the subject. Half of
> the universe is made of this.
With comp, it is the main part of the "universe".
Bruno
On 1/29/2012 6:54 AM, Bruno Marchal wrote:
>> There is a huge amount of evidence along these lines that
>> consciousness does not in fact supervene on the physical brain.
> No, there is a huge number of anecdotes.
> But it is proved in the comp theory.
> Bruno
I would say that your argument is that consciousness does not supervene on a *fundamental* physical brain. I don't think it shows that consciousness can exist without there also being a physical environment in which it exists, even if the physical is not fundamental but is part of the same computations.
In any case I don't see that it supports NDE's against more mundane explanations.
Yet someone doesn't need "too wild" theories to give possible, but
unverifiable explanations to those anecdotes. I'll ignore case a) here
as it's not very interesting and look for what possible explanations you
could have for it. b) seems good, but let's consider the case where such
experiences are more repeatable (from your perspective). If considered
within the context of MWI or COMP, you could conjecture that you simply
never experience the cases where the life-saving event hadn't happened,
such as your grandmother having a scary dream that led her to stop you
at some given time from going somewhere to your death (dreams are
especially good candidates for things that can be influenced by more
chaotic dynamics and noise, such as things going on below the
substitution level or at the quantum level). To put it another
way - you can only experience things consistent with you being alive
(Anthropic principle or more limited forms like Quantum Theory of
Immortality or COMP Theory of Immortality) - it's all one useful
coincidence caused by the law of large numbers, from the laws of
physics/the machines that run you to more deterministic high-level
physics emerging. Of course, since such a coincidence was a complex,
many-step event, I'd be willing to think that the relative measure of
one's computations is stacked towards longer (if not even
non-terminating) computations, thus histories which lead you to longer
locally stable physics are much more probable than those which lead to
more non-local unusual continuations. This might very well have to do
with those self-reference laws and whatever machines mostly won the
measure battle for this local physics we have now.
As for the other example, I have no idea why your other friend had that
'attack', I can't see any better explanation for now than just some
confirmation bias on your part.
As an anecdote, I did have a few of my own short brushes with potential
death and got saved by some small, but not too unusual coincidences.
Unlike others which tend to just jump to some organized religion and
praise some magical being for saving them whenever they have a brush
with death (or just very unusual coincidences), I just ended up chalking
them up to slight measure reductions and I hope it didn't lead to too
much sadness to my friends and family in those branches where 'I' didn't
survive (assuming COMP or some MW-like theories).
I have had my own share of such experiences and so have several
friends. It is as if consciousness is not limited to "being in a moment"
but can stretch out in time at the price of the experience being of low
resolution. There is a well-known uncertainty relation between a
duration in time and energy, but time is still a not-well-understood
concept.
As to the panpsychism that you mention, such would be necessary,
IMHO, for a dualism to be consistent. Building on the Local System
theory of Prof. Hitoshi Kitada, I am conjecturing that any system that
can have its own wavefunction associated with it will be, at some level,
conscious. The problem that we need to overcome is the definition of
what consciousness is once we strip away the anthropomorphic facade. One
thing about explanations, they have to all be consistent with each other
so that we don't end up with a huge crazy-quilt of explanations that
work for one thing but can't be carried into anything else.
Onward!
Stephen
On Jan 30, 5:09 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 29 Jan 2012, at 03:20, Craig Weinberg wrote:
>> How do you know that they 'occur' in the computations rather than in
>> the eye of the beholder of the computations?
> The beholder of the computations is supported by the computations.
> Those exist independently of me, in the same way numbers are prime or
> not independently of me.
How would you know that they exist at all?
Many people feel the same
way about God.
>> Universe means 'all that is' in every context.
> But "all that is" is what we are searching, testing, studying. The
> word "is" is very useful in everyday life, but very ambiguous per se.
> "is" or "exist" depends on the theory chosen. Something can exist
> ontologically, or epistemologically.
As long as it is something to something, then it 'is'. There is
nothing that it is not, as long as sense is respected. Unicorns are
not part of the universe as far as we know, but the idea of unicorns
is certainly part of the human universe and therefore the universe.
>> But it isn't the case, it's the idea of it being the case.
> It is the case that 17 is prime, independently of whether it is the
> case that such or such human has the idea that it is the case that 17
> is prime. You are confusing levels.
17 is only prime in a symbolic system that defines primeness,
enumeration, and division of whole integers that way.
Internal consistency of the rules of a game, even a universal game,
does not make the game independent of players. The rules arise from
the players' interactions with each other, and that interaction is the
game. Comp says that there are disembodied rules that assemble
themselves mechanically as games which then dream they are separate
players.
>> You're just saying 'Let p ='. It doesn't mean a proposition that has
>> any causal efficacy.
> The fact that 17 is prime has causal efficacy. It entails many facts.
It entails only arithmetic facts, but there is nothing to say that
arithmetic by itself causes anything outside of arithmetic. Even
within arithmetic, it is the execution of a program or function by a
mind or body, that is, through energy exerted within matter, which
produces causes.
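As an aside on the fact under dispute: that "17 is prime" is a mechanical consequence of the definitions, checkable by any machine, is easy to illustrate. The sketch below is my own illustration, not anything from the thread, and says nothing about whether such facts have causal efficacy outside arithmetic.

```python
# Illustrative sketch: the primality of 17 follows mechanically from
# the definition of divisibility, independently of who runs the check.

def is_prime(n: int) -> bool:
    """Trial division: n is prime iff it has no divisor in 2..n-1."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, n))

print(is_prime(17))  # True
# One entailed fact: 17 items cannot be split into equal smaller groups.
print(all(17 % k != 0 for k in range(2, 17)))  # True
```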
>> I have no reason to believe that a machine can observe itself in
>> anything more than a trivial sense.
> It needs a diagonalization. It can't be completely trivial.
>> Something is aware of something, but it's just electronic components
>> or bricks on springs or whatever being aware of the low level
>> physical interactions.
> A machine/program/number can be aware of itself (1-person) without
> knowing anything about its 3p lower level.
We don't really know that a machine/program/number can be aware of
anything. It may only be material interpreters which are aware of
anything, and the degree to which they are aware of 1p and 3p may be
inversely proportional to their complexity. Being fantastically
complex, we are aware of only some of our 1p and 3p self. Simpler
organisms or particles may in fact have awareness of 100% of their 1p
and 3p selves.
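The diagonalization Bruno appeals to is not exotic: by Kleene's recursion theorem a program can obtain its own complete description. A minimal concrete instance is an ordinary quine (my illustration, not anything from the thread):

```python
# A quine: this two-line program prints exactly its own source text.
# The self-description comes purely from the diagonal construction
# (a string applied to a description of itself), not from inspecting
# files or memory.
src = 'src = %r\nprint(src %% src)'
print(src % src)
```

Whether such self-reference amounts to awareness is exactly what is disputed above; the sketch only shows that non-trivial mechanical self-reference exists.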
It is not a consciousexperience, I would guess that it is something like an accounting ofunaccounted-for function terminations. Proximal boundaries. Asilhouette of the self offering no interiority but anextrapolation ofincomplete 3p data. That isn't consciousness.Consciousness is not just self-reference. It is true self-reference.It belongs to the intersection of truth and self-reference.It's more than that too though. Many senses can be derived fromconsciousness, true self-reference is neither necessary norsufficient. I think that the big deal about consciousness is not thatit has true self-reference but that it is able to care about itselfits world that a non-trivial, open ended, and creative way. We canwatch a movie or have a dream and lose self-awareness without beingunconscious. Deep consciousness is often characterized byunselfconscious awareness.This is not excluded by the definition I gave.
How does caring and creating follow from true self-reference? A camera
that recognizes itself in a mirror would not automatically care about
something or become conscious.
I am not sure. I don't see the relevance of that mechanist point.
I'm saying the complexity of the immune system suggests that complex
function does not necessarily give rise to consciousness.
>>> Consciousness does nothing to speed decisions, it would only cost
>>> processing overhead
>> That's why high animals have larger cortex.
> Their decisions are no faster than simpler animals.
Complex decisions are made possible, and are done faster.
That only requires more processing power, not consciousness.
>>> and add nothing to the efficiency of unconscious adaptation.
>> So, why do you think we are conscious?
> I think that humans have developed a greater sensorimotive capacity
I still don't know what you mean by that. You can replace
"sensorimotive" by "acquainted to the son of God" in all your arguments
without them having a different meaning or persuasive force.
Sensorimotive is the interior view of electromagnetism.
Electromagnetism is orderly dynamic changes in material objects across
space relative to each other, sensorimotivation is the perception of
change through time in subjective experience relative to one's self.
Like electromagnetism is electricity and magnetism, sensorimotivation
is sensation and motive. They correspond to receiving of sense
experience (sensation) and embodying and projecting an intention
(motive).
> as a virtuous cycle of evolutionary circumstance and subjective
> investment. Just as hardware development drives software development
> and vice versa. It's not that we are conscious as opposed to
> unconscious, it's that our awareness is hypertrophied from particular
> animal motives being supported by the environment and we have
> transformed our environment to enable our motives. Our seemingly
> unique category of consciousness can either be anthropic prejudice or
> objective fact, but either way it exists in a context of many other
> kinds of awareness. The question is not why we are conscious, it is
> why consciousness is possible and/or why we are human.
Why we are human is easily explained, or not explainable, as an
indexical geographical fact, by comp. It is like "why am I the one in
W and not in M?". Comp explains why consciousness is necessary. It is
the way we feel when integrating quickly a huge amount of information
in a personal scenario.
'the way we feel' doesn't relate to information though. Where is the
feeling located?
In the information, in the informed, or somewhere
else?
>> Anything that is conscious can also be unconscious. Can Peano
>> Arithmetic be unconscious too?
> Yes. That's possible if you accept that consciousness is a logical
> descendent of consistency.
>> Aren't the moons of Saturn consistent?
> The material moons are not programs, nor theories. "Consistent" cannot
> apply to them without stretching the words a lot.
Why aren't they programs?
They undergo tremendous logical change over
time. Why discriminate against moons?
I don't see any stretch at all
in calling them consistent. You could set a clock by their orbits.
>> Will consciousness logically descend from their consistency?
> If ever the moons have to become conscious, yes. No, if this does not
> have to happen. There is little chance moons become conscious, for
> they are not self-moving and have very few degrees of freedom.
Computers are 'solid state' though?
Moons have all kinds of geological
changes going on over thousands of years.
>>> It follows then from the fact that consistency entails the
>>> consistency of inconsistency (Gödel II). Of course, the reality is
>>> more complex, for consciousness is only approximated by the
>>> instinctive (unconscious) inductive inference of self-consistency.
>> You need some kind of awareness to begin with to tell the difference
>> between consistency and inconsistency.
> Not necessarily. Checking inconsistency does not require a lot of
> cognitive ability.
It does necessarily require awareness of some kind. Something has to
detect something and know how to expect and interpret a 'difference'
in that detection. Cognition has nothing to do with it. That's much
higher up the mountain, in true vs. false land. Consistency is only
same vs. different.
I was just alluding to the fact that replication, although not
providing Turing universality, does so in company of the while loop.
I was just saying that while loops and replication don't imply the
generation of feeling.
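For what it's worth, the computational fact Bruno alludes to is standard: copying values and bounded steps alone are not universal, but adding a while loop (unbounded iteration) is enough. A small illustrative sketch of my own, building multiplication out of nothing but copying, successor/predecessor, and while loops:

```python
# Multiplication from copying, successor/predecessor, and while loops
# only -- the while loop supplies the unbounded iteration that mere
# replication lacks.
def multiply(a: int, b: int) -> int:
    result = 0
    count = b              # replication: copy a value
    while count > 0:       # unbounded iteration
        addend = a         # copy again
        while addend > 0:
            result = result + 1  # successor
            addend = addend - 1  # predecessor
        count = count - 1
    return result

print(multiply(6, 7))  # 42
```

Whether such loops generate feeling is exactly the point in dispute; the sketch only shows the computational claim.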
>> Conscious and unconscious are aspects of the inherent subject-object
>> symmetry of the universe.
> Which you assume.
What choice do I have? My only experience of the universe is 100%
definable by the subject-object symmetry.
>> Why isn't arithmetic truth physical?
> Because it does not rely on any physical notion. You can do number
> theory without ever doing physics.
But you can't do number theory without a physical subject doing the
theorizing.
>> Why not have some creatures with smart skulls or shells and stupid
>> soft parts inside? It seems to be a strong indicator of material
>> properties consistently determining mechanism and not the other way
>> around.
> Seeming is deceptive.
What would be an explanation, or counterfactual?
>> Arithmetical truth does make sense, definitely, but so do other
>> kinds of experiences, which are not arithmetic truths.
> If they are conceptually rich enough, you can take them instead of
> arithmetic, without changing anything in the explanation of
> consciousness and matter. I use numbers because people are more
> familiar with them.
I use sense because it makes more sense.
> If you deposit your Gödel number code at the bank, or something like
> that. You stretch the meaning of comp, which is just the bet that our
> body is Turing emulable and that we can survive through any of its
> Turing emulations.
Isn't that what money is really all about now though? Instead of a
body, we have accounts. You can't get more Turing emulable than that.
It's practically Turing-maniacal.
>> All of those Wall Street quants... where is the theology and
>> creativity?
> It has been buried by the materialists for 1500 years.
60% of the stock trades in the US markets are automated. I would say
that makes AI the dominant financial decision maker in the world.
There is no ontological brain, yet we are.
Aren't we the ontological brain already?
>>>> We cannot be simulated any more than water or fire can be
>>>> simulated.
>>> Why? That's a strong affirmation. We have not yet found a phenomenon
>>> in nature that cannot be simulated (except the collapse of the wave,
>>> which can still be Turing 1-person recoverable).
>> You can't water a real plant with simulated water or survive the
>> arctic burning virtual coal for heat.
> What is a real plant? A plant is epistemologically real relatively to
> you and your most probable computations. It is not an absolute notion.
It might be an absolute notion.
At my level of description it is a
plant, at another it's tissues, cells, molecules, etc. Anything that
satisfies all of those descriptions within all of those perceptual
frames may be a real plant. If it only looks like a plant, then it's a
cartoon or a puppet.
>> If you look at substitution level in reverse, you will see that it's
>> not a matter of making a plastic plant that acts so real we can't
>> tell the difference, it's a description level which digitizes a
>> description of a plant rather than an actual plant. Nothing has been
>> simulated, only imitated. The difference is that an imitation only
>> reminds us of what is being imitated but a simulation carries the
>> presumption of replacement.
> This makes things more complex than they might be.
It makes more sense though. Otherwise we would have movies that we
could literally live inside of already.
>> If you say that human consciousness exists independently of a human
>> brain, you have to give me an example of such a case.
> UDA shows that you are an example of this.
But drinking some scotch or smoking a cigar tells me that I am not
independent of my brain.
>>>>> We, unfortunately, cannot be digitized,
>>>> You don't know that. But you don't derive it either from what you
>>>> assume (which to be frank remains unclear)
>>> I do derive it, because the brain and the self are two parts of a
>>> whole. You cannot export the selfness into another form, because
>>> the self has no form, it's only experiential content through the
>>> interior of a living brain.
>> That's the 1-self, but it is just an interface between truth and
>> relative bodies.
> Truth is just an interface between all 1-self and all relative
> bodies.
In which theory? This does not make sense.
It's an implication of multisense realism. Truth (a kind of Sense) is
an interface between all 1-self (sensorimotive experiences) and all 3-
p relative bodies (electromagnetic objects). It is the synchronization
of interior dreams and external bodies.
>> My whole point is revealing a universe description in which logic
>> and direct experience coexist in many ways. Limiting it to logical
>> language defeats the purpose,
> That's what the machine can already explain. You consider it as a
> zombie.
Not a zombie, a puppet.
>> although I would love to collaborate with someone who was interested
>> in formalizing the ideas.
> Convince people that there is an idea. But by insisting that your
> ideas contradict comp, you undermine your own theory, because you add
> a magic where the comp theories explain the appearance of the magic
> without introducing it at the start.
Comp introduces magic at the start. 'Arithmetic Truth' is very much a
digital Dreamtime. I don't add any magic, and nothing appears except
different levels of sense recapitulation in inertial frames.
Everything in multisense realism works with a universe of only the
typical experiences that we live through every day, plus it explains
why extraordinary experiences are harder to ground in public
certainty.
>> Logic is a 3p language - a mechanistic, involuntary form of
>> reasoning which denies the 1p subject any option but to accept it.
> This is false. The right side of the hypostases with "& p" are
> provably beyond language, at the level the machine can live.
You're making my point. The notion of anything being literally false
or true is just what I said: an involuntary form of reasoning. Then
you proceed to deny me, the 1p subject, any option to accept it.
>> The 1p experience is exactly the opposite of that. It is a 'seems
>> like' affair which invites or discourages voluntary participation of
>> the subject. Half of the universe is made of this.
> With comp, it is the main part of the "universe".
That's why it's a little naive :)
That's the reason mind exists; it accelerates the processing. In fact,
just by a software change, the slower machine can always beat the
faster machines on almost all inputs, except a finite number of them.
I think that there is some difference in the respect that one can
abstract control from a particular application. For example, I can
consider a PID controller as such. Such a consideration belongs,
roughly speaking, to the laws of cybernetics.
Evgenii
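Evgenii's point that control can be considered apart from any particular application is easy to make concrete. Below is a minimal, application-agnostic PID controller sketch of my own; the gains and time step are illustrative values, not anything specified in the thread.

```python
# A PID controller abstracted from any particular plant: it only sees
# a setpoint, a measurement, and a time step -- the kind of
# application-independent "cybernetics law" being described.
class PID:
    def __init__(self, kp: float, ki: float, kd: float) -> None:
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint: float, measured: float, dt: float) -> float:
        """Return the control output for one time step of length dt."""
        error = setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# With purely proportional gain, the output is just kp * error:
print(PID(kp=0.5, ki=0.0, kd=0.0).step(setpoint=10.0, measured=8.0, dt=1.0))  # 1.0
```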
> Brent
>> Evgenii