Prey Blackbox Lab Without Repair

Millicent

Aug 4, 2024, 5:13:07 PM
Now I regard IIT as a serious, honorable attempt to grapple with the Pretty-Hard Problem of Consciousness: something concrete enough to move the discussion forward. But I also regard IIT as a failed attempt on the problem. And I wish people would recognize its failure, learn from it, and move on.

[Endnote: See also this related post, by the philosopher Eric Schwitzgebel: Why Tononi Should Think That the United States Is Conscious. While the discussion is much more informal, and the proposed counterexample more debatable, the basic objection to IIT is the same.]


I think consciousness is like the operating system of a computer (Windows, Unix, etc.), mediating between external inputs and outputs and internal applications. That is, Windows does not know how to do spreadsheet calculations, but it can start Excel, feed external data to it, and show the results on a monitor screen. As far as Windows knows, the numbers it puts on the screen might have been produced by magic. Similarly, there are no nerves that monitor the brain itself, so ideas seem to come out of nowhere.


This leaves a lot of details to be figured out, such as: if the brain is a sort of computer, what is its programming language? Is it procedural, like Fortran and BASIC; object-oriented, like C++ and Java; list-oriented, like Lisp and Scheme; or something else? This could be a very difficult problem, like trying to decipher an alien computer, but in principle it is doable.


Happy Birthday, Scott! I think IIT is interesting. I like the idea that the brain is just a unique one-way hard disk, like a fingerprint, that human beings use to access and feed into a larger consciousness. Without the senses at large plugged in and online, feeding from the brain tissue into that consciousness, the brain tissue is useless and cannot be read offline.


Closer to our topic, suppose a philosopher announced a new theory of morality with some counterintuitive results: the theory said that murdering people for thrills was OK (or even a moral obligation), whereas helping old ladies across the street was the ultimate evil. Now, would you be inclined to take seriously what this theory said for more complicated, ambiguous moral dilemmas, like the trolley problems?


Luke #10: Leslie Valiant, as it happens, is extremely interested in computational models for the mind, but in the case gasarch and I were talking about, back in the 1970s, he was only thinking about a particular approach to proving circuit lower bounds. For more details, see the fourth-to-last paragraph in the OP.


Tegmark and Tononi both seem to approach the problem of consciousness in an abstract manner, disconnected from living matter, which is the only material we can be reasonably confident is (or might be) capable of consciousness. I argue that living matter itself possesses integrated information, and that the difference between living matter with little or no consciousness and organisms with greater consciousness is primarily the degree to which the living material can operate in near real time. In this view, consciousness is a potential property of living material. Integrated information might be necessary, but it is not sufficient, to produce consciousness.


But the above illustrates a crucial point for this discussion: namely, whatever geometric difficulties there are in physically realizing expander graphs, those difficulties apply to the brain just as much as to artificial systems! And hence, if those difficulties impose some effective upper bound on Φ in the one case, then they do the same in the other.
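For readers unfamiliar with the term, "expander" can be made concrete with a few lines of numpy. This is a sketch of my own, not from the post: a random regular graph built as a union of random permutations is, with high probability, an expander, and the certificate is a gap between the first and second eigenvalues of its adjacency matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 4

# Build a random 2d-regular multigraph: take d random permutations
# and symmetrize. With high probability this yields an expander.
A = np.zeros((n, n))
for _ in range(d):
    p = rng.permutation(n)
    A[np.arange(n), p] += 1
    A[p, np.arange(n)] += 1

# Eigenvalues of a symmetric adjacency matrix, sorted by magnitude.
evals = np.sort(np.abs(np.linalg.eigvalsh(A)))[::-1]

# The top eigenvalue equals the degree 2d; a gap down to the second
# eigenvalue is exactly what "good expansion" means spectrally.
print(evals[0], evals[1])
```

The point of the comment above stands independently of the construction: whatever makes such graphs hard to lay out in three-dimensional space applies equally to neurons and to silicon.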


One objection I have to IIT is its list of postulates. It's very heavy on the perceptual side, with nothing on the output end, and it does not account for a self-world boundary.


Accepting the possibility of p-zombies necessarily implies dualism. Any dualist theory of this sort is necessarily less parsimonious than a monist theory. Ergo, I should never prefer the dualist theory.


Oh, I also disagree with your connecting p-zombies and the hard problem of consciousness. You can still accept and attempt to address the hard problem of consciousness seriously without accepting the validity of p-zombies.


IIT makes three main assertions:

1) consciousness is a normal physical process, i.e. it respects the physical version of the Church-Turing thesis

2) consciousness is an intrinsic property, i.e. we can calculate its amount without taking the outside into account


I would like to echo the earlier points that we should stick to constructing mathematical arguments that we can conceive of as physical systems. As Scott shows, if we use a matrix computation in isolation we achieve the process of information integration, and we can also conceive of the physical system.
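As a toy illustration of the kind of matrix computation at issue (my own sketch; the post's actual counterexample is in the same spirit but works over a finite field), a Vandermonde matrix gives a linear map in which every output depends on every input, so no bipartition of the wires can reproduce the dynamics:

```python
import numpy as np

# Vandermonde matrix on distinct nodes 1..n: invertible, and every
# column has no zero entries, so each input influences every output.
n = 8
nodes = np.arange(1, n + 1)
V = np.vander(nodes, n, increasing=True).astype(float)

rng = np.random.default_rng(0)
x = rng.normal(size=n)
y = V @ x

# Perturbing any single input changes every output coordinate:
# the "each part constrains every other part" dependence that
# inflates integration measures.
for k in range(n):
    x2 = x.copy()
    x2[k] += 1.0
    y2 = V @ x2
    assert np.all(np.abs(y2 - y) > 0)  # column k of V is all nonzero
print("every output depends on every input")
```

Nothing about this construction resembles a mind; it is simply a dense, invertible linear map, which is what makes it effective as a counterexample.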


The refutation itself does not engage all the parts of the theory, such as its evolution over time. That is, Φ is autoregressive: it describes how the current version of a system predicts its previous state, in such a way that it improves on the predictions made by the partitioned states in isolation. The Cray computer bears some crude analogy to brain topology but is low in Φ because its connection-to-CPU ratio is lower than the brain's; but if it were topologically similar to a brain, say like the SpiNNaker system, why would it not be conscious? Regression over the integration mechanism would have to improve outcomes over the previous physical state, i.e., be able to compare the current state against what would occur if the system lowered its Φ and became more partitioned. Is there a physically conceivable integration example of this that we could say is not conscious?
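The "whole system predicts itself better than its best partition" idea can be made concrete with a toy linear system. This is my own sketch, not IIT's actual Φ formula: compare one-step prediction error under the true coupled dynamics against a version whose cross-partition coupling has been severed.

```python
import numpy as np

# Coupled linear dynamics: off-diagonal terms carry cross-dependence.
# Circulant with first row [0.4, 0.2, 0.0, 0.1]; spectral radius 0.7,
# so the system is stable.
rng = np.random.default_rng(1)
n, T = 4, 2000
A = np.array([[0.4, 0.2, 0.0, 0.1],
              [0.1, 0.4, 0.2, 0.0],
              [0.0, 0.1, 0.4, 0.2],
              [0.2, 0.0, 0.1, 0.4]])

# Simulate a noisy trajectory x_{t+1} = A x_t + noise.
X = np.zeros((T, n))
for t in range(T - 1):
    X[t + 1] = A @ X[t] + 0.1 * rng.normal(size=n)

# Mean squared one-step prediction error using the intact model.
err_whole = np.mean((X[1:] - X[:-1] @ A.T) ** 2)

# Same, after cutting the coupling across the partition {0,1} | {2,3}.
A_cut = A.copy()
A_cut[:2, 2:] = 0.0
A_cut[2:, :2] = 0.0
err_cut = np.mean((X[1:] - X[:-1] @ A_cut.T) ** 2)

print(err_whole < err_cut)  # the intact system predicts itself better
```

The gap between the two errors is a crude stand-in for the comment's point: lowering integration (more partitioning) demonstrably worsens the system's self-prediction.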


It turns out that the direct measurement of anything is wildly difficult, even of so mundane a thing as the temperature of boiling water. But this raises what Chang calls the problem of nomic measurement.


Still, some scientists remain perversely interested in the differences between DVD players and people. In particular, it seems that people (and many animals) have certain properties that DVD players lack: call this set of properties ψ, which may include, for instance, certain kinds of creative abilities or intentional states. (You can be as behaviorist as you like defining the set ψ.)


To a degree I am with Tononi (a pure-maths conjuring-up of mega-phi is not a challenge to phi arising in a natural system), but in the end you are right: you provided a counterexample. IIT needs to fold up its tent, or else tighten up the definition of phi to exclude DVD players.


Now, I had to cheat off my dorm roommate to squeak through Calc III, so I will leave it to the smart guys; but is there a mathematical way to differentiate massive interdependence arising from cleverly selected matrix operations (see what I mean about my math aptitude?) from interdependence/integration that arises over time in a system B (endowed initially only with local integration) when B is exposed to an environment W, such that B becomes able to operate with increasing effectiveness (defined, say, as the ability to survive long enough in W to spin off other Bs)?


Is that the first derivative I am after? This growth in phi is crucial, because I think Tononi can rightly object that your counterexample mechanically generates mega but uninteresting phi. The subject, after all, is still consciousness, and what we are after is the interesting kind of phi: the kind that lets us play ping-pong while drinking beer. The DVD player is simply built with all this phi. It defeats the current definition of IIT, but that just means we need to mathematically limit counterexamples to those in which phi arises unguided, as an emergent property, given merely local integration, crucially producing a more effective system that requires integration supported by but otherwise unrelated to the local integration (such that one could aspire to building an equivalent system using, say, integrated circuits instead of nerve cells).


This, I feel, is what Plato was talking about in the allegory of the cave, where the reality of the prisoners was the result of the pattern matching game they played with the shadows on the cave wall.


If what you want out of a consciousness-yardstick ψ is that ψ assigns consciousness to and only to things that behave vaguely like humans, then what is the problem with taking ψ to be the Turing test?


Failures in explanatory power are not grounds for absolute rejection though. For instance, the Copenhagen interpretation of QM would never have passed this test either, but quite a bit of good physics was done despite its explanatory limitations.


Any reasonable definition of consciousness must include the necessary condition that a system is conscious only if it is able to learn a new language, ostensibly learning context, semantics, syntax, and symbols through pidgin, then creole, and finally a full-blown language. Even in the most severe cases of being locked in, it is at least theoretically possible to develop a language through which to communicate.


Any measurement of consciousness in a context and semantic free language is absurd. Nothing points to this folly more clearly than the a priori assumption of the existence of a language made in both the IIT discipline and this post.


A necessary condition for consciousness is the ability to, at least theoretically, translate, or cross-compile, between any formal languages. As far as I know, there does not exist a Turing machine that can cross-compile between arbitrary choices of Turing-complete languages.


What if it is impossible to come up with a criterion for consciousness that is invulnerable to the kind of counterexample you provide? In other words, for any quantitative measure of consciousness, it is possible to construct some computer program which scores highly on that measure, but which on intuitive grounds is clearly not conscious.


Given all this, my inclination is to think that, while IIT fails to solve the (unsolvable) problem of perfectly identifying consciousness, it is still potentially a useful heuristic. We can take on faith that DVD players and Cheetos are not conscious, but still use IIT to ask whether a monkey, a fruitfly, or a nematode should be considered conscious. The above-mentioned (Gary Williams, #39) Current Biology paper seems to support this view.
