On Tue, 23 May 2023 at 07:56, Terren Suydam <terren...@gmail.com> wrote:

> Many, myself included, are captivated by the amazing capabilities of chatGPT and other LLMs. They are, truly, incredible. Depending on your definition of Turing Test, it passes with flying colors in many, many contexts. It would take a much stricter Turing Test than we might have imagined this time last year before we could confidently say that we're not talking to a human. One way to improve chatGPT's performance on an actual Turing Test would be to slow it down, because it is too fast to be human.

> All that said, is chatGPT actually intelligent? There's no question that it behaves in a way that we would all agree is intelligent. The answers it gives, and the speed with which it gives them, reflect an intelligence that often far exceeds that of most if not all humans.

> I know some here say intelligence is as intelligence does. Full stop, conversation over. ChatGPT is intelligent, because it acts intelligently.

> But this is an oversimplified view! The reason it's oversimplified is that it ignores the source of the intelligence. The source of the intelligence is in the texts it's trained on. If ChatGPT were trained on gibberish, that's what you'd get out of it. It is amazingly similar to the Chinese Room thought experiment proposed by John Searle: it manipulates symbols without having any understanding of what those symbols are. As a result, it does not and cannot know whether what it's saying is correct or not. This is a well-known caveat of using LLMs.

> ChatGPT, therefore, is more like a search engine that can extract the intelligence that is already structured within the data it's trained on. Think of it as a semantic Google. It's a huge achievement in the sense that, training on the data the way it does, it encodes the contexts that words appear in at sufficiently high resolution that it's usually indistinguishable from humans who actually understand context in a way that's grounded in experience. LLMs don't experience anything. They are feed-forward machines. The algorithms that implement chatGPT are useless without enormous amounts of text that expresses actual intelligence.

> Cal Newport does a good job of explaining this here.

On Mon, May 22, 2023 at 7:34 PM Stathis Papaioannou <stat...@gmail.com> wrote:

> It could be argued that the human brain is just a complex machine that has been trained on vast amounts of data to produce a certain output given a certain input, and doesn't really understand anything. This is a response to the Chinese room argument. How would I know if I really understand something or just think I understand something?

it is true that my brain has been trained on a large amount of data - data that contains intelligence outside of my own. But when I introspect, I notice that my understanding of things is ultimately rooted/grounded in my phenomenal experience. Ultimately, everything we know, we know either by our experience, or by analogy to experiences we've had. This is in opposition to how LLMs train on data, which is strictly about how words/symbols relate to one another.
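To make the "trained on gibberish" point concrete, here is a toy sketch (a bigram model in Python; an illustration only, nothing like a real transformer) showing that such a generator can only echo the statistics of whatever corpus it is trained on:

    import random
    from collections import defaultdict

    def train_bigram(corpus):
        # Record which word follows which in the training text.
        model = defaultdict(list)
        words = corpus.split()
        for a, b in zip(words, words[1:]):
            model[a].append(b)
        return model

    def generate(model, seed, length=10):
        # Repeatedly sample one of the observed next words.
        out = [seed]
        for _ in range(length):
            nxt = model.get(out[-1])
            if not nxt:
                break
            out.append(random.choice(nxt))
        return " ".join(out)

    sense = "the cat sat on the mat and the cat slept on the mat"
    noise = "blarg zzt foo blarg blarg zzt zzt foo blarg"

    print(generate(train_bigram(sense), "the"))    # English-shaped output
    print(generate(train_bigram(noise), "blarg"))  # gibberish-shaped output

A real LLM is incomparably more sophisticated, but its dependence on the training distribution is the same in kind.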
On Mon, May 22, 2023 at 8:42 PM Stathis Papaioannou <stat...@gmail.com> wrote:

> The functionalist position is that phenomenal experience supervenes on behaviour, such that if the behaviour is replicated (same output for same input) the phenomenal experience will also be replicated. This is what philosophers like Searle (and many laypeople) can’t stomach.

On Tue, 23 May 2023 at 10:48, Terren Suydam <terren...@gmail.com> wrote:

> I think the kind of phenomenal supervenience you're talking about is typically asserted for behavior at the level of the neuron, not the level of the whole agent. Is that what you're saying? That chatGPT must be having a phenomenal experience if it talks like a human? If so, that is stretching the explanatory domain of functionalism past its breaking point.

On Mon, May 22, 2023 at 11:13 PM Stathis Papaioannou <stat...@gmail.com> wrote:

> The best justification for functionalism is David Chalmers' "Fading Qualia" argument. The paper considers replacing neurons with functionally equivalent silicon chips, but it could be generalised to replacing any part of the brain with a functionally equivalent black box, the whole brain, the whole person.

You're saying that an algorithm that provably does not have experiences of rabbits and lollipops - but can still talk about them in a way that's indistinguishable from a human - essentially has the same phenomenology as a human talking about rabbits and lollipops. That's just absurd on its face. You're essentially hand-waving away the grounding problem. Is that your position? That symbols don't need to be grounded in any sort of phenomenal experience?
On Tue, May 23, 2023 at 12:14 AM Stathis Papaioannou <stat...@gmail.com> wrote:

> It's not just talking about them in a way that is indistinguishable from a human; in order to have human-like consciousness, the entire I/O behaviour of the human would need to be replicated. But in principle, I don't see why an LLM could not have some other type of phenomenal experience. And I don't think the grounding problem is a problem: I was never grounded in anything, I just grew up associating one symbol with another symbol. It's symbols all the way down.

Is the smell of your grandmother's kitchen a symbol?
On Mon, May 22, 2023 at 11:37 PM Terren Suydam <terren...@gmail.com> wrote:

> You're essentially hand-waving away the grounding problem. Is that your position? That symbols don't need to be grounded in any sort of phenomenal experience?

Are you talking here about Chalmers' thought experiment in which each neuron is replaced by a functional duplicate, or about an algorithm like ChatGPT that has no detailed resemblance to the structure of a human brain? I think in the former case the case for identical experience is very strong, though note that Chalmers is not really a functionalist: he postulates "psychophysical laws" which map physical patterns to experiences, and uses the replacement argument to argue that such laws would have the property of "functional invariance".

If you are just talking about ChatGPT-style programs, I would agree with you: a system trained only on the high-level symbols of human language (as opposed to symbols representing neural impulses or other low-level events at the microscopic level) is not likely to experience anything like a human being using the same symbols. If Stathis' black box argument is meant to suggest otherwise, I don't see the logic, since it's not as though a ChatGPT-style program would replicate the detailed output of a composite group of neurons, or even the exact verbal output of a specific person, so there is no equivalent to the gradual replacement of parts of a real human.

If we are just talking about qualitatively behaving in a "human-like" way, without replicating the behavior of a specific person or a sub-component of a person like a group of neurons in their brain, Chalmers' thought experiment doesn't apply. And even in a qualitative sense, count me as very skeptical that an LLM trained only on human writing will ever pass a really rigorous Turing test.
As I see this thread, Terren and Stathis are both talking past each other. Please, either of you, correct me if I am wrong, but in an effort to clarify and perhaps resolve this situation:

I believe Stathis is saying the functional substitution having the same fine-grained causal organization *would* have the same phenomenology, the same experience, and the same qualia as the brain with the same fine-grained causal organization. Therefore, there is no disagreement between your positions with regard to symbol grounding, mappings, etc.

When you both discuss the problem of symbols, or bits, etc., I believe this is partly responsible for why you are talking past each other, because there are many levels involved in brains (and computational systems). I believe you were discussing completely different levels in the hierarchical organization. There are high-level parts of minds, such as ideas, thoughts, feelings, qualia, etc., and there are low-level parts, be they neurons, neurotransmitters, atoms, quantum fields, and laws of physics as in human brains, or circuits, logic gates, bits, and instructions as in computers.

I think when Terren mentions a "symbol for the smell of grandmother's kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale or idea or memory of the smell of GMK is a very high-level feature of a mind. When Terren asks for or discusses a symbol for it, a complete answer/description for it can only be supplied in terms of a vast amount of information concerning low-level structures, be they patterns of neuron firings or patterns of bits being processed. When we consider things down at this low level, however, we lose all context for what the meaning, idea, and quale are or where or how they come in. We cannot see or find the idea of GMK in any neuron, any more than we can see or find it in any bit.

Of course it should then seem deeply mysterious, if not impossible, how we get "it" (GMK or otherwise) from "bit", but to me this is no greater a leap than how we get "it" from a bunch of cells squirting ions back and forth. Trying to understand a smartphone by looking at the flows of electrons is a similar kind of problem: it would seem just as difficult or impossible to explain and understand the high-level features and complexity from the low-level simplicity.

This is why it's crucial to bear in mind and explicitly discuss the level one is operating on when one discusses symbols, substrates, or qualia. In summary, I think a chief reason you have been talking past each other is that you are each operating on different assumed levels. Please correct me if you believe I am mistaken, and know that I only offer my perspective in the hope it might help the conversation.
While we may not know everything about explaining it, pain doesn't seem to be that much of a mystery to me, and I don't consider it a symbol per se. It seems obvious to me, anyway, that pain arose out of a very early neural circuit as a survival mechanism.
Pain is the feeling you experience when pain receptors detect that an area of the body is being damaged. It is ultimately based on a sensory input that transmits to the brain via nerves, where it is translated into a sensation that tells you to avoid whatever is causing the pain if possible, or lets you know you otherwise have a problem with your hardware.

That said, I agree with you on LLMs for the most part, although I think they are showing some potentially emergent, interesting behaviors.

On Tue, May 23, 2023 at 1:58 AM Terren Suydam <terren...@gmail.com> wrote:

> Take a migraine headache - if that's just a symbol, then why does that symbol feel bad while others feel good? Why does any symbol feel like anything? If you say evolution did it, that doesn't actually answer the question, because evolution doesn't do anything except select for traits, roughly speaking. So it just pushes the question to: how did the subjective feeling of pain or pleasure emerge from some genetic mutation, when it wasn't there before?
On Tue, May 23, 2023 at 7:09 AM Jason Resch <jason...@gmail.com> wrote:

> In summary, I think a chief reason you have been talking past each other is that you are each operating on different assumed levels.

I appreciate the callout, but it is necessary to talk at both the micro and the macro level for this discussion. We're talking about symbol grounding. I should make it clear that I don't believe symbols can be grounded in other symbols (i.e. symbols all the way down, as Stathis put it); that leads to infinite regress and the illusion of meaning. Symbols ultimately must stand for something. The only thing they can stand for, ultimately, is something that cannot be communicated by other symbols: conscious experience. There is no concept in our brains that is not ultimately connected to something we've seen, heard, felt, smelled, or tasted.
In my experience with conversations like this, you usually have people on one side who take consciousness seriously as the only thing that is actually undeniable, and you have people who'd rather not talk about it, hand-wave it away, or outright deny it. That's the talking-past that usually happens, and that's what's happening here.
replicating the behaviour of the human through any means, such as training an AI not only on language but also movement, would also preserve consciousness, even though it does not simulate any physiological processes. Another way to say this is that it is not possible to make a philosophical zombie.
> What was the biochemical or neural change that suddenly birthed the feeling of pain?
> I don't believe symbols can be grounded in other symbols
> There is no concept in our brains that is not ultimately connected to something we've seen, heard, felt, smelled, or tasted.
Let me start out by saying I don't believe in zombies. We are biophysical systems with a long history of building on and repurposing earlier systems of genes and associated proteins. I saw you don't believe it is symbols all the way down. I agree with you, but I am arguing that for many things the chain of symbols begins with sensory input and ends with a higher-level symbol/abstraction, particularly in fully conscious animals like human beings that are self-aware and capable of an inner dialogue.
As in an earlier example I gave, someone born blind has no concept of red, or any color for that matter, or images, and so on. I don't believe redness is hiding in some molecule in the brain, as Brent does. It's only created via pruned neural networks in someone whose sensory inputs are working properly. That's the chain of symbols, and it starts with an electrical impulse sent down nerves from a sensory organ.
It's the same thing with pain. If a certain gene related to a subset of sodium channels (which are critical for proper transmission of signals propagating along certain nerves) is defective, a human being is incapable of feeling pain. I'd argue they don't know what pain is, just as a congenitally blind person doesn't know what red is. It's the same thing with hearing and music. If a brain is missing that initial sensory input, your consciousness does not have the ability to feel the related subjective sensation.
And yes, I'm arguing that a true simulation (let's say, for the sake of a thought experiment, that we were able to replicate every neural connection of a human being in code, including the connectome and neurotransmitters, along with a simulated nerve connected to a button on the desk we could press, which would simulate the signal sent when a biological pain receptor is triggered) would feel pain that is just as real as the pain you and I feel as biological organisms.
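For the flavor of what "a pain receptor in code" could look like, here is a minimal sketch (a toy leaky integrate-and-fire unit in Python; the constants and the button are hypothetical stand-ins, nowhere near the connectome-level simulation described above):

    # Toy leaky integrate-and-fire "nociceptor": pressing the button
    # injects current; when the membrane variable crosses threshold,
    # the simulated nerve fires a pain signal.
    THRESHOLD, LEAK, INJECT = 1.0, 0.9, 0.3

    def step(v, button_pressed):
        v = v * LEAK + (INJECT if button_pressed else 0.0)
        if v >= THRESHOLD:
            return 0.0, True   # reset after a spike
        return v, False

    v = 0.0
    for t in range(12):
        v, spike = step(v, button_pressed=(3 <= t <= 8))
        if spike:
            print(f"t={t}: pain signal sent down the simulated nerve")

Whether a vastly scaled-up version of this loop would feel anything is, of course, exactly the point in dispute.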
You asked me for the principle by which a critter could start having a negative feeling that didn't exist in its progenitors. Again, I believe the answer is as simple as this: it happened when pain receptors evolved, perhaps starting as a random mutation, where the behavior they induced in lower organisms resulted in increased survival.
I'm not claiming to have solved the hard problem of consciousness. I don't claim to have the answer for why pain subjectively feels the way it does, or why pleasure does, but I do know that reward systems that evolved much earlier are involved (like dopamine-based ones), and that pleasure can be directly triggered via various recreational drugs. That doesn't mean I think the dopamine molecule is where the pleasure quale is hiding.
Even lower forms of life like bacteria move towards what their limited sensory systems tell them is a reward and away from what they tell them is a danger. I believe our subjective experiences are layered onto these much earlier evolutionary artifacts, although as eukaryotes I am not claiming that much of this is inherited from LUCA. I think it blossomed once predator/prey dynamics were possible in the Cambrian explosion and was built on from there over many, many years.
Getting slightly off topic, I don't think substrate likely matters as far as producing consciousness. The only possible way I could see that it would is if quantum effects that we can't reasonably replicate are actually involved in generating it. That said, I think Penrose and others do not have the odds on their side there, for a number of reasons. Like I said, though, I don't believe in zombies.

On Tue, May 23, 2023 at 9:12 AM Terren Suydam <terren...@gmail.com> wrote:

> But how? What was the biochemical or neural change that suddenly birthed the feeling of pain? I'm not asking you to know the details, just the principle - by what principle can a critter that comes into being with some modification of its organization start having a negative feeling when it didn't exist in its progenitors? This doesn't seem mysterious to you?

> Very early neural circuits are relatively easy to simulate, and I'm guessing some team has done this for the level of organization you're talking about. What you're saying, if I'm reading you correctly, is that that simulation feels pain. If so, how do you get that feeling of pain out of code?
On Tue, May 23, 2023 Terren Suydam <terren...@gmail.com> wrote:

> What was the biochemical or neural change that suddenly birthed the feeling of pain?

It would not be difficult to make a circuit such that whenever a specific binary sequence of zeros and ones is in a register, the circuit stops doing everything else and changes that sequence to something else as fast as possible. As I've said before, intelligence is hard but emotion is easy.
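The register circuit described here can be caricatured in a few lines of Python (a sketch of the mechanism as described, not a claim about what, if anything, it feels):

    PAIN_PATTERN = 0b1011

    def run(register, work_queue):
        # Whenever the "noxious" bit pattern appears, the machine drops
        # everything else and overwrites it as fast as possible.
        while work_queue:
            if register == PAIN_PATTERN:
                register = 0b0000
                continue
            register = work_queue.pop(0)(register)
        return register

    tasks = [lambda r: r | 0b0001,   # ordinary work
             lambda r: 0b1011,       # something writes the pain pattern
             lambda r: r << 1]
    print(run(0b0000, tasks))        # the pattern is cleared before the last task runs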
On Tue, May 23, 2023 at 11:08 AM Dylan Distasio <inte...@gmail.com> wrote:

> And yes, I'm arguing that a true simulation [...] would feel pain that is just as real as the pain you and I feel as biological organisms.

This follows from the physicalist no-zombies-possible stance. But it still runs into the hard problem, basically: how does stuff give rise to experience?
On Tue, May 23, 2023, 9:34 AM Terren Suydam <terren...@gmail.com> wrote:

> Symbols ultimately must stand for something. The only thing they can stand for, ultimately, is something that cannot be communicated by other symbols: conscious experience. There is no concept in our brains that is not ultimately connected to something we've seen, heard, felt, smelled, or tasted.

I agree everything you have experienced is rooted in consciousness. But at the low level, the only thing your brain senses is neural signals (symbols, on/off, ones and zeros).

In your arguments you rely on the high-level conscious states of human brains to establish that they have grounding, but then use the low-level descriptions of machines to deny their own consciousness, and hence deny that they can ground their processing in anything. If you remained in the space of low-level descriptions for both brains and machine intelligences, however, you would see that each struggles to make a connection to what may exist at the high level. You would see the lack of any apparent grounding in what are just neurons firing or not firing at certain times, just as a wire in a circuit either carries or doesn't carry a charge.
Conversely, if you stay in the high-level realm of conscious ideas, then you must face the problem of other minds. You know you are conscious, but you cannot prove or disprove the consciousness of others, at least not without first defining a theory of consciousness and explaining why some minds satisfy the definition and others do not. Until you present a theory of consciousness, this conversation is, I am afraid, doomed to continue in this circle forever. This same conversation and outcome played out over the past few months on the extropy-chat list, although with different actors, so I can say with some confidence where some topics are likely to lead.

> In my experience with conversations like this, you usually have people on one side who take consciousness seriously as the only thing that is actually undeniable, and you have people who'd rather not talk about it, hand-wave it away, or outright deny it. That's the talking-past that usually happens, and that's what's happening here.

Do you have a theory for why neurology supports consciousness but silicon circuitry cannot?
Jason
On Tue, May 23, 2023 at 9:34 AM Terren Suydam <terren...@gmail.com> wrote:

> I don't believe symbols can be grounded in other symbols... Symbols ultimately must stand for something. The only thing they can stand for, ultimately, is something that cannot be communicated by other symbols: conscious experience.

But are you talking specifically about symbols with high-level meaning, like the words humans use in ordinary language, which large language models like ChatGPT are trained on? Or are you talking more generally about any kind of symbols, including something like the 1s and 0s in a giant computer performing an extremely detailed simulation of a physical world, perhaps down to the level of particle physics, where that simulation could include detailed physical simulations of things in the external environment (a flower, say) and of the components of a simulated biological organism with a nervous system (with particle-level simulations of neurons, etc.)? Would you say that even in the case of the detailed physics simulation, nothing in there could ever give rise to conscious experience like our own?

Jesse
On Tue, May 23, 2023 at 3:50 PM Terren Suydam <terren...@gmail.com> wrote:

> in my view, consciousness entails a continuous flow of experience.

If I could instantly stop all physical processes that are going on inside your head for one year and then start them up again, to an outside objective observer you would appear to lose consciousness for one year, but to you your consciousness would still feel continuous, and the outside world would appear to have discontinuously jumped to something new.
>> If I could instantly stop all physical processes that are going on inside your head for one year and then start them up again, to an outside objective observer you would appear to lose consciousness for one year, but to you your consciousness would still feel continuous but the outside world would appear to have discontinuously jumped to something new.

> I meant continuous in terms of the flow of state from one moment to the next. What you're describing is continuous because it's not the passage of time that needs to be continuous, but the state of information in the model as the physical processes evolve.
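The state-flow reading of continuity is easy to make concrete (a toy sketch, assuming the mind-as-state-machine framing used above): checkpoint a simulated process, pause it for any length of time, and resume it; from the inside, the trajectory of states has no gap.

    import pickle, time

    def think(state):
        # One step of a toy "mind": the next state depends only on the last.
        return {"t": state["t"] + 1, "last": f"thought #{state['t'] + 1}"}

    state = {"t": 0, "last": None}
    for _ in range(3):
        state = think(state)

    snapshot = pickle.dumps(state)         # freeze every "physical process"
    time.sleep(2)                          # two seconds or one year: the state can't tell
    state = think(pickle.loads(snapshot))  # resume

    print(state)   # {'t': 4, 'last': 'thought #4'} - no gap from the inside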
On Tue, May 23, 2023 at 1:46 PM Jason Resch <jason...@gmail.com> wrote:On Tue, May 23, 2023, 9:34 AM Terren Suydam <terren...@gmail.com> wrote:On Tue, May 23, 2023 at 7:09 AM Jason Resch <jason...@gmail.com> wrote:As I see this thread, Terren and Stathis are both talking past each other. Please either of you correct me if i am wrong, but in an effort to clarify and perhaps resolve this situation:I believe Stathis is saying the functional substitution having the same fine-grained causal organization *would* have the same phenomenology, the same experience, and the same qualia as the brain with the same fine-grained causal organization.Therefore, there is no disagreement between your positions with regards to symbols groundings, mappings, etc.When you both discuss the problem of symbology, or bits, etc. I believe this is partly responsible for why you are both talking past each other, because there are many levels involved in brains (and computational systems). I believe you were discussing completely different levels in the hierarchical organization.There are high-level parts of minds, such as ideas, thoughts, feelings, quale, etc. and there are low-level, be they neurons, neurotransmitters, atoms, quantum fields, and laws of physics as in human brains, or circuits, logic gates, bits, and instructions as in computers.I think when Terren mentions a "symbol for the smell of grandmother's kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale or idea or memory of the smell of GMK is a very high-level feature of a mind. When Terren asks for or discusses a symbol for it, a complete answer/description for it can only be supplied in terms of a vast amount of information concerning low level structures, be they patterns of neuron firings, or patterns of bits being processed. When we consider things down at this low level, however, we lose all context for what the meaning, idea, and quale are or where or how they come in. We cannot see or find the idea of GMK in any neuron, no more than we can see or find it in any neuron.Of course then it should seem deeply mysterious, if not impossible, how we get "it" (GMK or otherwise) from "bit", but to me, this is no greater a leap from how we get "it" from a bunch of cells squirting ions back and forth. Trying to understand a smartphone by looking at the flows of electrons is a similar kind of problem, it would seem just as difficult or impossible to explain and understand the high-level features and complexity out of the low-level simplicity.This is why it's crucial to bear in mind and explicitly discuss the level one is operation on when one discusses symbols, substrates, or quale. In summary, I think a chief reason you have been talking past each other is because you are each operating on different assumed levels.Please correct me if you believe I am mistaken and know I only offer my perspective in the hope it might help the conversation.I appreciate the callout, but it is necessary to talk at both the micro and the macro for this discussion. We're talking about symbol grounding. I should make it clear that I don't believe symbols can be grounded in other symbols (i.e. symbols all the way down as Stathis put it), that leads to infinite regress and the illusion of meaning. Symbols ultimately must stand for something. The only thing they can stand for, ultimately, is something that cannot be communicated by other symbols: conscious experience. 
There is no concept in our brains that is not ultimately connected to something we've seen, heard, felt, smelled, or tasted.
I agree everything you have experienced is rooted in consciousness. But at the low level, the only thing your brain senses is neural signals (symbols, on/off, ones and zeros). In your arguments you rely on the high-level conscious states of human brains to establish that they have grounding, but then use the low-level descriptions of machines to deny their consciousness, and hence deny they can ground their processing to anything. If you remained in the space of low-level descriptions for both brains and machine intelligences, however, you would see each struggles to make a connection to what may exist at the high level. You would see the lack of any apparent grounding in what are just neurons firing or not firing at certain times, just as a wire in a circuit either carries or doesn't carry a charge.
Ah, I see your point now. That's valid, thanks for raising it and let me clarify.
Bringing this back to LLMs, it's clear to me that LLMs do not have phenomenal experience, but you're right to insist that I explain why I think so. I don't know if this amounts to a theory of consciousness, but the reason I believe that LLMs are not conscious is that, in my view, consciousness entails a continuous flow of experience. Assuming for this discussion that consciousness is realizable in a substrate-independent way, that means that consciousness is, in some sort of way, a process in the domain of information. And so to realize a conscious process, whether in a brain or in silicon, the physical dynamics of that information process must also be continuous, which is to say, recursive.
The behavior or output of the brain in one moment is the input to the brain in the next moment.But LLMs do not exhibit this. They have a training phase, and then they respond to discrete queries. As far as I know, once it's out of the training phase, there is no feedback outside of the flow of a single conversation. None of that seems isomorphic to the kind of process that could support a flow of experience, whatever experience would mean for an LLM.So to me, the suggestion that chatGPT could one day be used to functionally replace some subset of the brain that is responsible for mediating conscious experience in a human, just strikes me as absurd.
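A minimal Python sketch of the distinction being drawn here (every name in it is a hypothetical illustration, not any real model's API): a stateless feed-forward call versus a process whose output at one moment becomes its input at the next.

    # A stateless, feed-forward responder: nothing persists between calls.
    def feed_forward(query):
        return "response to " + query

    # A recurrent process: each moment's output is the next moment's input.
    def update(state):
        # hypothetical transition function standing in for brain dynamics
        return state + 1

    def recurrent(state, steps):
        for _ in range(steps):
            state = update(state)  # output fed back as input
        return state

    print(feed_forward("hello"))  # same answer every time, no history
    print(recurrent(0, 5))        # 5: the trajectory depends on its own past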
Conversely, if you stay in the high-level realm of consciousness ideas, well then you must face the problem of other minds. You know you are conscious, but you cannot prove or disprove the consciousness of others, at least not without first defining a theory of consciousness and explaining why some minds satisfy the definition and others do not. Until you present a theory of consciousness then this conversation is, I am afraid, doomed to continue in this circle forever. This same conversation and outcome played out over the past few months on the extropy-chat list, although with different actors, so I can say with some confidence where some topics are likely to lead.
In my experience with conversations like this, you usually have people on one side who take consciousness seriously as the only thing that is actually undeniable, and you have people who'd rather not talk about it, hand-wave it away, or outright deny it. That's the talking-past that usually happens, and that's what's happening here.
Do you have a theory for why neurology supports consciousness but silicon circuitry cannot?
I'm agnostic about this, but that's because I no longer assume physicalism. For me, the hard problem signals that physicalism is impossible. I've argued on this list many times as a physicalist, as one who believes in the possibility of artificial consciousness, uploading, etc. I've argued that there is something it is like to be a cybernetic system. But at the end of it all, I just couldn't overcome the problem of aesthetic valence. As an aside, the folks at Qualia Computing have put forth a theory that symmetry in the state space isomorphic to ongoing experience is what corresponds to positive valence, and anti-symmetry to negative valence.
It's a very interesting argument but one is still forced to leap from a mathematical concept to a subjective feeling. Regardless, it's the most sophisticated attempt to reconcile the hard problem that I've come across.I've since come around to the idealist stance that reality is fundamentally consciousness, and that the physical is a manifestation of that consciousness, like in a dream.
It has its own "hard problem", which is explaining why the world appears so orderly.
But if you don't get too hung up on that, it's not as clear that artificial consciousness is possible. It might be! It may even be that efforts like the above to explain how you get "it" from "bit" are relevant to idealist explanations of physical reality. But the challenge with idealism is that the explanations on offer sound more like mythology and metaphor than science. I should note that Bernardo Kastrup
has some interesting ideas on idealism, and he approaches it in a way that is totally devoid of woo. That said, one really intriguing set of evidence in favor of idealism is near-death-experience (NDE) testimony, which is pretty remarkable if one actually studies it.
On Tue, May 23, 2023 at 2:27 PM Jason Resch <jason...@gmail.com> wrote:
On Tue, May 23, 2023 at 1:15 PM Terren Suydam <terren...@gmail.com> wrote:
On Tue, May 23, 2023 at 11:08 AM Dylan Distasio <inte...@gmail.com> wrote:
And yes, I'm arguing that a true simulation (let's say for the sake of a thought experiment we were able to replicate every neural connection of a human being in code, including the connectomes and neurotransmitters, along with a simulated nerve connected to a button on the desk we could press which would simulate the signal sent when a biological pain receptor is triggered) would feel pain that is just as real as the pain you and I feel as biological organisms.
This follows from the physicalist no-zombies-possible stance. But it still runs into the hard problem, basically: how does stuff give rise to experience?
I would say stuff doesn't give rise to conscious experience. Conscious experience is the logically necessary and required state of knowledge that is present in any consciousness-necessitating behaviors. If you design a simple robot with a camera and robot arm that is able to reliably catch a ball thrown in its general direction, then something in that system *must* contain knowledge of the ball's relative position and trajectory. It simply isn't logically possible to have a system that behaves in all situations as if it knows where the ball is, without knowing where the ball is. Consciousness is simply the state of being with knowledge: con- (Latin for "with"), -scious- (Latin for "knowledge"), -ness (English suffix meaning "the state of being X"). Consciousness -> the state of being with knowledge.
There is an infinite variety of potential states and levels of knowledge, and this contributes to much of the confusion, but boiled down to the simplest essence of what is or isn't conscious, it is all about knowledge states. Knowledge states require activity/reactivity to the presence of information, and counterfactual behaviors (if/then, greater-than/less-than, discriminations and comparisons that lead to different downstream consequences in a system's behavior). At least, this is my theory of consciousness.
Jason
This still runs into the valence problem though. Why does some "knowledge" correspond with a positive feeling and other knowledge with a negative feeling?
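A toy Python sketch of Jason's ball-catching point just above (the whole controller is a hypothetical illustration): any system that reliably catches the ball must carry internal state encoding the ball's position and velocity, and must use that state counterfactually.

    # Toy estimator-controller: its internal variables are, in Jason's
    # terms, the system's "knowledge" of the ball.
    class BallCatcher:
        def __init__(self):
            self.est_pos = 0.0   # estimated ball position
            self.est_vel = 0.0   # estimated ball velocity

        def observe(self, measured_pos, dt=0.1):
            # Update the internal knowledge state from a new measurement.
            new_vel = (measured_pos - self.est_pos) / dt
            self.est_vel = 0.5 * self.est_vel + 0.5 * new_vel
            self.est_pos = measured_pos

        def arm_target(self, lookahead=0.5):
            # Counterfactual use of that knowledge: where the arm moves
            # depends on comparisons over the estimated trajectory.
            return self.est_pos + self.est_vel * lookahead

    catcher = BallCatcher()
    for pos in [1.0, 1.2, 1.4, 1.6]:
        catcher.observe(pos)
    print(catcher.arm_target())  # predicts ahead of the last observation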
I'm not talking about the functional accounts of positive and negative experiences. I'm talking about phenomenology. The functional aspect of it is not irrelevant, but to focus only on that is to sweep the feeling under the rug. So many dialogs on this topic basically terminate here, where it's just a clash of belief about the relative importance of consciousness and phenomenology as the mediator of all experience and knowledge.
On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou <stat...@gmail.com> wrote:
On Tue, 23 May 2023 at 21:09, Jason Resch <jason...@gmail.com> wrote:
> As I see this thread, Terren and Stathis are both talking past each other. [...]
I think you’ve captured my position. But in addition I think replicating the fine-grained causal organisation is not necessary in order to replicate higher level phenomena such as GMK. By extension of Chalmers’ substitution experiment,
Note that Chalmers's argument is based on assuming the functional substitution occurs at a certain level of fine-grainedness. If you lose this step, and look only at the top-most input-output of the mind as a black box, then you can no longer distinguish a rock from a dreaming person, nor a calculator computing 2+3 from a human computing 2+3, and one also runs into the Blockhead "lookup table" argument against functionalism.
Accordingly, I think intermediate steps and the fine-grained organization are important (to some minimum level of fidelity), but as Bruno would say, we can never be certain what this necessary substitution level is. Is it neocortical columns, is it the connectome, is it the proteome, is it the molecules and atoms, is it QFT? Chalmers argues that at least at the level where noise introduces deviations in a brain simulation, simulating lower levels should not be necessary, as human consciousness appears robust to such noise at low levels (photon strikes, Brownian motion, quantum uncertainties, etc.)
On Wed, 24 May 2023 at 04:03, Jason Resch <jason...@gmail.com> wrote:
On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou <stat...@gmail.com> wrote:
>> I think you’ve captured my position. But in addition I think replicating the fine-grained causal organisation is not necessary in order to replicate higher level phenomena such as GMK. By extension of Chalmers’ substitution experiment,
> Note that Chalmers's argument is based on assuming the functional substitution occurs at a certain level of fine-grainedness.
> If you lose this step, and look only at the top-most input-output of the mind as a black box, then you can no longer distinguish a rock from a dreaming person, nor a calculator computing 2+3 from a human computing 2+3, and one also runs into the Blockhead "lookup table" argument against functionalism.
Yes, those are perhaps problems with functionalism. But a major point in Chalmers' argument is that if qualia were substrate-specific (hence, functionalism false) it would be possible to make a partial zombie, or an entity whose consciousness and behaviour diverged from the point the substitution was made. And this argument works not just by replacing the neurons with silicon chips, but by replacing any part of the human with anything that reproduces the interactions with the remaining parts.
--Stathis Papaioannou
On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou <stat...@gmail.com> wrote:
On Wed, 24 May 2023 at 04:03, Jason Resch <jason...@gmail.com> wrote:
>> Note that Chalmers's argument is based on assuming the functional substitution occurs at a certain level of fine-grainedness.
>> If you lose this step, and look only at the top-most input-output of the mind as a black box, then you can no longer distinguish a rock from a dreaming person, nor a calculator computing 2+3 from a human computing 2+3, and one also runs into the Blockhead "lookup table" argument against functionalism.
> Yes, those are perhaps problems with functionalism. But a major point in Chalmers' argument is that if qualia were substrate-specific (hence, functionalism false) it would be possible to make a partial zombie, or an entity whose consciousness and behaviour diverged from the point the substitution was made. And this argument works not just by replacing the neurons with silicon chips, but by replacing any part of the human with anything that reproduces the interactions with the remaining parts.
How deeply do you have to go when you consider or define those "other parts", though? That seems to be a critical but unstated assumption, and something that depends on how finely grained you consider the relevant/important parts of a brain to be. For reference, this is what Chalmers says:
"In this paper I defend this view. Specifically, I defend a principle of organizational invariance, holding that experience is invariant across systems with the same fine-grained functional organization. More precisely, the principle states that given any system that has conscious experiences, then any system that has the same functional organization at a fine enough grain will have qualitatively identical conscious experiences. A full specification of a system's fine-grained functional organization will fully determine any conscious experiences that arise."
By substituting a fine-grained functional organization for a coarse-grained one, you change the functional definition and can no longer guarantee identical experiences, nor identical behaviors in all possible situations. They're no longer "functional isomorphs" as Chalmers's argument requires.
By substituting a recording of a computation for a computation, you replace a conscious mind with a tape recording of the prior behavior of a conscious mind. This is what happens in the Blockhead thought experiment. The result is something that passes a Turing test, but which is itself not conscious (though creating such a recording requires prior invocation of a conscious mind, or extraordinary luck).
> By substituting a recording of a computation for a computation, you replace a conscious mind with a tape recording of the prior behavior of a conscious mind.
> This is what happens in the Blockhead thought experiment
After answering that, let me ask what you think would happen to the consciousness of the individual if we replaced all but one neuron in the brain with this RNG-driven black box that continues to stimulate this sole remaining neuron in exactly the same way as the rest of the brain would have?
>> But you'd still need a computation to find the particular tape recording that you need, and the larger your library of recordings the more complex the computation you'd need to do would be. And in that very silly thought experiment your library needs to contain every sentence that is syntactically and grammatically correct. And there are an astronomical number to an astronomical power of those. Even if every electron, proton, neutron, photon and neutrino in the observable universe could record 1000 million billion trillion sentences there would still be well over a googolplex number of sentences that remained unrecorded. Blockhead is just a slight variation on Searle's idiotic Chinese room.
> It's very different. Note that you don't need to realize or store every possible input for the central point of Block's argument to work. For example, let's say that AlphaZero was conscious for the purposes of this argument. We record the response AlphaZero produces to each of the 361 possible opening moves on a Go board and store the results in a lookup table. This table would be only a few kilobytes.
> Then we can ask: what has happened to the consciousness of AlphaZero?
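A small Python sketch of the substitution being described (the policy function here is an invented stand-in, not AlphaZero's real interface): the recorded table reproduces the behavior on these inputs while performing none of the original computation.

    # Hypothetical stand-in for the network's actual computation.
    def policy_network(opening_move):
        return (opening_move * 7) % 361

    # Record its response to each of the 361 possible opening moves.
    lookup_table = {move: policy_network(move) for move in range(361)}

    # Behaviorally identical on these inputs, but it only replays a
    # recording; the computation that produced the answers is gone.
    def blockhead_player(opening_move):
        return lookup_table[opening_move]

    assert blockhead_player(42) == policy_network(42)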
After answering that, let me ask what you think would happen to the consciousness of the individual if we replaced all but one neuron in the brain with this RNG-driven black box that continues to stimulate this sole remaining neuron in exactly the same way as the rest of the brain would have?
The consciousness would continue. And then we could get rid of the neuron and the consciousness would continue. So we end up with the same result as the rock implementing all computations and hence all consciousnesses,
which amounts to saying that consciousness exists independently of any hardware. This is consistent with Bruno Marchal’s theory.
> The consciousness would continue. And then we could get rid of the neuron and the consciousness would continue. So we end up with the same result as the rock implementing all computations and hence all consciousnesses,
Rocks don't implement all computations. I am aware some philosophers have said as much, but they achieve this trick by mapping successive states of a computation onto time-ordered states of the rock. I don't think any computer scientist accepts this as valid. The transitions of the rock states lack the counterfactual relations which are necessary for computation. If you were to try to map states S_1 to S_5000 of a rock to a program computing Pi, looking at state S_6000 of the rock won't provide you any meaningful information about what the next digit of Pi happens to be.
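A toy Python illustration of that counterfactual point (the digits and "rock states" below are invented for the example): a mapping built after the fact can label past states with digits of Pi, but it is silent about any state outside the labelled run, so it has no predictive or counterfactual power.

    # Digits of Pi we already computed by some other means.
    pi_digits = "141592653589793"

    # "Rock states" are just indices 1..15; the mapping exists only
    # because we already know the digits.
    mapping = {state: digit for state, digit in enumerate(pi_digits, start=1)}

    print(mapping[5])               # labels a past state, fine
    print(mapping.get(6000, "?"))   # "?": the rock's state 6000 carries
                                    # no information about the next digit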
> which amounts to saying that consciousness exists independently of any hardware. This is consistent with Bruno Marchal’s theory.
It depends how you define hardware. Marchal's theory still requires computations supported by platonic truths/number relations. This is not physical hardware, but it's still a platform for supporting threads of computation.
Jason
> Rocks don't implement all computations. [...] looking at state S_6000 of the rock won't provide you any meaningful information about what the next digit of Pi happens to be.
Yes, so it can't be used as a computer that interacts with its environment and provides useful results. But we could say that the computation is still in there hidden, in the way every possible sculpture is hidden inside a block of marble.
> An RNG would be a bad design choice because it would be extremely unreliable. However, as a thought experiment, it could work. If the visual cortex were removed and replaced with an RNG which for five minutes replicated the interactions with the remaining brain, the subject would behave as if they had normal vision and report that they had normal vision, then after five minutes behave as if they were blind and report that they were blind. It is perhaps contrary to intuition that the subject would really have visual experiences in that five minute period, but I don't think there is any other plausible explanation.
I think they would be a visual zombie in that five minute period, though as described they would not be able to report any difference.
I think if one's entire brain were replaced by an RNG, they would be a total zombie who would fool us into thinking they were conscious and we would not notice a difference. So by extension a brain partially replaced by an RNG would be a partial zombie that fooled the other parts of the brain into thinking nothing was amiss.
On Thu, 25 May 2023 at 11:48, Jason Resch <jason...@gmail.com> wrote:
> I think if one's entire brain were replaced by an RNG, they would be a total zombie who would fool us into thinking they were conscious and we would not notice a difference. So by extension a brain partially replaced by an RNG would be a partial zombie that fooled the other parts of the brain into thinking nothing was amiss.
I think the concept of a partial zombie makes consciousness nonsensical.
How would I know that I am not a visual zombie now, or a visual zombie every Tuesday, Thursday and Saturday?
What is the advantage of having "real" visual experiences if they make no objective difference and no subjective difference either?
On Wed, May 24, 2023, 9:56 PM Stathis Papaioannou <stat...@gmail.com> wrote:
> I think the concept of a partial zombie makes consciousness nonsensical.
It borders on the nonsensical, but between the two bad alternatives I find the idea of an RNG instantiating human consciousness somewhat less sensical than the idea of partial zombies.
> How would I know that I am not a visual zombie now, or a visual zombie every Tuesday, Thursday and Saturday?
Here, we have to be careful what we mean by "I". Our own brains have various spheres of consciousness, as demonstrated by the Wada test: we can shut down one hemisphere of the brain and lose partial awareness and functionality, such as the ability to form words, and yet one remains conscious. I think being a partial zombie would be like that, having one's sphere of awareness shrink.
> What is the advantage of having "real" visual experiences if they make no objective difference and no subjective difference either?
The advantage of real computations (which imply having real awareness/experiences) is that real computations are more reliable than RNGs for producing intelligent behavioral responses.
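A back-of-the-envelope Python illustration of why the RNG is so unreliable (the bit count is an invented, conservative figure, not a measurement): if the black box must reproduce n bits of interface traffic, a uniform RNG matches them all with probability 2^-n.

    import math

    # Invented figure: suppose the replaced region must reproduce a
    # billion bits of interface traffic over the five minutes.
    n_bits = 10**9

    # log10 of the probability 2**-n that a fair-coin RNG gets them all right.
    log10_p = -n_bits * math.log10(2)
    print(log10_p)   # about -3.0e8, i.e. p = 10**(-300 million)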
On Thu, 25 May 2023 at 13:59, Jason Resch <jason...@gmail.com> wrote:
> It borders on the nonsensical, but between the two bad alternatives I find the idea of an RNG instantiating human consciousness somewhat less sensical than the idea of partial zombies.
If consciousness persists no matter what the brain is replaced with, as long as the output remains the same, this is consistent with the idea that consciousness does not reside in a particular substance (even a magical substance) or in a particular process. This is a strange idea, but it is akin to the existence of platonic objects. The number three can be implemented by arranging three objects in a row, but it does not depend on those three objects unless it is being used for a particular purpose, such as three beads on an abacus.
>> How would I know that I am not a visual zombie now, or a visual zombie every Tuesday, Thursday and Saturday?
> Here, we have to be careful what we mean by "I". [...] I think being a partial zombie would be like that, having one's sphere of awareness shrink.
But the subject's sphere of awareness would not shrink in the thought experiment, since by assumption their behaviour stays the same, while if their sphere of awareness shrank they would notice that something was different and say so.
>> What is the advantage of having "real" visual experiences if they make no objective difference and no subjective difference either?
> The advantage of real computations (which imply having real awareness/experiences) is that real computations are more reliable than RNGs for producing intelligent behavioral responses.
Yes, so an RNG would be a bad design choice. But the point remains that if the output of the system remains the same, the consciousness remains the same, regardless of how the system functions. The reasonable-sounding belief that the consciousness somehow resides in the brain, in particular biochemical reactions, or even in electronic circuits simulating the brain, is wrong.
--
Stathis Papaioannou
On 5/24/2023 9:29 PM, Stathis Papaioannou wrote:
> But the subject's sphere of awareness would not shrink in the thought experiment, since by assumption their behaviour stays the same, while if their sphere of awareness shrank they would notice that something was different and say so.
Why do you think they would notice? Color blind people don't notice they are color blind...until somebody tells them about it and even then they don't "notice" it.
> Can I ask you what you believe would happen to the consciousness of the individual if you replaced the right hemisphere of the brain with a black box that interfaced identically with the left hemisphere, but internal to this black box is nothing but a random number generator, and it is only by fantastic luck that the output of the RNG happens to have caused its interfacing with the left hemisphere to remain unchanged?
On Tue, May 23, 2023, 3:50 PM Terren Suydam <terren...@gmail.com> wrote:
On Tue, May 23, 2023 at 7:09 AM Jason Resch <jason...@gmail.com> wrote:
>> As I see this thread, Terren and Stathis are both talking past each other. [...] I think a chief reason you have been talking past each other is that you are each operating on different assumed levels.
> I appreciate the callout, but it is necessary to talk at both the micro and the macro for this discussion. [...] Ah, I see your point now. That's valid, thanks for raising it and let me clarify.
I appreciate that, thank you.
> Bringing this back to LLMs, it's clear to me that LLMs do not have phenomenal experience [...] to realize a conscious process, whether in a brain or in silicon, the physical dynamics of that information process must also be continuous, which is to say, recursive.
I am quite partial to the idea that recursion or loops may be necessary to realize consciousness, or at least certain types of consciousness, such as self-consciousness (which I take to be models which include the self as an actor within the environment), but I also believe that loops may exist in non-obvious forms, and even extend beyond the physical domain of a creature's body or the confines of a physical computer. Allow me to explain.
Consider something like the robot arm I described that is programmed to catch a ball. Now consider that at each time step, a process is run that receives the current coordinates of the robot arm and the ball. This is not technically a loop, and not really recursive; it may be implemented by a timer that fires off the process, say, 1000 times a second. But if you consider the pair of the robot arm and the environment together, a recursive loop emerges, in the sense that the action decided and executed in the previous time step affects the sensory input in subsequent time steps. If the robot had enough sophistication to have a language function and we asked it, "what caused your arm to move?", the only answer it could give would have to be a reflexive one: a process within me caused my arm to move.
So we get self-reference, and recursion through environmental interaction. Now let's consider the LLM in this context: each invocation is indeed a feed-forward, independent process, but through this back-and-forth flow of the LLM interacting with the user, a recursive, continuous loop of processing emerges. The LLM could be said to perceive an ever-growing thread of conversation, with new words constantly being appended to its perception window. Moreover, some of these words would be external inputs, while others are internal outputs. If you ask the LLM where those internally generated outputs came from, again the only valid answer it could supply would have to be reflexive.
Reflexivity is, I think, the essence of self-awareness, and though a single LLM invocation cannot do this, an LLM that generates output and then is subsequently asked about the source of this output must turn its attention inward, towards itself. This is something like how Dennett describes a zombie that, asked to look inward, bootstraps itself into consciousness.
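A minimal Python sketch of both loops Jason describes (every function is a hypothetical stand-in, not a real robot or model API): a feed-forward policy closes a loop through the environment, and a feed-forward model call closes a loop through a growing transcript.

    # Loop 1: a stateless policy coupled to an environment. The action
    # taken at step t shapes the observation at step t+1.
    def policy(observation):
        return -0.5 * observation          # feed-forward, no internal state

    def environment(state, action):
        return state + action              # the world carries the memory

    state = 10.0
    for _ in range(5):
        state = environment(state, policy(state))

    # Loop 2: a stateless "model call" whose outputs are appended to the
    # transcript and so become part of its own next input.
    def model(context):                    # hypothetical stand-in for an LLM
        return "model: I said this because of [" + context[-1] + "]"

    transcript = ["user: hello"]
    for turn in range(3):
        transcript.append(model(transcript))
        transcript.append("user: message %d" % turn)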
> The behavior or output of the brain in one moment is the input to the brain in the next moment. But LLMs do not exhibit this. [...] So to me, the suggestion that chatGPT could one day be used to functionally replace some subset of the brain that is responsible for mediating conscious experience in a human just strikes me as absurd.
One aspect of artificial neural networks that is worth considering here is that they are (by the "universal approximation theorem") completely general and universal in the functions they can learn and model. That is, any logical circuit which can be computed in finite time can, in principle, be learned and implemented by a neural network. This gives me some pause when I consider what things neural networks will never be able to do.
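A toy numpy demo of the flavor of that theorem (the network and target are invented for the example, and this uses fixed random features rather than trained hidden weights): a single hidden layer of tanh units, with only the output weights fit by least squares, already approximates a smooth function closely.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 200)[:, None]
    target = np.sin(2 * np.pi * x).ravel()

    # One hidden layer: 50 random tanh units (hidden weights fixed).
    W = rng.normal(scale=10.0, size=(1, 50))
    b = rng.normal(scale=5.0, size=50)
    H = np.tanh(x @ W + b)

    # Fit only the linear output layer by least squares.
    w_out, *_ = np.linalg.lstsq(H, target, rcond=None)
    print("max abs error:", np.abs(H @ w_out - target).max())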
> As an aside, the folks at Qualia Computing have put forth a theory that symmetry in the state space isomorphic to ongoing experience is what corresponds to positive valence, and anti-symmetry to negative valence.
But is there not much more to consciousness than these two binary states? Is the state space sufficiently large in their theory to account for the seemingly infinite possible diversity of conscious experience?
> It's a very interesting argument, but one is still forced to leap from a mathematical concept to a subjective feeling. Regardless, it's the most sophisticated attempt to reconcile the hard problem that I've come across. I've since come around to the idealist stance that reality is fundamentally consciousness, and that the physical is a manifestation of that consciousness, like in a dream.
I agree. Or at least I would say consciousness is more fundamental than the physical universe. It might then be more appropriate to say my position is a kind of neutral monism, where platonically existing information/computation is the glue that relates consciousness to physics and explains why we perceive an ordered world with apparent laws. I explain this in much more detail here:
It has its own "hard problem", which is explaining why the world appears so orderly.Yes, the "hard problem of matter" as some call it. I agree this problem is much more solvable than the hard problem of consciousness.But if you don't get too hung up on that, it's not as clear that artificial consciousness is possible. It might be! it may even be that efforts like the above to explain how you get it from bit are relevant to idealist explanations of physical reality. But the challenge with idealism is that the explanations that are on offer sound more like mythology and metaphor than science. I should note that Bernardo KastrupI will have to look into him.
> has some interesting ideas on idealism, and he approaches it in a way that is totally devoid of woo. That said, one really intriguing set of evidence in favor of idealism is near-death-experience (NDE) testimony, which is pretty remarkable if one actually studies it.
It is indeed.
Jason
On Tue, May 23, 2023, 4:14 PM Terren Suydam <terren...@gmail.com> wrote:
> This still runs into the valence problem though. Why does some "knowledge" correspond with a positive feeling and other knowledge with a negative feeling?
That is a great question, though I'm not sure it's fundamentally insoluble within a model where every conscious state is a particular state of knowledge.
I would propose that having positive and negative experiences, i.e. pain or pleasure, requires knowledge states with a certain minimum degree of sophistication. For example: pain being associated with knowledge states such as "I don't like this, this is bad, I'm in pain, I want to change my situation"; pleasure being associated with knowledge states such as "This is good for me, I could use more of this, I don't want this to end." Such knowledge states require a degree of reflexive awareness: to have a notion of a self, where some outcomes may be either positive or negative to that self, and perhaps some notion of time, or sufficient agency to be able to change one's situation. Some have argued that plants can't feel pain because there's little they can do to change their situation (though I'm agnostic on this).
> I'm not talking about the functional accounts of positive and negative experiences. I'm talking about phenomenology.
The functional aspect of it is not irrelevant, but to focus only on that is to sweep the feeling under the rug. So many dialogues on this topic basically terminate here, where it's just a clash of beliefs about the relative importance of consciousness and phenomenology as the mediator of all experience and knowledge.

You raise important questions which no complete theory of consciousness should ignore. I think one reason things break down here is that there's such incredible complexity behind and underlying the states of consciousness we humans perceive, and no easy way to communicate all the salient properties of those experiences.

Jason
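To make Jason's ball-catching example above concrete, here is a minimal Python sketch (all names are hypothetical, chosen purely for illustration) of the weaker claim: any system that reliably catches a ball must carry a state encoding the ball's position and trajectory, and its behavior is a counterfactual function of that state.

from dataclasses import dataclass

@dataclass
class BallEstimate:
    x: float   # horizontal position (m)
    y: float   # height above the catch point (m)
    vx: float  # horizontal velocity (m/s)
    vy: float  # vertical velocity (m/s)

def predict_landing_x(ball: BallEstimate, g: float = 9.81) -> float:
    # Solve y + vy*t - (g/2)*t^2 = 0 for the positive root t,
    # then extrapolate the horizontal position to that time.
    t = (ball.vy + (ball.vy**2 + 2 * g * ball.y) ** 0.5) / g
    return ball.x + ball.vx * t

# The system "knows" where the ball is in the minimal, counterfactual
# sense: change BallEstimate and the arm is sent somewhere else.
arm_target = predict_landing_x(BallEstimate(x=0.0, y=2.0, vx=3.0, vy=1.0))

Nothing here settles whether such a state amounts to experience; it only illustrates the logical point that the behavior requires the knowledge state.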
The analogous case in Chalmers’ experiment is that the visual qualia are altered by the replacement process, the subject notices, but he continues to say that everything is fine, because the inputs to his language centres etc. are the same. But what part of the brain does the noticing, the trying to speak, the experience of horror at helplessly observing oneself say that everything is fine?
There isn’t a special part of the brain that runs conscious subroutines disconnected from the outputs.
To take it one step further, if chatGPT's next iteration of training included the millions of conversations humans have had with it, you could see a self-model become instantiated in a more permanent way. But again, at the end of its training, the state would be frozen. That's the sticking point for me.
The behavior or output of the brain in one moment is the input to the brain in the next moment.

But LLMs do not exhibit this. They have a training phase, and then they respond to discrete queries. As far as I know, once it's out of the training phase, there is no feedback outside of the flow of a single conversation. None of that seems isomorphic to the kind of process that could support a flow of experience, whatever experience would mean for an LLM. So to me, the suggestion that chatGPT could one day be used to functionally replace some subset of the brain that is responsible for mediating conscious experience in a human just strikes me as absurd.

One aspect of artificial neural networks that is worth considering here is that they are (by the "universal approximation theorem") completely general and universal in the functions they can learn and model. That is, any logical circuit which can be computed in finite time can, in principle, be learned and implemented by a neural network. This gives me some pause when I consider what things neural networks will never be able to do.

Yup, you made that point a couple months ago here, and it stuck with me - that it's possible the way LLMs are outperforming expectations is that they're literally modelling minds and using that to generate responses. I'm not sure that's possible, because I'm not clear on whether the neural networks used in LLMs qualify as being general/universal.
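The structural difference being described can be made concrete with a rough Python sketch (frozen_model and brain_step are hypothetical stand-ins, not any real API): an LLM whose weights are fixed after training and whose only state is the transcript of one conversation, versus a system whose output at each moment is folded back into its input at the next.

def llm_session(frozen_model, user_turns):
    # After training, the weights never change; the only mutable state
    # is the growing transcript, discarded when the session ends.
    transcript = []
    for turn in user_turns:
        transcript.append(turn)
        reply = frozen_model(transcript)  # a pure function of the transcript
        transcript.append(reply)
    return transcript

def brain_like_loop(brain_step, state, stimuli):
    # The loop described above: each moment's output re-enters as part
    # of the next moment's input, so the state is never frozen.
    output = None
    for stimulus in stimuli:
        output, state = brain_step(state, stimulus, output)
    return state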
Conversely, if you stay in the high-level realm of consciousness ideas, well then you must face the problem of other minds. You know you are conscious, but you cannot prove or disprove the consciousness of others, at least not without first defining a theory of consciousness and explaining why some minds satisfy the definition and others do not. Until you present a theory of consciousness, this conversation is, I am afraid, doomed to continue in this circle forever. This same conversation and outcome played out over the past few months on the extropy-chat list, although with different actors, so I can say with some confidence where some topics are likely to lead.

In my experience with conversations like this, you usually have people on one side who take consciousness seriously as the only thing that is actually undeniable, and you have people who'd rather not talk about it, hand-wave it away, or outright deny it. That's the talking-past that usually happens, and that's what's happening here.

Do you have a theory for why neurology supports consciousness but silicon circuitry cannot?

I'm agnostic about this, but that's because I no longer assume physicalism. For me, the hard problem signals that physicalism is impossible. I've argued on this list many times as a physicalist, as one who believes in the possibility of artificial consciousness, uploading, etc. I've argued that there is something it is like to be a cybernetic system. But at the end of it all, I just couldn't overcome the problem of aesthetic valence. As an aside, the folks at Qualia Computing have put forth a theory that symmetry in the state space isomorphic to ongoing experience is what corresponds to positive valence, and anti-symmetry to negative valence.
But is there not much more to consciousness than these two binary states? Is the state space sufficiently large in their theory to account for the seemingly infinite possible diversity of conscious experience?

They're not saying the state is binary. I don't even think they're saying symmetry is binary. They're deriving the property of symmetry (presumably through some kind of mathematical transform) and hypothesizing that aesthetic valence corresponds to the outcome of that transform. I also think it's possible for symmetry and anti-symmetry to be present at the same time; the mathematical object isomorphic to experience is a high-dimensional object and probably has nearly infinite ways of being symmetrical and anti-symmetrical.
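As a toy illustration only (my own construction, not Qualia Computing's actual formalism), here is one trivial way to "derive a symmetry property" from a state vector in Python: score a state by how close it is to its own mirror image. The score is a continuum rather than a binary, and a high-dimensional state can mix symmetric and anti-symmetric components, echoing the point above.

import numpy as np

def mirror_symmetry(state: np.ndarray) -> float:
    # +1 if the state equals its own reversal (perfectly symmetric),
    # -1 if it equals its negated reversal (perfectly anti-symmetric),
    # with a continuum of mixed values in between.
    mirrored = state[::-1]
    return float(np.dot(state, mirrored) / np.dot(state, state))

print(mirror_symmetry(np.array([1.0, 2.0, 3.0, 2.0, 1.0])))    # 1.0
print(mirror_symmetry(np.array([1.0, 2.0, 0.0, -2.0, -1.0])))  # -1.0
print(mirror_symmetry(np.array([1.0, 2.0, 0.0, 0.0, 1.0])))    # ~0.33, mixed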
It's a very interesting argument, but one is still forced to leap from a mathematical concept to a subjective feeling. Regardless, it's the most sophisticated attempt to reconcile the hard problem that I've come across.

I've since come around to the idealist stance that reality is fundamentally consciousness, and that the physical is a manifestation of that consciousness, like in a dream.

I agree. Or at least I would say consciousness is more fundamental than the physical universe. It might then be more appropriate to say my position is a kind of neutral monism, where platonically existing information/computation is the glue that relates consciousness to physics and explains why we perceive an ordered world with apparent laws. I explain this in much more detail here:

I assume that's inspired by Bruno's ideas?
I miss that guy. I still see him on FB from time to time.
He was super influential on me too. Probably the single smartest person I ever "met".
It has its own "hard problem", which is explaining why the world appears so orderly.

Yes, the "hard problem of matter" as some call it. I agree this problem is much more solvable than the hard problem of consciousness.

But if you don't get too hung up on that, it's not as clear that artificial consciousness is possible. It might be! It may even be that efforts like the above to explain how you get it from bit are relevant to idealist explanations of physical reality. But the challenge with idealism is that the explanations on offer sound more like mythology and metaphor than science. I should note that Bernardo Kastrup

I will have to look into him.

I take him with a grain of salt - he's fairly combative and dismissive of people who are physicalists. But his ideas are super interesting. I don't know if he's the first to take an analytical approach to idealism, but he's definitely the first to become well known for it.
And the idea that plants cannot influence their environments is patently false. There's an emerging recognition of just how much plants do respond to environmental stimuli. There's a symbiotic relationship between plants and fungal networks in the soil, and these networks have been shown to mediate communication, where trees will signal threats and direct resources to other trees that need them. I can try to dig up some references on that.
“When a plant is wounded, its body immediately kicks into protection mode. It releases a bouquet of volatile chemicals, which in some cases have been shown to induce neighboring plants to pre-emptively step up their own chemical defenses and in other cases to lure in predators of the beasts that may be causing the damage to the plants. Inside the plant, repair systems are engaged and defenses are mounted, the molecular details of which scientists are still working out, but which involve signaling molecules coursing through the body to rally the cellular troops, even the enlisting of the genome itself, which begins churning out defense-related proteins ... If you think about it, though, why would we expect any organism to lie down and die for our dinner? Organisms have evolved to do everything in their power to avoid being extinguished. How long would any lineage be likely to last if its members effectively didn't care if you killed them?”

“The research of Ariel Novoplansky, from the Ben-Gurion University of the Negev, has demonstrated that plants can communicate with each other in sophisticated ways. Novoplansky's experiment involved putting plants in a series of adjacent pots, with each plant having one root in its neighbor's pot. He then subjected one of the plants to drought. What he discovered was that this information was passed down the series of plant pots through the roots, as revealed by the fact that all of the plants closed their pores to reduce water loss. Closing of pores is generally the action of thirsty plants, but in this case it was the action of perfectly well-watered plants responding to the danger signals of a neighbor several pots along. The plants were even able to retain the information, which prevented them from dying in the drought that Novoplansky subjected the plants to in a later stage of the experiment.”
“By injecting trees with isotope tracers, Simard has shown that there is beneath our feet a complex web of communication between trees, which she has dubbed the “Wood-Wide Web.” Communication happens via mycorrhiza structures, which connect trees to other trees via fungi. The trees and the fungi enjoy a quid pro quo relationship: the trees deliver carbon to the fungi and the fungi reciprocate by delivering nutrients to the trees. A dense web of connections is formed in this way, with the busiest trees at the center connected to hundreds of other trees.”
“Many vegans and vegetarians feel that it is wrong to kill or exploit sentient creatures. But if plants also have sentience, what is there left to eat? These are very hard ethical questions; it may turn out that some killing of sentient life is inevitable if we want to survive ourselves. But accepting the consciousness of plant life means at the very least accepting that plants have genuine interests, interests that deserve our respect and consideration.”
-- Philip Goff in "Galileo’s Error" (2019)
“He speaks with plant scientists from around the world whose research has led them to conclude that plants can communicate, learn, and even remember. Some even go as far as to say plants are intelligent.”
“But in principle, there is no doubt that plants are processing and sharing information, potentially in an incredibly complex way.”
> Have you ever wondered what delineates the mind from its environment?
> Why it is that you are not aware of my thoughts but you see me as an object that only affects your senses, even though we could represent the whole earth as one big functional system?
> I don't have a good answer to this question
> The randomly generated outputs from the RNG would seem like environmental noise/sensation coming from the outside, rather than a recursively linked and connected loop of processing
> But here (almost by magic), the RNG outputs have forced the physical behavior of the remaining hemisphere to remain the same
Arnold Zuboff has written a thought experiment to this effect.
> But if a theory cannot acknowledge a difference in consciousness between an electron and a dreaming brain inside a skull, then the theory is (in my opinion) operationally useless.
And yet, how the outputs are computed is still important to the rest of the brain (in terms of defining the computational state it is in).

Think of it this way: a multiply function that takes in two inputs (2,2) and returns "4", and an add function that takes in two inputs (2,2) and returns "4", have the same output for the same input, at least in this case, but the functional meaning is very different. The computations that occurred inside each function carry different meanings, even though the same output comes out. The internal functional implementation, then, defines a different computational state for any later process that receives this output of "4".

Jason
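Jason's example is easy to make concrete in a few lines of Python (purely illustrative): the two functions agree at (2, 2), so a downstream consumer of the "4" cannot tell them apart from that one output, yet counterfactually they are different computations.

def add(a: int, b: int) -> int:
    return a + b   # addition

def mul(a: int, b: int) -> int:
    return a * b   # multiplication

assert add(2, 2) == mul(2, 2) == 4   # identical output at this input...
assert add(3, 3) != mul(3, 3)        # ...but different functions (6 vs. 9)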
> chatGPT was able to give the derivation of the moment of inertia of a sphere, but was unable to derive this in a much simpler way
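The post doesn't say which simpler derivation was meant; one common shortcut (an assumption on my part) avoids the usual disk-by-disk integration by exploiting spherical symmetry. In LaTeX, for a uniform solid sphere of mass $M$, radius $R$, and density $\rho$:

% Sum the moments about the three axes and use x^2 + y^2 + z^2 = r^2:
\begin{align}
I_x + I_y + I_z &= \int (y^2+z^2)\,dm + \int (z^2+x^2)\,dm + \int (x^2+y^2)\,dm \\
                &= 2\int r^2\,dm
                 = 2\int_0^R r^2 \,\rho\, 4\pi r^2\,dr
                 = \tfrac{8}{5}\pi\rho R^5 .
\end{align}
% By symmetry I_x = I_y = I_z = I, and M = (4/3)\pi\rho R^3, so:
\[
3I = \tfrac{8}{5}\pi\rho R^5 = \tfrac{6}{5} M R^2
\quad\Longrightarrow\quad
I = \tfrac{2}{5} M R^2 .
\]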