Some questions about the "Mind At Large" idea that I find confusing...

1. Do objects exist when no one (human or other animal) is perceiving them? If not, how does the mind make it look as though they exist?
2. Why is the brain (or life?) an image of dissociated consciousness instead of being a non-conscious object within consciousness? In other words why isn't the brain merely an experience within the mind in the same way as non-living things are?
3. Why does the mind split into many separate ones instead of being only one mind?
On Saturday, 17 September 2016 04:50:35 UTC+1, Jimi wrote:
Some questions about the "Mind At Large" idea that I find confusing...
They would not truly or metaphysically exist in either case. They would have only a dependent or relative existence.
Is their existence dependent on the observer?
Not sure I understand the question. Everything would be an experience in the mind except for what is truly real.
I think Bernardo is saying that life is not only an experience in consciousness, but that life is also an image of dissociated consciousness, namely that there is something it is like to be a living being. What I don't get is this: if this is not true of non-living things (there is nothing it is like to be a non-living thing), then why would it be true of living things?
1. Yes. Whether you're a materialist or an absolute idealist, there's a cause behind your seeing a physical object, and that cause exists outside your mind.
Okay, so why aren't we zombies or puppets being controlled by MAL? For that to happen, MAL would need to be dreaming up a world, like in old-fashioned Berkeleyan idealism, containing a bunch of lifeless human-shaped objects that act like humans yet don't have any first-person point of view. This doesn't make sense, because the idea that there are objects that go around and do stuff is a product of our first-person human perspective, not something that MAL shares.
The important question as I see it is "Do you believe in the basic tenets of materialism, and why?".
If you do then materialism will obviously seem superior, and you will probably have a hard time making sense of anything falling outside of that.
That's just the nature of accepting any thought system.
Materialism makes sense to me; I don't know how to look at it so that it feels as if it doesn't make sense. Can you give a few examples of what doesn't make sense to you in materialism? The phrase "materialism doesn't make sense" is commonly used here, but how exactly doesn't it make sense? Please explain.
Under the assumption of materialism we have all this technology and science. If it were nonsense, why were we able to achieve that success? The results show clearly that it makes sense to most of the world.
To me it seems you have only consciousness to play with, and you are creating big things out of it which grow beyond consciousness itself (since I know only my own consciousness).
What if it is not a thing that exists on its own? Do you really consider that possibility, or does believing in something non-falsifiable make you feel better?
Why don't I feel conscious when I'm born, only developing consciousness later? Doesn't this show it's dependent on matter?
Why do you think that what is called non-consciousness is also consciousness, only non-reflective (or obfuscated)? What's your proof?
Regarding the acrobatics of turning non-consciousness into consciousness: why don't you think that computers are also conscious but not self-reflective, or have obfuscated consciousness, given that according to you they are also dissociates of MAL?
How does your position differ from belief in God?
Can idealism be constructed so as not to entail God?
Can you give falsifiable examples regarding the reality of idealism? That is, experiments that, given a certain result, would render idealism false.
1. The conclusion that consciousness is not identical to a physical process only makes sense if you commit a masked man fallacy. For example, it only makes sense to say that the morning star and the evening star are separate entities if you don't understand that they are both Venus. We don't necessarily get full understanding of what consciousness is just from knowing what it's like to be conscious.
2. When we experience Paris, our brain receives information about Paris in the form of certain neurological processes. That information is being stored in the brain as thoughts about Paris. When we have a thought about Paris, the brain simply reproduces those processes. So certainly thoughts can exist if materialism is true.
But the neurological processes are not like a code that needs to be interpreted. If materialism is true, "Paris" is just what we call the neurological processes that are produced when we experience Paris.
Hm, well. I think you're talking about symbol grounding. How do you connect a symbol, an arbitrary physical pattern, to its real-world referent? Of course you can stick a camera in front of the symbol system, add some visual recognition software, and lo! the symbol for 'teapot' is activated when you put a teapot in front of the camera. But again, the symbol system is a set of arbitrary physical patterns, so the fact that a symbol can be 'activated' by a stimulus (i.e. some physical changes happened) doesn't give it any meaning unless you interpret it as having such. All you have is a rather fragile causal link between the presence of a teapot and the activation of the symbol. If a freak cosmic ray struck the machine running the symbol system and activated the 'teapot' symbol, you wouldn't say that the machine is 'thinking about teapots', because you're using your human judgement to interpret the symbol system sensibly. You're interpreting it so that only some causal connections are meaningful. So causal connections cannot be sufficient to give meaning...
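The thought experiment above can be made concrete with a toy sketch (all names here, like `SymbolSystem` and the `"teapot"` symbol, are invented for illustration; this is a minimal model of the argument, not anyone's actual proposal):

```python
# A toy "symbol system": symbols are just arbitrary bits of physical state.
# The labels ('teapot', 'stop_sign') are attached by us from the outside;
# nothing in the machine's state itself carries meaning.

class SymbolSystem:
    def __init__(self, symbols):
        # Every symbol starts out inactive.
        self.state = {name: False for name in symbols}

    def activate(self, name):
        self.state[name] = True


system_a = SymbolSystem(["teapot", "stop_sign"])
system_b = SymbolSystem(["teapot", "stop_sign"])

# System A: a (pretend) camera-plus-recognizer activates 'teapot'
# because an actual teapot is in front of the camera.
def camera_recognizer(system, detected_object):
    if detected_object == "teapot":
        system.activate("teapot")

camera_recognizer(system_a, "teapot")

# System B: a freak cosmic ray flips the very same bit, with no teapot anywhere.
def cosmic_ray_strike(system, name):
    system.activate(name)

cosmic_ray_strike(system_b, "teapot")

# The two machines end up in physically identical states. Any difference in
# "meaning" lies in how we interpret their causal histories, not in the
# states themselves.
print(system_a.state == system_b.state)  # True
```

The point of the sketch is that the two activation events are indistinguishable from inside the system; only an external interpreter, applying human judgement about which causal chains count, can say that one is "about teapots" and the other is noise.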
This is all just a way of re-hashing John Searle's Chinese Room argument, which is meant to illustrate how symbol systems are meaningless in intrinsic terms, unlike human thought. The neural patterns in the human brain can be read as a very complex symbol system by slicing up the state space appropriately. John Searle argues that this shows the human brain has a special, unexplained capability to assign meaning to symbols. Alex Rosenberg argues that neural patterns, being symbols, cannot have any intrinsic meaning to them, which means our belief that we have meaningful thoughts must be a delusion. I am not actually sure what an idealist interpretation of meaning would say, but since idealism takes thought as an ontological primitive, it doesn't look like so much of a problem.
On Wednesday, 28 September 2016 22:25:23 UTC+1, Jimi wrote:
But the neurological processes are not like a code that needs to be interpreted. If materialism is true, "Paris" is just what we call the neurological processes that are produced when we experience Paris.
It still represents visual information of that teapot.
If qualia were not physical processes, you would expect that they wouldn't require 1/90 of a second in order to exist.
“A more general version of this question is this: How can one clump of stuff anywhere in the universe be about some other clump of stuff anywhere else in the universe—right next to it or 100 million light-years away?
…Let’s suppose that the Paris neurons are about Paris the same way red octagons are about stopping. This is the first step down a slippery slope, a regress into total confusion. If the Paris neurons are about Paris the same way a red octagon is about stopping, then there has to be something in the brain that interprets the Paris neurons as being about Paris. After all, that’s how the stop sign is about stopping. It gets interpreted by us in a certain way. The difference is that in the case of the Paris neurons, the interpreter can only be another part of the brain…
What we need to get off the regress is some set of neurons that is about some stuff outside the brain without being interpreted—by anyone or anything else (including any other part of the brain)—as being about that stuff outside the brain. What we need is a clump of matter, in this case the Paris neurons, that by the very arrangement of its synapses points at, indicates, singles out, picks out, identifies (and here we just start piling up more and more synonyms for “being about”) another clump of matter outside the brain. But there is no such physical stuff.
Physics has ruled out the existence of clumps of matter of the required sort…
…What you absolutely cannot be wrong about is that your conscious thought was about something. Even having a wildly wrong thought about something requires that the thought be about something.
It’s this last notion that introspection conveys that science has to deny. Thinking about things can’t happen at all…When consciousness convinces you that you, or your mind, or your brain has thoughts about things, it is wrong.”
The "problem of intentionality" has the erroneous premise that thoughts have meaning about other things. When we say that a thought or a perception has meaning about something, what we are really doing is relating sense data to an act of conceptualizing those sense data. Thoughts only have meaning in the sense that they relate to concepts that are produced by organizing the sense data.