Subjective states can somehow be extracted from brains via a computer.
The ingenious folks who were miraculously able to extract an image from the brain, which we saw recently:
http://gizmodo.com/5843117/scientists-reconstruct-video-clips-from-brain-activity
somehow did it entirely through computation. How was that possible?
There are at least two imaginable theories, neither of which I can explain step by step:
"The conscious visual representation is likely to be distributed over more than one area of the cerebral cortex and possibly over certain subcortical structures as well. We have argued (Crick and Koch, 1995a) that in primates, contrary to most received opinion, it is not located in cortical area V1 (also called the striate cortex or area 17). Some of the experimental evidence in support of this hypothesis is outlined below. This is not to say that what goes on in V1 is not important, and indeed may be crucial, for most forms of vivid visual awareness. What we suggest is that the neural activity there is not directly correlated with what is seen."
"Here we present a new motion-energy [10,
11] encoding model that largely overcomes this limitation.
The model describes fast visual information and slow hemodynamics
by separate components. We recorded BOLD
signals in occipitotemporal visual cortex of human subjects
who watched natural movies and fit the model separately
to individual voxels." https://sites.google.com/site/gallantlabucb/publications/nishimoto-et-al-2011
1) Computers are themselves conscious (which can neither be proven nor disproven)
and are therefore capable of perception.
or
2) The flesh of the brain is simultaneously objective and subjective.
Thus an ordinary (by which I mean not conscious) computer can work on it
objectively yet produce a subjective image by some manipulation of the flesh
of the brain. One perhaps might call this "milking" of the brain.
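For what it's worth, the "entirely through computation" step is less mysterious than it sounds. The heart of the Nishimoto et al. approach is an encoding model fit per voxel: stimulus features in, predicted BOLD out. Here is a toy sketch of that idea — all data are simulated and the shapes, noise level, and single ridge penalty are illustrative assumptions, not the Gallant lab's actual pipeline:

```python
import numpy as np

# Simulated stand-ins: T time points, F stimulus features, V voxels.
rng = np.random.default_rng(0)
T, F, V = 200, 12, 6

X = rng.standard_normal((T, F))          # e.g. motion-energy features
true_w = rng.standard_normal((F, V))     # unknown per-voxel weights
Y = X @ true_w + 0.1 * rng.standard_normal((T, V))  # simulated BOLD signal

def fit_voxelwise_ridge(X, Y, alpha=1.0):
    """Closed-form ridge regression; one weight column per voxel."""
    n_features = X.shape[1]
    # (X^T X + alpha I)^-1 X^T Y solves every voxel at once.
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ Y)

W = fit_voxelwise_ridge(X, Y)
pred = X @ W
# Each voxel's predicted time course should track its measured one.
r = [np.corrcoef(pred[:, v], Y[:, v])[0, 1] for v in range(V)]
```

The real model additionally separates fast visual features from the slow hemodynamic response (e.g. with lagged feature copies); the point here is only that fitting it is ordinary regression, not anything that requires the computer itself to perceive.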
----- Receiving the following content -----
From: Craig Weinberg
Receiver: everything-list
Time: 2013-01-05, 11:37:17
Subject: Re: Subjective states can be somehow extracted from brains via a computer
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To view this discussion on the web visit https://groups.google.com/d/msg/everything-list/-/Z_D4nNG0oGUJ.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.
Hi Craig Weinberg

Sorry, everybody, I was snookered into believing that they had really accomplished the impossible.
The killer argument against that is that the brain has no sync signals to generate the raster lines.
Hi Telmo Menezes
Well then, we have at least one vote supporting the results.
I remain sceptical because of the line sync issue.
The brain doesn't provide a raster line sync signal.
----- Receiving the following content -----
From: Telmo Menezes
Receiver: everything-list
Time: 2013-01-07, 09:33:30
Subject: Re: Re: Re: Subjective states can be somehow extracted from brains via a computer

Hi Roger,
On Mon, Jan 7, 2013 at 1:28 PM, Roger Clough <rcl...@verizon.net> wrote:
Hi Telmo Menezes
Well then, we have at least one vote supporting the results.

Scientific results are not supported or refuted by votes.
I remain sceptical because of the line sync issue.
The brain doesn't provide a raster line sync signal.

The sync signal is a requirement of a very specific technology to display video. Analog film does not have a sync signal. It still does sampling. Sampling is always necessary if you use a finite machine to record some visual representation of the world. If one believes the brain stores our memories (I know you don't) you have to believe that it samples perceptual information somehow. It will probably not be as neat and simple as a sync signal.

A trivial but important point: every movie is a representation of reality, not reality itself. It's just a set of symbols that represent the world as seen from a specific point of view, in the form of a matrix of discrete light intensity levels. So the mapping from symbols to visual representations is always present, no matter what technology you use. Again, the sync signal is just a detail of the implementation of one such technology.

The way the brain encodes images is surely very complex and convoluted. Why not? There wasn't ever any adaptive pressure for the encoding to be easily translated from the outputs of an MRI machine. If we require all contact between males and females to be done through MRI machines and wait a couple million years, maybe that will change. We might even get a sync signal, who knows?

Either you believe that the brain encodes images somehow, or you believe that the brain is an absurd mechanism. Why are the optic nerves connected to the brain? Why does the visual cortex fire in specific ways when shown specific images? Why can we tell from brain activity whether someone is nervous, asleep, solving a math problem or painting?
[Roger Clough], [rcl...@verizon.net]
1/7/2013
"Forever is a long time, especially near the end." - Woody AllenFrom: Telmo Menezes
----- Receiving the following content -----
Receiver: everything-list
Time: 2013-01-07, 06:19:33
Subject: Re: Re: Subjective states can be somehow extracted from brains viaacomputer
On Sun, Jan 6, 2013 at 8:55 PM, Roger Clough <rcl...@verizon.net> wrote:
Hi Craig Weinberg

Sorry, everybody, I was snookered into believing that they had really accomplished the impossible.

So you think this paper is fiction and the video is fabricated? Do people here know something I don't about the authors?
The hypothesis is that the brain has some encoding for images.
These images can come from the optic nerve, they could be stored in memory or they could be constructed by sophisticated cognitive processes related to creativity, pattern matching and so on. But if you believe that the brain's neural network is a computer responsible for our cognitive processes, the information must be stored there, physically, somehow.
It's horribly hard to decode what's going on in the brain.
These researchers thought of a clever shortcut. They expose people to a lot of images and record some measures of brain activity in the visual cortex. Then they use machine learning to match brain states to images. Of course it's probabilistic and noisy. But then they got a video that actually approximates the real images.
So there must be some way to decode brain activity into images.

The killer argument against that is that the brain has no sync signals to generate the raster lines.

Neither does reality, but we somehow manage to show a representation of it on tv, right?
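The "clever shortcut" described above can be made concrete. A minimal sketch of the decoding step, under the assumption that an encoding model has already been fit: score every clip in a "prior" library by how well its predicted brain response matches the observed one, then blend the best matches. All names, sizes, and data here are hypothetical; this shows the general idea, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(1)
F, V, N = 12, 6, 500            # features, voxels, clips in the prior library
W = rng.standard_normal((F, V))          # assume an already-fit encoding model
library = rng.standard_normal((N, F))    # feature vectors of candidate clips

# Observed brain response to a clip the subject actually watched (simulated).
true_clip = 42
observed = library[true_clip] @ W + 0.05 * rng.standard_normal(V)

# Under Gaussian noise, ranking by likelihood is ranking by squared
# prediction error between each clip's predicted response and the data.
errors = np.sum((library @ W - observed) ** 2, axis=1)
ranked = np.argsort(errors)

# The "reconstruction" is a blend of the best-matching prior clips,
# which is why the published videos look like superimposed footage.
top_k = ranked[:10]
reconstruction = library[top_k].mean(axis=0)
```

Note the design choice: nothing is ever converted pixel-by-pixel out of the brain signal; the decoder only asks which already-existing clips best explain the measurement, which is exactly the point both sides of this thread keep circling.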
On Monday, January 7, 2013 6:19:33 AM UTC-5, telmo_menezes wrote:

On Sun, Jan 6, 2013 at 8:55 PM, Roger Clough <rcl...@verizon.net> wrote:

Hi Craig Weinberg

Sorry, everybody, I was snookered into believing that they had really accomplished the impossible.

So you think this paper is fiction and the video is fabricated? Do people here know something I don't about the authors?
The paper doesn't claim that images from the brain have been decoded,
but the sensational headlines imply that is what they did.
The video isn't supposed to be anything but fabricated.
It's a muddle of YouTube videos superimposed upon each other according to a Bayesian probability reduction.
Did you think that the video was coming from a brain feed like a TV broadcast? It is certainly not that at all.
The hypothesis is that the brain has some encoding for images.
Where are the encoded images decoded into what we actually see?
These images can come from the optic nerve, they could be stored in memory or they could be constructed by sophisticated cognitive processes related to creativity, pattern matching and so on. But if you believe that the brain's neural network is a computer responsible for our cognitive processes, the information must be stored there, physically, somehow.
That is the assumption, but it is not necessarily a good one. The problem is that information is only understandable in the context of some form of awareness - an experience of being informed. A machine with no user can only produce different kinds of noise as there is nothing ultimately to discern the difference between a signal and a non-signal.
It's horribly hard to decode what's going on in the brain.
Yet every newborn baby learns to do it all by themselves, without any sign of any decoding theater.
These researchers thought of a clever shortcut. They expose people to a lot of images and record some measures of brain activity in the visual cortex. Then they use machine learning to match brain states to images. Of course it's probabilistic and noisy. But then they got a video that actually approximates the real images.
You might get the same result out of precisely mapping the movements of the eyes instead.
What they did may have absolutely nothing to do with how the brain encodes or experiences images, no more than your Google history can approximate the shape of your face.
So there must be some way to decode brain activity into images.

The killer argument against that is that the brain has no sync signals to generate the raster lines.

Neither does reality, but we somehow manage to show a representation of it on tv, right?
What human beings see on TV simulates one optical environment with another optical environment. You need to be a human being with a human visual system to be able to watch TV and mistake it for a representation of reality. Some household pets might be briefly fooled also, but mostly other species have no idea why we are staring at that flickering rectangle, or buzzing plastic sheet, or that large collection of liquid crystal flags. Representation is psychological, not material. The map is not the territory.
Hi Craig,

On Tue, Jan 8, 2013 at 12:41 AM, Craig Weinberg <whats...@gmail.com> wrote:
On Monday, January 7, 2013 6:19:33 AM UTC-5, telmo_menezes wrote:

On Sun, Jan 6, 2013 at 8:55 PM, Roger Clough <rcl...@verizon.net> wrote:

Hi Craig Weinberg

Sorry, everybody, I was snookered into believing that they had really accomplished the impossible.

So you think this paper is fiction and the video is fabricated? Do people here know something I don't about the authors?
The paper doesn't claim that images from the brain have been decoded,

Yes it does, right in the abstract: "To demonstrate the power of our approach, we also constructed a Bayesian decoder [8] by combining estimated encoding models with a sampled natural movie prior. The decoder provides remarkable reconstructions of the viewed movies."
but the sensational headlines imply that is what they did.

Starting with UC Berkeley itself:
The video isn't supposed to be anything but fabricated.

ALL videos are fabricated in that sense.
It's a muddle of YouTube videos superimposed upon each other according to a Bayesian probability reduction.

Yes, and the images you see on your computer screen are just a matrix of molecules artificially made to align in a certain way so that the light being emitted behind them arrives at your eyes in a way that resembles the light emitted by some real world scene that is meant to be represented.
Did you think that the video was coming from a brain feed like a TV broadcast? It is certainly not that at all.
Nice straw man + ad hominem you did there!
The hypothesis is that the brain has some encoding for images.
Where are the encoded images decoded into what we actually see?

In the computer that runs the Bayesian algorithm.
These images can come from the optic nerve, they could be stored in memory or they could be constructed by sophisticated cognitive processes related to creativity, pattern matching and so on. But if you believe that the brain's neural network is a computer responsible for our cognitive processes, the information must be stored there, physically, somehow.
That is the assumption, but it is not necessarily a good one. The problem is that information is only understandable in the context of some form of awareness - an experience of being informed. A machine with no user can only produce different kinds of noise as there is nothing ultimately to discern the difference between a signal and a non-signal.
Sure. That's why the algorithm has to be trained with known videos. So it can learn which brain activity correlates with what 3p accessible images we can all agree upon.
It's horribly hard to decode what's going on in the brain.
Yet every newborn baby learns to do it all by themselves, without any sign of any decoding theater.
Yes. The newborn baby comes with the genetic material that generates the optimal decoder.

These researchers thought of a clever shortcut. They expose people to a lot of images and record some measures of brain activity in the visual cortex. Then they use machine learning to match brain states to images. Of course it's probabilistic and noisy. But then they got a video that actually approximates the real images.
You might get the same result out of precisely mapping the movements of the eyes instead.

Maybe. That's not where they took the information from though. They took it from the visual cortex.
What they did may have absolutely nothing to do with how the brain encodes or experiences images, no more than your Google history can approximate the shape of your face.
Google history can only approximate the shape of my face if there is a correlation between the two. In which case my Google history is, in fact, also a description of the shape of my face.
So there must be some way to decode brain activity into images.

The killer argument against that is that the brain has no sync signals to generate the raster lines.

Neither does reality, but we somehow manage to show a representation of it on tv, right?
What human beings see on TV simulates one optical environment with another optical environment. You need to be a human being with a human visual system to be able to watch TV and mistake it for a representation of reality. Some household pets might be briefly fooled also, but mostly other species have no idea why we are staring at that flickering rectangle, or buzzing plastic sheet, or that large collection of liquid crystal flags. Representation is psychological, not material. The map is not the territory.
I agree. I never claimed this was an insight into 1p or anything to do with consciousness. Just that you can extract information from human brains, because that information is represented there somehow. But you're only going to get 3p information.
Hi Telmo Menezes
Presumably the brain works with analog, not digital, signals.
----- Receiving the following content -----
From: Craig Weinberg
Receiver: everything-list
Time: 2013-01-08, 09:23:56
Subject: Re: Re: Re: Re: Re: Subjective states can be somehow extracted from brains via a computer
On Monday, January 7, 2013 7:24:24 PM UTC-5, telmo_menezes wrote:

Hi Craig,

On Tue, Jan 8, 2013 at 12:41 AM, Craig Weinberg <whats...@gmail.com> wrote:

On Monday, January 7, 2013 6:19:33 AM UTC-5, telmo_menezes wrote:

On Sun, Jan 6, 2013 at 8:55 PM, Roger Clough <rcl...@verizon.net> wrote:

Hi Craig Weinberg

Sorry, everybody, I was snookered into believing that they had really accomplished the impossible.

So you think this paper is fiction and the video is fabricated? Do people here know something I don't about the authors?

The paper doesn't claim that images from the brain have been decoded,

Yes it does, right in the abstract: "To demonstrate the power of our approach, we also constructed a Bayesian decoder [8] by combining estimated encoding models with a sampled natural movie prior. The decoder provides remarkable reconstructions of the viewed movies."
The Bayesian decoder is not literally decoding the BOLD and fMRI patterns into images, no more than listing the ingredients of a bag of chips in alphabetical order turns potatoes into words.
The key is the 'sampled natural movie prior'. That means it is a figurative reconstruction.
They are giving you a choice of selecting one video from hundreds, then looking at the common patterns in several people's brains when they choose the same video. They are not decoding the patterns into videos. By 'reconstructions' they are not saying that they literally recreated any part of the visual experience, but rather that they were able to make a composite video from the videos that they used by plugging the Bayesian probability into the data sets. The videos that you see are YouTube videos superimposed, *not in any way* a decoded translation of neural correlates.
but the sensational headlines imply that is what they did.

Starting with UC Berkeley itself:
Of course. Does that surprise you? University PR is notoriously hyped. Exciting the public is the stuff that endowments are made of.
The video isn't supposed to be anything but fabricated.

ALL videos are fabricated in that sense.
Sure, but a video from a camera on the end of a wire in someone's esophagus is less of a fabrication than a collage of verbal descriptions about digestion.
See what I'm driving at? The images are images they got off the internet superimposed over each other - not out of someone's brain activity being interpreted by a computer. The only thing being interpreted or decoded is cross referenced statistics.
Try thinking about it this way. What would the video look like if they plugged the Bayesian decoder algorithm into the regions related to the memory of flavors? Show someone a picture of strawberries, and let's say you get a pattern in the olfactory-gustatory regions of the brain. Show someone else a bunch of pictures of tasty things, and lo and behold, through your statistical regression, you can match up pictures of strawberry candy, strawberry ice cream, etc. with the pictures of strawberries, pink milk, etc. You get a video of blurry pink stuff and proclaim that you have reconstructed the image of strawberry flavor. It's a neat bit of stage magic, but it has nothing at all to do with translating flavor into image.
No more than searching strawberries on Google gives routers and servers a taste of strawberry.
It's a muddle of YouTube videos superimposed upon each other according to a Bayesian probability reduction.

Yes, and the images you see on your computer screen are just a matrix of molecules artificially made to align in a certain way so that the light being emitted behind them arrives at your eyes in a way that resembles the light emitted by some real world scene that is meant to be represented.
Photography is a direct optical analog. The pixels on a computer screen are a digitized analog of photography.
The images 'reconstructed' are not analogs at all, they are wholly synthetic guesses which are reverse engineered purely from probability. What you see are not in fact images, but mechanically curated noise which remind us of images.
Did you think that the video was coming from a brain feed like a TV broadcast? It is certainly not that at all.
Nice straw man + ad hominem you did there!
Sorry, I wasn't trying to do either, although I admit it was condescending. I was trying to point out that it seems like you were saying that brain activity was decoded into visual pixels. I'm not clear really on what your understanding of it is.
The hypothesis is that the brain has some encoding for images.
Where are the encoded images decoded into what we actually see?

In the computer that runs the Bayesian algorithm.
I'm talking about where in the brain are the images that we actually see 'decoded'?
These images can come from the optic nerve, they could be stored in memory or they could be constructed by sophisticated cognitive processes related to creativity, pattern matching and so on. But if you believe that the brain's neural network is a computer responsible for our cognitive processes, the information must be stored there, physically, somehow.
That is the assumption, but it is not necessarily a good one. The problem is that information is only understandable in the context of some form of awareness - an experience of being informed. A machine with no user can only produce different kinds of noise as there is nothing ultimately to discern the difference between a signal and a non-signal.
Sure. That's why the algorithm has to be trained with known videos. So it can learn which brain activity correlates with what 3p accessible images we can all agree upon.
Images aren't 3p. Images are 1p visual experiences inferred through 3p optical presentations.
The algorithm can't learn anything about images because it will never experience them in any way.
It's horribly hard to decode what's going on in the brain.
Yet every newborn baby learns to do it all by themselves, without any sign of any decoding theater.
Yes. The newborn baby comes with the genetic material that generates the optimal decoder.

These researchers thought of a clever shortcut. They expose people to a lot of images and record some measures of brain activity in the visual cortex. Then they use machine learning to match brain states to images. Of course it's probabilistic and noisy. But then they got a video that actually approximates the real images.
You might get the same result out of precisely mapping the movements of the eyes instead.

Maybe. That's not where they took the information from though. They took it from the visual cortex.
That's what makes people jump to the conclusion that they are looking at something that came from a brain rather than YouTube + video editing + simple formula + data sets from experiments that have no particular relation to brains or consciousness.
What they did may have absolutely nothing to do with how the brain encodes or experiences images, no more than your Google history can approximate the shape of your face.
Google history can only approximate the shape of my face if there is a correlation between the two. In which case my Google history is, in fact, also a description of the shape of my face.
Why would there be a correlation between your Google history and the shape of your face?
So there must be some way to decode brain activity into images.

The killer argument against that is that the brain has no sync signals to generate the raster lines.

Neither does reality, but we somehow manage to show a representation of it on tv, right?
What human beings see on TV simulates one optical environment with another optical environment. You need to be a human being with a human visual system to be able to watch TV and mistake it for a representation of reality. Some household pets might be briefly fooled also, but mostly other species have no idea why we are staring at that flickering rectangle, or buzzing plastic sheet, or that large collection of liquid crystal flags. Representation is psychological, not material. The map is not the territory.
I agree. I never claimed this was an insight into 1p or anything to do with consciousness. Just that you can extract information from human brains, because that information is represented there somehow. But you're only going to get 3p information.
The information being modeled here visually is not extracted from the human brain. Videos are matched to videos based on incidental correlations of brain activity. The same result could be achieved in many different ways having nothing to do with the brain at all. You could have people listen to one of several songs and draw a picture of how the music makes them feel, and then write a program which figures out which song they most likely drew based on the statistics of what known subjects drew - voila, you have a picture of music.
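The music-and-drawings analogy above is easy to implement, which is rather the point: statistical matching needs no access to anyone's experience of either the song or the drawing. A hedged sketch (all data simulated; the clustering assumption and every number here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
n_songs, dims, subjects = 3, 8, 20

# Pretend each song induces drawings clustered around some prototype
# "feeling" vector; known subjects supply the training set.
prototypes = rng.standard_normal((n_songs, dims))
training = {
    s: prototypes[s] + 0.1 * rng.standard_normal((subjects, dims))
    for s in range(n_songs)
}
centroids = np.stack([training[s].mean(axis=0) for s in range(n_songs)])

# A new subject's drawing is matched to the statistically nearest song --
# a "picture of music" produced purely from 3p correlations.
new_drawing = prototypes[1] + 0.1 * rng.standard_normal(dims)
guess = int(np.argmin(np.linalg.norm(centroids - new_drawing, axis=1)))
```

This is just nearest-centroid classification; whether calling its output a "decoded" song or merely a correlation lookup is exactly the disagreement in this thread.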
Hi Craig,

On Tue, Jan 8, 2013 at 3:17 AM, Craig Weinberg <whats...@gmail.com> wrote:

On Monday, January 7, 2013 7:24:24 PM UTC-5, telmo_menezes wrote:

Hi Craig,

On Tue, Jan 8, 2013 at 12:41 AM, Craig Weinberg <whats...@gmail.com> wrote:

On Monday, January 7, 2013 6:19:33 AM UTC-5, telmo_menezes wrote:

On Sun, Jan 6, 2013 at 8:55 PM, Roger Clough <rcl...@verizon.net> wrote:

Hi Craig Weinberg

Sorry, everybody, I was snookered into believing that they had really accomplished the impossible.

So you think this paper is fiction and the video is fabricated? Do people here know something I don't about the authors?

The paper doesn't claim that images from the brain have been decoded,

Yes it does, right in the abstract: "To demonstrate the power of our approach, we also constructed a Bayesian decoder [8] by combining estimated encoding models with a sampled natural movie prior. The decoder provides remarkable reconstructions of the viewed movies."
The Bayesian decoder is not literally decoding the BOLD and fMRI patterns into images, no more than listing the ingredients of a bag of chips in alphabetical order turns potatoes into words.

If you have a device that chemically analyses stuff and then lists the ingredients, this device is indeed producing a (limited) representation of an object in words. This is not fundamentally different from taking a photo of a potato. It's just that the representations focus on different aspects. And the representation is never the object, be it a photograph or a piece of text.
The key is the 'sampled natural movie prior'. That means it is a figurative reconstruction.

Any image is a figurative reconstruction of reality.
They are giving you a choice of selecting one video from hundreds, then looking at the common patterns in several people's brains when they choose the same video. They are not decoding the patterns into videos. By 'reconstructions' they are not saying that they literally recreated any part of the visual experience, but rather that they were able to make a composite video from the videos that they used by plugging the Bayesian probability into the data sets. The videos that you see are YouTube videos superimposed, *not in any way* a decoded translation of neural correlates.
Ok, we won't get anywhere with this :) We have different opinions on what constitutes a visual reconstruction. Yours is more stringent, and requires some notion of pixels. I believe that using video frames as building blocks meets the criteria. It will have a lower fidelity than pixels, but it's still a visual reconstruction and an (imperfect) decoding.
but the sensational headlines imply that is what they did.

Starting with UC Berkeley itself:
Of course. Does that surprise you? University PR is notoriously hyped. Exciting the public is the stuff that endowments are made of.

That's one way to look at it. The other is that the tax payers need to know what the big exciting goals are, so that they don't get so mad that we're spending their money.
The video isn't supposed to be anything but fabricated.

ALL videos are fabricated in that sense.
Sure, but a video from a camera on the end of a wire in someone's esophagus is less of a fabrication than a collage of verbal descriptions about digestion.

That's context dependent. The text description might be able to communicate details about digestion that the video cannot.
See what I'm driving at? The images are images they got off the internet superimposed over each other - not out of someone's brain activity being interpreted by a computer. The only thing being interpreted or decoded is cross referenced statistics.
Try thinking about it this way. What would the video look like if they plugged the Bayesian decoder algorithm into the regions related to the memory of flavors? Show someone a picture of strawberries, and let's say you get a pattern in the olfactory-gustatory regions of the brain. Show someone else a bunch of pictures of tasty things, and lo and behold, through your statistical regression, you can match up pictures of strawberry candy, strawberry ice cream, etc. with the pictures of strawberries, pink milk, etc. You get a video of blurry pink stuff and proclaim that you have reconstructed the image of strawberry flavor. It's a neat bit of stage magic, but it has nothing at all to do with translating flavor into image.

I don't see anything wrong with that approach, it's a valid decoder.
No more than searching strawberries on Google gives routers and servers a taste of strawberry.

Ah, but now you're talking about 1p. 1p is the big mystery, and this work has nothing to do with it. It has to do with the question: "does the brain store information in such a way that it can be accessed and decoded?" The decoder doesn't feel anything, neither does the brain. Feelings are 1p.
It's a muddle of YouTube videos superimposed upon each other according to a Bayesian probability reduction.

Yes, and the images you see on your computer screen are just a matrix of molecules artificially made to align in a certain way so that the light being emitted behind them arrives at your eyes in a way that resembles the light emitted by some real world scene that is meant to be represented.
Photography is a direct optical analog. The pixels on a computer screen are a digitized analog of photography.

Not really. They require a human brain to imagine the third dimension.
Images on a computer screen tell my cat nothing.
The images 'reconstructed' are not analogs at all, they are wholly synthetic guesses which are reverse engineered purely from probability. What you see are not in fact images, but mechanically curated noise which remind us of images.
Did you think that the video was coming from a brain feed like a TV broadcast? It is certainly not that at all.
Nice straw man + ad hominem you did there!
Sorry, I wasn't trying to do either, although I admit it was condescending. I was trying to point out that it seems like you were saying that brain activity was decoded into visual pixels. I'm not clear really on what your understanding of it is.
No hard feelings :) We're discussing interesting things so I empathise with you by default, even if I disagree with some of the things you say.
The hypothesis is that the brain has some encoding for images.
Where are the encoded images decoded into what we actually see?In the computer that runs the Bayesian algorithm.
I'm talking about where in the brain are the images that we actually see 'decoded'?

That's like asking me where the good restaurants in Paris are. Spread all over, but at the same time in a lot of specific places. I do not have the mental capacity to fit the restaurant dynamics of Paris in my head. But I can point you to a couple of them, the same way these researchers can point you to a couple of places where those images are.
These images can come from the optic nerve, they could be stored in memory or they could be constructed by sophisticated cognitive processes related to creativity, pattern matching and so on. But if you believe that the brain's neural network is a computer responsible for our cognitive processes, the information must be stored there, physically, somehow.
That is the assumption, but it is not necessarily a good one. The problem is that information is only understandable in the context of some form of awareness - an experience of being informed. A machine with no user can only produce different kinds of noise as there is nothing ultimately to discern the difference between a signal and a non-signal.
Sure. That's why the algorithm has to be trained with known videos. So it can learn which brain activity correlates with what 3p accessible images we can all agree upon.
Images aren't 3p. Images are 1p visual experiences inferred through 3p optical presentations.
I disagree slightly. Images are 3p; the effect they have on our consciousness is 1p (thus art).
The algorithm can't learn anything about images because it will never experience them in any way.
It can learn how to deliver an image from brain A to brain B, in such a way that brain B will say: "ok, I see what you mean" and brain A will say "yeah, that's what I mean, but the image quality sucks".
It's horribly hard to decode what's going on in the brain.
Yet every newborn baby learns to do it all by themselves, without any sign of any decoding theater.
Yes. The newborn baby comes with the genetic material that generates the optimal decoder.
These researchers thought of a clever shortcut. They expose people to a lot of images and record some measures of brain activity in the visual cortex. Then they use machine learning to match brain states to images. Of course it's probabilistic and noisy. But then they got a video that actually approximates the real images.
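To make the shortcut concrete, here is a minimal sketch of that encode-then-rank pipeline. All names (`train_encoder`, `decode`) are hypothetical, and plain least squares stands in for the motion-energy model with regularized regression that Nishimoto et al. actually fit per voxel; this only illustrates the matching idea, not the published method.

```python
import numpy as np

def train_encoder(clip_features, voxel_responses):
    """Fit a linear encoding model: voxel_responses ~ clip_features @ W.

    Plain least squares here; the real study uses motion-energy features
    and regularized regression, fit separately for each voxel.
    """
    W, *_ = np.linalg.lstsq(clip_features, voxel_responses, rcond=None)
    return W

def decode(W, library_features, observed_response, top_k=3):
    """Rank library clips by how well their *predicted* brain response
    correlates with the observed one (a crude stand-in for the
    probabilistic matching the paper describes)."""
    predicted = library_features @ W  # shape: (n_clips, n_voxels)

    def corr(a, b):
        a, b = a - a.mean(), b - b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    scores = np.array([corr(p, observed_response) for p in predicted])
    return np.argsort(scores)[::-1][:top_k]  # indices of best-matching clips
```

Note that the decoder never "sees" an image: it only compares voxel patterns against voxel patterns, and averaging the frames of the top-ranked library clips is roughly why the published reconstructions look so ghostly and blurred.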
You might get the same result out of precisely mapping the movements of the eyes instead.
Maybe. That's not where they took the information from though. They took it from the visual cortex.
That's what makes people jump to the conclusion that they are looking at something that came from a brain rather than YouTube + video editing + simple formula + data sets from experiments that have no particular relation to brains or consciousness.
What they did may have absolutely nothing to do with how the brain encodes or experiences images, no more than your Google history can approximate the shape of your face.
Google history can only approximate the shape of my face if there is a correlation between the two. In which case my Google history is, in fact, also a description of the shape of my face.
Why would there be a correlation between your Google history and the shape of your face?
Maybe ugly people are more likely to be interested in artificial intelligence and photos of Angelina Jolie? I don't know.
So there must be some way to decode brain activity into images.
The killer argument against that is that the brain has no sync signals to generate the raster lines.
Neither does reality, but we somehow manage to show a representation of it on TV, right?
What human beings see on TV simulates one optical environment with another optical environment. You need to be a human being with a human visual system to be able to watch TV and mistake it for a representation of reality. Some household pets might be briefly fooled also, but mostly other species have no idea why we are staring at that flickering rectangle, or buzzing plastic sheet, or that large collection of liquid crystal flags. Representation is psychological, not material. The map is not the territory.
I agree. I never claimed this was an insight into 1p or anything to do with consciousness. Just that you can extract information from human brains, because that information is represented there somehow. But you're only going to get 3p information.
The information being modeled here visually is not extracted from the human brain. Videos are matched to videos based on incidental correlations of brain activity. The same result could be achieved in many different ways having nothing to do with the brain at all. You could have people listen to one of several songs and draw pictures of how the music makes them feel, and then write a program which figures out which song they most likely drew based on the statistics of what known subjects drew - voila, you have a picture of music.
You have a representation of music, sure. Art tries to pull that sort of stuff off all the time.
Cool. I actually would have agreed with you and a lot of people here at different times in my life. It's only been lately in the last five years or so that I have put together this other way of understanding everything. It gets lost in the debating, because I feel like I have to make my points about what is different or new about how I see things, but I do understand that other ways of looking at it make a lot of sense too - so much so that I suppose I am drawn only to digging into the weak spots to try to get others to see the secret exit that I think I've found...
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To view this discussion on the web visit https://groups.google.com/d/msg/everything-list/-/elwBNPr92z4J.
Hi Craig,
I tend to agree with what you say (or what I understand of it). Despite my belief that it is possible to extract memories (or their 3p shadows) from a brain, I do not believe in the neuroscience hypothesis that consciousness emerges from brain activity. I'm not sure I believe that there is a degree of consciousness in everything, but it sounds more plausible than the emergence from complexity idea.
Brent
On Thu, Jan 10, 2013 at 11:15 PM, meekerdb <meek...@verizon.net> wrote:
Do you agree that intelligence requires complexity?
I'm not sure intelligence and complexity are two different things.
On Fri, Jan 11, 2013 at 12:01 AM, meekerdb <meek...@verizon.net> wrote:
On 1/10/2013 2:28 PM, Telmo Menezes wrote:
Of course they're two different things. An oak tree is complex but not intelligent. The question is whether you think something can be intelligent without being complex?
I don't agree that an oak tree is not intelligent. It changes itself and its environment in non-trivial ways that promote its continuing existence. What's your definition of intelligence?
On 1/10/2013 3:15 PM, Telmo Menezes wrote:
What's yours? I don't care what example you use, trees, rocks, bacteria, sewing machines...
Are you going to contend that everything is intelligent and everything is complex, so that the words lose all meaning?
Do you think there can be something that is intelligent but not complex (and use whatever definitions of "intelligent" and "complex" you want).
A thermostat is much less complex than a human brain but intelligent under my definition.
I do not believe in the neuroscience hypothesis that consciousness emerges from brain activity. I'm not sure I believe that there is a degree of consciousness in everything, but it sounds more plausible than the emergence from complexity idea.
Still I feel that you avoid some questions. Maybe it's just my lack of understanding of what you're saying. For example: what is the primary "stuff" in your theory? In the same sense that for materialists it's subatomic particles and for comp it's N, +, *. What's yours?
On 1/10/2013 4:23 PM, Telmo Menezes wrote:
Do you think there can be something that is intelligent but not complex (and use whatever definitions of "intelligent" and "complex" you want).
A thermostat is much less complex than a human brain but intelligent under my definition.
But much less intelligent.
So in effect you think there is a degree of intelligence in everything, just like you believe there's a degree of consciousness in everything. And the degree of intelligence correlates with the degree of complexity... but you don't think the same about consciousness?
Brent
On Fri, Jan 11, 2013 at 1:33 AM, meekerdb <meek...@verizon.net> wrote:
But much less intelligent.
That's your conclusion, not mine. According to my definition you can only compare thermostats being good at being thermostats and Brents being good at being Brents. Because you can only compare intelligence against a same set of goals. Otherwise you're just saying that intelligence A is more complex than intelligence B. Human intelligence requires a certain level of complexity, bacteria intelligence another. That's all.
On 10 Jan 2013, at 23:28, Telmo Menezes wrote:
Do you agree that intelligence requires complexity?
I'm not sure intelligence and complexity are two different things.
Hmm...
I have a theory of intelligence. It has a strong defect, as it makes many things intelligent. But not everyone.
The machine X is intelligent if it is not stupid.
And the machine X is stupid in two circumstances: either she asserts that Y is intelligent, or she asserts that Y is stupid. (Y can be equal to X.)
On Friday, January 11, 2013 12:27:54 AM UTC-5, Brent wrote:
On 1/10/2013 9:20 PM, Craig Weinberg wrote:
I was thinking today that a decent way of defining intelligence is just 'The ability to know "what's going on"'.
This makes it clear that intelligence refers to the degree of sophistication of awareness, not just complexity of function or structure. This is why a computer which has complex function and structure has no authentic intelligence and has no idea 'what's going on'. Intelligence however has everything to do with sensitivity, integration, and mobilization of awareness as an asset, i.e. to be directed for personal gain or shared enjoyment, progress, etc. Knowing what's going on implicitly means caring what goes on, which also supervenes on biological quality investment in experience.
Which is why I think an intelligent machine must be one that acts in its environment. Simply 'being aware' or 'knowing' are meaningless without the ability and motives to act on them.
Sense and motive are inseparable ontologically, although they can be interleaved by level. A plant for instance has no need to act on the world to the same degree as an organism which can move its location, but the cells that make up the plant act to grow and direct it toward light, extend roots to water and nutrients, etc. Ontologically however, there is no way to really have awareness which matters without some participatory opportunity or potential for that opportunity.
The problem with a machine (any machine) is that at the level at which it is a machine, it has no way to participate. By definition a machine does whatever it is designed to do.
Anything that we use as a machine has to be made of something which we can predict and control reliably, so that its sensory-motive capacities are very limited by definition. Its range of 'what's going on' has to be very narrow. The internet, for instance, passes a tremendous number of events through electronic circuits, but the content of all of it is entirely lost on it. We use the internet to increase our sense and inform our motives, but its sense and motive do not increase at all.
On 1/11/2013 11:44 AM, Bruno Marchal wrote:
The machine X is intelligent, if it is not stupid.
And the machine X is stupid in two circumstances. Either she asserts that Y is intelligent, or she assert that Y is stupid. (Y can be equal to X).
So if X is smart she asserts Y is not intelligent or Y is not stupid. :-)
Brent
In that theory, a pebble is intelligent, as no one has ever heard a pebble asserting that some other pebble or whatever, is stupid, or is intelligent.
(that theory is almost only an identification of intelligence with consistency (Dt)).
Intelligence is needed to develop competences.
But competences can have a negative feedback on intelligence.
Bruno
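Bruno's criterion is simple enough to state as a predicate. A toy rendering, purely illustrative (the representation of assertions as (subject, claim) pairs is my own assumption, not anything from Bruno's formal theory):

```python
def is_stupid(assertions):
    """Bruno's criterion: X is stupid iff X asserts, of any machine Y
    (possibly itself), either 'Y is intelligent' or 'Y is stupid'.

    `assertions` is a list of (subject, claim) pairs made by machine X.
    """
    return any(claim in ("intelligent", "stupid") for _, claim in assertions)

def is_intelligent(assertions):
    # Intelligent simply means: not stupid.
    return not is_stupid(assertions)
```

On this rendering a pebble, which asserts nothing, comes out intelligent, exactly as Bruno notes; and any machine that ventures an opinion on anyone's intelligence, including its own, immediately forfeits its own.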
On 1/11/2013 2:12 AM, Telmo Menezes wrote:
So you've removed all meaning from intelligence. Rocks are smart at being rocks, we just have to recognize their goal is to be rocks.
Maybe we can stop dancing around the question by referring to human-level-intelligence and then rephrasing the question as, "Do you think human-like-intelligence requires human-like-complexity?"
On Thursday, January 10, 2013 4:58:32 PM UTC-5, telmo_menezes wrote:
Hi Craig,
I tend to agree with what you say (or what I understand of it). Despite my belief that it is possible to extract memories (or their 3p shadows) from a brain,
As long as you have another brain to experience the extracted memories in 1p, then I wouldn't rule out the possibility of a 3p transmission of some experiential content from one brain to another.
I do not believe in the neuroscience hypothesis that consciousness emerges from brain activity. I'm not sure I believe that there is a degree of consciousness in everything, but it sounds more plausible than the emergence from complexity idea.
Still I feel that you avoid some questions. Maybe it's just my lack of understanding of what you're saying. For example: what is the primary "stuff" in your theory? In the same sense that for materialists it's subatomic particles and for comp it's N, +, *. What's yours?
For me the primary stuff is sensory-motor presence.
Particles are public sense representations. N, +, * are private sense representations. Particles represent the experience of sensory-motor obstruction as topological bodies. Integers and arithmetic operators represent the sensory-motor relations of public objects as private logical figures.
Craig
On Wed, Jan 9, 2013 at 2:50 PM, Craig Weinberg <whats...@gmail.com> wrote:
On Wednesday, January 9, 2013 6:18:37 AM UTC-5, telmo_menezes wrote:
Hi Craig,
Cool. I actually would have agreed with you and a lot of people here at different times in my life. It's only been lately in the last five years or so that I have put together this other way of understanding everything. It gets lost in the debating, because I feel like I have to make my points about what is different or new about how I see things, but I do understand that other ways of looking at it make a lot of sense too - so much so that I suppose I am drawn only to digging into the weak spots to try to get others to see the secret exit that I think I've found...
Ok, this sounds interesting and I'd like to know more. I've been away from the mailing list in the last few years, so maybe you've talked about it before. Would you tell me about that secret exit?
The secret exit is to reverse the assumption that consciousness occurs from functions or substances. Even though our human consciousness depends on a living human body (as far as we know for sure), that may be because of the degree of elaboration required to develop a human quality of experience, not because the fundamental capacity to perceive and participate depends on anything at all.
Being inside of a human experience means being inside of an animal experience, an organism's experience, a cellular and molecular level experience. The alternative means picking an arbitrary level at which total lack of awareness suddenly changes into perception and participation for no conceivable reason. Instead of hanging on to the hope of finding such a level or gate, the secret is to see that there are many levels and gates but that they are qualitative, with each richer integration of qualia reframing the levels left behind in a particular way, and that way (another key) is to reduce it from a personal, animistic temporal flow of 1p meaning and significant preference to impersonal, mechanistic spatial bodies ruled by cause-effect and chance/probability. 1p and 3p are relativistic, but what joins them is the capacity to discern the difference.
Rather than taking sense i/o for granted as a function of logic, flip it over so that logic is the 3p shadow of sense. The 3p view is a frozen snapshot of countless 1p views as seen from the outside, and the qualities of the 3p view depend entirely on the nature of the 1p perceiver-participant. Sense is semiotic. Its qualitative layers are partitioned by habit and interpretive inertia, just as an ambiguous image looks different depending on how you personally direct your perception, or how a book that you read when you are 12 years old can have different meanings at 18 or 35. The meaning isn't just 'out there', it's literally, physically "in here". If this is true, then the entire physical universe doubles in size, or really is squared, as every exterior surface is a 3p representation of an entire history of 1p experience. Each acorn is a potential oak forest, an encyclopedia of evolution and cosmology, so that the acorn is just a semiotic placeholder which is scaled and iconicized appropriately as a consequence of the relation of our human quality awareness and that of the evolutionary-historical-possible future contexts which we share with it (or the whole ensemble of experiences in which 'we' are both embedded as strands of the story of the universe, rather than just human body and acorn body or cells and cells etc).
To understand the common thread for all of it, always go back to the juxtaposition of 1p vs 3p, not *that* there is a difference, but the qualities of *what* those differences are - the sense of the juxtaposition.
http://media.tumblr.com/tumblr_m9y9by2XXw1qe3q3v.jpg
http://media.tumblr.com/tumblr_m9y9boN5rP1qe3q3v.jpg
That's where I get sense and motive or perception and participation. The symmetry is more primitive than either matter or mind, so that it isn't one which builds a bridge to the other but sense which divides itself on one level while retaining unity on another, creating not just dualism but a continuum of monism, dualism, dialectic, trichotomy, syzygy, etc. Many levels and perspectives on sense within sense.
http://multisenserealism.com/about/
Craig
----- Receiving the following content -----
From: Bruno Marchal
Receiver: everything-list
Time: 2013-01-12, 07:02:42
http://iridia.ulb.ac.be/~marchal/
On Fri, Jan 11, 2013 at 6:10 AM, Craig Weinberg <whats...@gmail.com> wrote:
For me the primary stuff is sensory-motor presence.
It's very hard for me to grasp this.