Subjective states can somehow be extracted from brains via a computer


Roger Clough

Jan 5, 2013, 10:43:32 AM
to - MindBrain@yahoogroups.com, everything-list

Subjective states can somehow be extracted from brains via a computer.

The ingenious folks who were miraculously able to extract an image from the brain, as we saw recently,

http://gizmodo.com/5843117/scientists-reconstruct-video-clips-from-brain-activity

somehow did it entirely through computation. How was that possible?

There are at least two imaginable theories, neither of which I can explain step by step:

a) Computers are themselves conscious (which can neither be proven nor disproven)
and are therefore capable of perception.

or

b) The flesh of the brain is simultaneously objective and subjective.
Thus an ordinary (by which I mean not conscious) computer can work on it
objectively yet produce a subjective image by some manipulation of the flesh
of the brain. Perhaps one might call this "milking" of the brain.

[Roger Clough], [rcl...@verizon.net]
1/5/2013
"Forever is a long time, especially near the end." - Woody Allen

Craig Weinberg

Jan 5, 2013, 11:37:17 AM
to everyth...@googlegroups.com, - MindBrain@yahoogroups.com


On Saturday, January 5, 2013 10:43:32 AM UTC-5, rclough wrote:

Subjective states can somehow be extracted from brains via a computer.

No, they can't.
 

The ingenious folks who were miraculously able to extract an image from the brain, as we saw recently,
 
http://gizmodo.com/5843117/scientists-reconstruct-video-clips-from-brain-activity

somehow did it entirely through computation. How was that possible?

By passing off a weak Bayesian regression analysis as a terrific consciousness breakthrough. Look again at the image comparisons. There is nothing being reconstructed; there is only the visual noise of many superimposed shapes which least dis-resembles the test image. It's not even stage magic; it's just a search engine.
 

There are at least two imaginable theories, neither of which I can explain step by step:


What they did was take lots of images and correlate patterns in the V1 region of the brain with the corresponding V1 patterns in others who had viewed the known images. It's statistical guesswork and it is complete crap.

"The computer analyzed 18 million seconds of random YouTube video, building a database of potential brain activity for each clip. From all these videos, the software picked the one hundred clips that caused a brain activity more similar to the ones the subject watched, combining them into one final movie"

Crick and Koch found in their 1995 study that

"The conscious visual representation is likely to be distributed over more than one area of the cerebral cortex and possibly over certain subcortical structures as well. We have argued (Crick and Koch, 1995a) that in primates, contrary to most received opinion, it is not located in cortical area V1 (also called the striate cortex or area 17). Some of the experimental evidence in support of this hypothesis is outlined below. This is not to say that what goes on in V1 is not important, and indeed may be crucial, for most forms of vivid visual awareness. What we suggest is that the neural activity there is not directly correlated with what is seen."

http://www.klab.caltech.edu/~koch/crick-koch-cc-97.html

What they found in their study, through experiments which isolated the effects in the brain related to looking (i.e., directing your eyeballs to move around) from those related to seeing (the appearance of images, colors, etc.), is that the activity in V1 is exactly the same whether the person sees anything or not.

What the visual reconstruction is based on is the activity in the occipitotemporal visual cortex (downstream of V1: http://www.sciencedirect.com/science/article/pii/S0079612305490196).

"Here we present a new motion-energy [10,
11] encoding model that largely overcomes this limitation.
The model describes fast visual information and slow hemodynamics
by separate components. We recorded BOLD
signals in occipitotemporal visual cortex of human subjects
who watched natural movies and fit the model separately
to individual voxels." https://sites.google.com/site/gallantlabucb/publications/nishimoto-et-al-2011
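
(The "fit the model separately to individual voxels" step can be pictured as one regularized linear regression per voxel. A minimal sketch, assuming simplified names and shapes - the actual motion-energy features and hemodynamic components are far more elaborate:)

    import numpy as np

    def fit_voxel_models(features, bold, alpha=1.0):
        """Fit one ridge regression per voxel, for all voxels at once.

        features: (n_timepoints, n_features) stimulus features per scan
        bold:     (n_timepoints, n_voxels) measured BOLD responses
        Returns a (n_features, n_voxels) weight matrix, one column per voxel.
        """
        xtx = features.T @ features + alpha * np.eye(features.shape[1])
        return np.linalg.solve(xtx, features.T @ bold)

    def predict_bold(features, weights):
        """Model-predicted response of every voxel to new stimulus features."""
        return features @ weights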

So what they did is analogous to tracing the rectangle pattern that your eyes make when generally tracing the contrast boundary of a door-like image and then comparing that pattern to patterns made by other people's eyes tracing the known images of doors. It's really no closer to any direct access to your interior state than any data-mining advertiser gets by chasing after your web history to determine that you might buy prostate vitamins if you are watching a Rolling Stones YouTube.

a) Computers are themselves conscious (which can neither be proven nor disproven)
    and are therefore capable of perception.

Nothing can be considered conscious unless it has the capacity to act in its own interest. Computers, by virtue of their perpetual servitude to human will, are not conscious.
 

    or

b) The flesh of the brain is simultaneously objective and subjective.
    Thus an ordinary (by which I mean not conscious) computer can work on it
    objectively yet produce a subjective image by some manipulation of the flesh
    of the brain. Perhaps one might call this "milking" of the brain.

The flesh of the brain is indeed simultaneously objective and subjective (as are all living cells and perhaps all molecules and atoms), but the noise comparisons being done in this experiment aren't milking anything but the hype machine of pop-sci neuro-fluff. It is cool that they are able to refine the matching of patterns in the brain to patterns which can be identified computationally, but without the expectation of a visual image corresponding to these patterns in the first place, it is meaningless as far as understanding consciousness. What it does do, though, is provide a new hunger for invasive neurological technologies to analyze the behavior of your brain and draw statistical conclusions from... something which promises nothing less than utopian/dystopian-level developments.

Craig
 

Roger Clough

Jan 6, 2013, 2:55:47 PM
to everything-list, - MindBrain@yahoogroups.com
Hi Craig Weinberg
 
Sorry, everybody, I was snookered into believing that they had really accomplished the impossible.
The killer argument against that is that the brain has no sync signals to generate
the raster lines.
 
 
[Roger Clough], [rcl...@verizon.net]
1/6/2013
"Forever is a long time, especially near the end." - Woody Allen

Telmo Menezes

Jan 7, 2013, 6:19:33 AM
to everyth...@googlegroups.com, - MindBrain@yahoogroups.com
On Sun, Jan 6, 2013 at 8:55 PM, Roger Clough <rcl...@verizon.net> wrote:
Hi Craig Weinberg
 
Sorry, everybody, I was snookered into believing that they had really accomplished the impossible.

So you think this paper is fiction and the video is fabricated? Do people here know something I don't about the authors?

The hypothesis is that the brain has some encoding for images. These images can come from the optic nerve, they could be stored in memory or they could be constructed by sophisticated cognitive processes related to creativity, pattern matching and so on. But if you believe that the brain's neural network is a computer responsible for our cognitive processes, the information must be stored there, physically, somehow.

It's horribly hard to decode what's going on in the brain.

These researchers thought of a clever shortcut. They expose people to a lot of images and record some measures of brain activity in the visual cortex. Then they use machine learning to match brain states to images. Of course it's probabilistic and noisy. But then they got a video that actually approximates the real images. So there must be some way to decode brain activity into images.
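
(That matching step might look something like the sketch below: score every candidate clip by how well its model-predicted brain response correlates with the measured one, then rank. A hypothetical illustration - names and shapes are assumed:)

    import numpy as np

    def rank_clips(measured, predicted):
        """Rank candidate clips against one measured brain response.

        measured:  (n_voxels,) response to the unknown stimulus
        predicted: (n_clips, n_voxels) model-predicted response for each clip
        Returns clip indices ordered from best to worst match.
        """
        m = measured - measured.mean()
        p = predicted - predicted.mean(axis=1, keepdims=True)
        # Pearson correlation between the measurement and each prediction
        scores = (p @ m) / (np.linalg.norm(p, axis=1) * np.linalg.norm(m) + 1e-12)
        return np.argsort(-scores)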

The killer argument against that is that the brain has no sync signals to generate
the raster lines.

Neither does reality, but we somehow manage to show a representation of it on tv, right?

Roger Clough

Jan 7, 2013, 7:28:04 AM
to everything-list
Hi Telmo Menezes

Well then, we have at least one vote supporting the results.

I remain sceptical because of the line sync issue.
The brain doesn't provide a raster line sync signal.


[Roger Clough], [rcl...@verizon.net]
1/7/2013
"Forever is a long time, especially near the end." - Woody Allen

Telmo Menezes

Jan 7, 2013, 9:33:30 AM
to everyth...@googlegroups.com
Hi Roger,


On Mon, Jan 7, 2013 at 1:28 PM, Roger Clough <rcl...@verizon.net> wrote:
Hi Telmo Menezes

Well then, we have at least one vote supporting the results.

Scientific results are not supported or refuted by votes. 
 

I remain sceptical because of the line sync issue.
The brain doesn't provide a raster line sync signal.

The sync signal is a requirement of a very specific technology to display video. Analog film does not have a sync signal. It still does sampling. Sampling is always necessary if you use a finite machine to record some visual representation of the world. If one believes the brain stores our memories (I know you don't), you have to believe that it samples perceptual information somehow. It will probably not be as neat and simple as a sync signal.

A trivial but important point: every movie is a representation of reality, not reality itself. It's just a set of symbols that represent the world as seen from a specific point of view, in the form of a matrix of discrete light intensity levels. So the mapping from symbols to visual representations is always present, no matter what technology you use. Again, the sync signal is just a detail of the implementation of one such technology.

The way the brain encodes images is surely very complex and convoluted. Why not? There wasn't ever any adaptive pressure for the encoding to be easily translated from the outputs of an MRI machine. If we require all contact between males and females to be done through MRI machines and wait a couple million years maybe that will change. We might even get a sync signal, who knows?

Either you believe that the brain encodes images somehow, or you believe that the brain is an absurd mechanism. Why are the optic nerves connected to the brain? Why does the visual cortex fire in specific ways when shown specific images? Why can we tell from brain activity if someone is nervous, asleep, solving a math problem or painting?

Roger Clough

Jan 7, 2013, 11:22:10 AM
to everything-list
Hi Telmo Menezes
 
Yes, but the display they show wouldn't work if there were no
sync signal embedded in it. There's nothing in the brain to provide that,
so they must have supplied it themselves.
 
 
[Roger Clough], [rcl...@verizon.net]
1/7/2013
"Forever is a long time, especially near the end." - Woody Allen

Telmo Menezes

Jan 7, 2013, 5:34:21 PM
to everyth...@googlegroups.com
Hi Roger,

Imagine a very simple brain that can recognise two things: a cat and a mouse. Furthermore, it can recognise if an object is still or in motion. So a possible perceptual state could be cat(still) + mouse(in motion). The visual cortex of this brain is complex enough to process the input of a normal human eye and convert it into these representations. It has a very simple memory that can store states and temporal precedence between states. For example:

mouse(still) -> cat(in motion) + mouse(still) -> cat(still) + mouse(in motion) -> cat(still)

Through an MRI we read the activation level of neurons that somehow encode this sequence of states. An incredible amount of information is lost, BUT it is possible to represent a visual scene that approximates the meanings of those states. On a regular VGA screen with a sync signal, I show you an animation of a mouse standing still, a cat appearing and so on. Of course the cat may be quite different from what the brain actually perceived. But it is also recognised as a cat by the brain, it produces an equivalent state, so it's good enough.

Now imagine the brain can encode more properties about objects. Is it big or small? Furry? Dark or light?

Now imagine the brain can encode more information about precedence. Was it a long time ago? Just now? Aeons ago?

And so on and so on, until you get to a point where the reconstructed video is almost like what the brain saw. No sync signal.
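
(The toy brain above fits in a few lines of Python - purely illustrative, with invented unit names and thresholds. A sequence of discrete perceptual states becomes a crude "movie"; the only timing is the order of the states, no sync signal anywhere:)

    def decode_state(activations, threshold=0.5):
        """Read one perceptual state from hypothetical unit activations.

        activations: dict of unit name -> level, e.g. {"cat": 0.8, "cat_motion": 0.9}
        Returns a list of (object, motion) percepts.
        """
        percepts = []
        for obj in ("cat", "mouse"):
            if activations.get(obj, 0.0) > threshold:  # object recognized
                moving = activations.get(obj + "_motion", 0.0) > threshold
                percepts.append((obj, "in motion" if moving else "still"))
        return percepts

    # A recorded sequence of readings becomes a crude movie of descriptions:
    readings = [
        {"mouse": 0.9},
        {"cat": 0.8, "cat_motion": 0.9, "mouse": 0.9},
        {"cat": 0.8, "mouse": 0.9, "mouse_motion": 0.8},
        {"cat": 0.8},
    ]
    for frame in readings:
        print(" + ".join("%s(%s)" % p for p in decode_state(frame)))
    # mouse(still)
    # cat(in motion) + mouse(still)
    # cat(still) + mouse(in motion)
    # cat(still)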

meekerdb

Jan 7, 2013, 6:40:19 PM
to everyth...@googlegroups.com
On 1/7/2013 6:33 AM, Telmo Menezes wrote:

The way the brain encodes images is surely very complex and convoluted. Why not? There wasn't ever any adaptive pressure for the encoding to be easily translated from the outputs of an MRI machine. If we require all contact between males and females to be done through MRI machines and wait a couple million years maybe that will change.

In only a generation there'll be nobody to notice a change.  :-)

Brent

Craig Weinberg

Jan 7, 2013, 6:41:03 PM
to everyth...@googlegroups.com, - MindBrain@yahoogroups.com


On Monday, January 7, 2013 6:19:33 AM UTC-5, telmo_menezes wrote:



On Sun, Jan 6, 2013 at 8:55 PM, Roger Clough <rcl...@verizon.net> wrote:
Hi Craig Weinberg
 
Sorry, everybody, I was snookered into believing that they had really accomplished the impossible.

So you think this paper is fiction and the video is fabricated? Do people here know something I don't about the authors?

The paper doesn't claim that images from the brain have been decoded, but the sensational headlines imply that is what they did. The video isn't supposed to be anything but fabricated. It's a muddle of YouTube videos superimposed upon each other according to a Bayesian probability reduction. Did you think that the video was coming from a brain feed like a TV broadcast? It is certainly not that at all.
 

The hypothesis is that the brain has some encoding for images.

Where are the encoded images decoded into what we actually see?
 
These images can come from the optic nerve, they could be stored in memory or they could be constructed by sophisticated cognitive processes related to creativity, pattern matching and so on. But if you believe that the brain's neural network is a computer responsible for our cognitive processes, the information must be stored there, physically, somehow.

That is the assumption, but it is not necessarily a good one. The problem is that information is only understandable in the context of some form of awareness - an experience of being informed. A machine with no user can only produce different kinds of noise as there is nothing ultimately to discern the difference between a signal and a non-signal.


It's horribly hard to decode what's going on in the brain.

Yet every newborn baby learns to do it all by themselves, without any sign of any decoding theater.
 

These researchers thought of a clever shortcut. They expose people to a lot of images and record some measures of brain activity in the visual cortex. Then they use machine learning to match brain states to images. Of course it's probabilistic and noisy. But then they got a video that actually approximates the real images.

You might get the same result out of precisely mapping the movements of the eyes instead. What they did may have absolutely nothing to do with how the brain encodes or experiences images, no more than your Google history can approximate the shape of your face.
 
So there must be some way to decode brain activity into images.

The killer argument against that is that the brain has no sync signals to generate
the raster lines.

Neither does reality, but we somehow manage to show a representation of it on tv, right?

What human beings see on TV simulates one optical environment with another optical environment. You need to be a human being with a human visual system to be able to watch TV and mistake it for a representation of reality. Some household pets might be briefly fooled also, but mostly other species have no idea why we are staring at that flickering rectangle, or buzzing plastic sheet, or that large collection of liquid crystal flags. Representation is psychological, not material. The map is not the territory.

Craig

Telmo Menezes

Jan 7, 2013, 7:24:24 PM
to everyth...@googlegroups.com, - MindBrain@yahoogroups.com
Hi Craig,


On Tue, Jan 8, 2013 at 12:41 AM, Craig Weinberg <whats...@gmail.com> wrote:


On Monday, January 7, 2013 6:19:33 AM UTC-5, telmo_menezes wrote:



On Sun, Jan 6, 2013 at 8:55 PM, Roger Clough <rcl...@verizon.net> wrote:
Hi Craig Weinberg
 
Sorry, everybody, I was snookered into believing that they had really accomplished the impossible.

So you think this paper is fiction and the video is fabricated? Do people here know something I don't about the authors?

The paper doesn't claim that images from the brain have been decoded,

Yes it does, right in the abstract:
"To demonstrate the power of our approach, we also constructed a Bayesian decoder [8] by combining estimated encoding models with a sampled natural movie prior. The decoder provides remarkable reconstructions of the viewed movies."


 
but the sensational headlines imply that is what they did.

 
The video isn't supposed to be anything but fabricated.

ALL videos are fabricated in that sense.
 
It's a muddle of YouTube videos superimposed upon each other according to a Bayesian probability reduction.

Yes, and the images you see on your computer screen are just a matrix of molecules artificially made to align in a certain way so that the light being emitted behind them arrives at your eyes in a way that resembles the light emitted by the real-world scene it is meant to represent.
 
Did you think that the video was coming from a brain feed like a TV broadcast? It is certainly not that at all.

Nice straw man + ad hominem you did there!
 
 

The hypothesis is that the brain has some encoding for images.

Where are the encoded images decoded into what we actually see?

In the computer that runs the Bayesian algorithm.
 
 
These images can come from the optic nerve, they could be stored in memory or they could be constructed by sophisticated cognitive processes related to creativity, pattern matching and so on. But if you believe that the brain's neural network is a computer responsible for our cognitive processes, the information must be stored there, physically, somehow.

That is the assumption, but it is not necessarily a good one. The problem is that information is only understandable in the context of some form of awareness - an experience of being informed. A machine with no user can only produce different kinds of noise as there is nothing ultimately to discern the difference between a signal and a non-signal.

Sure. That's why the algorithm has to be trained with known videos. So it can learn which brain activity correlates with what 3p accessible images we can all agree upon.
 


It's horribly hard to decode what's going on in the brain.

Yet every newborn baby learns to do it all by themselves, without any sign of any decoding theater.

Yes. The newborn baby comes with the genetic material that generates the optimal decoder.
 
 

These researchers thought of a clever shortcut. They expose people to a lot of images and record some measures of brain activity in the visual cortex. Then they use machine learning to match brain states to images. Of course it's probabilistic and noisy. But then they got a video that actually approximates the real images.

You might get the same result out of precisely mapping the movements of the eyes instead.

Maybe. That's not where they took the information from though. They took it from the visual cortex.
 
What they did may have absolutely nothing to do with how the brain encodes or experiences images, no more than your Google history can approximate the shape of your face.

Google history can only approximate the shape of my face if there is a correlation between the two. In which case my Google history is, in fact, also a description of the shape of my face.
 
 
So there must be some way to decode brain activity into images.

The killer argument against that is that the brain has no sync signals to generate
the raster lines.

Neither does reality, but we somehow manage to show a representation of it on tv, right?

What human beings see on TV simulates one optical environment with another optical environment. You need to be a human being with a human visual system to be able to watch TV and mistake it for a representation of reality. Some household pets might be briefly fooled also, but mostly other species have no idea why we are staring at that flickering rectangle, or buzzing plastic sheet, or that large collection of liquid crystal flags. Representation is psychological, not material. The map is not the territory.

I agree. I never claimed this was an insight into 1p or anything to do with consciousness. Just that you can extract information from human brains, because that information is represented there somehow. But you're only going to get 3p information.
 

Craig Weinberg

Jan 7, 2013, 9:17:47 PM
to everyth...@googlegroups.com, - MindBrain@yahoogroups.com


On Monday, January 7, 2013 7:24:24 PM UTC-5, telmo_menezes wrote:
Hi Craig,


On Tue, Jan 8, 2013 at 12:41 AM, Craig Weinberg <whats...@gmail.com> wrote:


On Monday, January 7, 2013 6:19:33 AM UTC-5, telmo_menezes wrote:



On Sun, Jan 6, 2013 at 8:55 PM, Roger Clough <rcl...@verizon.net> wrote:
Hi Craig Weinberg
 
Sorry, everybody, I was snookered into believing that they had really accomplished the impossible.

So you think this paper is fiction and the video is fabricated? Do people here know something I don't about the authors?

The paper doesn't claim that images from the brain have been decoded,

Yes it does, right in the abstract:
"To demonstrate the power of our approach, we also constructed a Bayesian decoder [8] by combining estimated encoding models with a sampled natural movie prior. The decoder provides remarkable reconstructions of the viewed movies."

The Bayesian decoder is not literally decoding the BOLD and fMRI patterns into images, no more than listing the ingredients of a bag of chips in alphabetical order turns potatoes into words. The key is the 'sampled natural movie prior'. That means it is a figurative reconstruction. They are giving you a choice of selecting one video from hundreds, then looking at the common patterns in several people's brains when they choose the same video. They are not decoding the patterns into videos. By 'reconstructions' they are not saying that they literally recreated any part of the visual experience, but rather that they were able to make a composite video from the videos that they used by plugging the Bayesian probability into the data sets. The videos that you see are YouTube videos superimposed, *not in any way* a decoded translation of neural correlates.



 
but the sensational headlines imply that is what they did.

Starting with UC Berkeley itself:
http://newscenter.berkeley.edu/2011/09/22/brain-movies/

Of course. Does that surprise you? University PR is notoriously hyped. Exciting the public is the stuff that endowments are made of.
 
 
The video isn't supposed to be anything but fabricated.

ALL videos are fabricated in that sense.

Sure, but a video from a camera on the end of a wire in someone's esophagus is less of a fabrication than a collage of verbal descriptions about digestion. See what I'm driving at? The images are images they got off the internet, superimposed over each other - not out of someone's brain activity being interpreted by a computer. The only thing being interpreted or decoded is cross-referenced statistics.

Try thinking about it this way. What would the video look like if they plugged the Bayesian decoder algorithm into the regions related to the memory of flavors? Show someone a picture of strawberries, and let's say you get a pattern in the olfactory-gustatory regions of the brain. Show someone else a bunch of pictures of tasty things, and lo and behold, through your statistical regression, you can match up pictures of strawberry candy, strawberry ice cream, etc. with the pictures of strawberries, pink milk, etc. You get a video of blurry pink stuff and proclaim that you have reconstructed the image of strawberry flavor. It's a neat bit of stage magic, but it has nothing at all to do with translating flavor into image. No more than searching strawberries on Google gives routers and servers a taste of strawberry.

 
It's a muddle of YouTube videos superimposed upon each other according to a Bayesian probability reduction.

Yes, and the images you see on your computer screen are just a matrix of molecules artificially made to align in a certain way so that the light being emitted behind them arrives at your eyes in a way that resembles the light emitted by the real-world scene it is meant to represent.

Photography is a direct optical analog. The pixels on a computer screen are a digitized analog of photography. The images 'reconstructed' here are not analogs at all; they are wholly synthetic guesses reverse-engineered purely from probability. What you see are not in fact images, but mechanically curated noise which reminds us of images.
 
 
Did you think that the video was coming from a brain feed like a TV broadcast? It is certainly not that at all.

Nice straw man + ad hominem you did there!

Sorry, I wasn't trying to do either, although I admit it was condescending. I was trying to point out that it seems like you were saying that brain activity was decoded into visual pixels. I'm not clear really on what your understanding of it is.

 
 

The hypothesis is that the brain has some encoding for images.

Where are the encoded images decoded into what we actually see?

In the computer that runs the Bayesian algorithm.

I'm talking about where in the brain are the images that we actually see 'decoded'?
 
 
 
These images can come from the optic nerve, they could be stored in memory or they could be constructed by sophisticated cognitive processes related to creativity, pattern matching and so on. But if you believe that the brain's neural network is a computer responsible for our cognitive processes, the information must be stored there, physically, somehow.

That is the assumption, but it is not necessarily a good one. The problem is that information is only understandable in the context of some form of awareness - an experience of being informed. A machine with no user can only produce different kinds of noise as there is nothing ultimately to discern the difference between a signal and a non-signal.

Sure. That's why the algorithm has to be trained with known videos. So it can learn which brain activity correlates with what 3p accessible images we can all agree upon.

Images aren't 3p. Images are 1p visual experiences inferred through 3p optical presentations. The algorithm can't learn anything about images because it will never experience them in any way.
 
 


It's horribly hard to decode what's going on in the brain.

Yet every newborn baby learns to do it all by themselves, without any sign of any decoding theater.

Yes. The newborn baby comes with the genetic material that generates the optimal decoder.
 
 

These researchers thought of a clever shortcut. They expose people to a lot of images and record some measures of brain activity in the visual cortex. Then they use machine learning to match brain states to images. Of course it's probabilistic and noisy. But then they got a video that actually approximates the real images.

You might get the same result out of precisely mapping the movements of the eyes instead.

Maybe. That's not where they took the information from though. They took it from the visual cortex.

That's what makes people jump to the conclusion that they are looking at something that came from a brain rather than YouTube + video editing + simple formula + data sets from experiments that have no particular relation to brains or consciousness.
 
 
What they did may have absolutely nothing to do with how the brain encodes or experiences images, no more than your Google history can approximate the shape of your face.

Google history can only approximate the shape of my face if there is a correlation between the two. In which case my Google history is, in fact, also a description of the shape of my face.

Why would there be a correlation between your Google history and the shape of your face?
 
 
 
So there must be some way to decode brain activity into images.

The killer argument against that is that the brain has no sync signals to generate
the raster lines.

Neither does reality, but we somehow manage to show a representation of it on tv, right?

What human beings see on TV simulates one optical environment with another optical environment. You need to be a human being with a human visual system to be able to watch TV and mistake it for a representation of reality. Some household pets might be briefly fooled also, but mostly other species have no idea why we are staring at that flickering rectangle, or buzzing plastic sheet, or that large collection of liquid crystal flags. Representation is psychological, not material. The map is not the territory.

I agree. I never claimed this was an insight into 1p or anything to do with consciousness. Just that you can extract information from human brains, because that information is represented there somehow. But you're only going to get 3p information.

The information being modeled here visually is not extracted from the human brain. Videos are matched to videos based on incidental correlations of brain activity. The same result could be achieved in many different ways having nothing to do with the brain at all. You could have people listen to one of several songs and draw a picture of how the music makes them feel, and then write a program which figures out which song they most likely drew based on the statistics of what known subjects drew - voila, you have a picture of music.

Craig.

Roger Clough

Jan 8, 2013, 5:23:55 AM
to everything-list
Hi Telmo Menezes

Presumably the brain works with analog, not digital, signals.
But the redisplay of the brain image requires a digital image signal.
How can that happen?

If the reconstructed brain image has no sync signal,
how could it display on a digital device?


[Roger Clough], [rcl...@verizon.net]
1/8/2013
"Forever is a long time, especially near the end." - Woody Allen

Roger Clough

Jan 8, 2013, 5:29:31 AM
to everything-list
Hi Telmo Menezes

The electronics presumably requires a digital signal.

But the brain presumably uses analog signals.

And there is the raster line and sync signal problem.

There is the digital pixel problem, which uses only three colors: blue, green, red.

How can all of this work?



[Roger Clough], [rcl...@verizon.net]
1/8/2013
"Forever is a long time, especially near the end." - Woody Allen

Craig Weinberg

Jan 8, 2013, 9:23:56 AM
to everyth...@googlegroups.com


On Tuesday, January 8, 2013 5:23:55 AM UTC-5, rclough wrote:
Hi Telmo Menezes  

Presumably the brain works with analog, not digital, signals.  

You are both missing the more important issue - signals cannot be decoded in the brain. It's tempting to think that is possible because we are living in a world of images on screens and in print, but these collections of pixels only cohere as images through our visual awareness, not through optical properties. Try thinking of any of the other senses instead - if, instead of images, we want to peer into the decoding of digitized or analog signals associated with the smell of bacon cooking, would we set up a universal kitchen which would mix the aromatic compounds to match the tiny kitchen in the olfactory cortex? Can you not see that you still need a feeling, sensing person to smell the bacon or see the images? Otherwise there is no 'decoding'.

The fundamental problem is *always* going to be the Explanatory Gap. When we talk about signals, we are already talking about awareness. The idea of a brain that can only recognize some small number of objects and tell if they are moving or not is the level of recognition which already exists on the level of an atom. T-cells recognize other cells and molecules. These kinds of sensitivities do not require a brain, they are everywhere, on every level.

Craig

Roger Clough

Jan 8, 2013, 9:38:37 AM
to everything-list
Hi Craig Weinberg
 
Exactly.
 
 
[Roger Clough], [rcl...@verizon.net]
1/8/2013
"Forever is a long time, especially near the end." - Woody Allen


Telmo Menezes

Jan 8, 2013, 11:00:43 AM
to everyth...@googlegroups.com, - MindBrain@yahoogroups.com
Hi Craig,


On Tue, Jan 8, 2013 at 3:17 AM, Craig Weinberg <whats...@gmail.com> wrote:


On Monday, January 7, 2013 7:24:24 PM UTC-5, telmo_menezes wrote:
Hi Craig,


On Tue, Jan 8, 2013 at 12:41 AM, Craig Weinberg <whats...@gmail.com> wrote:


On Monday, January 7, 2013 6:19:33 AM UTC-5, telmo_menezes wrote:



On Sun, Jan 6, 2013 at 8:55 PM, Roger Clough <rcl...@verizon.net> wrote:
Hi Craig Weinberg
 
Sorry, everybody, I was snookered into believing that they had really accomplished the impossible.

So you think this paper is fiction and the video is fabricated? Do people here know something I don't about the authors?

The paper doesn't claim that images from the brain have been decoded,

Yes it does, right in the abstract:
"To demonstrate the power of our approach, we also constructed a Bayesian decoder [8] by combining estimated encoding models with a sampled natural movie prior. The decoder provides remarkable reconstructions of the viewed movies."

The Bayesian decoder is not literally decoding the BOLD and fMRI patterns into images, no more than listing the ingredients of a bag of chips in alphabetical order turns potatoes into words.

If you have a device that chemically analyses stuff and then lists the ingredients, this device is indeed producing a (limited) representation of an object in words. This is not fundamentally different from taking a photo of a potato. It's just that the representations focus on different aspects. And the representation is never the object, be it a photograph or a piece of text.
 
The key is the 'sampled natural movie prior'. That means it is a figurative reconstruction.

Any image is a figurative reconstruction of reality.
 
They are giving you a choice of selecting one video from hundreds, then looking at the common patterns in several people's brains when they choose the same video. They are not decoding the patterns into videos. By 'reconstructions' they are not saying that they literally recreated any part of the visual experience, but rather that they were able to make a composite video from the videos that they used by plugging the Bayesian probability into the data sets. The videos that you see are YouTube videos superimposed, *not in any way* a decoded translation of neural correlates.
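
A minimal sketch of that selection-and-superposition step, with invented shapes and random stand-ins where the study would use the fitted encoding models:

import numpy as np

rng = np.random.default_rng(1)
n_clips, n_voxels = 5000, 200

# Predicted BOLD pattern for every clip in the library (in the study
# these come from the fitted encoding models, not from random numbers).
predicted_bold = rng.standard_normal((n_clips, n_voxels))
clip_frames = rng.random((n_clips, 64, 64))    # one frame per clip

measured_bold = rng.standard_normal(n_voxels)  # what the scanner recorded

# Score each clip by correlation with the measurement, a stand-in for
# the posterior probability under the Bayesian decoder.
z = predicted_bold - predicted_bold.mean(axis=1, keepdims=True)
z /= z.std(axis=1, keepdims=True)
m = (measured_bold - measured_bold.mean()) / measured_bold.std()
scores = z @ m / n_voxels

# "Combining them into one final movie": average the frames of the
# hundred best-matching clips. The result is a blur of library footage.
top = np.argsort(scores)[-100:]
reconstruction = clip_frames[top].mean(axis=0)

Every pixel of the output comes from the clip library; the brain data enters only through the ranking.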

Ok, we won't get anywhere with this :) We have different opinions on what constitutes a visual reconstruction. Yours is more stringent, and requires some notion of pixels. I believe that using video frames as building blocks meets the criteria. It will have a lower fidelity than pixels, but it's still a visual reconstruction and an (imperfect) decoding.
 



 
but the sensational headlines imply that is what they did.

Starting with UC Berkeley itself:

Of course. Does that surprise you? University PR is notoriously hyped. Exciting the public is the stuff that endowments are made of.

That's one way to look at it. The other is that the taxpayers need to know what the big exciting goals are, so that they don't get so mad that we're spending their money.
 
 
 
The video isn't supposed to be anything but fabricated.

ALL videos are fabricated in that sense.

Sure, but a video from a camera on the end of a wire in someone's esophagus is less of a fabrication than a collage of verbal descriptions about digestion.

That's context dependent. The text description might be able to communicate details about digestion that the video cannot.
 
See what I'm driving at? The images are images they got off the internet superimposed over each other - not out of someone's brain activity being interpreted by a computer. The only thing being interpreted or decoded is cross referenced statistics.

Try thinking about it this way. What would the video look like if they plugged the Bayesian decoder algorithm into the regions related to the memory of flavors? Show someone a picture of strawberries, and let's say you get a pattern in the olfactory-gustatory regions of the brain. Show someone else a bunch of pictures of tasty things, and lo and behold, through your statistical regression, you can match up pictures of strawberry candy, strawberry ice cream, etc. with the pictures of strawberries, pink milk, etc. You get a video of blurry pink stuff and proclaim that you have reconstructed the image of strawberry flavor. It's a neat bit of stage magic, but it has nothing at all to do with translating flavor into image.

I don't see anything wrong with that approach, it's a valid decoder.
 
No more than searching strawberries on Google gives routers and servers a taste of strawberry.

Ah, but now you're talking about 1p. 1p is the big mystery, and this work has nothing to do with it. It has to do with the question: "does the brain store information in such a way that it can be accessed and decoded?". The decoder doesn't feel anything, and neither does the brain. Feelings are 1p.
 

 
It's a muddle of YouTube videos superimposed upon each other according to a Bayesian probability reduction.

Yes, and the images you see on your computer screen are just a matrix of molecules artificially made to align in a certain way, so that the light being emitted behind them arrives at your eyes in a way that resembles the light emitted by some real-world scene that it is meant to represent.

Photography is a direct optical analog. The pixels on a computer screen are a digitized analog of photography.

Not really. They require a human brain to imagine the third dimension. Images on a computer screen tell my cat nothing.
 
The images 'reconstructed' are not analogs at all, they are wholly synthetic guesses which are reverse engineered purely from probability. What you see are not in fact images, but mechanically curated noise which reminds us of images.
 
 
Did you think that the video was coming from a brain feed like a TV broadcast? It is certainly not that at all.

Nice straw man + ad hominem you did there!

Sorry, I wasn't trying to do either, although I admit it was condescending. I was trying to point out that it seems like you were saying that brain activity was decoded into visual pixels. I'm not clear really on what your understanding of it is.

No hard feelings :) We're discussing interesting things so I empathise with you by default, even if I disagree with some of the things you say.
 

 
 

The hypothesis is that the brain has some encoding for images.

Where are the encoded images decoded into what we actually see?

In the computer that runs the Bayesian algorithm.

I'm talking about where in the brain are the images that we actually see 'decoded'?

That's like asking me where the good restaurants in Paris are. Spread all over, but at the same time in a lot of specific places. I do not have the mental capacity to fit the restaurant dynamics of Paris in my head. But I can point you to a couple of them, the same way these researchers can point you to a couple of places where those images are.
 
 
 
 
These images can come from the optic nerve, they could be stored in memory or they could be constructed by sophisticated cognitive processes related to creativity, pattern matching and so on. But if you believe that the brain's neural network is a computer responsible for our cognitive processes, the information must be stored there, physically, somehow.

That is the assumption, but it is not necessarily a good one. The problem is that information is only understandable in the context of some form of awareness - an experience of being informed. A machine with no user can only produce different kinds of noise as there is nothing ultimately to discern the difference between a signal and a non-signal.

Sure. That's why the algorithm has to be trained with known videos. So it can learn which brain activity correlates with what 3p accessible images we can all agree upon.

Images aren't 3p. Images are 1p visual experiences inferred through 3p optical presentations.

I disagree slightly. Images are 3p, the effect they have on our consciousness is 1p (thus art).
 
The algorithm can't learn anything about images because it will never experience them in any way.

It can learn how to deliver an image from brain A to brain B, in such a way that brain B will say: "ok, I see what you mean" and brain A will say "yeah, that's what I mean, but the image quality sucks". 

 
 


It's horribly hard to decode what's going on in the brain.

Yet every newborn baby learns to do it all by themselves, without any sign of any decoding theater.

Yes. The newborn baby comes with the genetic material that generates the optimal decoder.
 
 

These researchers thought of a clever shortcut. They expose people to a lot of images and record some measures of brain activity in the visual cortex. Then they use machine learning to match brain states to images. Of course it's probabilistic and noisy. But then they got a video that actually approximates the real images.

You might get the same result out of precisely mapping the movements of the eyes instead.

Maybe. That's not where they took the information from though. They took it from the visual cortex.

That's what makes people jump to the conclusion that they are looking at something that came from a brain rather than YouTube + video editing + simple formula + data sets from experiments that have no particular relation to brains or consciousness.
 
 
What they did may have absolutely nothing to do with how the brain encodes or experiences images, no more than your Google history can approximate the shape of your face.

Google history can only approximate the shape of my face if there is a correlation between the two. In which case my Google history is, in fact, also a description of the shape of my face.

Why would there be a correlation between your Google history and the shape of your face?

Maybe ugly people are more likely to be interested in artificial intelligence and photos of Angelina Jolie? I don't know.
 
 
 
 
So there must be some way to decode brain activity into images.

The killer argument against that is that the brain has no sync signals to generate
the raster lines.

Neither does reality, but we somehow manage to show a representation of it on TV, right?

What human beings see on TV simulates one optical environment with another optical environment. You need to be a human being with a human visual system to be able to watch TV and mistake it for a representation of reality. Some household pets might be briefly fooled also, but mostly other species have no idea why we are staring at that flickering rectangle, or buzzing plastic sheet, or that large collection of liquid crystal flags. Representation is psychological, not material. The map is not the territory.

I agree. I never claimed this was an insight into 1p or anything to do with consciousness. Just that you can extract information from human brains, because that information is represented there somehow. But you're only going to get 3p information.

The information being modeled here visually is not extracted from the human brain. Videos are matched to videos based on incidental correlations of brain activity. The same result could be achieved in many different ways having nothing to do with the brain at all. You could have people listen to one of several songs and draw pictures of how the music makes them feel, and then write a program which figures out which song they most likely drew based on the statistics of what known subjects drew - voila, you have a picture of music.

You have a representation of music, sure. Art tries to pull that sort of stuff off all the time.

Cheers
Telmo.
 

Craig Weinberg

Jan 8, 2013, 9:00:06 PM
to everyth...@googlegroups.com, - MindBrain@yahoogroups.com


On Tuesday, January 8, 2013 11:00:43 AM UTC-5, telmo_menezes wrote:
Hi Craig,


On Tue, Jan 8, 2013 at 3:17 AM, Craig Weinberg <whats...@gmail.com> wrote:


On Monday, January 7, 2013 7:24:24 PM UTC-5, telmo_menezes wrote:
Hi Craig,


On Tue, Jan 8, 2013 at 12:41 AM, Craig Weinberg <whats...@gmail.com> wrote:


On Monday, January 7, 2013 6:19:33 AM UTC-5, telmo_menezes wrote:



On Sun, Jan 6, 2013 at 8:55 PM, Roger Clough <rcl...@verizon.net> wrote:
Hi Craig Weinberg
 
Sorry, everybody, I was snookered into believing that they had really accomplished the impossible.

So you think this paper is fiction and the video is fabricated? Do people here know something I don't about the authors?

The paper doesn't claim that images from the brain have been decoded,

Yes it does, right in the abstract:
"To demonstrate the power of our approach, we also constructed a Bayesian decoder [8] by combining estimated encoding models with a sampled natural movie prior. The decoder provides remarkable reconstructions of the viewed movies."

The Bayesian decoder is not literally decoding the BOLD and fMRI patterns into images, no more than listing the ingredients of a bag of chips in alphabetical order turns potatoes into words.

If you have a device that chemically analyses stuff and then lists the ingredients, this device is indeed producing a (limited) representation of an object in words. This is not fundamentally different from taking a photo of a potato. It's just that the representations focus on different aspects. And the representation is never the object, be it a photograph or a piece of text.

A representation is not a transmutation. A story about a potato is not french fries. In the case of consciousness, the experience of seeing is the *presentation*, not the representation, so that any claim to have extracted something from that presentation would not be a description about the experience but would actually be a captured portion of that experience. This experiment has nothing to do with that. It's a card trick. You look around the table and see who looks like they have a good hand, and then build a probability map based on what cards have been played to come up with some inferences about what a player sees in his hand. This is not in any way the same as tapping into the players visual awareness and putting it on video.
 
 
The key is the 'sampled natural movie prior'. That means it is a figurative reconstruction.

Any image is a figurative reconstruction of reality.

Reality is a figurative reconstruction of perceptual expectation. Of the two forms of perception, direct and indirect, measurements and logical inference are the more indirect and therefore the more representative. Only first person experience is unimpeachably genuine.
 
 
They are giving you a choice of selecting one video from hundreds, then looking at the common patterns in several people's brains when they choose the same video. They are not decoding the patterns into videos. By 'reconstructions' they are not saying that they literally recreated any part of the visual experience, but rather that they were able to make a composite video from the videos that they used by plugging the Bayesian probability into the data sets. The videos that you see are YouTube videos superimposed, *not in any way* a decoded translation of neural correlates.

Ok, we won't get anywhere with this :) We have different opinions on what constitutes a visual reconstruction. Yours is more stringent, and requires some notion of pixels. I believe that using video frames as building blocks meets the criteria. It will have a lower fidelity than pixels, but it's still a visual reconstruction and an (imperfect) decoding.

The pixels aren't the issue though. It still sounds to me like you think that what you are looking at came from analyzing people's brains. That is not the case. The images are stock video footage that has been subjected to probabilistic collage. The shapes you see are not being mapped from the visual cortex patterns, they are being picked out of the hat and mashed together intelligently with the goal of guessing similar patterns.  They would look the same if the Bayesian decoder was analyzing patterns of hair care product usage or correlations of pet names.

 



 
but the sensational headlines imply that is what they did.

Starting with UC Berkeley itself:

Of course. Does that surprise you? University PR is notoriously hyped. Exciting the public is the stuff that endowments are made of.

That's one way to look at it. The other is that the taxpayers need to know what the big exciting goals are, so that they don't get so mad that we're spending their money.

That too. Still, they have every reason to hype these sensational stories up and none to be conservative about it.
 
 
 
 
The video isn't supposed to be anything but fabricated.

ALL videos are fabricated in that sense.

Sure, but a video from a camera on the end of a wire in someone's esophagus is less of a fabrication than a collage of verbal descriptions about digestion.

That's context dependent. The text description might be able to communicate details about digestion that the video cannot.

I agree, but that isn't what we're talking about. You may indeed be able to figure out much more about what visual capacities a person has by using neuroscience than experiencing those capacities first hand, but that doesn't make the information about seeing the same thing as seeing. The menu is not the meal. The menu may include a whole page on each ingredient, but you still can't eat it. We can, by process of elimination, figure out what a person has looked at, but we cannot see what they see unless we share neurology with them.
 
 
See what I'm driving at? The images are images they got off the internet superimposed over each other - not out of someone's brain activity being interpreted by a computer. The only thing being interpreted or decoded is cross referenced statistics.

Try thinking about it this way. What would the video look like if they plugged the Bayesian decoder algorithm into the regions related to the memory of flavors? Show someone a picture of strawberries, and let's say you get a pattern in the olfactory-gustatory regions of the brain. Show someone else a bunch of pictures of tasty things, and lo and behold, through your statistical regression, you can match up pictures of strawberry candy, strawberry ice cream, etc. with the pictures of strawberries, pink milk, etc. You get a video of blurry pink stuff and proclaim that you have reconstructed the image of strawberry flavor. It's a neat bit of stage magic, but it has nothing at all to do with translating flavor into image.

I don't see anything wrong with that approach, it's a valid decoder.

You don't see anything wrong with the necessity for your brain to cook a miniature hamburger every time that you remember eating a hamburger?

 
No more than searching strawberries on Google gives routers and servers a taste of strawberry.

Ah, but now you're talking about 1p. 1p is the big mystery,

Images are 1p.
 
and this work has nothing to do with it. It has to do with the question: "does the brain store information in such a way that it can be accessed and decoded?". The decoder doesn't feel anything, and neither does the brain. Feelings are 1p.

What is being accessed is not information which corresponds to images (1p visual feelings). All that is being accessed is the presence of recognizable activity patterns in the brain which are consistent enough to pick an image out of a lineup with. We are not understanding images in the brain, we are just being clever about statistically matching inputs with outputs of a black box. The videos are not peering into the black box, they are predicting (with incredibly low fidelity) inputs from outputs and skipping the black box altogether.
 
 

 
It's a muddle of YouTube videos superimposed upon each other according to a Bayesian probability reduction.

Yes, and the images you see on your computer screen are just a matrix of molecules artificially made to align in a certain way, so that the light being emitted behind them arrives at your eyes in a way that resembles the light emitted by some real-world scene that it is meant to represent.

Photography is a direct optical analog. The pixels on a computer screen are a digitized analog of photography.

Not really. They require a human brain to imagine the third dimension.

Doesn't any organism with binocular vision presumably imagine the third dimension? Some plankton have two eyes, but no brain.

 
Images on a computer screen tell my cat nothing.
 

I wouldn't say that... http://www.cattv.com/catwebsite.php

 
 
The images 'reconstructed' are not analogs at all, they are wholly synthetic guesses which are reverse engineered purely from probability. What you see are not in fact images, but mechanically curated noise which reminds us of images.
 
 
Did you think that the video was coming from a brain feed like a TV broadcast? It is certainly not that at all.

Nice straw man + ad hominem you did there!

Sorry, I wasn't trying to do either, although I admit it was condescending. I was trying to point out that it seems like you were saying that brain activity was decoded into visual pixels. I'm not clear really on what your understanding of it is.

No hard feelings :) We're discussing interesting things so I empathise with you by default, even if I disagree with some of the things you say.

Cool. I actually would have agreed with you and a lot of people here at different times in my life. It's only been lately in the last five years or so that I have put together this other way of understanding everything. It gets lost in the debating, because I feel like I have to make my points about what is different or new about how I see things, but I do understand that other ways of looking at it make a lot of sense too - so much so that I suppose I am drawn only to digging into the weak spots to try to  get others to see the secret exit that I think I've found...

 

 
 

The hypothesis is that the brain has some encoding for images.

Where are the encoded images decoded into what we actually see?

In the computer that runs the Bayesian algorithm.

I'm talking about where in the brain are the images that we actually see 'decoded'?

That's like asking me where the good restaurants in Paris are. Spread all over, but at the same time in a lot of specific places. I do not have the mental capacity to fit the restaurant dynamics of Paris in my head. But I can point you to a couple of them, the same way these researchers can point you to a couple of places where those images are.

There aren't any images there though. Even if the experiment did what the hype implies that they did (or almost did), that would still mean that the brain requires an external decoder to turn invisible integrations of 'information' or action potentials or whatever into what we actually see through our eyes right now.  It's hard (maybe impossible) to get it if you aren't seeing it the right way, but your whole idea of images comes from actually seeing them so you are taking them for granted and assuming that they are just seeable because the brain does this or that. The point is, and blindsight confirms this, that we can get optical information without seeing, so that there is no justification for, or evidence of any decoding of any images in the brain. We see brain patterns that look nothing at all like images.

 
 
 
 
These images can come from the optic nerve, they could be stored in memory or they could be constructed by sophisticated cognitive processes related to creativity, pattern matching and so on. But if you believe that the brain's neural network is a computer responsible for our cognitive processes, the information must be stored there, physically, somehow.

That is the assumption, but it is not necessarily a good one. The problem is that information is only understandable in the context of some form of awareness - an experience of being informed. A machine with no user can only produce different kinds of noise as there is nothing ultimately to discern the difference between a signal and a non-signal.

Sure. That's why the algorithm has to be trained with known videos. So it can learn which brain activity correlates with what 3p accessible images we can all agree upon.

Images aren't 3p. Images are 1p visual experiences inferred through 3p optical presentations.

I disagree slightly. Images are 3p, the effect they have on our consciousness is 1p (thus art).

You disagree with yourself. In your comment about cats and 3-D, you admit that the contents of visual experience depend entirely on the capacities of the hardware. You can't have it both ways. How could an image be 3p? 3p doesn't have a size. These letters are a wall of bits 100 stories high or a tiny flickering speck, or an X-ray glow, etc. They are only letter-shaped to systems with human-scale optical discernment and interpretation capacities. You can even interfere with those capacities with alcohol or drugs.

 
The algorithm can't learn anything about images because it will never experience them in any way.

It can learn how to deliver an image from brain A to brain B, in such a way that brain B will say: "ok, I see what you mean" and brain A will say "yeah, that's what I mean, but the image quality sucks". 

It still won't know it's an image. If it stumbled on a way to make brains happy, it would still not know anything at all about images, even if it could perfectly predict which dataset we would be likely to call a better image.
 

 
 


It's horribly hard to decode what's going on in the brain.

Yet every newborn baby learns to do it all by themselves, without any sign of any decoding theater.

Yes. The newborn baby comes with the genetic material that generates the optimal decoder.
 
 

These researchers thought of a clever shortcut. They expose people to a lot of images and record some measures of brain activity in the visual cortex. Then they use machine learning to match brain states to images. Of course it's probabilistic and noisy. But then they got a video that actually approximates the real images.

You might get the same result out of precisely mapping the movements of the eyes instead.

Maybe. That's not where they took the information from though. They took it from the visual cortex.

That's what makes people jump to the conclusion that they are looking at something that came from a brain rather than YouTube + video editing + simple formula + data sets from experiments that have no particular relation to brains or consciousness.
 
 
What they did may have absolutely nothing to do with how the brain encodes or experiences images, no more than your Google history can approximate the shape of your face.

Google history can only approximate the shape of my face if there is a correlation between the two. In which case my Google history is, in fact, also a description of the shape of my face.

Why would there be a correlation between your Google history and the shape of your face?

Maybe ugly people are more likely to be interested in artificial intelligence and photos of Angelina Jolie? I don't know.

Still, there is nothing in a person's Google history which would be a reliable key to predicting what their own face looked like, just as you couldn't know what you yourself looked like if you had no access to any reflection or to other people to tell you.

 
 
 
 
So there must be some way to decode brain activity into images.

The killer argument against that is that the brain has no sync signals to generate
the raster lines.

Neither does reality, but we somehow manage to show a representation of it on TV, right?

What human beings see on TV simulates one optical environment with another optical environment. You need to be a human being with a human visual system to be able to watch TV and mistake it for a representation of reality. Some household pets might be briefly fooled also, but mostly other species have no idea why we are staring at that flickering rectangle, or buzzing plastic sheet, or that large collection of liquid crystal flags. Representation is psychological, not material. The map is not the territory.

I agree. I never claimed this was an insight into 1p or anything to do with consciousness. Just that you can extract information from human brains, because that information is represented there somehow. But you're only going to get 3p information.

The information being modeled here visually is not extracted from the human brain. Videos are matched to videos based on incidental correlations of brain activity. The same result could be achieved in many different ways having nothing to do with the brain at all. You could have people listen to one of several songs and draw pictures of how the music makes them feel, and then write a program which figures out which song they most likely drew based on the statistics of what known subjects drew - voila, you have a picture of music.

You have a representation of music, sure. Art tries to pull that sort of stuff off all the time.

A shoe can be a representation of music too, that doesn't make it a scientific breakthrough. To say that we are decoding images from the visual cortex would require that we are actually seeing something which isomorphically bridges neurology and visual 1p experience. This is not that at all, it is again, a card trick or cold reading based on the similarity of a-signifying patterns in the brain, not on the content of those patterns themselves.

Salutations from across the nation,
Craig


 

Telmo Menezes

Jan 9, 2013, 6:18:37 AM
to everyth...@googlegroups.com

Hi Craig,
 

Cool. I actually would have agreed with you and a lot of people here at different times in my life. It's only been lately in the last five years or so that I have put together this other way of understanding everything. It gets lost in the debating, because I feel like I have to make my points about what is different or new about how I see things, but I do understand that other ways of looking at it make a lot of sense too - so much so that I suppose I am drawn only to digging into the weak spots to try to  get others to see the secret exit that I think I've found...

Ok, this sounds interesting and I'd like to know more. I've been away from the mailing list in the last few years, so maybe you've talked about it before. Would you tell me about that secret exit?

Craig Weinberg

Jan 9, 2013, 8:50:57 AM
to everyth...@googlegroups.com

The secret exit is to reverse the assumption that consciousness occurs from functions or substances. Even though our human consciousness depends on a living human body (as far as we know for sure), that may be because of the degree of elaboration required to develop a human quality of experience, not because the fundamental capacity to perceive and participate depends on anything at all.

Being inside of a human experience means being inside of an animal experience, an organism's experience, a cellular and molecular level experience. The alternative means picking an arbitrary level at which total lack of awareness suddenly changes into perception and participation for no conceivable reason. Instead of hanging on to the hope of finding such a level or gate, the secret is to see that there are many levels and gates but that they are qualitative, with each richer integration of qualia reframing the levels left behind in a particular way, and that way (another key) is to reduce it from a personal, animistic temporal flow of 1p meaning and significant preference  to impersonal, mechanistic spatial bodies ruled by cause-effect and chance/probability. 1p and 3p are relativistic, but what joins them is the capacity to discern the difference.

Rather than sense i/o being a function or logic take for granted, flip it over so that logic is the 3p shadow of sense. The 3p view is a frozen snapshot of countless 1p views as seen from the outside, and the qualities of the 3p view depend entirely on the nature of the 1p perceiver-partcipant. Sense is semiotic. Its qualitative layers are partitioned by habit and interpretive inertia, just as an ambiguous image looks different depending on how you personally direct your perception, or how a book that you read when you are 12 years old can have different meanings at 18 or 35. The meaning isn't just 'out there', it's literally, physically "in here". If this is true, then the entire physical universe doubles in size, or really is squared as every exterior surface is a 3p representation of an entire history of 1p experience. Each acorn is a potential for oak tree forest, an encyclopedia of evolution and cosmology, so that the acorn is just a semiotic placeholder which is scaled and iconicized appropriately as a consequence of the relation of our human quality awareness and that of the evolutionary-historical-possible future contexts which we share with it (or the whole ensemble of experiences in which 'we' are both embedded as strands of the story of the universe rather than just human body and acorn body or cells and cells etc).

To understand the common thread for all of it, always go back to the juxtaposition of 1p vs 3p, not *that* there is a difference, but the qualities of *what* those differences are - the sense of the juxtaposition.

http://media.tumblr.com/tumblr_m9y9by2XXw1qe3q3v.jpg
http://media.tumblr.com/tumblr_m9y9boN5rP1qe3q3v.jpg

That's where I get sense and motive, or perception and participation. The symmetry is more primitive than either matter or mind, so that it isn't one which builds a bridge to the other but sense which divides itself on one level while retaining unity on another, creating not just dualism but a continuum of monism, dualism, dialectic, trichotomy, syzygy, etc. Many levels and perspectives on sense within sense.

http://multisenserealism.com/about/

Craig

Telmo Menezes

Jan 10, 2013, 4:58:32 PM
to everyth...@googlegroups.com
Hi Craig,

I tend to agree with what you say (or what I understand of it). Despite my belief that it is possible to extract memories (or their 3p shadows) from a brain, I do not believe in the neuroscience hypothesis that consciousness emerges from brain activity. I'm not sure I believe that there is a degree of consciousness in everything, but it sounds more plausible than the emergence from complexity idea.

Still I feel that you avoid some questions. Maybe it's just my lack of understanding of what you're saying. For example: what is the primary "stuff" in your theory? In the same sense that for materialists it's subatomic particles and for comp it's N, +, *. What's yours?



meekerdb

Jan 10, 2013, 5:15:11 PM
to everyth...@googlegroups.com
On 1/10/2013 1:58 PM, Telmo Menezes wrote:
Hi Craig,

I tend to agree with what you say (or what I understand of it). Despite my belief that it is possible to extract memories (or their 3p shadows) from a brain, I do not believe in the neuroscience hypothesis that consciousness emerges from brain activity. I'm not sure I believe that there is a degree of consciousness in everything, but it sounds more plausible than the emergence from complexity idea.

Do you agree that intelligence requires complexity?

Brent

Telmo Menezes

Jan 10, 2013, 5:28:36 PM
to everyth...@googlegroups.com
I'm not sure intelligence and complexity are two different things.
 

Brent


meekerdb

Jan 10, 2013, 6:01:23 PM
to everyth...@googlegroups.com
On 1/10/2013 2:28 PM, Telmo Menezes wrote:



On Thu, Jan 10, 2013 at 11:15 PM, meekerdb <meek...@verizon.net> wrote:
On 1/10/2013 1:58 PM, Telmo Menezes wrote:
Hi Craig,

I tend to agree with what you say (or what I understand of it). Despite my belief that it is possible to extract memories (or their 3p shadows) from a brain, I do not believe in the neuroscience hypothesis that consciousness emerges from brain activity. I'm not sure I believe that there is a degree of consciousness in everything, but it sounds more plausible than the emergence from complexity idea.

Do you agree that intelligence requires complexity?

I'm not sure intelligence and complexity are two different things.

Of course they're two different things. An oak tree is complex but not intelligent. The question is whether you think something can be intelligent without being complex?

Brent

Telmo Menezes

Jan 10, 2013, 6:15:25 PM
to everyth...@googlegroups.com
I don't agree that an oak tree is not intelligent. It changes itself and its environment in non-trivial ways that promote its continuing existence. What's your definition of intelligence?

meekerdb

Jan 10, 2013, 6:58:01 PM
to everyth...@googlegroups.com
On 1/10/2013 3:15 PM, Telmo Menezes wrote:



On Fri, Jan 11, 2013 at 12:01 AM, meekerdb <meek...@verizon.net> wrote:
On 1/10/2013 2:28 PM, Telmo Menezes wrote:



On Thu, Jan 10, 2013 at 11:15 PM, meekerdb <meek...@verizon.net> wrote:
On 1/10/2013 1:58 PM, Telmo Menezes wrote:
Hi Craig,

I tend to agree with what you say (or what I understand of it). Despite my belief that it is possible to extract memories (or their 3p shadows) from a brain, I do not believe in the neuroscience hypothesis that consciousness emerges from brain activity. I'm not sure I believe that there is a degree of consciousness in everything, but it sounds more plausible than the emergence from complexity idea.

Do you agree that intelligence requires complexity?

I'm not sure intelligence and complexity are two different things.

Of course they're two different things. An oak tree is complex but not intelligent. The question is whether you think something can be intelligent without being complex?

I don't agree that an oak tree is not intelligent. It changes itself and its environment in non-trivial ways that promote its continuing existence. What's your definition of intelligence?

What's yours? I don't care what example you use, trees, rocks, bacteria, sewing machines... Are you going to contend that everything is intelligent and everything is complex, so that the words lose all meaning? Do you think there can be something that is intelligent but not complex (and use whatever definitions of "intelligent" and "complex" you want).

Brent

Telmo Menezes

Jan 10, 2013, 7:23:12 PM
to everyth...@googlegroups.com
On Fri, Jan 11, 2013 at 12:58 AM, meekerdb <meek...@verizon.net> wrote:
On 1/10/2013 3:15 PM, Telmo Menezes wrote:



On Fri, Jan 11, 2013 at 12:01 AM, meekerdb <meek...@verizon.net> wrote:
On 1/10/2013 2:28 PM, Telmo Menezes wrote:



On Thu, Jan 10, 2013 at 11:15 PM, meekerdb <meek...@verizon.net> wrote:
On 1/10/2013 1:58 PM, Telmo Menezes wrote:
Hi Craig,

I tend to agree with what you say (or what I understand of it). Despite my belief that it is possible to extract memories (or their 3p shadows) from a brain, I do not believe in the neuroscience hypothesis that consciousness emerges from brain activity. I'm not sure I believe that there is a degree of consciousness in everything, but it sounds more plausible than the emergence from complexity idea.

Do you agree that intelligence requires complexity?

I'm not sure intelligence and complexity are two different things.

Of course they're two different things. An oak tree is complex but not intelligent. The question is whether you think something can be intelligent without being complex?

I don't agree that an oak tree is not intelligent. It changes itself and its environment in non-trivial ways that promote its continuing existence. What's your definition of intelligence?

What's yours?  I don't care what example you use, trees, rocks, bacteria, sewing machines...

If you allow for the concepts of agent, perception, action and goal, my definition is: the degree to which an agent can achieve its goals by perceiving itself and its environment and using that information to predict the outcome of its actions, for the purpose of choosing the action that has the highest probability of leading to a future state where the goals are achieved. Intelligence can then be quantified by comparing the effectiveness of the agent in achieving its goals to that of an agent acting randomly.

But you can only compare intelligence in relation to a set of goals. How do you compare the intelligence of two agents with different goals and environments? Any criterion is arbitrary. We like to believe we're more intelligent because we're more complex, but you can also believe that bacteria are more intelligent because they are more resilient to extinction.
 
Are you going to contend that everything is intelligent and everything is complex, so that the words lose all meaning?

I never said that. I do think that intelligence is a mushy concept to begin with, and that's not my fault.
 
Do you think there can be something that is intelligent but not complex (and use whatever definitions of "intelligent" and "complex" you want).

A thermostat is much less complex than a human brain but intelligent under my definition.
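
As a toy illustration of that definition (the environment, numbers, and agents below are invented for the sake of the example), here is a bare-bones thermostat measured against a random controller pursuing the same goal:

import random

def simulate(policy, steps=1000, target=20.0):
    """Return mean absolute deviation from the target temperature."""
    temp, error = 15.0, 0.0
    for _ in range(steps):
        temp += policy(temp, target)         # the agent acts
        temp += random.uniform(-0.5, 0.5)    # the environment drifts
        error += abs(temp - target)
    return error / steps

def thermostat(temp, target):
    # Perceives itself and its environment, acts toward the goal.
    return 0.3 if temp < target else -0.3

def random_agent(temp, target):
    return random.choice([-0.3, 0.3])

random.seed(0)
print(simulate(thermostat), simulate(random_agent))

The thermostat holds the temperature near the target while the random agent wanders; the gap between the two error scores is the "intelligence" the definition assigns, relative to that one goal.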

meekerdb

Jan 10, 2013, 7:33:06 PM
to everyth...@googlegroups.com
On 1/10/2013 4:23 PM, Telmo Menezes wrote:
Do you think there can be something that is intelligent but not complex (and use whatever definitions of "intelligent" and "complex" you want).

A thermostat is much less complex than a human brain but intelligent under my definition.

But much less intelligent.  So in effect you think there is a degree of intelligence in everything, just like you believe there's a degree of consciousness in everything.  And the degree of intelligence correlates with the degree of complexity ...but you don't think the same about consciousness?

Brent

Craig Weinberg

Jan 11, 2013, 12:10:55 AM
to everyth...@googlegroups.com


On Thursday, January 10, 2013 4:58:32 PM UTC-5, telmo_menezes wrote:
Hi Craig,

I tend to agree with what you say (or what I understand of it). Despite my belief that it is possible to extract memories (or their 3p shadows) from a brain,

As long as you have another brain to experience the extracted memories in 1p, then I wouldn't rule out the possibility of a 3p transmission of some experiential content from one brain to another.
 
I do not believe in the neuroscience hypothesis that consciousness emerges from brain activity. I'm not sure I believe that there is a degree of consciousness in everything, but it sounds more plausible than the emergence from complexity idea.

Still I feel that you avoid some questions. Maybe it's just my lack of understanding of what you're saying. For example: what is the primary "stuff" in your theory? In the same sense that for materialists it's subatomic particles and for comp it's N, +, *. What's yours?

For me the primary stuff is sensory-motor presence. Particles are public sense representations. N, +, * are private sense representations. Particles represent the experience of sensory-motor obstruction as topological bodies. Integers and arithmetic operators represent the sensory-motor relations of public objects as private logical figures.

Craig

Craig Weinberg

Jan 11, 2013, 12:20:23 AM
to everyth...@googlegroups.com

I was thinking today that a decent way of defining intelligence is just 'The ability to know "what's going on"'.

This makes it clear that intelligence refers to the degree of sophistication of awareness, not just complexity of function or structure. This is why a computer which has complex function and structure has no authentic intelligence and has no idea 'what's going on'. Intelligence however has everything to do with sensitivity, integration, and mobilization of awareness as an asset, i.e. to be directed for personal gain or shared enjoyment, progress, etc. Knowing what's going on implicitly means caring what goes on, which also supervenes on biological quality investment in experience.

Craig

meekerdb

Jan 11, 2013, 12:27:54 AM
to everyth...@googlegroups.com
Which is why I think an intelligent machine must be one that acts in its environment.  Simply 'being aware' or 'knowing' are meaningless without the ability and motives to act on them.

Brent

Telmo Menezes

Jan 11, 2013, 5:12:48 AM
to everyth...@googlegroups.com
On Fri, Jan 11, 2013 at 1:33 AM, meekerdb <meek...@verizon.net> wrote:
On 1/10/2013 4:23 PM, Telmo Menezes wrote:
Do you think there can be something that is intelligent but not complex (and use whatever definitions of "intelligent" and "complex" you want).

A thermostat is much less complex than a human brain but intelligent under my definition.

But much less intelligent. 

That's your conclusion, not mine. According to my definition you can only compare thermostats being good at being thermostats and Brents being good at being Brents. Because you can only compare intelligence against the same set of goals. Otherwise you're just saying that intelligence A is more complex than intelligence B. Human intelligence requires a certain level of complexity, bacteria intelligence another. That's all.

General Artificial Intelligence is not general at all - what we really want is for it to be specifically good at interacting with humans and pursuing human goals (ours, not the AI's - otherwise people will say it's dumb).
 
So in effect you think there is a degree of intelligence in everything, just like you believe there's a degree of consciousness in everything. 

I said I'm more inclined to believe in a degree of consciousness in everything than in intelligence emerging from complexity.
 
And the degree of intelligence correlates with the degree of complexity

Again, your conclusion, not mine.
 
...but you don't think the same about consciousness?

Brent


Craig Weinberg

Jan 11, 2013, 8:07:47 AM
to everyth...@googlegroups.com

Sense and motive are inseparable ontologically, although they can be interleaved by level. A plant for instance has no need to act on the world to the same degree as an organism which can move its location, but the cells that make up the plant act to grow and direct it toward light, extend roots to water and nutrients, etc. Ontologically however, there is no way to really have awareness which matters without some participatory opportunity or potential for that opportunity.

The problem with a machine (any machine) is that at the level at which it is a machine, it has no way to participate. By definition a machine does whatever it is designed to do. Anything that we use as a machine has to be made of something which we can predict and control reliably, so that its sensory-motive capacities are very limited by definition. Its range of 'what's going on' has to be very narrow. The internet, for instance, passes a tremendous number of events through electronic circuits, but the content of all of it is entirely lost on it. We use the internet to increase our sense and inform our motives, but its sense and motive does not increase at all.

Craig

Brent

Roger Clough

Jan 11, 2013, 9:32:31 AM
to everything-list
Hi Craig Weinberg

Due to their universal perceptions, monads should be extremely complex.


[Roger Clough], [rcl...@verizon.net]
1/11/2013
"Forever is a long time, especially near the end." - Woody Allen

Richard Ruquist

Jan 11, 2013, 9:56:26 AM
to everyth...@googlegroups.com
Yes, Roger.

They come with 500 topo holes thru which super EM flux winds.
Given perhaps 6 quantum states for the flux,
there are 6^500 different types of monads.
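Taking those figures at face value, the count works out to

6^{500} = 10^{500 \log_{10} 6} \approx 10^{389},

vastly more monad types than the roughly 10^{80} particles in the observable universe.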
Richard

Roger Clough

Jan 11, 2013, 11:05:29 AM
to everything-list
Hi Richard Ruquist


For the umpteenth time, monads are not physical; they cannot be some kind of product of EM waves.

[Roger Clough], [rcl...@verizon.net]
1/11/2013
"Forever is a long time, especially near the end." - Woody Allen

Richard Ruquist

Jan 11, 2013, 2:07:13 PM
to everyth...@googlegroups.com
Right. Monads are below the quantum level and you have argued,
correctly I think, that not even quantum waves are physical. However,
monads may have a complex structure as you say below <snipped> and
string theory derives what that complex structure looks like including
the super EM flux that may be what strings are made of.

meekerdb

Jan 11, 2013, 2:20:13 PM
to everyth...@googlegroups.com
On 1/11/2013 2:12 AM, Telmo Menezes wrote:



On Fri, Jan 11, 2013 at 1:33 AM, meekerdb <meek...@verizon.net> wrote:
On 1/10/2013 4:23 PM, Telmo Menezes wrote:
Do you think there can be something that is intelligent but not complex (and use whatever definitions of "intelligent" and "complex" you want).

A thermostat is much less complex than a human brain but intelligent under my definition.

But much less intelligent. 

That's your conclusion, not mine. According to my definition you can only compare thermostats being good at being thermostats and Brents being good at being Brents. Because you can only compare intelligence against the same set of goals. Otherwise you're just saying that intelligence A is more complex than intelligence B. Human intelligence requires a certain level of complexity, bacteria intelligence another. That's all.

So you've removed all meaning from intelligence. Rocks are smart at being rocks, we just have to recognize that their goal is to be rocks.

Maybe we can stop dancing around the question by referring to human-level-intelligence and then rephrasing the question as, "Do you think human-like-intelligence requires human-like-complexity?"

Brent

Bruno Marchal

Jan 11, 2013, 2:44:10 PM
to everyth...@googlegroups.com
On 10 Jan 2013, at 23:28, Telmo Menezes wrote:




On Thu, Jan 10, 2013 at 11:15 PM, meekerdb <meek...@verizon.net> wrote:
On 1/10/2013 1:58 PM, Telmo Menezes wrote:
Hi Craig,

I tend to agree with what you say (or what I understand of it). Despite my belief that it is possible to extract memories (or their 3p shadows) from a brain, I do not believe in the neuroscience hypothesis that consciousness emerges from brain activity. I'm not sure I believe that there is a degree of consciousness in everything, but it sounds more plausible than the emergence from complexity idea.

Do you agree that intelligence requires complexity?

I'm not sure intelligence and complexity are two different things.

Hmm... 

I have a theory of intelligence. It has a strong defect, as it makes many things intelligent. But not everything.

The machine X is intelligent, if it is not stupid.

And the machine X is stupid in two circumstances. Either she asserts that Y is intelligent, or she asserts that Y is stupid. (Y can be equal to X).

In that theory, a pebble is intelligent, as no one has ever heard a pebble asserting that some other pebble, or whatever, is stupid or intelligent.

(that theory is almost only an identification of intelligence with consistency (Dt)).
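
Spelled out (my reading of it, with \vdash_X standing for "X asserts"):

\mathrm{stupid}(X) \equiv \exists Y\, (\vdash_X \mathrm{intelligent}(Y) \lor \vdash_X \mathrm{stupid}(Y))
\mathrm{intelligent}(X) \equiv \lnot\, \mathrm{stupid}(X)

A machine that asserts nothing on the matter is intelligent by default, which is how this comes close to consistency: Dt, i.e. \lnot B f, never asserting the false.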

Intelligence is needed to develop competences.

But competences can have a negative feedback on intelligence. 

Bruno



 

Brent


meekerdb

Jan 11, 2013, 4:08:20 PM
to everyth...@googlegroups.com
On 1/11/2013 11:44 AM, Bruno Marchal wrote:

On 10 Jan 2013, at 23:28, Telmo Menezes wrote:




On Thu, Jan 10, 2013 at 11:15 PM, meekerdb <meek...@verizon.net> wrote:
On 1/10/2013 1:58 PM, Telmo Menezes wrote:
Hi Craig,

I tend to agree with what you say (or what I understand of it). Despite my belief that it is possible to extract memories (or their 3p shadows) from a brain, I do not believe in the neuroscience hypothesis that consciousness emerges from brain activity. I'm not sure I believe that there is a degree of consciousness in everything, but it sounds more plausible than the emergence from complexity idea.

Do you agree that intelligence requires complexity?

I'm not sure intelligence and complexity are two different things.

Hmm... 

I have a theory of intelligence. It has a strong defect, as it makes many things intelligent. But not everything.

The machine X is intelligent, if it is not stupid.

And the machine X is stupid in two circumstances. Either she asserts that Y is intelligent, or she asserts that Y is stupid. (Y can be equal to X).

So if X is smart she asserts Y is not intelligent or Y is not stupid.  :-)

Brent

Bruno Marchal

Jan 12, 2013, 5:38:14 AM
to everyth...@googlegroups.com
On 11 Jan 2013, at 14:07, Craig Weinberg wrote:



On Friday, January 11, 2013 12:27:54 AM UTC-5, Brent wrote:
On 1/10/2013 9:20 PM, Craig Weinberg wrote:


On Thursday, January 10, 2013 7:33:06 PM UTC-5, Brent wrote:
On 1/10/2013 4:23 PM, Telmo Menezes wrote:
Do you think there can be something that is intelligent but not complex (and use whatever definitions of "intelligent" and "complex" you want).

A thermostat is much less complex than a human brain but intelligent under my definition.

But much less intelligent.  So in effect you think there is a degree of intelligence in everything, just like you believe there's a degree of consciousness in everything.  And the degree of intelligence correlates with the degree of complexity ...but you don't think the same about consciousness?

Brent

I was thinking today that a decent way of defining intelligence is just 'The ability to know "what's going on"'.

This makes it clear that intelligence refers to the degree of sophistication of awareness, not just complexity of function or structure. This is why a computer which has complex function and structure has no authentic intelligence and has no idea 'what's going on'. Intelligence however has everything to do with sensitivity, integration, and mobilization of awareness as an asset, i.e. to be directed for personal gain or shared enjoyment, progress, etc. Knowing what's going on implicitly means caring what goes on, which also supervenes on biological quality investment in experience.

Which is why I think an intelligent machine must be one that acts in its environment.  Simply 'being aware' or 'knowing' are meaningless without the ability and motives to act on them.

Sense and motive are inseparable ontologically, although they can be interleaved by level. A plant for instance has no need to act on the world to the same degree as an organism which can move its location, but the cells that make up the plant act to grow and direct it toward light, extend roots to water and nutrients, etc. Ontologically however, there is no way to really have awareness which matters without some participatory opportunity or potential for that opportunity.

The problem with a machine (any machine) is that at the level at which it is a machine, it has no way to participate. By definition a machine does whatever it is designed to do.

We can argue that "natural machines" are not designed but selected, even partially self-selected through the choice of sexual partners.



Anything that we use as a machine has to be made of something which we can predict and control reliably,

Human-made machines are designed in this way. 




so that its sensory-motive capacities are very limited by definition. Its range of 'what's going on' has to be very narrow. The internet, for instance, passes a tremendous number of events through electronic circuits, but the content of all of it is entirely lost on it. We use the internet to increase our sense and inform our motives, but its own sense and motive do not increase at all.

Our computers are not encouraged to develop themselves. They are sorts of slaves. But machines in general are not predictable, unless we limit them in some way, as we usually do (a bit less so in AI research, but still so for the applications: consumers want obedient machines).
You still have a pre-Gödel or pre-Turing conception of machine. We just don't know what universal machines/numbers are capable of.
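
The unpredictability claim can be sketched with nothing beyond the classic diagonal argument: if a total predictor of machine behaviour existed, the following program would refute it on itself (halts is hypothetical; the point is precisely that it cannot exist):

# Suppose halts(f, x) always returned True/False correctly,
# predicting whether f(x) eventually halts.
def make_contrarian(halts):
    def g(f):
        if halts(f, f):      # predictor says f(f) halts...
            while True:      # ...so loop forever instead,
                pass
        return 0             # otherwise halt at once.
    return g

# For g = make_contrarian(halts), g(g) halts exactly when the
# predictor says it doesn't. No total halts() can exist, so
# universal machines cannot in general be predicted; we can
# only limit them.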

Bruno


Roger Clough

unread,
Jan 12, 2013, 5:53:56 AM1/12/13
to everything-list
Hi meekerdb

Complexity need not have anything to do with intelligence.
The critical requirement is autonomy of choice.


[Roger Clough], [rcl...@verizon.net]
1/12/2013
"Forever is a long time, especially near the end." - Woody Allen
----- Receiving the following content -----
From: meekerdb
Receiver: everything-list
Time: 2013-01-11, 14:20:13
Subject: Re: Subjective states can be somehow extracted from brains via a computer


Bruno Marchal

unread,
Jan 12, 2013, 6:15:03 AM1/12/13
to everyth...@googlegroups.com
On 11 Jan 2013, at 22:08, meekerdb wrote:

So if X is smart she asserts Y is not intelligent or Y is not stupid.  :-)

Lol. No problem, she can assert that, but the OR is necessarily non-constructive (non-intuitionist). It is indeed a way of saying that she does not know.

Bruno






In that theory, a pebble is intelligent, as no one has ever heard a pebble asserting that some other pebble, or anything else, is stupid or intelligent.

(That theory is almost just an identification of intelligence with consistency, Dt.)
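
In the usual provability-logic notation (a standard reading, offered here as a gloss: B is the machine's provability predicate, t any tautology, D its dual):

    D p \;\equiv\; \lnot B \lnot p, \qquad \text{consistency} \;=\; D t \;(\equiv \lnot B \bot)

Gödel's second incompleteness theorem, in this notation, gives for any consistent machine

    D t \;\rightarrow\; \lnot B\, D t

so a consistent ("intelligent") machine can never prove, hence never soundly assert, its own consistency. Asserting "I am intelligent" is exactly what the definition above calls stupid.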

Intelligence is needed to develop competences.

But competences can have a negative feedback on intelligence. 

Bruno



Telmo Menezes

unread,
Jan 12, 2013, 6:40:17 AM1/12/13
to everyth...@googlegroups.com
On Fri, Jan 11, 2013 at 8:20 PM, meekerdb <meek...@verizon.net> wrote:
On 1/11/2013 2:12 AM, Telmo Menezes wrote:



On Fri, Jan 11, 2013 at 1:33 AM, meekerdb <meek...@verizon.net> wrote:
On 1/10/2013 4:23 PM, Telmo Menezes wrote:
Do you think there can be something that is intelligent but not complex (and use whatever definitions of "intelligent" and "complex" you want).

A thermostat is much less complex than a human brain but intelligent under my definition.

But much less intelligent. 

That's your conclusion, not mine. According to my definition you can only compare thermostats at being good thermostats and Brents at being good Brents, because you can only compare intelligence against the same set of goals. Otherwise you're just saying that intelligence A is more complex than intelligence B. Human intelligence requires a certain level of complexity, bacterial intelligence another. That's all.

So you've removed all meaning from intelligence.  Rocks are smart at being rocks; we just have to recognize that their goal is to be rocks.

I just claim that we can only talk quantitatively about intelligence in relation to a certain agent and a certain set of goals. Isn't it a bit of a stretch to say that I removed all meaning from the concept?
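
For what it's worth, one published attempt to make that quantitative is Legg and Hutter's "universal intelligence" measure, which scores a policy \pi across a whole class E of environments rather than a single goal (sketched from memory; see their 2007 paper for the precise conditions):

    \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V^{\pi}_{\mu}

where K(\mu) is the Kolmogorov complexity of environment \mu and V^{\pi}_{\mu} is the expected reward of \pi in \mu. Telmo's point survives even here: the number only means something relative to the chosen class E and its reward structure.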
 

Maybe we can stop dancing around the question by referring to human-level-intelligence and then rephrasing the question as, "Do you think human-like-intelligence requires human-like-complexity?"

Ok. Yes, I think that human-like-intelligence requires human-like-complexity. 

Telmo Menezes

unread,
Jan 12, 2013, 6:46:23 AM1/12/13
to everyth...@googlegroups.com
On Fri, Jan 11, 2013 at 6:10 AM, Craig Weinberg <whats...@gmail.com> wrote:


On Thursday, January 10, 2013 4:58:32 PM UTC-5, telmo_menezes wrote:
Hi Craig,

I tend to agree with what you say (or what I understand of it). Despite my belief that it is possible to extract memories (or their 3p shadows) from a brain,

As long as you have another brain to experience the extracted memories in 1p, then I wouldn't rule out the possibility of a 3p transmission of some experiential content from one brain to another.
 
I do not believe in the neuroscience hypothesis that consciousness emerges from brain activity. I'm not sure I believe that there is a degree of consciousness in everything, but it sounds more plausible than the emergence from complexity idea.

Still I feel that you avoid some questions. Maybe it's just my lack of understanding of what you're saying. For example: what is the primary "stuff" in your theory? In the same sense that for materialists it's subatomic particles and for comp it's N, +, *. What's yours?

For me the primary stuff is sensory-motor presence.

It's very hard for me to grasp this.
 
Particles are public sense representations. N, +, * are private sense representations. Particles represent the experience of sensory-motor obstruction as topological bodies. Integers and arithmetic operators represent the sensory-motor relations of public objects as private logical figures.

Craig



On Wed, Jan 9, 2013 at 2:50 PM, Craig Weinberg <whats...@gmail.com> wrote:


On Wednesday, January 9, 2013 6:18:37 AM UTC-5, telmo_menezes wrote:

Hi Craig,
 

Cool. I actually would have agreed with you and a lot of people here at different times in my life. It's only lately, in the last five years or so, that I have put together this other way of understanding everything. It gets lost in the debating, because I feel like I have to make my points about what is different or new about how I see things, but I do understand that other ways of looking at it make a lot of sense too - so much so that I suppose I am drawn only to digging into the weak spots to try to get others to see the secret exit that I think I've found...

Ok, this sounds interesting and I'd like to know more. I've been away from the mailing list in the last few years, so maybe you've talked about it before. Would you tell me about that secret exit?

The secret exit is to reverse the assumption that consciousness arises from functions or substances. Even though our human consciousness depends on a living human body (as far as we know for sure), that may be because of the degree of elaboration required to develop a human quality of experience, not because the fundamental capacity to perceive and participate depends on anything at all.

Being inside of a human experience means being inside of an animal experience, an organism's experience, a cellular and molecular level experience. The alternative means picking an arbitrary level at which total lack of awareness suddenly changes into perception and participation for no conceivable reason. Instead of hanging on to the hope of finding such a level or gate, the secret is to see that there are many levels and gates but that they are qualitative, with each richer integration of qualia reframing the levels left behind in a particular way, and that way (another key) is to reduce it from a personal, animistic temporal flow of 1p meaning and significant preference to impersonal, mechanistic spatial bodies ruled by cause-effect and chance/probability. 1p and 3p are relativistic, but what joins them is the capacity to discern the difference.

Rather than sense i/o being a function or logic taken for granted, flip it over so that logic is the 3p shadow of sense. The 3p view is a frozen snapshot of countless 1p views as seen from the outside, and the qualities of the 3p view depend entirely on the nature of the 1p perceiver-participant.

Sense is semiotic. Its qualitative layers are partitioned by habit and interpretive inertia, just as an ambiguous image looks different depending on how you personally direct your perception, or how a book that you read when you are 12 years old can have different meanings at 18 or 35. The meaning isn't just 'out there', it's literally, physically "in here".

If this is true, then the entire physical universe doubles in size, or really is squared, as every exterior surface is a 3p representation of an entire history of 1p experience. Each acorn is a potential oak forest, an encyclopedia of evolution and cosmology, so that the acorn is just a semiotic placeholder which is scaled and iconicized appropriately as a consequence of the relation of our human-quality awareness and the evolutionary-historical-possible future contexts which we share with it (or the whole ensemble of experiences in which 'we' are both embedded as strands of the story of the universe, rather than just human body and acorn body, or cells and cells, etc.).

To understand the common thread for all of it, always go back to the juxtaposition of 1p vs 3p, not *that* there is a difference, but the qualities of *what* those differences are - the sense of the juxtaposition.

http://media.tumblr.com/tumblr_m9y9by2XXw1qe3q3v.jpg
http://media.tumblr.com/tumblr_m9y9boN5rP1qe3q3v.jpg

That's were I get sense and motive or perception and participation. The symmetry is more primitive than either matter or mind, so that it isn't one which builds a bridge to the other but sense which divides itself on one level while retaining unity on another, creating not just dualism but a continuum of monism, dualism, dialectic, trichotomy, syzygy, etc. Many levels and perspectives on sense within sense.

http://multisenserealism.com/about/

Craig


Roger Clough

unread,
Jan 12, 2013, 6:58:31 AM1/12/13
to everything-list
Hi Richard Ruquist

I believe that quantum waves are nonphysical.


[Roger Clough], [rcl...@verizon.net]
1/12/2013
"Forever is a long time, especially near the end." - Woody Allen
----- Receiving the following content -----
From: Richard Ruquist
Receiver: everything-list
Time: 2013-01-11, 14:07:13
Subject: Re: Re: Re: Subjective states can be somehow extracted from brains via a computer


Right. Monads are below the quantum level and you have argued,
correctly I think, that not even quantum waves are physical. However,
monads may have a complex structure, as you say below, and
string theory derives what that complex structure looks like, including
the super EM flux that may be what strings are made of.

Bruno Marchal

unread,
Jan 12, 2013, 7:02:42 AM1/12/13
to everyth...@googlegroups.com

On 12 Jan 2013, at 11:53, Roger Clough wrote:

> Hi meekerdb
>
> Complexity need not have anything to do with intelligence.
> The critical requirement is autonomy of choice.

But that autonomy itself requires a minimal amount of complexity. Then
the math shows that it is not a lot: universality is cheap.
Intelligence requires the same very minimal, but not null, complexity.
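
"Universality is cheap" can be made concrete. Rule 110, a one-dimensional cellular automaton whose entire law is an eight-entry table, was proved Turing-universal by Matthew Cook. A minimal sketch:

RULE110 = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
           (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}

def step(cells):
    """One synchronous update of the whole row (periodic boundary)."""
    n = len(cells)
    return [RULE110[cells[(i-1) % n], cells[i], cells[(i+1) % n]]
            for i in range(n)]

row = [0] * 31 + [1]            # a single live cell
for _ in range(8):              # watch structure propagate
    print(''.join('#' if c else '.' for c in row))
    row = step(row)

Everything needed for universal computation fits in that one dictionary; the minimal, non-null complexity Bruno mentions really is tiny.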

Bruno

http://iridia.ulb.ac.be/~marchal/



Roger Clough

unread,
Jan 12, 2013, 7:26:25 AM1/12/13
to everything-list
Hi Bruno Marchal
 
Good.
 
 
[Roger Clough], [rcl...@verizon.net]
1/12/2013
"Forever is a long time, especially near the end." - Woody Allen
----- Receiving the following content -----
Receiver: everything-list
Time: 2013-01-12, 07:02:42

Craig Weinberg

unread,
Jan 23, 2013, 12:54:46 PM1/23/13
to everyth...@googlegroups.com


On Saturday, January 12, 2013 6:46:23 AM UTC-5, telmo_menezes wrote:




For me the primary stuff is sensory-motor presence.

It's very hard for me to grasp this.

It's supposed to be hard to grasp. We are supposed to watch the movie, not try to figure out who the actors really are and how the camera works. 
