1 mm^3 of brain


Stuart LaForge

May 14, 2024, 8:11:56 AM
to ExI Chat, Extropolis
I have not had time for a close look yet, but I thought you guys would
appreciate this:

https://www.technologyreview.com/2024/05/09/1092223/google-map-cubic-millimeter-human-brain/

"A team led by scientists from Harvard and Google has created a 3D,
nanoscale-resolution map of a single cubic millimeter of the human
brain. Although the map covers just a fraction of the organ—a whole
brain is a million times larger—that piece contains roughly 57,000
cells, about 230 millimeters of blood vessels, and nearly 150 million
synapses. It is currently the highest-resolution picture of the human
brain ever created."

That is some crazy resolution, but how much resolution constitutes
functional isomorphism? Are rational numbers sufficient to preserve
identity or are real numbers required?

Stuart LaForge


John Clark

May 14, 2024, 8:30:40 AM
to extro...@googlegroups.com, ExI Chat
On Tue, May 14, 2024 at 8:11 AM Stuart LaForge <av...@sollegro.com> wrote:

Are rational numbers sufficient to preserve identity or are real numbers required?

I think a better question would be, would a finite number of integers be sufficient to preserve identity? And I think the answer will almost certainly be yes. Incidentally, yesterday OpenAI released GPT-4o, and it seems pretty damn impressive to me; they don't call it GPT-4.5, but I think they would be justified if they did, and it's free. And today at 1 PM Eastern, Google says they will have something new to release.

 John K Clark    See what's on my new list at  Extropolis

Stuart LaForge

May 15, 2024, 8:17:00 AM
to extropolis
On Tuesday, May 14, 2024 at 5:30:40 AM UTC-7 johnk...@gmail.com wrote:
On Tue, May 14, 2024 at 8:11 AM Stuart LaForge <av...@sollegro.com> wrote:

Are rational numbers sufficient to preserve identity or are real numbers required?

I think a better question would be, would a finite number of integers be sufficient to preserve identity?

The set of integers and the set of rationals share the same cardinality, Aleph-0, so the two questions imply one another. My question was motivated by an anecdote from back when I was experimenting with programming neural networks in Python. All the references I could find used floating-point variables for the values of neurons and their weights. To maximize speed while running Python on relatively slow hardware, I tried to adapt the error function, gradient descent, and the rest of the math to work with integer variables. It didn't work, in the sense that the model never really improved with training and just seemed to wander from one random state to another. Of course that does not necessarily mean that intelligence needs extreme precision; my hardware limitations might have contributed greatly to the model's failure to converge. Nonetheless I do not believe anybody else has managed to implement an integer-based neural network to date, although I haven't really looked.
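
A minimal sketch (just an illustration, not the original code) of one way naive integer arithmetic can stall: a single logistic neuron trained on the OR function, with an option to round each gradient-descent update to the nearest integer. With a learning rate below 0.5, every rounded update is zero, so the weights never move at all.

import math, random

def train(integer_updates=False, steps=2000, lr=0.1):
    random.seed(0)
    w = [0.0, 0.0]   # weights of a single logistic neuron
    b = 0.0          # bias
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # the OR function
    for _ in range(steps):
        x, t = random.choice(data)
        z = w[0] * x[0] + w[1] * x[1] + b
        y = 1.0 / (1.0 + math.exp(-z))       # sigmoid activation
        g = y - t                            # gradient of the cross-entropy loss w.r.t. z
        dw = [lr * g * x[0], lr * g * x[1], lr * g]
        if integer_updates:
            dw = [round(d) for d in dw]      # each update rounds to 0, so nothing is learned
        w[0] -= dw[0]
        w[1] -= dw[1]
        b -= dw[2]
    return w, b

print(train(integer_updates=False))  # weights move and the neuron learns OR
print(train(integer_updates=True))   # weights stay stuck at zero

As I understand it, practical quantized networks avoid exactly this trap by keeping a higher-precision copy of the weights during training and only rounding for inference.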

The failure could also be because so much of backpropagation and the other training algorithms depends on calculus, which in turn depends on infinity and limits. It is possible that the same notion applies to nature; it may be one reason that attempts to quantize spacetime blow up into infinities. Look at pi, for example. It shows up everywhere in physics, from Heisenberg's uncertainty principle to Einstein's field equations. Because it shows up so much in nature, it is a real number in both the common and the mathematical sense. It famously goes on forever without repeating, and it belongs to the largest subset of the real numbers, the transcendental numbers, which means it is not the root of any polynomial with rational coefficients.

That so much of natural law hinges on an infinitely precise number describing curvature and symmetry might not be such a coincidence.

Stuart LaForge

 


John Clark

May 15, 2024, 12:53:35 PM
to extro...@googlegroups.com
On Wed, May 15, 2024 at 8:17 AM Stuart LaForge <stuart....@gmail.com> wrote:

>> I think a better question would be, would a finite number of integers be sufficient to preserve identity?

The set of integers and the set of rationals share the same cardinality Aleph0

No computer and no brain uses infinite precision, so cardinality is irrelevant. And all modern computing hardware optimized for AI neural nets, such as the GPUs made by Nvidia, uses 16-bit or even 8-bit precision, as opposed to the 64-bit double-precision floating-point format used in modern CPUs for general-purpose computing, which is not specialized for AI. Super precision is not necessary for AI; it would just waste computing resources and slow things down.
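
A quick sketch of why that works (assuming NumPy is available; the sizes and seed are arbitrary): the same dot product computed with 64-bit and with 16-bit floats comes out very close, and nothing a trained network does hinges on the digits they disagree on.

import numpy as np

rng = np.random.default_rng(0)
w = rng.random(256)   # stand-in weights
x = rng.random(256)   # stand-in activations

full = float(np.dot(w, x))                                         # 64-bit
half = float(np.dot(w.astype(np.float16), x.astype(np.float16)))   # 16-bit
print("float64:", full)
print("float16:", half)
print("relative difference:", abs(full - half) / abs(full))        # small relative error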

John K Clark

Will Steinberg

May 16, 2024, 8:06:20 PM
to extro...@googlegroups.com
The complexity of this picture (compared to the actual 1mm^3 slice it represents) is like comparing a crayon drawing to the Sistine Chapel--and that's an understatement, I'd reckon.

Even if you think the brain is a simple computer, there is so much going on beyond the connectome.  All membranes are fluid all the time and receptors are going between cells and getting shuttled back inside cells.  Those receptors have subtypes of subtypes and the way they handle inputs and outputs can be vastly modified by small changes in chemical and electrical gradients.  There is retrograde propagation of signals.  There are 'structural' components of the brain which handle aspects of computation.  The vessels in the brain and fluids and hormones they deliver are crucial.  Bacteria are also present in the brain and will almost certainly be found to be as instrumental as they are in other parts of the body.  And the genes themselves, and the epigenetics, are maybe the biggest other part of computation that is missing.

So I guess it's neat but it's more of a drop in the bucket than something to get excited about.  We still don't know shit, and we also don't know how experiences correlate to physical states.  No uploads soon.

John Clark

May 17, 2024, 1:08:15 PM
to extro...@googlegroups.com
On Thu, May 16, 2024 at 8:06 PM Will Steinberg <steinbe...@gmail.com> wrote:

> "The complexity of this picture (compared to the actual 1mm^3 slice it represents) is like comparing a crayon drawing to the Sistine Chapel--and that's an understatement, I'd reckon. Even if you think the brain is a simple computer, there is so much going on beyond the connectome.  All membranes are fluid all the time and receptors are going between cells and getting shuttled back inside cells.  Those receptors have subtypes of subtypes and the way they handle inputs and outputs can be vastly modified by small changes in chemical and electrical gradients.  There is retrograde propagation of signals.  There are 'structural' components of the brain which handle aspects of computation.  The vessels in the brain and fluids and hormones they deliver are crucial.  Bacteria are also present in the brain and will almost certainly be found to be as instrumental as they are in other parts of the body.  And the genes themselves, and the epigenetics, are maybe the biggest other part of computation that is missing. So I guess it's neat but it's more of a drop in the bucket than something to get excited about.  We still don't know shit, and we also don't know how experiences correlate to physical states. "

It's certainly true that computers will achieve AGI before we figure out how to make an upload; AGI will likely be achieved this year or next, but there's no way we will get upload technology in less than two years. I don't know how much time will be required after AGI to make uploading practical, because that would depend on the AGI and the priority it places on developing that technology.

It's also true that the wiring diagram of a human brain is far more complicated than that of a modern microprocessor, but I think all those wheels within wheels and pasted-on bells and whistles are a sign of weakness, not of strength. The difference is just what you would expect between something that came about through random mutation and natural selection and something that came out of the mind of an intelligent human engineer; there is no reason to expect Evolution would've discovered the most efficient way to process information, and the input-output characteristics of information in a given volume of brain material are the only aspect of the brain that is relevant to uploading.

That fact suggests a strategy: if a 1 mm^3 volume proves too complicated to study, and it probably would be, then start with something a thousand times smaller, a cube 0.1 mm on a side (0.001 mm^3). Treat that smaller volume as a black box, and don't even bother trying to figure out the clunky way the brain does things; instead use your own far more efficient algorithms to reproduce identical input-output characteristics. Once the AGI has learned how to do that, then by a similar procedure it should be able to figure out how to reproduce the input-output characteristics of a 1 mm^3 volume, and then 10 mm^3, then 100 mm^3, and so on.
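
A cartoon of that black-box idea (assuming NumPy; the "tissue" here is just a stand-in function): record input-output pairs from a system you don't understand and fit a cheap surrogate that reproduces them, without ever opening the box.

import numpy as np

def tissue(x):
    # stand-in for the unknown volume's input-output response
    return np.tanh(3.0 * x) + 0.1 * x

xs = np.linspace(-1.0, 1.0, 200)                    # probe inputs
ys = tissue(xs)                                     # recorded outputs

surrogate = np.poly1d(np.polyfit(xs, ys, deg=9))    # cheap replacement model
print("worst-case error:", np.max(np.abs(surrogate(xs) - ys)))

The real volume is of course enormously higher-dimensional and has internal state, so this only illustrates the shape of the idea, not its difficulty.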

Of course none of that will happen if the AGI decides it has better things to do than develop upload technology, which is entirely possible, but I don't think it's unreasonable to suppose that far better algorithms for signal processing exist than the ones evolution came up with; I don't even think the method humans use is the best one that biology has come up with.
  

A raven's brain is only about 17 cubic centimeters, a chimpanzee's brain is over 400, and yet a raven is about as smart as a chimp. And the African Grey Parrot has demonstrated an understanding of human language at least as deep as that of a chimpanzee, and probably deeper, despite the fact that the chimp's brain is about 25 times as large. I suppose that when there was evolutionary pressure to become smarter, a flying creature couldn't just develop a bigger, heavier, more energy-hogging brain; instead of the brute-force approach it had to organize the small, light brain it already had in more efficient ways. Our brains are about 1400 cubic centimeters, but I'll bet that, cubic centimeter for cubic centimeter, ravens are smarter than we are.

And there are other examples of Evolution's poor design abilities: in the eye of any vertebrate animal, the blood vessels that feed the light-sensitive cells and the nerves that communicate with them sit not behind the retina, as would be logical, but in front of it, so light must pass through them before it reaches the light-sensitive cells. This makes vision less sharp than it would otherwise be and creates a blind spot in the visual field where the nerves exit the eye. No amount of spin can turn that dopey mess into a good design; a human engineer would have to be dead drunk to come up with a hodgepodge like that. And it won't be long before an AGI engineer can beat any human engineer.

 > "No uploads soon."

I would say upload technology will be available either "soon" after an AGI comes on the scene or never because the AGI has decided it doesn't want to bother with human uploads.  

John K Clark



 

Dylan Distasio

May 17, 2024, 10:52:48 PM
to extro...@googlegroups.com
Your last sentence implies strong AI with self-awareness and the ability to pick and choose what it works on. I am willing to bet you $2 we don't see this in the next two years, and that when (if) we do, it won't be utilizing a present-day LLM transformer architecture.

I also think you're severely underestimating the complexity of the human brain and the difficulties an AGI would encounter in treating even a sample 1000x smaller as a black box and scaling that up successfully. Reductionism has its limits in biological systems: pretty much all life is an extremely complicated set of tightly integrated subsystems, with feedback loops between proteins and networks of genes, which at its core depends upon the physical properties of molecules at the atomic level and on the fitness landscape constrained by those properties.

I am hopeful that we will eventually crack the mystery of the human brain and consciousness, and create strong AI, but I don't see any indication that either of these nuts will be cracked any time soon.

Are you of the opinion that current LLM architectures will lead to strong AI? If not, what architecture do you think will get us there soon?


John Clark

May 18, 2024, 7:58:16 AM
to extro...@googlegroups.com
On Fri, May 17, 2024 at 10:52 PM Dylan Distasio <inte...@gmail.com> wrote:

>> I would say upload technology will be available either "soon" after an AGI comes on the scene or never because the AGI has decided it doesn't want to bother with human uploads.  
> "Your last sentence implies strong AI with self-awareness and the ability to pick and choose what it works on." 

Yes

 > "I am willing to bet you $2 we don't see this in the next two years, and that when (if) we do, it won't be utilizing a present day LLM transformer architecture."

I would happily bet you $100 except you also said .... 

 "I am hopeful that we will eventually crack the mystery of the human brain and consciousness, and create strong AI, but I don't see any indication that either of these nuts will be cracked any time soon."

The fact that you brought up consciousness makes me suspect you will NEVER conclude that a computer is conscious no matter how smart it is because you will be unable to prove it is conscious, and I will also be unable to prove it, just as I am unable to prove you are conscious.  Although I can't prove it I strongly believe that a smart computer would be conscious because consciousness is an inevitable byproduct of intelligence, otherwise natural selection would never have been able to come up with me, a conscious being; however I will never be able to prove that with mathematical rigor. And besides, as far as the human race is concerned the important thing about an AI is not its consciousness, the machine's intelligence is what is important.

> "Reductionism has its limits in biological systems"

These days it's very trendy to badmouth reductionism, but over the centuries it has been proven to be the engine that has powered the scientific revolution. I see no evidence that reductionism has suddenly stopped working.  

> "Are you of the opinion that current LLM architectures will lead to strong AI?"

The transformer architecture is certainly the front runner but there are other worthy candidates, such as hyperdimensional vector architecture, and there's no reason an AI couldn't use several different architectures in different parts of its brain:



I also think you're severely underestimating the complexity of the human brain

And I am quite certain you are overestimating the complexity of the human brain. People used to argue that we could never hope to make a machine that operates like the human brain does because the brain has about 86 billion neurons with 7*10^14 synaptic connections, so it would take an astronomical amount of information to specify exactly how all that is wired up. However only a tiny part of that information could've come from genetics, and the only other place the remaining information could've come from is the environment. I say that for the following reasons:

From experiment we know the human genome contains 3 billion base pairs, and we know there are 4 bases, so each base can represent 2 bits and there are 8 bits per byte; therefore the entire human genome only has the capacity to hold 750 MB of information; that's about the amount of information you could fit on an old-fashioned CD, not a DVD, just a CD. The true number must be considerably less than that because that's the recipe for building an entire human being, not just the brain, and the genome contains a huge amount of redundancy, 750 MB is just the upper bound. With a lossless compression algorithm you could easily put the entire human genome on a CD and still have enough room on it for two or three Taylor Swift songs.
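
The arithmetic, as a quick sanity check of that 750 MB figure in Python:

base_pairs = 3_000_000_000        # approximate length of the human genome
bits = base_pairs * 2             # 4 possible bases -> 2 bits per base
print(bits / 8 / 1e6, "MB")       # 750.0 MB, before any compression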

Therefore I think we can be as certain as we can be of anything that it should be possible to build a seed AI that can grow from knowing nothing to being super-intelligent, and the recipe for building such a thing must be less than 750 MB, a LOT less. After all, Albert Einstein went from understanding precisely nothing in 1879 to being the first person to understand General Relativity in 1915. The human genome contains less than 750 megs of information, and yet that is more than enough information to construct an entire human being, not just a brain. So whatever algorithm Einstein used to extract information from his environment, it must have been pretty small, much, much less than 750 megs.

That's why I've been saying for years that super-intelligence could be achieved just by scaling things up, no new scientific discovery was needed, just better engineering; although I admit I was surprised how little scaling up turned out to be required.

Let's compare the brain hardware that human intelligence is running on with the hardware that GPT-4 is running on; that is to say, let's compare synapses to transistors. The human brain has 7*10^14 synapses (a very generous estimate), but the largest supercomputer in the world, the Frontier computer at Oak Ridge, has about 2.5*10^15 transistors, over three times as many. And we know from experiments that a typical synapse in the human brain "fires" between 5 and 50 times per second, but a typical transistor in a computer "fires" about 4 billion times a second (4*10^9). That's why the Frontier computer can perform 1.1*10^18 floating-point calculations per second and the human brain cannot.
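
Multiplying out those figures gives a rough events-per-second comparison (these are just the numbers quoted above, not new measurements):

synapses, synapse_hz = 7e14, 50            # generous synapse count, top of the quoted firing-rate range
transistors, transistor_hz = 2.5e15, 4e9   # Frontier's transistor count and switching rate
print("brain, synaptic events/sec:    ", synapses * synapse_hz)         # 3.5e16
print("Frontier, transistor events/sec:", transistors * transistor_hz)  # 1e25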

I'm not saying an AI must use that exact same algorithm, but it does tell us that such a simple thing must exist. For all we know an AI might be able to find an even simpler algorithm; after all, random mutation and natural selection managed to find it, so it's not unreasonable to suppose that an intelligence might be able to do even better.

John K Clark

Dylan Distasio

May 18, 2024, 12:46:27 PM
to extro...@googlegroups.com
On Sat, May 18, 2024 at 7:58 AM John Clark <johnk...@gmail.com> wrote:

 > "I am willing to bet you $2 we don't see this in the next two years, and that when (if) we do, it won't be utilizing a present day LLM transformer architecture."

I would happily bet you $100 except you also said .... 

 "I am hopeful that we will eventually crack the mystery of the human brain and consciousness, and create strong AI, but I don't see any indication that either of these nuts will be cracked any time soon."

The fact that you brought up consciousness makes me suspect you will NEVER conclude that a computer is conscious no matter how smart it is because you will be unable to prove it is conscious, and I will also be unable to prove it, just as I am unable to prove you are conscious.  Although I can't prove it I strongly believe that a smart computer would be conscious because consciousness is an inevitable byproduct of intelligence, otherwise natural selection would never have been able to come up with me, a conscious being; however I will never be able to prove that with mathematical rigor. And besides, as far as the human race is concerned the important thing about an AI is not its consciousness, the machine's intelligence is what is important.

You're very wrong about me here. I'm not with the Chinese room crowd. I purposely used the term 'self-aware' at the beginning of my reply, carefully avoiding the loaded term 'consciousness.' I mentioned cracking consciousness later in the reply because I do believe we'll eventually figure it out, and I have a personal interest in that area and in fully defining it, but I am not interested in moving the goalposts for this potential wager. I do, however, think that a proper understanding of causality needs to be part of any strong AI, and that any such system should not make occasional catastrophic errors in understanding after it has learned something, and should never hallucinate.

If you'd still like to make a friendly bet, I am open to working together on how we define "self aware" in concrete, verifiable terms without opening up the messy can of worms labeled "consciousness."

I should add that I would be very happy to lose this bet.
 


> "Reductionism has its limits in biological systems"

These days it's very trendy to badmouth reductionism, but over the centuries it has been proven to be the engine that has powered the scientific revolution. I see no evidence that reductionism has suddenly stopped working.  

I'm not a dancing Wu Li master, BTW. My original education and work experience is in biology, and I don't consider reductionism a dirty word. I didn't say it stopped working; I'm just skeptical that your solution will scale or work in this particular case.
 

> "Are you of the opinion that current LLM architectures will lead to strong AI?"

The transformer architecture is certainly the front runner but there are other worthy candidates, such as hyperdimensional vector architecture, and there's no reason an AI couldn't use several different architectures in different parts of its brain:


Thank you for actually responding and sharing the link.   I will take a look. 


I also think you're severely underestimating the complexity of the human brain

And I am quite certain you are overestimating the complexity of the human brain. People used to argue that we could never hope to make a machine that operates like the human brain does because the brain has about 86 billion neurons with 7*10^14 synaptic connections, so it would take an astronomical amount of information to specify exactly how all that is wired up. However only a tiny part of that information could've come from genetics, and the only other place the remaining information could've come from is the environment. I say that for the following reasons:

From experiment we know the human genome contains 3 billion base pairs, and we know there are 4 bases, so each base can represent 2 bits and there are 8 bits per byte; therefore the entire human genome only has the capacity to hold 750 MB of information; that's about the amount of information you could fit on an old-fashioned CD, not a DVD, just a CD. The true number must be considerably less than that because that's the recipe for building an entire human being, not just the brain, and the genome contains a huge amount of redundancy, 750 MB is just the upper bound. With a lossless compression algorithm you could easily put the entire human genome on a CD and still have enough room on it for two or three Taylor Swift songs.

Therefore I think we can be as certain as we can be of anything that it should be possible to build a seed AI that can grow from knowing nothing to being super-intelligent, and the recipe for building such a thing must be less than 750 MB, a LOT less. After all, Albert Einstein went from understanding precisely nothing in 1879 to being the first person to understand General Relativity in 1915. The human genome contains less than 750 megs of information, and yet that is more than enough information to construct an entire human being, not just a brain. So whatever algorithm Einstein used to extract information from his environment, it must have been pretty small, much, much less than 750 megs.

That's why I've been saying for years that super-intelligence could be achieved just by scaling things up, no new scientific discovery was needed, just better engineering; although I admit I was surprised how little scaling up turned out to be required.

Let's compare the brain hardware that human intelligence is running on with the hardware that GPT-4 is running on; that is to say, let's compare synapses to transistors. The human brain has 7*10^14 synapses (a very generous estimate), but the largest supercomputer in the world, the Frontier computer at Oak Ridge, has about 2.5*10^15 transistors, over three times as many. And we know from experiments that a typical synapse in the human brain "fires" between 5 and 50 times per second, but a typical transistor in a computer "fires" about 4 billion times a second (4*10^9). That's why the Frontier computer can perform 1.1*10^18 floating-point calculations per second and the human brain cannot.

I'm not saying an AI must use that exact same algorithm, but it does tell us that such a simple thing must exist. For all we know an AI might be able to find an even simpler algorithm; after all, random mutation and natural selection managed to find it, so it's not unreasonable to suppose that an intelligence might be able to do even better.


No argument on your statement that encoding the architecture and scaffolding of the brain requires only a small amount of information to spin up, but I don't think you're going to be able to figure out the algorithm the human body uses to wire a brain using the method you suggested. Personally, I think we'd be better off starting with a corvid brain in its entirety and attempting to simulate that (and hopefully cracking the code of how learning, memory, and self-awareness work), although even that is no small task.


John Clark

May 18, 2024, 2:13:26 PM
to extro...@googlegroups.com
On Sat, May 18, 2024 at 12:46 PM Dylan Distasio <inte...@gmail.com> wrote:

 I'm not with the Chinese room crowd. 

I'm very glad to hear that.  
 
 I purposely used the term 'self-aware' in the beginning of the reply, carefully avoiding the loaded term consciousness. 

I think the words "aware" and "attention" could be considered synonyms; both are about the ability to determine that some input sensations and ideas are more important than others and worthy of more of your finite computational resources. So I think it is interesting that the famous 2017 paper that introduced the idea of transformers, and set off the AI revolution we are living through right now, was entitled "Attention Is All You Need". Would you consider the word "consciousness" to also be a synonym of those two words?
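
To make that concrete, here is a toy sketch (assuming NumPy; the shapes and seed are arbitrary) of the scaled dot-product attention that the paper's title refers to: each position spends its limited "attention budget" on the other positions in proportion to softmax(QK^T / sqrt(d)).

import numpy as np

def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                        # how relevant each key is to each query
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax: a budget that sums to 1
    return weights @ V                                   # weighted mix of the values

rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 8))   # 5 tokens, 8 dimensions each
K = rng.standard_normal((5, 8))
V = rng.standard_normal((5, 8))
print(attention(Q, K, V).shape)   # (5, 8): every token gets a context-weighted summary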

No argument on your statement that encoding the architecture and scaffolding of the brain requires a small amount of information to spin up, but I don't think you're going to be able to figure out the algorithm the human body uses to wire a brain using the method you suggested. Personally, I think we'd be better off starting with a corvid brain in its entirety and attempt to simulate that (and hopefully crack the code of how learning/memory/self awareness work),

I agree that if you want to figure out how biological brains work, it would be wise to start with the brain of a bird like a crow or a parrot, because it would likely be more efficient, with fewer wheels within wheels and less spaghetti code, than the brain of a ground-based animal. But if you just want to make a smarter AI, I would suggest we keep doing what we've been doing since 2017. After all, the rate of improvement in just the last two years has been absolutely flabbergasting! I think the world is about to become unrecognizable.

John K Clark 


