> Are rational numbers sufficient to preserve identity or are real numbers required?
>> Are rational numbers sufficient to preserve identity or are real numbers required?
> I think a better question would be, would a finite number of integers be sufficient to preserve identity?
>> I think a better question would be, would a finite number of integers be sufficient to preserve identity?
> The set of integers and the set of rationals share the same cardinality, Aleph0.
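For context on that cardinality claim, here is a minimal illustrative sketch (not part of the original exchange) of why the rationals are countable: the classic diagonal walk pairs every positive rational with a natural number, so the rationals are no "bigger" than the integers. The function name `rationals` is mine, purely for illustration.

```python
from fractions import Fraction
from math import gcd

def rationals():
    """Enumerate every positive rational exactly once, Cantor-style,
    by walking diagonals p + q = s and skipping non-reduced fractions."""
    s = 2
    while True:
        for p in range(1, s):
            q = s - p
            if gcd(p, q) == 1:   # skip duplicates such as 2/4 == 1/2
                yield Fraction(p, q)
        s += 1

# First few terms of the enumeration: 1, 1/2, 2, 1/3, 3, 1/4, 2/3, 3/2, 4, ...
it = rationals()
print([next(it) for _ in range(9)])
```

Because every rational appears at some finite position in this list, the positive rationals can be indexed by the natural numbers, which is what sharing cardinality Aleph0 means.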
> "The complexity of this picture (compared to the actual 1mm^3 slice it represents) is like comparing a crayon drawing to the Sistine Chapel--and that's an understatement, I'd reckon. Even if you think the brain is a simple computer, there is so much going on beyond the connectome. All membranes are fluid all the time and receptors are going between cells and getting shuttled back inside cells. Those receptors have subtypes of subtypes and the way they handle inputs and outputs can be vastly modified by small changes in chemical and electrical gradients. There is retrograde propagation of signals. There are 'structural' components of the brain which handle aspects of computation. The vessels in the brain and fluids and hormones they deliver are crucial. Bacteria are also present in the brain and will almost certainly be found to be as instrumental as they are in other parts of the body. And the genes themselves, and the epigenetics, are maybe the biggest other part of computation that is missing. So I guess it's neat but it's more of a drop in the bucket than something to get excited about. We still don't know shit, and we also don't know how experiences correlate to physical states. "
> "No uploads soon."
> I would say upload technology will be available either "soon" after an AGI comes on the scene or never because the AGI has decided it doesn't want to bother with human uploads.
> "Your last sentence implies strong AI with self-awareness and the ability to pick and choose what it works on."
> "I am willing to bet you $2 we don't see this in the next two years, and that when (if) we do, it won't be utilizing a present day LLM transformer architecture."
> "I am hopeful that we will eventually crack the mystery of the human brain and consciousness, and create strong AI, but I don't see any indication that either of these nuts will be cracked any time soon."
> "Reductionism has its limits in biological systems"
> "Are you of the opinion that current LLM architectures will lead to strong AI?"
> I also think you're severely underestimating the complexity of the human brain
> "I am willing to bet you $2 we don't see this in the next two years, and that when (if) we do, it won't be utilizing a present day LLM transformer architecture."I would happily bet you $100 except you also said ....> "I am hopeful that we will eventually crack the mystery of the human brain and consciousness, and create strong AI, but I don't see any indication that either of these nuts will be cracked any time soon."The fact that you brought up consciousness makes me suspect you will NEVER conclude that a computer is conscious no matter how smart it is because you will be unable to prove it is conscious, and I will also be unable to prove it, just as I am unable to prove you are conscious. Although I can't prove it I strongly believe that a smart computer would be conscious because consciousness is an inevitable byproduct of intelligence, otherwise natural selection would never have been able to come up with me, a conscious being; however I will never be able to prove that with mathematical rigor. And besides, as far as the human race is concerned the important thing about an AI is not its consciousness, the machine's intelligence is what is important.
> "Reductionism has its limits in biological systems"These days it's very trendy to badmouth reductionism, but over the centuries it has been proven to be the engine that has powered the scientific revolution. I see no evidence that reductionism has suddenly stopped working.
> "Are you of the opinion that current LLM architectures will lead to strong AI?"The transformer architecture is certainly the front runner but there are other worthy candidates, such as hyperdimensional vector architecture, and there's no reason an AI couldn't use several different architectures in different parts of its brain:
> I also think you're severely underestimating the complexity of the human brain

And I am quite certain you are overestimating the complexity of the human brain. People used to argue that we could never hope to make a machine that operates the way the human brain does because the brain has about 86 billion neurons with roughly 7*10^14 synaptic connections, so it would take an astronomical amount of information to specify exactly how all of that is wired up. However, only a tiny part of that information could have come from genetics, and the only other place the remaining information could have come from is the environment. I say that for the following reasons.

From experiment we know the human genome contains 3 billion base pairs, and we know there are 4 bases, so each base can represent 2 bits, and there are 8 bits in a byte; therefore the entire human genome only has the capacity to hold 750 MB of information. That's about the amount of information you could fit on an old-fashioned CD; not a DVD, just a CD. The true number must be considerably less than that, because that is the recipe for building an entire human being, not just the brain, and the genome contains a huge amount of redundancy; 750 MB is just the upper bound. With a lossless compression algorithm you could easily put the entire human genome on a CD and still have enough room left over for two or three Taylor Swift songs.

Therefore I think we can be as certain as we can be of anything that it should be possible to build a seed AI that can grow from knowing nothing to being super-intelligent, and the recipe for building such a thing must be less than 750 MB, a LOT less. After all, Albert Einstein went from understanding precisely nothing in 1879 to being the first person to understand General Relativity in 1915. The human genome contains less than 750 megs of information, and yet that is more than enough information to construct an entire human being, not just a brain. So whatever algorithm Einstein used to extract information from his environment must have been pretty small, much much less than 750 megs.

I'm not saying an AI must use that exact same algorithm, but it does tell us that such a simple thing must exist. For all we know an AI might be able to find an even simpler algorithm; after all, random mutation and natural selection managed to find it, so it's not unreasonable to suppose that an intelligence might be able to do even better.

That's why I've been saying for years that super-intelligence could be achieved just by scaling things up; no new scientific discovery was needed, just better engineering, although I admit I was surprised at how little scaling up turned out to be required.

Let's compare the brain hardware that human intelligence is running on with the hardware that GPT-4 is running on; that is to say, let's compare synapses to transistors. The human brain has 7*10^14 synapses (a very generous estimate), but the largest supercomputer in the world, the Frontier Computer at Oak Ridge, has about 2.5*10^15 transistors, over three times as many. And we know from experiments that a typical synapse in the human brain "fires" between 5 and 50 times per second, but a typical transistor in a computer "fires" about 4 billion times a second (4*10^9). That's why the Frontier Computer can perform 1.1*10^18 floating point calculations per second and the human brain cannot.
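For anyone who wants to check the arithmetic above, here is a short back-of-the-envelope sketch. Every input figure (3 billion base pairs, 7*10^14 synapses, 2.5*10^15 transistors, 5-50 Hz synaptic firing, a ~4 GHz clock) is simply the estimate quoted in the post, not an independently verified number.

```python
# Back-of-the-envelope arithmetic for the figures quoted above.
# All inputs are the estimates cited in the thread, not measured values.

GENOME_BASE_PAIRS = 3e9      # human genome, base pairs
BITS_PER_BASE = 2            # 4 possible bases -> 2 bits each
genome_bytes = GENOME_BASE_PAIRS * BITS_PER_BASE / 8
print(f"Genome capacity: {genome_bytes / 1e6:.0f} MB (upper bound)")      # ~750 MB

SYNAPSES = 7e14              # generous estimate for the human brain
TRANSISTORS = 2.5e15         # rough figure quoted for the Frontier supercomputer
print(f"Transistors per synapse: {TRANSISTORS / SYNAPSES:.1f}x")          # ~3.6x

SYNAPSE_HZ = (5, 50)         # typical synaptic firing rate, per second
TRANSISTOR_HZ = 4e9          # ~4 GHz switching
print(f"Speed ratio: {TRANSISTOR_HZ / SYNAPSE_HZ[1]:.0e} to "
      f"{TRANSISTOR_HZ / SYNAPSE_HZ[0]:.0e} transistor cycles per synaptic firing")
```

Running it reproduces the 750 MB upper bound and shows the roughly 3.6-to-1 transistor-to-synapse ratio and the 10^7-10^8-fold speed gap the post relies on.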
> I'm not with the Chinese room crowd.
> I purposely used the term 'self-aware' in the beginning of the reply, carefully avoiding the loaded term consciousness.
> No argument on your statement that encoding the architecture and scaffolding of the brain requires a small amount of information to spin up, but I don't think you're going to be able to figure out the algorithm the human body uses to wire a brain using the method you suggested. Personally, I think we'd be better off starting with a corvid brain in its entirety and attempting to simulate that (and hopefully cracking the code of how learning/memory/self-awareness work).