> One of the authors of the article says "It’s interesting that the computer-science field is converging onto what evolution has discovered", he said that because it turns out that 41% of the fly brain's neurons are in recurrent loops that provide feedback to other neurons that are upstream of the data processing path, and that's just what we see in modern AIs like ChatGPT.

I do not think this is true. ChatGPT is a fine-tuned Large Language Model (LLM), and LLMs use a transformer architecture, which is deep but purely feed-forward, and uses attention heads. The attention mechanism was the big breakthrough back in 2017 that finally enabled the training of such big models:
> My intuition is that if we are going to successfully imitate biology we must model the various neurotransmitters.
On 14-Mar-2023, at 5:49 PM, John Clark <johnk...@gmail.com> wrote:
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/CAJPayv089oC%3DAc-DswW5simNfWzQsGAZADjusaWOacE4M6kt9g%40mail.gmail.com.
> Aren’t you an emergent property of the same system that you are criticising?
On 14-Mar-2023, at 6:48 PM, John Clark <johnk...@gmail.com> wrote:
On Tue, Mar 14, 2023 at 7:31 AM Telmo Menezes <te...@telmomenezes.net> wrote:

> My intuition is that if we are going to successfully imitate biology we must model the various neurotransmitters.

That is not my intuition. I see nothing sacred in hormones, and I don't see the slightest reason why they or any other neurotransmitter would be especially difficult to simulate through computation, because chemical messengers are not a sign of sophisticated design on nature's part; rather, they are an example of Evolution's bungling. If you need to inhibit a nearby neuron there are better ways of sending that signal than launching a GABA molecule like a message in a bottle thrown into the sea and waiting ages for it to diffuse to its random target.
I'm not interested in brain chemicals, only in the information they contain. If somebody wants information transmitted from one place to another as fast and reliably as possible, nobody would send smoke signals if they had a fiber-optic cable. The information content of each molecular message must be tiny, just a few bits, because only about 60 neurotransmitters such as acetylcholine, norepinephrine, and GABA are known; even if the true number were 100 times greater (or a million times, for that matter), the information content of each signal would still be tiny. Also, for the long-range stuff, exactly which neuron receives the signal cannot be specified because it relies on a random process, diffusion. The fact that it's slow as molasses in February does not add to its charm.
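That "few bits" estimate is easy to check: if a molecular message only tells the receiver which of N neurotransmitter types arrived, it carries at most log2(N) bits. A quick sketch (N = 60 is the figure from the post; the 100x case is the hypothetical above):

```python
import math

# Upper bound on the information in one molecular message, assuming the
# message only identifies which of n_types neurotransmitters was released.
def bits_per_message(n_types: int) -> float:
    return math.log2(n_types)

print(round(bits_per_message(60), 1))        # ~60 known types -> about 5.9 bits
print(round(bits_per_message(60 * 100), 1))  # even 100x more types -> about 12.6 bits
```

Even the generous case stays under two bytes per message, which is the point being made.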
On Tue, Mar 14, 2023 at 7:31 AM Telmo Menezes <te...@telmomenezes.net> wrote:

> One of the authors of the article says "It’s interesting that the computer-science field is converging onto what evolution has discovered", he said that because it turns out that 41% of the fly brain's neurons are in recurrent loops that provide feedback to other neurons that are upstream of the data processing path, and that's just what we see in modern AIs like ChatGPT.

> I do not think this is true. ChatGPT is a fine-tuned Large Language Model (LLM), and LLMs use a transformer architecture, which is deep but purely feed-forward, and uses attention heads. The attention mechanism was the big breakthrough back in 2017 that finally enabled the training of such big models:

I was under the impression that transformers are superior to recurrent neural networks because recurrent processing of the data is not necessary with transformers, so more parallelization is possible than with recurrent neural networks; a transformer can analyze an entire sentence at once and doesn't need to do so word by word. So transformers learn faster and need less training data.
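To illustrate why no recurrence is needed, here is a minimal, self-contained sketch of scaled dot-product self-attention (pure Python, no learned weights; a real transformer adds learned query/key/value projections, multiple heads, and much more): every position attends to every other position in one parallel step, instead of consuming the sentence token by token the way an RNN must.

```python
import math

# Toy self-attention over a whole "sentence" of token vectors at once.
# Unlike an RNN, no position has to wait for the previous one.
def attention(x):
    d = len(x[0])
    # attention scores: dot product of every pair of token vectors, scaled
    scores = [[sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in x] for q in x]
    # softmax each row so the weights over the sentence sum to 1
    weights = []
    for row in scores:
        m = max(row)
        e = [math.exp(s - m) for s in row]
        z = sum(e)
        weights.append([v / z for v in e])
    # each output is a weighted mix of ALL token vectors, computed in parallel
    return [[sum(w * v[i] for w, v in zip(row, x)) for i in range(d)] for row in weights]

tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy 3-token sentence
out = attention(tokens)
print(out)
```

Every row of the score matrix can be computed independently, which is exactly the parallelism that made transformers trainable at scale.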
If your job is delivering packages and all the packages are very small, and your boss doesn't care who you give them to as long as they're on the correct continent, and you have until the next ice age to get the work done, then you don't have a very difficult profession. Artificial neurons could be made to communicate as inefficiently as natural ones do by releasing chemical neurotransmitters if anybody really wanted to, but it would be pointless when there are much faster, and much more reliable, and much more specific ways of operating.
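The "message in a bottle" complaint can be made quantitative with a back-of-the-envelope diffusion estimate (a sketch; the diffusion coefficient below is an assumed textbook-typical value for a small molecule in water, not a figure from these posts). Diffusion time grows with the square of the distance, so it is fast across a synaptic cleft but hopeless over millimeters:

```python
# Rough 1D diffusion timescale: t ~ x**2 / (2 * D).
D = 5e-10  # m^2/s, assumed diffusion coefficient for a small molecule in water

def diffusion_time(x_meters: float) -> float:
    return x_meters ** 2 / (2 * D)

print(diffusion_time(20e-9))  # across a ~20 nm synaptic cleft: well under a microsecond
print(diffusion_time(1e-3))   # across 1 mm of tissue: about 1000 seconds (~17 minutes)
```

The quadratic scaling is why diffusion works fine as a short-range signal but is "slow as molasses" for anything long range.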
The question offered up 6 weeks ago was how does the similarity to animal brains arise from a Server Farm?
At this point, I claim it doesn't and that 3 and 4 are clever Language Machines.
The claim that, via magic, a consciousness arises in silicon or gallium arsenide seems a tall order. I have seen no article by any computer scientist, neurobiologist, or physicist indicating HOW computer consciousness would arise. If there is something out there, somebody please present a link to this august mailing list.
Pribram (1976):

“I tend to view animals, especially furry animals, as conscious; not plants, not inanimate crystals, not computers. This might be termed the "cuddliness criterion" for consciousness. My reasons are practical: it makes little difference at present whether computers are conscious or not. (p. 298)”
Freud's Project reassessed
Book by Karl H. Pribram
http://karlpribram.com/wp-content/uploads/pdf/theory/T-078.pdf
http://www-formal.stanford.edu/jmc/ascribing.pdf “ASCRIBING MENTAL QUALITIES TO MACHINES” (1979)
“Machines as simple as thermostats can be said to have beliefs, and having
beliefs seems to be a characteristic of most machines capable of problem solving performance. However, the machines mankind has so far found it useful
to construct rarely have beliefs about beliefs, although such beliefs will be
needed by computer programs that reason about what knowledge they lack
and where to get it.”
Now, for life arising out of chemicals on planet Earth, I stumbled upon this yesterday. The theory is called Nickelback (O Canada!):

“Scientists Have Found Molecule That Is Behind The Origin Of Life On Earth”
Somebody should come up with a theory of how network systems can accidentally produce a human-level mind before we celebrate chat4 overmuch.
Let humans come up with a network that invents technology, producing inventions that humans alone would not have arrived at for decades or centuries! That would be the big breakthrough, not a fun chatbot.
> Connectome studies hold that "The Map is The Landscape."
> When people like Ray Kurzweil were pontificating 25 years ago, it seemed back then like computer science would be roaring to The Singularity. Today, much of the goodies forecast by Kurz and everyone else seems sluggish,
> Uploading seems as far away to me, as ever.
> 3 and 4 are clever Language Machines.
> To the claim that via magic, a consciousness arises in silicon or gallium arsenide seems a tall order.
> The question offered up 6 weeks ago was how does the similarity to animal brains arise from a Server Farm?
On Wed, Mar 15, 2023 at 7:47 AM spudboy100 via Everything List <everyth...@googlegroups.com> wrote:

> 3 and 4 are clever Language Machines.

You can input nothing but a photograph into a modern "Language Machine" (by "modern" I mean something that has been developed in the last couple of months) and ask it what is in the photograph and it will be able to tell you, or ask it what will likely happen next to the object in the photo and it will give you a good answer. It can read and understand graphs and charts, and if you show it a drawing from a high school geometry textbook full of intersecting lines, circles, squares, and triangles and ask it to find the area of the second largest triangle in the upper left quadrant, it will be able to do so. And if you ask what's humorous about the photograph it will be able to explain the joke to you. And it works the other way too: if you ask it to paint a picture of something, even something that doesn't exist, it will be able to provide an original painting of it that's far better than anything I could dream of painting. How on earth can something that is just a "Language Machine" do any of that?

> To the claim that via magic, a consciousness arises in silicon or gallium arsenide seems a tall order.

It's no more magical than the claim that consciousness arises from 3 pounds of gray goo made of carbon, hydrogen, and oxygen. Are you claiming that carbon, hydrogen, and oxygen are sacred but silicon, gallium, and arsenic are not? And besides, to hell with consciousness! If computers are not conscious then that's their problem, not mine; it won't affect me one way or the other whether computers are conscious or not, and I could say the same thing about your alleged consciousness. I'm far FAR more interested in whether computers are intelligent or not, because that most certainly does affect me.
> The question offered up 6 weeks ago was how does the similarity to animal brains arise from a Server Farm?

Because both animal brains and server farms process information intelligently.

John K Clark    See what's on my new list at Extropolis
> It might affect you.
> Do you plan to freeze your brain?
> Do you have a clause to only resuscitate to biological substrates?
On Wed, Mar 15, 2023 at 11:01 AM John Clark <johnk...@gmail.com> wrote:

> It might affect you.

I don't think so, but because it involves consciousness I'll never be able to prove it; I'll never be able to prove anything about consciousness. But I'm confident that if something acts just like me then it will be me.

> Do you plan to freeze your brain?

Yes, I've already paid the $80,000 bill to do so.

> Do you have a clause to only resuscitate to biological substrates?

No, and it would not make any difference even if I did, because it would not be followed. I'm not at all sure cryonics will work at all, because I'm not sure my brain really will remain at liquid nitrogen temperatures until the Singularity, and even if it does I'm not at all sure anybody will think I'm worth reviving; but I think my chances are infinitely better than if my brain is burned up in a furnace or eaten by worms. If I am lucky enough to be brought back, I'm certain it will be as an upload; nobody will want somebody as stupid as me (relative to the average citizen living at that time) wasting resources in the physical world.
> Just as Neuro-guys explain human consciousness
> If we don't know what, we will soon.
> UNLESS you hold that consciousness is a Mystery?
On 16-Mar-2023, at 12:44 AM, spudboy100 via Everything List <everyth...@googlegroups.com> wrote: