Fwd: slime molds

Linas Vepstas

May 22, 2022, 5:32:38 PM
to opencog, Mark Wigzell
Oops, forgot to include the mailing list. Shorter reply; read the other reply first! -- linas

---------- Forwarded message ---------
From: Linas Vepstas <linasv...@gmail.com>
Date: Sun, May 22, 2022 at 4:31 PM
Subject: Re: slime molds
To: Mark Wigzell <markw...@gmail.com>


I didn't watch the PBS special, but from the synopsis, it seems like they're a decade or two or four behind the times. Cognition has already been redefined: not just slime molds that use bacterial signalling methods, but also plants. There are YouTube videos of plant leaves getting munched by insects; the munched part emits polypeptides/neurotransmitters, which diffuse to other parts of the leaf in about 5-10 minutes, causing the entire leaf to emit bug repellant. The video is sped up 100x so you can see it happening, and the chemical reaction is marked with phosphorescent tags to make it visible.

There are also results on the computational abilities and problem-solving of tree roots -- these also communicate, often using mycelial mats from mold to do so -- so, like nerves, in a way. Biologists are all over this kind of stuff.

Here's one: search for the TED talk on "quorum sensing" -- I get Bonnie Bassler -- I think that's the right talk. Go for the full-length talk.

On a related note, check out "Algorithmic botany" -- http://algorithmicbotany.org/papers/ -- it spells out in detail exactly how Turing machines, algorithms, grammars, syntax, Lindenmayer systems, bacterial quorum sensing and plant development work -- complete with math. Prusinkiewicz has been working on this since the 1980s. You might learn the most by reading the oldest papers first, and only then moving to the newer stuff.
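
To make the Lindenmayer-system idea concrete, here is a toy sketch in Python, using the classic "algae" rules from Lindenmayer's 1968 paper. The names are mine, purely illustrative, not from any particular library:

    # Parallel-rewriting L-system: every symbol is rewritten each generation.
    rules = {"A": "AB", "B": "A"}

    def step(s):
        return "".join(rules.get(c, c) for c in s)

    s = "A"  # the axiom
    for n in range(6):
        print(n, s)
        s = step(s)
    # The string lengths follow the Fibonacci sequence: 1, 2, 3, 5, 8, 13.

Prusinkiewicz's plant models are this same rewriting loop, with the symbols interpreted as turtle-graphics drawing commands.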

--linas

On Sun, May 22, 2022 at 1:10 PM Mark Wigzell <markw...@gmail.com> wrote:
https://groups.google.com/g/opencog/c/Bfjvh_WFVq0

I understand that from a formal AI perspective it's maybe not a challenge; I was enamoured with the basic sentience following along causal chains.
--Mark

On Sun, May 22, 2022 at 10:41 AM Linas Vepstas <linasv...@gmail.com> wrote:
Hi Mark, my email inbox is slammed and I missed it -- resend? -- linas

On Sat, May 21, 2022 at 11:07 PM Mark Wigzell <markw...@gmail.com> wrote:
Hey Linas, I wrote you a while back about slime molds; did you see that? I was hoping to hear your opinion on the subject.
Cheers,
Mark


--
Patrick: Are they laughing at us?
Sponge Bob: No, Patrick, they are laughing next to us.

Linas Vepstas

May 27, 2022, 5:32:44 PM
to Mark Wigzell, opencog
Hi Mark,

On Tue, May 24, 2022 at 5:00 PM Mark Wigzell <markw...@gmail.com> wrote:
Cognition has been redefined already
Hi Linas, so I get it: various parts of nature have been classified according to an updated theory of cognition.

Please understand that biologists and neuroscientists are a diverse bunch. They argue among one another. There is no single "theory of cognition". Instead, there is a large body of known facts about how bacteria and slime molds communicate with one another, and different researchers connect the dots differently, and are interested in different things.

For example: COVID-19 and long covid. Did you know that covid encodes for vesicles that are very similar to the vesicles your neurons use to transport neurotransmitters? This might be why some long-covid sufferers lose their sense of smell -- the neuronal transport of signalling molecules is wrecked by the covid-encoded vesicles. Great! Interesting idea! Back to slime molds: are they also using vesicles similar to those encoded by covid-19? If so, can covid disrupt slime-mold cognition? Why or why not?

Details, details, details ... lose track of the details, and the result is a brutish, naive, simplistic understanding of extremely complex topics. But if you know the details, then you can make deep, sharp, precise statements.

What I'm wondering about is: how does an AGI, or more specifically, your "learn" research involving pure symbolic induction, work?

That has a simple answer, presented in this PDF: https://github.com/opencog/learn/blob/master/learn-lang-diary/agi-2022/grammar-induction.pdf -- It's backed by many hundreds of pages of other papers and diary notes, all in github.
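
For a taste of what one early step of symbolic induction looks like, here is a toy sketch in Python: count word pairs in a corpus and compute their mutual information. This is my own illustration of the general idea only -- the corpus and names are made up, and the actual pipeline described in the PDF is far more elaborate:

    import math
    from collections import Counter
    from itertools import combinations

    corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]  # toy data

    pair_counts, word_counts = Counter(), Counter()
    for sentence in corpus:
        word_counts.update(sentence)
        pair_counts.update(combinations(sentence, 2))

    n_words = sum(word_counts.values())
    n_pairs = sum(pair_counts.values())

    def mi(w1, w2):
        # MI(w1, w2) = log2 [ P(w1, w2) / (P(w1) P(w2)) ]
        p_pair = pair_counts[(w1, w2)] / n_pairs
        p1 = word_counts[w1] / n_words
        p2 = word_counts[w2] / n_words
        return math.log2(p_pair / (p1 * p2))

    print(mi("the", "sat"))  # high-MI pairs suggest grammatical links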

Is it that we run the pattern-recognition part through the maze to create a grammar, then hand it off to a formal symbolic AI system that induces how to "find food"?

You've used my favorite buzzwords, but I don't understand the question.
 
So, no need to create a system in which such an algorithm "emerges"?

The opencog "MOSES" subsystem is able to automatically discover algorithms that fit training data. The concepts that MOSES uses are widespread in the industry; there must be hundreds of papers describing similar ideas, and similar systems are the bread-n-butter of the multi-billion-dollar machine-learning industry.

A shorter answer: yes, absolutely, one must "create a system in which such an algorithm emerges"!
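
To illustrate what "discovering algorithms that fit training data" looks like in its most stripped-down form, here is a toy evolutionary search in Python. It is a sketch in the MOSES spirit only -- real MOSES evolves typed program trees with much smarter representation-building -- and every name in it is made up for this example:

    import random

    DATA = [(x, 3 * x + 1) for x in range(10)]  # target: f(x) = 3x + 1

    def fitness(prog):
        a, b = prog  # a candidate "program" is just two coefficients here
        return -sum(abs((a * x + b) - y) for x, y in DATA)

    def mutate(prog):
        a, b = prog
        return (a + random.choice([-1, 0, 1]), b + random.choice([-1, 0, 1]))

    pop = [(random.randint(-5, 5), random.randint(-5, 5)) for _ in range(20)]
    for gen in range(50):
        pop.sort(key=fitness, reverse=True)  # keep the fittest candidates
        pop = pop[:5] + [mutate(random.choice(pop[:5])) for _ in range(15)]

    print(max(pop, key=fitness))  # typically converges to (3, 1)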

 
I guess I mean: does AGI not involve an analog to the "protoplasmic compute" that the single slime mold does, which also involves an external chemical memory?

Details, details, details. What are the details of how "protoplasmic compute" actually works? Lord help you if you invoke "tubulin", as then you're wrecked in the depths of the Hameroff-Penrose hypothesis. Mainstream bio rejects Hameroff-Penrose, but ... who tf knows.

Mainstream biologists will happily point out that biology is a lot more complicated than just gradient descent. It doesn't matter whether we're talking about the gradient descent of quorum sensing in bacteria, or gradient descent on the conditional log-likelihood of an RNN encoder/decoder or a multi-head-attention transformer network.
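
For concreteness, here is "gradient ascent on a log-likelihood" in its very simplest form -- a toy Python sketch fitting the mean of a unit-variance Gaussian. The data and learning rate are made up; transformer training is, at heart, this same loop, scaled up by many orders of magnitude:

    data = [1.8, 2.1, 2.4, 1.9]  # toy observations
    mu, lr = 0.0, 0.1

    for _ in range(100):
        # d/dmu of the Gaussian log-likelihood (unit variance) is sum(x - mu)
        grad = sum(x - mu for x in data)
        mu += lr * grad  # step uphill on the log-likelihood

    print(mu)  # converges to the sample mean, 2.05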

It's "obvious" that one wants to create a system that can automatically learn algorithms such as transformer networks. It's just not obvious how to do this.

I too am interested in the automated discovery of algorithms; but I'm interested in an explicitly symbolic approach.

--linas
 
Regards,
Mark
 

Linas Vepstas

May 30, 2022, 2:41:01 PM
to Mark Wigzell, opencog


On Sat, May 28, 2022 at 9:25 AM Mark Wigzell <markw...@gmail.com> wrote:

Regarding "Eva" I think we lost our Blender help? I think I have dropped the attempt to revive her. I wanted to see that software working.

This is unfortunate. It seemed like several people had understood what the bugs/problems were, and had gotten a pretty good idea of how to fix them, and then ... ran out of steam. It seemed like it was all on the verge of being fixed, and then activity stopped.

Can I ask you to do two things, then? The first is to write a progress report of what happened. A simple cut-n-paste of some of the email threads would be enough -- some of those emails were great.

Next, package that progress report as a README, and create a git pull request, so that it appears in the main blender git repo. If you have any other non-breaking changes, check those in too.

If you have breaking changes or half-finished experiments, push those too, into distinct branches. They wouldn't get merged into the master branch, but it at least moth-balls some maybe-useful half-steps.

I understand it is not the direction that AGI is going in.

Well, Eva was a "GUI" -- a graphical user interface for talking and interacting. Yes, the A(G)I is something distinct, but an A(G)I will still need a GUI of some kind to interact with the world. Of course, other kinds of GUIs might be possible -- e.g. some photorealistic deep-learning neural-net face could look really, really nice. On the flip side, you wouldn't be able to use that to drive an actual physical robot.
 
I think your "learn" approach will bear fruit; I'll be watching that!
 
Thanks!

--linas