Hi Linas, so I get it that various parts of nature have been classified according to an updated theory of cognition.
Please understand that biologists and neuroscientists are a diverse bunch. They argue amongst one another. There is no single "theory of cognition". Instead, there is a large body of facts known about how bacteria and slime molds communicate with one another, and different researchers connect the dots differently, and are interested in different things.
For example, COVID-19 and long covid: did you know that covid encodes for vesicles that are very similar to the vesicles your neurons use to transport neurotransmitters? This might be why some long-covid sufferers lose their sense of smell -- the neuronal transport of signalling molecules is wrecked by the covid encoding. Great! Interesting idea! Back to slime molds: are they also using vesicles similar to those encoded by covid-19? If so, can covid disrupt slime-mold cognition? Why or why not?
Details, details, details ... lose track of the details, and the result is a brutish, naive, simplistic understanding of extremely complex topics. But if you know the details, then you can make deep, sharp, precise statements.
What I'm wondering about is: how does an AGI, or more specifically, your "learn" research involving pure symbolic induction, work?
Is it that we run the pattern-recognition part through the maze to create a grammar, then hand it off to a formal symbolic AI system that induces how to "find food"?
You've used my favorite buzzwords, but I don't understand the question.
So, no need to create a system in which such an algorithm "emerges"?
The opencog "MOSES" subsystem is able to automatically discover algorithms that fit training data. The concepts that MOSES uses are widespread in the industry; there must be hundreds of papers describing similar ideas, and similar systems are the bread and butter of the multi-billion-dollar machine-learning industry.
A shorter answer: yes, absolutely, one must "create a system in which such an algorithm emerges"!
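To give a flavor of what this kind of algorithm discovery looks like, here is a minimal sketch in the style of genetic programming. It is purely illustrative: the data and the hidden target function are made up, and the real MOSES is far more sophisticated (it searches over normalized program trees, with much more machinery). But the core loop -- generate candidate programs, score them against training data, keep and mutate the best -- is the essential idea.

    # Toy sketch of MOSES-style program evolution. NOT the actual MOSES code;
    # it just shows the core loop: generate candidate programs, score them on
    # training data, keep and mutate the best.
    import random

    OPS = [('+', lambda a, b: a + b),
           ('-', lambda a, b: a - b),
           ('*', lambda a, b: a * b)]

    def random_tree(depth=3):
        """Random expression tree over the variable x and small constants."""
        if depth == 0 or random.random() < 0.3:
            return ('x',) if random.random() < 0.5 else ('const', random.randint(-3, 3))
        name, _ = random.choice(OPS)
        return (name, random_tree(depth - 1), random_tree(depth - 1))

    def evaluate(tree, x):
        if tree[0] == 'x':
            return x
        if tree[0] == 'const':
            return tree[1]
        return dict(OPS)[tree[0]](evaluate(tree[1], x), evaluate(tree[2], x))

    def fitness(tree, data):
        """Negative squared error on the training data; higher is better."""
        return -sum((evaluate(tree, x) - y) ** 2 for x, y in data)

    def mutate(tree):
        """Replace a random subtree with a freshly generated one."""
        if tree[0] in ('x', 'const') or random.random() < 0.3:
            return random_tree(2)
        return (tree[0],) + tuple(mutate(c) for c in tree[1:])

    # Made-up training data for the hidden target function y = x*x + 1.
    data = [(x, x * x + 1) for x in range(-5, 6)]

    population = [random_tree() for _ in range(200)]
    for generation in range(50):
        population.sort(key=lambda t: fitness(t, data), reverse=True)
        if fitness(population[0], data) == 0:
            break  # found an exact fit
        survivors = population[:50]  # keep the top quarter
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(150)]

    print("best program:", population[0], "fitness:", fitness(population[0], data))

Run it a few times; it often converges on an expression equivalent to x*x + 1 within a few dozen generations, and prints its best approximation otherwise.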
I guess I mean: does AGI not involve an analog to the "protoplasmic compute" that a single slime mold does, which also involves an external chemical memory?
Details, details, details. What are the details of how "protoplasmic compute" actually works? Lord help you if you invoke "tubulin", as then you're wrecked on the shoals of the Hameroff-Penrose hypothesis. Mainstream bio rejects Hameroff-Penrose, but ... who tf knows.
Mainstream biologists will happily point out that biology is a lot more complicated than just gradient descent. Doesn't matter if we're talking about the gradient descent of quorum sensing in bacteria, or the gradient descent of the conditional log-likelihood of an RNN encoder/decoder or a multi-head-attention transformer network.
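To pin the phrase down: "gradient descent of the conditional log-likelihood" is, stripped of all the neural-network machinery, just the loop below. This is a hedged toy sketch -- a one-parameter logistic model on invented data, not an RNN or a transformer -- but the loss and the update rule are the same ones that, scaled up by many orders of magnitude, train those networks.

    # Toy sketch (made-up data): gradient descent on the conditional negative
    # log-likelihood of a one-parameter logistic model p(y=1|x) = sigmoid(w*x).
    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    # Invented training data: y is 1 when x is positive, plus one noisy point.
    data = [(-2.0, 0), (-1.0, 0), (-0.5, 1), (0.5, 1), (1.0, 1), (2.0, 1)]

    w = 0.0    # the single model parameter
    lr = 0.1   # learning rate
    for step in range(1000):
        # Gradient of the summed negative log-likelihood: sum((p - y) * x).
        grad = sum((sigmoid(w * x) - y) * x for x, y in data)
        w -= lr * grad  # step downhill

    nll = -sum(math.log(sigmoid(w * x)) if y == 1 else
               math.log(1.0 - sigmoid(w * x))
               for x, y in data)
    print(f"w = {w:.3f}, negative log-likelihood = {nll:.3f}")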
It's "obvious" that one wants to create a system that can automatically learn algorithms such as transformer networks. It's just not obvious how to do this.
I too am interested in the automated discovery of algorithms, but I'm interested in an explicitly symbolic approach.
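To gesture at the contrast: instead of nudging numeric weights downhill, a symbolic approach manipulates the expressions themselves. Here's a toy sketch, purely illustrative (it is not how the opencog "learn" code actually works, and the operator set and hidden target are invented): it systematically enumerates small symbolic expressions and keeps the first one that exactly explains every observation.

    # Toy sketch of explicitly symbolic induction: enumerate small symbolic
    # expressions and keep one that exactly explains every observation.
    # Purely illustrative; not the opencog "learn" code.
    from itertools import product

    VARS = ['x']
    CONSTS = [1, 2]
    OPS = {'+': lambda a, b: a + b, '*': lambda a, b: a * b}

    def expressions(depth):
        """Yield (text, function) pairs for all expressions up to this depth."""
        if depth == 0:
            for v in VARS:
                yield v, (lambda env, v=v: env[v])
            for c in CONSTS:
                yield str(c), (lambda env, c=c: c)
            return
        yield from expressions(depth - 1)
        subexprs = list(expressions(depth - 1))
        for (sa, fa), (sb, fb) in product(subexprs, repeat=2):
            for name, op in OPS.items():
                yield (f"({sa} {name} {sb})",
                       lambda env, fa=fa, fb=fb, op=op: op(fa(env), fb(env)))

    # Observations of a hidden function, here y = x*x + 1.
    observations = [({'x': x}, x * x + 1) for x in range(-3, 4)]

    for text, fn in expressions(depth=2):
        if all(fn(env) == y for env, y in observations):
            print("induced:", text)
            break

Note the difference in character: the result is an exact, human-readable expression rather than a vector of weights, and "learning" here is search through a space of discrete structures.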
--linas