Linas -- I think this reinforces your view of learning from data, instead of adding more human-curated rules:
You received this message because you are subscribed to the Google Groups "link-grammar" group.
To unsubscribe from this group and stop receiving emails from it, send an email to link-grammar...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/link-grammar/464d1f92-00b7-4780-870a-2156229b4567o%40googlegroups.com.
You received this message because you are subscribed to the Google Groups "opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email to opencog+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/opencog/CAHrUA36x8QBXGUg4f9BMw5StdhRu1WFjFr_9ySo_vZesMeZrTA%40mail.gmail.com.
"If you can't spot the pattern, you've not accomplished anything."
Every significant, and truly useful, advance I've made in my own language-apprehension code has been based on recognizing a pattern and coding for it. I fully agree.
Can a neural network be trained on patterns instead of things?
Can code designed to recognize, for example, faces (as eigenfaces does) be trained instead to recognize blocks of data that look the same, despite perhaps coming from vastly dissimilar fields?
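As a loose sketch of that eigenfaces analogy, one could learn an "eigenblock" basis from flattened data blocks via SVD and then compare new blocks in the reduced space. The function names and the NumPy approach below are my own illustration, not anything proposed in the thread:

```python
import numpy as np

def fit_eigenbasis(blocks, k=5):
    """Learn a k-dimensional 'eigenblock' basis from flattened data blocks."""
    X = np.asarray([b.ravel() for b in blocks], dtype=float)
    mean = X.mean(axis=0)
    # SVD of the centered data gives the principal components (cf. eigenfaces).
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(block, mean, basis):
    """Coordinates of a block in the learned eigenspace."""
    return basis @ (block.ravel() - mean)

def similarity(a, b, mean, basis):
    """Euclidean distance in eigenspace; small means the blocks 'look the same'."""
    return np.linalg.norm(project(a, mean, basis) - project(b, mean, basis))
```

The point of the sketch is only that nothing here cares whether the blocks came from images, audio, or database rows; any field's data can be flattened into the same space.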
Apologies if I'm intruding, or seem to be "out of my lane"… a popular buzzword these days.
Dave – LONG time lurker…
I appreciate your response. Honestly, I have been coming up against these "definition" issues with greater frequency as of late.
I don't claim to be an expert. I'm probably, mostly, an idiot in this field, but I did figure out a way to discern whether a (human) speaker's utterance is a question or a statement, with 98.4% accuracy. It can even detect indirect questions as well as direct ones.
I found … a pattern – in English speech, and I coded it.
Can my robots *answer* the question once it is detected? Mostly no. But they can identify the utterance type better than anything I've tried from any other source.
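Dave's actual pattern is not described in the thread, but as a purely hypothetical illustration of the idea, a surface-pattern question detector might look something like this (all word lists and cue phrases are my own guesses, not his method):

```python
import re

# Hypothetical illustration only: not the pattern Dave actually found.
WH_WORDS = {"who", "what", "when", "where", "why", "how", "which"}
AUX_VERBS = {"is", "are", "was", "were", "do", "does", "did",
             "can", "could", "will", "would", "should", "have", "has"}
INDIRECT_CUES = re.compile(r"\b(wonder|tell me|do you know|curious)\b", re.I)

def is_question(utterance):
    """Classify an utterance as question vs. statement from surface patterns."""
    text = utterance.strip().lower().rstrip(".!?")
    words = text.split()
    if not words:
        return False
    # Direct questions often open with a wh-word or an inverted auxiliary.
    if words[0] in WH_WORDS or words[0] in AUX_VERBS:
        return True
    # Indirect questions embed a cue phrase ("I wonder whether ...").
    return bool(INDIRECT_CUES.search(text))
```

The interesting part is exactly what Dave says: no learning at all, just a hand-coded pattern that a native speaker noticed.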
Beyond this, my last message to you was about recognizing patterns in data the way image-processing scripts recognize patterns in an image; often both are just three-dimensional matrices.
For example, would it be possible to "train" (hold on here… lol) a net to recognize a given data pattern, then have it look at different databases, data lakes, or wads of random data… and recognize whether that particular data pattern exists in those other regions?
I apologize if I'm oversimplifying things. I'm imagining that data structures would share a common architecture across disparate fields, and may be recognized in this manner.
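The "data image" idea can be sketched without any net at all, using template matching as image code does. The following is my own minimal illustration (names and thresholds invented): standardize a template, slide it over an arbitrary data stream, and flag offsets where the correlation is high, regardless of what field the stream came from:

```python
import numpy as np

def find_pattern(haystack, pattern, threshold=0.95):
    """Slide a template over a 1-D data stream and report offsets where the
    normalized correlation exceeds a threshold -- the 'data image' idea."""
    h = np.asarray(haystack, dtype=float)
    p = np.asarray(pattern, dtype=float)
    p = (p - p.mean()) / (p.std() + 1e-12)
    hits = []
    for i in range(len(h) - len(p) + 1):
        w = h[i:i + len(p)]
        w = (w - w.mean()) / (w.std() + 1e-12)
        score = float(np.dot(w, p)) / len(p)  # Pearson correlation coefficient
        if score >= threshold:
            hits.append(i)
    return hits
```

Because the window is standardized, the match is invariant to scale and offset, so the "same" pattern is found even when the other dataset measures it in different units.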
Again, I may be overstepping my experience, and I apologize for wasting your time if so. I have had some very rewarding experiences with language parsing/understanding based on coding for patterns that were simply determined from my own neurology/intuitions as a native speaker of the language I code in (English). My intuitions tell me that there are possibly ways to view data in the same way as we view images, and to recognize a "data image" in much the same way.
You are correct, I believe, that neural nets can't spot patterns, but humans can; that seems to be one thing we do really well. But if we feed neural nets examples of patterns, instead of things, can we come up with something new?
I wish I had better words here. While I can see the structures I am referring to, I can't seem to really articulate them…
Feel free to tell me to go back to Comp Sci 101 if I am not offering anything here, I won't be offended 😊
I love what you are all doing here. I spend a lot of time imagining cognitive architectures and methods of creating something akin to genuine "understanding" in a code base….
Feel free to tell me to shut up and leave you alone 😊
Dave
Another voice from a long-time lurker: patterns are only relevant if they can be recognised by humans. Some can, others can't. E.g. I don't think humans are good at recognising that cba is the reverse of abc, but I imagine a machine might well pick it up. So maybe you need a mixture of hand-crafting for the pattern types and uncontrolled searching for those patterns.
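That mixture, hand-crafted pattern *types* plus uncontrolled machine search for their instances, can be sketched in a few lines (all names here are my own illustration): the human supplies a small catalogue of candidate transformations, and the machine checks which of them hold across example pairs, e.g. spotting that cba is the reverse of abc:

```python
# Hand-craft the pattern *types*; let the machine search for instances.
PATTERN_TYPES = {
    "identity": lambda s: s,
    "reverse": lambda s: s[::-1],
    "sorted": lambda s: "".join(sorted(s)),
}

def spot_patterns(pairs):
    """Return the pattern types that hold for every (input, output) pair."""
    return [name for name, f in PATTERN_TYPES.items()
            if all(f(a) == b for a, b in pairs)]
```

The reversal pattern that humans miss is trivial for the machine to confirm, but only because a human put "reverse" into the catalogue in the first place.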
One refinement of this idea is that purely reactive learning could turn into proactive learning as you learn what to expect. In language, you learn that big often stands before book, and that words which in other ways are like big often stand before words that are otherwise like book, so you generalise to adjectives standing before nouns; then whenever you hit a word which you think is an adjective, you actively look for its noun - a very different learning strategy from purely statistical learning. These expectations gradually turn into a grammar, where you can talk about dependencies and dependency types (e.g. subjects versus objects). I know that sounds like hand-crafting creeping back in through the back door, but all that's hand-crafted is your initial set of pattern types.
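A toy sketch of that reactive-to-proactive shift, under the simplifying assumption that word classes are already given (the hand-crafted part), might look like this; the data and names are invented for illustration. First count which class follows which, then turn the counts into an expectation you actively apply:

```python
from collections import Counter

# Toy tagged stream; the tags themselves are the hand-crafted pattern types.
tagged = [("big", "ADJ"), ("book", "NOUN"), ("old", "ADJ"), ("dog", "NOUN"),
          ("the", "DET"), ("red", "ADJ"), ("car", "NOUN"), ("runs", "VERB")]

# Reactive phase: simply count which tag follows which.
follows = Counter((a[1], b[1]) for a, b in zip(tagged, tagged[1:]))

def expected_next(tag):
    """Proactive phase: the tag we now actively *expect* after `tag`."""
    candidates = {b: n for (a, b), n in follows.items() if a == tag}
    return max(candidates, key=candidates.get) if candidates else None
```

Once `expected_next("ADJ")` returns "NOUN", the learner stops passively tallying and starts looking for the noun whenever it sees an adjective, which is the beginning of a dependency grammar.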
Best wishes for your thinking.
Dick
-- Richard Hudson (dickhudson.com)
This is a very interesting conversation; thank you all for sharing your insights. Perhaps I could add that humans may not have what I would call general pattern recognizers. Our ability to recognize patterns may come from pattern-recognizing modules, created through evolution, that are specialized for particular domains (i.e. visual, time-frequency, audio, conceptual, etc.). That would be similar to a pattern recognizer 'trained' by a neural network, lacking generality. Perhaps there are patterns in the world we simply lack the mental faculty to detect; in that case, the pattern recognition would have to be developed in the evolutionary domain rather than the learning domain.
But I can't find a better word ... so I'm stuck with "statistical learning", for now. It really really sucks, since it is a HUGE impediment to getting the ideas across.
Oh ye minters of catchy phrases, I appeal to you now!