GPT-2 doesn't do natural language "understanding"; I would say its NL
understanding is even worse than OpenCog's right now...
We have used transformer NNs together with OpenCog for NL dialogue
systems, e.g. for controlling the Philip K. Dick robot that we showed
off at the Web Summit last year. However, this was more of a software
integration than a deep conceptual integration (basically, if the
OpenCog rulebase couldn't come up with a good response to something
someone said to the robot, it would fall back to a transformer-NN
model we trained on a corpus of PKD writing).
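To give a rough idea, the fallback control flow looked something like
the minimal Python sketch below. The function and object names here
(respond, rulebase.match, nn_model.generate) are just placeholders for
illustration, not the actual robot codebase.

    def respond(utterance, rulebase, nn_model, confidence_threshold=0.5):
        """Try the symbolic rulebase first; fall back to the transformer NN."""
        # Ask the OpenCog rulebase for a candidate response plus a confidence estimate
        response, confidence = rulebase.match(utterance)
        if response is not None and confidence >= confidence_threshold:
            return response  # the symbolic pipeline handled it
        # Otherwise generate a reply from the PKD-corpus-trained language model
        return nn_model.generate(prompt=utterance)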
Andres Suarez and I are also working on a deeper integration where we
use a transformer-NN model as a "sentence probability oracle" or
"paragraph probability oracle", and then use the probabilities
estimated by the model to help rank and prioritize different
rule-inductions and category-formations proposed within OpenCog's
symbolic learning.
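For concreteness, here is a toy sketch of the "sentence probability
oracle" part using an off-the-shelf HuggingFace GPT-2 model to score
candidate natural-language renderings of proposed rules. This is an
illustrative assumption-laden sketch, not our actual integration code,
and the candidate sentences are made up.

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def sentence_log_prob(sentence):
        """Total log-probability the language model assigns to a sentence."""
        ids = tokenizer.encode(sentence, return_tensors="pt")
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean negative log-likelihood per predicted token
        return -loss.item() * (ids.size(1) - 1)  # convert to a total log-likelihood

    # Rank candidate rule renderings: higher log-probability = more plausible
    candidates = ["cats are a kind of mammal", "cats are a kind of furniture"]
    ranked = sorted(candidates, key=sentence_log_prob, reverse=True)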
We have also been experimenting with DeepWalk-type methods for
embedding OpenCog Atomspace hypergraphs into vector spaces (in the
context of the bio-Atomspace containing ensembles of
genomic-data-analysis MOSES classification trees, plus lots of data
from numerous bio-ontologies), with decent results... and have been
thinking about how to create new forms of graph transformer networks
that are appropriate for generative modeling of specific sorts of
Atomspace hypergraphs (e.g. hypergraphs that record PLN inference
histories)...
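For anyone unfamiliar with DeepWalk, the basic recipe is: take random
walks over the graph, treat each walk as a "sentence" of node IDs, and
run skip-gram word embedding over those sentences. A toy Python sketch
using networkx and gensim follows; it uses a stand-in graph rather
than a real Atomspace export, so the specifics are assumptions for
illustration only.

    import random
    import networkx as nx
    from gensim.models import Word2Vec

    def random_walks(graph, walks_per_node=10, walk_length=40):
        """Generate truncated random walks; each walk becomes a 'sentence' of node IDs."""
        walks = []
        for node in graph.nodes():
            for _ in range(walks_per_node):
                walk = [node]
                while len(walk) < walk_length:
                    neighbors = list(graph.neighbors(walk[-1]))
                    if not neighbors:
                        break
                    walk.append(random.choice(neighbors))
                walks.append([str(n) for n in walk])
        return walks

    # Stand-in for an Atomspace hypergraph rendered as an ordinary graph
    # (e.g. a bipartite graph with one node per Atom and one node per Link)
    g = nx.karate_club_graph()
    model = Word2Vec(random_walks(g), vector_size=64, window=5, min_count=1, sg=1)
    vec = model.wv["0"]  # embedding vector for node 0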
So Tom, I think your instinct is correct, but the details get hairy...
ben
--
Ben Goertzel, PhD
http://goertzel.org
“The only people for me are the mad ones, the ones who are mad to
live, mad to talk, mad to be saved, desirous of everything at the same
time, the ones who never yawn or say a commonplace thing, but burn,
burn, burn like fabulous yellow roman candles exploding like spiders
across the stars.” -- Jack Kerouac