Le ven. 30 août 2019 à 21:05, Amirouche Boubekki
<
amirouche...@gmail.com> a écrit :
> Any advice welcome.
>
To summarize:
a) According to both Amir and Linas, the SAT solver is slow, but there
is room for improvement
b) Interfacing with the AtomSpace would be nice
c) Building Scheme bindings (using SWIG) would be nice
> Which sat solver do you recommend to use?
SAT solvers share more or less a common interface, so it will be easy
to swap implementations.
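To illustrate what I mean by a common interface: most solvers boil down
to "add clauses, solve, read back a model". Here is a toy sketch of
that shape in Python; BruteForceSolver is a stand-in of my own, not a
real solver, but anything with the same two methods (say, a binding to
MiniSat or CaDiCaL) could be dropped in its place.

```python
# Sketch of the common SAT-solver interface: add clauses, solve,
# read back a model. BruteForceSolver is a toy stand-in for a real
# solver; the class and method names here are illustrative.
from itertools import product

class BruteForceSolver:
    """Clauses are lists of non-zero ints, DIMACS style:
    3 means variable 3 is true, -3 means variable 3 is false."""
    def __init__(self):
        self.clauses = []
        self.nvars = 0

    def add_clause(self, clause):
        self.clauses.append(clause)
        self.nvars = max(self.nvars, *(abs(lit) for lit in clause))

    def solve(self):
        # Try every assignment; fine for toy problems only.
        for bits in product([False, True], repeat=self.nvars):
            model = {i + 1: b for i, b in enumerate(bits)}
            if all(any(model[abs(lit)] == (lit > 0) for lit in c)
                   for c in self.clauses):
                return model
        return None  # unsatisfiable

solver = BruteForceSolver()
solver.add_clause([1, 2])    # x1 or x2
solver.add_clause([-1, 2])   # not x1 or x2
solver.add_clause([-2, 3])   # not x2 or x3
model = solver.solve()
print(model)
```

Swapping in a real solver would then only change the constructor line,
which is the whole point.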
My goal has not changed in five years! I want to create a mini-opencog
framework, in the spirit of Scheme: build abstractions on top of
powerful primitives, such as a SAT solver.
Like we discussed previously, I think the mix of programming languages
(C, C++, Python, Scheme, Java, Haskell) is not helping people embrace
the power of opencog. So I will try to rely only on proven C
libraries. Link Grammar is such a beast. But:
a) As far as I understand, there are still some moving pieces in how
LG is implemented, and improvements could still be made to the core
mechanic that parses natural languages, to support more of them. That
is, not everything about LG engineering is performance optimization. A
high-level language is a good candidate for experimenting with new
features in LG.
b) I would like to better understand link grammar theory. A bit of
thinking, goofing around and reading led me to the personal discovery
that LG is a very peculiar kind of software, because it relies on a
programming language (the language in which the dictionaries are
expressed) that is very broad and powerful, as others have noted. It
goes along with the idea of a Domain Specific Language, where one
builds a programming language to solve a particular task. I think LG
is the best example of a DSL I know. As such, it calls for more study,
experimentation and understanding. Even if LG or a particular
dictionary (e.g. the English dictionary) is flawed somehow, it is a
significant software feat that I am sure will be taken as inspiration
in future human endeavours.
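For readers who have not seen the dictionary language, here is the
flavor of it, loosely modeled on the introductory example in the LG
documentation (the words and connector names here are illustrative,
not taken from the real English dictionary):

```
% A toy dictionary fragment: words on the left, linking
% requirements on the right. "+" connects to a word on the
% right, "-" to a word on the left.
a the:      D+;
cat snake:  D- & (O- or S+);
Mary John:  O- or S+;
ran:        S-;
chased:     S- & O+;
```

A handful of lines like these already define a small grammar, which is
what makes the DSL point so striking to me.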
c) One area where the Link Grammar software will probably need to
improve is the ability to create, fix, extend and improve the
dictionaries. That is, it needs a user interface and user experience
that look better than notepad or the current REPL CLI tool called
link-parser. Alas, I have no better idea than a REPL of some sort, but
using voice... One thing that could improve the UI/UX of creating and
maintaining a dictionary is better integration with the AtomSpace and,
in general, with the end-user application: it would be neat to let the
user fix the dictionary. That is made difficult by the current
microservice-like setup of opencog, which gets in the way of a quick
feedback loop between LG, the knowledge base and the user. To make it
clearer: the current unit tests are obviously a good thing, and one
should bring that feature, unit testing, into the client. We could
have ground-truth knowledge, similar to the expected parse trees in
the current unit tests, and even take into account knowledge inferred
from parse trees, checked against a version of the dictionary as soon
as the user makes a change to it. Basically: give the user access to
some knobs that are frozen right now, and give the user the tools
necessary to make sure It Works (tm). The reason for that is
two-sided: 1) for rare languages, it should be possible to build an LG
dictionary from the user interface, probably with a bootstrap language
like Lojban or English; 2) I think the LG dictionary language (or
something similar) is a good candidate for inclusion in projects such
as Wikidata, but before that happens it must be possible to check the
correctness of changes, since it is possible to do so (unlike common
sense, encyclopedic and so-called lexicographic knowledge, which are
ground truth).
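The "unit testing in the client" idea could be as simple as the
following sketch: keep a set of golden sentences with their expected
links, and re-check them after every dictionary edit. Everything here
is illustrative; parse() is a stand-in for a real call into LG (e.g.
through its Python bindings), not the actual API.

```python
# Sketch of client-side dictionary regression testing: after each
# dictionary edit, re-parse golden sentences and diff against the
# expected links. parse() is a stand-in for a real LG binding.

GOLDEN = {
    # sentence -> expected set of (link-type, left-word, right-word)
    "the cat ran": {("D", "the", "cat"), ("S", "cat", "ran")},
}

def parse(sentence):
    # Stand-in parser so the harness itself can be demonstrated;
    # a real client would invoke Link Grammar here.
    return GOLDEN.get(sentence, set())

def check_dictionary():
    failures = []
    for sentence, expected in GOLDEN.items():
        got = parse(sentence)
        if got != expected:
            failures.append((sentence, expected, got))
    return failures

failures = check_dictionary()
print("dictionary OK" if not failures else failures)
```

The user edits a knob, the harness re-runs, and the feedback loop
stays tight: that is the whole idea.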
d) Like I tried to explain above, I prefer easy-to-code. Fast
programs, speedy processors et al. have proven numerous times in
recent years to be false friends. So, as Rob Pike might have said:
"make it work, then make it fast".
My understanding is that GOFAI has nightmares about slowness. As I
tried to explain somewhere else, there are workarounds for slow
processes, like a) lazy algorithms or beam search, b) probabilistic
models, c) a slow overall workflow.
The last point is interesting: my idea is that the problems AGI needs
to solve are big and slow, and sometimes not advancing at all when
humans tackle them. So even if the computer is "slow" compared to
ordering a pizza, the user will be thankful, even if it replies with a
message saying: need more input.
--
Amirouche ~ amz3 ~
https://hyper.dev