P vs NP


Ivan Vodišek

Nov 6, 2018, 12:00:31 PM
to ope...@googlegroups.com
Hello everyone :)

I have a question regarding my independent research relating to OpenCog. I read somewhere (I really don't remember where) that if P = NP, it would be beneficial to AI in general.

There are science fields which would obviously benefit if P = NP. But my question is: how specifically would OpenCog benefit from that solution? Somehow, it should be a matter of reducing a large number of possible combinations, but I don't really see where AI would fit into this equation. Googling around didn't produce anything interesting, so I'm posting to this OpenCog community in the hope of an answer.

Thank you all for your time,
Ivan V.

Linas Vepstas

Nov 6, 2018, 4:37:51 PM
to opencog
Ivan,

Think of it like so. If you have some algo solving some problem -- doesn't matter what, AI or not AI, opencog or not opencog, anything -- and you make the problem N times bigger, how much longer does it take to solve it? (Say twice as big, or three times as big, or just N times bigger...)

If you make it twice as big, and it takes the same amount of time, that's "constant time" and is written O(1). For example, access to an array or vector is O(1); it does not depend on the vector size.

If you make it twice as big, and it takes twice as long, that's linear; it's written O(N). For example, walking a linked list is O(N) -- make it twice as long, and you have twice as many steps to get to the end.

Some problems are quadratic -- O(N^2)

Some problems are cubic -- O(N^3) -- most parsers, of any kind, are O(N^3). Basically, you can't beat that; it's a kind of best-possible performance.

In general, write O(N^y) for some number y. If y is finite, then the problem is "polynomial", or in P.

But it could happen that the best algorithm is O(N^N) or O(exp(N)) or O(N!), that is, N-factorial. These take more than polynomial time; such problems are the hard cases of NP. (Strictly, NP stands for "nondeterministic polynomial", not "non-polynomial", but that's the informal picture.)
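To make the gap concrete, here is a toy Python sketch (illustration only, not from the discussion) tabulating how these classes grow:

```python
import math

# How running time scales with problem size N under different
# complexity classes: polynomial (N^2, N^3) vs super-polynomial (2^N, N!).
for n in [2, 4, 8, 16]:
    print(f"N={n:>2}  N^2={n**2:>4}  N^3={n**3:>5}  "
          f"2^N={2**n:>6}  N!={math.factorial(n)}")
```

Even at N=16, N^3 is only 4096, while N! is already about 2 * 10^13.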

So P=NP says that for every problem in NP, there exists some (maybe unknown) algorithm that solves it in polynomial time, i.e. in O(N^y) for some number y.

Three problems:
a) It seems very, very unlikely that P=NP.
b) Even if P=NP, you probably still won't know what the best algo actually is.
c) The exponent y could still be large. I read a paper yesterday about an algo that is O(N^7), which is pretty useless -- if my problem takes a minute of CPU time, making it twice as big makes it run for 2^7 = 128 minutes (about two hours). If I make it three times as big, 3^7 = 2187 minutes = 1.5 days!!

Conclusion: P=NP and/or P != NP is pretty much totally useless for practical work. You may as well be designing a spaceship to fly to the center of the galaxy. It's fun -- can't knock it -- do something fun, but it doesn't matter for actually writing any actual code.

--linas




--
You received this message because you are subscribed to the Google Groups "opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email to opencog+u...@googlegroups.com.
To post to this group, send email to ope...@googlegroups.com.
Visit this group at https://groups.google.com/group/opencog.
To view this discussion on the web visit https://groups.google.com/d/msgid/opencog/CAB5%3Dj6W5UhHawabD6NAyT7hEMfkJNCAhd%2BzJcXgFZpJu-MhoMg%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


--
cassette tapes - analog TV - film cameras - you

Nil Geisweiller

Nov 7, 2018, 8:19:32 AM
to ope...@googlegroups.com
Hello Ivan,

If P = NP, it would mean that, given a problem reframed as reasoning, such that its solution is a proof p of size s, one could construct p in time polynomial in s, which sounds very doubtful.

That's in theory. In practice, however, I think we could make P = NP for some class of inputs via clever use of meta-learning (such as the inference control meta-learning that we're experimenting with in OpenCog; see
https://blog.singularitynet.io/introspective-reasoning-within-the-opencog-framework-1bc7e182827).

In fact I had this dream where we could have a sequence of NP problems
and progressively learn how to solve them in P.

Obviously, for a finite set of inputs, one can turn any complex algorithm into a logarithmic one (think of a pre-calculated binary decision tree, where each branch is a bit describing the input and each leaf is the solution). But it should still be possible to learn an actual algorithm, rather than a giant finite decision tree, that performs worse than log and is more compact, but performs better than NP for a bunch of real-world problems.
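As a toy sketch of that finite-input idea (the "hard" function below is just a hypothetical stand-in, not a real NP problem): pre-compute every answer once, and afterwards each query is a table lookup, regardless of how expensive the original computation was.

```python
from itertools import product

# Hypothetical stand-in for an expensive function on n-bit inputs.
def hard(bits):
    return sum(i for i, b in enumerate(bits) if b) % 3 == 0

n = 10
# Pre-compute all 2^n answers once; each later query is a dict lookup,
# i.e. effectively constant time -- the "giant decision tree" above.
table = {bits: hard(bits) for bits in product((0, 1), repeat=n)}

query = (1, 0, 1, 0, 0, 0, 0, 0, 0, 0)
assert table[query] == hard(query)
```

Of course the table itself is exponentially large, which is exactly why a compact learned algorithm would be preferable.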

Nil


Ivan Vodišek

Nov 7, 2018, 9:45:08 AM
to ope...@googlegroups.com
Hello Linas,
I agree, the exponent y should be within reasonable bounds for a solution to be usable.

Hello Nil,
If P = NP it would mean that, given a problem reframed as reasoning
such that its solution is a proof p of size s, one could construct p
 in a polynomial time of s, which sounds very doubtful.

Thank you very much; that is exactly the kind of answer I was hoping for. It is very inspiring and induces many thoughts.

That's in theory, in practice however I think we could make P = NP for
some class of inputs via clever use of meta-learning (such as
inference control meta-learning that we're experimenting within
opencog, see
https://blog.singularitynet.io/introspective-reasoning-within-the-opencog-framework-1bc7e182827).
 
In fact I had this dream where we could have a sequence of NP problems
and progressively learn how to solve them in P.
 
Very interesting approach. I have found myself many times thinking of AI as a black box that could solve anything we place before it. And after all this time, I still consider the final AI capable of things intellectually unimaginable by merely mortal humans. Although there are problem setups that are contradictory, and as such unsolvable, the P vs NP problem sounds solid to me.

My approach is to manually resolve some NP-complete problems, such as Boolean SAT or Traveling Salesman, in polynomial time. I have some indications that it is a possible task. (My apologies to the SingularityNET crew for the mess that P = NP would make of AGI Coin. But there are so many important possibilities it would open up for many scientific tasks -- including protein folding -- to leave the big question unresolved. I don't know what to hope for. P != NP makes no mess at all, but P = NP generates too many possibilities to ignore, if P = NP is possible. Time will tell.)


Obviously for a finite set of inputs, one can turn any complex
algorithm into a logarithmic one (think of a pre-calculated binary
decision tree, where each branch is a bit describing the input and
each leaf is the solution). But it should still be possible to learn
an actual algorithm rather than a finite giant decision tree, that
performs worse that log, is more compact, but performs better than NP
for a bunch of real-world problems.

I think of algorithms as possible compressed forms of their output.


Nil Geisweiller

Nov 8, 2018, 1:17:18 AM
to ope...@googlegroups.com
Hello,

On 11/7/18 4:44 PM, Ivan Vodišek wrote:
> My approach is to manually resolve some NP complete
> <https://en.wikipedia.org/wiki/NP-completeness> problems such are
> Boolean SAT or Traveling salesman in polynomial time. I have some
> indices that it is a possible task.

I suspect you'd only discover subclasses of inputs for which P = NP.
But that doesn't mean it wouldn't be useful or insightful.

> (My apologies to Singularity.net
> crew due to mess that P = NP would do to AGI Coin. But there are so much

Quite the contrary, any knowledge that can be used to decrease
algorithmic complexity is extremely welcome. Consider that you're
talking to people who are banging their heads against the wall of
complexity on a daily basis. ;-)

Nil


Ivan Vodišek

Nov 8, 2018, 4:26:03 AM
to ope...@googlegroups.com
By definition, NP-complete problems are those NP problems to which any other NP problem can be reduced in polynomial time. Numerous problems have been proven NP-complete (Boolean SAT and Traveling Salesman being some of them), and if we could solve any of them in polynomial time, we would automatically cover the whole set of NP.
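For illustration (a toy sketch, not a serious solver): the obvious algorithm for SAT tries all 2^n truth assignments, which is exactly the exponential blow-up that makes it the canonical hard problem in NP.

```python
from itertools import product

# Brute-force SAT check. A CNF formula is a list of clauses; each clause
# is a list of literals: positive integer i means variable i, negative
# means its negation.
def sat(clauses, n_vars):
    for assignment in product((False, True), repeat=n_vars):  # 2^n cases
        if all(any(assignment[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            return True
    return False

# (x1 OR ~x2) AND (x2 OR x3) AND (~x1 OR ~x3)
print(sat([[1, -2], [2, 3], [-1, -3]], 3))  # -> True (satisfiable)
```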


Nil Geisweiller

Nov 8, 2018, 9:33:26 AM
to ope...@googlegroups.com
On 11/8/18 11:25 AM, Ivan Vodišek wrote:
> By definition, NP-complete problems are such NP problems to which any
> other NP problem can be reduced in polynomial time. There are proven
> numerous NP-complete problems (Boolean SAT and Traveling salesman being
> some of them), and if we would solve any of them, we would automatically
> cover the whole set of NP.

True.

When I say subclass of inputs, I literally mean inputs. So if, say, f is a function in NP, there is some subclass S of inputs x such that for all x in S, T(f(x)) = O(|x|^n). An obvious subclass would be one where |x| is bounded, but that's of course not interesting.

Or one could consider the complexity over a sequence of inputs, where
only incremental costs are considered.

I guess it means that there are multiple ways to crack P = NP, short of its full generality, that can still be useful.
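A classic example of such an input subclass (a toy illustration, not anything OpenCog-specific): Horn clauses -- CNF clauses with at most one positive literal -- for which satisfiability drops to polynomial time via forward chaining.

```python
def horn_sat(clauses):
    """Horn-SAT via forward chaining. Each clause is a list of literals
    with at most one positive literal; (~a OR ~b OR c) reads a AND b -> c.
    Runs in polynomial time, unlike general SAT."""
    true_vars = set()
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            pos = [l for l in clause if l > 0]
            neg = [-l for l in clause if l < 0]
            if all(v in true_vars for v in neg):   # clause body satisfied
                if not pos:                        # pure-negative clause violated
                    return False
                if pos[0] not in true_vars:
                    true_vars.add(pos[0])          # derive the head
                    changed = True
    return True

# a; a -> b; ~b  is unsatisfiable:
print(horn_sat([[1], [-1, 2], [-2]]))  # -> False
```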

Nil


Ivan Vodišek

Nov 8, 2018, 11:02:19 AM
to ope...@googlegroups.com
I understand. It is like restricting the input to the subset of formulas in disjunctive normal form, for which SAT can be decided in constant or linear time.
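Indeed (a toy sketch for illustration): a DNF formula is satisfiable iff at least one of its terms contains no variable together with its negation, which a single linear scan can check.

```python
def dnf_sat(terms):
    """A DNF formula (OR of AND-terms) is satisfiable iff some term is
    internally consistent, i.e. contains no literal and its negation.
    Each term is a set of literals (positive/negative ints)."""
    return any(not any(-l in term for l in term) for term in terms)

# (x1 AND ~x2) OR (x3 AND ~x3): first term is consistent -> satisfiable
print(dnf_sat([{1, -2}, {3, -3}]))  # -> True
```

(The catch, of course, is that converting a general CNF formula to DNF can blow up exponentially.)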

Once I saw an interesting discussion about a specific brute-force assembler optimizer. The optimizer took an input domain and a short piece of code operating over that domain, pairing it with codomain values (a complete definition of an arbitrary algorithm). It then constructed different combinations of instruction sequences that behave exactly like the starting program but happen to be either smaller or faster than the original code. This brute-force optimizer was a successful experiment -- it produced more optimized (actually the most optimized, because it exhausted all the combinations) versions of code -- but it was explosively slow for programs larger than some 12 or so instructions, run on a regular PC. I wonder what could be done with supercomputers running that experiment?
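A toy sketch of what such a brute-force optimizer does (a made-up three-instruction machine and hypothetical function names, just to show the shape of the search and why it explodes):

```python
from itertools import product

# Toy instruction set acting on a single accumulator.
OPS = {
    "inc": lambda a: a + 1,
    "dbl": lambda a: a * 2,
    "neg": lambda a: -a,
}

def run(program, x):
    for op in program:
        x = OPS[op](x)
    return x

def brute_force_optimize(target, domain, max_len):
    """Return the shortest op sequence agreeing with `target` on every
    input in `domain`. Exhaustive: the search space is len(OPS)^length,
    which is the wall the discussed optimizer hit around a dozen
    instructions."""
    for length in range(max_len + 1):
        for program in product(OPS, repeat=length):
            if all(run(program, x) == target(x) for x in domain):
                return program
    return None

# Find the shortest program computing f(x) = 2x + 2:
print(brute_force_optimize(lambda x: 2 * x + 2, range(-4, 5), 4))
# -> ('inc', 'dbl'), i.e. (x + 1) * 2
```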


Linas Vepstas

Nov 8, 2018, 11:36:05 AM
to opencog
When I say subclass of inputs, I literally mean inputs, so if say f is
a function in NP, there is some subclass of x, S, such that for all x
in S, T(f(x)) = O(|x|^n).

Such subclasses are known and studied. For example, the reason that SAT works so well in real life is that almost all real-life problems fall into the subclass! But this is well known now, and research has moved on to trying to identify whether there are more subclasses which have been missed so far (and/or for which some other algo might be effective).

--linas

Linas Vepstas

Nov 8, 2018, 11:53:52 AM
to opencog
On Thu, Nov 8, 2018 at 10:02 AM Ivan Vodišek <ivan....@gmail.com> wrote:


Once I saw an interesting discussion about a specific brute force assembler optimizer. Optimizer was taking an input  domain and a short piece of code that operates over that domain, pairing it with codomain values (complete definition of an arbitrary algorithm). Then it was constructing a different combinations of instruction sequences that behave exactly as the starting program, but happen to be either smaller, or faster than the original code.

This is called a "supercompiler". The earliest experiments were at IBM in the 1980s; since then, some half-dozen or more PhDs have been written on it. The hard part is determining whether two different instruction sequences produce the same output or not, and for that, it turns out SAT is extremely effective. They don't show up commercially, for many reasons. One is that once you discover pairings, they have to be stuck into some database. A second is that fewer instructions is not the same as lower power (a concern on cell phones). A third is that the alternative may schedule insns to units that are occupied, so, although it's fewer insns, it will take more cycles, waiting for the bubble to drain. This is particularly fun for VLIW machines. Finally, the top 10 or 20 most important aliases are known, and the compiler writers have inserted them by hand into compilers. The remaining aliases seem not to improve things by more than a few percent, so the effort does not seem worth it. How do I know this? I worked on a supercompiler for Hexagon for Qualcomm. The Hexagon was a six-unit/core VLIW.

--linas


Ivan Vodišek

Nov 8, 2018, 12:57:45 PM
to ope...@googlegroups.com
So, theoretically, given unlimited processing power, it should be possible to pass an unoptimized SAT solver to a supercompiler and obtain an optimized version. If this optimized SAT version is polynomial, then P equals NP. In practice, combinatorial explosion makes this scenario highly improbable.

However, besides the brute-force supercompilers, there should exist a set of rules of the form A <-> B, where A and B are code fragments equivalent in their functionality. Using these rules, it should be possible to perform code transformations with a result similar to supercompilers, but in much less time (especially given a genetic or otherwise weighted algorithm). This method would also be prone to combinatorial explosion, but I think to a much lesser degree, and more controllably than supercompilers. Under the assumption that the method is in a computationally acceptable range, the question would be how to automatically construct the equivalence rules. A few of these rules could be axiomatic in nature and might be automatically derived using a supercompiler for a start. Once there is a complete axiomatic set of equivalences, we could dismiss the more time-consuming supercompiler and proceed with the faster transforming-by-rules method. And again, if we could get any NP-complete algorithm optimized down to polynomial time, then P equals NP.
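A toy sketch of such transformation-by-rules (the rule table and matching scheme are made up for illustration; real rewrite engines match pattern variables rather than literal names):

```python
# Each rule maps an expression shape to a cheaper equivalent.
RULES = {
    ("mul", "x", 2): ("shl", "x", 1),  # x * 2  ->  x << 1
    ("add", "x", 0): "x",              # x + 0  ->  x
    ("mul", "x", 1): "x",              # x * 1  ->  x
}

def rewrite(expr):
    """Rewrite subexpressions bottom-up, then apply rules at the root
    until nothing changes (a fixpoint)."""
    if not isinstance(expr, tuple):
        return expr
    expr = tuple(rewrite(e) for e in expr)
    replaced = RULES.get(expr, expr)
    return rewrite(replaced) if replaced != expr else expr

print(rewrite(("mul", ("add", "x", 0), 2)))  # -> ('shl', 'x', 1)
```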



Nil Geisweiller

Nov 9, 2018, 1:17:49 AM
to ope...@googlegroups.com, Ivan Vodišek
On 11/8/18 7:57 PM, Ivan Vodišek wrote:
> However, beside the brute force supercompilers, there should exist a set
> of rules in a form of A <-> B, where A and B are code fragments
> equivalent in their functionality. Using these rules, it should be

Determining code equivalence is undecidable in general (for recursive functions, and even primitive recursive functions, IIRC). In practice, however, one can do a good job in many cases. For instance, MOSES (currently being ported to the AtomSpace) has a reduction engine that takes a program and attempts to normalize it to a canonical form. The primary goal in this case is not to speed up its execution (though it often helps) but rather to avoid re-evaluating semantically equivalent candidates during program evolution, and also (important but subtler) to increase the correlation between syntax and semantics.
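The general idea of such a reduction engine (a toy analogue for illustration, not the actual MOSES code): normalize syntactically different but semantically equivalent candidates to one canonical form, so they compare equal.

```python
def normalize(expr):
    """Canonicalize a boolean expression tree: flatten nested and/or,
    deduplicate operands, and sort them. A toy analogue of a reduct
    engine -- handles only these few normalizations."""
    if not isinstance(expr, tuple):
        return expr
    op, *args = expr
    flat = []
    for a in (normalize(a) for a in args):
        if isinstance(a, tuple) and a[0] == op:
            flat.extend(a[1:])   # and(and(x, y), z) -> and(x, y, z)
        else:
            flat.append(a)
    return (op, *sorted(set(flat), key=repr))

# Two syntactically different but equivalent candidates...
p1 = ("and", "x", ("and", "y", "x"))
p2 = ("and", "y", "x")
assert normalize(p1) == normalize(p2)  # ...reduce to the same form
```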

For more info on the reduct engine (which is part of MOSES), see

https://wiki.opencog.org/w/Meta-Optimizing_Semantic_Evolutionary_Search

Nil

Sarah Connorh

Dec 5, 2018, 10:23:45 AM
to ope...@googlegroups.com
Yeah but if P & P are negative ?


Nil Geisweiller

Dec 5, 2018, 10:50:19 AM
to ope...@googlegroups.com
On 12/5/18 5:23 PM, Sarah Connorh wrote:
> Yeah but if P & P are negative ?

Then I'll be back.

Nil


Ben Goertzel

Dec 5, 2018, 12:40:14 PM
to opencog
Also, P=NP is about worst-case complexity, and what matters for AGI is
mainly average-case complexity calculated relative to the probability
distributions characteristic of the environments and problems the AI
system will actually need to deal with...
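A toy illustration of that worst-case/average-case gap (naive quicksort with a first-element pivot; the helper name is made up for this sketch):

```python
import random

def comparisons(xs):
    """Count comparisons made by naive quicksort (first element as
    pivot): O(n^2) on already-sorted input, O(n log n) on average
    over random inputs."""
    if len(xs) <= 1:
        return 0
    pivot, rest = xs[0], xs[1:]
    lo = [x for x in rest if x < pivot]
    hi = [x for x in rest if x >= pivot]
    return len(rest) + comparisons(lo) + comparisons(hi)

n = 200
worst = comparisons(list(range(n)))                  # sorted input: n(n-1)/2
avg = sum(comparisons(random.sample(range(n), n))
          for _ in range(20)) / 20                   # random permutations
print(worst, round(avg))  # worst is 19900; average is far smaller
```

The same algorithm can be hopeless in the worst case yet fine on the input distribution it actually sees, which is the point about AGI-relevant complexity.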



--
Ben Goertzel, PhD
http://goertzel.org

"The dewdrop world / Is the dewdrop world / And yet, and yet …" --
Kobayashi Issa

Ivan Vodišek

Dec 9, 2018, 11:37:11 AM
to ope...@googlegroups.com
Yes, probability complicates stuff a bit...

I don't know; I never understood why PLN is defined at a low level in OpenCog. Sure, it's a cool asset to know the probability and confidence of any formula (not to degrade it -- PLN surely has a great deal of scientific value), but why low level, if the AtomSpace is Turing complete? Observing natural general intelligence in humans, we don't have that specific probability feedback when we are thinking about something. But that doesn't mean we can't calculate it on our own, once we receive a stimulus of interest about probability.

For example, for natural language and resolving ambiguities, it could still be done by keeping a sum type at the relevant position inside the syntax tree, to be analyzed and decided afterwards, again using some non-low-level AtomSpace constructs, possibly even a PLN itself defined in terms of the AtomSpace.

But maybe it is my occupational bias to keep things that simple.

- Ivan V. -

Ben Goertzel

Dec 9, 2018, 12:00:33 PM
to opencog
On Sun, Dec 9, 2018 at 11:37 AM Ivan Vodišek <ivan....@gmail.com> wrote:
...
> I don't know, I never understood why is PLN defined at low level in OpenCog. Sure it's a cool asset, to know probability and confidence of any formula (not to degrade it, PLN surely does have a deal of scientific value), but why low level if AtomSpace is Turing complete. Starting from observation of natural general intelligence in humans, we don't have that specific probability feedback when we are thinking about something. But that doesn't mean we can't calculate it on our own, once we receive an interest stimulus about probability.
>

I'm not sure in what sense PLN is "low level" in the software sense. Software-wise, PLN is one among many rule systems definable using the Atomspace and executable using the URE rule engine; it doesn't have a distinguished position. The special truth values used by PLN are a special case of the general Values associated with Atoms, etc.

Conceptually, PLN was part of the cognitive architecture design that led to the creation of the OpenCog framework in the first place. But in terms of the actual software architecture of the framework, PLN doesn't have any distinguished role...


> For example, if it is about natural language and resolving ambiguities, it could still be done by keeping a sum type in relevant position inside syntax tree that can be analyzed and decided afterwards, again using some non-low-level AtomSpace constructs, even possibly a PLN itself defined in a terms of AtomSpace.

PLN rules are defined in terms of the Atomspace.

PLN truth value formulas are defined as external functions wrapped in GroundedSchemaNodes, but that's a temporary efficiency optimization... which could also be done for any other rule set, btw...

ben

Ivan Vodišek

Dec 9, 2018, 12:55:19 PM
to ope...@googlegroups.com
Oh, I see, I must have been talking about the URE then. All cool; then it seems reasonable to me (a one-ring-to-rule-them-all policy).

I keep persuading myself that a perfect single declarative language -- logic + lambda calculus + type theory -- exists, or could be invented, and that it should consist only of a bare Turing-complete declarative minimum, such that everything else could be programmed within it. I don't mean to criticize -- you are doing just fine with OpenCog; there are tangible results -- but as screwed up as I am, I always tend toward perfection, and I sometimes don't realize that inventing should stop at some point, or the product would never see the light of day. What I occasionally see in OpenCog is that the AtomSpace specifically is a place for improvement, but what I don't realize is that often what I have in mind is not part of already accessible technologies, and it has to be explained, programmed, and proven effective to actually make a point in advancing OpenCog.

The thing is that our work overlaps largely in the area covered by the AtomSpace. The difference is that in my development, I strive for a knowledge-base language meant to unite all scientific knowledge under one umbrella, possibly distributed over the web, but aware of member theories' web displacements. Because of this overlap, I occasionally ask questions here in the hope of a piece of mind that is hard to get anywhere else. I imagined an AtomSpace equivalent that is equally suitable for machine and human use, and I have been running after it big time for a long time now. So I kindly ask you to excuse my sometimes silly remarks, as I hope we can contribute to each other's work.

Sincerely,
- Ivan V. -

--
You received this message because you are subscribed to the Google Groups "opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email to opencog+u...@googlegroups.com.
To post to this group, send email to ope...@googlegroups.com.
Visit this group at https://groups.google.com/group/opencog.

Nil Geisweiller

unread,
Dec 10, 2018, 2:30:45 AM12/10/18
to ope...@googlegroups.com
Ivan,

not sure what you meant by low level either. The way I first understood
it is whether it should be built-in or emergent. I suppose that in
principle it could be emergent, but in practice that would dramatically
postpone the moment some general form of reasoning can take place, so
it is handy to have it built-in. And as Ben said, it is only hardwired
in some places, for efficiency reasons. PLN rules being atoms makes
them ultimately amenable to self-improvement, and able to support other
logics, which is already the case, albeit somewhat brittle.

On 12/9/18 7:55 PM, Ivan Vodišek wrote:
> for a long time now. So I kindly ask to excuse my, sometimes silly
> remarks, as I hope we could contribute each other's work.

Please, your feedback and criticisms are more than welcome. I like to
believe we're free-speech absolutists here and not easily offended
(oops, I hope that's not gonna trigger a torrent of cursing). ;-)

Nil

>
> Sincerely,
> - Ivan V. -
>

Ivan Vodišek

unread,
Dec 10, 2018, 5:32:29 AM12/10/18
to ope...@googlegroups.com
Nil,

not sure what you meant by low level either. The way I first understood
it is whether it should be built-in or emergent. I suppose that in
principle it could be emergent, but in practice it would dramatically
postpone the moment some general form of reasoning can take place, thus
it is handy to have it built-in. And as Ben said, it is only hardwired
at some places for efficiency reason. PLN rules being atoms it makes it
ultimately amenable to self-improvements, or to support other logics,
which is already the case, albeit somewhat brittly.

I think you understood correctly. I thought PLN was entirely programmed in the C++ layer, not in the URE layer, so I got a bit confused. In my opinion, it is a good approach to have a general meta rule engine (the URE) at the bottom and to hold all the knowledge in it, including PLN. And if something deserves to be faster, it is reasonable to make a faster copy in C++ or a similar language. However, keeping a URE copy of the accelerated knowledge gives us a meta dimension in which to mechanically reason about it.

Please, your feedback and criticisms are more than welcome. I like to
believe we're free-speech absolutists here and not easily offendable
(oops, I hope it's not gonna trigger a torrent of cursing). ;-)

Thank you very much, I like this community. And I don't believe in cursing; cursing pollutes the thinking process and forms intellectual extremes we should avoid. All knowledge should be tagged for possible improvement, and cursing makes improvement impossible. Like in that popular saying: the more we know, the less certain we should be about the correctness of our knowledge :-)

- Ivan V. -


Linas Vepstas

unread,
Dec 10, 2018, 10:07:33 PM12/10/18
to opencog
On Sun, Dec 9, 2018 at 11:55 AM Ivan Vodišek <ivan....@gmail.com> wrote:
>
> Oh, I see, I must be talking about URE then. All cool, then it seems reasonable to me (one ring to rule them all - policy).
>
> I keep persuading myself that a perfect single declarative - logic + lambda calculus + type theory exists, or could be invented,

So, OK, the atomspace is trying to be:

+ declarative: so kind-of-like datalog or kind-of like SQL or noSQL,
or some quasi-generic (graph) data store. But you already know this.

- it does NOT have "logic" in it, in any traditional sense of the word
"logic". It does have the ability to perform term-rewriting (pattern
re-writing, graph re-writing).

- lambda calculus is a form of "string rewriting". Note that string
rewriting is closely related to term rewriting (but not quite the same
thing) and that term rewriting is closely related to graph rewriting
(but not quite the same thing).
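The string/term distinction is easy to demonstrate concretely. Below is a toy sketch (plain Python, not OpenCog code; all names are invented for illustration): a substring rule can match across structural boundaries, while a term rule only ever replaces a complete subterm.

```python
# Illustrative sketch: string rewriting vs. term rewriting.
# All names are made up for this example; this is not OpenCog/Atomese code.

def string_rewrite(s, lhs, rhs):
    """Rewrite by substring replacement -- position-blind."""
    return s.replace(lhs, rhs)

def term_rewrite(term, rule):
    """Rewrite a nested tuple bottom-up; matches whole subterms only."""
    lhs, rhs = rule
    if isinstance(term, tuple):
        term = tuple(term_rewrite(t, rule) for t in term)
    return rhs if term == lhs else term

# String rewriting happily matches inside any context:
print(string_rewrite("abba", "b", "c"))           # -> "acca"

# Term rewriting only replaces complete subterms:
expr = ("add", ("mul", "x", "1"), "y")
rule = (("mul", "x", "1"), "x")                   # rewrite x*1 -> x
print(term_rewrite(expr, rule))                   # -> ("add", "x", "y")
```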

When lambda calculus was invented, any distinction between strings,
terms, and graphs was unknown and unknowable, until the basic concepts
got worked out. So, due to "historical accident", generic lambda
calculus remains a string rewriting system. As stuff got worked out
over the 20th century, the concept of "term rewriting" gelled as a
distinct concept. (And other mind-bendingly
similar-but-slightly-different ideas, like universal algebra, model
theory ... and bite my tongue. There's more that's "almost the same as
lambda calc. but not quite". A veritable ocean of closely related
ideas.)

From practical experience with the atomspace, it turns out that
pretending that all three rewriting styles (string, term, graph) are
the same thing "mostly works", but causes all kinds of friction,
confusion, and conundrums in detailed little corners. So, for example,
BindLink was an early attempt to define a Lambda for declarative
graphs. In many/most ways, it really is "just plain old lambda". It
works, and works great to this day, but, nevertheless, we also
created more stuff that is "just like lambda, but different",
including LambdaLink, etc.

In many ways, it's an ongoing experiment. The search for "perfect" has
more recently led to "values", which are a lot like "valuations" in
model theory. (And again, recall that model theory is kind-of-ish like
lambda calculus, but typed.)

There's no "logic" in the atomspace, but you could add logic by using
the URE, and/or in several other ways, including parsing, sheaves, and
openpsi. In short, there are lots of different kinds of logic, and lots
of different ways of implementing them, and we are weakly fiddling with
several different approaches. I have more in mind, but I'm lacking in
time.

-- Linas

Ivan Vodišek

unread,
Dec 11, 2018, 3:24:12 AM12/11/18
to ope...@googlegroups.com
Linas,

Thank you for taking the time to respond; I'll try to keep this short. I might be wrong, but the Curry-Howard-Lambek isomorphism inspires me greatly in my pursuit of a one-declarative-language-like-URE-to-rule-them-all.

I see term rewriting simply as basic implication over input and output terms. A number of term-rewriting rules may be bundled together in a conjunction. Alternate pattern-matching options may form a disjunction. And pattern exclusion may be expressed as a negation. These are all common logical operators, here in the role of defining a term-rewriting system. And since this kind of term rewriting is basically a logic, it can generally be tested for contradictions, or used for deriving relative indirect rules, if we want it so.
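Ivan's reading of rules-as-logic can be sketched in a few lines. This is a hypothetical illustration (all names invented; it belongs to none of the systems discussed): patterns become boolean predicates over terms, and conjunction, disjunction, and negation combine them directly.

```python
# Toy sketch: patterns as boolean predicates over terms, combined with
# ordinary logical connectives. Names are invented for illustration.

def lit(x):        return lambda t: t == x
def conj(*ps):     return lambda t: all(p(t) for p in ps)
def disj(*ps):     return lambda t: any(p(t) for p in ps)
def neg(p):        return lambda t: not p(t)
def has_head(h):   return lambda t: isinstance(t, tuple) and bool(t) and t[0] == h

# "a sum or a product, but not the literal zero-sum" -- disjunction of
# alternatives, pattern exclusion via negation, bundled by conjunction:
pattern = conj(disj(has_head("add"), has_head("mul")),
               neg(lit(("add", 0, 0))))

print(pattern(("add", 1, 2)))   # True
print(pattern(("add", 0, 0)))   # False (excluded by the negation)
print(pattern("x"))             # False (neither a sum nor a product)
```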

That's a short version of what I currently work on: a term-rewriting system expressed as a crisp logic - just a few basic logic operators with no hard-wired constants other than true and false - all wrapped up in a human-friendly code processor.

- Ivan V. -


Linas Vepstas

unread,
Dec 11, 2018, 7:07:16 PM12/11/18
to opencog, Zarathustra Goertzel
On Tue, Dec 11, 2018 at 2:24 AM Ivan Vodišek <ivan....@gmail.com> wrote:

> Thank you for taking a time to response, I'll try to keep this short. I might be wrong, but Curry-Howard-Lambek isomorphism inspires me greatly in a pursuit for one-declarative-language-like-URE-to-rule-them-all.

One silly remark and one serious remark; the silly one first: the
Curry-Howard correspondence extends all the way to tensor algebras,
quantum mechanics, string theory, and cobordisms. If you have the time
and the willpower, John Baez's "Rosetta Stone" is a simple,
straightforward introduction.

Curry-Howard also plays a big/central role in type theory; see "the
HoTT book". The first few chapters of that provide a nice, readable
introduction to type theory (and require much less background than
Baez).

Re: URE: so Curry-Howard says "proofs are programs", and it turns out
that theorem proving is a lot like parsing (it's identical to
parsing???). I DO NOT know of any simple write-up of this topic; I can
only wave my hands around. When I mentioned this to Ben's son Zar, he
kind-of responded and said "duhh, it's obvious, everyone knows this."

Zar, do you know of any nice readable references that explain how
theorem-proving and parsing are "the same thing"?

Anyway, there is more than one way of combining rules to obtain
proofs. The URE uses forward/backward chaining. It is "well known" that
SAT-solving is orders of magnitude faster than forward-backward
chaining. As to parsing, I've been hacking on link-grammar. Amir now
has it so that it's as fast as or faster than the SAT solver, more or
less. So the URE is not the only way, it is "a way". And also...
openpsi is yet another alternative, but I'm not prepared to talk about
that.
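For readers unfamiliar with the contrast drawn here: the heart of most SAT solvers is the DPLL procedure, i.e. unit propagation plus case splitting. Below is a minimal, unoptimized sketch of that idea (real solvers add watched literals, clause learning, and branching heuristics); it is illustrative only, not any of the solvers mentioned.

```python
# Minimal DPLL-style SAT solver sketch. Literals are nonzero ints;
# a negative literal is a negated variable. A formula is a list of
# sets of literals (clauses, conjoined).

def dpll(clauses, assignment=()):
    # Simplify: drop satisfied clauses, prune literals made false.
    simplified = []
    for c in clauses:
        if any(lit in assignment for lit in c):
            continue                          # clause already satisfied
        c = {lit for lit in c if -lit not in assignment}
        if not c:
            return None                       # empty clause: conflict
        simplified.append(c)
    if not simplified:
        return set(assignment)                # every clause satisfied
    # Unit propagation: a one-literal clause forces its literal.
    for c in simplified:
        if len(c) == 1:
            return dpll(simplified, assignment + (next(iter(c)),))
    # Case split on some remaining literal, trying both polarities.
    v = next(iter(simplified[0]))
    return (dpll(simplified, assignment + (v,))
            or dpll(simplified, assignment + (-v,)))

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
model = dpll([{1, 2}, {-1, 3}, {-2, -3}])
print(model is not None)    # True: satisfiable
print(dpll([{1}, {-1}]))    # None: contradictory
```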


> I see term rewriting simply as basic implication over input and output terms. A number of term rewriting rules may be bundled together in a conjunction. Alternate pattern-matching options may form a disjunction. And pattern exclusion may be expressed as a negation. These are all common logical operators in a role of defining a term rewriting system. And since this kind of term rewriting is basically a logic, it can generally be tested for contradiction, or it can be used for deriving relative indirect rules - if we want it so.

Sure. This is what the atomspace pattern matcher does. It's
effectively crisp logic. It's also got a rich selection of bells and
whistles; for example, user-defined callbacks in various places to
alter what it does: reorder, rank, select-top-N, search-this-first.
Support for virtual terms: `if x<y`. Support for term absence: `if (a
found but b absent)`, etc. It's gotten big, and hard to understand as
a result. A bit crufty in places, but we've kept it mostly clean and
pretty fast.

But again: when you say "Alternate pattern-matching options may form a
disjunction." etc. well -- hey, this is exactly what link-grammar
does. You can read the link-grammar dicts, and explicitly see
conjunctions and disjunctions.

<noun-main-x>:
(S+ & <CLAUSE>) or SI- or J- or O-
or <post-nominal-x>
or <costly-null>;

This is very similar to what the opencog pattern matcher works with,
but uses a very different notation, and a very different "shallow
theory". But in a deep sense, the two are the same, and I'm trying to
work my way toward having the code, the APIs, and the implementation
provide a more unified, more cohesive view of both, so that the URE
can eventually become a parser, instead of a chainer (or offer a
parser component, in addition to a chainer component).

So what you are doing is all good. With these emails, I'm trying to
point out "what we've learned so far".

> That's a short version of what I currently work on - a term rewriting system expressed as a crisp logic - just a few basic logic operators with no hard-wired constants other than true and false - all wrapped up in a human friendly code code processor.

Yes, this seems reasonable. I forget, what language are you working
in? Is this in git somewhere?

-- Linas



Linas Vepstas

unread,
Dec 11, 2018, 7:17:07 PM12/11/18
to opencog, Zarathustra Goertzel
Oh, and one more (minor?) remark: the intermediate states that get
explored during pattern matching are called "Kripke frames", and the
"crisp logic of term re-writing" is one of the modal logics. I know
this to be true in a hand-waving fashion; I have searched long and
hard for a paper or a book that would articulate this in some direct,
detailed fashion. I have not yet found one.

Zar, so second question, any chance at all you might be aware of
references for this?

--linas



Ivan Vodišek

unread,
Dec 11, 2018, 8:31:38 PM12/11/18
to ope...@googlegroups.com
Hi Linas,

I started implementing the language I'm talking about in Javascript about a year ago. I finished a parser and put it on GitHub (https://github.com/ivanvodisek/V-Parse), but in the meanwhile got new ideas for the language, including inspiration for a linear-time SAT solver, which put me back into theoretical investigation. I really like what it looks like by now, both in visual appearance and in theoretical performance.

The parser I implemented is actually based on a novel chart CFG algorithm that handles parse trees in a certain way, which enables the parser to exhibit linear parsing time [yes, you read that right, below O(n^3)], regardless of the amount of ambiguity in the resulting AST. In short, it is about organizing the AST as a chart sequence of parsed atoms (possibly branched on an ambiguous match). The consequence of this organization is that at each offset we have at most |Grammar-complexity| many possible distinct atoms, so parsing takes at most |Grammar-complexity| time (a constant) at each offset - hence linear time overall. This was a hard nut, but I think I cracked it. Not to stay just on words: you can test it yourself at the above link, and my experimental readings align with the theoretical linearity of the algorithm. The parser is not patented, nor will it ever be (I don't believe in intellectual patents), and anyone is free to use it in whichever way he/she finds appropriate (I'd actually be honored if someone used it for a good cause). If someone needs help porting it to C++ or Java, feel free to contact me; I'm pretty sure I can help. It probably needs certain tweaks for actual use, but it should work, if I'm not mistaken.

I'm afraid I can't offer more than this concrete result at present, but I'm working on it. My next move is probably to try to implement the linear-time SAT solver, also in Javascript, which will likewise be openly licensed if it turns out it actually works (just theoretical analysis for now, hoping for experimental results soon). And the implementation of the language I'm dreaming about is pending after that. Wish me luck :-)

- Ivan V. -

"Dream big. The bigger the dream, the more beautiful a place the world becomes." - me


Linas Vepstas

unread,
Dec 11, 2018, 9:39:30 PM12/11/18
to opencog
I'm not really clear on what it is that you are describing. Is it
possible (even in principle) to take some typical atomese pattern and
translate it into your notation? Can you represent something like
"Ivan wrote some code. Linas took a shower. Ivan wrote $X" and run
your system, and find that $X is grounded by "some code" ? What
notation would you use to express this? Or are you describing
something else?

-- Linas

Ivan Vodišek

unread,
Dec 12, 2018, 3:28:37 AM12/12/18
to ope...@googlegroups.com
I'm sorry, maybe I wasn't specific enough: what I just published is not a complete metalanguage (I still have to do some real work for that). What it in fact represents is merely a CFG-complete parsing algorithm, similar to GLR, CYK, Earley, LG... but this one is linear, as described. So why is this linearity important? You see, there exists a Curry-Howard correspondence for a variety of systems, but I believe it extends even to parsers. To be specific, a parser could be represented by propositional calculus with an extension for sequencing (something like predicates, but containing propositions instead of variables and functions).

The following would be an example of a grammar defined in propositional logic with sequencing:

(
    Sum -> (
        Sum "+" Mul \/
        Fact
    )
) /\ (
    Fact -> (
        Fact "*" Primary \/
        Primary
    )
) /\ (
    Primary -> /[0-9+]/
) /\ (
    TopSymbol -> Sum
)
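As a sanity check, the grammar above can be rendered as an ordinary recognizer (reading the undefined "Mul" as "Fact", which looks like a typo, and treating Primary as a single digit, per the /[0-9+]/ pattern as written). The left-recursive rules are rewritten as iteration here, which accepts the same language; this sketch illustrates the grammar itself, not Ivan's linear-time algorithm.

```python
# Recognizer for: Sum -> Sum "+" Fact | Fact; Fact -> Fact "*" Primary
# | Primary; Primary -> a digit. Left recursion is rendered as a loop.

def recognize(s):
    pos = 0

    def primary():
        nonlocal pos
        if pos < len(s) and s[pos].isdigit():
            pos += 1
            return True
        return False

    def fact():                       # Primary ("*" Primary)*
        nonlocal pos
        if not primary():
            return False
        while pos < len(s) and s[pos] == "*":
            pos += 1
            if not primary():
                return False
        return True

    def summ():                       # Fact ("+" Fact)*
        nonlocal pos
        if not fact():
            return False
        while pos < len(s) and s[pos] == "+":
            pos += 1
            if not fact():
                return False
        return True

    return summ() and pos == len(s)

print(recognize("1+2*3"))   # True
print(recognize("1+"))      # False: dangling operator
print(recognize("12"))      # False: Primary is a single digit here
```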

A contradictory grammar would be a grammar that has no valid parse, i.e. always terminates with an error. Deriving a contradiction from such a grammar would involve parsing each rule against the negation of every other rule; if such a negation parse succeeds, the current rule in focus is replaced by False. If any such false parse appears directly below the root, the grammar is contradictory. If we then have a linear- or even polynomial-time parsing algorithm, P equals NP, under the assumption that we can replace parsing rules by any kind of logic propositions, while rule referencing is determined by the common resolution rule. Does this make sense?

I had other ideas for efficiently deriving contradiction too, but this approach seems relevant for this moment.

Nevertheless, this is not enough for grounding symbols, in the same way that propositional logic is not expressive enough to describe some more complicated expressions. To make grounding possible, it is necessary to introduce some form of quantification over variables, or in my case even over whole logic expressions. I still have to finish this part, but it's on the schedule.

- Ivan V. -


Linas Vepstas

unread,
Dec 13, 2018, 1:59:00 AM12/13/18
to opencog
On Wed, Dec 12, 2018 at 2:28 AM Ivan Vodišek <ivan....@gmail.com> wrote:
>
> I'm sorry, maybe I wasn't specific enough, what I just published is not a complete metalanguage (still have to do some real work for that). What in fact it represents is merely a CFG complete parsing algorithm, similar to GLR, CYK, Earley, LG...

If by LG you mean "link-grammar", then thinking about it as if it were
some 1960's context-free grammar is asking for headaches. Yes, there
might exist some algorithm to convert it into one. I've read (and
lost the reference to) a paper that made a hand-waving argument that
algorithms always exist to convert dependency grammars into
phrase-structure grammars. But it was hand-waving, not a formal
proof, and it gave no hint of the required complexity.

In general, don't assume that it's easy, or even desirable, to convert
other formats into 1960's-style CFGs.

> But this one is linear, as described.

I'm pretty sure there are proofs that CFG's cannot, in general, be
parsed in linear time. There's a stack, after all, and you can always
make the stack grow arbitrarily deep. I don't think that even regular
languages can be parsed in linear time.

> So, why is this linearity important? You see, there exists Curry-Howard correspondence for a variety of systems. But I believe it extends even to parsers. To be specific, a parser could be represented by propositional calculus with extension of sequencing (something like predicates, but containing propositions instead of variables and functions).

Don't know what you mean by "extended with sequencing".

>
> The following would be an example of a grammar defined in propositional logic with sequencing:
>
> (
> Sum -> (
> Sum "+" Mul \/
> Fact
> )
> ) /\ (
> Fact -> (
> Fact "*" Primary \/
> Primary
> )
> ) /\ (
> Primary -> /[0-9+]/
> ) /\ (
> TopSymbol -> Sum
> )

That's hard to read. It looks like some production rules for
generating sums and products of integers. Wouldn't BNF notation be
easier? By not using BNF, are you trying to show something?

Anyway, production rules are certainly not propositional calculus:
TopSymbol cannot be reduced to true or false. TopSymbol could be
reduced to a single integer. But production rules don't do that ..
they only specify a language ... So I don't understand what you are
trying to do here.

>
> Some contradictory grammar would represent a grammar that has no valid parsing example, i.e. always terminates with an error. Deriving a contradiction from such a grammar would include parsing each rule against a negation of every other rule, and if such a negation parsing succeeds, the current rule of focus is replaced by False. If any of such false parses apear directly below the root, the grammar is contradictory. If we now have a linear or even polynomial time complexity time parser algorithm, then P equals NP, under assumption that we replace parsing rules by any kind of logic propositions, while rule referencing is determined by the common resolution rule. Does this make sense?

I don't understand any of this. What's a "contradictory grammar"?

--linas

Ivan Vodišek

unread,
Dec 13, 2018, 6:55:46 AM12/13/18
to ope...@googlegroups.com
Things got over-complicated in this thread. It feels like trying to explain some advanced area of math to someone through a short back-and-forth dialog. I think this matter deserves a decent monologue, in the form of a "scientific", or at least a "hand-waving", paper. There are particular areas that should be read slowly and carefully, with understanding. With dialog, it is too easy to skim fast and say "impossible", "no", or "that's crazy" when something is not understood. I believe I'd have a better chance with a proper monologue, with possible question-answering after that. The final goal would be a specific proposal for improving OpenCog, instead of endless chitchat.

- Ivan V. -


Ivan Vodišek

unread,
Dec 13, 2018, 9:15:11 AM12/13/18
to ope...@googlegroups.com
All OpenCog mailing list members,

I'm sorry I lost my temper and abruptly ended the conversation. It just happened that the conversation began to include more things than I was able to properly present, so things got a bit fuzzy for me.

You don't owe me anything, especially not the effort of thoroughly studying any nonsense I write as a guy watching from the sidelines.

So please accept my apology for being rude, especially to Linas, who contributes a great deal more to this community than I do.

I'm sorry I abused your open-minded attitude.

- Ivan V. -

Nil Geisweiller

unread,
Dec 14, 2018, 1:29:18 AM12/14/18
to ope...@googlegroups.com
Don't worry Ivan, your email was fine. Technically minded communication
is always appreciated. The nature of this medium (email, etc.) makes it
difficult to assess emotions on the other end. I think it taps into our
tendency to over-estimate emotions, while in reality people don't care
that much.

That loosely reminds me of a study I came across once (unfortunately I
can't find it) describing the following experiment: two groups of
students were given problems to solve. The first group was asked to put
politeness and compromise first; the second group was asked to put
ideas first and debate passionately, without worrying about politeness.
The result: the second group was more effective and creative than the
first.

Also reminds me: the Linux kernel maintainers have decided to replace
all "fucks" with "hugs" in the code comments. I don't know if it is
ironic, genuine, or both, but it's funny in a sweet way nonetheless.

So... let's just be ourselves ;-)

Nil

On 12/13/18 4:14 PM, Ivan Vodišek wrote:
> All OpenCog mailng list members,
>
> I'm sorry I lost the temper, and roughly ended the conversation. It just
> happened that the conversation began to include more things than I was
> able to properly present, so things went a bit fuzzy for me.
>
> You don't owe me anything, especially not to thoroughly study any
> nonsense I write as a guy watching from aside.
>
> So please, take my apology for being rude, especially to Linas, who is
> contributing a way better deal to this community than me.
>
> I'm sorry I abused your open minded attitude.
>
> - Ivan V. -
>

Zar Goertzel

unread,
Dec 14, 2018, 6:27:31 AM12/14/18
to Linas Vepstas, opencog
Howdy Linas,

Unfortunately, I'm drastically ignorant compared to my senior colleagues and you. 

Re: URE: So Curry-Howard says "proofs are programs" and it turns out
that theorem proving is a lot like parsing (it's identical to
parsing???)  I DO NOT know of any simple write-up of this topic; I can
only wave my hands around.  When I mentioned this to Ben's son Zar, he
kind-of responded and said "duhh its obvious everyone knows this."

Zar, do you know of any nice readable references that explain how
theorem-proving and parsing are "the same thing"?

We happened to be talking about it at lunch when you brought that up. It seems likely they're in a similar state to you: it seems obvious and they can wave their hands around, but they haven't bothered to formally write it up.

Would it be hard to write up formally?

On Wed, Dec 12, 2018 at 1:17 AM Linas Vepstas <linasv...@gmail.com> wrote:
Oh, and one more (minor?) remark: the intermediate states that get
explored during pattern matching are called "Kripke frames", and the
"crisp logic of term re-writing" is one of the modal logics. I know
this to be true in a hand-waving fashion; I have searched long and
hard for a paper or a book that would articulate this in some direct,
detailed fashion.  I have not yet found one.

Zar, so second question, any chance at all you might be aware of
references for this?

The logic of term rewriting is paramodulation? That's not a modal logic though . . .

Ben Goertzel

unread,
Dec 14, 2018, 6:56:38 AM12/14/18
to opencog, Linas Vepstas
I think this was ye olde original paper on parsing as deduction,

http://aclweb.org/anthology/P89-1033

There's a whole literature on theorem-proving for categorial grammars of course

https://arxiv.org/abs/cmp-lg/9508009

So there's a lot of ways to show that parsing is a kind of deductive inference

The other way around, that "deductive inference is a kind of parsing",
seems to require a bit of a stretch of the concept of "parsing" ...

Linas Vepstas

unread,
Dec 15, 2018, 11:24:01 PM12/15/18
to Ben Goertzel, opencog
On Fri, Dec 14, 2018 at 5:56 AM Ben Goertzel <b...@goertzel.org> wrote:
>
> I think this was ye olde original paper on parsing as deduction,
>
> http://aclweb.org/anthology/P89-1033

Huff. So I started reading that, got distracted, and went down a
rabbit hole, here:
http://math.andrej.com/2016/01/04/a-brown-palsberg-self-interpreter-for-godels-system-t/
with background reading provided here:
https://cstheory.stackexchange.com/questions/24986/a-total-language-that-only-a-turing-complete-language-can-interpret
>
> There's a whole literature on theorem-proving for categorial grammars of course
>
> https://arxiv.org/abs/cmp-lg/9508009
>
> So there's a lot of ways to show that parsing is a kind of deductive inference
>
> The other way around, that "deductive inference is a kind of parsing",
> seems to require a bit of a stretch of the concept of "parsing" ...

As you well know, my favorite metaphor is "assembling jigsaw puzzle
pieces", and natural deduction looks like jigsaw-puzzle-piece
assembly to me. Not every problem can be reduced to this, but where,
exactly, the boundary lies is quite unclear to me.

> On Fri, Dec 14, 2018 at 1:48 PM Zar Goertzel <zar...@gmail.com> wrote:
> >
> > We had happened to be talking about it at lunch when you brought that up. It seems likely they're in a similar state to you: it seems obvious and they can wave their hands around, but haven't bothered formally writing it up.
> >
> > Would it be hard to write up formally?

My personality is such that I'd have to do a lot of background reading
first, and that takes time, which seems in short supply. There's a bit
of a balance point: the best time to write (for me; for everyone?) is
just when I first understand something, before it becomes so trivial
that I'm bored listening to my own voice. There's a shelf-life for ideas.

The rabbit hole, above, is very interesting just right now.

> > The logic of term rewriting is paramodulation? That's not a modal logic though . . .

Beats me. I will have to look, now.

--linas

Zar Goertzel

unread,
Dec 17, 2018, 6:32:57 AM12/17/18
to Linas Vepstas, opencog
Briefly asked Chad the logic guru. 

He says it sounds like the proof-logic for parsing would be something in the linear logic direction. He mentions Lambek and https://en.wikipedia.org/wiki/Grammatical_Framework.
But he isn't sure anything as detailed/formal as you'd like has been done yet...

Ivan V.

unread,
Mar 26, 2019, 9:39:11 AM3/26/19
to ope...@googlegroups.com
Zar Goertzel wrote:
Briefly asked Chad the logic guru. 

He says it sounds like the proof-logic for parsing would be something in the linear logic direction. He mentions Lambek and https://en.wikipedia.org/wiki/Grammatical_Framework.
But he isn't sure anything as detailed/formal as you'd like has been done yet...

I'm exploring parsing vs. proof construction. What I've found so far is that a Hilbert system (an analogue to the OpenCog unified rule engine) is equivalent to unrestricted grammars from the Chomsky hierarchy (Type 0). The difference is merely one of notation direction: Hilbert writes rules from left to right, while parsing grammar productions swap the left and right sides.

The connection is the following (using Hilbert's direction):
  • (cause / string-of-terminals-or-non-terminals-to-be-parsed) -> (consequence / match-pattern)
With the help of common rules (again in a Hilbert system):
  • A -> (A \/ B)
  • B -> (A \/ B)

Proving is performed from left to right, while parsing is performed from right to left. Considering the Curry-Howard-Lambek correspondence, I suppose we may add unrestricted grammars to it as well. Unrestricted grammars are in any case Turing-machine-equivalent rewriting mechanisms, so this should be no surprise at all, although I didn't imagine it would be that simple and straightforward.
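As a minimal sketch of this parsing-as-proving reading (a toy grammar of my own, nothing to do with OpenCog's URE), grammar productions can be run in reverse as inference steps: "proving" rewrites left to right, while "parsing" reduces the token string back to the start symbol:

```python
# Toy illustration: grammar productions read as rewrite rules.
# Proving applies them left-to-right (generation); parsing applies them
# right-to-left (reduction).  The grammar is hypothetical, demo-only.

RULES = [
    ("S", ("NP", "VP")),   # S -> NP VP
    ("NP", ("john",)),     # NP -> john
    ("VP", ("runs",)),     # VP -> runs
]

def parse(tokens):
    """Reduce a token sequence back to 'S' by applying rules right-to-left."""
    seq = list(tokens)
    changed = True
    while changed:             # terminates here since no rule is cyclic
        changed = False
        for lhs, rhs in RULES:
            n = len(rhs)
            for i in range(len(seq) - n + 1):
                if tuple(seq[i:i + n]) == rhs:
                    seq[i:i + n] = [lhs]   # one inference step, in reverse
                    changed = True
    return seq == ["S"]

print(parse(["john", "runs"]))   # True: the sentence "proves" S
print(parse(["runs", "john"]))   # False
```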

If anyone is interested in serious parser/proof material written by professionals, there is Stanford's excellent Introduction to Logic. Chapter 9.6 - Pseudo English covers a kind of context-free grammar implemented in logic, and we should be able to extend it to unrestricted grammars as well.

Stay cool,
Ivan V.


Linas Vepstas

unread,
Mar 26, 2019, 5:44:30 PM3/26/19
to opencog, Zarathustra Goertzel

Hi Ivan,

Good to hear from you. Some remarks, from least important to most

1) the Chomsky hierarchy - I always found it useless, in practice.  Don't waste time on it. Know it exists, then ignore it.

2) If reading Wikipedia, look at "natural deduction" - it provides a nice counterpoint to Hilbert systems.

3) A lot of the considerations re: Hilbert systems vs. natural deduction, etc., have to do with algorithms for auto-discovering proofs, and doing so efficiently. And for that, there's just a huge number of different algorithms, a huge number of different ways of representing rules and axioms and knowledge. So 8 years or so ago I became enamored of "answer set programming"; there's some great software for that, and since then many other ways of doing things have come to be, including all sorts of theorem provers, advances in the theory of type systems, and software for Agda and HOL and so on. It's now all foggy to me, but only because there are so many ways, so many possibilities.

4) All of the points in point 3 have to do with "crisp" binary logic. The place to make progress, maybe rapid progress, is to map probabilistic concepts onto the above. There are several ways: first, a naive replacement of true/false by a number between 0 and 1 (e.g. the fuzzy logic of the 1970's). These kinds of approaches do not seem to be very powerful: all probabilities dissolve into grey quickly. Slightly fancier is PLN, but the ongoing difficulty with PLN is that the combinatorial explosion hits a wall (heck, there's a combinatorial explosion even in the crisp theorem provers; it's only made worse in a PLN-style framework). A third approach is neural nets/deep learning, where the use of floating-point numbers is central, but one completely discards all structure and replaces it by vectors.   So, instead of having a Hilbert axiom or rule, or whatever, you have ... a vector. 
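To see the "dissolves into grey" point concretely, here is a minimal sketch; the product t-norm is just one common choice for fuzzy conjunction, not the only one:

```python
# Why naive fuzzy truth values "dissolve into grey": chaining even fairly
# confident steps (0.9 each) under the product t-norm quickly yields a
# value that is neither true-ish nor false-ish.

def fuzzy_and(a, b):
    return a * b          # product t-norm, one common choice

v = 1.0
for step in range(20):    # a 20-step inference chain, each step 90% sure
    v = fuzzy_and(v, 0.9)

print(round(v, 3))        # 0.122 -- the conclusion carries little signal
```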

Vectors provide a really shitty mapping to rules/axioms/logic/rationality. All they've got going for them is that they're fast. 

There's a reason why vectors work: vectors are described by the tensor category. As it happens, some/much of natural language is described by the same category. This is "the secret", the core reason why word2vec-style neural nets/deep-learning work fairly well for natural language.  Note also: small subsets of mathematical logic can also jam into this category. 

This is also why deep-learning is now hitting the wall: vectors are shitty representations of rules/axioms/structure.

My hypothesis/interest is to clarify this further, and then replace the tensor category by a category that is more appropriate to (a) natural language (b) mathematical logic (hilbert system or natural deduction or agda/dependent type or HOL, or whatever).  Then, properly identified, rip the vectors out of deep-learning, and plug in this correct category.  To repeat myself: keep the insight of deep learning,  but throw away the effing vectors, already.

What's the correct category? Rather than naming it formally, let me (again) describe it informally: it's the category of fitting together jigsaw puzzle pieces.  Why? Heck: just look at "judgements" in "natural deduction": they obviously look like jigsaw pieces. Once you are sensitized, they are everywhere.  But there's more: the tab-connectors are tensor-like in their behavior, so the whole thing looks "kind of like tensors" anyway, but just a bit whack/off-kilter (e.g. no dagger, etc.)  So -- replace the vectors of deep learning by jigsaw pieces. Hand-wavey presto-magic ta-dahh.

I've said this all before, to the point of exhaustion. I've made zero progress on this since last summer. Why? I have a long list of excuses. But they are just excuses.   I remain convinced that this is the correct way forward.  I strongly encourage investigation into this direction.

-- Linas





Sergei Kaunov

unread,
Mar 27, 2019, 1:13:58 PM3/27/19
to opencog
Intrigued by your work, Linas, and you describe it very engagingly. Where should we watch for further progress on the topic?

Linas Vepstas

unread,
Mar 27, 2019, 3:04:54 PM3/27/19
to opencog
Hi Sergei,

On Wed, Mar 27, 2019 at 12:14 PM Sergei Kaunov <ska...@gmail.com> wrote:
Intrigued by your work, Linas, and you describe it very engagingly. Where should we watch for further progress on the topic?

Thanks!  Progress is hard, because several things have to happen in parallel. 

-- I (or you, or anyone else) have to think hard across multiple different difficult mathematical abstractions.

-- Once I (or you, or anyone else) have some "great idea", it has to be transmitted to others, either by email or by writing papers.

-- Others can understand those "great ideas" only if they have the appropriate mathematical background. In Ivan's case: Hilbert systems and natural deduction and proof theory and type theory and category theory and neural networks and deep learning and statistical mechanics (for example, the "objective function" that the neural-net guys love to minimize/maximize is exactly the same thing as a Boltzmann distribution from statistical mechanics).   So my personal progress on the topic is blocked by the inability to communicate it to others. If you don't understand what I'm talking about, it's deeply frustrating for me. (And this is symmetric: sometimes I am told about great ideas which I don't understand, because I lack the background knowledge.)
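To make that parenthetical concrete, here is a small sketch of the softmax/Boltzmann identity; the only assumption is the usual correspondence logit = negative energy, with T the temperature:

```python
# The softmax used in neural-net objectives is exactly a Boltzmann
# distribution: p_i = exp(-E_i / T) / Z, where Z is the partition
# function and the logits play the role of negative energies.

import math

def boltzmann(energies, T=1.0):
    weights = [math.exp(-e / T) for e in energies]
    Z = sum(weights)                   # the partition function
    return [w / Z for w in weights]

def softmax(logits):
    return boltzmann([-x for x in logits], T=1.0)  # logit = -energy

p = softmax([2.0, 1.0, 0.1])
print(p)   # probabilities sum to 1; the largest logit gets the most mass
```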

-- A "great idea" is great only if it *actually works*. And, here, that means (a) writing software (b) running experiments (c) analyzing data (d) describing experimental results to others.  Not only are steps a, b, c extremely time-consuming, but step (d) is often misunderstood/neglected, because the intended audience didn't understand why the experiment is important.

-- After you've conquered steps a,b,c,d then and only then can you do step e) build an insanely great demo that will wow everyone who sees it, even if they are a complete moron. For example, "deep fakes". You don't need math to know that something unusual is happening there.

The pressure I'm under, that I feel, is that I've got a collection of "great ideas", I'm trying to articulate them, having trouble finding an audience, struggling with steps a,b,c,d and meanwhile everyone is shouting out loud "you guys are a bunch of losers because you don't have step e) you suck!" and dealing with the psychological and financial fallout from that.

I'm not unique here -- most researchers/scientists deal with these same issues. The commonly accepted solution is to create collaborations and teams -- "division of labor" -- and there are chicken-and-egg problems in solving that, also.

--linas
 


Ivan V.

unread,
Mar 27, 2019, 5:40:54 PM3/27/19
to ope...@googlegroups.com
Linas, you lost me at category theory... Nevertheless, I also find the idea of integrating a symbolic system into neural networks intriguing. What I really have is a recent speculation from a paper I'm writing from time to time, just to gather my thoughts (maybe I got this from you at some point, I really don't remember now):

Nevertheless, a symbolic approach may supply structured forms on top of which artificial neural networks could operate, thus forming a synergy between the two seemingly opposite philosophies of designing AI.

But then, I'm developing things in the symbolic direction because, with this, I'm currently interested only in improving the OpenCog URE engine, as I plan to offer a contribution around here if the language I'm building passes stages a, b, c, d, and also e, just in case.

Realizing ideas takes time, and life is too short to do it all, while I'm not a fan of bossing people around... I also like to see creativity in other people.

Nil Geisweiller

unread,
Mar 28, 2019, 1:56:42 AM3/28/19
to ope...@googlegroups.com
Hi,

On 3/26/19 3:38 PM, Ivan V. wrote:
> now is that Hilbert system
> <https://en.wikipedia.org/wiki/Hilbert_system> (an analogue to OpenCog
> unified rule engine)

Actually, I think the Unified Rule Engine (URE) is closer to an
unrestricted grammar than to a Hilbert system.

> The difference is merely in notation direction where Hilbert writes
> rules from left to right, while parsing grammar productions reverse left
> and right sides.

The URE does both :-)

L->R == Forward Chainer
R->L == Backward Chainer

Nil


Linas Vepstas

unread,
Apr 2, 2019, 2:05:59 PM4/2/19
to opencog, link-grammar
Hi Ivan:

Google "category theory for computer scientists" I get this:

> [PDF]Category Theory for Computing Science - Mathematics and Statistics
> by M Barr - Cited by 1642 - Related articles
> Aug 4, 2012 - This book is a textbook in basic category theory, written specifically .... been a major source of interest to computer scientists because they are.

I have not read it. I think that "cited by 1642" means it's a good, high-quality book. There are several of these kinds of books -- one that I did skim, maybe a decade ago, gave all of its examples and homework problems in Caml -- it was an older book, predating Haskell. 

See also this: 

Notice that the answer with the largest upvotes says "category theory is type theory".  For any complete newbies reading this: "int", "float", "char*", "class FooBar" and "int FooBar::myMethod(int x)" are all (C/C++/Java) examples of types. 

I'm making three additional claims, in addition to this highly-upvoted answer:

1) Link-grammar connectors/link-types are types, in the sense of type theory.  (This is not really a new claim; the original link-grammar authors made more-or-less this same claim, in 1993, in one of their original papers)

2) Deep-learning neural-nets perform classification by classifying into types. (type-theoretical types) (this claim is kind-of shallow/stupid/"obvious", and needs to be articulated to become interesting and non-trivial.)

3) There is an almost-direct, one-to-one correspondence of deep-learning neural-net types to link-grammar connectors/link-types, if you know where to look for them.  This is the controversial claim that everyone rejects. And perhaps I am hallucinating and completely making this up. Like 2+2=3.  I'm struggling with this myself, as the details remain unclear and confusing.  So it's OK if you don't believe this one.  But this is the claim I'm interested in.
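To make claim (1) concrete, here is a toy, hypothetical link-grammar-style lexicon: each word carries typed connectors, and a "parse" exists when every "+" tab hooks a matching "-" slot, jigsaw-style. Real link-grammar is far richer (it links non-adjacent words, with ordering and multi-connector rules); this sketch only checks adjacent pairs:

```python
# Hypothetical mini-lexicon: connectors are types; "X+" offers a link of
# type X to the right, "X-" requires one from the left.

LEXICON = {
    "the":  ["D+"],            # determiner offers D to the right
    "cat":  ["D-", "S+"],      # noun needs D from the left, offers S
    "runs": ["S-"],            # verb needs a subject link from the left
}

def links_ok(sentence):
    words = sentence.split()
    for left, right in zip(words, words[1:]):
        offers = {c[:-1] for c in LEXICON[left] if c.endswith("+")}
        needs = {c[:-1] for c in LEXICON[right] if c.endswith("-")}
        if not (offers & needs):       # no jigsaw tab fits a slot
            return False
    return True

print(links_ok("the cat runs"))   # True
print(links_ok("cat the runs"))   # False
```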

-- Linas
 



Ivan V.

unread,
Apr 2, 2019, 5:36:20 PM4/2/19
to opencog
Linas,

Thanks for taking the time to answer.

I skimmed over the Stackexchange post and I found it interesting (not that anyone cares :o). I tried a while ago to learn about category theory from Wikipedia, but it seemed overcomplicated. I guess I needed examples closer to my knowledge - not mathematical abstractions, but type-theory oriented - as noted on Stackexchange. Maybe I should give it another try; I'll see.

Thanks again,
Ivan V.


Ivan V.

unread,
Jun 14, 2019, 11:55:31 AM6/14/19
to opencog
Hello everyone :)

I was having a few thoughts about the time complexity of inference that I wanted to share with the OpenCog community. I'll try to keep this post short, so as not to take too much of your time.

Basically, logical deduction relies on finding a universal unifier for two formulas in order to derive a third formula from them. But to avoid unification, it is possible to derive from two formulas another two, one with a substitution relative to the first, one with a substitution relative to the second formula. This avoidance of unification puts deduction in a position where it is analogous to function calling.

Further, all logic formulas can be expressed in disjunctive normal form. This DNF is also interchangeable with implicational logic (only implication and falsehood). And implicational logic is strongly reminiscent of unrestricted grammars. In fact, isn't implicational logic inference a kind of higher-order unrestricted grammar parsing, only with predicates expressed as sequences of constants?

So, why am I writing all of this? Suppose that there is a strict isomorphism from logic inference to higher-order unrestricted grammar parsing. Typical CFG parsers exhibit about O(n^3) worst-case time complexity when parsing a text (relative to the text length). I don't have performance numbers for unrestricted grammar parsing, but I assume it wouldn't be much slower (although it would be relative to the length of a path from the top-most rules down to the bottom-most tokens).
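The O(n^3) figure is realized by algorithms like CYK, where the bound is visible directly as three nested loops (span length, start position, split point). A minimal sketch over a hypothetical toy grammar in Chomsky normal form:

```python
# CYK recognizer for a CNF grammar: A -> B C binary rules plus lexical
# rules word -> nonterminal.  The triple nested loop is the O(n^3).

GRAMMAR = {                 # binary CNF rules
    ("NP", "VP"): "S",
    ("Det", "N"): "NP",
}
LEX = {"the": "Det", "cat": "N", "runs": "VP"}

def cyk(tokens):
    n = len(tokens)
    # table[i][j] = set of nonterminals deriving tokens[i : i + j + 1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, tok in enumerate(tokens):
        table[i][0].add(LEX[tok])
    for span in range(2, n + 1):           # O(n) span lengths
        for i in range(n - span + 1):      # O(n) start positions
            for k in range(1, span):       # O(n) split points
                for b in table[i][k - 1]:
                    for c in table[i + k][span - k - 1]:
                        if (b, c) in GRAMMAR:
                            table[i][span - 1].add(GRAMMAR[(b, c)])
    return "S" in table[0][n - 1]

print(cyk(["the", "cat", "runs"]))   # True
```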

The question of this post is: does this possible isomorphism mean that an inference could be performed in polynomial time, relative to the length of a proof, at least in the case of propositional logic?

Thank you all,
Ivan V.

Linas Vepstas

unread,
Jun 14, 2019, 1:01:27 PM6/14/19
to opencog
On Tue, Apr 2, 2019 at 4:36 PM Ivan V. <ivan....@gmail.com> wrote:
Linas,

Tanks for taking time to answer.

I skimmed over the Stackexchange post and I found it interesting (not that anyone cares :o). I tried once a while ago to learn about Category theory from Wikipedia,
 
Wikipedia is not a good way to learn math, in general.  You kind-of already have to know it.

but it seemed over complicated.

Turns out addition is really complicated, if you are a number theorist.  But if you are not, then it doesn't have to be.

I guess I needed examples closer to my knowledge - not mathematical abstractions, but type theory oriented - as noted on Stackexchange. Maybe I should give it another try, I'll see.

I pointed you at the PDF.  Go for the PDF.

--linas
 


Linas Vepstas

unread,
Jun 14, 2019, 1:09:29 PM6/14/19
to opencog


On Fri, Jun 14, 2019 at 10:55 AM Ivan V. <ivan....@gmail.com> wrote:

The question about this post is: Does this possible isomorphism mean t

In grammar, you are given a string of symbols, and have to find how they connect.

In theorem-proving, you are only given the end-points.  You have to find everything in the middle.  You could say that it's like parsing, except that every word in the middle of the sentence is a wild-card (could be anything) and you don't know the length of the sentence.

-- Linas

Ivan V.

unread,
Nov 6, 2019, 5:03:11 AM11/6/19
to opencog
Implika rule-based inference engine

Implika is a JavaScript library implementing a rule-based inference engine. The only operator used in the input is implication, which operates on three types of expressions: constants, schematic variables, and, recursively, implications. These elements are just enough to derive all the other operators and data structures we may find useful in the computational field. The output of the inference engine includes the assertion of all the implicit information that can be derived from the input set of mutually correlating rules. One of the surprises about Implika is its code size: the whole pattern-matching and inference system is implemented in only about 150 lines of JavaScript code. Nevertheless, this does not diminish Implika's ability to represent a Turing-complete system.

Implika input language is a kind of s-expression:

s-exp := () | constant | @variable | (s-exp s-exp)

where the left s-expression implies the right s-expression in `(s-exp s-exp)`. The only rule of inference is modus ponens. Execution time for variable-less expressions such as propositional logic should be between O(n^2) and O(n^4).
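For illustration only, here is a toy forward chainer in the same spirit. This is NOT Implika's actual code or API, just the idea of closing a fact set under modus ponens over ground (variable-free) terms; each pass rescans all rules, which gives the quadratic-ish behavior mentioned above:

```python
# Close a set of ground facts under modus ponens: from a and (a -> b),
# conclude b.  Repeat until nothing new can be derived.

def forward_chain(facts, implications):
    """implications: list of (antecedent, consequent) pairs of ground terms."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for a, b in implications:
            if a in known and b not in known:
                known.add(b)          # modus ponens: a, a -> b |- b
                changed = True
    return known

rules = [("rain", "wet"), ("wet", "slippery")]
print(forward_chain({"rain"}, rules))   # {'rain', 'wet', 'slippery'}
```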

Read about it here: https://github.com/e-teoria/Implika
Test it in dedicated UI here: https://e-teoria.github.io/Implika/test

