
Jekejeke Java Interface?


rupert...@googlemail.com

Sep 14, 2015, 5:19:34 AM
Hi, a question for Jan,

There is a uniform Prolog -> Java interface here:

https://github.com/java-prolog-connectivity

This lets you plug various Prologs into Java and use them all through the same interface.

Would you have any interest in implementing the bridge logic for Jekejeke?

The reason I ask is that the simple Prolog interpreter I wrote (it evaluates directly against the AST; it is not the WAM-based one) is proving far too inefficient. I need a 100% pure Java Prolog, so either I finish the WAM implementation, or I move to something else. If I am moving between implementations, doing so through a uniform interface helps keep everything plug-and-play. So the best way forward for me might be to write the bridge logic for JPL, then alter my code to call through JPL, and then switch in a better implementation.
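To picture the plug-and-play idea: a uniform bridge boils down to a small adapter interface that each engine implements behind the scenes. The sketch below is hypothetical; `PrologBridge` and `StubBridge` are made-up names for illustration, not JPC's actual API.

```java
import java.util.Iterator;
import java.util.List;
import java.util.Map;

// Hypothetical minimal bridge interface in the spirit of JPC; the real API differs.
interface PrologBridge {
    void consult(String program);                      // load clauses from source text
    Iterator<Map<String, String>> solve(String goal);  // enumerate solutions lazily
}

// A stand-in engine: any Prolog (Jekejeke, a home-grown WAM, ...) would sit
// behind the same interface, so implementations stay swappable.
class StubBridge implements PrologBridge {
    public void consult(String program) { /* no-op in this sketch */ }
    public Iterator<Map<String, String>> solve(String goal) {
        // Pretend the goal p(X) has two solutions, X = 1 and X = 2.
        return List.of(Map.of("X", "1"), Map.of("X", "2")).iterator();
    }
}
```

Swapping implementations then means changing one constructor call, which is the whole attraction of the approach.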

I'll have to dig in deeper, but it's also possible that this JPL library links up with the 100% Java GNU Prolog implementation.

The only reason I need 100% Java is so that it runs out of the box with build tools like Maven; hooking up native libraries would probably break that, since users would have to install the right Prolog binaries too.

Rupert

rupert...@googlemail.com

Sep 14, 2015, 5:27:26 AM
On Monday, September 14, 2015 at 10:19:34 AM UTC+1, rupert...@googlemail.com wrote:
> I'll have to dig in deeper, but its also possible that this JPL library links up with the 100% Java GNU Prolog implementation.

Supports SWI, YAP and XSB:

http://java-prolog-connectivity.github.io/architecture.html#compatiblePrologEngines

Jan Burse

Sep 14, 2015, 6:20:19 AM
Hi,

I could contribute, or somebody else could do it.

I am currently following another strand. Let's call
this strand "JVM Foreign Function Interface and Scripting".
This strand allows calling Java directly inside the same
thread, without the overhead of loosely coupling the
Prolog system. The loose coupling is seen here:

JPC Architecture
http://java-prolog-connectivity.github.io/architecture.html

Engine-Specific Drivers --> Concrete Prolog Engines

But Jekejeke Prolog doesn't need this architecture.
It could script Java directly, as JavaScript did with
LiveConnect, and as Java does with Nashorn now. But
I hadn't invested much energy in developing the
tight architecture further.

This has changed in the last few weeks, coinciding
by accident with some other systems also discovering
the benefit of tight integration. For example, one
of Frege's benefits over Haskell is its tight integration
with Java; see also here:

Frege Day 2015: Simon Peyton Jones transcript
https://github.com/Frege/frege/wiki/Frege-Day-2015:-Simon-Peyton-Jones-transcript

So I am actually working on this tight coupling.
There is already some progress. For example, last
week I implemented the new directives foreign_constructor/3,
foreign_setter/3, foreign_getter/3, foreign_function/3
and foreign_constant/3. They are scheduled for
the next release, 1.0.9:

Foreign Predicates
http://www.jekejeke.ch/idatab/doclet/prod/en/docs/05_run/10_docu/02_reference/07_theories/06_reflect/03_foreign.html

Please feel free to use this tight integration
interface to build a loose integration interface.
You could use the tight integration interface,
but more probably you will need the programming API
to implement the loose interface. The programming
API provides abstractions such as
Interpreter and Term.

The programming API is documented here:

Programming Interface
http://www.jekejeke.ch/idatab/doclet/prod/en/docs/05_run/10_docu/03_interface/package.html

My current goal is to strengthen the tight interface
to also allow scripting. The missing piece is currently
an auto loader. But I have already sketched it, with
the accidental help of a little SWI-Prolog discussion
recently. So most probably in the upcoming release
one will be able to do directly:

1) ?- java\lang\System:out(X), java\io\PrintStream:println(X, 'abc').

2) ?- X is java\lang\Math:sin(java\lang\Math:'PI').

2) will most likely work in the upcoming release, since
there is no ambiguity in finding the Java method. On
the other hand, 1) is a little trickier, since
println is overloaded, and we might need some heuristic,
or forbid it altogether.
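The ambiguity is easy to verify from plain Java: PrintStream declares many println overloads, so anything resolving java\io\PrintStream:println by name alone faces several candidates. A small reflective count (the helper name is made up, just for illustration):

```java
import java.io.PrintStream;
import java.lang.reflect.Method;

public class OverloadCount {
    // Count how many println overloads PrintStream exposes; a bridge
    // resolving the call by name alone must somehow pick one of them.
    public static int printlnOverloads() {
        int n = 0;
        for (Method m : PrintStream.class.getMethods()) {
            if (m.getName().equals("println")) n++;
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println("println overloads: " + printlnOverloads());
    }
}
```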

The above scripting idea is based on the ISO Prolog
module standard (proposal), which suggests (:)/2 as the
module operator, plus (\)/2 to allow structured module
names, which I already introduced some months ago. I am not
sure whether Frege also allows scripting, and whether the
typed Haskell philosophy would allow it.

The nice thing about Prolog is that there are no
types in the way that would forbid scripting. And
Java, although it has types, isn't in the way
either, since it has the universal type Object, and the
primitive types can easily be wrapped if necessary.
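This is what makes run-time scripting of Java workable: with reflection, everything, including boxed primitives, travels as Object, which is roughly what a call like `X is java\lang\Math:sin(java\lang\Math:'PI')` needs under the hood. A minimal sketch with made-up helper names, not Jekejeke's actual mechanism:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;

public class DynamicCall {
    // Resolve a static field by name at run time; the result is an Object
    // (primitives come back auto-boxed, e.g. double as Double).
    public static Object getConstant(String className, String fieldName)
            throws Exception {
        Field f = Class.forName(className).getField(fieldName);
        return f.get(null);
    }

    // Resolve and invoke a static one-argument double method by name.
    public static Object call(String className, String methodName, double arg)
            throws Exception {
        Method m = Class.forName(className).getMethod(methodName, double.class);
        return m.invoke(null, arg); // the primitive argument is auto-boxed
    }

    public static void main(String[] args) throws Exception {
        double pi = (Double) getConstant("java.lang.Math", "PI");
        System.out.println(call("java.lang.Math", "sin", pi));
    }
}
```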

So to sum it up:
- My current focus is a JVM foreign function interface,
a variant of tight coupling.
(Improved in the recent weeks.)

- My other current focus is scripting, another variant
of tight coupling.
(Currently in the pipeline.)

- You can use the Programming Interface to build
your own adapter for a loose coupling.
(Available for a couple of years already; just
work through the examples in the link "Programming
Interface" above. I hope they don't generate too many
errors, since they are already old, but you will find,
for example, an HTTP server-side Prolog listener example.)

I didn't mention some limitations of what has been
implemented so far. For example, the FFI now, in
the upcoming release 1.0.9, provides dynamic
invocation. This is seen in the java\io\PrintStream:println
example. But the syntax used is a static syntax. We
don't have an object receiver operator yet in
Jekejeke Prolog.

So we have only a module model, but not yet an
object-oriented model. But my thinking recently was:
who gives a shit in an untyped language anyway? So the
whole object orientation is kind of opaque at the
moment, hidden in the reference datatype of
Jekejeke Prolog. I am not yet sure when, how and
whether this will change.

So the term "tight integration" here is a little oversold.
It is not really tight integration, in the sense that we
would also have data types shared directly between
Prolog and Java. There is slight sharing via the
Term abstraction in the Programming Interface and
via the reference datatype in Jekejeke Prolog, but it
isn't that strong. Maybe Frege is better here,
although it doesn't have scripting.

Hope this helps

Bye


Jan Burse

Sep 14, 2015, 6:29:39 AM
Hi,

Jan Burse schrieb:
> So the term tight integration here is a little oversell.
> It is not really a tight integration in that we
> also have data types that are shared directly between
> Prolog and Java. There is a slight sharing via the
> Term abstraction in the Programming Interface and
> via the reference datatype in Jekejeke Prolog, but it
> isn't that strong. Maybe Frege is better here,
> although it doesn't have scripting.

If you want a Prolog system that is that tightly
coupled, you would be better off with Paul Tarau's
http://code.google.com/p/styla/

This is only a side note; I understand you are
investigating "bridges", as needed by the
JPL approach.

JPL allows you to plug in different Prolog
systems, such as SWI, YAP and XSB. So theoretically
I should also be interested in doing this for Jekejeke.

But I somehow have the feeling that I already did my
job a few years ago in providing the Programming
Interface, and that nothing else should be done on
my side, since it would interfere too much.

I also don't want to lock Jekejeke into a particular
type of "bridge". I guess with my Programming
Interface one can code a couple of different types
of "bridges".

Bye



j4n bur53

Sep 14, 2015, 6:35:33 AM
Hi,

Jan Burse schrieb:
> I also don't want to lockin Jekejeke into a particular
> type of "bridge". I guess with my Programming
> Interface one can code a couple of different types
> of "bridges".

I guess SWISH could be viewed as yet another type of
bridge: you could have another Prolog system
on the server side.

In this kind of bridge, which I am more interested
in, aspects of sandboxing are important. That's why
I take the term "bridge" in the plural.

Bye


Jan Burse

Sep 16, 2015, 4:33:48 AM
Hi,

rupert...@googlemail.com schrieb:
> The reason I ask, is because the simple Prolog interpreter
> that I wrote (eval directly against the AST, not the WAM
> based one), is beginning to show that it is far too inefficient.
> I need a 100% pure Java based Prolog, so either I finish the
> WAM implementation, or move to something else.

Yes, I know. And I also know why, but it's difficult
to solve. Basically, this is what has to be fast in
any Java Prolog implementation:

Problem Zones (where you need to lose weight, He He)
1) The goal invocation.
2) The term access.

In Jekejeke Prolog, term access uses two pointers,
a skeleton and a display. And for goal invocation
I allow a skeleton array and a display:

Jekejeke internal
1) goal invocation: skeleton array and display
2) term access: skeleton and display

Unfortunately the internal API is not yet published,
and will probably never be published; the reason
is obvious: it is not a safe API.

The skeleton array that is passed is mutable. So
any foreign function can kill the Prolog interpreter
by writing illegal stuff into this array.

That's why there is an external API, which takes
another approach and has an additional layer.

In this external approach, terms are wrapped in
Term objects that hold the skeleton and display pair,
and goals are called with a newly created array that
holds Term objects, Numbers, Strings, References
or the auxiliary parameters, depending on the signature.

Jekejeke external
1) goal invocation: fresh array of Terms etc.
2) term access: subclasses of Term

The external approach is pretty safe. The Term class
is programmed to be immutable. The fresh array can be
modified by client code; it is thrown away anyway
after the foreign function has been executed.
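The safety difference between the two styles can be sketched in a few lines of Java. The class names below are illustrative stand-ins, not Jekejeke's actual internal types:

```java
// External style: an immutable wrapper. Foreign functions can read it
// but cannot corrupt the interpreter's state through it.
final class SafeTerm {
    private final String functor;
    private final SafeTerm[] args;

    SafeTerm(String functor, SafeTerm... args) {
        this.functor = functor;
        this.args = args.clone(); // defensive copy: callers cannot mutate us
    }

    String functor() { return functor; }
    SafeTerm arg(int i) { return args[i]; }
    int arity() { return args.length; }
}

// Internal style: a raw mutable array. Fast (no wrapper allocation),
// but any foreign function can write illegal stuff into it.
class UnsafeGoal {
    static Object[] makeGoal(Object... skeleton) { return skeleton; }
}
```

The immutable wrapper costs an allocation and a copy per term, which is exactly the overhead the benchmark numbers below make visible.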

I was aware that the external API is slower than
the internal API. That's also why most of the ISO
core stuff in Jekejeke Prolog is coded via the
internal API, and can currently not be published.

But so far I had no good means or time to measure
the difference. Since I now have foreign_function/3,
I put the whole ISO core to the test, and over
the last days did an alternative implementation of
the arithmetic and the comparison via the external API.

I performed some measurements with my usual runtime
benchmark, which has a lot of arithmetic and comparison.
The results were as follows:

Jekejeke Runtime Benchmark:
Different APIs used to realize arithmetic/comparison BIPs

Via Internal: Via External:
ca. 7'400 ms ca. 8'900 ms

Not all test cases were slowed down, only those
using arithmetic and comparison. For example, the
simple query test case:

Query Benchmark Program
http://www.jekejeke.ch/idatab/doclet/blog/docs/05_run/06_bench/tests/query.html

Slowed down by 30%.

So if your WAM- or AST-based Java Prolog uses an
additional class to wrap pairs of pointers or whatever,
it will slow down. But at the moment I am more clueless
about how the internal API could be sped up than
about the external API.

The external API could be sped up so that, in
effect, it is somehow compiled down to the internal
API. Or maybe via some other ideas I don't know
about yet. The fresh arrays could also be cached
or some such.

For the internal API I have no more ideas at present.
I did a little reprogramming of the code recently
to take advantage of JVM native-code tail calls,
but there was still not much impact. And I am
totally clueless about how to make it safe.

Bye



Jan Burse

Sep 16, 2015, 4:38:56 AM
Hi,

Well, applying some ideas from, for example,
the Nashorn JavaScript realization in Java
could also be a route.

Nashorn: The New Rhino on the Block
http://ariya.ofilabs.com/2014/03/nashorn-the-new-rhino-on-the-block.html

But it's pretty involved, viewed from a
distance, and uses the new Java invoke
bytecode, and, and...

Bye


rupert...@googlemail.com

Sep 16, 2015, 6:24:34 AM
On Wednesday, September 16, 2015 at 9:33:48 AM UTC+1, Jan Burse wrote:
> Hi,
>
> rupert...@googlemail.com schrieb:
> > The reason I ask, is because the simple Prolog interpreter
> > that I wrote (eval directly against the AST, not the WAM
> > based one), is beginning to show that it is far too inefficient.
> > I need a 100% pure Java based Prolog, so either I finish the
> > WAM implementation, or move to something else.
>
> Yes, I know. And I also know why, but its difficult
> to solve.

In this case we are talking about different levels of efficiency - either your internal or external API would easily be fast enough for my requirements.

My Prolog interpreter is massively inefficient in handling memory, for one thing, and this can lead to garbage-collection thrashing. When a deep enough search path is held in memory and the interpreter is exploring the search space below it, there can sometimes be only a small amount of heap left, and that is collected repeatedly.

rupert...@googlemail.com

Sep 16, 2015, 6:27:24 AM
On Monday, September 14, 2015 at 11:29:39 AM UTC+1, Jan Burse wrote:
> But I somehow have the feeling I did already my
> Job a few years ago in providing the Programming
> Interface, and nothing else should be done on
> my side, since it would interfer too much.

A reasonable position to take. It would not be hard to implement a JPC bridge on top of that; it only really requires translating the Term implementations for the AST between one API and another, and hooking up the methods to make queries and load programs.

Of course this means that the API then has 3 layers: JPC -> your external API -> your internal API. Not super efficient, but good enough.

rupert...@googlemail.com

Sep 16, 2015, 6:29:44 AM
On Monday, September 14, 2015 at 11:29:39 AM UTC+1, Jan Burse wrote:
> This is only a side note, I understand you are
> investigating into "bridges", as needed by the
> JPL approach.

The attraction of this is the plug-and-play aspect. I can swap in a better system and use that. When my own WAM implementation is ready, I can swap back to that.

j4n bur53

Sep 16, 2015, 2:58:52 PM
Hi,

rupert...@googlemail.com schrieb:
> My Prolog interpreter is massively inefficient in
> handling memory for one thing, and this can lead
> to garbage collection thrashing. When a deep
> enough search path is held in memory and it is
> exploring the search space below that, there
> can sometimes only be a small amount of heap
> left and that is collected repeatedly.

Yes, I know. I see this too, across the different
optimization levels I have. The fewer optimizations
I apply, the more often it happens. With all
the optimizations in place, it still happens with
Jekejeke Minlog, but there the constraint store
is the problem and not the "WAM".

By optimizations for my "WAM" (which is not a "WAM",
rather its forerunner, a "PLM") I mean, for
example, this list:

Available Optimizations
http://www.jekejeke.ch/idatab/doclet/prod/docs/05_run/15_stdy/06_bench/06_optimizations/package.html

So let me explain:
- Choice Point Elimination: Obvious; when you
don't create choice points, i.e. objects on
the heap in the case of Jekejeke Prolog, you
get less pressure on the GC.

- Clause Indexing: Has an indirect effect through
choice point elimination, and thus also on
GC pressure. With clause indexing, more
choice points can be eliminated, and thus
GC pressure is lowered.

- Body Variable Elimination: Obvious, given
the meaning it has in Jekejeke Prolog; it
allows giving back more memory, in that
it helps null out display entries, and
thus reclaim whole structures. Less
GC pressure.

- Stack Frame Elimination: Obvious, given
the meaning it has in Jekejeke Prolog; it
allows giving back more memory, in that
it helps shorten the ancestor list.
Less GC pressure.

- Head Variable Elimination: Obvious, given
the meaning it has in Jekejeke Prolog; it
allows using fewer variables in a clause,
thus consuming less memory from the beginning.
Less GC pressure.

As an additional side note, the clause indexing above
need not be first-argument indexing. For example,
just-in-time indexing, as in Jekejeke Prolog or
SWI-Prolog, and not really part of the WAM concept,
obviously has an additional benefit as well and
lowers the GC pressure.

GC pressure is lowered by multi-argument indexing
in that it allows eliminating more choice
points than first-argument indexing alone.
And the more choice points are eliminated, the better
body variable elimination and stack frame
elimination work.

Jikes

I don't know yet what the list of optimizations
for the constraint store should be. The Jekejeke
Minlog constraint store is new, and I don't yet
have performance measurements at the different levels.
There are aspects like the performance of attributed
variables, the performance of the trail, I need
call/n, and so on.

Also, the above list of non-constraint-store
optimizations is probably not complete. I guess
there are a couple of other ideas around. For
example, SWI-Prolog is pretty good at predicates
such as those found in the nrev example.
I don't know yet what the missing magic is.

Further, the above ideas are basically all more
or less at the local clause level. Of course
some more global optimizations, or types, would
also give an additional benefit. But then
one would probably end up with a language such
as Mercury or some such, and not with Prolog.

Bye

j4n bur53

Sep 16, 2015, 3:04:06 PM
Hi,

j4n bur53 schrieb:
> As an additional side note, the Clause Indexing above
> need not be first argument indexing. For example
> just-in-time index as in Jekejeke Prolog or
> SWI-Prolog, and not really part of the WAM concept,
> obviously has an additional benefit as well and
> lowers the GC pressure.

A further side note: if I remember well, the
WAM has some of these optimizations by design;
it depends on which instruction set you use.

For example, last call optimization is similar
to my stack frame elimination. And the distinction
of temporary variables is similar to my head
variable elimination.

So you might have implemented a very primitive
WAM, without these instructions, and then yes,
it's more likely to produce GC pressure.

Bye


R Kym Horsell

Sep 16, 2015, 5:36:26 PM
j4n bur53 <janb...@fastmail.fm> wrote:
> ...[thrashing]...

Do you have anything short that causes thrashing in JJP?

Just happen to be trying to tune up a small prolog and need
something for it to chew on.


j4n bur53

Sep 17, 2015, 2:52:51 AM
Hi,

R Kym Horsell schrieb:
> j4n bur53 <janb...@fastmail.fm> wrote:
>> ...[thrashing]...
>
> Do you have anything short that causes thrashing in JJP?
>
> Just happen to be trying to tune up a small prolog and need
> something for it to chew on.
>

The core benchmark thrashes sometimes when I
disable all optimization options, which can
be done by:

set_prolog_flag(<optimization>, off).

The flags are listed at the end of this
documentation page:


http://www.jekejeke.ch/idatab/doclet/prod/docs/05_run/10_docu/02_reference/07_theories/01_kernel/04_optimization.html

The core benchmark thrashes on some JVMs when
everything is off, depending on the memory settings.
Especially deriv.p thrashes sometimes.

That deriv.p thrashes is also seen in the summary
of the different levels of optimization. The
summary is here:


http://www.jekejeke.ch/idatab/doclet/prod/docs/05_run/15_stdy/06_bench/07_internal/01_results.html

           ms     Choice  Index  Body  Stack  Head
deriv   6'634    off     off    off   off    off
        5'506    on      off    off   off    off
        5'924    on      on     off   off    off
        4'107    on      on     on    off    off
          509    on      on     on    on     off
          391    on      on     on    on     on

I measure Java GC time as well as Java wall time. There is
no detailed table with Java GC time in the documentation,
but you can read it off yourself. You can see the code here:

Time ~ Thrashing. This relation usually holds, especially in
the strategies table. But thrashing also means that the
JVM sometimes emits error messages about its GC behaviour,
varying from JVM to JVM, since different JVMs have different GCs.

I don't measure the error messages. I also don't measure core
utilization. In some JVMs the GC uses multiple threads, so code
sometimes runs at more than 100% of a core even though it is
sequential. But you can easily write a memory monitor, with the
Jekejeke Programming API, for Jekejeke Prolog:

http://www.jekejeke.ch/idatab/doclet/prod/docs/10_dev/10_docu/03_interface/04_examples/02_memory/05_uses.html

The source code of the memory monitor is available; it uses
a third-party product (org.jfree.chart) for the diagram. The
memory monitor is not yet integrated; it's just a programming
exercise. The main class of the exercise is:

The code of the memory-threshold listener, which is
only available for Swing and not for Android, is currently
not public, since it uses the internal API to signal all
currently running Prolog threads. I don't know yet what I
should do for Android.

Bye

j4n bur53

Sep 17, 2015, 3:04:11 AM
Hi,

j4n bur53 schrieb:
>
> I don't measure the error messages. Also I don't measure core
> utility. In some JVMs GC uses multiple threads, so code runs
> sometimes more than 100% cores even its sequential. But you
> can easily write a memory Monitor, with the Jekejeke Programming
> API, for the Jekejeke Prolog:
>
> http://www.jekejeke.ch/idatab/doclet/prod/docs/10_dev/10_docu/03_interface/04_examples/02_memory/05_uses.html

I think you can see thrashing in the second diagram, when
the memory usage suddenly flattens before the 85% threshold
is reached and Jekejeke Prolog itself kills
the process.

So the JVM itself starts thrashing before 85%. But I don't
currently know where this threshold for GC thrashing could
be read off, and I am currently using a fixed kill
threshold, which might not always work, and not
for all JVMs.

So the kill might come too late, and the result is
either an uncaught memory error or even a System.exit.
It's not 100% reliable, so don't run your nuclear
weapons with Jekejeke; you might blow yourself up.

Bye

rupert...@googlemail.com

Sep 17, 2015, 6:45:32 AM
On Wednesday, September 16, 2015 at 8:04:06 PM UTC+1, j4n bur53 wrote:
> For example last call optimization is similar
> to my stack frame elimination. Or the distinction
> of temporary variables is similar to my head
> variable elimination.
>
> So you might have implemented a very primitive
> WAM, without these instructions, and then yes
> its more likely to produce GC prssre.


Yes, in the WAM version I have implemented those.

In the AST interpreter version, the only optimization I implemented was first-argument indexing (or maybe I did up to the first 3 arguments).

It does things like expanding all choice points into a list, then exploring the list. All elements of the list are kept until the end of the list is reached. I could possibly insert a few nulling statements, like "currentNode = null;", or tweak it a bit to use an iterator instead of a list, expanding choice points only when they are about to be examined. These changes would probably make it use massively less RAM. As I say, it was a very primitive interpreter that I wrote some time ago just to get things off the ground.
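The iterator tweak described here might look like the sketch below: alternatives are produced on demand, so nothing before the current one stays reachable. This is an illustrative stand-in, not the actual interpreter code:

```java
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

// Expanding choice points on demand instead of materializing them all up
// front: only the current alternative is live, so exhausted alternatives
// become garbage immediately instead of being pinned by a list.
class ChoicePoints implements Iterator<String> {
    private final List<String> clauses; // stand-in for the matching clauses
    private int next = 0;

    ChoicePoints(List<String> clauses) { this.clauses = clauses; }

    public boolean hasNext() { return next < clauses.size(); }

    public String next() {
        if (!hasNext()) throw new NoSuchElementException();
        return clauses.get(next++); // no list of expanded nodes is retained
    }
}
```

In a real interpreter the `clauses` list would itself be replaced by a lazy clause lookup, so not even the unexpanded alternatives occupy heap.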

R Kym Horsell

Sep 17, 2015, 8:23:20 AM
j4n bur53 <janb...@fastmail.fm> wrote:
> Hi,
> R Kym Horsell schrieb:
>> j4n bur53 <janb...@fastmail.fm> wrote:
>>> ...[thrashing]...
>> Do you have anything short that causes thrashing in JJP?
>> Just happen to be trying to tune up a small prolog and need
>> something for it to chew on.
> The core benchmark trashes sometimes when I
> disable all optimization options. Which can
> be done by
> set_prolog_flag(<optization>, off).
> The flags are listed at the end of this
> documentation page:
> http://www.jekejeke.ch/idatab/doclet/prod/docs/05_run/10_docu/02_reference/07_theories/01_kernel/04_optimization.html

...


Thanks for that.


j4n bur53

Sep 17, 2015, 9:01:43 AM
Hi,

deriv.p is the real bad boy. But if you think about it,
it is in the original WAM paper by Warren, and was
used as a motivation to move from LISP to Prolog.

deriv.p is just a predicate with the following
input/output pattern:

% d(+Expr,+Var,-Expr)

And it computes the derivative, non-simplified,
of a given mathematical expression.

But it puts the Prolog system to the test with
respect to structure sharing, and also with respect
to choice points. The crucial clauses are here:

etc..
d(log(U), X, DU/U) :- !,
   d(U, X, DU).
d(X, X, 1) :- !.
d(_, _, 0).
http://www.jekejeke.ch/idatab/doclet/blog/docs/05_run/06_bench/tests/deriv.html

Even if you have indexing, since the last and the
last-but-one clauses have variables in their heads,
you will get a list of choices of length 3.
Bye

P.S.: So besides indexing you need, for example,
to be able to work as follows: if you have a list
with n choices, you only need n-1 choice points;
the last choice is deterministic (optimization alpha).

Further, if you backtrack through all the choices,
the same choice point can be reused
(optimization beta).

And a neck cut instruction also helps, to quickly
eliminate the creation of choice points
(optimization gamma).

After all these optimizations Prolog nearly
reaches status of LISP. He He

P.P.S.: I would claim Prolog can beat functional
languages such as Haskell etc. in the above examples,
if all these optimizations are in place.

But when Warren wrote the paper, there was LISP
and no Haskell yet. And maybe Prolog cannot beat
the new Frege, more or less a Haskell-for-Java
variant.

It all depends on how these folks code pattern
matching; testing against Scala could also be
interesting. But I didn't test.

Also try the other JVM Prologs, such as tuProlog,
JIProlog, JLog, JScriptLog, jTrolog, etc.; the last
time I tested I also saw thrashing.

j4n bur53

Sep 17, 2015, 9:04:36 AM
This is the benchmark of page 222 of:
Warren, D.H.D. (1983): Applied Logic - Its Use and
Implementation as a Programming Tool,
Technical Note 290, SRI International, 1983


Jan Burse

Sep 17, 2015, 1:13:05 PM
SICStus Prolog

Java Doc: Prolog Beans
https://sicstus.sics.se/sicstus/docs/4.0.5/html/prologbeans/

Manual: Prolog Beans Interface
http://nlpcl.kaist.ac.kr/~cs370/manual/lib_002dprologbeans.html


Jan Burse

Sep 17, 2015, 3:56:51 PM
Being trippy with bridges:
https://www.youtube.com/watch?v=coL-Cm3Ugaw


Jan Burse

Sep 18, 2015, 4:48:53 AM
Hi,

Most likely you get a native stack overflow error in
Frege and Scala, those languages that translate to
Java, except if you set the stack size very large.

There is a functional alternative I have seen
somewhere: you can use certain programming patterns
and some types resp. classes that realize trampolines.

With trampolines the stack overflow maybe goes away,
but performance probably also goes down again. So I
guess Prolog is still better in those areas.
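For reference, the trampoline pattern alluded to here can be written in a few lines of Java: recursion is reified as heap-allocated thunks and driven by a loop, so the native stack stays flat. This is a generic sketch, not tied to any particular library:

```java
import java.util.function.Supplier;

public class Trampoline {
    // A step is either a final value (next == null) or a deferred computation.
    final long value;
    final Supplier<Trampoline> next;

    Trampoline(long value, Supplier<Trampoline> next) {
        this.value = value;
        this.next = next;
    }

    static Trampoline done(long v) { return new Trampoline(v, null); }
    static Trampoline more(Supplier<Trampoline> next) { return new Trampoline(0, next); }

    // Drive the computation with a loop: constant native stack depth,
    // at the price of one thunk allocation per "recursive call".
    static long run(Trampoline t) {
        while (t.next != null) t = t.next.get();
        return t.value;
    }

    // Plain recursion at this depth would throw StackOverflowError.
    static Trampoline countDown(long n) {
        return n == 0 ? done(0) : more(() -> countDown(n - 1));
    }

    public static void main(String[] args) {
        System.out.println(run(countDown(1_000_000)));
    }
}
```

The per-step allocation is exactly the performance cost mentioned above: the GC now sees one short-lived object per recursive step.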

Bye


n2kr...@gmail.com

Sep 25, 2015, 9:17:59 AM
So AUM, used by Shen / Qi Prolog, is a Prolog-to-lambda
machine (not useful to you), whereas the WAM targets a
Java / von Neumann / Turing machine.

There are Prologs on Lisp-2. I wondered whether (Lisp-1)
Shen / Qi Prolog is its own namespace, or whether Prolog
is only reified in itself?

On Thursday, September 17, 2015 at 9:01:43 AM UTC-4, Jan Burse wrote:
> Hi,
>
> deriv.p is the real bad boy. But if you think about,
> it is in the original WAM paper by Warren, and was
> used as a motivation to move from LISP to Prolog.


Jan Burse

Sep 25, 2015, 12:54:59 PM
n2kr...@gmail.com schrieb:
> Shen /

I did some testing of Shen Prolog:
https://plus.google.com/u/0/b/103259555581227445618/+JekejekeCh/posts/Wien54QZFpW

There was also a little thread about it:
https://groups.google.com/d/msg/qilang/fJCEZ0vuBos/strpKkv3ixkJ

The current Prolog LIPS ladder(TM):

300 KLIPS:
For a Shen/Qi-Prolog-like implementation, with
no structure sharing, literally copying the
clause body and doing some assoc lookups for
the variables, it is very hard to even break the
300 KLIPS barrier. But the situation might have
changed meanwhile.

3 MLIPS:
Please note that, in terms of KLIPS, you should
really get above 3 MLIPS for an AST-based Prolog.
Your tiny Prolog is already close to 1 MLIPS.

30 MLIPS:
And some of the machine-compiled Prolog systems
reach the 30 MLIPS barrier and above.
Bye

(TM) Just joking, no TM.

Jan Burse

Sep 25, 2015, 1:02:11 PM
Hi,

Note that I don't trust the LIPS display of
Prolog systems, since different Prolog systems
might count differently.

What I did above was NORMING the count of
inferences: I do my own count, and then
use it to recompute a LIPS figure, rather
than using what the system displays.

Bye

Jan Burse

Sep 29, 2015, 9:56:35 AM
Hi,

Jan Burse schrieb:
> My current goal is to strengthen the tight interface
> to also allow scripting. The missing piece is currently
> an auto loader.

The following will work in the upcoming release:

?- java/lang/'System':exit.

See also:
http://plus.google.com/+JekejekeCh/posts/LQKg9bAxwio

Bye

P.S.: Any such calls and evaluations <Java Class>:<Java Member>
will be loaded only once, and are then blazingly fast via the
polymorphic cache. So you can code loops:

?- repeat, <do something Java>, fail.

And the Java part will not cause any discovery overhead. The
Java discovery reflection is only done once, when creating the
synthetic module.

The remaining overhead is only that the synthetic module
uses the foreign directives, and thus the external API,
plus the overhead of the Java invocation reflection.

rupert...@googlemail.com

unread,
Sep 30, 2015, 10:45:08 AM9/30/15
to
On Thursday, September 17, 2015 at 11:45:32 AM UTC+1, rupert...@googlemail.com wrote:
> It is possible that I could insert a few nulling statements, like
> "currentNode = null;", or tweak it a bit to use an iterator instead of
> a list, and expand choice points only when they are to be examined.
> These changes would probably result in it using massively less amounts
> of RAM. As I say, it was a very primitive interpreter I wrote some time
> ago just to get things off the ground.

So I managed to get my interpreter running at least ten times faster, with a few simple tweaks:

One was as above: when leaving a choice point, null out all of its fields. This caused the garbage collector to drop from consuming about 90% of CPU to 0.1%, and the amount of RAM needed to fall by about a factor of 10 too. I think I was forcing the GC to walk some long reference chains, and probably also had a lot of objects that stayed referenced beyond their useful life.
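For illustration, the nulling tweak might look like this (a sketch with invented class names, not the actual interpreter code):

```java
// Sketch: null out the fields of a choice point when it is popped,
// so the popped frame no longer pins a long reference chain and the
// GC can reclaim the goals and bindings behind it.
import java.util.ArrayDeque;
import java.util.Deque;

class ChoicePoint {
    Object goal;          // continuation goal at this choice point
    Object bindings;      // trail / bindings snapshot
    ChoicePoint parent;   // previous choice point

    void clear() {        // drop all references on exit
        goal = null;
        bindings = null;
        parent = null;
    }
}

class ChoiceStack {
    private final Deque<ChoicePoint> stack = new ArrayDeque<>();

    void push(ChoicePoint cp) { stack.push(cp); }

    ChoicePoint pop() {
        ChoicePoint cp = stack.pop();
        cp.clear();       // the key tweak: release on leaving the frame
        return cp;
    }
}
```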

Then I noticed the profiler said it was spending most of its time in NoSuchElementException.<init>. I had written an iterator over a sequence that signalled the end of the sequence with an exception instead of a state flag. I had not realized just how slow exceptions are in Java; I thought it was only C++ that had slow exceptions. My bad anyway for using an exception for flow control.
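For comparison, the two iteration styles look like this (illustrative classes, not the original interpreter's):

```java
// Sketch: signalling end-of-sequence with an exception is slow,
// because each throw fills in a stack trace; calling hasNext()
// first keeps exceptions off the hot path entirely.
import java.util.NoSuchElementException;

class Seq {
    private int i = 0;
    private final int limit;
    Seq(int limit) { this.limit = limit; }

    boolean hasNext() { return i < limit; }

    int next() {
        if (i >= limit)                      // still throws if misused
            throw new NoSuchElementException();
        return i++;
    }
}

public class FlowControl {
    public static void main(String[] args) {
        Seq s = new Seq(3);
        int sum = 0;
        // flag-based loop: no exception is ever constructed here
        while (s.hasNext())
            sum += s.next();
        System.out.println(sum);             // prints 3 (0 + 1 + 2)
    }
}
```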

Sprinkled a few green cuts into my Prolog code too, and I got the execution time down from over 20 seconds to under 1.

I should probably try the nrev benchmark on it and see just how slow it is compared with a decent system, but I think it's now good enough to continue using for my present purposes at least.

Rupert

rupert...@googlemail.com

unread,
Sep 30, 2015, 10:56:26 AM9/30/15
to
On Wednesday, September 30, 2015 at 3:45:08 PM UTC+1, rupert...@googlemail.com wrote:
> I should probably try the nrev benchmark on it and see just how slow
> it is compared with a decent system.

Works out at about 2000 iterations/sec; I think 6000 was previously suggested as typical of a decent interpreter.

1000 iterations in 506 millis, which is 1976 iterations/sec.
1000 iterations in 790 millis, which is 1265 iterations/sec.
1000 iterations in 503 millis, which is 1988 iterations/sec.
1000 iterations in 506 millis, which is 1976 iterations/sec.
1000 iterations in 503 millis, which is 1988 iterations/sec.
1000 iterations in 498 millis, which is 2008 iterations/sec.
1000 iterations in 496 millis, which is 2016 iterations/sec.
1000 iterations in 501 millis, which is 1996 iterations/sec.
1000 iterations in 503 millis, which is 1988 iterations/sec.
1000 iterations in 505 millis, which is 1980 iterations/sec.
1000 iterations in 503 millis, which is 1988 iterations/sec.
1000 iterations in 509 millis, which is 1964 iterations/sec.
1000 iterations in 503 millis, which is 1988 iterations/sec.
1000 iterations in 503 millis, which is 1988 iterations/sec.
1000 iterations in 510 millis, which is 1960 iterations/sec.
1000 iterations in 505 millis, which is 1980 iterations/sec.
1000 iterations in 509 millis, which is 1964 iterations/sec.

Markus Triska

unread,
Oct 4, 2015, 5:46:02 AM10/4/15
to
Hi Rupert,

"rupert...@googlemail.com" <rupert...@googlemail.com> writes:

> So I managed to get my interpreter running at least ten times faster,
> with a few simple tweaks:

It's really great that you are making such nice progress with your
interpreter! I always enjoy reading your reports and findings.

All the best,
Markus

Jan Burse

unread,
Nov 5, 2015, 6:00:09 AM11/5/15
to
R Kym Horsell wrote:
> j4n bur53 <janb...@fastmail.fm> wrote:
>> ...[thrashing]...
>
> Do you have anything short that causes thrashing in JJP?
>
> Just happen to be trying to tune up a small prolog and need
> something for it to chew on.
>

Hi R Kym,

In some post you said throwing NoSuchElementException
might have been the problem, and you eliminated that.

I am also worrying a lot about this exception throwing in
Java. For example, I have not yet found a method that can
tell whether a class exists without throwing an exception.

The API is:

public Class<?> loadClass(String name)
    throws ClassNotFoundException;

Why the heck don't they just return null if they don't
find the class? Anyway, this is the design and one has to
live with it.
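One can at least confine the try/catch to a single wrapper (my sketch, not Jekejeke code). Note the exception is still constructed internally, so the cost remains; only the call sites get cleaner:

```java
// Sketch: turn the checked ClassNotFoundException into a null
// return, so callers can probe for a class without try/catch at
// every call site.
public class ClassProbe {
    static Class<?> findClassOrNull(ClassLoader loader, String name) {
        try {
            return loader.loadClass(name);
        } catch (ClassNotFoundException x) {
            return null; // an absent class is an expected outcome here
        }
    }

    public static void main(String[] args) {
        ClassLoader cl = ClassProbe.class.getClassLoader();
        System.out.println(findClassOrNull(cl, "java.lang.String") != null);
        System.out.println(findClassOrNull(cl, "no.such.Clazz") != null);
    }
}
```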

It seems that the problem was also noticed later on,
and some JDKs, notably the older so-called server VM
versions, already perform a certain optimization:
"The compiler in the server VM now provides correct stack backtraces
for all "cold" built-in exceptions. For performance purposes, when such
an exception is thrown a few times, the method may be recompiled. After
recompilation, the compiler may choose a faster tactic using
preallocated exceptions that do not provide a stack trace. To disable
completely the use of preallocated exceptions, use this new flag:
-XX:-OmitStackTraceInFastThrow."

So a cold throw is a throw whose call-site throw count is
still small. As soon as the count crosses a threshold,
exceptions without a backtrace are thrown instead. This
has led to around 100 questions and answers (sic!) on SO:

http://stackoverflow.com/search?q=OmitStackTraceInFastThrow

Preallocated exceptions, though, don't fully work if they
have a parameter or a cause.
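The same trick can be applied by hand for one's own control-flow exceptions: preallocate a single instance and suppress the stack walk (a sketch; the class name is invented):

```java
// Sketch of a preallocated, stackless exception, mirroring what the
// server VM's fast-throw optimization does automatically. As noted
// above, this cannot carry a per-throw message or cause, since the
// single instance is shared.
public class FastFail extends RuntimeException {
    static final FastFail INSTANCE = new FastFail();

    private FastFail() {
        // no message, no cause, no suppression, no writable stack trace
        super(null, null, false, false);
    }

    @Override
    public synchronized Throwable fillInStackTrace() {
        return this; // skip the expensive stack walk
    }
}
```

Throwing is then just `throw FastFail.INSTANCE;`, with no allocation and no backtrace capture on the hot path.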

Jan Burse

unread,
Nov 5, 2015, 6:14:14 AM11/5/15
to
Jan Burse wrote:
> In some post you said throwing NoSuchElementException
> might have been the problem, and you eliminated that.

The Iterator interface of Java must throw a NoSuchElementException,
but the throw can be avoided if one calls hasNext() in advance. So
instead of:

try {
    next();
    /* success */
} catch (NoSuchElementException x) {
    /* failure */
}

One could do:

if (hasNext()) {
    next();
    /* success */
} else {
    /* failure */
}

In my current Interactor prototype I managed to make both
perform the same (except for the backtrace), thanks to a
small state machine: hasNext() will already try to solve
the goal, and next() will know this.

The API is:
Class CallIn

http://www.jekejeke.ch/idatab/doclet/prod/en/docs/05_run/10_docu/03_interface/07_call/02_callin.html

The state machine is:
Interactor Experiment: New CallIn API. (Jekejeke)

https://plus.google.com/u/0/b/103259555581227445618/+JekejekeCh/posts/Qr2wLGBVLXV
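Such a lookahead state machine can be sketched as follows (my own illustration, not the CallIn source; the solver is modelled as a Supplier that returns null on failure):

```java
// States: FRESH (no lookahead yet), READY (a solution is cached),
// DONE (no more solutions). hasNext() does the actual solving;
// next() just hands out the cached solution, so the
// if(hasNext())/next() style and the try/catch style cost the same.
import java.util.NoSuchElementException;
import java.util.function.Supplier;

class SolutionIterator<T> {
    private enum State { FRESH, READY, DONE }
    private State state = State.FRESH;
    private T solution;
    private final Supplier<T> solver; // null return models a failing goal

    SolutionIterator(Supplier<T> solver) { this.solver = solver; }

    boolean hasNext() {
        if (state == State.FRESH) {            // solve on first inquiry
            solution = solver.get();
            state = (solution == null) ? State.DONE : State.READY;
        }
        return state == State.READY;
    }

    T next() {
        if (!hasNext())                        // also covers a bare next()
            throw new NoSuchElementException();
        state = State.FRESH;                   // force a re-solve next time
        T s = solution;
        solution = null;
        return s;
    }
}
```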

But this is stuff for the next release 1.0.10. I am still
seeking a name for a special call sequence, a convenience
of the form:

public void xxx() throws NoSuchElementException {
    next();
    close();
}

Currently this is stashed in the Interpreter class as
unfoldChecked(), but could also move to the CallIn class.
The above convenience is very similar to:

\+ \+ goal

Which is usually called "garbage collection", since it
doesn't leave choice points nor a trail for the given
goal. But I am afraid that providing a method named gc()
in an Interactor could be a little misleading.

This is useful for goals such as write/1 or consult/1,
where one doesn't want much handling. All directives
in a Prolog text are also similarly handled.

Bye


Jan Burse

unread,
Nov 5, 2015, 6:16:56 AM11/5/15
to
Jan Burse wrote:
> instead of:
>
> try {
> next();
> /* success */
> } catch (NoSuchElementException x) {
> /* failure */
> }
>
> One could do:
>
> if (hasNext()) {
> next();
> /* success */
> } else {
> /* failure */
> }

Assuming the /* success */ block doesn't throw
a NoSuchElementException.

rupert...@googlemail.com

unread,
Nov 5, 2015, 11:32:17 AM11/5/15
to
On Thursday, November 5, 2015 at 11:00:09 AM UTC, Jan Burse wrote:
> I am also worrying a lot about this exception throwing of
> Java. For example I did not yet figure out a function that
> could tell if a class was found or not without throwing
> an exception.
>
> The API is:
>
> public Class<?> loadClass(String name)
> throws ClassNotFoundException;
>
> Why the heck don't they just return null if they don't
> find the class?

Same with the methods to query for a particular constructor or method: there is no method like boolean hasConstructor(...) that could be used to avoid the exception-throwing call.
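A hasConstructor(...) can at least be emulated without any exception by scanning getDeclaredConstructors() and comparing parameter types (a sketch):

```java
// Sketch: probe for a constructor without triggering
// NoSuchMethodException, by scanning the declared constructors
// instead of calling getConstructor(...) in a try/catch.
import java.lang.reflect.Constructor;
import java.util.Arrays;

public class ReflectProbe {
    static boolean hasConstructor(Class<?> clazz, Class<?>... params) {
        for (Constructor<?> c : clazz.getDeclaredConstructors()) {
            if (Arrays.equals(c.getParameterTypes(), params))
                return true;            // exact parameter-type match
        }
        return false;                   // no match, no exception thrown
    }

    public static void main(String[] args) {
        System.out.println(hasConstructor(String.class, char[].class));
        System.out.println(hasConstructor(String.class, int[].class));
    }
}
```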

Rupert

Jan Burse

unread,
Nov 7, 2015, 5:23:44 AM11/7/15
to
Jan Burse wrote:
> public void xxx() throws NoSuchElementException {
> next();
> close();
> }

I currently went with the name nextClose() for the above.
There is still some thinking going into how the Java monad
should be mapped to the Prolog monad.

Assuming we want to automatically map Java interfaces
to Prolog predicates, and we don't want to always go
with the Iterable/Iterator approach from tuProlog when
we know the Prolog predicate is deterministic or
semi-deterministic.

Then we could do the following:

An interface method which doesn't have a return value:
void p(T1 a1, .., Tn an);
Could be mapped to a Prolog call:
callin(<build p(a1,..,an)>).nextClose();

But what about the following:

An interface method which does have a return value:
T p(T1 a1,.., Tn an);
Could be mapped to a Prolog call:
callin(<build X>,<build p(a1,..,an,X)>).nextClose();
- or -
callin(<build X>,<build p(a1,..,an,X)>).hasNextClose();

Meaning nextClose() could also have a return value, possibly
doing a copy of the value before closing the interactor.

And if it has a return value we would need two variants: one
variant that tolerates a failure of the Prolog predicate and
would return null. After all, this programming style is
sometimes found in Java as well.
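The two variants might then look like this (hypothetical shapes, just to make the discussion concrete; the real CallIn API may differ):

```java
// Sketch of the two return-value variants discussed above:
// a strict one that throws on failure of the Prolog goal, and a
// tolerant one that maps failure to null, Java style.
import java.util.NoSuchElementException;

class Interactor<T> {
    private final T result;              // null models a failing goal
    Interactor(T result) { this.result = result; }

    private boolean hasNext() { return result != null; }
    private void close() { /* release choice points and trail */ }

    // strict variant: a failing goal is an exception
    T nextClose() {
        if (!hasNext())
            throw new NoSuchElementException();
        close();
        return result;
    }

    // tolerant variant: a failing goal just yields null
    T hasNextClose() {
        T r = hasNext() ? result : null;
        close();
        return r;
    }
}
```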

Here is a potato:
http://codegolf.stackexchange.com/questions/62732/implement-a-truth-machine/62901#62901

Jan Burse

unread,
Nov 7, 2015, 5:48:40 AM11/7/15
to
Jan Burse wrote:
> Meaning nextClose() could also have a return value, possibly
> doing a copy of the value before closing the interactor.

To direct a constraint solver, for example to deterministically
build the model from Java, you would need nextCut() instead
of nextClose(): only remove the choice points but not
the trail. Bah Bah.

Jan Burse

unread,
Nov 7, 2015, 5:53:25 AM11/7/15
to
That's why Picat isn't the worst idea...

Jan Burse wrote:

Jan Burse

unread,
Nov 7, 2015, 6:09:20 AM11/7/15
to
Jan Burse wrote:
>
> To direct a constraint solver, for example to deterministically
> built the model from Java, you would need nextCut() instead
> of nextClose(). Only remove the choice points but not
> the trail. Bah Bah.

Well, nextCut() or a plain next() would probably correspond to:
http://jacopapi.osolpro.com/org/jacop/core/Store.html#imposeWithConsistency-org.jacop.constraints.Constraint-

And we might do something with scheduling goals:
http://jacopapi.osolpro.com/org/jacop/core/Store.html#impose-org.jacop.constraints.Constraint-

Or detect an unattended failure and block the interpreter
until the failure is attended.

Etc..
