
The road to Artificial Intelligence


Thomas Alva Edison

Jul 10, 2018, 9:28:05 AM

Prolog Class Signpost - American Style 2018
https://www.youtube.com/watch?v=CxQKltWI0NA

burs...@gmail.com

Jul 13, 2018, 7:08:06 PM
I wonder what happens when these little
JavaScript-controlled Prolog dogs grow up?

http://tau-prolog.org/examples/my-little-doge

Will they roam the internet via some Telescript
platform, as intelligent agents?

Alexa, please buy me some shoes, and whoosh,
a Prolog agent goes shopping.

burs...@gmail.com

Aug 6, 2018, 1:12:50 PM
We are still waiting for the greener pastures
that our heroes will lead us to. Where is the

Prolog-based AI beyond deep learning?

Where is Waldo Movie Trailer
https://www.youtube.com/watch?v=v2ALzU39LjE

BTW: Here is a collection of Prolog AI links:

Natural Language Processing Techniques in Prolog
http://cs.union.edu/~striegnk/courses/nlp-with-prolog/html/

Artificial Intelligence Techniques in Prolog
by Yoav Shoham - 2nd Edition, 12th May 2014
https://www.cs.cmu.edu/Groups/AI/lang/prolog/bookcode/aitp/

Etc.. Etc..

burs...@gmail.com

Aug 6, 2018, 1:34:38 PM
For basic training, still oldie but goldie:

P-99: Ninety-Nine Prolog Problems
Werner Hett, Berner Fachhochschule
https://sites.google.com/site/prologsite/prolog-problems

Michael Ben Yosef

Aug 7, 2018, 2:28:27 AM
On Monday, 6 August 2018 19:12:50 UTC+2, burs...@gmail.com wrote:
> Artificial Intelligence Techniques in Prolog
> by Yoav Shoham - 2nd Edition, 12th May 2014
> https://www.cs.cmu.edu/Groups/AI/lang/prolog/bookcode/aitp/

This is one of my favourite Prolog books, but I'm pretty sure there is no 2nd edition. It's 1st Edition, 1994.

burs...@gmail.com

Aug 7, 2018, 3:51:06 AM

burs...@gmail.com

Aug 7, 2018, 4:07:13 PM
BTW: That's some brilliant hogwash, nonsense
slogans such as "cyber-physical systems" and
"coinduction is crucial to AI's success":

Logic, Co-induction and Infinite Computation
https://www.youtube.com/watch?v=nOqO5OlC920

And now the obligatory Log-Nonsense-Talk
bashing. Why only a coinductive/1 directive?
Why is there no inductive/1 directive as well?

burs...@gmail.com

Aug 7, 2018, 4:14:51 PM
Or does a simple table/1 directive (without a
coinductive/1 directive) already have some of the effects

of a hypothetical inductive/1 directive? Nobody
knows? See also this question here:

tabling changing semantics?
https://groups.google.com/d/msg/swi-prolog/V8O8qMeQkEM/Np9pjTH6BwAJ

Ha Ha

burs...@gmail.com

Aug 7, 2018, 4:20:38 PM
Well, showing the source code, Dan, would be more
helpful. Possibly there is also some interaction with
negation, cut, var/1 or (==)/2, I guess...

burs...@gmail.com

Aug 9, 2018, 10:52:30 AM
My favorite use of the ('|')/2 operator
is lazy lists. Here is a little lazy-list
interpreter and an example:

take(0, _, R) :- !, R = [].
take(N, C, [H|R]) :-
    M is N-1,
    call(C, (H|T)),
    take(M, T, R).

fib(A, B, (C|fib(B,C))) :-
    C is A+B.

Credits: This is a stripped-down, modified version of
Markus Triska's prost, without using CLP(FD) for the elements.
Here is a sample run:

Welcome to SWI-Prolog (threaded, 64 bits, version 7.7.1)
SWI-Prolog comes with ABSOLUTELY NO WARRANTY. This is free software.

?- take(5,fib(0,1),L).
L = [1, 2, 3, 5, 8].

?- take(8,fib(0,1),L).
L = [1, 2, 3, 5, 8, 13, 21, 34].

https://www.metalevel.at/various/prost
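For readers who want to try the same lazy-list idea outside Prolog, here is a sketch in Python with generators (my own illustration, not part of prost or any Prolog system):

```python
from itertools import islice

def fib(a, b):
    """Lazy Fibonacci-style stream: each element is a + b,
    mirroring fib(A, B, (C|fib(B,C))) :- C is A+B."""
    while True:
        c = a + b
        yield c
        a, b = b, c

def take(n, stream):
    """Like take/3 in the post: materialize the first n elements."""
    return list(islice(stream, n))

print(take(5, fib(0, 1)))  # [1, 2, 3, 5, 8]
print(take(8, fib(0, 1)))  # [1, 2, 3, 5, 8, 13, 21, 34]
```

The generator plays the role of the (H|T) pair: the head is yielded, the tail is whatever the suspended generator produces next.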

burs...@gmail.com

Aug 9, 2018, 10:54:58 AM
Now, concerning coinductive definitions, we
can easily define an irrational binary sequence:

irr(N, (1|irr2(N,N))).

irr2(0, N, L) :- irr(s(N), L).
irr2(s(M), N, (0|irr2(M,N))).

And here is an example run:

Jekejeke Prolog 3, Runtime Library 1.3.0
(c) 1985-2018, XLOG Technologies GmbH, Switzerland

?- take(10,irr(0),L).
L = [1,1,0,1,0,0,1,0,0,0]

?- take(20,irr(0),L).
L = [1,1,0,1,0,0,1,0,0,0,1,0,0,0,0,1,0,0,0,0]

Source code and screenshot here:
https://gist.github.com/jburse/a723a1d42c114ff32c006e46a4e02450#gistcomment-2674460
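The same stream can be sketched with a Python generator (an illustration of the sequence only, not of the s(N) encoding above): the sequence is a 1 followed by n zeros, for growing n, so it never becomes periodic:

```python
from itertools import count, islice

def irr():
    """The irrational binary sequence from the post: a 1 followed
    by n zeros, for n = 0, 1, 2, ... The zero blocks keep growing,
    so the sequence is aperiodic."""
    for n in count():
        yield 1
        for _ in range(n):
            yield 0

print(list(islice(irr(), 10)))  # [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
```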

burs...@gmail.com

Aug 14, 2018, 2:31:43 PM
Nice one:

Textstory Generator in Prolog using STRIPS - Anne Ogborn, 2018
https://github.com/SWI-Prolog-Education/talespin-annie

burs...@gmail.com

Aug 17, 2018, 3:52:50 PM
The code does use dot notation. But not in many places,
not like the Prolog dicts that make up the domain
knowledge of a genre.

So it was feasible to only add Prolog dicts syntax
to our Prolog system, and then massage the code to
make it run in our system as well:

Preview: We can run Anne Ogborn's talespin in Prolog
https://plus.google.com/+JekejekeCh/posts/S8CEUA8Ltdf

Disclaimer: This is a preview of the upcoming
release 1.3.0. This Prolog dict syntax is new, not
yet released in any way.

burs...@gmail.com

Aug 18, 2018, 2:57:00 PM
One of the hassles of porting the SWI7 program to my
system was the use of strings. In the '90s my
second Prolog interpreter also had strings. You can
still find that interpreter on Mark Kantrowitz's AI

list as JB-Prolog. But I guess I managed to get
rid of strings now. The usual wisdom so far was to keep
an atom table and apply some atom-table garbage
collection. In Jekejeke Prolog there is simply

no central atom table, only local polymorphic
caches for calling predicates. For this test case:

between(1,127,A), between(1,127,B), between(1,127,C),
atom_codes(X,[A,B,C]), atom_codes(X, L), L\==[A,B,C].
https://stackoverflow.com/a/51911593/502187

I get: SWI atoms 1140 ms, SWI strings 749 ms, Jekejeke
atoms 1360 ms. So I guess there is some impact from strings.
Not yet sure whether I can bring my numbers down to
SWI7 strings. But it's not that dramatic. Maybe a better

solution than strings in SWI7 would be atoms that can
automatically also serve as strings, like in Jekejeke
Prolog. This would blow up the number of built-ins much
less and increase the portability of code a little bit.

burs...@gmail.com

Aug 18, 2018, 3:01:45 PM
The figures below are from JDK 8. Not
sure how they will look in
JDK 9. Java changed the implementation of
strings under the hood, by means of this:

http://openjdk.java.net/jeps/254

So the same benchmark might run slower
or faster, I dunno yet. With JDK 9 so far,
I had problems with a slow GC. But
JDK 10 and JDK 11 are already on the

horizon, so I guess I am lagging behind.

j4n bur53

Aug 18, 2018, 3:24:38 PM
More timings, which show that there is some
headroom for improvements and also worse results:

YAP 6.3.3, it even does some atom GC:

?- time(test).
% 0.437 CPU in 0.469 seconds ( 93% CPU)
no
?- findall(hit, current_atom(X), L), length(L, N),
write(N), nl, fail.
1255298

ECLiPSe Prolog, rather slow with atoms:

[eclipse 8]: test.
No (2.80s cpu)

burs...@gmail.com wrote:

burs...@gmail.com

Aug 21, 2018, 8:43:46 AM
Ulrich Neumerkel asked on SO:
Some Prolog systems even don't have an atom table
at all. Which Prolog are you referring to?

I could answer:
Jekejeke Prolog does not have a global atom table
that atom_codes/2 would need to consult. It has
only a predicate table, but not an atom table. Predicate
lookup is organized locally at the call site in the
polymorphic caches.

For polymorphic inline caching see also:
Urs Hölzle, Adaptive Optimization for SELF
https://pdfs.semanticscholar.org/78c1/ff396dce695a82bdc35a7bbdcba314a0f421.pdf
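To make the PIC idea concrete, here is a toy sketch in Python (names and structure are invented for illustration; real PICs live in compiled call-site stubs, not dictionaries):

```python
class CallSite:
    """Toy polymorphic inline cache: the call site remembers the
    targets it has already resolved, keyed by the receiver's type,
    so the slow generic lookup only runs on a cache miss."""
    def __init__(self):
        self.cache = {}
        self.misses = 0

    def call(self, receiver, name, *args):
        key = (type(receiver), name)
        target = self.cache.get(key)
        if target is None:          # slow path: resolve, then fill the cache
            self.misses += 1
            target = getattr(type(receiver), name)
            self.cache[key] = target
        return target(receiver, *args)

class Dog:
    def speak(self): return "woof"

class Cat:
    def speak(self): return "meow"

site = CallSite()
out = [site.call(x, "speak") for x in [Dog(), Dog(), Cat(), Dog()]]
print(out, site.misses)  # ['woof', 'woof', 'meow', 'woof'] 2
```

Only the first call per receiver type pays for the lookup; the site stays fast even when it is polymorphic over several types.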

burs...@gmail.com

Aug 21, 2018, 8:50:01 AM
That's also the reason why I consider Log-Nonsense-Talk
nonsense, and again I think OO dispatch should be
implemented with the ordinary ISO module standard

system: by using reexport/1 for the IS-A hierarchy, and
a Pythonesque OO-call schema. That's really the easiest
way to add OO to a Prolog system. You can still do some

Log-Nonsense-Talk for other stuff, like for example
coinduction. But OO calls can be solved at a much lower
level. Although there is disagreement whether the

PIC solution is the best solution. For example, the
Go programming language does something slightly
different. See also here:

Go Data Structures: Interfaces
Posted on Tuesday, December 1, 2009.
https://research.swtch.com/interfaces

But I don't know whether the Go idea is applicable;
we don't have such type casts in Prolog, which would
signal: hey, now we want to use this and that protocol.

It would have nice application scenarios for maplist
and foldl, for example. But I am not unhappy with how
maplist and foldl, respectively call/N, perform with PIC.

The performance drain is somewhere else; I guess the
reference counting is not as cheap as I thought, but
there are ideas to make it cheaper...

burs...@gmail.com

Aug 21, 2018, 11:50:33 AM
Ulrich Neumerkel was suspicious:
So the unification of two atoms is not a constant operation. Yes?

Here is my answer:

Atom unification is not needed in predicate calls. When a predicate
is called, we already know that the functor atoms of the clauses are
the same as the functor atom of the goal that invokes the
predicate. Otherwise, the comparison of two atoms is the same as the
comparison of two integers. When the integers are bignums, the effort
also increases linearly.

On the other hand, the hash value of an atom is cached inside the
atom. So clause indexing (which is not the same mechanism as
predicate calls, as far as the lookup of matching clauses is
concerned) is not linear in the size of the atom, but just
amortized O(1). The computation of the hash value is done by Java,
in the String class.
https://stackoverflow.com/questions/785091/consistency-of-hashcode-on-a-java-string

So the bottom line is: you could have 128 GB strings, no
need for an atom table. The hash code calculation, as
already explained, that is needed for clause indexing
is also computed outside of an atom table. BTW: bignum
integers also have such a hash code. It is also computed
inside the object and then used for clause indexing. So
basically we try to reuse proven ideas from Java as much
as possible for the Prolog interpreter.
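The hash-caching trick can be sketched in Python with a hypothetical Atom class (illustration only; java.lang.String does essentially the same with its private hash field):

```python
class Atom:
    """Hypothetical atom: equality is by content (linear in the worst
    case, like comparing two bignums), but the hash is computed once
    and then cached, so repeated clause-index lookups are amortized
    O(1) even for very long atom names."""
    __slots__ = ("name", "_hash")

    def __init__(self, name):
        self.name = name
        self._hash = None

    def __eq__(self, other):
        return isinstance(other, Atom) and self.name == other.name

    def __hash__(self):
        if self._hash is None:      # computed lazily on first use
            self._hash = hash(self.name)
        return self._hash

a = Atom("x" * 1_000_000)
h1 = hash(a)      # pays the linear cost once
h2 = hash(a)      # served from the cache
print(h1 == h2)   # True
```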

burs...@gmail.com

Aug 21, 2018, 11:58:19 AM
Oops, the current implementation of BigInteger
doesn't cache the hash code anymore...

Oho.

Whoa, that's what happens when you depend on some host
language. Need to check what's going on...

burs...@gmail.com

Aug 21, 2018, 12:30:19 PM
Ok, I am about to fix the BigInteger thingy.
But concerning Ulrich Neumerkel's suspicion:

Well, it takes the same time as if you used the
string data type in SWI-Prolog. The SWI-Prolog data
type string is, I assume, not in the atom table. So if
you compare two strings, a hash value would give a
first estimate of whether the strings are the same. But
then you still need to compare the whole strings,
which takes linear time. So what I am discussing
here is strings, and the effort for comparison is the
same in Jekejeke Prolog, where atoms work as
strings, and in SWI-Prolog.

For those atoms which are not used as strings, I
guess some improvement can be developed as well
that goes beyond normal string comparison, since
from the PIC we might also get some additional
information for "records". I have not yet implemented
such an algorithm.

burs...@gmail.com

Aug 21, 2018, 1:59:58 PM
Ulrich Neumerkel, the test case doesn't
use atom unification.

between(1,127,A), between(1,127,B), between(1,127,C),
atom_codes(X,[A,B,C]), atom_codes(X, L), L\==[A,B,C].
https://stackoverflow.com/a/51911593/502187

It does not even have unification; there is
a syntactic equality test. I dunno what atom unification
is supposed to be. When atoms participate in a unification,

they unify if they are equal. It's the same
equality as in syntactic equality. But the above
test case does not even test

syntactic equality of atoms, since the lists
L and [A,B,C] are lists of codes. So it's a special
test for atom allocation and garbage collection,
nothing to do with syntactic equality.

On Saturday, 18 August 2018 at 20:57:00 UTC+2, burs...@gmail.com wrote:

burs...@gmail.com

Aug 21, 2018, 3:24:07 PM
Well, you cannot make atom_codes/2 a constant
operation if the built-in involves some translation
from codes to an atom via an atom table.

So you have a choice. Either you go without
garbage collection for the atom table, and your
system will crash. Here is the same test case

in GNU Prolog:

GNU Prolog 1.4.5 (64 bits)
Copyright (C) 1999-2018 Daniel Diaz
?- between(1,127,A), between(1,127,B), between(1,127,C),
atom_codes(X,[A,B,C]), atom_codes(X, L), L\==[A,B,C].

Crash dialog:
Atom table full (max atom: 32768, environment
variable used: MAX_ATOM)

Or you add garbage collection to your atom table, and
hope that maybe a slightly faster atom unification
outbalances the additional effort of the garbage collection.

But what should be the criteria for this garbage collection?
Here YAP leaves a lot of atoms in the atom table. Why
did it not reclaim more atoms?

> YAP 6.3.3, it even does some atom GC:
>
> ?- time(test).
> % 0.437 CPU in 0.469 seconds ( 93% CPU)
> no
> ?- findall(hit, current_atom(X), L), length(L, N),
> write(N), nl, fail.
> 1255298
https://groups.google.com/d/msg/comp.lang.prolog/vu0_zSd6wdU/J_1k4EZRAAAJ

Well, in my atom-table-less approach I also
rely on garbage collection. But it is not
a special atom-table garbage collection, but

the general garbage collection of Java. The
atom removal criterion is thus Java reachability
of the atom. So if you remove a clause, and the

atom was used in this clause, and the atom is
not used elsewhere, the atom goes away anyway
sooner or later. The missing pointer equivalence

is compensated a little bit through the hashing in
clause indexes. Maybe this can be improved. Maybe
there is also a problem with my big numbers, in that

they don't cache the hash code in the object themselves.
But I'll check my clause indexing again. Actually I
have two types of clause indexing:

Type 1: For small bouquets I don't use
hashing, just linear search. So for example
in this code:

factorial(0, R) :- !, R = 1.
factorial(N, X) :- N > 0, M is N-1,
    factorial(M, Y), X is N*Y.

the index used would not be a hash index,
just a scan index, so the hash code of a
bignum argument would not be needed.

Type 2: Well, these are the usual Java hash-code
based hash tables, with a little murmur
scrambler to add a little variety.

Type 3: Many Prolog systems have a variety
of index types beyond Type 1 and Type 2. Also,
there the dynamics of strings and atoms, and
also big numbers, can be totally different.
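A toy sketch of the Type 1 / Type 2 distinction in Python (the SCAN_LIMIT threshold and the class names are invented for illustration):

```python
SCAN_LIMIT = 4   # hypothetical cutoff between scan index and hash index

class FirstArgIndex:
    """Toy first-argument clause index: a plain list scan while the
    predicate is small (Type 1), switching to a hash table keyed on
    the first argument once it grows (Type 2)."""
    def __init__(self):
        self.clauses = []
        self.table = None

    def add(self, first_arg, clause):
        self.clauses.append((first_arg, clause))
        if self.table is not None:
            self.table.setdefault(first_arg, []).append(clause)
        elif len(self.clauses) > SCAN_LIMIT:
            # grew past the limit: build the hash index once
            self.table = {}
            for arg, cl in self.clauses:
                self.table.setdefault(arg, []).append(cl)

    def lookup(self, first_arg):
        if self.table is None:                    # Type 1: linear scan
            return [cl for arg, cl in self.clauses if arg == first_arg]
        return self.table.get(first_arg, [])      # Type 2: hash lookup

idx = FirstArgIndex()
for n in range(10):
    idx.add(n % 3, f"clause{n}")
print(idx.lookup(0))  # ['clause0', 'clause3', 'clause6', 'clause9']
```

With a scan index no hash codes are needed at all, which is why small predicates like factorial/2 never pay for bignum hashing.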

burs...@gmail.com

Aug 21, 2018, 4:53:19 PM
You can of course also test _chars instead of
_codes. But frankly, I never ever use _chars; I
don't know why they exist.

With _chars the test cases are a little bit
more complicated, but you can still test
the same thing:

test3 :-
    between(1, 127, P),
    char_code(A, P),
    between(1, 127, Q),
    char_code(B, Q),
    between(1, 127, R),
    char_code(C, R),
    atom_chars(X, [A,B,C]),
    atom_chars(X, L),
    L \== [A,B,C].

test4 :-
    between(1, 127, P),
    char_code(A, P),
    between(1, 127, Q),
    char_code(B, Q),
    between(1, 127, R),
    char_code(C, R),
    string_chars(X, [A,B,C]),
    string_chars(X, L),
    L \== [A,B,C].

Here is a comparison. In SWI7, when atoms are used, the
CPU is not 100%, I guess this is some GC:

SWI7, which has strings:
atom_codes
% 8,209,791 inferences, 1.125 CPU in 1.140 seconds (99% CPU, 7297592 Lips)
string_codes
% 8,209,791 inferences, 0.750 CPU in 0.749 seconds (100% CPU, 10946388 Lips)
atom_chars
% 10,274,430 inferences, 1.219 CPU in 1.343 seconds (91% CPU, 8430302 Lips)
string_chars
% 10,274,430 inferences, 0.922 CPU in 0.924 seconds (100% CPU, 11145144 Lips)

Jekejeke Prolog:
atom_codes
% Up 1,398 ms, GC 14 ms, Thread Cpu 1,360 ms (Current 08/18/18 20:35:56)
atom_chars
% Up 1,886 ms, GC 21 ms, Thread Cpu 1,813 ms (Current 08/21/18 22:40:59)

YAP Prolog:
atom_codes
% 0.312 CPU in 0.312 seconds (100% CPU)
atom_chars
% 0.468 CPU in 0.469 seconds ( 99% CPU)

More inferences give more runtime, probably too much
extra in the case of Jekejeke Prolog. Not sure whether I
am out of band. Let's compare the chars/codes ratio:

SWI7 ratio: 1343 / 1140 = 1.178
            924 / 749 = 1.234
Jekejeke Prolog ratio: 1813 / 1360 = 1.333
YAP ratio: 468 / 312 = 1.500

Disclaimer: This is a preview of release 1.3.0. Things might
even change for the good or the bad in the future...

burs...@gmail.com

Aug 21, 2018, 4:59:33 PM
ECLiPSe Prolog results:
atom_codes
No (2.80s cpu)
atom_chars
No (3.38s CPU)

ECLiPSe Prolog ratio: 3380 / 2800 = 1.207

burs...@gmail.com

Aug 22, 2018, 12:53:48 PM
The OO system in the polymorphic inline caching
paper, the SELF programming language and system,
is very similar to a traditional Prolog implementation.

From the paper:

"SELF uses a tagged object representation with the
two lower bits as the tag. Dispatch tests involving
integers, as well as integer arithmetic operations,
test their argument’s tag using an and instruction;
similarly, the runtime system (e.g., the garbage collector)
often extracts an object’s tag. Together, these and
instructions account for about 25% of the logical
instructions. We are unable to explain the remaining
difference between SELF and the C programs because no
detailed data on C’s use of logical instructions
was available."

I guess this 25% overhead was eliminated in Java,
in that it introduced primitive integers and
non-primitive integers. On the other hand, if I am not
totally wrong, some of the PIC design went into the
HotSpot JIT from Sun.

But there is a tendency now for other compiler
designs from JIT to AOT:

Ahead-of-time compiling for dynamically typed
languages to native machine code or other static
VM bytecode is possible in a limited number of cases
only.[citation needed] For example, the High
Performance Erlang Project (HiPE) AOT compiler
for the language Erlang can do this because
of advanced static type reconstruction techniques
and type speculations.

In most situations with fully AOT compiled programs
and libraries, it is possible to drop a useful
fraction of a runtime environment, thus saving
disk space, memory, battery life, and startup times
(no JIT warmup phase), etc. Because of this, it can
be useful in embedded or mobile devices.

But the dimension JIT/AOT is orthogonal to
the dimension Log-Nonsense-Talk/ISO module system.
Using reexport/1 as the IS-A relation and a
Pythonesque (::)/2 operator does not

necessarily imply that a JIT technique needs to
be applied. In Jekejeke Prolog I use JIT and no
AOT at the moment. I don't even know how I could
replicate the Erlang stuff in my system.

A further dimension that is completely
orthogonal is all the gimmicks of visibility,
local modules, etc. Local modules can be done
either in Log-Nonsense-Talk or in ISO modules;

in Jekejeke Prolog I have demonstrated how
to do it in ISO modules. All the additional visibility
rules work so far. There is no runtime overhead
because of the more complex visibility rules,

since the PICs cache the visibility resolution.
I use PICs with negative and positive caching.
So even running against a forbidden call
multiple times in a loop is fast! Ha Ha

It would be interesting to try something AOT
in the future, but JIT is surely simpler to
implement. For Prolog it is advantageous if the
JIT is combined with truth maintenance,

so the negative and positive local caches
need to be invalidated appropriately when the
module loader changes the module network. An
atom table is quite useless for visibility rules,

because the same "atom", when combined into
an indicator, i.e. functor and arity, inside some
context, i.e. a module, might have a totally
different visibility depending on the module

network. So atom tables are automatically a
dead end for OO systems that also have visibility.
But the following dimensions are all orthogonal:

- JIT vs AOT, the runtime/compilation technique
- Log-Nonsense-Talk vs ISO modules, the surface syntax
- Visibility/local modules, additional language features
- What else?

Best Regards

On Tuesday, 21 August 2018 at 14:43:46 UTC+2, burs...@gmail.com wrote:
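The reexport/1 IS-A dispatch with positive and negative caching can be sketched in Python (a toy model with invented names; a real system would also invalidate these caches under truth maintenance when the module network changes):

```python
class Module:
    """Toy module: local predicates plus a reexport chain as IS-A."""
    def __init__(self, preds, reexports=()):
        self.preds = preds
        self.reexports = list(reexports)
        # name -> callable (positive entry) or None (negative entry)
        self.cache = {}

    def resolve(self, name):
        if name in self.cache:            # hit, whether positive or negative
            return self.cache[name]
        target = self.preds.get(name)
        if target is None:
            for parent in self.reexports:  # walk the IS-A chain
                target = parent.resolve(name)
                if target is not None:
                    break
        self.cache[name] = target          # cache misses too
        return target

animal = Module({"speak": lambda self_: "some noise"})
dog = Module({"bark": lambda self_: "woof"}, reexports=[animal])

print(dog.resolve("speak")(dog))   # inherited via reexport
print(dog.resolve("fly"))          # miss, now negative-cached: None
```

Because the miss is cached too, repeatedly calling a forbidden or missing predicate in a loop never walks the module network again.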

burs...@gmail.com

Aug 22, 2018, 1:03:01 PM
Sorry, this should have been quoted
and given a reference:

"Ahead-of-time compiling for dynamically typed
languages to native machine code or other static
VM bytecode is possible in a limited number of cases
only.[citation needed] For example, the High
Performance Erlang Project (HiPE) AOT compiler
for the language Erlang can do this because
of advanced static type reconstruction techniques
and type speculations.

In most situations with fully AOT compiled programs
and libraries, it is possible to drop a useful
fraction of a runtime environment, thus saving
disk space, memory, battery life, and startup times
(no JIT warmup phase), etc. Because of this, it can
be useful in embedded or mobile devices."
https://en.wikipedia.org/wiki/Ahead-of-time_compilation

I guess a simple AOT technique could be to
number the methods and pre-allocate some V-tables
for each module, whereby there is really no

difference between class and module, and then
implement dynamic dispatch as a table-index
lookup, like in a simple C++-to-C translation,

but this would possibly forbid truth maintenance.
Not sure. The problem is that there was never
research flowing into an OO-WAM. What was done

was some research on a structured WAM. But instead
of a Pythonesque OO-WAM, the idea there is that
some context is shifted around,

which is much more than a Pythonesque OO-WAM would
demand. I remember a couple of such talks; an
example is this paper:

An extended Warren abstract machine for the
execution of structured logic programs
The Journal of Logic Programming
Volume 14, Issues 3–4, November 1992, Pages 187-222
https://www.sciencedirect.com/science/article/pii/074310669290011Q

Also, Log-Nonsense-Talk has a somewhat more
complicated model than the simple Pythonesque
self parameter and call mechanism through (::)/2,

which is so easily supported by the ISO module
standard and its reexport/1 semantics.

burs...@gmail.com

Aug 22, 2018, 4:31:27 PM
Ok, I found an old paper with some HiPE benchmarks.

HiPE: High Performance Erlang
Technical Report October 1999 ASTEC 99/04
https://pdfs.semanticscholar.org/5e9f/85c08b00cef939c100f453ea0484fa7f5e35.pdf

They tested 50 x fib(30); nowadays with Prolog I get:

YAP 6.3.3:
?- time(test).
% 10.546 CPU in 10.558 seconds ( 99% CPU)
no

SWI 7.7.18:
?- time(test).
% 201,940,303 inferences, 16.188 CPU in 16.468 seconds (98% CPU, 12475077 Lips)
false.

GNU Prolog 1.4.5:
Crash (sic!)

Source code:
https://gist.github.com/jburse/24b629f6ba6d1577fc3788dd744e9d28#file-fib-p

burs...@gmail.com

Aug 22, 2018, 9:04:10 PM
Quiz: Which key here represents the current
*compilation unit* (a term from the Java world, BTW)?
Can this be determined simply?

prolog_load_context(?Key, ?Value)
http://www.swi-prolog.org/pldoc/man?predicate=prolog_load_context/2

For the top-level I get:

Welcome to SWI-Prolog (threaded, 64 bits, version 7.7.18)
SWI-Prolog comes with ABSOLUTELY NO WARRANTY. This is free software.

?- prolog_load_context(X,Y), write(X-Y), nl, fail.
module-user
dialect-swi
script-false
variable_names-[]
term-[]
false.

Which one is the compilation unit? Inside a user
module, i.e. a Prolog text without a module name,
I then get this:

file-c:/users/jan burse/desktop/usertest.pl
source-c:/users/jan burse/desktop/usertest.pl
stream-<stream>(0000023200B01B50)
directory-c:/users/jan burse/desktop
dialect-swi
term_position- $stream_position(0,1,0,0)
script-false
@(variable_names-S_1,[S_1=[X=variable_names,Y=S_1]])
term-[]

Nice use of rational terms for variable_names!

burs...@gmail.com

Aug 27, 2018, 1:10:58 PM
In JDK 8, according to JEP 192, there
is a flag for showing deduplication statistics:

PrintStringDeduplicationStatistics

I will try to check what the thingy is doing. Maybe
the additional JIT/index idea could still be
needed, since it would have additional benefits:

- It would also eliminate the object gutter: according
to the JEP 192 docs, "When deduplicating, a lookup is
made in this table to see if there is already an
identical character array somewhere on the heap.", so
only the character array is deduplicated.

- It could also be applied to other ground data, for
example big integers, and maybe also compound data
that is ground. For high-frequency assert/retract,
probably only a less far-reaching deduplication
during indexing would be appropriate.

On Tuesday, 21 August 2018 at 19:59:58 UTC+2, burs...@gmail.com wrote:
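A toy sketch, in Python, of the more far-reaching deduplication suggested above (canonicalizing whole ground values rather than only the character arrays inside strings; all names are invented):

```python
class DedupTable:
    """Toy deduplication table: equal values are mapped to a single
    canonical representative, so duplicates can be dropped. JEP 192
    only shares the character array inside equal Strings; this sketch
    canonicalizes the whole (hashable, ground) value instead."""
    def __init__(self):
        self.table = {}
        self.hits = 0

    def intern(self, value):
        canonical = self.table.get(value)
        if canonical is not None:
            self.hits += 1
            return canonical
        self.table[value] = value
        return value

dedup = DedupTable()
a = dedup.intern(("point", 1, 2))
b = dedup.intern(("point", 1, 2))   # structurally equal ground term
print(a is b, dedup.hits)           # True 1
```

A real implementation would use weak references so the table itself does not keep dead terms alive.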

burs...@gmail.com

Sep 2, 2018, 3:54:50 PM
We got our head around implementing a first stride
of function expansion. Rest expansion now stands
for any expansion that is not term or goal expansion.
We had to adapt the module "expand", which features the

traditional term and goal expansion, and added a rest
expansion framework. We added a rest expansion usage
to the module "dict". This rest expansion will now
pre-sort Prolog dicts by their keys:

?- X = point{x:1, y:2}.
X = point{x:1,y:2}

?- X = point{y:2, x:1}.
X = point{x:1,y:2}

To make the dicts fly as in SWI-Prolog 7 we also
need to implement the dot operator. This operator
will require an even more advanced expansion, which
should allow returning new goals for rest arguments.

After this has all been done, i.e. the dot operator
is also there, we will try to run Anne Ogborn's talespin
example again, this time without any changes in the
dicts, only changes in the strings and the randomness.

See also:
Preview: New function expansion to pre-sort Prolog dicts. (Jekejeke)
https://plus.google.com/+JekejekeCh/posts/VGQZU4HCPH9

What was implemented is from here:

SWI-Prolog version 7 extensions
http://www.swi-prolog.org/download/publications/swi7.pdf

Only the dicts and the functional notation will
be adopted. The strings will not be adopted. The
functional notation will also only be partially adopted;
not yet sure whether we will also introduce

a fun() syntax. The idea is to stay completely
within the ISO core standard framework. This means, for
example, that our current dicts are not really a new
Prolog term category. A term of the form:

point{x:1,y:2}

Is nothing else than:

Jekejeke Prolog 3, Runtime Library 1.3.0
(c) 1985-2018, XLOG Technologies GmbH, Switzerland

?- use_module(library(advanced/dict)).
% 1 consults and 0 unloads in 15 ms.
Yes

?- X = point{x:1,y:2}, write_canonical(X), nl.
sys_struct(point,','(:(x,1),:(y,2)))
X = point{x:1,y:2}

The above is the so-called uncompressed representation.
In this representation not a single flat compound
is used, but multiple compounds, as is the
case for a Prolog list.

On Friday, 17 August 2018 at 21:52:50 UTC+2, burs...@gmail.com wrote:
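The pre-sorting expansion can be sketched in Python, with a dict term modeled as a tag plus key/value pairs (illustration only, not the actual sys_struct representation):

```python
def normalize_dict(tag, pairs):
    """Expansion-time normalization as in the post: sort the
    key:value pairs of a dict term on their keys, so syntactically
    different writings of the same dict get one canonical form."""
    return (tag, tuple(sorted(pairs, key=lambda kv: kv[0])))

d1 = normalize_dict("point", [("x", 1), ("y", 2)])
d2 = normalize_dict("point", [("y", 2), ("x", 1)])
print(d1 == d2)  # True
```

After normalization, plain structural equality (and hence Prolog unification on the canonical term) identifies the two writings.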

burs...@gmail.com

Sep 2, 2018, 4:18:19 PM
Disclaimer: We still think that dicts are not
an extremely clever thing. They help neither
unification nor indexing, so they are kind of

alien to Prolog. We do it more on the grounds
that it gives us a nice use case to introduce
function expansion in all its glory. Function

expansion, and the still-to-be-implemented feature
that goals can be generated (which has its
equivalent in that for goals auxiliary terms can
be generated), has some other interesting use cases.

- Compilation of arithmetic expressions
- RDF, normalization of tuple arguments
- CAS, de-normalization of mathematical formulas
- SICStus/ECLiPSe loop compilation
- Novel or legacy grammar expansions?
- Novel or legacy OO-system expansions?
- What else?

So it's only a testbed and a prototype. So
far rest expansion doesn't slow down consult
extremely; we saw around 10-20% so far. On an
Android device it will probably be more. Not yet

sure; it is desired that consult times stay low!

burs...@gmail.com

Sep 2, 2018, 8:49:03 PM
Yeah, open dicts could be also fun:

Introduce sideways open dicts
torbjornlager opened this Issue on 6 Nov 2016
https://github.com/SWI-Prolog/roadmap/issues/50

burs...@gmail.com

Sep 2, 2018, 9:25:20 PM
This is also fun:

{| type ||
name: String!
age: Integer
books(favourite: Boolean): [Book]
friends: [Person]
|},

Source:
Implementing GraphQL as a Query Language for
Deductive Databases in SWI–Prolog Using DCGs,
Quasi Quotations, and Dicts
Falco Nogatz Dietmar Seipel
https://arxiv.org/pdf/1701.00626.pdf

This can easily be replaced by a
function expansion that parses:

type(`name: String!
age: Integer
books(favourite: Boolean): [Book]
friends: [Person]`)

One only needs strings that can continue on
new lines, and access to variable names.
Nevertheless, for SWI7 compatibility we could

also introduce their syntax variant...

burs...@gmail.com

Sep 3, 2018, 7:08:21 PM
We have overcome the next hurdle towards
functions on Prolog dicts. The new function
expansion now allows returning
side conditions.

With the help of this additional feature of
function expansion we could already implement
a first prototype of functions on
Prolog dictionaries.

The dot syntax is not yet available; it will
have a special functor sys_dot/2 to which we
will alias ('.')/2. So we made a prototype
with the operator ($)/2. Field access
already works:

?- P = point{x:1,y:2}, X = P$x, Y = P$y.
P = point{x:1,y:2},
X = 1,
Y = 2

?- P = point{x:1,y:2}, V = P$K.
P = point{x:1,y:2},
V = 1,
K = x ;
P = point{x:1,y:2},
V = 2,
K = y

See also:

Preview: New module "func" for functions on Prolog dicts. (Jekejeke)
https://plus.google.com/+JekejekeCh/posts/UP1gL3MakJk
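The two queries above can be mimicked in Python by modeling ($)/2 as a generator, so that an unbound key enumerates all key/value pairs (an illustration with invented names, not the actual module "func"):

```python
def dollar(dict_term, key=None):
    """The ($)/2 access from the post as a generator: with a concrete
    key it yields that binding; with key=None (an unbound K) it
    enumerates every key/value pair, like the second query."""
    tag, pairs = dict_term
    for k, v in pairs:
        if key is None or k == key:
            yield k, v

p = ("point", [("x", 1), ("y", 2)])
print(list(dollar(p, "x")))   # [('x', 1)]
print(list(dollar(p)))        # [('x', 1), ('y', 2)]
```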

burs...@gmail.com

Sep 4, 2018, 6:41:12 AM
We already fixed some sins of SWI-Prolog 7 concerning
its dot notation. The question is where to put a
side condition C. For goals G, the natural choice is
to prepend the side condition C, and get (C,G).

But how about terms? Terms are facts and rules. What
happens if we have a term T and then a side condition
C? What should we do? Here a natural choice is to build
an implication of the form (T:-G).

This is also what SWI-Prolog 7 uses for facts. But
in my opinion it goes wrong for rules. Here is my
own take on this matter. For the running example,
we still work with ($)/2 as the dot operator:

?- [user].
p(X, X$k).
p(X, Y$k) :- q(X, Y).

Now with some simplification-pipeline magic we get:

?- listing.
p(X, A) :-
    $(X, k, A).
p(X, A) :-
    q(X, Y),
    $(Y, k, A).

This makes much more sense than the SWI-Prolog 7
translation of dot notation. I opened an issue:

Wrong functions on dicts translation when dicts in head
https://github.com/SWI-Prolog/swipl-devel/issues/329

So what magic does the simplification pipeline
do for us? Well, we have implemented some (:-)/2 flattening.
From category theory, or from ordinary first-order
logic, the following equivalence is well known:

A -> (B -> C) <=> A /\ B -> C

So we implemented this simplification:

/* (:-)/2 flattening */
term_simplification(((A :- B) :- C), (A :- H)) :-
    simplify_goal((C, B), H).
term_simplification((C :- true), C).
https://github.com/jburse/jekejeke-devel/blob/master/jekrun/headless/jekpro/frequent/experiment/simp.p#L100

Which does the job. Cool!
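The flattening rule can be sketched in Python over tuple-encoded terms (an illustration; the real code additionally runs simplify_goal over the resulting conjunction):

```python
def flatten_rule(term):
    """(:-)/2 flattening from the post: ((A :- B) :- C) becomes
    (A :- (C, B)), mirroring A -> (B -> C)  <=>  A /\ B -> C,
    and (C :- true) collapses to the fact C. Terms are tuples:
    (":-", Head, Body) and (",", Goal1, Goal2)."""
    if term[0] == ":-" and isinstance(term[1], tuple) and term[1][0] == ":-":
        inner_head, inner_body = term[1][1], term[1][2]
        side = term[2]
        # side condition first, as in simplify_goal((C, B), H)
        return (":-", inner_head, (",", side, inner_body))
    if term[0] == ":-" and term[2] == "true":
        return term[1]
    return term

rule = (":-", (":-", "p", "q"), "c")     # ((p :- q) :- c)
print(flatten_rule(rule))                # (':-', 'p', (',', 'c', 'q'))
print(flatten_rule((":-", "p", "true"))) # 'p'
```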

burs...@gmail.com

Sep 4, 2018, 9:47:03 AM
Corr.: Typo
But how about terms. Terms are facts and rules. What
happens if we have a term T, and then a side condition
C. What should we do. Here a natural choice is to build
a implication of the form (T:-C).

Transfinite Numbers

Oct 7, 2019, 7:13:40 PM
Nice one!

Game Line Drawing Recognition via CHR - A. Ogborn et al, 2019
https://ldjam.com/events/ludum-dare/45/three-little-pigs

CHR is done in a SWI server and not on the client side.
But that is also a little anachronistic, given:

No #cloud required: Why AI’s future is at the edge
https://siliconangle.com/2019/05/26/no-cloud-required-ais-future-edge/

Mostowski Collapse

Dec 20, 2019, 10:43:15 AM
We might have Prolog running in some
IKEA device? What's the driving factor
of such devices, do people not anymore

build analogue audio amps and stuff!
It's easier to multiplex from different
sources to different sinks digitally?

IKEA's Sonos Speaker Has a Secret
https://www.youtube.com/watch?v=ZB413S8KDmo

Mostowski Collapse

Jan 14, 2020, 7:11:17 PM
While looking for an IoT fund, I nearly
got carried away by Yewno Nasdaq:

#TradeTalks: Ruggero Gramatica
https://www.youtube.com/watch?v=6tui_g8KEXg

I dunno, Rick, this looks fake. Is a knowledge
graph a mathematical model? LoL

Mostowski Collapse

Jan 14, 2020, 7:24:03 PM
Disclaimer: The signatory gives no warranty,
express or implied, as to description, quality
productiveness or any other matter of any
gibe, rant or opinion posted on usenet.

https://www.jstor.org/stable/23430554

j4n bur53

Jan 17, 2020, 7:51:58 AM
2020, the age of AI trolls:

"Facebook AI has built the first AI system that can solve advanced
mathematics equations using symbolic reasoning."

https://ai.facebook.com/blog/using-neural-networks-to-solve-advanced-mathematics-equations/

What's so "first" about this? Is this a joke?

Mostowski Collapse schrieb:

R Kym Horsell

Jan 19, 2020, 12:35:29 PM
j4n bur53 <janb...@fastmail.fm> wrote:
> 2020, the age of AI trolls:
> "Facebook AI has built the first AI system that can solve advanced
> mathematics equations using symbolic reasoning."
> https://ai.facebook.com/blog/using-neural-networks-to-solve-advanced-mathematics-equations/
> Whats so "first" about this. Is this a joke?

One version of the story has "using neural networks" rather than
"symbolic reasoning" which may be "more true". :)

Someone at YCombinator has a similar take to you on this.

But it is an interesting idea.

I did a lil project a few years back at kaggle on doing multiple-choice
grade school science exams using word2vec methods. Worked out quite well.
The idea is to map certain words to vectors (or some other arbitrary
mathematical object :) so that deciding which answer A..E is "closest"
to the correct answer becomes a simple numerical calculation rather than
parsing things and looking up databases.

In this case the authors mapped integration and solving DE's to vector
calculations in some way so the solution for a particular problem
is pretty efficient but calculating the mapping from examples is probably
a bit of a chore. :)

> Mostowski Collapse schrieb:
>> Disclaimer: The signatory gives no warranty,
>> express or implied, as to description, quality
>> productiveness or any other matter of any
>> gibe, rant or opinion posted on usenet.
>> https://www.jstor.org/stable/23430554
>> Am Mittwoch, 15. Januar 2020 01:11:17 UTC+1 schrieb Mostowski Collapse:
>>> While looking for an IoT fund, I nearly
>>> got carried away by Yewno Nasdaq:
>>> #TradeTalks: Ruggero Gramatica
>>> https://www.youtube.com/watch?v=6tui_g8KEXg
>>> I dunno rick, this looks fake. Is a knowledge
>>> graph a mathematical model? LoL

--
Identify pople(sic) who have a high degree of Psychopathy based on Twitter usage.
The aim of the competition is to determine to what degree it's
possible to predict pople(sic) with a sufficiently high degree of
Psychopathy based on Twitter usage and Linguistic Inquiry.
The organizers provide all interested participants an anonymised
dataset of users self assessed psychopathy scores together with 337
variables derived from functions of Twitter information, useage and
lingusitc analysis. Psychopathy scores are based on a checklist
developed by Professor Del Paulhus at the University of British Columbia.
The model should aim to identify pople(sic) scoring high in Psychopathy,
for the purpose of this competition, defined as 2 SD's above a mean of
1.98. This accounts for roughly 3% of the entire sample and therefore
the challenge with this dataset is developing a model to work with a
highly imbalanced dataset.
The best performing model(s) will be formally cited in a future
paper/papers. The authors of the winning model may also be invited to
attend future conferences to discuss their model.
-- http://www.kaggle.com/c/twitter-psychopathy-prediction
[Final results:
(/1472)
#1 y_tag .86997
#2 Bruce Cragin .86745
#3 Indy Actuaries .86700
#4 jontix .86697
#5 JKARP .86683
#6 redjava .86656
#7 YaTa .86651
#8 Killian O'Connor .86649
#9 jjby .86621
#10 JGrow .86621
The scoring metric is "average precision". A result close to 1 indicates
the predicted ordering of subjects by their psychopathy score is very close
to the true ordering.]

Mostowski Collapse

Jan 20, 2020, 6:31:06 PM
How do you classify this:

"We all know the best opportunities to see
wildflowers come while on the road. Whether
along an interstate highway or a remote
country road, flowers of all colors and shapes
are there to add beauty to our trip.
Unfortunately, most wildflower field
guides are nearly useless for roadside
flower viewing, written for the eccentric
botanical enthusiast who wanders slowly
through prairies, stooping low to determine
whether the sepals of a flower are hispid
or hirsute.

This book is written for the silent majority
of people who have important places to go,
but want to enjoy and learn about nature
as they travel. What good is a field guide
that relies upon the characteristics of
tiny hairs or even minute differences in
leaf or petal shape when a flower is seen
from a car traveling 70 miles per hour?
The world desperately needs a guide that
illustrates and identifies characteristics
of wildflowers as most people actually
experience them. This is that guide."

https://prairieecologist.com/2020/01/13/finally-a-practical-guide-for-roadside-wildflower-viewing/

By way of twitter.

Mostowski Collapse

Jan 20, 2020, 6:33:38 PM

Mostowski Collapse

Jan 20, 2020, 6:40:10 PM
This roadside wildflower guide has some
resemblance to the IBM Cloud Pak videos

that are currently flooding YouTube.
And they are collaborating with whom,

lightbend, red hat, your mother, his
uncle, the pope, a cat and a camel.

R Kym Horsell

Jan 21, 2020, 6:08:20 AM
>...

Definitely category 27.76.

Mostowski Collapse

Feb 18, 2023, 6:28:47 AM
Don't buy your pearls in Hong Kong. They are all fake.

So what do you prefer, this Haskell monster:
https://www.cs.nott.ac.uk/~pszgmh/countdown.pdf

Or this elegant Prolog code, less than half a page:

% solve(+Integer, -Term, -Integer, +List, -List)
solve(1, N, N, P, Q) :- !, select(N, P, Q).
solve(K, G, N, P, Q) :-
    J is K-1,
    between(1, J, I),
    L is K-I,
    solve(I, E, A, P, H),
    solve(L, F, B, H, Q),
    combine(E, A, F, B, G, N).

combine(E, A, F, B, E+F, N) :- A =< B, N is A+B.
combine(E, A, F, B, E-F, N) :- A > B, N is A-B.
combine(E, A, F, B, E*F, N) :- A =< B, N is A*B.
combine(E, A, F, B, E/F, N) :- A mod B =:= 0, N is A div B.

Speedy enough I guess:

?- time((solve(6, E, 999, [1,3,5,10,25,50], _), fail; true)).
% 4,857,250 inferences, 0.484 CPU in 0.468 seconds (104% CPU, 10027871 Lips)
true.
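For the record, a hedged usage sketch, assuming the solve/5 and
combine/6 clauses above are loaded: get the first expression over all
six numbers that evaluates to 999, or count all solutions found by
the search:

```prolog
% First solution (expression tree in E):
?- once(solve(6, E, 999, [1,3,5,10,25,50], _)).

% Number of solutions (modulo duplicates from the search):
?- findall(E, solve(6, E, 999, [1,3,5,10,25,50], _), Es),
   length(Es, N).
```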

Mostowski Collapse

Feb 18, 2023, 7:15:41 AM
Let's go a little bit beyond what the paper calls
"5 Fusing generation and evaluation", by introducing
a little constraint propagation. Now I get:

?- time((solve(6, E, 999, [1,3,5,10,25,50], _), fail; true)).
% 3,604,685 inferences, 0.344 CPU in 0.343 seconds (100% CPU, 10486356 Lips)
true.

Code got a little bit longer, but still quite readable:

% solve(+Integer, -Term, -Integer, +List, -List)
solve(1, N, N, P, Q) :- !, select(N, P, Q).
solve(K, G, N, P, Q) :- var(N), !,
    J is K-1,
    between(1, J, I),
    L is K-I,
    solve(I, E, A, P, H),
    solve(L, F, B, H, Q),
    combine(E, A, F, B, G, N).
solve(K, G, N, P, Q) :-
    J is K-1,
    between(1, J, I),
    L is K-I,
    solve(I, E, A, P, H),
    forward(E, A, F, B, G, N),
    solve(L, F, B, H, Q).

combine(E, A, F, B, E+F, N) :- A =< B, N is A+B.
combine(E, A, F, B, E-F, N) :- A > B, N is A-B.
combine(E, A, F, B, E*F, N) :- A =< B, N is A*B.
combine(E, A, F, B, E/F, N) :- A mod B =:= 0, N is A div B.

forward(E, A, F, B, E+F, N) :- N > A, B is N-A, A =< B.
forward(E, A, F, B, E-F, N) :- A > N, B is A-N.
forward(E, A, F, B, E*F, N) :- N mod A =:= 0, B is N div A, A =< B.
forward(E, A, F, B, E/F, N) :- A mod N =:= 0, B is A div N.
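To make the propagation direction explicit: forward/6 takes the
already computed value A of the left subexpression and the target N,
and derives the value B that the right subexpression must produce,
before that subexpression is even searched. A hedged worked example,
with e1 as a placeholder for the left expression tree:

```prolog
?- forward(e1, 3, F, B, G, 10).
% Only the (+)/2 clause applies here: 10 > 3, B is 10-3, 3 =< 7.
% G = e1+F, B = 7, with F left open for the following solve/5 call.
```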

Mostowski Collapse

Feb 18, 2023, 7:47:27 AM
https://stackoverflow.com/a/74908845/17524790

This Stack Overflow solution also does some early pruning. But it is based
on powerset permutation and not on plain permutation. So you will find solutions
where not all numbers are used, only some. You can see that it might omit numbers here:

?- time(solve_countdown([1,7,7,3], 24, Ts)).
% 12,117 inferences, 0.002 CPU in 0.002 seconds (99% CPU, 5370013 Lips)
Ts = [3*(1+7), (7-1)*(7-3)].

Or maybe this is part of the requirement? Ok, my bad, wasn’t paying attention.

Mostowski Collapse

Feb 18, 2023, 11:10:22 AM
Not sure whether CLP(FD) will show the same timing figures. Since in the
above I did the small CLP(FD) inference manually, CLP(FD) might still be
slower because of some overhead, unless intervals or other approaches

come into play. One might also try freeze/2. Maybe somebody can write a
sequel to the Pearls paper from a Prolog perspective, include the
propagation technique, and show the world its muscles? I don't have

time for that; also the matter is somehow related to focusing proof calculi.

Mostowski Collapse

Feb 18, 2023, 11:24:18 AM
A nice exercise is to eliminate the var/1 in the solution. By splitting the code into two
solve/5 predicates, you then get what is sometimes shown in functional programming
language theory papers: calculi where the evaluator has different

focusing modes. These often only correspond to different Prolog mode declarations.

Mostowski Collapse

Feb 20, 2023, 11:47:47 AM
Now waiting for a CLP(X) solution of the countdown
problem, anybody up to it? Maybe with CLP(FD) or
with freeze/2? But I doubt Scryer Prolog can produce

a solution, it's a little bit slow. On my machine:

/* Scryer Prolog 0.9.1-166 */
?- N #= 2^14, time((between(1,N,_), A #\= B, false; true)).
% CPU time: 2.207s
N = 16384.

/* Jekejeke Prolog 1.5.6 */
?- N #= 2^14, time((between(1,N,_), A #\= B, false; true)).
% Threads 391 ms, GC 5 ms, Up 396 ms (Current 02/20/23 17:39:44)
N = 16384.

/* SWI-Prolog 9.1.4 */
?- N #= 2^14, time((between(1,N,_), A #\= B, false; true)).
% 2,310,145 inferences, 0.109 CPU in 0.110 seconds (100% CPU, 21121326 Lips)
N = 16384.

LoL

Mostowski Collapse

Feb 20, 2023, 11:50:56 AM

But the number of Scryer Prolog tickets went down
from 222 to 219. If one extrapolates that, I guess
100 years from now it will be finished.

Enough time to optimize my own CLP(FD) or
even introduce CLP(FD) to the Dogelog Player.
Not yet sure, whether it will or will not have

attributed variables. Maybe an explicit approach
like in the count down problem is often the better
approach? Well not really, a CLP(X) based

approach has more potential for early pruning.

Mostowski Collapse schrieb am Montag, 20. Februar 2023 um 17:47:47 UTC+1:
> Now waiting for a CLP(X) solution of the count down

Mostowski Collapse

Feb 20, 2023, 2:12:38 PM
This is also a nice test case:

bomb(N) :- bomb(N), bomb(N).

/* Scryer Prolog 0.9.1-166 */
?- bomb(1000).
Killed

/* Trealla Prolog 2.9.4 */
?- bomb(1000).
Killed

/* Jekejeke Prolog 1.4.6 */
?- bomb(1000).
Error: Execution aborted since memory threshold exceeded.
bomb/1
bomb/1
bomb/1
bomb/1
bomb/1
bomb/1
bomb/1
bomb/1
bomb/1
bomb/1
... 3259796 more user frames ...
?- sys_trap(bomb(1000), E, true).
E = error(system_error(memory_threshold), [pred(bomb/1), pred(bomb/1), pred(bomb/1), pred(bomb/1), pred(bomb/1), pred(bomb/1), pred(bomb/1), pred(bomb/1), pred(bomb/1), pred(bomb/1), pred_more(2766689)]).

/* SWI-Prolog 9.1.4 */
?- bomb(1000).
ERROR: Stack limit (1.0Gb) exceeded
ERROR: Stack sizes: local: 1.0Gb, global: 80Kb, trail: 1Kb
ERROR: Stack depth: 14,909,825, last-call: 0%, Choice points: 3
ERROR: Probable infinite recursion (cycle):
ERROR: [14,909,825] user:bomb(1000)
ERROR: [14,909,824] user:bomb(1000)
?- catch(bomb(1000), E, true).
E = error(resource_error(stack), stack_overflow{choicepoints:4, cycle:[frame(14912327, user:bomb(1000), []), frame(14912326, user:bomb(1000), [])], depth:14912327, environments:14912326, globalused:4, localused:1048523, stack_limit:1048576, trailused:0}).

LoL

Mild Shock

Jun 4, 2023, 10:49:20 AM

Your new Scrum Master is here! - ChatGPT, 2023
https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years

LoL

Thomas Alva Edison schrieb am Dienstag, 10. Juli 2018 um 15:28:05 UTC+2:
> Prolog Class Signpost - American Style 2018
> https://www.youtube.com/watch?v=CxQKltWI0NA

Mild Shock

Jun 20, 2023, 11:20:27 AM
To hell with GPUs. Here come the FPGA qubits:

Iran’s Military Quantum Claim: It’s Only 99.4% Ridiculous
https://hackaday.com/2023/06/15/irans-quantum-computing-on-fpga-claim-its-kinda-a-thing/

The superposition property enables a quantum computer
to be in multiple states at once.
https://www.techtarget.com/whatis/definition/qubit

Maybe their new board is even less suited for hitting
a ship with a torpedo than some machine learning?

Mild Shock

Jun 21, 2023, 6:24:01 AM
So it begins:

Was having some fun with chat gpt and thinkscript.
https://twitter.com/pkay2402/status/1670459050155290627

A Beginner's Guide to thinkScripts - Ameritrade, 2022
https://www.youtube.com/watch?v=qD5RYF5o9fM

Mild Shock

Jun 23, 2023, 5:14:15 AM
Not only does speed no longer double every year,
the density of transistors also no longer doubles
every year. See also:

‘Moore’s Law’s dead,’ Nvidia CEO
https://www.marketwatch.com/story/moores-laws-dead-nvidia-ceo-jensen-says-in-justifying-gaming-card-price-hike-11663798618

So there is some hope in FPGAs. The article writes:

"In the latter paper, which includes a great overview of
the state of the art, Pilch and colleagues summarize
this as shifting the processing from time to space —
from using slow sequential CPU processing to hardware
complexity, using the FPGA’s configurable fabric
and inherent parallelism."

In reference to (no pay wall):

An FPGA-based real quantum computer emulator
15 December 2018 - Pilch et al.
https://link.springer.com/article/10.1007/s10825-018-1287-5

Mild Shock

Jun 24, 2023, 6:25:14 PM
Prighozin / Prighozout. Latest news: he departed for Belarus?!
Help! I need ChatGPT Plus to follow this shit show:

Scraper is an excellent #ChatGPT plugin for staying on top of the news.
https://twitter.com/irock/status/1672665497140330496

Now, you can use the same info and make a diagram out of it. Using 'diagrams' plugin.
https://twitter.com/irock/status/1672715014271115266

But ChatGPT Plus is like 20$ per Month extra.

Mild Shock

Jul 29, 2023, 8:58:33 PM
Omg, Stephen Wolfram wrote a new book!

What Is ChatGPT Doing ... and Why Does It Work?
https://www.amazon.com/dp/1579550819

Did he let ChatGPT write the book, or why was he so fast?

Mild Shock

Aug 1, 2023, 9:48:32 AM
How it started:

Remember in 2013, a failed AI stack attempt, people making fun:
Faked Artificial Intelligence like in Game Development
https://area51.meta.stackexchange.com/q/11658/100686

How it's going:

Take note, in 2023 it sounds like total panicking now:
Announcing OverflowAI, Projects: a bunch of crap, Slack
chatbot and We’ve launched the GenAI Stack Exchange site
https://stackoverflow.co/labs/

Mild Shock

Aug 1, 2023, 9:54:31 AM
This is also quite memorable:

"Don't give it a 'Hollywood' title like 'artificial intelligence'.
Call it "Machine Learning and Intelligent Computation"".
https://area51.meta.stackexchange.com/a/13109/100686

LoL

Mild Shock

Aug 1, 2023, 10:31:54 AM

There is serious doubt that there will be a surge
in developers due to AI. As claimed here:

Stack Overflow: Community and AI
https://www.youtube.com/live/g5F5t205pYA?feature=share&t=376

In the end it's rather about low code as well:

"These tools enable less technical employees to
make a larger business impact in numerous ways,
such as relieving IT department backlogs, reducing
shadow IT, and taking more ownership over
business process management (BPM) workstreams."
https://www.ibm.com/topics/low-code

LoL

Mild Shock

Aug 2, 2023, 5:02:05 AM

Experiment by Terrence Tao using ChatGPT - June, 2023
https://mathstodon.xyz/@tao/110601051375142142

by way of Rainer Rosenthal on de.sci.mathematik

Mild Shock

Aug 2, 2023, 5:14:26 AM

Simple theory why stackoverflow is dead. Most
of the answers on stackoverflow are just RTFM answers.
An LLM that has done its homework, indexing all

the fucking manuals, performs just as well; there is no
need for the "experts" on stackoverflow, who are anyway
not real "experts". Mostly they are people who can read

and know the relevant sources; they don't recall
solutions from some genuine memory. So I guess
this intermediary, this middleman stackoverflow,

is not needed in the future. ChatGPT and similar
bots will serve as ready help for those too lazy.
And we are all lazy, aren't we?

Abbreviation for ‘Read The Fucking Manual’.
http://www.catb.org/jargon/html/R/RTFM.html

Mild Shock

Aug 2, 2023, 5:20:16 AM

Or a hybrid, a stackoverflow with chatbots integrated.
ChatGPT is already helping me design such a platform:

Q: What if a chatbot becomes just a member of a community,
this would assure collaboration and community interaction.

A: Incorporating a chatbot as a member of an online community
can indeed offer some benefits in terms of collaboration and
community interaction. However, there are several
considerations to take into account:
[...]

https://chat.openai.com/share/84790b3f-e228-4c91-af89-29cf37842f9d

LoL

Mild Shock

Aug 15, 2023, 10:34:33 AM
Want to know what ChatGPT thinks about a person? Try this
prompt engineering; it works also with less prominent persons:

Q: Write a joke: Why did Jan Wielemaker go to the gym

A: Why did Jan Wielemaker go to the gym? Because he
heard it was the only place with enough "SWI-t"ness to lift his spirits!

https://chat.openai.com/?model=text-davinci-002-render-sha

Mild Shock

Sep 23, 2023, 4:51:13 PM

Now it's clear, the Corona vaccine has had a side effect:
everybody got Alzheimer's over the last months. The
SWI-Prolog discourse is a typical example, it has become

a retirement home for some self-talking veterans.

Mild Shock

Sep 27, 2023, 7:06:01 AM
Don't do the LIPS.

/* SWI-Prolog 9.1.16 */
?- time(tarai(12, 11, 0, X)).
% 54,182,800 inferences, 2.625 CPU in 2.616 seconds (100% CPU, 20641067 Lips)
X = 12.

/* Guarded Horn Clauses */
$ ./tarai 12 11 0
% 196412655 inferences, 3.34256 CPU seconds (58761215.967661 Lips)
12

tadashi9e, 2023
https://qiita.com/tadashi9e/items/45cef62cda6d38dda0c7

Sanity No More - Only Chaos Here

Mild Shock

Sep 27, 2023, 8:36:35 AM
Unfortunately I can find the full source of this new
GHC Prolog by tadashi9e nowhere, which makes it a little dubious.
And Dogelog Player generates yet another number:

tarai(X, Y, _, R) :- X =< Y, !, R = Y.
tarai(X, Y, Z, R) :-
    X_1 is X-1, tarai(X_1, Y, Z, R_X),
    Y_1 is Y-1, tarai(Y_1, Z, X, R_Y),
    Z_1 is Z-1, tarai(Z_1, X, Y, R_Z),
    tarai(R_X, R_Y, R_Z, R).
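For reference, the plain query behind these measurements; the result
X = 12 matches the SWI-Prolog run quoted earlier in the thread:

```prolog
?- tarai(12, 11, 0, X).
X = 12.
```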

?- statistics(calls, A), shield(tarai(12, 11, 0, _)), statistics(calls, B), C is B-A.
A = 115190487, B = 230328953, C = 115138466.

?- statistics(calls, A), tarai(12, 11, 0, _), statistics(calls, B), C is B-A.
A = 46911, B = 115187678, C = 115140767.

shield/1 switches off auto-yield. The expectation was rather
that the second query gives 115138466 again, minus a few
inferences since shield/1 wasn't called. But it seems that

auto-yield leads to some phantom inferences, which should
not be the case. So I guess we have a glitch somewhere in the
bookkeeping. But why is the GHC Prolog count higher? Maybe

because it has some constructs like (->)/2 in the second clause
of tarai. Also both clauses in GHC need (|)/2, and the second
clause has a true/0 guard. This could explain the higher count?

Mild Shock

Oct 3, 2023, 7:37:15 AM

So what is the issue that should be solved proactively?
A single person cannot indefinitely maintain a Prolog system.
Why? Not because the person will be dead at some time in

the future; the person might also become unable to continue
maintaining a Prolog system. The same holds for a community,
it cannot age indefinitely. See also:

Memory Loss, Alzheimer's Disease and Dementia, 3rd Edition
by Andrew E. Budson, MD and Paul R. Solomon, PhD
https://evolve.elsevier.com/cs/product/9780323795449

So what are the options? Exit strategies? Generational change
strategies? Where are the youngsters that will take over?
This is a call for action, for people < 30 years old:

- Please show us your Prolog interpreter

Mild Shock schrieb am Samstag, 23. September 2023 um 22:51:13 UTC+2:

Mild Shock

Oct 3, 2023, 7:54:09 AM

Could we say that if life expectancy increases, senescence increases
as well? Not sure. Senescence (/sɪˈnɛsəns/) or biological aging is the
gradual deterioration of functional characteristics in living organisms.

BTW: Here a chapter in idiotic gurus: Where is Dementia?
What if your "Prana" goes missing via Dementia?

Secrets Revealed : 5 Stages of Death
https://www.youtube.com/watch?v=ZncdonlR5q8

Unfortunately this lunatic is quite popular, in India and the USA etc.:

New York Times bestsellers Inner Engineering: A Yogi's Guide to Joy
https://en.wikipedia.org/wiki/Sadhguru

Mild Shock

Nov 20, 2023, 8:23:55 AM
Ok, OpenAI is dead. But we need to get out of the claws
of the computing cloud. We need the spirit of Niklaus
Wirth, who combined computer science and

electronics. We need to solve the problem of
parallel silicon. We should have a look again at these
quantum computers. Can we have them on the Edge?

Mild Shock

Nov 20, 2023, 12:01:30 PM

Seems that OpenAI is effectively imploding. Any
actors that wanted this had it easy, because there
were two tribes in OpenAI, namely the AI doomers

and the AI futurists. Who cares? The AI doomers
were possibly those contributing less anyway, and
will now end up on the streets, a nice little shake-out!

Sutskever Regret and the Weekend That Changed AI
https://www.youtube.com/watch?v=dyakih3oYpk

Mild Shock

Nov 25, 2023, 10:23:07 AM
How my Dogelog Player garbage collector works:

Ashes to ashes, funk to funky
We know Major Tom's a junkie
Strung out in heaven's high
Hitting an all-time low
https://www.youtube.com/watch?v=CMThz7eQ6K0

Unfortunately no generational garbage collector yet. :-(

Mild Shock

Nov 25, 2023, 4:10:20 PM
To advance the state of the art and track performance improvements,
some automation would be helpful. I can test WASM manually via
this here https://dev.swi-prolog.org/wasm/shell . Since my recent

performance tuning of Dogelog Player for JavaScript I beat 32-bit WASM
SWI-Prolog. This does not yet hold for the SAT solver test cases, which need
GC improvements, but it does for the core test cases. I only tested my Ryzen.

Don’t know yet results for Yoga:

            dog     swi
nrev       1247    1223
crypt       894    2351
deriv       960    1415
poly        959    1475
sortq      1313    1825
tictac     1587    2400
queens     1203    2316
query      1919    4565
mtak       1376    1584
perfect    1020    1369
calc       1224    1583
Total     13702   22106

LoL

Mild Shock

Nov 27, 2023, 12:31:25 PM

Scryer Prolog has made amazing leaps recently concerning
performance, it's now only like 2-3 times slower than
SWI-Prolog! What prevents it from getting faster than SWI-Prolog?

See for yourself. Here some testing with a very recent version.
Interestingly tictac shows it has some problems with
negation-as-failure and/or call/1. Maybe they should allocate more

time to these areas instead of inference counting formatting:

$ target/release/scryer-prolog -v
v0.9.3-50-gb8ef3678

nrev % CPU time: 0.304s, 3_024_548 inferences
crypt % CPU time: 0.422s, 4_392_537 inferences
deriv % CPU time: 0.462s, 3_150_149 inferences
poly % CPU time: 0.394s, 3_588_369 inferences
sortq % CPU time: 0.481s, 3_654_653 inferences
tictac % CPU time: 1.591s, 3_285_766 inferences
queens % CPU time: 0.517s, 5_713_596 inferences
query % CPU time: 0.909s, 8_678_936 inferences
mtak % CPU time: 0.425s, 6_901_822 inferences
perfect % CPU time: 0.763s, 5_321_436 inferences
calc % CPU time: 0.626s, 6_700_379 inferences
true.

Compared to SWI-Prolog on the same machine:

$ swipl --version
SWI-Prolog version 9.1.18 for x86_64-linux

nrev % 2,994,497 inferences, 0.067 CPU in 0.067 seconds
crypt % 4,166,441 inferences, 0.288 CPU in 0.287 seconds
deriv % 2,100,068 inferences, 0.139 CPU in 0.139 seconds
poly % 2,087,479 inferences, 0.155 CPU in 0.155 seconds
sortq % 3,624,602 inferences, 0.173 CPU in 0.173 seconds
tictac % 1,012,615 inferences, 0.184 CPU in 0.184 seconds
queens % 4,596,063 inferences, 0.266 CPU in 0.266 seconds
query % 8,639,878 inferences, 0.622 CPU in 0.622 seconds
mtak % 3,943,818 inferences, 0.162 CPU in 0.162 seconds
perfect % 3,241,199 inferences, 0.197 CPU in 0.197 seconds
calc % 3,060,151 inferences, 0.180 CPU in 0.180 seconds

Mild Shock

Nov 29, 2023, 12:47:15 AM
Testing scryer-prolog doesn't make any sense. It's not a
Prolog system. It has memory leaks somewhere.
Just try my SAT solver test suite:

?- between(1,100,_), suite_quiet, fail; true.

VSZ and RSS memory goes up and up, with no end,
clogging my machine. I don't think that this should happen,
that a failure-driven loop eats all memory?

That's just a fraud. How do you set some limits?

Mild Shock

Nov 29, 2023, 12:58:28 AM

With limits I get this result:

$ ulimit -m 2000000
$ ulimit -v 2000000
$ target/release/scryer-prolog
?- ['program2.p'].
true.
?- between(1,100,_), suite_quiet, fail; true.
Segmentation fault

Not ok! Should continue running till the end.

Mild Shock

Nov 29, 2023, 2:33:53 AM
How do you show Segmentation faults in a bar chart diagram?

Here is a test with Trealla Prolog, same limit test, it completes the job.
Doesn’t clog up the memory indefinitely, works just as expected:

$ ./tpl -v
Trealla Prolog (c) Infradig 2020-2023, v2.30.48-21-g8dfd
$ ./tpl
?- ['../ciao/program2.p'].
true.
?- between(1,100,_), suite_quiet, fail; true.
true.
?-

You have to wait a while, but you can use the command ps -aux to
see that it doesn't eat up memory. And I did the above test with the same

very large ulimit -m | -v, which wasn't hit by a segmentation fault.

Mild Shock schrieb am Mittwoch, 29. November 2023 um 06:58:28 UTC+1:
> With limits I get this result:
>
> $ target/release/scryer-prolog -v
> v0.9.3-57-ge8d8b09e

Mild Shock

Nov 30, 2023, 6:43:40 PM
A new player has entered the chat (Amazon Q):

#NLProc researcher @ AWS AI (@AmazonScience). Part-time
machine learner & linguistics enthusiast. Previously: PhD
@stanfordnlp, JD AI. He/him. Opinions my own.

It is really humbling to be part of the team that
launched Amazon Q, a flagship #AWS product that
helps users interact with their knowledge corpus using LLMs.

It's quite special to me personally, since it's only been
3 years since I finished my PhD thesis on this exact topic.
https://twitter.com/qi2peng2

What does Gartner say about Business-Chatbots?

Opening Keynote: The Next Era − We Shape AI
AI Shapes Us l Gartner IT Symposium/Xpo
https://www.youtube.com/watch?v=0s7Jw9xkSYQ