Hi, all!
Intro
I'm new to miniKanren but getting up to speed quickly. I started implementing microKanren in Go earlier today, and may implement it in V or Janet afterward.
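For anyone curious what a microKanren core looks like in Go, here is a minimal sketch of the pieces I'm working from. It is a simplification, not the canonical implementation: in particular it uses eager slices of states instead of microKanren's lazy, interleaving streams, so it loses completeness on infinite relations, and the term representation (bare `interface{}` with `int` values) is just an assumption for illustration.

```go
package main

import "fmt"

// Var is a logic variable, identified by a fresh-variable counter.
type Var int

// Subst maps logic variables to terms (Vars or concrete values).
type Subst map[Var]interface{}

// State pairs a substitution with the next fresh-variable counter.
type State struct {
	S Subst
	C int
}

// A Goal maps a state to the (eager) stream of resulting states.
type Goal func(State) []State

// walk resolves a term through s, following chains of variable bindings.
func walk(t interface{}, s Subst) interface{} {
	for {
		v, ok := t.(Var)
		if !ok {
			return t
		}
		bound, found := s[v]
		if !found {
			return t
		}
		t = bound
	}
}

func clone(s Subst) Subst {
	s2 := make(Subst, len(s)+1)
	for k, v := range s {
		s2[k] = v
	}
	return s2
}

// unify extends s so that u and v are equal, or returns nil on failure.
func unify(u, v interface{}, s Subst) Subst {
	u, v = walk(u, s), walk(v, s)
	if u == v {
		return s
	}
	if uv, ok := u.(Var); ok {
		s2 := clone(s)
		s2[uv] = v
		return s2
	}
	if vv, ok := v.(Var); ok {
		s2 := clone(s)
		s2[vv] = u
		return s2
	}
	return nil
}

// Eq succeeds when u unifies with v (microKanren's ==).
func Eq(u, v interface{}) Goal {
	return func(st State) []State {
		if s2 := unify(u, v, st.S); s2 != nil {
			return []State{{s2, st.C}}
		}
		return nil
	}
}

// CallFresh introduces a new logic variable and passes it to f.
func CallFresh(f func(Var) Goal) Goal {
	return func(st State) []State {
		return f(Var(st.C))(State{st.S, st.C + 1})
	}
}

// Disj succeeds if either goal succeeds; Conj requires both in sequence.
func Disj(g1, g2 Goal) Goal {
	return func(st State) []State { return append(g1(st), g2(st)...) }
}

func Conj(g1, g2 Goal) Goal {
	return func(st State) []State {
		var out []State
		for _, st2 := range g1(st) {
			out = append(out, g2(st2)...)
		}
		return out
	}
}

func main() {
	// Roughly: (call/fresh (lambda (q) (disj (== q 5) (== q 6))))
	g := CallFresh(func(q Var) Goal { return Disj(Eq(q, 5), Eq(q, 6)) })
	for _, st := range g(State{Subst{}, 0}) {
		fmt.Println(walk(Var(0), st.S)) // prints 5, then 6
	}
}
```

Swapping the eager slices for a lazy stream type (a thunk-based linked list) is the main remaining step to get the real microKanren search behavior.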
I'm an avid Emacs user. I studied Clojure years ago before deciding to keep things simple for a while by using Go for concurrent programming instead. But thanks to Will's fantastic talk "The Most Beautiful Program Ever Written" -- and my growing frustrations with how software is built today: manually, one character at a time, often in a seriously unsafe language (like C or C++), and without tooling that is remotely intelligent or particularly helpful -- I am looking at Lisps again and am really excited about relational programming. Alan Kay's description of Planner, together with my own programming experience, has made me wonder why all programmers aren't using something like a statically (or optionally) typed Lisp with Prolog-like capabilities. miniKanren in Scheme is closer to that than anything else I've seen, dynamic though it may be!
Self-optimizing miniKanren?
Question: starting from a Barliman-like setup, is it computationally tractable to have miniKanren or microKanren generate a more efficient version of itself? That is, suppose we give Barliman its own source code as input, along with constraints, but with the slowest portion (the part used for program synthesis) replaced with an unknown X. Barliman would then synthesize several candidate versions of itself; those candidates would be benchmarked against each other, the winner would become the new running program, and that winner would then (more quickly) generate still faster versions of itself, ad infinitum.
Seems like a badass use case for BOINC or some SETI@Home-like network, where we all join forces to use our spare compute to make program synthesis faster and faster over time! And hopefully without creating either Skynet or the gray goo scenario in the process :-D.