Hello list.
First post.
Two points.
First point: I recently got around to building quite a few opencog projects (and metta / hyperon-experimental from trueagi). I am, however, hitting a wall with respect to the github opencog/learn repo.
The usual `mkdir build && cd build && cmake .. && make` complains about two things: (1) cc and c++ being reported as "skipped", and (2) weird target names such as COPY_TO_LOAD_PATH_IN_BUILD_DIR_FROM__home_minime_home_cellar_learn_fake that neither compile nor build anything. (Cf. the issue linked above.)
As I am not that proficient in cmake arcana, I'd appreciate input on solving such issues. Otherwise I'll spend some time on it when I find the time.
Second point: as far as I understand from the writings of Linas Vepstas on this opencog/learn repo and the .pdf on sheaf theory referenced there, the opencog/learn repo has, or had, the aim of solving the symbol grounding problem.
I am a mathematician and logician by training, and, coming from that background, a curious connection with Galois theory has dawned on me. I believe it is tied to the symbol grounding problem, and I would appreciate kickstarting a discussion on that topic. I do not believe this connection has occurred to many people, so let me attempt to sketch a few points pertaining to that view.
In a nutshell:
Take the task of pattern mining "unstructured" data to the limit of absolute certainty. Pattern mining such data in a relational fashion generalises the notion of "association rule mining" of Agrawal's Apriori algorithm to first-order logic and not mere propositional calculus.
Such absolutely certain association rules are essentially what one calls axioms. And the "unstructured" data on which that would make sense would then be mathematics-like structures. Consequence: this is a task of axiomatising mathematical structures.
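To make the "exact rules = axioms" point concrete, here is a toy Python sketch (purely illustrative; the transaction data and item names are made up). It keeps only the association rules that hold with confidence 1.0 over a small transaction set, i.e. the exception-free rules that behave like axioms of the data rather than statistical regularities:

```python
from itertools import combinations

transactions = [
    {"group", "identity", "inverse"},
    {"group", "identity", "inverse", "abelian"},
    {"monoid", "identity"},
]

def exact_rules(transactions, max_lhs=2):
    items = set().union(*transactions)
    rules = []
    for k in range(1, max_lhs + 1):
        for lhs in combinations(sorted(items), k):
            support = [t for t in transactions if set(lhs) <= t]
            if not support:
                continue
            # consequents present in *every* transaction supporting lhs
            rhs = set.intersection(*support) - set(lhs)
            if rhs:
                rules.append((set(lhs), rhs))
    return rules

for lhs, rhs in exact_rules(transactions):
    print(lhs, "=>", rhs, "  (confidence 1.0)")
```

Apriori proper would also keep rules with confidence below 1.0; taking the limit of absolute certainty is what turns the surviving rules into axiom candidates.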
But the question then is: what inputs does one take in order to perform such an axiomatisation? We want to reconstruct not only the axioms but also the language in which one states these axioms. So we cannot start with a structure given by axioms in a language, as is usual in mathematics, but we must start with something else...
I believe the answer was given mathematically by Benda 1979 in "Modeloids I": one starts with a function that takes two arbitrary n-tuples and returns True or False depending on whether or not they are analogous. The task then is to reconstruct a formal model-theoretic language in which these analogies are expressible (in terms of back-and-forth / Ehrenfeucht-Fraïssé games in that language). Relational association rules may then be expressed in such a context as instantiations of the subgraph isomorphism problem. (For the sketch of a conceptual link between model theory and hypergraphs, see Steven Vere 1975, "Induction of concepts in the predicate calculus".)
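Here is a toy Python sketch of what I mean by such an analogy oracle (purely illustrative, not Benda's actual formalism): two tuples of a finite relational structure are declared analogous when mapping one onto the other preserves equalities and every relation, which is the atomic, zero-round case of the back-and-forth comparison.

```python
# Toy structure: the directed 4-cycle under the "succ" relation.
universe = {0, 1, 2, 3}
relations = {"succ": {(x, (x + 1) % 4) for x in universe}}

def analogous(t1, t2):
    """True iff t1[i] |-> t2[i] is a partial isomorphism of the structure."""
    if len(t1) != len(t2):
        return False
    n = len(t1)
    for i in range(n):
        for j in range(n):
            # preserve equalities (the map must be well defined and injective)
            if (t1[i] == t1[j]) != (t2[i] == t2[j]):
                return False
            # preserve every binary relation, in both directions
            for rel in relations.values():
                if ((t1[i], t1[j]) in rel) != ((t2[i], t2[j]) in rel):
                    return False
    return True

print(analogous((0, 1), (2, 3)))  # True: both are successor pairs
print(analogous((0, 1), (1, 3)))  # False: (1, 3) is not a successor pair
```

The full Ehrenfeucht-Fraïssé game would then try to extend such partial maps round by round; the reconstruction task is to find a language whose formulas distinguish exactly the tuples the oracle distinguishes.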
Here is the link with Galois theory proper, and not "mere" Galois connections: if one applies the above approach to well-known structures such as algebraically closed fields, which are the natural setting for conventional Galois theory, what do these "analogies" and "back-and-forth" games mean? They are the substitutions of 19th-century Galois theory à la Galois proper, and thus the permutations of roots making up the so-called Galois groups.
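On a finite relational structure the analogue of the Galois group can be computed naively: it is the set of permutations of the universe preserving every relation. A toy Python sketch, reusing the same illustrative 4-cycle as above (in the field-theoretic case these permutations would be the classical permutations of roots):

```python
from itertools import permutations

universe = [0, 1, 2, 3]
succ = {(x, (x + 1) % 4) for x in universe}

def is_automorphism(perm):
    # perm[a] is the image of a; check that succ is preserved both ways
    return all(((perm[a], perm[b]) in succ) == ((a, b) in succ)
               for a in universe for b in universe)

galois_group = [p for p in permutations(universe) if is_automorphism(p)]
print(galois_group)  # the four rotations of the cycle, i.e. Z/4Z
```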
The fact that Galois theory is not merely an algebraic phenomenon was noted by Marc Krasner. He wrote his PhD around 1937 on anabelian class field theory, and felt the need to extend the Galois theory of his time a bit for that specific need. In 1938, he wrote "Une généralisation de la notion de corps", with the *exact* same formulation of Galois' fundamental theorem as in his 1937 PhD. But this paper performed Galois theory in what would nowadays be called arbitrary first-order relational structures, and not algebra! Krasner is said by some of his students to have claimed throughout his courses in Clermont-Ferrand that Galois theory was a logical phenomenon and not a merely algebraic one.
Galois theory, revisited by Marc Krasner, is in fact an axiomatisation procedure in disguise for arbitrary relational structures. And, in essence, it is exactly what has been discussed above: relational data mining under the constraint that the mined rules are not probabilistic but exact. Galois Theory à la Krasner = Relational Data Mining - Uncertainty.
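One half of that slogan can be made concrete on the same toy structure: given the permutation group just computed, the binary relations invariant under it are exactly the unions of its orbits on pairs, and (in this toy reading) those are the relations the reconstructed language is allowed to talk about. A purely illustrative Python sketch:

```python
from itertools import permutations, product

universe = [0, 1, 2, 3]
succ = {(x, (x + 1) % 4) for x in universe}
group = [p for p in permutations(universe)
         if all(((p[a], p[b]) in succ) == ((a, b) in succ)
                for a in universe for b in universe)]

def orbit(pair):
    a, b = pair
    return frozenset((p[a], p[b]) for p in group)

# every invariant binary relation is a union of these orbits
for o in {orbit(pair) for pair in product(universe, repeat=2)}:
    print(sorted(o))
```

For the 4-cycle one gets the diagonal and the three "distance k" relations; going from such invariant relations back to a presentation by axioms is the data-mining direction of the slogan.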
I wouldn't go as far as claiming that this (conceptually) solves the symbol grounding problem, as I do not yet know exactly what the OpenCog and TrueAGI communities fully mean by that. But there is an observation in the above that would resonate with such a claim:
If one sets probabilistic issues aside, Galois theory à la Krasner thus boils down to the following: 1. Identify analogies of tuples internal to a given structure, whether a mathematical structure or unstructured data. 2. Use Benda 1979 (+ some obscure paper by Broesterhuizen + personal machinery with Chu spaces) to axiomatise the structure.
Point 1 is what neural nets are good at, in practice. Point 2 is bridging that with axioms and symbolic stuff. Neurosymbolic ??
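A toy version of that bridge, reusing the illustrative analogy oracle from above (in practice point 1 would be a learned similarity judgement rather than a hand-written predicate): partition tuples into analogy classes and give each class a fresh relation symbol, which is the crudest possible first step of "reconstructing the language".

```python
universe = [0, 1, 2, 3]
succ = {(x, (x + 1) % 4) for x in universe}

def analogous(t1, t2):
    # black-box oracle; here hand-written, in practice possibly learned
    return all((t1[i] == t1[j]) == (t2[i] == t2[j]) and
               ((t1[i], t1[j]) in succ) == ((t2[i], t2[j]) in succ)
               for i in range(len(t1)) for j in range(len(t1)))

classes = []
for t in [(a, b) for a in universe for b in universe]:
    for cls in classes:
        if analogous(t, cls[0]):
            cls.append(t)
            break
    else:
        classes.append([t])

for k, cls in enumerate(classes):
    print(f"R{k} :=", sorted(cls))  # one fresh relation symbol per analogy class
```

On this toy 4-cycle the classes coincide with the group orbits computed earlier, which is the point: the oracle side and the group side carve up the structure the same way.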
Admittedly, the gap is arcane. But this is what Krasner was working on. You see references to Krasner's work if you pay close attention to the footnotes of Birkhoff, Ore, Riguet et al., from the time when Galois connections were invented to provide the workhorse for implementing Krasner's ideas. I attribute the fact that these people's intention is now obscure in large part to the language gap between Krasner, writing in French, and people in the US writing in English. They were obviously aware of the conceptual link, but the only thing that remains in the literature are tiny footnotes in works like Birkhoff's Lattice Theory book.
I have more or less developed a framework to make bridging that gap rigorous, but it's a huge pain to write down. It's a category-theoretic treatment relying on so-called Chu spaces, where Chu morphisms are what Steven Vere 1975 calls "theta-substitutions" in his hypergraph formalism. Or where these Chu morphisms form Galois groups. Same concept, different terminologies. It's essentially formal concept analysis, relationalised to first-order logic. I believe it provides the correct glue.
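Since formal concept analysis came up: its two derivation operators are the propositional core that the framework attempts to relationalise, and they already form a two-line Galois connection. A toy Python sketch with a made-up context:

```python
context = {                      # object -> attributes of a tiny formal context
    "square":    {"polygon", "equilateral", "right-angled"},
    "rectangle": {"polygon", "right-angled"},
    "triangle":  {"polygon"},
}

def intent(objects):
    """Attributes shared by all given objects."""
    sets = [context[o] for o in objects]
    return set.intersection(*sets) if sets else set().union(*context.values())

def extent(attributes):
    """Objects possessing all given attributes."""
    return {o for o, attrs in context.items() if set(attributes) <= attrs}

# extent and intent form a Galois connection; their composite is a closure
print(intent(extent({"right-angled"})))  # {'polygon', 'right-angled'}
```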
Moreover, that category-theoretic formalism exhibits a formal dualism between syntax and semantics, in which the following may be said:
1. Reconstructing admissible semantics from a Galois closure in syntax (i.e. a first-order theory in model-theoretic positive logic or category-theoretic geometric logic) is Gödel's completeness theorem, made effective through SMT solving technologies.
2. Reconstructing admissible syntax from a Galois closure in semantics (i.e. Benda's 1979 modeloids, adapted to positive logic) is Galois theory à la Krasner, i.e. data mining in first-order logic.
3. Transpose "syntax" and "semantics" in the above, and these tasks are formally dual, and solvable by the same tooling if one devises it. (I have heard that OpenCog's AtomSpace / Atomese / MeTTa technologies are more or less able to handle both in a unified way.)
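The propositional shadow of that dualism fits in a few lines: Mod sends a set of formulas to the valuations satisfying them, Th sends a set of valuations to the formulas they all satisfy, and Th(Mod(T)) is the closure of T within the chosen formula pool. A purely illustrative Python sketch (the first-order, positive-logic version is where the actual work lies):

```python
from itertools import product

atoms = ["p", "q"]
valuations = [dict(zip(atoms, bits)) for bits in product([False, True], repeat=2)]

# a small pool of candidate formulas, each given by its truth function
formulas = {
    "p":      lambda v: v["p"],
    "q":      lambda v: v["q"],
    "p -> q": lambda v: (not v["p"]) or v["q"],
    "p or q": lambda v: v["p"] or v["q"],
}

def mod(theory):
    """Semantics of a syntactic object: valuations satisfying every formula."""
    return [v for v in valuations if all(formulas[f](v) for f in theory)]

def th(models):
    """Syntax of a semantic object: formulas true in every given valuation."""
    return {f for f, holds in formulas.items() if all(holds(v) for v in models)}

print(th(mod({"p"})))  # {'p', 'p or q'}: the closure of {p} within the pool
```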
My 2 cents.
Guillaume Yziquel