Here are some of my random thoughts on this.
Indeed, loose reasoning over generalized concepts should be very important for AGI, and proof sketching seems an interesting analogy here. However, there are others. E.g., in heuristic search there were attempts to generalize states and the transitions between them, and to search this greatly reduced space first; unfortunately, I don't know of any general and interesting solution here. In deep reinforcement learning there was also a paper on jointly learning a space of generalized states and a policy over it. I don't believe such deep learning models will scale up to complex symbolic domains, but the 'theorem proving' approach might also be too restrictive...
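To make the "generalize states, search the reduced space first" idea concrete, here is a minimal toy sketch (entirely my own names and domain, not from any of the papers mentioned): a grid world where concrete cells are abstracted into coarse blocks, a plan is found in the abstract space, and the concrete search is then restricted to the corridor of blocks that plan visits.

```python
from collections import deque

def abstract(state, k=4):
    """Map a concrete grid cell to a coarse k x k block (a 'generalized state')."""
    x, y = state
    return (x // k, y // k)

def bfs(start, goal, neighbors):
    """Plain breadth-first search returning a shortest path as a list, or None."""
    frontier = deque([start])
    parent = {start: None}
    while frontier:
        s = frontier.popleft()
        if s == goal:
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for n in neighbors(s):
            if n not in parent:
                parent[n] = s
                frontier.append(n)
    return None

SIZE = 16  # concrete 16x16 grid, 4-connected moves

def concrete_neighbors(s):
    x, y = s
    return [(x+dx, y+dy) for dx, dy in ((1,0),(-1,0),(0,1),(0,-1))
            if 0 <= x+dx < SIZE and 0 <= y+dy < SIZE]

def abstract_neighbors(a):
    # Transitions between blocks induced by concrete moves.
    ax, ay = a
    n = SIZE // 4
    return [(ax+dx, ay+dy) for dx, dy in ((1,0),(-1,0),(0,1),(0,-1))
            if 0 <= ax+dx < n and 0 <= ay+dy < n]

start, goal = (0, 0), (15, 15)
# Search the greatly reduced space first...
sketch = bfs(abstract(start), abstract(goal), abstract_neighbors)
# ...then refine only inside the corridor of blocks the abstract plan visits.
allowed = set(sketch)
refined = bfs(start, goal,
              lambda s: [n for n in concrete_neighbors(s) if abstract(n) in allowed])
print(len(sketch), len(refined))
```

Of course, in this toy example the abstraction is hand-crafted (fixed coarsening); the open problem is exactly how to learn such generalized states and transitions automatically.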
I have been thinking about this topic for a while recently, and I believe that inference control should be tightly connected with conceptualization. We can rarely find patterns in inference trees per se; rather, humans construct new concepts, in terms of which they can describe (domain-specific) inference rules. E.g., in the game of Go, players use quite abstract notions ('wall', etc.) and reason over them ("building a wall here will protect the territory and spread the influence"). Such rules and concepts can be mined not from inference trees, but from the historical data of agent-environment interactions...
So,
- Most inference rules are domain-specific, and they involve concepts constructed specifically to be used in these rules (one could go further and say that most of our concepts are inference control concepts, but that sounds too radical).
- There are just a few general inference rules (e.g., entities that are similar w.r.t. some properties might also be similar w.r.t. other properties). These rules involve general concepts (e.g., similarity), which can either be pre-defined or be constructed together with the rules so that the rules work (e.g., "similar entities" are precisely those entities for which this inference rule works). Rules based on predefined abstract concepts and relations can be found by the Pattern Miner, but this is of limited interest.
- Inference/reasoning is an abstracted simulation/prediction. There should be no huge difference between constructing higher-level concepts from experience and constructing them from inference trees.
- Generalization is an extremely non-trivial task. What I see is that OpenCog is highly refined on the reasoning side, but uses the rather simplistic Pattern Miner for generalization. Obviously, we cannot use anything heavier at the scale of the whole Atomspace, but for isolated domains this should necessarily be done. Well, there is also MOSES in OpenCog, but it is also somewhat specialized and not deeply integrated...
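The general rule from the second bullet (entities similar w.r.t. some properties might be similar w.r.t. others) can be sketched in a few lines. This is purely my own illustration, not OpenCog/PLN code, and the knowledge base is a made-up example:

```python
def jaccard(a, b):
    """Similarity of two property sets (shared fraction of all properties)."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical knowledge base: entity -> set of observed properties.
kb = {
    "sparrow": {"has_wings", "lays_eggs", "flies"},
    "eagle":   {"has_wings", "lays_eggs", "flies", "hunts"},
    "penguin": {"has_wings", "lays_eggs", "swims"},
}

def infer(entity, prop, kb, threshold=0.5):
    """Guess whether `entity` has `prop` by analogy with similar entities
    that are known to have it."""
    votes = [jaccard(kb[entity], props)
             for other, props in kb.items()
             if other != entity and prop in props]
    return max(votes, default=0.0) >= threshold

print(infer("sparrow", "hunts", kb))  # eagle is similar enough and hunts
```

The interesting (and hard) part is, of course, not this rule itself, but constructing the property space and the similarity measure for which such a rule actually works in a given domain.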
Well... 'Proof sketching' for inference control is a step in the right direction, but we should focus much more on stronger generalization...
-- Alexey