The techniques in the paper are quite advanced but the idea is
simple:
In the past, there has always been a dichotomy between logic-based
theories of knowledge, which are highly structured but very brittle and
nearly impossible to connect to sensory data (i.e., to learn), and
statistical/connectionist theories of knowledge, which operate only over
the simplest, unstructured forms of knowledge.
It appeared necessary to accept either that people’s abstract
knowledge is not learned or induced in any nontrivial sense from
experience (and hence is essentially innate), or that human knowledge is
not nearly as abstract or structured (as “knowledge-like”) as it seems
(and hence is simply associations).
Now, with hierarchical Bayesian models, Tenenbaum & co seem to have
found a way to have it both ways: a way of expressing knowledge which
is both highly structured and statistical, with all the advantages in
robustness and learnability that statistics brings.
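To make the idea concrete, here is a minimal sketch of a hierarchical Bayesian model (my own toy illustration, not taken from the paper): coins minted by the same machine share a latent "mint bias" m, and each coin's individual bias theta is drawn from a prior centered on m. Inferring m from several coins is the structured, abstract knowledge; it then constrains predictions about a brand-new coin after only a few flips. The grids, the concentration parameter, and the function names are all assumptions made for the sketch.

```python
import math

# Grids over the hyperparameter (mint bias m) and the per-coin bias theta.
M = [i / 10 for i in range(1, 10)]
THETA = [i / 10 for i in range(1, 10)]

def likelihood(theta, heads, flips):
    # Binomial likelihood of the observed flips, up to a constant.
    return theta ** heads * (1 - theta) ** (flips - heads)

def coin_prior(theta, m, concentration=10):
    # Beta(m*c, (1-m)*c) prior on a coin's bias given the mint bias m,
    # up to a normalizing constant.
    a, b = m * concentration, (1 - m) * concentration
    return theta ** (a - 1) * (1 - theta) ** (b - 1)

def posterior_over_m(data):
    # data: list of (heads, flips) pairs, one per coin from the mint.
    # For each candidate m, marginalize out each coin's theta, then
    # normalize to get a posterior over the abstract hyperparameter.
    post = []
    for m in M:
        p = 1.0
        for heads, flips in data:
            p *= sum(coin_prior(t, m) * likelihood(t, heads, flips)
                     for t in THETA)
        post.append(p)
    z = sum(post)
    return [p / z for p in post]

# Three coins from the same mint all came up mostly heads, so the
# posterior over m should concentrate on high values.
post_m = posterior_over_m([(9, 10), (8, 10), (9, 10)])
best_m = M[post_m.index(max(post_m))]
```

The point of the hierarchy is transfer: once the posterior over m favors high values, a new coin from the same mint is expected to be heads-biased before we have flipped it even once, which is exactly the kind of learned abstract constraint the dichotomy above said we couldn't have.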
Anyway, this is just one direction in which you can take machine
learning; it happens to be the one that interests me.