A Better Lesson

Jose Ignacio Rodriguez-Labra

Nov 10, 2020, 8:42:43 PM
to opencog
There was an earlier thread about Sutton's bitter lesson (link), which basically argues that general machine learning methods always end up beating specialized methods built on encoded, hand-optimized human knowledge; most people seemed to agree with it. There is a response to it called The Better Lesson by Rodney Brooks (link), pointing out reasons why Sutton is wrong. I really recommend giving it a read.

It made me think about how using certain concepts we already know about the world could actually be useful, rather than building a completely blank environment and having the agent learn everything from scratch. Why throw away all the patterns we've already recognized? Plus, we can't rely on compute continuing to increase (link), which general methods depend on, and playing into that process perpetuates the ever-growing carbon footprint of the machine learning industry.

There seems to be a duality between these two methodologies: generality and specialization. Which is the right approach? Or could they work together? By using our human ingenuity and our current understanding of the brain, maybe we could build a specialized but limited version of human intelligence, and then use it to create a general intelligence. Perhaps a truly general method for building human intelligence is a task belonging to a post-singularity world. How else could we overcome such a large problem space?

What do you think? Is there any merit to this, or am I just not experienced enough?
Maybe I should stop thinking so much and get coding.

Patrick Hammer

Feb 8, 2021, 10:46:31 AM
to opencog
Hi Jose!

Thank you for initiating this interesting discussion!
I guess there are truths in both Sutton's and Brooks's views; as so often in AI, the reality lies somewhere between the extremes! :)
Undoubtedly, deep learning has made the comparably far less fruitful approach of manual feature engineering obsolete; here I agree with Sutton.
On the other hand, Brooks has correctly identified that human expertise is now utilized in the design of the layers, models, and loss functions before their parameters are actually optimized.

Personally, I'm quite agnostic about whether "human engineering" or "offline optimization within human-defined boundaries" is better; they are just two different paradigms of engineering, and they can also be combined.
While offline optimization (supervised deep learning especially) has taken over in many domains, explicit engineering is still superior in some cases. An example is the famous legged Boston Dynamics robots: instead of applying reinforcement learning, Boston Dynamics engineers physical models and feeds them into model-predictive controllers. While there is plenty of research on reinforcement learning for legged robots (often in RL-and-control hybrid approaches), these solutions do not yet perform comparably well. Part of the reason is that offline optimization demands an accurate simulation, which is readily available for computer games and board games, but much harder to obtain for systems that need to operate in the real world!
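For intuition, here is a minimal model-predictive control sketch. It has nothing to do with Boston Dynamics' actual controllers: the 1-D double-integrator model, the cost weights, and the short horizon are all assumed toy values. The point is only the pattern: at every step, simulate each short action sequence through a hand-written dynamics model, then execute just the first action of the cheapest rollout.

```python
# Toy model-predictive control (MPC) sketch -- purely illustrative, not
# any real robot controller. A 1-D cart (position x, velocity v) should
# reach x = 0. At every step we roll every short action sequence through
# a known physics model and apply the first action of the cheapest
# rollout (a "receding horizon"). Model, costs, and horizon are made up.
from itertools import product

DT = 0.1                     # integration time step
ACTIONS = (-1.0, 0.0, 1.0)   # discrete accelerations to search over
HORIZON = 4                  # lookahead length in steps

def step(x, v, a):
    """Hand-engineered dynamics model: a double integrator."""
    v = v + a * DT
    x = x + v * DT
    return x, v

def rollout_cost(x, v, seq):
    """Sum of quadratic state costs along one simulated action sequence."""
    c = 0.0
    for a in seq:
        x, v = step(x, v, a)
        c += x * x + v * v   # penalize distance to goal and speed
    return c

def mpc_action(x, v):
    """Exhaustively search all HORIZON-step plans; return the first action."""
    return min(product(ACTIONS, repeat=HORIZON),
               key=lambda seq: rollout_cost(x, v, seq))[0]

# Closed loop: re-plan from the current state at every step.
x, v = 2.0, 0.0
for _ in range(200):
    x, v = step(x, v, mpc_action(x, v))
# After enough steps the cart settles near the goal state.
```

The contrast with reinforcement learning is that nothing here is learned: all the knowledge lives in the hand-engineered `step` model and the cost function, which is exactly the kind of human-encoded structure Brooks points to.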

What matters to me personally is not the particular engineering paradigm used to create a system for a specific purpose (offline optimization or handcrafting), but whether the AI can effectively adapt, at runtime, to new circumstances. That's a big challenge, and it is what distinguishes, at a high level, natural evolution from natural intelligence (whether a single individual can adapt, or whether multiple generations are necessary). Most AGI systems, including OpenCog Prime, address this quite well in my opinion, and realizing that was a large part of why I was pulled into this wonderful research field!
Recently, our team also wrote a blog post on this topic, which addresses the "Generality vs. Specialization" issue you touched on: http://www.opennars.org/blog/post1.html

Best regards,
Patrick