Artificial intelligence (AI) systems are already way ahead of us in certain areas – playing Go, for example, or crunching huge sets of data – but in other aspects, AI still lags behind human beings only a few months old. Infants quickly grasp, for instance, that objects move continuously through space rather than blinking in and out of existence.
But such a simple rule of continuity, along with other basic physical laws, hasn't been so intuitive for AI. Now a new study introduces an AI called PLATO, inspired by research on how babies learn.
PLATO stands for Physics Learning through Auto-encoding and Tracking Objects, and it was trained through a series of coded videos designed to represent the same basic knowledge that babies have in their first few months of life.
"Luckily for us, developmental psychologists have spent decades studying what infants know about the physical world and cataloging the different ingredients or concepts that go into physical understanding," says neuroscientist Luis Piloto from the AI research laboratory DeepMind in the UK.
"Extending their work, we built and open sourced a physical concepts data set. This synthetic video data set takes inspiration from the original developmental experiments to assess physical concepts in our models."
Among the key concepts we all understand from a very young age are permanence (objects won't suddenly disappear); solidity (solid objects can't pass through each other); and continuity (objects move in a consistent way through space and time).
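To make the continuity idea concrete: a toy sketch of what a continuity check might look like is below. This is purely illustrative – the function name, the one-dimensional positions, and the `max_step` threshold are all invented for this example and are not part of the PLATO model, which learns such expectations from video rather than applying a hand-written rule.

```python
def violates_continuity(positions, max_step=1.5):
    """Flag a trajectory whose object jumps farther per frame than is
    physically plausible -- a crude, hand-coded stand-in for the
    continuity concept. max_step is an arbitrary toy threshold."""
    return any(abs(b - a) > max_step for a, b in zip(positions, positions[1:]))

smooth = [0.0, 1.0, 2.0, 3.0]    # object moves consistently through space
teleport = [0.0, 1.0, 9.0, 3.0]  # object 'teleports': continuity is violated

assert not violates_continuity(smooth)
assert violates_continuity(teleport)
```

The point of the sketch is the contrast: a learned system like PLATO has to arrive at an equivalent expectation from experience, without ever being given the rule explicitly.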
These concepts were conveyed through clips of balls falling to the ground, bouncing off each other, disappearing behind other objects and then reappearing, and so on. Having trained PLATO on these videos, the next step was to test it.
When the AI was shown videos of 'impossible' scenarios that defied the physics it had learned, PLATO expressed surprise (or the AI equivalent of it): it was smart enough to recognize that something weird had happened that broke the laws of physics.
This happened after relatively short training periods, too – only 28 hours in some instances. Technically speaking, just as in infant studies, the researchers were looking for violation-of-expectation (VoE) signals: evidence that the AI understood the concepts it had been taught.
"Our object-based model displayed robust VoE effects across all five concepts we studied, despite having been trained on video data in which the specific probe events did not occur," write the researchers in their published paper.
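A VoE signal boils down to measuring how badly a scene defies the model's predictions. The minimal sketch below scores "surprise" as prediction error between expected and observed object positions; every name here is hypothetical, and in the actual paper the model predicts learned object representations rather than raw positions, so this is an analogy, not PLATO's implementation.

```python
import numpy as np

def surprise(predicted_frames, observed_frames):
    """Sum of per-frame mean squared error between what the model
    predicted and what it observed -- a toy stand-in for a VoE signal."""
    diffs = [np.mean((p - o) ** 2)
             for p, o in zip(predicted_frames, observed_frames)]
    return float(np.sum(diffs))

# Toy 1-D "video": an object's position over five frames.
predicted = [np.array([float(t)]) for t in range(5)]   # model expects smooth motion
possible = [np.array([float(t)]) for t in range(5)]    # object moves continuously
impossible = [np.array([0.0]), np.array([1.0]),
              np.array([9.0]),                          # frame 3: object teleports
              np.array([3.0]), np.array([4.0])]

# The physically impossible clip yields far more "surprise".
assert surprise(predicted, impossible) > surprise(predicted, possible)
```

A robust VoE effect, in these terms, means the surprise score is reliably higher for the impossible probe events than for matched possible ones.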
However, PLATO isn't quite up to the level of a three-month-old baby yet. There was less AI surprise when it was shown scenarios that didn't involve any objects, or when the testing and training models were similar.
What's more, the videos PLATO was trained on included extra data to help it recognize the objects and their movement in three dimensions.
It seems that some built-in knowledge is still required to get the full picture – and that 'nature versus nurture' question is something developmental scientists are still wondering about in infants. The research could give us a better understanding of the human mind, as well as help us build a better AI representation of it.
"Our modeling work provides a proof-of-concept demonstration that at least some central concepts in intuitive physics can be acquired through visual learning," write the researchers.
"Although research in some precocial [born in an advanced state] species suggests that certain basic physical concepts can be present from birth, in humans the data suggest that intuitive physics knowledge emerges early in life but can be impacted by visual experience."
The research has been published in Nature Human Behaviour.
Editor's Note (13 July 2022): Our original headline incorrectly made it sound as though the AI could think like a human baby, when this research has only taught the algorithm one aspect of that. We have updated the headline and several references in the text to more clearly distinguish the work done.