
Scientists imbue robots with curiosity


TruthSlave
Jun 8, 2017, 10:41:54 AM

In a twist on artificial intelligence (AI), computer scientists have
programmed machines to be curious—to explore their surroundings on
their own and learn for the sake of learning. The new approach could
allow robots to learn even faster than they can now. Someday they
might even surpass human scientists in forming hypotheses and pushing
the frontiers of what’s known.

“Developing curiosity is a problem that’s core to intelligence,” says
George Konidaris, a computer scientist who runs the Intelligent Robot
Lab at Brown University and was not involved in the research. “It’s
going to be most useful when you’re not sure what your robot is going
to have to do in the future.”

Over the years, scientists have worked on algorithms for curiosity,
but copying human inquisitiveness has been tricky. For example,
most methods aren’t capable of assessing artificial agents’ gaps in
knowledge to predict what will be interesting before they see it.

(Humans can sometimes judge how interesting a book will be by its cover.)

Todd Hester, a computer scientist currently at Google DeepMind in
London, hoped to do better. “I was looking for ways to make computers
learn more intelligently, and explore as a human would,” he says.
“Don’t explore everything, and don’t explore randomly, but try to
do something a little smarter.”

So Hester and Peter Stone, a computer scientist at the University of
Texas at Austin, developed a new algorithm, Targeted Exploration with
Variance-And-Novelty-Intrinsic-Rewards (TEXPLORE-VENIR), that relies
on a technique called reinforcement learning. In reinforcement
learning, a program tries something, and if the move brings it closer
to some ultimate goal, such as the end of a maze, it receives a small
reward and is more likely to try the maneuver again in the future.
DeepMind has used reinforcement learning to allow programs to master
Atari games and the board game Go through random experimentation. But
TEXPLORE-VENIR, like other curiosity algorithms, also sets an internal
goal for which the program rewards itself for comprehending something
new, even if the knowledge doesn’t get it closer to the ultimate goal.
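
As a rough illustration of that mechanism (not the actual TEXPLORE-VENIR
or DeepMind code), here is a minimal Python sketch: a tabular Q-learning
agent in a toy corridor "maze" whose update adds a simple count-based
curiosity bonus to the environment reward. The environment, the bonus,
and every parameter value are illustrative assumptions.

import random
from collections import defaultdict

# Toy corridor "maze": states 0..5; reaching state 5 ends the episode
# with extrinsic reward 1.
GOAL = 5
ACTIONS = (-1, +1)            # step left or step right

def step(state, action):
    nxt = min(max(state + action, 0), GOAL)
    extrinsic = 1.0 if nxt == GOAL else 0.0
    return nxt, extrinsic, nxt == GOAL

Q = defaultdict(float)        # Q[(state, action)] value estimates
visits = defaultdict(int)     # state visit counts for the curiosity bonus
alpha, gamma, epsilon = 0.5, 0.95, 0.2

for episode in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy choice between exploring and exploiting
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r_ext, done = step(s, a)

        # Internal goal: reward the agent for visiting states it knows
        # little about, even when that brings it no closer to the maze
        # exit. (A count-based bonus is used here purely for illustration.)
        visits[nxt] += 1
        r_int = 1.0 / visits[nxt]

        best_next = 0.0 if done else max(Q[(nxt, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r_ext + r_int + gamma * best_next - Q[(s, a)])
        s = nxt

The bonus shrinks as a state is revisited, so early on the agent wanders
for its own sake and later settles into exploiting what it has learned.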

As TEXPLORE-VENIR learns and builds a model of the world, it rewards
itself for discovering information that’s unlike what it’s seen before—for
example, finding distant spots on a map or, in a culinary application,
exotic recipes. It also rewards itself for reducing uncertainty—for
becoming familiar with those places and recipes. “They’re fundamentally
different types of learning and exploration,” Konidaris says.
“Balancing them is really important. And I like that this paper did
both of those.”
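
The sketch below separates those two ingredients into a novelty bonus
(distance in feature space from anything seen before) and a variance
bonus (disagreement among an ensemble of learned models, a common proxy
for uncertainty that further exploration reduces). The function names,
the ensemble-of-random-linear-maps stand-in, and the scaling factors are
assumptions made for illustration; they are not the paper's actual
formulas.

import numpy as np

def novelty_bonus(state, seen_states, scale=1.0):
    # Reward states far (in feature space) from anything seen before --
    # the "distant spots on a map" side of curiosity.
    if not seen_states:
        return scale
    return scale * min(np.linalg.norm(state - s) for s in seen_states)

def variance_bonus(state, model_ensemble, scale=1.0):
    # Reward states where an ensemble of learned forward models disagrees;
    # exploring them reduces that uncertainty -- the "becoming familiar" side.
    predictions = np.stack([m(state) for m in model_ensemble])
    return scale * predictions.std(axis=0).mean()

# Tiny demo: random linear maps stand in for learned dynamics models.
rng = np.random.default_rng(0)
models = [lambda s, W=rng.normal(size=(2, 2)): W @ s for _ in range(5)]
memory = [rng.normal(size=2) for _ in range(10)]
state = rng.normal(size=2)
print(novelty_bonus(state, memory) + variance_bonus(state, models))

In a full agent these bonuses would simply be added to the extrinsic
reward at each step, as in the earlier sketch, so the program keeps
rewarding itself even when the environment pays nothing.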

http://www.sciencemag.org/news/2017/05/scientists-imbue-robots-curiosity

keghn feem
Jun 10, 2017, 1:39:06 PM
Good luck to them. But I am many years ahead of them.






TruthSlave
Jun 13, 2017, 9:40:08 AM
I must admit I find it remarkable that these scientists are only
now considering this fundamental human trait in their quest for
Artificial Intelligence. And yet it doesn't surprise me. Most models
of A.I seem to have 'pattern recognition' as their primary goal.
These patterns are first taught so that they can later be sought, which
seems to rule out any actual discovery of new or unexpected patterns.

'Curiosity' should be at the heart of any 'modeling' of the human
experience. I would even say that this capacity is fundamental to any
claim to be intelligent. And yet for all that, our natural curiosity
won't necessarily be considered a good thing where 'control' is the
bottom line and the principal reason for our technologies. Curiosity
would run counter to our control by technology.

'Curiosity' is too much like free will to be acknowledged. That
curiosity which takes one off the beaten track, which forces us to
fill known gaps in our knowledge, which forces us to question our
commonplace answers, might be considered an undesirable human trait
and thus made invisible in any automated assessment of the human
experience.

Curiosity forces us to seek the elusive, those better and as yet
unknown answers.

On the flip side of all this, I have a sense of the human experience
being redefined, represented, characterized, profiled, coded to fit
the limits of our present technologies.

What this says for all those with growing dependencies on these crude
tools is a whole other matter. The future: human thought limited to
answers sought by artificial thought.

keghn feem
Jun 13, 2017, 11:59:35 AM
Yes, curiosity is needed in an AGI or pre-AGI machine.
The AGI needs to sit back in an infant state, or whatever position is
comfortable, and watch the world go by.
From the beginning of time to the end, if there is an end, the universe
is just an NP-hard temporal pattern.
With our limitations, we only view our local space and focus on one
ROI at a time. We watch the world go by, unable to see the big picture
from the top of the hill or deep down past the atom, and unable to be
everywhere at once.

A humble AGI records its local temporal space and does not move, for now.
It just watches the clouds go by. Since it is not a god, able to see the
universe and everything in it from now to the end of time in one glance,
it has to use tricks to view the world, first by looking for repeating
temporal patterns.
Once repeating patterns are found, they give curiosity a foundation to
come into play.
The repetition is an illusion, because the real timeline of the universe
never repeats, but intelligence is based on that illusion.

keghn feem
Jun 15, 2017, 2:35:42 PM


Once a repeating pattern is found, curiosity follows: the urge to find a
new temporal pattern. Or, at other times, the urge to escape a pattern
that has repeated too many times. The push and the pull: boredom and
curiosity.

TruthSlave
Jun 27, 2017, 8:23:11 AM
If you think about it, Curiosity is driven by knowledge. It starts with
us recognizing gaps in our knowledge. It's about building new knowledge.

Often we'll hear a phrase which we repeat as a matter of routine, without
actually questioning what it means. Or we'll hear a word, or share a
thought, to which we respond with a learned or conditioned reflex.

Given time, we might build a model of call and response: for x, think y.
This is all about the habit of that connection, and it occurs before we
ever consider what we actually mean by x or y.

The history of x or y. The reasons for x and y. The effect of, or our
control by, x or y. These are all questions which the curious, or the
intelligent, might pursue beyond the commonality of x and y.

Curiosity is about us asking that all-important question: why?

Curiosity has us seeking knowledge for its own sake, and so could be
seen in stark contrast to our pursuit of information based on its
tangible reward, or to our pursuit of a thought as a measure of its
controlling influence.

A machine's assessment of Curiosity might question the energy being
expended. A machine might see knowledge solely for its rewards, and
so learn to judge human behavior in terms of 'deliberate' expenditures
of effort, e.g. a psychologist's textbook idea of behavior and reward.

I have to wonder about those computer systems at large, and the effects
of their simple models on human behavior.

The world we now live in has programs and systems tasked with quantifying
human behavior; these systems, when they are not based on selling us
stuff, are often only justified by alarms.

It seems to me that if we haven't allowed for that fundamental trait,
Curiosity, we risk stomping on its expression.

Curiosity: if that trait were seen without an appropriate 'box' to
tick, what would A.I make of it? A.I is about pattern, and our acceptance
of A.I forces us into a relation which is about pattern. We are forced
to confirm its answers, and as we do, we reinforce its simpler models
of humanity.

I have to wonder about our ultimate course, the ultimate effect of our
reliance on these artificial systems.


keghn feem
Jun 30, 2017, 11:38:45 AM


To remember, you need an echo:

https://www.youtube.com/watch?v=v-w2rEsYuNo


