Anyhow, the laws, roughly translated are:
1) A robot will hardly ever harm a human
2) A robot will usually cooperate with a human, unless there is a greater
probability that Rule 1 will come into play
3) A robot will almost always try to protect itself
4) If it is doing nothing else, a robot can do anything it wants, as long
as it probably won't violate the other 3 laws
I know that's really weird (and this may seem fruitless---we can't even build
a goddarned robot ARM decently, let alone a humanoid robot), but the rules
are represented by bell curves on a scale, with rule 1 being closest to
"baseline" (i.e. most likely to be followed), rule 2 after it, rule 3 after
that, while rule 4 is implied.
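The bell-curve idea above could be sketched in code like this. Everything here is illustrative: the compliance probabilities, the `simulate` helper, and the law numbering are my own made-up stand-ins for the post's idea, not anything from Asimov.

```python
import random

# Hypothetical sketch of the "probabilistic laws" idea: each law has a
# compliance probability (a point near "baseline" on its bell curve),
# ordered so Law 1 is the most likely to be followed.  The numbers are
# invented for illustration only.
COMPLIANCE = {
    1: 0.999999,   # hardly ever harm a human
    2: 0.99,       # usually cooperate
    3: 0.95,       # almost always self-preserve
}

def follows_law(law: int, rng: random.Random) -> bool:
    """Draw a random sample; True means the robot obeys the law this time."""
    return rng.random() < COMPLIANCE[law]

def simulate(trials: int, seed: int = 0) -> dict:
    """Estimate how often each law is followed over many decisions."""
    rng = random.Random(seed)
    counts = {law: 0 for law in COMPLIANCE}
    for _ in range(trials):
        for law in COMPLIANCE:
            if follows_law(law, rng):
                counts[law] += 1
    return {law: counts[law] / trials for law in COMPLIANCE}
```

Running `simulate(10000)` should show the empirical obedience rates landing in the same order as the laws, with Law 1 essentially always followed.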
What do the rest of you think of this little idea? The reason I call it
intuitive is that ALL of the robot's actions are dependent on random
probabilities (although the probability of violating rule 1 is <.0001% per
year), and that's how I believe intuition works (random association of past
experiences and future projections at a subconscious level).
Also, while I'm at it, does anyone remember a story about Asimov's view of
an "intuitive" robot? I forgot which book it was in. Grrr. But it had
other stories like a quick one where a man wants to marry a woman, and her
father presents him with a code, and one where Baley solves a plagiarism
case between 2 mathematicians while on board a spaceship.
--
Quote: "Love may conquer everything, but it needs Time as its Field General."
Let darkness disappear/In the rays of sunshine/That come from within my heart/
Whenever I think of you.
> 1) A robot will hardly ever harm a human ...
I think the term you are looking for is "fuzzy logic," not "intuitive".
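The distinction matters: fuzzy logic isn't about random outcomes at all, but about rules holding to a *degree* between 0 and 1. A minimal sketch of that difference, with all function names and numbers invented for the example:

```python
# In fuzzy logic each proposition holds to a degree in [0, 1], and rules
# combine degrees deterministically (commonly min for AND, 1-x for NOT),
# rather than sampling a random yes/no as in the original post.
def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)

def fuzzy_not(a: float) -> float:
    return 1.0 - a

def obey_order(harm_degree: float, order_strength: float) -> float:
    """Toy Law-2 rule: obey to the degree the order is strong AND
    carrying it out causes no harm."""
    return fuzzy_and(order_strength, fuzzy_not(harm_degree))
```

So a strong order that causes no harm is obeyed fully (`obey_order(0.0, 1.0)` gives `1.0`), while any degree of harm caps how strongly the order is followed.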
--
Keith Lynch, k...@access.digex.com
f p=2,3:2 s q=1 x "f f=3:2 q:f*f>p!'q s q=p#f" w:q p,?$x\8+1*8