- Tue
All our current technology is designed to be operated by humans.
A humaniform robot could also operate it.
Peter Trei
An increasingly large fraction of our technology is designed to be
operated by signals received by IR remote and/or internet. The fact
that there is a legacy pushbutton interface to allow humans to generate
these signals is beside the point, and increasingly irrelevant as
time goes on.
"You must remember this,
KISS is just KISS,
Retry is just retry.
The fundamental protocols apply,
As time goes by."
Wayne Throop thr...@sheol.org http://sheol.org/throopw
So, you'd be in favor of the anti-prosthesis act of 2048?
> An increasingly large fraction of our technology is designed to be
> operated by signals received by IR remote and/or internet.
Well, I can think of a few rarely-used devices that don't have an IR
or RF control system... like doors, drawers, steps, handles,
screwdrivers, hammers, cars, bikes, vacuum cleaners, electrical
outlets, etc. Yes, you can put chips for IR decoding into just about
anything... but by far the majority of our tools don't have power
sources, or anything like a control interface.
You don't need a humaniform robot for working the DVD player. You need
it to walk across the room, open the cabinet, select the right DVD off
the shelf, take it down and out of the case, & put it in the machine.
--
Brian Davis
So we can have sex with them of course. What did you think the point
was?
We already have pets for that (in principle): substitutes for human
company. I envision a future where people don't need that kind of
substitutes, because they'll actually be able to relate healthily to
each other... but I might be too optimistic.
A humaniform robot that could do anything a human could (or more)
would have to be very valuable, and wouldn't be used for very common
tasks, surely.
In any case, I think it constitutes some kind of fraud to have a
machine impersonate a human being, unless it's for some exceptional
purpose (and I can't even think of a good one). Would it be all right
to have half the human population be robotic, without knowing who were
who? I think that would be pointless and a very silly thing to do.
- Tue
No, I'm exclusively talking about wholly artificial robots that can't
be distinguished from human beings.
- Tue
:: So we can have sex with them of course.
: Tue Sorensen <soren...@gmail.com>
: We already have pets for that (in principle): substitutes for human
: company.
Having sex with animals is likely frowned on a good deal more
than sex with machines.
: I envision a future where people don't need that kind of substitutes,
: because they'll actually be able to relate healthily to each other...
: but I might be too optimistic.
I envision a future where people aren't ostracized based on what somebody
holds to be "healthily". You may say I'm a dreamer.
: In any case, I think it constitutes some kind of fraud to have a
: machine impersonate a human being,
Who said they had to "impersonate" anybody? And is it "fraud" for men to
"impersonate" somebody with more muscles by carefully tailored clothing,
or by wearing lifts in their heels? Or women to "impersonate" somebody
with unusually plump lips, or sparse eyebrows, or whatever the kids
find attractive these days?
And *are* you in favor of the Prosthetic Body Act of 2048, banning all
cybernetic prostheses? How about plastic surgery?
And finally, why should anybody care?
: Would it be all right to have half the human population be robotic,
: without knowing who were who?
I sure hope so. I wouldn't want anybody messing around and maybe
discriminating against me for my prosthetics, or telling me I have to
make them obviously non-human.
: I think that would be pointless and a very silly thing to do.
Luckily, there's no State Regulatory Bureau of Pointlessness and Silliness.
And Bog willing, there never will be.
If I have total body prosthesis, how is that different? If it's horrible
for robots to walk around like people, why is it OK for people to walk
around like robots? What if some or all of my brain is electronic,
having had functions replaced bit by bit as they fail over the decades?
But my body is still (mostly) human? And why should you, or anybody
else, care? Or be permitted to butt into my business, whether I'm
a robot or a human, or something else entirely?
YASID: the Asimov short story in the form of just such a person writing
to an "advice for the lovelorn" columnist.
Xref: is it better to have R.Dorothy Wainwright walking around and
being mistaken for human, or C'Mell walking around having sex with
humans whether she wants to or not?
>On 6 Sep., 00:35, David Johnston <da...@block.net> wrote:
>> On Fri, 5 Sep 2008 10:27:37 -0700 (PDT), Tue Sorensen
>>
>> <sorenson...@gmail.com> wrote:
>> >Although roboticists are definitely working very hard to create robots
>> >that can eventually pass for human, don't you agree that this is
>> >actually a very bad idea? I think robot humans would be creepy. We
>> >wouldn't be sure how to react to them. And what's the point of having
>> >artificial people running around like normal people?
>>
>> >- Tue
>>
>> So we can have sex with them of course. What did you think the point
>> was?
>
>We already have pets for that (in principle):
<facefault>
substitutes for human
>company. I envision a future where people don't need that kind of
>substitutes, because they'll actually be able to relate healthily to
>each other
And we'll all be good-looking and have high self esteem, right? None
of us will be socially isolated geeks who spend way too much time on
the internet...
>... but I might be too optimistic.
Yeah I think you've proved that at length. People are still going to
insist on being individuals with individual quirks.
>A humaniform robot that could do anything a human could (or more)
>would have to be very valuable, and wouldn't be used for very common
>tasks, surely.
I didn't say anything about them being able to do anything a human
could. All they have to do is look and feel pretty much like a human.
http://www.msnbc.msn.com/id/25209226...5829?GT1=40006
>
>In any case, I think it constitutes some kind of fraud to have a
>machine impersonate a human being, unless it's for some exceptional
>purpose (and I can't even think of a good one).
Assuming we have true Artificial Intelligences actually capable of
everything a human is, then one good reason is so they can participate
in human society rather than being some kind of disgruntled paraplegic
in a box.
I certainly hope so. Large monocultures are notoriously vulnerable
to any number of problems, and heterocultures somewhat less so.
Mind you, people *do* it a lot. If they think they've got control
of everything. But it's balancing a pencil by its tip. It's not
dynamically stable.
As far as the concept of androids being frightening goes...meh. It
could go either way. I can see a society like something out of "Dune"
that absolutely forbids artificial intelligence, for historical or
cultural reasons. On the other hand, practically everything we see
today would be frightening to somebody from a thousand years ago. As
long as it's useful, people can adapt, just like they adapted to
gunpowder and airplanes.
I think we can have a more useful conversation discussing "the point"
of android robots: whether a machine with two arms and two legs is
better for certain purposes than one with, for instance, six arms and
three wheels.
No.
> We wouldn't be sure how to react to them.
Toddlers in California adapted quickly and treated RUBI and QRIO like
other nursery school students: http://www.scienceblog.com/cms/node/8268
Now imagine a generation raised in the age of ubiquitous humanoid
robots.
> And what's the point of having
> artificial people running around like normal people?
Procreation by other means.
--
DJensen
> A humaniform robot that could do anything a human could (or more) would
> have to be very valuable, and wouldn't be used for very common tasks,
> surely.
Depends. Since humans can build such robots; and since these robots can
do everything humans can do, that tends to imply that they can build
others like themselves.
That rather suggests to me that they'd very quickly become ubiquitous and
therefore cheap.
--
=======================================================================
= David --- If you use Microsoft products, you will, inevitably, get
= Mitchell --- viruses, so please don't add me to your address book.
=======================================================================
I read an article in English Wikipedia about this, perhaps half a year
ago. Some technical term for the creepiness of robots that are *much*
too human-like yet not *perfectly* human. Or maybe it was about
computer-animated people (as in Disney and Pixar movies) rather than
robots, although still quite relevant to your question I think. Some
word with "valley" in it IIRC.
Can somebody remember the term? If not, you should be able to find it
using some combination of Wikipedia and Google. If you find it, I'm sure
you'll find something useful. If not in the Wikipedia article itself,
then in other articles on the subject.
--
Peter Knutsen
sagatafl.org
Another point of view is that you don't need a robot. Rather, you need a
lawyer, one of legendary skill, to get at the IP owners and get them to
bend over for our right to transfer the movies (or music, or computer
games) that we have bought onto hard drives and other large-capacity
storage systems, so that we don't have to mess around with physical
discs, except once right after we buy something.
(Not that hard drives are large enough or cheap enough yet, but we'll
get there soon.)
--
Peter Knutsen
sagatafl.org
It's called the "uncanny valley". The idea being that obviously cartoonish
images get abstracted and look fine, and actual humans look fine, but
something that's *nearly* human looks disturbing because our brains are
trying to fit it into the "actually human" category rather than the
"slightly more detailed stick figure" category.
--
Mike Ash
Radio Free Earth
Broadcasting from our climate-controlled studios deep inside the Moon
I get the impression that you assume humans are intrinsically special.
That once we uplift animals, such as chimpanzees, to human-equivalent
intelligence, or create artificial intelligences, in hardware or
software (or as mobile robots), that can match humans in all areas and
exceed them in a few, they should be legally treated as sub-humans,
having *fewer* rights than you and I have? Despite being at least as
complex and dynamic, internally, as you and I are?
(If I'm right, then the cure for the horrible condition you are
suffering from is to read a lot more science fiction.)
--
Peter Knutsen
sagatafl.org
That link isn't working. Can you please repost it?
--
Peter Knutsen
sagatafl.org
Yes, that was it. Thanks.
--
Peter Knutsen
sagatafl.org
But why do you need a robot to use them? Perhaps there's some widespread
psychological need for a doorman to bow and scrape and open doors and
drawers for folks, and such? Hm... I don't see the appeal. Further,
even if opening a door is too much of a chore, adding a servo on a
door, even if you have to add one to a zillion doors, is cheaper, more
straightforward, and long-term more useful, than engineering humaniform
general-purpose widgets to do it that have to either move from door to
door, or be vast overkill if you build one to stand beside each door.
Let's see. Look at each of the suggestions. Do you need a robot to
operate your doors for you in your house? If so, cheaper by far to wire
all the doors. Drawers, same thing. Steps; if you need a robot to climb
your steps for you, there are already escalators, and home lift chairs.
You might need a robot to use a screwdriver for you, but maintenance and
manufacturing don't really seem to require humaniform robots; that's
certainly not the way manufacturing automation is going from what I see on
"how it's made" series on the teeveebox. As car control systems become
power-assisted and computer-monitored, it'll quickly become easier to
plug in an AI module rather than have a legacy interface sitting in the
seat... it's already done for lots of diagnostic and maintenance chores,
and will become more common for more functions over time. Much like
we never got typing robots to sit and type on the legacy interface
to mechanical typewriters. (Though admittedly, Ghost in the Shell
has a nifty and/or disturbing look at that issue... but in that case,
humaniform robots were approached largely via prosthetics and augments,
so it was off-the-shelf; but I digress.) Not sure why a robot would be
needed to ride a bike. If the notion is the robot can provide the power
while you ride on an extra seat, then seems easier to add a motor to the
existing frame than maintain the legacy interface (and indeed, that's
where motorcycles come from, more or less). There are already roombas.
And how often are things plugged-in or unplugged from electrical outlets?
(My computer has been plugged in for more than a year, and I only moved
it back then because I needed to install a new battery backup.)
Even Heinlein's notion of automation from Door into Summer (ie,
specialized widgets even if they have manipulators and such) seems more
likely than android robot lackeys. In general, R2D2 is much more useful
than C3PO (and even R2D2 seems wildly improbable; why plug such a widget
along with its expertise into a fighter, when it can be downloaded,
as with the AIs in Adventures of the Galaxy Rangers?).
So anyways. I don't see a large enough appeal to overcome specialized
automation. Indeed, even things that have traditionally been
general-purpose, have been made more dedicated; tivo for one example.
General-purpose computers get rarer, and are subsumed into a network of
specialized devices.
The number of legacy devices seems highly likely to decline faster than
any demand for a humaniform kludge to operate them can be deployed
cost-effectively. And, a yasid I've asked for several
times but keeps slipping my mind, there's a classic SF short about
the fact that even *after* humaniform robots become nigh-universal,
they'll be displaced by special purpose widgets, so that a widget that
sits all day typing won't really need legs, etc, etc. (In the story,
this happened because robots were developing psychosomatic illnesses,
such as paralysis in the legs if they never used them, etc, etc, so
the obvious solution is to avoid giving robots body parts they never
use to get all psychosomatic about... not that that's all that plausible,
but the issue that having a humaniform robot to sit and push a button
is pretty silly is spot on imo.)
: You don't need a humaniform robot for working the DVD player. You
: need it to walk across the room, open the cabinet, select the right
: DVD off the shelf, take it down and out of the case, & put it in the
: machine.
Jukeboxes have existed for decades. And for that matter, who among the
younger generation uses physical storage media for music anymore anyways?
The point is, the niche of "can use legacy stuff" automation is smaller
than might be thought now, and will only shrink in future.
True, if you could magically wave your hand and create a robot buddy
right now, today, a humaniform robot buddy would be able to do more for
you, since there are still plenty of uses for them. But as a plausible
developmental motive for producing them non-magically, it doesn't
really work, imo.
> The point is, the niche of "can use legacy stuff" automation is smaller
> than might be thought now, and will only shrink in future.
I think you missed my point (or more to the point, I didn't make it
well).
My main point was that a humaniform robot has the advantage of being
able to use all the very common infrastructure that we as humans use -
that's the point (even for "sex-bots", it's the compatibility of forms
that's the critical thing here, for psychological reasons in this
case). You don't need a humaniform robot to open the door - but there
are a 1001 tasks that might require a robot to access a space already
set up for human beings, who like having it a certain way. You can
indeed modify the entire environment, and that might even be cheaper
in the long run... but that's not how technology works. What sets the
gauge of a train? History, not rationality...
--
Brian Davis
"Uncanny Valley" would be a great name for a TV series.
--
Doesn't the fact that there are *exactly* 50 states seem a little suspicious?
George W. Harris For actual email address, replace each 'u' with an 'i'
I always thought it was because they looked like dead people, rather
than fake people. Fake people are fine, but dead people are not
supposed to be animated.
-l.
> (Not that hard drives are large enough or cheap enough yet, but we'll
> get there soon.)
Very soon, if not already.
Most people are happy with music around 256Kbps, that's 2MB/minute or
approximately 100MB/album. That is literally like 2 cents worth of
storage, a minute fraction of the album-cost.
A movie is DVD-similar at about 1-2GB/hour. So storage is like
$0.25/movie. More expensive, but still a small fraction of the cost of a
typical movie.
Buying a single terabyte-disk for $200 will let you store 10,000 musical
albums, which is more than most people buy in a lifetime. Cheap enough.
Filling the same disk with movies will let you store on the order of 500
movies. This is a new movie a week for 10 years. Seems reasonable enough
to me.
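The back-of-envelope figures above check out; here's a quick sanity check in Python, assuming (as the post does) 256 kbps audio, a roughly 50-minute album, 2 GB per movie, and a 1 TB disk:

```python
# Sanity check of the storage arithmetic above.
# Assumptions (the post's, not verified): 256 kbps audio, 50-minute
# album, 2 GB per movie, one 1 TB (decimal) disk.

KBPS = 256
mb_per_minute = KBPS * 1000 / 8 / 1e6 * 60   # bits/s -> bytes/s -> MB/min
album_mb = mb_per_minute * 50                # a ~50-minute album

disk_gb = 1000                               # 1 TB disk
albums_per_disk = disk_gb * 1000 / album_mb  # in MB
movies_per_disk = disk_gb / 2                # at 2 GB per movie

print(round(mb_per_minute, 2))   # 1.92, i.e. "about 2 MB/minute"
print(round(album_mb))           # 96, i.e. "approximately 100 MB/album"
print(round(albums_per_disk))    # ~10,000 albums, as claimed
print(round(movies_per_disk))    # 500 movies, as claimed
```

So the "2 cents of storage per album" and "a new movie a week for 10 years" claims are internally consistent at 2008 disk prices.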
As storage and transmission-rates climb, copyright becomes increasingly
burdensome. The set of useful things you COULD be doing keeps
increasing, so the value of the opportunities that are denied keeps
growing.
Eivind
> We already have pets for that (in principle): substitutes for human
> company. I envision a future where people don't need that kind of
> substitutes, because they'll actually be able to relate healthily to
> each other... but I might be too optimistic.
You should read some Huxley.
Eivind
>On Sat, 06 Sep 2008 18:45:02 +0200, Peter Knutsen
><pe...@sagatafl.invalid> wrote:
>
>>Michael Ash wrote:
>>> It's called the "uncanny valley". The idea being that obviously cartoonish
>>> images get abstracted and look fine, and actual humans look fine, but
>>> something that's *nearly* human looks disturbing because our brains are
>>> trying to fit it into the "actually human" category rather than the
>>> "slightly more detailed stick figure" category.
>>
>>Yes, that was it. Thanks.
>
> "Uncanny Valley" would be a great name for a TV series.
It could be a sequel to "The Stepford Wives."
HOUSTON (Reuters) - The jury deliberations continue for the fifth day
in the murder trial of Samuel Kincaid. District Attorney Michael
Nesbit publicly denounced the unprecedented duration of the
deliberations in what he describes as an open and shut case. "Kincaid,
by his own admission premeditatedly shot and killed Sarah Lenders and
seriously wounded David Paulson after seeing them walk by his house
hand-in-hand. He went to his gun safe, got his pistol and shot both of
them as they sat on a bench in the park adjacent to Mr. Kincaid’s
home. The jury should not have even had to withdraw to their chambers,
the video evidence is overwhelming!" Nesbit said reading from a
prepared statement. Defense Attorney Richard Peebles has argued that
because both Lenders and Paulson were wearing metallic designs on
their foreheads mimicking the identifiers required by law to be
attached to all humaniform androids, Kincaid believed them to be
androids at the time of the shootings and thus the charge should be
reduced to manslaughter if not destruction of property. The
prosecution has pointed out that Kincaid has not expressed regret for
his actions and continued to fire on the couple even after it became
apparent that they were not androids.
On Fri, 5 Sep 2008 19:22:45 -0700 (PDT), Damien Valentine
<vale...@gmail.com> wrote:
> ...
>I think we can have a more useful conversation discussing "the point"
>of android robots: whether a machine with two arms and two legs is
>better for certain purposes than one with, for instance, six arms and
>three wheels.
Here's an example, "Ballbot," for just that conversation:
The "uncanny valley" is just a hypothesis. There's no consensus
that the effect even exists, much less what the cause may be.
Isaac Kuo
> As storage and transmission-rates climb, copyright becomes increasingly
> burdensome. The set of useful things you COULD be doing keeps
> increasing, so the value of the opportunities that are denied keeps
> growing.
That doesn't really make sense from the point of view of the content
producers. Yes, the ability to copy and store things makes copyright
limitations burdensome to consumers. But lack of some meaningful
copyright protection discourages producers from producing new content.
--
Erik Max Francis && m...@alcyone.com && http://www.alcyone.com/max/
San Jose, CA, USA && 37 18 N 121 57 W && AIM, Y!M erikmaxfrancis
A man's life is what his thoughts make it.
-- Marcus Aurelius
I have a consensus with myself that the effect does exist for me. As
to whether it exists as a general phenomenon among most people, I make
no claim at this time, but I suspect that it's true for a lot of
people. And my explanation for why I feel that way is that things
which are really close to human but not quite perfect activate the
"it's not quite perfect because it's *dead*" response.
-l.
>> As storage and transmission-rates climb, copyright becomes increasingly
>> burdensome. The set of useful things you COULD be doing keeps
>> increasing, so the value of the opportunities that are denied keeps
>> growing.
> That doesn't really make sense from the point of view of the content
> producers. Yes, the ability to copy and store things makes copyright
> limitations burdensome to consumers. But lack of some meaningful
> copyright protection discourages producers from producing new content.
Certainly the BURDENS of copyright are on everyone-but-the-owner of a
copyrighted work. The owner is the beneficiary, after all!
What I meant was this:
In 1800, copyright was 28 years. Copying or distributing a book (say) or
a piece of music wasn't really practically possible for most of the
general public. If you didn't own a printing-press, then making a copy
of the book would be more expensive than buying a new copy anyway. And
even if you -did- own one, making copies was only practical in large
numbers; doing setup work to make a SINGLE, or even a DOZEN extra copies
wouldn't be economical. Copyright adds to the cost of books, but not by
a very large factor because physical production and distribution costs
are the dominant ones.
From the perspective of John and Jane Doe, copyright deprives them of
the right to do things which they very likely couldn't benefit from
doing anyway.
In exchange, it is hoped, they get access to more books and more
creative works. It seems like a fair deal. Or at least not obviously slanted.
Today, copyright is for over a century. And it is repeatedly
retroactively extended. John and Jane are in possession of a computer,
and many creative works are stored in a computer-readable form. This
means making a single extra copy can in many cases be a trivial, simple
thing to do. Also, today for many works copyright is the overwhelming
part of the cost of a product. A musical album costs $15 or something
bought online; absent copyright it'd literally be at least 3 orders of
magnitude less.
True, copyright stimulates creativity since it rewards the creation of
works. And this is the purpose. I think the basic principle is sound,
it's just that it seems to me the scales are very far out of balance.
One could, for example ask:
1) Does _anyone_ honestly believe that the creation of new works would
significantly slow if the terms were half what they are today? Would
anyone say: "Nah, it's only protected for 50 years, so I won't do it;
if I could get 110, now THAT would be something!"
In economic terms, at 5% depreciation a year, getting something for 28
years (the original term) is 77% of the value of getting it forever.
Getting it for 50 years is 93% of forever. Are the last 50-60 years of
protection really needed to ensure works get created, or is this
something we the public give up with nothing to show for it in return?
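Those percentages fall out of a simple geometric-depreciation model: each year's revenue is worth 5% less than the previous year's, so an N-year term captures 1 - 0.95^N of a perpetual term's value. A small sketch (the model and the 5% rate are the post's assumptions; the exact figures come out within a point of those quoted):

```python
# Share of a perpetual revenue stream captured by an N-year term,
# assuming revenue depreciates 5% per year (the post's model).

def fraction_of_perpetuity(years, depreciation=0.05):
    """Fraction of an infinite stream's value captured in `years` years."""
    return 1 - (1 - depreciation) ** years

for term in (28, 50, 110):
    print(term, round(fraction_of_perpetuity(term) * 100, 1))
# 28  -> ~76.2% (the post rounds to 77%)
# 50  -> ~92.3% (the post rounds to 93%)
# 110 -> ~99.6%: the marginal value of the last 60 years is tiny
```

The point survives the rounding: almost all of the incentive is delivered by the first few decades of protection.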
2) Is it justified to give legal protection to technical measures that
frequently give copyright-holders control they never had by law, and
that in many cases are anti-competitive? (Such as artificially
splitting a market into different "zones" to price-discriminate.)
3) Is it justified to forbid people from breaking 2 in order to
format-shift or time-shift or backup works that they legally own ?
4) Should casual circle-of-friends copying be an offence? (In some
jurisdictions it's not, and I see no signs of creativity suffering where
that is the case.)
I -do- think we should have /some/ copyright-protection. I just think
the status quo is spinning madly out of control, and needs to be readjusted.
I disagree violently with the supreme court in the USA, which ruled that
the term "limited Times" in your constitution doesn't in practice
mean anything (because 10,000,000 years is "limited").
Eivind
> > On Sep 6, 8:26 pm, Logan Kearsley <chronosur...@gmail.com> wrote:
> > > On Sep 6, 12:22 pm, Michael Ash <m...@mikeash.com> wrote:
> > > > It's called the "uncanny valley". The idea being that obviously cartoonish
> > > > images get abstracted and look fine, and actual humans look fine, but
> > > > something that's *nearly* human looks disturbing because our brains are
> > > > trying to fit it into the "actually human" category rather than the
> > > > "slightly more detailed stick figure" category.
> > > I always thought it was because they looked like dead people, rather
> > > than fake people. Fake people are fine, but dead people are not
> > > supposed to be animated.
> > The "uncanny valley" is just a hypothesis. There's no consensus
> > that the effect even exists, much less what the cause may be.
> I have a consensus with myself that the effect does exist for me.
You may be wrong, though. You may be responding to something
entirely different from what you think you're responding to. For
example, it may simply be that the "almost" human figures which
you find disturbing are disturbing you because they are ugly,
or they move in strange ways. A street mime might disturb you
in the same way despite being completely human.
> As
> to whether it exists as a general phenomenon among most people, I make
> no claim at this time, but I suspect that it's true for a lot of
> people. And my explanation for why I feel that way is that things
> which are really close to human but not quite perfect activate the
> "it's not quite perfect because it's *dead*" response.
I'm skeptical about this. I've seen a few corpses of people who
died of non-disfiguring causes, and they simply didn't look
creepy because they looked like someone sleeping. If one
of them just sat up and woke up, it would have been shocking
on an intellectual level, of course, but not disturbing on a
visceral level. Unless there's some disfigurement or blood or
discoloration or scarring, a dead person simply looks like a person.
Disfigurement, bleeding, scarring, or discoloration, in contrast,
can look disturbing for obvious reasons. It indicates injury and
possibly disease. There's an evolutionary survival benefit to
recognizing injury/disease so an individual may react to a
harmed individual differently from a healthy individual.
Hmm...I wonder if it's possible to make a "zombie movie" where
the zombies simply look like normal humans. Oh wait, it is.
Invasion of the Body Snatchers, of course. The actors portraying
pod people look perfectly human, but they're disturbing because
of the way they move.
Isaac Kuo
Oh, sure. I may be wrong in my *explanation*, but the response still
exists.
> example, it may simply be that the "almost" human figures which
> you find disturbing are disturbing you because they are ugly,
> or they move in strange ways. A street mime might disturb you
> in the same way despite being completely human.
Cartoon figures often move in strange ways, as well. As do humanoid
but obviously inhuman robots. Not that you don't have a good point,
but a complete explanation has to take into account why it's only an
issue for things that are very nearly human-looking.
> I'm skeptical about this. I've seen a few corpses of people who
> died of non-disfiguring causes, and they simply didn't look
> creepy because they looked like someone sleeping. If one
> of them just sat up and woke up, it would have been shocking
> on an intellectual level, of course, but not disturbing on a
> visceral level. Unless there's some disfigurement or blood or
> discoloration or scarring, a dead person simply looks like a person.
>
> Disfigurement, bleeding, scarring, or discoloration, in contrast,
> can look disturbing for obvious reasons. It indicates injury and
> possibly disease. There's an evolutionary survival benefit to
> recognizing injury/disease so an individual may react to a
> harmed individual differently from a healthy individual.
Perhaps I should change "dead" to "unwell", then. Same general idea.
-l.
Yes. Lots of people overlook that part.
Also, lots of people overlook it deliberately, by cheerfully pointing at
all the intensely amateurish crap that is available for free all over
the Internet, legally. Who wants amateurish crap? I certainly don't. I
want *quality* material, created by highly skilled and talented people.
And they create it because they can earn a living doing it. If they
couldn't earn a living doing it, they wouldn't be doing it at all,
they'd be doing something else, and all we'd have would be worthless
amateurish crap.
--
Peter Knutsen
sagatafl.org
Ugly and disturbing are closely related ideas--see below.
>> or they move in strange ways. A street mime might disturb you
>> in the same way despite being completely human.
>
> Cartoon figures often move in strange ways, as well. As do humanoid
> but obviously inhuman robots. Not that you don't have a good point,
> but a complete explanation has to take into account why it's only an
> issue for things that are very nearly human-looking.
In fact the very idea of "ugly" seems to involve a relatively small deviation
from what is accepted as normal human--just enough to be disturbing.
(Chimpanzees are ugly--to many people; most dogs aren't.) There seems to
be a tendency to be disturbed by things that don't look quite
right--probably to do with weeding out "mutants"--and probably not only among
humans. I remember seeing an escaped budgerigar being mobbed by starlings....
--John Park
> >> I'm skeptical about this. I've seen a few corpses of people who
>> died of non-disfiguring causes, and they simply didn't look
>> creepy because they looked like someone sleeping. If one
>> of them just sat up and woke up, it would have been shocking
>> on an intellectual level, of course, but not disturbing on a
>> visceral level. Unless there's some disfigurement or blood or
>> discoloration or scarring, a dead person simply looks like a person.
>>
>> Disfigurement, bleeding, scarring, or discoloration, in contrast,
>> can look disturbing for obvious reasons. It indicates injury and
>> possibly disease. There's an evolutionary survival benefit to
My take on it is that it is all right for copyrights to last for a very
long time if the product *is* kept *continually* available, all over the
world (i.e. Earth, Moon, in-system colonies) at a reasonable price (i.e.
publisher's recommended price or close to it).
If the publisher stops reprinting (or whatever), the rights should
revert automatically and strongly to the creator, and if he doesn't do
anything with them within a very, very short span of time (say, 3
years), then the work becomes forcibly public domain.
In short, if a book (or movie or whatever) goes out-of-print, so that
the only way to get it is that you might get lucky and find it
second-hand (possibly at an inflated price), it is stamped, hard, as
PUBLIC DOMAIN.
Or in other words, law makers ought to acknowledge that there is such a
thing as dormant IP, and then effing *do* something about it.
[...]
> I -do- think we should have /some/ copyright-protection. I just think
> the status quo is spinning madly out of control, and needs to be readjusted.
[...]
All the laws are created from a publisher-centered perspective, rather
than from a consumer perspective. As a consumer I want to be *able* to
consume. I want to be *able* to buy in-print books (and movies and music
CDs and computer games). I want them to remain available. I *hate*
out-of-printness. Vehemently.
Keep your stuff in print, or *lose* it. That's how it should be.
--
Peter Knutsen
sagatafl.org
I get it from the chick robot David linked to. I also kinda got it,
although not very strongly, from a white cat robot that was shown on
Danish morning TV just this morning.
I don't get the effect from computer-animated characters such as the
ones in "The Incredibles", although I believe there are
computer-animated movies with more "realistic" characters (some kind of
spin-off from a computer game?), and it is possible I'd get it from them.
Saw Titan A.E. a year or two ago. Didn't get any uncanny effect. (Note
that this is a hand-animated movie, or at least the human and other
biological characters were (mostly?) hand-drawn.)
> no claim at this time, but I suspect that it's true for a lot of
> people. And my explanation for why I feel that way is that things
> which are really close to human but not quite perfect activate the
> "it's not quite perfect because it's *dead*" response.
Your hypothesis sounds good to me.
Although based on Isaac's writing, I think maybe the uncanniness isn't
from seeing a dead person, but from seeing a dead person that moves
(zombiephobia). Except the robot cat often didn't move. Nor did the
robot chick in the article (since it was just a JPG of the robot chick,
not an animation). So now I'm no longer sure whether it is about seeing
something obviously dead that moves.
--
Peter Knutsen
sagatafl.org
Could it be some general distaste for fakeness? Like the robot chick that
is obviously not a real human?
Like what I get with (elective) plastic surgery? In nearly all cases
when I notice it, my reaction is "why did she (or sometimes he) choose
to have that done? It doesn't look like an improvement at all to me."
(And it's not because I'm opposed to pretty faces, or to large breasts.
I'm a fan of both. But only if it looks natural.)
--
Peter Knutsen
sagatafl.org
> > You may be wrong, though. You may be responding to something
> > entirely different from what you think you're responding to. For
> Oh, sure. I may be wrong in my *explanation*, but the response still
> exists.
You might be wrong about what you're responding to. You might
think *wrongly* that you're responding to things which look "almost
human". But in fact you may be responding to some other set of
things, some of which incidentally look "almost human".
> > example, it may simply be that the "almost" human figures which
> > you find disturbing are disturbing you because they are ugly,
> > or they move in strange ways. A street mime might disturb you
> > in the same way despite being completely human.
> Cartoon figures often move in strange ways, as well. As do humanoid
> but obviously inhuman robots. Not that you don't have a good point,
> but a complete explanation has to take into account why it's only an
> issue for things that are very nearly human-looking.
Is it, though? Like I said, you may be wrong about that. It
could easily be an issue for things that are exactly
human-looking also.
For me, almost-human looking robots look strange, but
only because they look like mannequins and mannequins
don't move. Honestly, I do not see dead or diseased people
very often; I'm not familiar with them. In contrast, I see
mannequins in shops all over the place all the time. It's
something familiar, and for as long as I can remember they
just blend into the background scenery. Only a young baby
takes notice of a mannequin as if it were anything like a
real person. So if a mannequin moves, that's something
unexpected.
I think it's just a question of familiarity.
Isaac Kuo
> Could it be some general distaste for fakeness? Like the robot chick that
> is obviously not a real human?
Is it obvious? She looks pale, to me, that's all. Perhaps this is
one of those cultural things. Japanese generally treasure pale
smooth skin, so unsurprisingly Japanese robots will have very
pale skin (hmm...this perhaps includes non-human-like robots,
where white plastic seems to be popular).
I like pale skin also; I love my wife's light skin but she thinks
it's ugly and wishes her skin had a little color.
I just had a thought--baby dolls. I suspect that if the "uncanny
valley" effect were real, then baby dolls would not be ubiquitously
popular toys.
Isaac Kuo
> I don't get the effect from computer-animated characters such as the
> ones in "The Incredibles", although I believe there are
> computer-animated movies with more "realistic" characters (some kind of
> spin-off from a computer game?), and it is possible I'd get it from them.
You're probably referring to _Final Fantasy: The Spirits Within_. Not a
great movie, although it had some interesting themes. But the animation
was very good; there were numerous points through the film where I
forgot that I was watching something animated.
> Saw Titan A.E. a year or two ago. Didn't get any uncanny effect. (Note
> that this is a hand-animated movie, or at least the human and other
> biological characters were (mostly?) hand-drawn.)
I believe some of the external scenes and backgrounds were computer
generated, which is fairly common these days. The characters were all
hand-drawn.
--
Erik Max Francis && m...@alcyone.com && http://www.alcyone.com/max/
San Jose, CA, USA && 37 18 N 121 57 W && AIM, Y!M erikmaxfrancis
In war there is no second prize for runner-up.
-- Gen. Omar Bradley
> Eivind wrote:
> [...]
>> 1) Does _anyone_ honestly believe that the creation of new works
>> would significantly slow if the terms were half what they are today
>> ? Would anyone say: "Nah, it's only protected for 50 years, so I
>> won't do it, if I could get 110, now THAT would be something!"
> [...]
>
> My take on it is that it is all right for copyrights to last for a
> very long time if the product *is* kept *continually* available, all
> over the world (i.e. Earth, Moon, in-system colonies) at a reasonable
> price (i.e. publisher's recommended price or close to it).
YES! And this should vehemently be extended to software. If I can't
purchase an old program version I should be allowed to copy it for free:
after all if manufacturers do not sell it and I need it for an old
computer/system that is unable to run new software they only lose a
market segment they have consciously decided to ignore.
H Tavaila
> I just had a thought--baby dolls. I suspect that if the "uncanny
> valley" effect were real, then baby dolls would not be ubiquitously
> popular toys.
Maybe they are sufficiently fake-looking. I'm not very familiar with
them, though.
I am more familiar with computer graphics, since I play video games. I
often get a sort of 'uncanny valley'-like effect, not just from people,
but game objects in general. In older games with poorer graphics I
accept objects in a game world as representations of things in real
world: "That's a representation, a symbol, of a car." The current
generation of games has higher quality graphics, and often falls into
the 'uncanny valley': "That's a picture of a plastic toy car." It looks
fake, and it bothers me even though a more fake-looking object didn't.
Cars are easy enough that many driving games now have realistic enough
looking cars that they don't bother me. But between obviously not real
and sufficiently real there is an area that doesn't quite work for me.
--
Juho Julkunen
> The "uncanny valley" is just a hypothesis. There's no consensus
> that the effect even exists, much less what the cause may be.
But needlework people have been harping on it for generations.
For example, cross stitches *must*, MUST all be crossed in the same
direction. And one must not switch hands when knitting with two
yarns; if the yellow yarn is in the right hand and the red yarn is in
the left, one must never, never switch to red in the right and yellow
in the left.
You can't quite see the difference, but work with patches of both
parities "needs ironing or something".
Joy Beeson
--
joy beeson at comcast dot net
http://roughsewing.home.comcast.net/ -- sewing
http://n3f.home.comcast.net/ -- Writers' Exchange
The above message is a Usenet post.
I don't recall having given anyone permission to use it on a Web site.
>> 1) Does _anyone_ honestly believe that the creation of new works would
>> significantly slow if the terms were half what they are today ? Would
>> anyone say: "Nah, it's only protected for 50 years, so I won't do it, if
>> I could get 110, now THAT would be something!"
> If the publisher stops reprinting (or whatever), the rights should
> revert automatically and strongly to the creator, and if he doesn't do
> anything with them within a very, very short span of time (say, 3
> years), then the work becomes forcibly public domain.
That is but one of many serious problems with today's copyright law. It
doesn't invalidate any of my questions.
Does anyone HONESTLY believe that 110 years is needed to stimulate the
creation of new works ? That creation would significantly lessen if the
terms were half that ? If not, what is the justification for the latest
extensions ?
Abandoned works are one category that may deserve special rules, perhaps
along the lines you mention.
Another category is works where the copyright-holder is unknown. There
should be some procedure for this situation, other than "wait 110 years".
Eivind
FWIW, and to reference something vaguely SF, there was an episode of the
TV series Doctor Who, many years ago which had creepy humanoid robots,
and people suffering from "Robophobia", described as a malady suffered by
those who were hypersensitive to body-language, which robots, didn't
display.
--
=======================================================================
= David --- If you use Microsoft products, you will, inevitably, get
= Mitchell --- viruses, so please don't add me to your address book.
=======================================================================
> > The "uncanny valley" is just a hypothesis. There's no consensus
> > that the effect even exists, much less what the cause may be.
> But needlework people have been harping on it for generations.
> For example, cross stitches *must*, MUST all be crossed in the same
> direction. And one must not switch hands when knitting with two
> yarns; if the yellow yarn is in the right hand and the red yarn is in
> the left, one must never, never switch to red in the right and yellow
> in the left.
> You can't quite see the difference, but work with patches of both
> parities "needs ironing or something".
Is this a "valley" effect, though? It's not enough that imperfections
make the work look worse. In order to have a "valley" effect, a
slightly imperfect work needs to look much worse than a very
imperfect work. That's what it means for "slightly imperfect" to be
in a "valley".
Isaac Kuo
Hm. Hadn't thought of that before. If that's the case, then it should
be possible to trigger similar effects with completely un-
anthropomorphic objects, though. I've never paid attention to that,
but it fits with what Juho Julkunen said elsethread about older game
graphics looking like symbols of real things, while newer game
graphics look like fake things.
-l.
> On Sep 9, 9:25 pm, Joy Beeson <jbee...@invalid.net.invalid> wrote:
The point is that these imperfections cannot be used to make a
pattern.
If you work satin stitches pointing in different directions, it can be
a bad design choice -- working them all pointing in the same direction
can be a bad design choice -- but mis-matched satin stitches are not a
mistake in themselves, because the direction of the stitches is
obvious to the most casual glance.
If one were to copy a cross-stitch design, replacing each cross with a
square of satin stitch that ran parallel to the upper stitch in that
cross, the design would look better. You might think that the
designer ought not to have done that, but you wouldn't think "sloppy
workmanship" and then have to get out a magnifying glass to see *why*
it's sloppy.
--
Joy Beeson
OK, well, you're thinking ahead now (and I was perhaps being unclear).
You're presupposing a development where a gradual potential for
increased and complete cyberneticization can achieve a smooth
transition between organic and artificial consciousness, and asserting
that the two should be regarded as indistinguishable. In that
situation there is no distinction between man and machine, I'll give
you that.
The situation I was talking about however (or that I meant), is when
only the *appearance* - the looks - are indistinguishable. I was not
presupposing an artificial brain that was indistinguishable from an
organic human brain, because I think that kind of development is quite
a few years away. (Besides, I don't believe that people will ever
replace *all* their parts with cyber-bits; bio-technology will make
organic replacements possible long before such a decision needs to be
made, at which time it has become moot.) I was talking about a created
artificial lifeform which appears human, but isn't, and therefore
would not require being treated as human. I think it would be creepy
and a bad idea to have a lot of such creations running around among
us, passing for humans, and making us uncomfortable about how to
relate to them.
If we created robots with brains that were, for all moral and
humanitarian purposes, indistinguishable from ours, then we would have
created a new and sovereign lifeform. I don't see why we would want
that. If we started doing that, where would it end? Would we use a lot
of resources on just creating armies of sovereign lifeforms and
sending them on their way into the universe? Why? To set ourselves up
as gods? I see the use of creating technology and for us to go forth
and multiply, but I don't see the use of creating new lifeforms just
like that. They would have to be considered equal to us, and would
only consume our resources and generally compete with us about all
sorts of things. Unless of course we made them immensely friendly to
us, but in that case they would essentially be slaves and not have a
proper free will of their own. I don't see how we would ever really
need to create new intelligent lifeforms. We have ourselves. We can
hope to find others around the universe who have evolved naturally,
but, I don't think we're lonely enough to justify creating new
lifeforms just because we can. Again, we have each other. Ideally,
that should be enough. And I am a bit of an idealist, so I believe in
a future where people actually like each other and support each other
and cooperate with each other, even though they don't do that a heck
of a lot now.
- Tue
Why have sex with anything non-human? That has never been anything but
a minority fantasy, anyway.
> : I envision a future where people don't need that kind of substitutes,
> : because they'll actually be able to relate healthily to each other...
> : but I might be too optimistic.
>
> I envision a future where people aren't ostracized based on what somebody
> holds to be "healthily". You may say I'm a dreamer.
Being a science-minded person, I envision a future where the question
of what's healthy is a factual matter, not somebody's opinion.
> And *are* you in favor of the Prosthetic Body Act of 2048, banning all
> cybernetic prostheses? How about plastic surgery?
No, I'm not in favor of banning prosthetics.
> And finally, why should anybody care?
Because it is creepy to deal with things that look like human beings
but aren't. I'm talking about artificially created lifeforms with
their own type of machine intelligence. I am not talking about human
beings with a lot of replaced parts.
- Tue
That's right. Aren't you looking forward to it?
> >... but I might be too optimistic.
>
> Yeah I think you've proved that at length. People are still going to
> insist on being individuals with individual quirks.
People will always be individuals. But there are different levels on
which people can be individuals.
> >In any case, I think it constitutes some kind of fraud to have a
> >machine impersonate a human being, unless it's for some exceptional
> >purpose (and I can't even think of a good one).
>
> Assuming we have true Artificial Intelligences actually capable of
> everything a human is, then one good reason is so they can participate
> in human society rather than being some kind of disgruntled paraplegic
> in a box.
One of the things you're missing here is that if we created wholly
sovereign new sentient lifeforms (electronic *or* organic), which
would of course have all the rights pertaining to such, then they
would have their own agenda. They might be friendly and they might
not. We don't know. Why run the risk just because we can, when there
isn't the remotest necessity?? *That* is my question.
- Tue
And keep evolving into who knows what... Not a Pandora's Box I'd care
to open.
- Tue
Part of my point is that I believe it would be EXTREMELY IRRESPONSIBLE
of us to create sentient non-human creatures equal or superior to
ourselves. It would be stupidity on the same level as deliberately
shooting ourselves in the foot, or the head. Very, *very* serious
thinking, deliberation and legislation should precede any initiative
of such a nature, and I'm not seeing these questions being treated in
our society today. Hence, I worry.
- Tue
My point is really that sentience is a completely different ballpark
of complexity than our usual questions in science and technology. If
we create a truly sentient artificial lifeform, then we are unleashing
a sort of second sentient species, and unless we enslave them (which I
doubt is even an option, nor something I would recommend unless it was
an emergency and the possibility was extant), then they will be
largely out of our control. We cannot know what they would want, or
do. Of course, we might be able to have some say in that during the
creation process, but if we're creating a lifeform with free will,
then it *will* be able to override much of its initial "programming".
So I'm saying that it is most likely not in our interest to create
such a lifeform. (And also, if it is different from us, not in our
interest to make it humaniform, because it would not then act human.)
- Tue
Brrrrrrr! You'd have to be very far out before you'd leave your
children in non-human care.
> > And what's the point of having
> > artificial people running around like normal people?
>
> Procreation by other means.
I think it's nonsense.
- Tue
>Part of my point is that I believe it would be EXTREMELY IRRESPONSIBLE
>of us to create sentient non-human creatures equal or superior to
>ourselves.
1. What is equal/superior? Stronger? Bigger? Better football
players? More religious? Better space pilots? Better servants?
2. When a tool has been available for the powerful to gain more power
- the powerful don't seem interested in eschewing that tool.
--
"In no part of the constitution is more wisdom to be found,
than in the clause which confides the question of war or peace
to the legislature, and not to the executive department."
- James Madison
Yeah. Disturbing. That's what I meant by creepy, exactly.
On principle, I absolutely have nothing against non-sentient robots -
on the contrary -, I just don't want to run the risk of mistaking them
for human beings.
And as for sentient robots (or artificially intelligent lifeforms in
general), I'm against creating them in the first place. Unless we can
thoroughly control them to the point where they don't start competing
with us for resources, space, etc. Setting them free to pursue
whatever agenda of their own would potentially be extremely hazardous
to humankind, and is an utterly unnecessary risk.
- Tue
> On 6 Sep., 08:04, DJensen <i_m...@yahoo.ca> wrote:
>> On Sep 5, 1:27 pm, Tue Sorensen <sorenson...@gmail.com> wrote:
>>
>>> Although roboticists are definitely working very hard to create robots
>>> that can eventually pass for human, don't you agree that this is
>>> actually a very bad idea?
>>
>> No.
>>
>>> We wouldn't be sure how to react to them.
>>
>> Toddlers in California adapted quickly and treated RUBI and QRIO like
>> other nursery school students: http://www.scienceblog.com/cms/node/8268
>>
>> Now imagine a generation raised in the age of ubiquitous humanoid
>> robots.
>
> Brrrrrrr! You'd have to be very far out before you'd leave your
> children in non-human care.
Errm... it wasn't so long ago that black people were considered by some to be
less than human... but that did not prevent those same in-duh-viduals from
doing exactly that.
>
>>> And what's the point of having
>>> artificial people running around like normal people?
>>
>> Procreation by other means.
>
> I think it's nonsense.
I suspect that a sizable fraction of the population would disagree with you.
>
> - Tue
--
email to oshea dot j dot j at gmail dot com.
Point. I do not object to relating to an artificial lifeform, but I
want to relate to it on its terms, not to perceive it as human when it
isn't. I think the problem with many people's views here is that
people are assuming that a sentient humaniform robot would be on
exactly equal footing with a human being; would by and large be
defined *as* a human being. That it would be like you or me, no more
different from us than some other minority group that was
discriminated against in past (and current) history. But this is just
not the case at all. We're talking about something totally new and
different; a sentient non-human lifeform. We cannot assume that we
would understand it, nor that it would be like us, nor that it would
happily participate in human society. We're talking about a lifeform
that has been given its own free will and its own sovereign rights.
Its agenda might be compatible with ours, and it might not. It would
be *alien* to us, at least to the extent that we can speculate about
it from our current point of view. So no, it's not a question of how
many arms or wheels it has; it's a question of fundamental co-existence
with a non-human lifeform. Created by us for no other reason at all,
judging by the current state of the debate, than because we can.
That's not reason enough. We need to be more responsible than that.
And I believe we will be. People just aren't thinking about it yet.
But they'll have to start pretty soon.
- Tue
Of what use is a baby? For that matter, why do people want children?
: I don't see the use of creating new lifeforms just like that. They
: would have to be considered equal to us, and would only consume our
: resources and generally compete with us about all sorts of things.
Yeah, those dratted children. Bunch of parasites, all of them.
: Ideally, that should be enough. And I am a bit of an idealist, so I
: believe in a future where people actually like each other and support
: each other and cooperate with each other, even though they don't do
: that a heck of a lot now.
But of course, for some reason, they would all compete with, denigrate,
and in general conflict with their new robot buddies.
Specisist bastards.
Wayne Throop thr...@sheol.org http://sheol.org/throopw
Why not? How do you feel about masturbation? Dildoes? Vibrators?
All of these are Right Out in your perfect world?
: Being a science-minded person, I envision a future where the question
: of what's healthy is a factual matter, not somebody's opinion.
Seems extremely unlikely. What's "healthy" depends on what goals are
targeted. What goals are targeted, or should be targeted, simply has
never been, is not now, and I see no reason to expect it ever to be,
a matter of fact.
OK. Simple. Extend the label "human" to them.
After all, what makes a critter trustworthy? Why are children OK?
Why are children smarter than you will ever be OK? And why aren't
robot buddies OK? Is sperm and egg the only OK way? What if the
genes in the sperm or egg are designed? And why is the progeny of
our gonads more OK than the progeny of our minds?
You really seem to have an odd perspective on this.
It seems rather unthinking and knee-jerk to me.
Perhaps it isn't, but if it isn't, you aren't presenting it well.
Species schmecies. Species is simply the new "race".
Pigeonholing folks by what "species" they belong to is no more inherently
rational or "scientific" than pigeonholing by what race one belongs to.
Because their genes have trained them to. Do you want a humaniform
robot as a baby? Or would you prefer your own genetic offspring?
You are projecting humanness onto the robot. There's no guarantee that
this would be a reasonable thing to do. It might, and it might not.
Even if you think of artificial and natural lifeforms as being of
equal significance, then we can still ask what would make you choose
something artificial instead of something natural? The only obvious
reason for choosing the artificial is if you can't have the natural.
And that would be because of some problem that science can eventually
solve. Hence - ideally -, there is no need for the artificial. The
natural would be human. The artificial would be Something Different.
You don't know, currently, just how different, and it is folly to just
assume that it would be just like us. Sentience is more complex than
that, and we're talking about *artificial* sentience here; it could be
extremely different from what human beings possess.
> : I don't see the use of creating new lifeforms just like that. They
> : would have to be considered equal to us, and would only consume our
> : resources and generally compete with us about all sorts of things.
>
> Yeah, those dratted children. Bunch of parasites, all of them.
Natural procreation of humans I'm fine with. Heck, I'd happily embrace
totally artificial procreation of humans. But creating non-human
lifeforms who will be our equals in every way that matters is just to
make something that would needlessly drain our natural as well as
personal resources (which I admit wouldn't matter too much if we had
plenty to spare), and, what's worse, might just possibly become a
great hazard to us due to some clash between humanness and alienness.
It's like choosing aliens instead of our own children. Sane people
shouldn't want that. So I don't see why it should be done. To rush
uncritically towards post-Singularity AI dominance is comparable to
lemmings rushing over the cliff. We need to think about these things
before it's too late.
> : Ideally, that should be enough. And I am a bit of an idealist, so I
> : believe in a future where people actually like each other and support
> : each other and cooperate with each other, even though they don't do
> : that a heck of a lot now.
>
> But of course, for some reason, they would all compete with, denigrate,
> and in general conflict with their new robot buddies.
>
> Specisist bastards.
I assume nothing; you assume a lot. I'm being cautious; you're not.
- Tue
That's what you've been doing. It doesn't work. They *wouldn't* be
human, and you can't say at this point what they'd be like.
- Tue
So, when you said "I believe in a future where people actually like each
other and support each other and cooperate with each other", you weren't
assuming anything? And when I say (by implication) that I believe in a
future where sapient critters of whatever origin like and support each
other, I'm assuming more than you did?
But, somehow, *you* know what they'd be like. They'd be non-human,
and (by implication) somehow untrustworthy. You seem to have a deep-seated
irrational prejudice against non-humans.
>: Part of my point is that I believe it would be EXTREMELY IRRESPONSIBLE
>: of us to create sentient non-human creatures equal or superior to
>: ourselves.
>
>OK. Simple. Extend the label "human" to them.
>After all, what makes a critter trustworthy? Why are children OK?
>Why are children smarter than you will ever be OK? And why aren't
>robot buddies OK? Is sperm and egg the only OK way? What if the
>genes in the sperm or egg are designed? And why is the progeny of
>our gonads more OK than the progeny of our minds?
Would you use technology to improve your children? Of course we do.
>Peter Knutsen skreiv:
>> Eivind wrote:
>
>>> 1) Does _anyone_ honestly believe that the creation of new works would
>>> significantly slow if the terms were half what they are today ? Would
>>> anyone say: "Nah, it's only protected for 50 years, so I won't do it, if
>>> I could get 110, now THAT would be something!"
>
>> If the publisher stops reprinting (or whatever), the rights should
>> revert automatically and strongly to the creator, and if he doesn't do
>> anything with them within a very, very short span of time (say, 3
>> years), then the work becomes forcibly public domain.
>
>That is but one of many serious problems with todays copyright-law. It
>doesn't invalidate any of my questions.
>
>Does anyone HONESTLY believe that 110 years is needed to stimulate the
>creation of new works ? That creation would significantly lessen if the
>terms were half that ?
Yes. There are actually people who honestly believe that. We've heard
from some of them here, in previous discussions of the matter, and you
can find more in e.g. the debates surrounding the Sonny Bono Copyright
Eternification Act.
Some of these people believe that a True Artist has a Moral Right to
control his creations Forever, even after he has sold them, and that
the prospect of losing control over what he creates will dissuade
people from pursuing their art at all. Others, more pragmatically
but also more clearly wrongly, believe that the only way an artist
can pay for his retirement, his children's college education, etc,
is with the ongoing royalty revenues from his Greatest Hits. Toss
in some worst-case assumptions, ignore all the other vastly superior
ways of providing for one's retirement etc, and you get life+50 or
thereabouts.
These people are *wrong*, but that hardly matters. It's an appealing
sort of error for a lot of people, and there are enough of them to give
Disney & friends an effective smokescreen as they maneuver for eternal
copyright for their own purposes.
--
*John Schilling * "Anything worth doing, *
*Member:AIAA,NRA,ACLU,SAS,LP * is worth doing for money" *
*Chief Scientist & General Partner * -13th Rule of Acquisition *
*White Elephant Research, LLC * "There is no substitute *
*John.S...@alumni.usc.edu * for success" *
*661-718-0955 or 661-275-6795 * -58th Rule of Acquisition *
>One of the things you're missing here is that if we created wholly
>sovereign new sentient lifeforms (electronic *or* organic), which
>would of course have all the rights pertaining to such, then they
>would have their own agenda. They might be friendly and they might
Where would this agenda come from?
Most likely, they'd have an agenda we gave them, on account of being
designed by us.
-xx- Damien X-)
Way more. You're assuming that all sort of potential problems won't be
there at all. That's simplistic.
- Tue
As I said in another post, if they have free will, they will be free
to establish an agenda of their own. Why should they be satisfied with
an agenda given them by their creators? Would they be docile disciples
of their master race? Hardly. Sure, if they were designed that way,
but that would be like creating slaves, and so true sentience would
neither be required nor present.
I'm not against giving them an agenda, but this is exactly what I'm
saying that roboticists (and the rest of us) should be debating. If,
for instance, we can agree that Asimov's Three Laws are a good
solution, then we should try to strive for that, when the day comes
that we can create artificial intelligence. All I'm arguing for is
that we think about it, consider the possible dangers, and behave in a
responsible manner! We need to make damn sure that any AIs will not
have an agenda that can be potentially hostile to humans.
- Tue
Certainly not. That's my point: from where we are now, it's unknown.
And hence something to be investigated in detail before we can know
how to deal with it. As with all other things in science.
> They'd be non-human,
> and (by implication) somehow untrustworthy. You seem to have a deep-seated
> irrational prejudice against non-humans.
Only if there is a reason to, and in this case we can't say either
way! There is nothing more moral or rational about trusting artificial
lifeforms than about not trusting them. If we don't know what we're
dealing with, the wise course is to be cautious.
Having said that, I trust human beings to create a good society one
day, and also to create benevolent technology, whether it is sentient
or not. So the only reason I worry is because of how thoughtless
people tend to be today about these things. The more rashly we act,
the more we endanger ourselves. And we are, after all, still in our
infancy. But this may change soon. So we need to start wising up
pretty soon.
- Tue
You say, "I was talking about a created artificial lifeform which
appears human, but isn't, and therefore would not require being
treated as human. I think it would be creepy and a bad idea to have a
lot of such creations running around among
us, passing for humans, and making us uncomfortable about how to
relate to them."
So far so good. I don't think breaking an AIBO should be reported to
the ASPCA either. But then you say:
"If we created robots with brains that were, for all moral and
humanitarian purposes, indistinguishable from ours, then we would have
created a new and sovereign lifeform."
This is the part I don't understand. Assuming that a robot CAN be
built that is truly "indistinguishable from human"...then what exactly
is the difference between this robot and something else (such as
myself, yourself, or possibly Cher) which is also "indistinguishable
from human"? If it walks like a duck and quacks like a duck...why
should we call it a dog?
>On 6 Sep., 03:28, David Johnston <da...@block.net> wrote:
>> On Fri, 5 Sep 2008 17:22:39 -0700 (PDT), Tue Sorensen
>> <sorenson...@gmail.com> wrote:
>> >On 6 Sep., 00:35, David Johnston <da...@block.net> wrote:
>> >> On Fri, 5 Sep 2008 10:27:37 -0700 (PDT), Tue Sorensen
>> >> <sorenson...@gmail.com> wrote:
>> >> >Although roboticists are definitely working very hard to create robots
>> >> >that can eventually pass for human, don't you agree that this is
>> >> >actually a very bad idea? I think robot humans would be creepy. We
>> >> >wouldn't be sure how to react to them. And what's the point of having
>> >> >artificial people running around like normal people?
>>
>> >> So we can have sex with them of course. What did you think the point
>> >> was?
>>
>> >We already have pets for that (in principle): substitutes for human
>> >company. I envision a future where people don't need that kind of
>> >substitutes, because they'll actually be able to relate healthily to
>> >each other
>>
>> And we'll all be good-looking and have high self esteem, right? None
>> of us will be socially isolated geeks who spend way too much time on
>> the internet...
>
>That's right. Aren't you looking forward to it?
No. I don't particularly look forward to a future in which individual
qualities of personality and form have all been stamped out.
>> Assuming we have true Artificial Intelligences actually capable of
>> everything a human is, then one good reason is so they can participate
>> in human society rather than being some kind of disgruntled paraplegic
>> in a box.
>
>One of the things you're missing here is that if we created wholly
>sovereign new sentient lifeforms (electronic *or* organic), which
>would of course have all the rights pertaining to such, then they
>would have their own agenda.
Of course they would.
They might be friendly and they might
>not. We don't know.
Excuse me? You're foretelling a future in which humans will be easier
to program than machines we made?
Ha! Good one! Oh... you were serious?
Remember the context you yourself set. "Create a brain
indistinguishable from human". What problems is that going to cause
that humans don't already cause?
Gee. Just like the people we've got now. So... why is this a problem,
given that you think the people we've got now are no problem?
: Tue Sorensen <soren...@gmail.com>
: Only if there is a reason to,
And you go on to say
: and in this case we can't say either way!
that we don't have a reason to. So why are you prejudiced
against non-humans again?
Or did you mean, one should only trust what one has reason to trust.
So, what reason do you have to trust a human unknown to you over a non-human
unknown to you? You know, like a real, actual *reason*, as opposed to
"butbutbut... they're *human*. And these other guys *aren't*".
That's not a "reason", that's just prejudice.
> On 6 Sep., 18:25, Peter Knutsen <pe...@sagatafl.invalid> wrote:
>> Tue Sorensen wrote:
>>
> Part of my point is that I believe it would be EXTREMELY IRRESPONSIBLE
> of us to create sentient non-human creatures equal or superior to
> ourselves. It would be stupidity on the same level as deliberately
> shooting ourselves in the foot, or the head. Very, *very* serious
> thinking, deliberation and legislation should precede any initiative of
> such a nature, and I'm not seeing these questions being treated in our
> society today. Hence, I worry.
>
> - Tue
Well, they are being addressed by some of the people working in the field.
Look up "Friendly AI", or "Eliezer Yudkowsky".
--
David
Mitchell
I don't know about Tue, but my "prejudice" here is that unless we knew
_exactly_ what we were doing with regards to programming the AI's in our
spiffy new humaniform robots, or they were less capable than us in some
important way, then we would run the risk of unleashing an intelligence
with unlimited capacity with no way to predict what it would do next.
Given how difficult we find it to write even relatively simple programs
which are bug-free, I think it unlikely we'll ever be able to claim that
we perfectly understand anything as complex as an AI is likely to be.
Look how much fun Asimov had with his three laws, and then imagine an AI
with fifty, or a hundred similar laws.
Compare that with the "human experience": we are all roughly equally
capable, such that it's difficult for one human to outclass all the others,
especially when they work together.
But I think that what you are calling Tue's prejudice is recognition of
the disparity in our ability to predict how a given intelligence will
behave, between Joe average, and a shiny new AI.
We've evolved together, and can read each other fairly well. We know the
range of values that "sane" can take in our species, (and, I suspect,
that some of the more extreme ends of that bell curve have been evolved
out); but an AI is a whole new ball-game - hence Tue's fears - which I
share.
--
=======================================================================
= David --- If you use Microsoft products, you will, inevitably, get
= Mitchell --- viruses, so please don't add me to your address book.
=======================================================================
> On 6 Sep., 11:34, David Mitchell <da...@edenroad.demon.clo.uk> wrote:
>> On Fri, 05 Sep 2008 17:22:39 -0700, Tue Sorensen wrote:
>> > A humaniform robot that could do anything a human could (or more)
>> > would have to be very valuable, and wouldn't be used for very common
>> > tasks, surely.
>>
>> Depends. Since humans can build such robots; and since these robots
>> can do everything humans can do, that tends to imply that they can
>> build others like themselves.
>>
>> That rather suggests to me that they'd very quickly become ubiquitous
>> and therefore cheap.
>
> And keep evolving into who knows what... Not a Pandora's Box I'd care to
> open.
Personally, I'd severely restrict their sentience: make them capable of
following commands, but not initiating actions on their own except under
severe restrictions (ie. only within the home and its grounds), and have
higher-level "meta" commands, which their owners could not change, that
cancel and report "dangerous" commands ("Jeeves - design me a nuclear
weapon!").
I'd also give it a bit more thought than this, but you get the idea.
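The vetting scheme described above could be sketched roughly as follows. This is a minimal illustration only; every name in it (BANNED_TOPICS, vet_command, and so on) is hypothetical, not any real robotics API.

```python
# Hypothetical sketch of the "meta command" idea: owner commands pass
# through a fixed policy layer the owner cannot modify or override.

BANNED_TOPICS = {"nuclear weapon", "nerve agent"}  # placeholder policy list

def vet_command(command: str, inside_home: bool) -> str:
    """Return 'execute', 'refuse', or 'refuse-and-report'."""
    text = command.lower()
    if any(topic in text for topic in BANNED_TOPICS):
        return "refuse-and-report"  # dangerous request: cancel it and log it
    if not inside_home:
        return "refuse"             # no autonomous action outside the grounds
    return "execute"

# The "Jeeves - design me a nuclear weapon!" case from above:
print(vet_command("Design me a nuclear weapon!", inside_home=True))
# -> refuse-and-report
```

The point of the sketch is just that the policy check sits above the command interpreter, so no ordinary command can reach it.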
True, the AI equivalent of grey goo. And I'd agree with that.
But that doesn't seem to be the issue at hand; the issue at hand was
(exaggerating and joshing), given those issues are solved and the AIs are
morally-ethically on an even footing with humans, it's still horribly
squicky and/or irresponsible to have the metallic races mixing with
our wimmin.
I think I would define that as programming, not sentience. Sentience
entails self-awareness in a real sense, and some intelligence, which I
think includes self-reflection and the ability to choose between
several courses of action. Of course, artificial intelligence can be
just that: artificial, i.e. simulated, but then we're not talking
about actual sentience or actual intelligence.
- Tue
Yeah. See?
- Tue
Because human beings have a sense of community among ourselves.
Different forms of life would have a sense of community among
*them*selves. I certainly hope we could co-exist peaceably, but
there's no guarantee that their agenda and ours would be compatible.
And I still don't see why we would create them in the first place,
when we've got ourselves. We should create mindless machines to be our
technological tools, but there's no sense in creating new sentient
lifeforms. At least not unless and until we could be certain of their
benevolence. And if we just go ahead and create 'em without properly
treating this issue, as seems to be the direction the field of
robotics is headed in now, then we don't have that certainty.
- Tue
A lot of the people who've drawn the short end of the stick as regards
"form" will probably disagree with you...
Regardless, I don't see how "good-looking and with high self-esteem"
translates to "stamping out individual qualities of personality and
form". There can be both. Good-looking people don't all look alike.
> >> Assuming we have true Artificial Intelligences actually capable of
> >> everything a human is, then one good reason is so they can participate
> >> in human society rather than being some kind of disgruntled paraplegic
> >> in a box.
>
> >One of the things you're missing here is that if we created wholly
> >sovereign new sentient lifeforms (electronic *or* organic), which
> >would of course have all the rights pertaining to such, then they
> >would have their own agenda.
>
> Of course they would.
>
> > They might be friendly and they might
> >not. We don't know.
>
> Excuse me? You're foretelling a future in which humans will be easier
> to program than machines we made?
No question. As soon as we make machines as complex as us, they will
almost instantly be able to make themselves *more* complex than us.
- Tue
If they're their own sort of thing, then they should be treated as
that. They would demand this.
> You say, "I was talking about a created artificial lifeform which
> appears human, but isn't, and therefore would not require being
> treated as human. I think it would be creepy and a bad idea to have a
> lot of such creations running around among
> us, passing for humans, and making us uncomfortable about how to
> relate to them."
>
> So far so good. I don't think breaking an AIBO should be reported to
> the ASPCA either. But then you say:
>
> "If we created robots with brains that were, for all moral and
> humanitarian purposes, indistinguishable from ours, then we would have
> created a new and sovereign lifeform."
>
> This is the part I don't understand. Assuming that a robot CAN be
> built that is truly "indistinguishable from human"...then what exactly
> is the difference between this robot and something else (such as
> myself, yourself, or possibly Cher) which is also "indistinguishable
> from human"? If it walks like a duck and quacks like a duck...why
> should we call it a dog?
Now, I did say "for all moral and humanitarian purposes,
indistinguishable". The brain chemistry, or technology, and indeed the
overall physiology, could be completely different, including lots of
mechanisms working by different principles than do our body
chemistries. By "moral and humanitarian purposes" I mean something
which should be treated by us in as ethical a manner as we treat each
other. A sovereign lifeform, complete with free will, and basically
having all the same rights as human beings. But the similarity may end
there. And may not. I'm pointing out a possible danger, and arguing
that there's very little real purpose in our creating something like
that, unless we are sure we are in complete control of how it's done
in all details.
- Tue
Only *in case* there's a reason to. Because there *may* be. If we
don't make sure to take every precaution.
> > Or did you mean, one should only trust what one has reason to trust. So,
> > what reason do you have to trust a human unknown to you over a non-human
> > unknown to you? You know, like a real, actual *reason*, as opposed to
> > "butbutbut... they're *human*. And these other guys *aren't*". That's
> > not a "reason", that's just prejudice.
Certainly not. We do have some knowledge of human nature, and unless
we knew exactly what we were doing, we might not have that kind of
knowledge about created sentient non-humans.
> I don't know about Tue, but my "prejudice" here is that unless we knew
> _exactly_ what we were doing with regards to programming the AI's in our
> spiffy new humaniform robots, or they were less capable than us in some
> important way, then we would run the risk of unleashing an intelligence
> with unlimited capacity with no way to predict what it would do next.
>
> Given how difficult we find it to write even relatively simple programs
> which are bug-free, I think it unlikely we'll ever be able to claim that
> we perfectly understand anything as complex as an AI is likely to be.
>
> Look how much fun Asimov had with his three laws, and then imagine an AI
> with fifty, or a hundred similar laws.
>
> Compare with the "human experience", we are all, roughly, equally capable
> such that it's difficult for one human to outclass all the others,
> especially when they work together.
>
> But I think that what you are calling Tue's prejudice is recognition of
> the disparity in our ability to predict how a given intelligence will
> behave, between Joe average, and a shiny new AI.
>
> We've evolved together, and can read each other fairly well. We know the
> range of values that "sane" can take in our species, (and, I suspect,
> that some of the more extreme ends of that bell curve have been evolved
> out); but an AI is a whole new ball-game - hence Tue's fears - which I
> share.
Good to hear! :-)
- Tue
But that's just it! They're not. The debate here is to make sure the
AIs are benevolent before we go ahead and make them equal to ourselves
- which include making damn sure that they won't take over everything
as soon as they're switched on. Not that I believe that will happen,
but it's important to give a voice to such concerns.
- Tue
Thanks, will do!
- Tue
Folks schmolks. Folks is simply the new "species".
But you're dead-wrong. Species are a scientific description; there's
no way different species can be seen as essentially the same. In some
areas, perhaps, but not in all areas. Assuming it was the same weight
as you, would the 50% DNA you have in common with an ant entitle it to
50% of your picnic sandwich? Of course, it might settle for 50% of you
instead, and still be within its rights...
Sentient creatures of any description deserve respect and should enjoy
rights, but you're making the mistake of thinking they would probably
be like us. They would probably be very different. They might be
ammonia-based quantum-brains who would regard us as indigenous fauna
and relegate us to reservations while they took over all industry and
started planning how best to colonize globular clusters millions of
light-years away.
- Tue
> > : Tue Sorensen <sorenson...@gmail.com>
> > : As I said in another post, if they have free will, they will be free
> > : to establish an agenda of their own.
> > Gee. Just like the people we've got now. So... why is this a problem,
> > given that you think the people we've got now are no problem?
> Because human beings have a sense of community among ourselves.
> Different forms of life would have a sense of community among
> *them*selves. I certainly hope we could co-exist peaceably, but
> there's no guarantee that their agenda and ours would be compatible.
Replace "human beings" with "Christians", or "Communists",
or "Jews", or "Americans", or "The Libertarian Party", or any
other subdivision of human beings.
You seem to honestly believe that you're presenting an example
of some sort of "new" problem that doesn't already exist.
> And I still don't see why we would create them in the first place,
> when we've got ourselves.
I can only imagine you've never met a (male) robot geek
before.
We would create them even if only to be companions in our
journey through the universe.
(And if you have to ask, "Why not just a mindless sex toy?",
then I can only imagine you've never met a male robot geek
before.)
> We should create mindless machines to be our
> technological tools, but there's no sense in creating new sentient
> lifeforms. At least not unless and until we could be certain of their
> benevolence. And if we just go ahead and create 'em without properly
> treating this issue, as seems to be the direction the field of
> robotics is headed in now, then we don't have that certainty.
There's no certainty of the benevolence of human beings.
There's absolute certainty of the malevolence of some
human beings.
Isaac Kuo
Good-looking people don't all look equally good. Once you've
eliminated all the ugly people you'll just have set a new standard of
ugly.
>> > They might be friendly and they might
>> >not. We don't know.
>>
>> Excuse me? You're foretelling a future in which humans will be easier
>> to program than machines we made?
>
>No question. As soon as we make machines as complex as us, they will
>almost instantly be able to make themselves *more* complex than us.
Nonsense.
: Tue Sorensen <soren...@gmail.com>
: Yeah. See?
No. Unless you mean we should forthwith stop all production of humans.
You say that almost as if you thought it clarified your position.
Hint: it doesn't. First, who says "humans have a sense of community
among ourselves"? Not noticeably so, for many many humans today, and
for the foreseeable future. Second, who says both humans and others
can't have a sense of community spanning sentient critters?
You're still speaking from pure prejudice, near as I can tell. Jumping to
conclusions ahead of time, based on shaky "it's always been done that
way" assumptions. The same sort of thing you decry in others.