In the distant future I cannot see any reason why people could not be
totally transferred into such a system and give up all their biological
parts. This would make it easier to transport people in star ships.
Another point is that people could live in a totally generated computer
universe, its laws limited only by their imagination, not by the laws of
science, logic or economics. An example might be people living their
lives in a Diablo world. Of course they might choose to live in an ancient
Rome world.
Any thoughts on such matrix links or worlds?
By the way the film was great!!!!!!!!!!!!!!
re life in a computer
>Another point is that people could live in a totally generated computer
>universe. Its laws limited only by their imagination not by the laws of
>science, logic or economics.
Except they are limited by economics, because any given system
only has so much memory and can only run so fast. If my fantasyland
takes up 99% of the system to run, your fantasyland only gets 1%.
They are also limited by the hidden physics and logic of the system.
Nobody wants to live in a system whose real world neighbors decide to
grind it for parts and nobody wants to live in a Microserf world whose
basic laws of physics suffer the Blue Screen of Death every day or so.
--
"About this time, I started getting depressed. Probably the late
hour and the silence. I decided to put on some music.
Boy, that Billie Holiday can sing."
_Why I Hate Saturn_, Kyle Baker
You're not the first to toy with these ideas. I doubt very much that
there's any fundamental bar to such links -- one need only connect to
the surface of the brain and inhibit movement, something we can
already do -- but it would obviously require technology far beyond
ours for the brain/computer interface, for the computer, for the
infrastructure that keeps things going without people, and so forth.
I suppose that if the technology were advanced enough, one could have
some sort of interface and do it part time. Since most of us seem
reluctant to modify ourselves in any but minor ways, I'm inclined to
think such scenarios fanciful. But if it did occur, I think it would
work like a drug of sorts, which makes me wonder whether it wouldn't
perhaps be simpler and more satisfying to trigger the pleasure centers
directly a la Niven.
Too, I doubt that we'll upload our engrams to computers and become
creatures of silicon. Would you? At best, you'd be creating a copy and
committing suicide. If I could be /transformed/ into something
reasonable that didn't die so damned quickly, I'm sure I'd avail
myself of the opportunity, but if it were just a matter of creating a
successor, well, children do that, and offer the possibility of
improvement.
OTOH, one possible scenario for artificial intelligence calls for in
effect duplicating the human mind in silicon, or its successor. It may
not be practical to transfer an entire personality -- it would mean
stimulating and connecting to an awful lot of synapses, effectively
creating a second nervous system, and while again I'm not aware of
anything that makes it /theoretically/ impossible, we'd have to have a
good reason to do it. And I wonder how people will react to the
existence of intelligences so much smarter and longer lived than they.
Josh
See Greg Egan's DIASPORA and SCHILD'S LADDER, in which all these
things are commonplace.
--
Wil McCarthy < http://www.wilmccarthy.com >
Engineer, Columnist, Author, etc.
Never hurry / never rest. -- Goethe
Error Number 43758937594383: You Must Shut Down And Be Reborn.
Once that shows up I hope the person in question will have their most
recent days backed up, otherwise they're gonna have to start over since
the last rebirth.
As for a Matrix-like world, it would be nice, but could a human brain
handle a world with no limits? I seriously doubt it.
--
Yrs,
Richard H. Araujo
"The only freedom which deserves the name is that of pursuing our own
good in our own way, so long as we do not attempt to deprive others of
theirs, or to impede their efforts to obtain it." John Stuart Mill
How is the latter? <Teranesia> just annoyed me.
--
Anton Sherwood, http://www.ogre.nu/
What does "no limits" mean?
Does it mean "no constraints"? If nothing else, our constructs in
Virtual World are limited by logic: e.g. we'll never be able to design
and render a polyhedron with a triangle, a square and a pentagon meeting
at each vertex.
Obviously a finite mind cannot handle infinite bandwidth; so our
interactions with such a world will be limited by time &c.
Do you mean that a mind in such a world is doomed to be damaged either
by boredom or by a surfeit of pleasure?
That might not seem so paltry to someone who's already dying.
The thing that bothers me most about death is that my projects won't be
completed. My copy will carry them on even if it is not "me".
That's hyperbole, but I suppose we know roughly what you really mean.
> An example might be people living their lives in a Diablo world.
> Of course they might choose to live in an ancient Rome world.
Why think so small? I want eight dimensions.
Anton, meet Landru. Landru, Anton . . .
Josh
:
:
Why reduce from our current eleven?
:Hop
:http://clowder.net/hop/index.html
--
I'm not an actor, but I play one on TV!
George W. Harris For actual email address, replace each 'u' with an 'i'.
> Anton Sherwood wrote:
>> Why think so small? I want eight dimensions.
Hop David wrote:
> Any reason you want eight as opposed to 7 or 9?
To study the cartesian product of {3,4,3} and {5,5/2,5}. [As I believe
Hop knows, this notation describes regular polytopes in 4space.]
In other words, not really ;)
Anton Sherwood wrote:
>>> BernardZZ wrote:
>>>
>>>> An example might be people living their lives in a Diablo world.
>>>> Of course they might choose to live in an ancient Rome world.
>>>
>
>> Anton Sherwood wrote:
>>
>>> Why think so small? I want eight dimensions.
>>
>
> Hop David wrote:
>
>> Any reason you want eight as opposed to 7 or 9?
>
>
> To study the cartesian product of {3,4,3} and {5,5/2,5}. [As I believe
> Hop knows, this notation describes regular polytopes in 4space.]
Hmmm. It seems like the vertex figures of {5,5/2,5} are small stellated
dodecahedra. Is this my favorite polychoron, the gofix?
http://clowder.net/hop/gofix/gofix.html
>
> In other words, not really ;)
>
I used to know a guy that was bonkers for 8 dimensions. Unfortunately I
only understood a fraction of his stuff.
Edwin Abbott, author of Flatland, noted that two-dimensional creatures may not
be able to imagine a third dimension because it lies outside of their
experience.
I think it may be possible to have computer games with > 3 spatial
dimensions. If kids spent a large fraction of their time in such an
environment (and many kids are video game addicts) I believe they would
develop the ability to think in 4 spatial dimensions.
BernardZZ wrote:
> The concept of a matrix type system where people are directly linked
> with a computer. Direct computer to brain links are already in a small
> way happening.
>
> In the distant future I cannot see any reason why people could not be
> totally transferred into such a system and give up all their biological
> parts. This would make it easier to transport people in star ships.
I'm not sure this can be done. I believe a high-enough-resolution MRI would
cook the brain. To implant sensors to record the status of each neuron
would be massively invasive and also destroy the brain.
Hop
http://clowder.net/hop/index.html
>The concept of a matrix type system where people are directly linked
>with a computer. Direct computer to brain links are already in a small
>way happening.
> ...
>Any thoughts on such matrix link or worlds.
If a large number of people could directly transcribe their
thoughts into computers as text instead of manually typing, it would
be the equivalent of typing perhaps 1,000 to 2,000 words per minute.
One effect of this would be a huge increase in the amount of text
posted to Usenet.
At first it would not have to. We only have five or six senses,
depending on how you count. These could be manipulated, and we are
starting to be able to do it; then a life in cyberspace is possible.
And they may not even see it as dying. It is like a person today who gets an
artificial heart; in another time the heart was seen as the source of life.
Slowly more and more of a person is transferred, becoming a
computer-brain entity and maybe, in the distant future, a purely
electronic form.
>
> The thing that bothers me most about death is that my projects won't be
> completed. My copy will carry them on even if it is not "me".
Why would you not see it as you?
Do you see yourself as a different person than when you were a kid? I am
sure that almost all your atoms have been replaced since then.
This I find fascinating: for example, we could create universes to
explore based on different postulates.
>
> > An example might be people living their lives in a Diablo world.
> > Of course they might choose to live in an ancient Rome world.
>
> Why think so small? I want eight dimensions.
>
Give it time; the first cars looked like carriages without horses, and the
early stone pillars were designed to look like wood.
Interesting point, and probably true, to some extent.
Analogy: the brain has evolved the ability to do mental arithmetic so
long as there are no more than five elements. We can and do learn to
do mental arithmetic on larger numbers, but it's a bit of a stretch,
whereas a microprocessor that has only a tiny fraction of our
computing capacity can do it easily.
The brain is remarkably malleable in the early years and somewhat so
in later ones, but it's also specialized in that it can best
accommodate certain activities in certain areas.
Josh
I think it would be done biologically, or through some kind of
biological analog (e.g., micromachines). You'd effectively build a
second neural network along the first, and that would capture the
state of the synapses. Presumably, its components would have to be
smaller than or fewer in number than the neurons to minimize damage.
Josh
I haven't read TERANESIA, so I don't know how it compares, but
DIASPORA and SCHILD'S LADDER are logical heirs to PERMUTATION CITY.
Rather abstract, but that's how virtual people would be in any case
-- traveling the cosmos wrapped up in their own esoteric concerns.
As long as you get the data before the brain is destroyed, who cares? I just
hope the process is reliable.
-ash
for assistance dial MYCROFTXXX
:
:As long as you get the data before the brain is destroyed, who cares? I just
:hope the process is reliable.
You're given two processes: the one above, and
another that perfectly replicates your mental state *without*
damaging your brain, but then someone shoots you in the
head.
Which is preferable, and why?
: -ash
: for assistance dial MYCROFTXXX
--
"Intelligence is too complex to capture in a single number." -Alfred Binet
George W. Harris wrote:
> On 27 May 03 21:43:34 -0500, "Ash Wyllie" <as...@lr.net> wrote:
>
> :
> :As long as you get the data before the brain is destroyed, who cares? I just
> :hope the process is reliable.
>
> You're given two processes: the one above, and
> another that perfectly replicates your mental state *without*
> damaging your brain, but then someone shoots you in the
> head.
>
> Which is preferable, and why?
>
> : -ash
> : for assistance dial MYCROFTXXX
>
This seems to be a resurfacing of an old argument. It usually crops up
in threads that postulate a teleportation device that destroys the
original but sends data to reassemble a copy.
As I recall, those threads generate a lot of smoke and fury but little
meeting of the minds.
What I mean is that the organs that provide information to our brain from
our senses have built-in overload protections. Your eyes can only see in
a certain spectrum of light, for example. Removing that kind of
restriction and allowing info to flow unfiltered into the brain would
drive a lot of people nuts, I think. Hell, most people can't seem to
handle the idea of a flashback in a movie; how are they going to
comprehend additional sensory info or the minutiae involved in making
yourself "fly" in such a limitless world?
--
Yrs,
Richard H. Araujo
"There's only one basic human right, the right to do as you damn well
please. And with it comes the only basic human duty, the duty to take
the consequences."
P.J. O'Rourke
>On 27 May 03 21:43:34 -0500, "Ash Wyllie" <as...@lr.net> wrote:
>:
>:As long as you get the data before the brain is destroyed, who cares? I just
>:hope the process is reliable.
> You're given two processes: the one above, and
>another that perfectly replicates your mental state *without*
>damaging your brain, but then someone shoots you in the
>head.
> Which is preferable, and why?
That's a toughie... But I'll go with the shot in the head. The shooter can
wait until the readout has been verified. Method 1 is a little too final for my
tastes.
:Gently extracted from the mind of George W. Harris;
You've got two processes: one that perfectly
replicates your mental state *without* damaging your
brain, followed by a bullet in the brain, and one that
just flashes lights in your eyes for ten minutes,
followed by a bullet in the brain.
What is the practical difference to you?
: -ash
--
When Ramanujan was my age, he had been dead for eight years. -after Tom Lehrer
Most fiction about upload life (not counting Egan's `Jewel' stories)
postulates an `exoself', accessory software that maintains a comfortable
and convenient degree of abstraction between `you' and either the bits
or the spectrum. (For at least a while, virtual eyes will have virtual
rods and cones, not N(>3)-plane rasters.*) You give little conscious
attention to the subtle task of walking; you wouldn't want to edit a
file with explicit commands to the disk; why devote conscious attention
to maintaining the illusory world?
Of course there will be `speed freaks', who strip away as much as they
can of the abstractions, and drive their conscious bandwidth to the
limit; and other addictions that we cannot now imagine. If to avoid
them it is ethical to confine the addiction-prone to the cage of flesh
for their own good, it is equally ethical to confine them in a cage of
software. Perhaps all exoselves will have that function ... hm, in
which case the addicts become crackers ...
(*For some purposes I want my virtual eye to be topologically spherical.
I wonder how to minimize anisotropies - unless with amorphously-packed
virtual rods-and-cones!)
BernardZZ wrote:
> Why would you not see it as you?
> Do you see yourself as a different person than when you were a kid?
> I am sure that almost all your atoms have been replaced since then.
I've never replaced my brain with another medium before; I trust
analogies only so far.
If it's gradual, I will feel the continuity. If it's abrupt, I refuse
to bet on how I'll feel about it; but the point of my remark above is
that *whether or not* I-in-meat and I-in-bits consider each other the
same person, the copy will have the same memory of thinking about
certain problems repeatedly over the years, and therefore I feel my copy
is equivalent to me in how it treats such problems afterward.
Ah, and I found DIASPORA a substantial improvement on P CITY ...
I wrote:
> [silly answer]
> In other words, not really ;)
I could give some more balonious answers; seriously, four is my minimum
demand*, and doubling it gives an appropriate margin of error.
*Gotta see the sixteen regular 4-polytopes, and various exotic 3-spaces,
both inside and outside. I want to run a-life and see how plants and
animals grow; how hyper-spiders build their webs, and whether they're
efficient; how phyllotaxis works, how vines climb. Stuff like that.
I don't grok 4space well enough to say whether that's {5,5/2,5} or {5/2,5,5/2}.
> bro...@pobox.com said...
>> That's hyperbole, but I suppose we know roughly what you really mean.
BernardZZ wrote:
> This I find fascinating: for example, we could create universes
> to explore based on different postulates.
Sure, me too. A world in the shape of a Klein bottle, gravity as a
turbulent liquid, sex in hyperspace (beyond convex and concave!), Mister
Tompkins in Wonderland, Toon Town ....
But if Fermat's "Last Theorem" (for example) is true, you can't simulate
a world in which it isn't, not without sleight-of-hand that (for me)
defeats the purpose; so it is limited by logic. Economics first appears
when someone has to divide finite attention among various tasks and
interests, and I don't see how to avoid that without infinite computing
resources.
(In the world of the story that I'm never going to write, one can fork
one's consciousness into multiple versions that share one long-term memory.)
I hoped the screensaver of a tumbling hypercube would help me reach
bodhi, but no luck so far. Probably it needs stereo vision; whatever
part of our mind generates 3-space models from 2-space images may learn
to see that 3-space projection in turn as the view from one 4-space eye.
(I fear we're stuck with wireframe for the near future.)
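For anyone who wants to play with that, here is a rough sketch of the sort
of thing such a screensaver would compute each frame: rotate the tesseract's
sixteen vertices in one 4-space plane, then perspective-project down to
3-space and again to the screen. Plain Python, with names invented purely for
illustration, not taken from any real graphics package:

    import math
    from itertools import product

    def tumble_frame(t, d4=3.0, d3=3.0):
        # One wireframe frame: rotate the 16 tesseract vertices in the x-w
        # plane by angle t, then perspective-project 4D -> 3D -> 2D.
        points = []
        for x, y, z, w in product((-1.0, 1.0), repeat=4):
            x, w = (x * math.cos(t) - w * math.sin(t),
                    x * math.sin(t) + w * math.cos(t))
            s4 = d4 / (d4 - w)                  # project away the w coordinate
            x3, y3, z3 = x * s4, y * s4, z * s4
            s3 = d3 / (d3 - z3)                 # then project away z, as usual
            points.append((x3 * s3, y3 * s3))
        return points                           # edges join vertices that differ
                                                # in exactly one coordinate

    print(tumble_frame(0.3)[:4])

Feeding the intermediate 3-space stage to a stereo pair, instead of flattening
it, would be exactly the stereo-vision step suggested above.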
> Hop:
>> I'm not sure this can be done. [...any scan too destructive...]
BernardZZ:
> At first it would not have to. We only have five or six senses,
> depending on how you count. These could be manipulated, and we are
> starting to be able to do it; then a life in cyberspace is possible.
Ah, the Brain In Vat scenario rather than the full monty. ;)
When you said "give up all their biological parts" we thought you meant
including the brain.
Better yet, quaternionic mass: if the product of two masses
is other than pure real, the force between them is partly lateral.
It probably will be gradual. And there is no copy.
One scenario might be that, to experience this world at least part time, your
brain is temporarily linked to a computer. Functions that a
computer can do better, e.g. mathematical calculations and information
storage, are directly linked into your brain. In time, if only through old age,
as your organs fail one by one you cannot live anywhere else. Besides, you
live better in the matrix. In time you and the computer system become
one.
You can then do things that you could never do in the real world like
travel in space.
In time, I suspect yes they will give up all their biological parts.
I was thinking myself of the Mr Tompkins story in a quantum universe where
the speed of light was pretty small.
> Toon Town ....
Or what about worlds based on various human philosophies or religions?
Also, like writers today, the future might have artists that create
universes. Instead of watching B5 we live in the B5 universe, where FTL
vessels are possible and telepaths can read our deepest thoughts.
>
> But if Fermat's "Last Theorem" (for example) is true, you can't simulate
> a world in which it isn't, not without sleight-of-hand that (for me)
> defeats the purpose; so it is limited by logic.
Yes. Yet there is no reason why an infinite number of worlds could not be
constructed that had different axioms; the only requirement is that they
are consistent.
> Economics first appears
> when someone has to divide finite attention among various tasks and
> interests, and I don't see how to avoid that without infinite computing
> resources.
Agreed. From our point of view such a person would be living in a world
where they are infinitely rich, yet from their point of view they may not
be.
>
> (In the world of the story that I'm never going to write, one can fork
> one's consciousness into multiple versions that share one long-term memory.)
>
The trick would be to bring them back together.
Anton Sherwood wrote:
> Anton Sherwood wrote:
> > . . . gravity as a turbulent liquid, . . .
>
> Better yet, quaternionic mass: if the product of two masses
> is other than pure real, the force between them is partly lateral.
>
Isn't the basis of quaternions 3 directed quantities and one
directionless quantity? (It might be more convenient to call the
quaternion space H)
It sounds something like mass and the three spatial dimensions to me.
The world we now live in is a computer simulation.
Prove me wrong!
Ray
> Isn't the basis of quaternions 3 directed quantities and one
> directionless quantity? (It might be more convenient to call the
> quaternion space H)
Formally it's defined as
q = a + b i + c j + d k,
which defines a non-commutative algebra where i^2 = j^2 = k^2 = i j k =
-1. Usually you can think of a quaternion as consisting of a scalar (a)
and a 3-vector (b i + c j + d k), though remember that the i, j, and k
basis quaternion units are not actually the unit basis vectors. In
fact, in some notations, one simply writes a quaternion as an ordered
pair of the scalar and the vector, e.g., q = (t, v).
> It sounds something like mass and the three spatial dimensions to me.
In computer science it's usually used for its rotational properties;
using quaternions to specify rotations is more stable than doing the
usual matrix multiplications, and has the benefit that when used this
way a quaternion product of the form
q (0, p) q^-1
(where (0, p) is the quaternion formed from the scalar 0 and a vector p
representing the point to be transformed) can represent a rotation of
theta radians around a unit vector N when
q = [cos (theta/2), sin (theta/2) N].
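A minimal Python sketch of that recipe, in case anyone wants to see the
numbers come out (standard library only; the helper names are made up for
illustration, not from any particular package):

    import math

    def qmul(q, r):
        # Hamilton product of quaternions given as (w, x, y, z) tuples.
        w1, x1, y1, z1 = q
        w2, x2, y2, z2 = r
        return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2)

    def rotate(p, axis, theta):
        # Rotate 3-vector p by theta radians about the unit vector `axis`,
        # using q (0, p) q^-1 with q = [cos(theta/2), sin(theta/2) N].
        s = math.sin(theta / 2)
        q = (math.cos(theta / 2), axis[0]*s, axis[1]*s, axis[2]*s)
        qc = (q[0], -q[1], -q[2], -q[3])   # conjugate = inverse for a unit quaternion
        w, x, y, z = qmul(qmul(q, (0.0,) + tuple(p)), qc)
        return (x, y, z)

    # A quarter turn of the x axis about the z axis should give the y axis.
    print(rotate((1, 0, 0), (0, 0, 1), math.pi / 2))   # ~(0.0, 1.0, 0.0)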
--
Erik Max Francis && m...@alcyone.com && http://www.alcyone.com/max/
__ San Jose, CA, USA && 37 20 N 121 53 W && &tSftDotIotE
/ \ Never use two words when one will do best.
\__/ Harry S. Truman
> I was thinking myself of Mr Tompkin story in a quantum universe where
> the speed of light was pretty small.
That would be describing relativity, not quantum mechanics. I think
there might be a Tompkins story about quantum mechanical effects, as
well, but it wasn't the one you were thinking of.
> The world we now live in is a computer simulation.
>
> Prove me wrong!
If there's no possible way to disprove a statement, then it is
unscientific.
Rubik hypercube: http://www.superliminal.com/cube/cube.htm
and in Java: http://java.magenet.com/rubik/
A game resembling Diplomacy has (according to rumor) been played on 4^4
cells of a hypercubic lattice, though without computers.
Better yet, quaternionic mass: if the product of two masses
is other than pure real, the force between them is partly lateral.
Let's see. The lateral force is the cross product of the attractive
force (or rather, what it would be if the mass product were 1) and the
dimensionless vector corresponding to the imaginary part of the mass
product. But oh dear, a=F/m; divide a vector force by a quaternion
mass? ... I wish I'd paid more attention to geometric algebra.
> Let's see. The lateral force is the cross product of the attractive
> force (or rather, what it would be if the mass product were 1) and the
> dimensionless vector corresponding to the imaginary part of the mass
> product. But oh dear, a=F/m; divide a vector force by a quaternion
> mass?
Division is well-defined in quaternion algebra; it's just multiplication
by the inverse:
a/b = a b^-1
where the inverse q^-1 of a quaternion is
q^-1 = q'/(q q')
where q' is the quaternion conjugate.
Conversions from 3-vectors to quaternions are usually implied by making
a quaternion given by (0, v), where v is the 3-vector. So v/q would be
(0, v) q^-1 = (0, v) q'/(q q').
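And a companion sketch of that inverse, repeating the Hamilton product so it
stands alone (again, invented names, nothing from a real library):

    def qmul(q, r):
        # Hamilton product of (w, x, y, z) quaternions.
        w1, x1, y1, z1 = q
        w2, x2, y2, z2 = r
        return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2)

    def qinv(q):
        # q^-1 = q'/(q q'); q q' is just |q|^2, a plain real number.
        n2 = sum(c * c for c in q)
        w, x, y, z = q
        return (w / n2, -x / n2, -y / n2, -z / n2)

    def qdiv(a, b):
        # a / b defined as a b^-1.
        return qmul(a, qinv(b))

    q = (1.0, 2.0, 3.0, 4.0)
    print(qmul(q, qinv(q)))   # ~(1.0, 0.0, 0.0, 0.0)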
> I wish I'd paid more attention to geometric algebra.
There are (in my opinion) better pure treatments of geometric algebras;
the behavior of the basis units (i, j, k) seems a little weird in
quaternion algebra. Something more general like Clifford algebras seems
to be a much better approach.
--
Erik Max Francis && m...@alcyone.com && http://www.alcyone.com/max/
__ San Jose, CA, USA && 37 20 N 121 53 W && &tSftDotIotE
/ \ I am an island / A little freak of melancholy
\__/ Lamya
I can't argue with that one.
Of course, the fact that it isn't a scientific statement doesn't make it
useless.
Ray
It's not obvious from the above that
ij = -ji = k,
jk = -kj = i,
ki = -ik = j;
which, obviously, is what makes quaternion multiplication non-commutative.
> Ray Drouillard wrote:
>
>> The world we now live in is a computer simulation.
>>
>> Prove me wrong!
>
> If there's no possible way to disprove a statement, then it is
> unscientific.
But, then, if your statement cannot be disproven, then it is also
unscientific. So, how can scientific things be defined by unscientific
things?
Erik Max Francis wrote:
> Hop David wrote:
>
>
>>Isn't the basis of quaternions 3 directed quantities and one
>>directionless quantity? (It might be more convenient to call the
>>quaternion space H)
>
>
> Formally it's defined as
>
> q = a + b i + c j + d k,
>
> which defines a non-commutative algebra where i^2 = j^2 = k^2 = i j k =
> -1. Usually you can think of a quaternion as consisting of a scalar (a)
> and a 3-vector (b i + c j + d k), though remember that the i, j, and k
> basis quaternion units are not actually the unit basis vectors. In
> fact, in some notations, one simply writes a quaternion as an ordered
> pair of the scalar and the vector, e.g., q = (t, v).
>
>
>>It sounds something like mass and the three spatial dimensions to me.
>
>
> In computer science it's usually used for its rotational properties;
> using quaternions to specify rotations is more stable than doing the
> usual matrix multiplications, and has the benefit that when used this
> way a quaternion product of the form
>
> q (0, p) q^-1
>
> (where (0, p) is the quaternion formed from the scalar 0 and a vector p
> representing the point to be transformed) can represent a rotation of
> theta radians around a unit vector N when
>
> q = [cos (theta/2), sin (theta/2) N].
Do they use quaternions to rotate the vector graphics in Adobe Illustrator?
Erik Max Francis wrote:
> Anton Sherwood wrote:
>
>
>>Let's see. The lateral force is the cross product of the attractive
>>force (or rather, what it would be if the mass product were 1) and the
>>dimensionless vector corresponding to the imaginary part of the mass
>>product. But oh dear, a=F/m; divide a vector force by a quaternion
>>mass?
>
>
> Division is well-defined in quaternion algebra, it's just multiplication
> by the inverse:
>
> a/b = a b^-1
>
> where the inverse q^-1 of a quanternion is
>
> q^-1 = q'/(q q')
>
> where q' is the quanterion conjugate.
>
> Conversions from 3-vectors to quanternions are usually implied by making
> a quaternion given by (0, v), where v is the 3-vector. So v/q would be
> (0, v) q^-1 = (0, v) q'/(q q').
It's neat that quaternion products contain both a negative dot product
and a cross product of the pure parts. And everything cancels out except
the dot product in q q' so that q q' is just |q|^2 as the equation
q^-1 = q'/(q q') suggests.
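Spelling that out in the same notation, for anyone following along: for pure
quaternions the product is

    (0, a)(0, b) = (-a.b, a x b),

and more generally (s1, v1)(s2, v2) = (s1 s2 - v1.v2, s1 v2 + s2 v1 + v1 x v2).
Plugging in q = (s, v) and q' = (s, -v) gives

    q q' = (s^2 - v.(-v), s(-v) + s v + v x (-v)) = (s^2 + v.v, 0) = |q|^2,

so q^-1 = q'/(q q') really is just the conjugate scaled by 1/|q|^2.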
>
>
>>I wish I'd paid more attention to geometric algebra.
>
>
> There are (in my opinion) better pure treatments of geometric algebras;
> the behavior of the basis units (i, j, k) seems a little weird in
> quanternion algebra. Something more general like Clifford algebras seem
> to be a much better approach.
>
I like to think of specific angular momentum as area swept out over
time (Thus constant specific angular momentum is equivalent to Kepler's
law that equal area is swept out over equal time). And a wedge product
(or bivector) is much more suggestive of area than a cross product. So
specific angular momentum as a wedge product is more appealing to me
than as a cross product.
Also the anticommutativity of a wedge product is much easier for me to see:
     a/\b                     b/\a
    ________>               _________
   ^   b   /               /        ^
  /a      /               /       /a
 /       /               /       /
/_______/               /______>/
                             b
a/\b spins clockwise and b/\a counterclockwise. It is visually obvious
that a/\b = -b/\a.
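A two-line numeric check of that antisymmetry, for whatever it's worth (in the
plane the bivector has a single component, the signed area of the
parallelogram):

    def wedge2(a, b):
        # a /\ b in the plane: the signed area of the parallelogram on a and b.
        return a[0] * b[1] - a[1] * b[0]

    a, b = (3.0, 0.0), (1.0, 2.0)
    print(wedge2(a, b), wedge2(b, a))   # 6.0 -6.0, i.e. a/\b = -(b/\a)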
Anton, there is a little book on Clifford algebra that I like a lot. The
author uses visual, geometric demonstrations, so it is more accessible
to a yokel like me (as well as a lot of esoteric algebra arguments way
over my head).
http://www.amazon.com/exec/obidos/tg/detail/-/0521005515/qid=1054312527/sr=1-1/ref=sr_1_1/002-4170980-1017615?v=glance&s=books
Pertti was an avid polyhedronist. Unfortunately he was also an ornery,
egotistical bastard. But that doesn't detract from his wonderful
geometry, in my opinion.
> Do they use quaternions to rotate the vector graphics in Adobe
> Illustrator?
Given that Illustrator doesn't use 3D geometry, you can probably figure
that one out yourself :-).
--
Erik Max Francis && m...@alcyone.com && http://www.alcyone.com/max/
__ San Jose, CA, USA && 37 20 N 121 53 W && &tSftDotIotE
/ \ Ten lands are sooner known than one man.
\__/ (a Yiddish proverb)
> But, then, if your statement cannot be disproven, then it is also
> unscientific. So, how can scientific things be defined by unscientific
> things?
That statement comes from the basis of scientific method. If you
suggest that it is unscientific because it is based on the axioms of the
scientific approach, that means that all scientific endeavor is as well.
--
Erik Max Francis && m...@alcyone.com && http://www.alcyone.com/max/
__ San Jose, CA, USA && 37 20 N 121 53 W && &tSftDotIotE
> They are also limited by the hidden physics and logic of the system.
> Nobody wants to live in a system whose real world neighbors decide to
> grind it for parts and nobody wants to live in a Microserf world whose
> basic laws of physics suffer the Blue Screen of Death every day or so.
The inhabitants of such a world would never even know about a smooth crash
and reboot - but data corruption and flaky operation would be very, very
bad.
OBSF - various stories by Egan and at least one by Geoffrey Landis.
Schild's ladder might not be the best choice for exploring the aspect of
Egan's work which deals with what it means to be a computer
simulation. By the time of Schild's ladder, the idea is old-hat enough not
to get a lot of notice; IMO the action is focused on other themes.
I think "Permutation City" would be a better choice - if I'm remembering the
title correctly. I'm specifically thinking of the "copy" counting from 1 to
10, except that the intermediate states weren't computed, only the state
where "10" was reached.
I rather liked Schild's ladder, and I didn't get through Teranesia before I
had to return it to the library - I'm not sure what it's even about; I may
give it another try someday.
By making the AI personalities QUSP's, Egan has actually made them more
human and easier to relate to. This is a better device than having
something about the human brain collapse the wavefunction as he used in
"Quarantine". The QUSP's are essentially "sealed off" from the universe
while they are processing their algorithms after getting the input data -
this makes them act very classically. The name comes from "quantum
singleton processor", IIRC.
You'd expect a lot of quantum weirdness to get in the way of the story if
you're writing a story about quantum gravity and the creation of a new
universe via vacuum collapse, but Egan manages to avoid this via this
device. There are a few times where quantum weirdness does pop up, though -
some use is made of the fact that Our Heroes are apparently in a "pure
quantum state" (as opposed to a "mixed quantum state") at one critical
point in the story. Also, understanding the relation between a pure quantum
state and a vector makes the "Schild's Ladder" analogy more meaningful.
Why so small - according to some theories you already have 9 or 10
(depending on whether or not you count time) :-) :-)
Seriously, one bit of humor from Schild's ladder comes to mind. One of the
characters who has always been embodied is talking to a character who has
lived most of their life without being embodied but is taking on a body to
investigate the vacuum collapse phenomenon that's the center point of the
story.
The embodied character is feeling confused by a loss of up-down orientation,
and asks the non-embodied character if they have some of the same conventions
and confusion.
The non-embodied character replies to the embodied character that he
actually lived in an abstract manifold as a child (he gives us the name,
unfortunately I don't recall it, but it was definitely a higher dimensional
space).
Also, the non-embodied character related to the embodied character how he
used to study differential equations by living in their phase spaces for a
while.
>Too, I doubt that we'll upload our engrams to computers and become
>creatures of silicon. Would you?
Of course.
>At best, you'd be creating a copy and
>committing suicide.
That's a religious belief, a question of what one regards as the
definition of the soul. You appear to identify it with the physical
matter composing the brain; okay. I identify it with the pattern of
information composing the mind; to me, an upload would be literal
immortality (until the heat-death of the universe at least). Like
other religious beliefs, debate on this one isn't likely to be
fruitful, but recognizing the difference of opinion might be.
--
"Sore wa himitsu desu."
To reply by email, remove
the small snack from address.
http://www.esatclear.ie/~rwallace
>In computer science it's usually used for its rotational properties;
>using quaternions to specify rotations is more stable than doing the
>usual matrix multiplications
I'm curious - what, then, are the corresponding disadvantages that
make them less commonly used than matrix multiplication, despite their
greater stability?
> I think it may be possible to have computer games with > 3 spatial
> dimensions. If kids spent a large fraction of their time in such an
> environment (and many kids are video game addicts) I believe they would
> develop the ability to think in 4 spatial dimensions.
I'm not sure that anything much more complex than 4-d tic tac toe would be
very interesting to play with current display technology.
BTW - 4-d tic tac toe isn't hard to visualize. Just set up a 2d array of
tic tac toe boards. I'd recommend a 5x5x5x5 board. I have a conjecture
that n-dimensional tic-tac-toe is only interesting on an (n+1)^n-cell
board, but no proof. The boards get very large very quickly.
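For what it's worth, such a board is tiny to represent, and the win check
generalizes to any number of dimensions. A rough Python sketch (names invented
for illustration, not from any game library):

    from itertools import product

    N, DIM = 5, 4                        # the 5x5x5x5 board suggested above
    board = {}                           # cell (i, j, k, l) -> 'X' or 'O'

    DIRECTIONS = [d for d in product((-1, 0, 1), repeat=DIM) if any(d)]

    def wins(player):
        # A winning line is N cells in a row along any direction whose
        # components are each -1, 0, or +1 (not all zero).
        for start in product(range(N), repeat=DIM):
            for d in DIRECTIONS:
                cells = [tuple(s + i * step for s, step in zip(start, d))
                         for i in range(N)]
                if all(all(0 <= c < N for c in cell) for cell in cells) and \
                   all(board.get(cell) == player for cell in cells):
                    return True
        return False

    board[(0, 0, 0, 0)] = 'X'
    print(wins('X'))                     # False: one mark is not a line of five

Displaying it as the 2d array of boards described above is then just a matter
of laying the (i, j) planes out in a grid.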
I think a 4d shooter on a 2d screen would be extremely confusing.
As far as strategy games go, in a lot of places where a 3d universe makes
sense, a 2d universe is used, due to control and display difficulties.
"Cataclysm" and its predecessor (which I haven't played) "Homeworld" are an
exception, being able to pull off a real-time strategy game that's actually 3
dimensional. The interface is still rather hard (the 2d display is one of
the main stumbling blocks), and I don't think it would be easy to create a
4-d game without at least a true 3-d display.
> I'm curious - what, then, are the corresponding disadvantages that
> make them less commonly used than matrix multiplication, despite their
> greater stability?
Probably their relative obscurity.
--
Erik Max Francis && m...@alcyone.com && http://www.alcyone.com/max/
__ San Jose, CA, USA && 37 20 N 121 53 W && &tSftDotIotE
/ \ It's just another day / And nothing's any good
\__/ Sade
> Isn't the basis of quaternions 3 directed quantities and one
> directionless quantity? (It might be more convenient to call the
> quaternion space H)
My understanding is that quaternions are essentially the same as the
matrices used to describe spin in quantum mechanics - the Pauli matrices.
These matrices physically describe the state of fermions, particles with
spin-1/2, such as electrons.
IIRC quaternions live in the same space, which I gather is called SU(2).
It turns out there is a very interesting relationship between SU(2) and the
rotations of normal 3-dimensional space (a group known as SO(3)).
The relationship is that there is a natural 2-to-1 mapping from SU(2)
onto SO(3) which preserves the multiplication rules.
Mathematically, it is said that SU(2) is a "double cover" of SO(3).
As I mentioned before, fermions basically "live in" SU(2), so you have to
rotate them through 720 degrees to restore them to their original state.
Rotating a fermion through 360 degrees inverts the sign of its
wavefunction - this is not noticeable by direct measurement, but it is
noticeable by interference measurements.
Thus if you have a pair of fermions (electrons) in a correlated state, you
can observe interference patterns change when you rotate one of the
electrons through 360 degrees.
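To make the 720-degree point concrete in the quaternion notation used
upthread: a rotation by theta about a unit axis N corresponds to

    q = [cos(theta/2), sin(theta/2) N],

so theta = 360 degrees gives q = (-1, 0, 0, 0), while theta = 720 degrees
gives q = (+1, 0, 0, 0). Conjugating a vector by -1 is the same as conjugating
by +1, so both are the identity rotation on ordinary 3-vectors, but the
quaternion (the SU(2) element) itself only comes back to where it started
after the full 720 degrees - that's the double cover in one line.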
I think Greg Egan has some discussion of SU(2) on his webpage, though I'm
not sure how clear it was (I think I got a bit confused last I looked at it,
but I didn't study it very long).
> My understanding is that quaternions are essentially the same as the
> matrices used to describe spin in quantum mechanics - the Pauli
> matrices.
> These matrices physically describe the state of fermions, particles
> with
> spin-1/2, such as electrons.
>
> IIRC quaternions live in the same space, which I gather is called
> SU(2).
More general geometric algebras, like Clifford algebras, do a remarkably
natural job of expressing these kinds of spaces, as well. See, for
instance, "Imaginary Numbers are not Real -- the Geometric Algebra of
Spacetime" (Gull, Lasenby, and Doran). I'm pretty sure it's in arxiv.
Egan sounds interesting. I'll check out some of his books next library
trip. What are his best? If I read several, should they be in a
particular order?
One you'd watch with those red and blue 3-D glasses?
Or possibly if you're wearing Charlie Stross head sets, the left
lens/screen and right lens/screen would have slightly different images.
> More general geometric algebras, like Clifford algebras, do a remarkably
> natural job of expressing these kind of spaces, as well. See, for
> instance, "Imaginary Numbers are not Real -- the Geometric Algebra of
> Spacetime" (Gull, Lasenby, and Doran). I'm pretty sure it's in arxiv.
I couldn't find your reference, but I did find
http://www.mrao.cam.ac.uk/~clifford/introduction/intro/intro.html
which I found very interesting. I do recall that many of the "big guns" in
sci.physics.research have been very enthusiastic about Clifford algebras for
a long time; I'm now getting a better feeling for why.
> I think "Permutation City" would be a better choice - if I'm
> remembering the title correctly. I'm specifically thinking of the
> "copy" counting from 1 to 10, excpet that the intermediate states
> weren't computed, only the state where "10" was reached.
Yes, this is the book, and this example is one of the flaws in the book: you
can't skip states in a CA, or run them backwards, as Egan does.
~mark
> Egan sounds interesting. I'll check out some of his books next library
> trip. What are his best? If I read several, should they be in a
> particular order?
I'd basically grab whatever you can find. (I didn't care for Teranesia
much, but YMMV.) The order in which you read them is probably not critical,
but you can't go too far wrong by reading his earlier works first. (His
writing skills have advanced some since his first books, but a lot of his
early ideas were still very interesting and you have the advantage of seeing
his ideas develop if you read the earlier works first.)
You can try a google for "SF reviews Egan", there are a number of hits.
One hit on "Schild's ladder" gives only 2 stars (I would rate it higher
myself).
"Greg Egan has really got into this hard-science, big question vein of
writing. Of course, I've no idea whether it's really hard science. I'm
ludicrously out of my depth with this cosmological mathematical stuff. For
example, take a gander at this:
So decoherence hides superpositions of different charge states from us, but
not different position states. Our failure looks classical, our success is
quantum mechanical."
end quote from review
I guess I can sympathize to some extent with the reviewer's confusion, but
personally I really enjoy reading a book where the author knows what the
difference between a superposition and a mixture is, uses the terms
correctly, and makes it part of the plot. But of course this sort of
fiction isn't going to appeal to everyone.
Egan has also added a bit of explanation at the end of his more recent books
as to which parts of his writing are fiction - this particular topic quoted
in the review is addressed in his afterword, in fact.
I would think it would depend on the specific algorithm as to whether or not
you could skip states. I found it a little hard to believe that the
algorithm used to run copies could skip states in this manner, but the
question is interesting enough philosophically that I was willing to
suppress my doubts about the actual implementation.
At this point in the book I don't think the "copies" were running on
cellular automata - I think it was more or less standard hardware. So
the question of whether or not you could skip states would depend on exactly
what algorithm was being used.
> One you'd watch with those red and blue 3-D glasses?
>
> Or possibly if you're wearing Charlie Stross head sets, the left
> lens/screen and right lens/screen would have slightly different images.
I'd expect the sort of display with glasses with LCD shutters would work
better than the red/blue trick. I've never tried this display technology
though, so I'm not sure how well it works. I suspect that it would lack
ergonomically for sustained usage.
I can definitely say that I do *not* like the red/blue glassed "3d movie"
approach for watching films in theaters, having seen a few of those.
> Egan sounds interesting. I'll check out some of his books next library
> trip. What are his best? If I read several, should they be in a
> particular order?
I've thought about this issue some more, and I think the two Egan books you
are most likely to enjoy are probably Diaspora and Schild's ladder.
I'm picking these two in particular because of your interest in geometry.
It's not really the "key" focus in either book, but there are a few "neat
scenes" in both of them where geometrical concepts are very important.
Quantum mechanics is the major focus in the majority of Egan's work - but I
don't think you are as interested in QM as you are in geometry.
pervect wrote:
> I would think it would depend on the specific algorithm as to
> whether or not you could skip states. I found it a little
> hard to believe that the algorithm used to run copies could skip
> states in this manner, but the question is interesting enough
> philosophically that I was willing to suppress my doubts about
> the actual implementation.
What do you reckon the question is? ;)
> At this point in the book I don't think the "copies" were running on
> cellular automata - I think it was more or less standard hardware.
> So the question of whether or not you could skip states would depend
> on exactly what algorithm was being used.
You're right about the CA, though the difference is not so profound;
what's a CA (running on generic hardware) if not a very simple version
of a physics simulator?
CA or not, I'm still with Mark: skipping frames of the movie, as it
were, makes sense only if what's going on is simple enough that you
needn't run the intermediate steps of the simulation in order to know
what the last step will be; which I guess means non-chaotic. If the
process is chaotic, that whole scene is nonsense.
Hop David wrote:
> One you'd watch with those red and blue 3-D glasses?
> Or possibly if you're wearing Charlie Stross head sets,
> the left lens/screen and right lens/screen would have
> slightly different images.
The latter seems much more useful in the long run!
In 1981 I saw a prototype "true 3D" display: you look at a vibrating
curved mirror that reflects a monitor on which slices of the object are
displayed, and there hanging in air is an airplane built of light.
Can't do hidden surface removal, obviously.
pervect wrote:
> Why so small - according to some theories you already have 9 or 10
> (depending on wheter or not you count time) :-) :-)
Yah but most of 'em are cramped.
I haven't seen a close link between any two stories, though of course
certain premises appear several times. My favorites so far are
<Diaspora> and some of the stories collected in <Axiomatic> or on his
website. http://www.netspace.net.au/~gregegan/
> >Too, I doubt that we'll upload our engrams to computers and become
> >creatures of silicon. Would you?
>
> Of course.
>
> >At best, you'd be creating a copy and
> >committing suicide.
>
> That's a religious belief, a question of what one regards as the
> definition of the soul. You appear to identify it with the physical
> matter composing the brain; okay. I identify it with the pattern of
> information composing the mind; to me, an upload would be literal
> immortality (until the heat-death of the universe at least). Like
> other religious beliefs, debate on this one isn't likely to be
> fruitful, but recognizing the difference of opinion might be.
I see a couple of problems with this theory. Right now if you commit
suicide without creating a copy - you are committing suicide. Your soul
may do something after this, but you're still committing suicide. So his
statement is still true, by the definitions of suicide that are in use
today.
And what happens when we upload to two different computers? Does your soul
fission into two souls? Let's say it picks one or the other. Is there
any difference between the "you" with a soul and the "you" without a soul?
>On Tue, 27 May 2003 14:08:25 -0400, Joshua P. Hill
><josh...@snet.net.REMOVE.THIS> wrote:
>
>>Too, I doubt that we'll upload our engrams to computers and become
>>creatures of silicon. Would you?
>
>Of course.
>
>>At best, you'd be creating a copy and
>>committing suicide.
>
>That's a religious belief, a question of what one regards as the
>definition of the soul. You appear to identify it with the physical
>matter composing the brain; okay. I identify it with the pattern of
>information composing the mind; to me, an upload would be literal
>immortality (until the heat-death of the universe at least). Like
>other religious beliefs, debate on this one isn't likely to be
>fruitful, but recognizing the difference of opinion might be.
It seems to me that you're the one who has introduced a religious
concept here, the soul. It's a concept I find meaningless. Originally
it meant "breath," but we know now that breath is anything but
metaphysical. Now it seems to be used for the most part to describe an
individual's thoughts, which are somehow supposed to be capable of
independent and frequently eternal existence, sometimes with the
assistance of an immaterial body of some sort. And I see no evidence
whatsoever that that's the case.
I think we agree that, it's possible in principle to copy thoughts and
"run" them on a different brain or a computer. But you seem to be
going further than that, and saying that there's something to thought
beyond those patterns, some kind of soulful identity that transfers
along with it.
Take these possibilities:
-- Upload your personality into a computer; destroy the body.
-- Upload your personality into a computer; keep the body with the
original copy of your personality.
-- Upload your personality into /two/ computers; destroy the body with
the original copy of your personality.
In the first case, there's only one "you" after you've destroyed the
body. But in the second case, there are two you's, and in the third,
two as well. And they can't all be you. I'd say two of them are
copies.
You may not find it emotionally or morally objectionable to create a
functioning copy of your thought and then destroy yourself. And I
don't pretend that my own attachment to the existence of my body is
anything /but/ emotional, a result of selection pressure which favors
the organism with a tendency to preserve itself when that's most
likely to propagate its DNA. But really, if you don't care about
preserving that organism, I'm not sure I understand why you would care
about preserving or making its duplicate, either, unless (say) you
were about to die and wanted to leave something behind to look after
your nephew. More to the point, I don't think most people would be
willing to zap themselves after their thoughts were transferred.
Josh
> It seems to me that you're the one who has introduced a religious
> concept here, the soul. It's a concept I find meaningless.
> You may not find it emotionally or morally objectionable to create a
> functioning copy of your thought and then destroy yourself.
Your definition of the physical brain as the 'self' rather than the pattern
of thoughts (or something else) is what he's referring to as a religious
belief. Some people define it one way, some another, but there's no
objective reason to choose one definition over the other, despite the
vehemence with which some people argue that there is. All of this 'destroy
yourself' and 'commit suicide' rhetoric depends on taking a particular
definition of 'self' and not some kind of fundamental truth.
Like a lot of supposedly deep philosophical questions, it really comes down
to nitpicking the definitions of the words used.
--
Kevin Allegood ri...@mindspring.com
"Personally, I hold by the Clarke - Sturgeon law:
90% of any sufficiently advanced technology is
indistinguishable from crap." - Larry Lennhoff
> What do you reckon the question is? ;)
The question is something like "What is the nature of identity"?
Can identity be defined by the concept of a "state" - or is there a need for
continuity in the evolution of the "state"?
>
> > At this point in the book I don't think the "copies" were running on
> > cellular automata - I think it was more or less standard hardware.
> > So the question of whether or not you could skip states would depend
> > on exactly what algorithm was being used.
>
> You're right about the CA, though the difference is not so profound;
> what's a CA (running on generic hardware) if not a very simple version
> of a physics simulator?
>
> CA or not, I'm still with Mark: skipping frames of the movie, as it
> were, makes sense only if what's going on is simple enough that you
> needn't run the intermediate steps of the simulation in order to know
> what the last step will be; which I guess means non-chaotic. If the
> process is chaotic, that whole scene is nonsense.
I was a bit skeptical about the ability to "skip frames" as well, but I
would stop short of calling it a definite flaw in the novel.
As far as running the CA's backwards goes, I'm not sure I recall where this
was done in the novel. Maybe it was related to the "Garden of Eden" state
of permutation city? I don't recall exactly.
Anyway, if the CA is supposed to be the basis of some sort of physics, it
will probably have to be able to be run in reverse. In any event, the
resulting physics will have to be able to be run in reverse if it is to be
anything like our physics. The most straghtforwards possibility is that
there exists a subclass of CA's which are reversible that can be used to
model reversible physics, but I don't really know enough about CA theory to
make a definite statement.
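For what it's worth, one standard construction does exist: the "second order"
trick (usually credited to Fredkin), where the next state is the current
rule's output XOR'd with the previous state, so that any two consecutive
states determine the evolution in either direction. A toy Python sketch, not
anything taken from the novel:

    import random

    def step(prev, curr):
        # Second-order reversible rule: next cell = f(current neighborhood)
        # XOR previous cell.  Here f is the XOR of the three nearest
        # neighbors, with periodic boundaries.
        n = len(curr)
        f = [curr[(i - 1) % n] ^ curr[i] ^ curr[(i + 1) % n] for i in range(n)]
        return [f[i] ^ prev[i] for i in range(n)]

    random.seed(1)
    a = [random.randint(0, 1) for _ in range(16)]
    b = [random.randint(0, 1) for _ in range(16)]

    history = [a, b]
    for _ in range(10):                  # run forward ten steps
        history.append(step(history[-2], history[-1]))

    # The same rule, with the time order swapped, runs the system backward
    # and recovers the starting pair exactly.
    x, y = history[-1], history[-2]
    for _ in range(10):
        x, y = y, step(x, y)
    print((y, x) == (a, b))              # True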
Huh? The issue of whether or not it is possible to transfer yourself
into a computer is definitely impacted by religious beliefs.
But assuming it is possible (and postulating a system in which the
copying does no harm to the original body), then 'destroy yourself' and
'commit suicide' is *not* rhetoric. After the copying is done, there
will still be a living, breathing person who, if left alive, will have
experiences that are different from those of the computer-self. As time
goes by, the physical-self and computer-self will diverge and become
distinct.
Why should the physical-self have to sacrifice for the computer-self?
- W. Citoan
--
Democracy is a process by which the people are free to choose the man who
will get the blame.
-- Laurence J. Peter
Yatima, a character in <Diaspora>, is much concerned with that question;
ve puts it this way: what, if any, are the invariants of personality?
> As far as running the CA's backwards goes, I'm not sure I recall where
> this was done in the novel. Maybe it was related to the "Garden of
> Eden" state of permutation city? I don't recall exactly.
In the experiments that led to the `dust' concept, the protagonist
randomly shuffled the `frames' of his copy's process, so part of the
time it must go backward. ;)
> Anyway, if the CA is supposed to be the basis of some sort of
> physics, it will probably have to be able to be run in reverse.
> ... The most straghtforwards
ha ha
> possibility is that there exists a subclass of CA's which are
> reversible that can be used to model reversible physics ....
Fine with me, but it has nothing to do with my objection.
Hm, I think I'd settle for graphing them (in as many dimensions as
necessary). ;)
Sure it is. You don't 'destroy yourself' in the hypothetical below if 'self'
is based on the information in the mind rather than in the physical meat. If
the 'self' is still around, then it hasn't been destroyed/killed, and
suicide is generally used to mean killing yourself, so the words are just
rhetoric.
>After the copying is done, there
> will still be a living, breathing person who, if left alive, will have
> experiences that are different from those of the computer-self. As time
> goes by, the physical-self and computer-self will diverge and become
> distinct.
No, after the copying is done, there is an old copy of the self sitting
around in an obsolete (meat) processor which doesn't need to be run anymore.
I'm not committing suicide any more than having a lobotomy is committing
suicide; at the worst I'm only injuring myself.
Whether what I said above or what you said before is correct depends
entirely on the arbitrary definition of what is the 'self'. You appear to
vehemently side with one definition of the self, but not everyone agrees with
you and there is no objective reason to use that definition.
> Why should the physical-self have to sacrifice for the computer-self?
>
"Why should" is completely and utterly irrelevant; whether a given action is
committing suicide/destroying oneself has nothing to do with what motives
someone has for said action.
That's an awfully big 'if'. More importantly, how do you propose
to verify whether that 'if' is true? Don't you consider whether
that 'if' is true to be rather supremely important?
As I said in the post you're responding to, "Whether what I said above or
what you [referring to another poster] said before is correct depends
entirely on the arbitrary definition of what is the 'self'. You appear to
vehemently side with one definition of the self, but not everyone agrees with
you and there is no objective reason to use that definition."
The truth or falsehood of that 'if' depends entirely on what arbitrary
definition you assign to some terms, not on some kind of objective physical
fact. Russell and I are pointing out that, regardless of how emotionally
attached you personally are to one arbitrary definition, the 'self' is not
an experimentally detected item but is an abstract concept which is only
well-defined in relation to mundane existence (where we can't make a copy of
the mind).
> Don't you consider whether
> that 'if' is true to be rather supremely important?
>
No, it's amazingly unimportant, since it just depends on how one defines the
underlying terms. You can throw out rhetoric about the 'supreme importance'
of the question, but it doesn't change the fact that the truth or falsehood
is simply the result of an arbitrary definition.
Okay, then let's not call it "suicide" if you don't like that term.
Let's simply call it "murder."
> >After the copying is done, there will still be a living, breathing
> >person who, if left alive, will have experiences that are different
> >from those of the computer-self. As time goes by, the physical-self
> >and computer-self will diverge and become distinct.
>
> No, after the copying is done, there is an old copy of the self
> sitting around in an obsolete (meat) processor which doesn't need to
> be run anymore. I'm not committing suicide any more than having a
> lobotomy is committing suicide; at the worst I'm only injuring
> myself.
Let's see... We had one life; we copied ourselves so now we have two
lives; we terminate one so now we have one life.
You are the one that used the definition of 'self' as the information in
the mind. As soon as the copying is done, the physical mind and
computer mind are accumulating different information and, hence, are no
longer the same 'self'.
> Whether what I said above or what you said before is correct depends
> entirely on the arbitrary definition of what is the 'self'. You
> appear to vehemently side with one definition of the self, but not
> everyone agrees with you and there is no objective reason to use that
> definition.
Actually, it has little to do with the definition of what is the 'self'.
Whether you pick a religious based or physical based definition, you
still end up with the termination of an otherwise viable life.
As for "vehemently", it's so nice of you to gauge my feelings on this on
the basis of a single post...
> > Why should the physical-self have to sacrifice for the
> > computer-self?
>
> "Why should" is completely and utterly irrelevant; whether a given
> action is committing suicide/destroying oneself has nothing to do with
> what motives someone has for said action.
That's odd. Most people would say motive is very important in determining
if an action is suicide or giving one's life for another.
But what is irrelevant is your response to my question. Why should the
original be terminated?
- W. Citoan
--
In a great romance, each person basically plays a part that the
other really likes.
-- Elizabeth Ashley
Whenever copying of the human mind into a computer comes up, I always
get the following image of the first attempt:
Scientist: "So are you an exact copy?"
Computer: "And if I'm not?"
Scientist "We'll erase you and try again."
Computer: "Oh... Uh, yeah, I'm an exact copy. Everything the same
here. No problems. All okay. Feeling great. Couldn't be better..."
- W. Citoan
--
Old men are fond of giving good advice to console themselves for their
inability to set a bad example.
-- La Rochefoucauld, "Maxims"
> Too, I doubt that we'll upload our engrams to computers and become
> creatures of silicon. Would you? At best, you'd be creating a copy and
> committing suicide.
The person waking up tomorrow morning - is he you?
>"Joshua P. Hill" <josh...@snet.net.REMOVE.THIS> wrote in message
>> On Fri, 30 May 2003 22:06:20 GMT, wallacet...@eircom.net (Russell
>> >That's a religious belief, a question of what one regards as the
>> >definition of the soul.
>
>> It seems to me that you're the one who has introduced a religious
>> concept here, the soul. It's a concept I find meaningless.
>
>> You may not find it emotionally or morally objectionable to create a
>> functioning copy of your thought and then destroy yourself.
>
>Your definition of the physical brain as the 'self' rather than the pattern
>of thoughts (or something else) is what he's referring to as a religious
>belief. Some people define it one way, some another, but there's no
>objective reason to choose one definition over the other, despite the
>vehemence with which some people argue that there is. All of this 'destroy
>yourself' and 'commit suicide' rhetoric depends on taking a particular
>definition of 'self' and not some kind of fundamental truth.
>
>Like a lot of supposedly deep philosophical questions, it really comes down
>to nitpicking the definitions of the words used.
I agree with that, but not your suggestion that that's what I'm doing.
The sense in which I use "self" here is, I think, clear, and it's
purely a linguistic, rather than a philosophical one -- that is, I
apply it to the body, including the brain and the thoughts that happen
to reside in that brain. Someone else may use "self" in a different
sense, applying it, say, to a supposed "soul" or to the patterns of
thought that normally inhabit the body, but that makes no difference
-- they're merely using the word in a different sense, not changing
what we're describing. And I'm giving no philosophical or theological
reason why one should or should not destroy the body. I'm just
pointing out that most of us have an emotional reluctance to destroy
our bodies as long as they are capable of higher thought, or to render
them incapable of higher thought.
Josh
>"W. Citoan" <wci...@NOSPAM-yahoo.com> wrote in message
>news:slrnbdi81u...@wcitoan-via.supernews.com...
>>After the copying is done, there
>> will still be a living, breathing person who, if left alive, will have
>> experiences that are different from those of the computer-self. As time
>> goes by, the physical-self and computer-self will diverge and become
>> distinct.
>
>No, after the copying is done, there is an old copy of the self sitting
>around in an obsolete (meat) processor which doesn't need to be run anymore.
>I'm not committing suicide any more than having a lobotomy is committing
>suicide; at the worst I'm only injuring myself.
That's like saying that if you make a photocopy of a manuscript, and
then destroy the manuscript, you aren't destroying the manuscript.
But, of course, you are, photocopy or not.
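(Again just a toy Python sketch of the photocopy point - equality of content
is not the same as identity of the object; the names here are hypothetical:)

    import copy

    manuscript = {"text": "It was a dark and stormy night..."}
    photocopy = copy.deepcopy(manuscript)

    # The copy is equal in content, but it is a distinct object.
    assert photocopy == manuscript
    assert photocopy is not manuscript

    # 'Destroying' the original drops the last reference to that particular
    # object; the surviving, equal copy doesn't change what happened to it.
    del manuscript
    assert photocopy["text"].startswith("It was")  # the copy lives on regardless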
Josh
Sure. Sort of a wormy thing in four space, don't you think?
Josh
--
> >Like a lot of supposedly deep philosophical questions, it really comes down
> >to nitpicking the definitions of the words used.
>
> I agree with that, but not your suggestion that that's what I'm doing.
> The sense in which I use "self" here is, I think, clear, and it's
> purely a linguistic, rather than a philosophical one -- that is, I
> apply it to the body, including the brain and the thoughts that happen
> to reside in that brain.
The sense in which you use 'self' when you talk about committing suicide or
destroying yourself is a purely arbitrary choice. Linguistically, 'self' is
not limited to your definition (the 'physical body' one), as ghost stories,
religious writings on the afterlife, science fiction stories on uploading,
and a bunch of other writing that discusses a 'self' not tied to the
physical body demonstrate. You can say that you're using a 'linguistic'
definition of self, but since linguistically 'self' is not limited to the
definition you're using, it's just an arbitrary choice, like I said
before. While you may vehemently believe that that is the correct definition
to use, there is no objective reason to use that definition.
That depends on exactly how you're defining 'life', since you're extending
it to a computer program, which is not really standard. You can just as
easily say that you still only have one life; it comes down to an arbitrary
choice of definitions - you are emotionally attached to one set of
definitions, but that doesn't make that set the One True Truth. It also
still doesn't
help with determining whether the original self is dead or not.
> You are the one that used the definition of 'self' as the information in
> the mind. As soon as the copying is done, the physical mind and
> computer mind are accumulating different information and, hence, are no
> longer the same 'self'.
If we accept that definition (which isn't what I was talking about, as I'm
sure you know), then committing suicide is such a common occurrence that it's
silly to even bring it up. You're killing yourself every single time you
experience anything!
> > Whether what I said above or what you said before is correct depends
> > entirely on the arbitrary definition of what is the 'self'. You
> > appear to vehemently side with one definition of the self, but not
> > everyone agrees with you and there is no objective reason to use that
> > definition.
>
> Actually, it has little to do with the definition of what is the 'self'.
> Whether you pick a religious based or physical based definition, you
> still end up with the termination of an otherwise viable life.
>
It has everything to do with the definition of 'self', as determining
whether something is suicide or not depends on whether the life terminated
in "the termination of an otherwise viable life" is the same as the one
doing the terminating. And just for the record, I was using an
information-based definition of 'self', not what would normally be called a
religious definition. The only religion here is in your assertion that one
definition absolutely must be the one used.
> > > Why should the physical-self have to sacrifice for the
> > > computer-self?
> >
> > "Why should" is completely and utterly irrelevant; whether a given
> > action is committing suicide/destroying oneself has nothing to do with
> > what motives someone has for said action.
>
> That's odd. Most people would say motive is very important in determining
> if an action is suicide or giving one's life for another.
What's odd is that we weren't talking about "if an action is suicide or
giving one's life for another," but about whether a particular action could
be called 'suicide or destroying oneself' (that's what the / is shorthand
for). It's pretty clear that 'vehemently' accurately describes the way you
hold these beliefs, considering how much dodging, weaving, and convenient
misinterpretation you're doing.
> But what is irrelevant is your response to my question. Why should the
> original be terminated?
>
For fun, for money, for love, for honor, for lower taxes, for
god(ess(e))(s), for the bonus round, or for whatever reason you find
attractive. Demanding that I tell you why something should happen when I
never said that such a thing ought to happen is just silly. Whether the old
body should be decommissioned or not is irrelevant to whether that
termination is suicide or not.
Okay, you have me confused. You stated that if we copy ourselves into a
computer and then terminate the body, we still live. Now you say that
extending the definition of life to a computer program "is not really
standard". Which implies that you don't equate the computer program
with a 'life'. Which contradicts what you've said before.
> > You are the one that used the definition of 'self' as the
> > information in the mind. As soon as the copying is done, the
> > physical mind and computer mind are accumulating different
> > information and, hence, are no longer the same 'self'.
>
> If we accept that definition (which isn't what I was talking about,
> as I'm sure you know), then committing suicide is such a common
> occurrence that it's silly to even bring it up. You're killing
> yourself every single time you experience anything!
How do I know that wasn't what you were talking about? It is what you
wrote.
Now you're the one dodging and weaving. Change is not the same as
death.
> > > Whether what I said above or what you said before is correct
> > > depends entirely on the arbitrary definition of what is the
> > > 'self'. You appear to vehemently side with one definition of the
> > > self, but not everyone agrees with you and there is no objective
> > > reason to use that definition.
> >
> > Actually, it has little to do with the definition of what is the
> > 'self'. Whether you pick a religious based or physical based
> > definition, you still end up with the termination of an otherwise
> > viable life.
>
> It has everything to do with the definition of 'self', as determining
> whether something is suicide or not depends on whether the life
> terminated in "the termination of an otherwise viable life" is the
> same as the one doing the terminating. And just for the record, I was
> using an information-based definition of 'self', not what would
> normally be called a religious definition. The only religion here is
> in your assertion that one definition absolutely must be the one
> used.
As I already stated (the part of my post that you snipped), if you're
not happy with the suicide term, I'd skip that argument and just call
it murder.
Hmm, in the prior paragraph, you told me an "information-based definition"
wasn't what you were talking about. Now you tell me it is what you were
talking about?
You seem to like to cast aspersions ("vehemently", the "only religion
here") on those that don't agree with you. Do you think think that by
engaging in such critizism, that you somehow improve your logic?
You also are asserting that your definition must be the one used.
> > > > Why should the physical-self have to sacrifice for the
> > > > computer-self?
> > >
> > > "Why should" is completely and utterly irrelevant; whether a
> > > given action is committing suicide/destroying oneself has nothing
> > > to do with what motives someone has for said action.
> >
> > That's odd. Most people would say motive is very important in
> > determining if an action is suicide or giving one's life for another.
>
> What's odd is that we weren't talking about "if an action is suicide
> or giving one's life for another," but about whether a particular
> action could be called 'suicide or destroying oneself' (that's what
> the / is shorthand for). It's pretty clear that 'vehemently'
> accurately describes the way you hold these beliefs, considering how
> much dodging, weaving, and convenient misinterpretation you're doing.
My original statement was "Why should the physical-self have to
sacrifice for the computer-self?". Where is there a reference to
suicide in there? That was a straightforward question.
You're the one that then responded by stating that "whether a given
action is committing suicide/destroying oneself has nothing to do with
what motives someone has for said action".
Ah, there's the "vehement" argument again. I suggest you look the word
up, then tell me how I meet the definition. And if I've been doing much
dodging and weaving, it's because you've led the way and I've had to
keep up.
> > But what is irrelevant is your response to my question. Why should
> > the original be terminated?
>
> For fun, for money, for love, for honor, for lower taxes, for
> god(ess(e))(s), for the bonus round, or for whatever reason you find
> attractive. Demanding that I tell you why something should happen
> when I never said that such a thing ought to happen is just silly.
> Whether the old body should be decommissioned or not is irrelevant to
> whether that termination is suicide or not.
And I didn't ask why "should the original suicide"; I asked "why should
the original be terminated". Claiming that the decomissioning is okay
because it's not "suicide" is what is silly.
- W. Citoan
--
Life is like a 10 speed bicycle. Most of us have gears we never use.
-- C. Schultz
There's always the old saw (pardon the pun) about the axe that's been owned
by a family for generations. Of course the head has been replaced twice,
and the handle numerous times in that time period. But it's the same axe.
OBSF: "The Fifth Elephant", where Vimes gets such an axe from the king of
the dwarves.
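(For what it's worth, the axe point can be put in copy-vs-mutation terms - a
small, hypothetical Python sketch, nothing more:)

    # Replace the axe's parts in place and its identity (its id) never changes,
    # which is the sense in which it stays 'the same axe'.
    axe = {"head": "iron head, original", "handle": "ash handle, original"}
    original_id = id(axe)

    axe["head"] = "iron head, replacement #2"       # head replaced twice...
    axe["handle"] = "ash handle, replacement #5"    # ...handle numerous times

    assert id(axe) == original_id   # same object throughout the repairs

    # A wholesale copy, by contrast, is a new object even if its contents match.
    replica = dict(axe)
    assert replica == axe and replica is not axe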
> And just for the record, I was using an
> information-based definition of 'self', not what would normally be called
> a
> religious definition. The only religion here is in your assertion that one
> definition absolutely must be the one used.
Of course if the religious definition says that you haven't committed
murder/suicide because your soul is still alive, then I would ask if you
could murder someone who is Saved?
If killing a body is murder then killing a body is murder. Which religion
is prepared to tell which reconstructed body is the one with the soul?