This weekend, hundreds of people from across the globe will gather in
Madrid to discuss how to turn themselves into a new species.
The occasion is TransVision, the world’s biggest annual meet-up of
transhumanists — and probably the most important intellectual summit
you’ve never heard of. This year, anti-ageing specialist Aubrey de Grey
will explain why he thinks most people alive today have a 50/50 chance
of living to a thousand years old. The CEO of the Alcor Life Extension
Foundation, Max More, will discuss cryonics, the process by which the
newly deceased are frozen in giant stainless-steel vats and preserved
for resurrection down the line. And Google’s Ray Kurzweil will talk
about the 'singularity': the moment in our not-too-distant future — he
reckons around 2045 — when artificial intelligence finally outstrips the
collective brainpower of mankind and absorbs us into its plans.
Until recently, I — like most people, I suspect — believed this stuff to
be pure science fiction. But then browsing aimlessly one night in March,
I stumbled upon a passing reference to a transhumanist political party
that had apparently put up a candidate for election in 2015. My
immediate assumption was that it was a prank. But looking at their
website, they seemed pretty serious — and surprisingly active.
I went straight to my emails and clicked 'compose new message' — setting
in motion a series of events that would transport me into the
strange parallel universe of transhumanism.
There’s something fitting about meeting a transhumanist on Zoom. The
disembodied, two-dimensional head of pixels on my laptop screen belongs
to David Wood, the co-founder and current leader of Transhumanist UK.
An austere, middle-aged Scotsman, with fading straw-coloured hair and
thick fiery eyebrows, Wood comes across more Presbyterian minister than
cyberpunk. His manner is calm and matter-of-fact, as though merely
filling in the details about something we already basically know to be
true: the process by which tin and copper become bronze, say, rather
than the process by which man and machine become cyborg.
Our conversation begins unremarkably, with a brief chat about, of all
things, Universal Basic Income. UBI is, Wood says, one of his party’s
key policies — though he envisions it providing basic resources as well
as cash. By the middle of the century, he says, we’ll have achieved
'sustainable superabundance': enough renewable energy, thanks to nuclear
fusion, and enough food, thanks to lab-grown meat, to make both
essentially free. We’ll also have artificial intelligence providing
education and healthcare for all — and gigantic virtual adventures, 'a
bit like Westworld'.
And that’s just for starters. Wood says he’s a huge advocate of life
extension — and thinks Aubrey de Grey’s prediction that we’ll soon be
living well into four figures is correct. Over the next decade or so, he
says, we’ll develop nanotechnology that goes inside the body and not
only halts ageing, but reverses it by making cells 'biologically
younger' — essentially eliminating all natural causes of death. Wood is
also 'very much in favour' of creating Artificial General Intelligence
(AGI) — machines smarter than humans — and believes they’ll likely arrive
sometime around the middle of the century, though possibly as soon as 2030.
And uploading the mind? Wood says he, like most transhumanists, believes
humans are ultimately material beings, and that we will, therefore, one
day be able to decant our minds into replica silicon brains. But he
hasn’t yet made up his own grey matter whether he wants to do it
himself. 'I’m not sure whether it would really be me,' he says.
Wood is keen to stress transhumanism’s emphasis on 'morphological
freedom' — the right of every individual to choose exactly how, and how
far, to augment themselves. Want to wire your neurons up to a
supercomputer? Great! Just want a few more 'normal' years tacked onto
the end of your lifespan? That’s fine too. Transhumanism isn’t, he says,
for all the sci-fi stereotypes, really about specific goals at all:
'it’s not an end destination we’ve got in mind — it’s the next phase of
the journey'.
What that next phase consists of depends on who you ask. Some
transhumanists want exoskeletons to allow them to run faster — others,
like Kurzweil, want to transform every atom in the universe into a giant
conscious supercomputer. But all transhumanists agree, Wood says, on a
trio of broad pursuits — superlongevity, superintelligence, and
superhappiness. To which he, a self-professed 'technoprogressive', adds
a fourth: fairness, or 'transhumanism for all, rather than transhumanism
for the one per cent'.
Wood acknowledges that some sort of world government would probably be
necessary, though he stresses he still thinks decisions should be made
at as local a level as possible. When I ask if AGI will be able to vote,
or whether there’ll be a difference between the rights of 'enhanced' and
'un-enhanced' humans, he says he doesn’t have the answers — these are
questions that will have to be figured out when we get there. This seems
to me a handy get-out clause for transhumanists: any currently
intractable problems can simply be left to be solved by the smarter,
enhanced people of the future.
When I mention the u-word —'utopia' — Wood bristles. 'It’s a word
transhumanists don’t really like', he says, telling me that four of the
eight clauses in the 1998 Transhumanist Declaration — 'the nearest thing
there is to a canonical document' — highlight the risks as well as
advantages of technological innovation.
Wood admits that his party — in common with the surprising number of
other transhumanist parties around the world, including Somos Miel in
Spain, the AFT in France, and The Innovation in Poland — is unlikely to
come to power anytime soon. Their main goal is, 'like the Greens', to
raise awareness and influence mainstream politicians.
Are they having any success? Finally, Wood beams. Yes. He gives two
examples: an Obama-era white paper that discussed the singularity, and a
speech given by Boris Johnson at the UN in 2019, which was, he says,
dripping with transhumanist ideas.
Both were startling news to me. But both, it turned out, were relatively
small fry. After Wood and I wrapped up our conversation, I spent the
evening following up a few of the other things he’d mentioned — and this
time the safe passage back to normality sealed behind me once and for all.
There was the outgoing U.S. Director of National Intelligence, John
Ratcliffe, claiming that China was conducting 'human testing' on members
of the People’s Liberation Army with the aim of creating soldiers with
'biologically enhanced capabilities'. There was the EU report on
'converging technologies' discussing the prospect of using
nanotechnology to reengineer the brain.
Elsewhere, Elon Musk is pumping hundreds of millions of dollars into
Neuralink, an implantable 'brain-machine interface' that will eventually
allow humans to compete with superintelligent robots. PayPal co-founder
Peter Thiel and Amazon’s Jeff Bezos have each ploughed hundreds of
millions of dollars into anti-ageing research. The Russian billionaire
Dmitry Itskov is aiming to allow us to transplant our minds into
immortal holographic bodies by the middle of the century. The founder of
MIT Media Lab, Nicholas Negroponte, has talked about 'ingesting'
information by swallowing tiny pills that then make their way through
the bloodstream and deposit knowledge in the brain. As he put it in a
recent TED talk, 'You’re going to swallow a pill and know English.
You’re going to swallow a pill and know Shakespeare.' Some of their
claims might well be a little overblown, but these aren’t just nerds
fiddling with soldering irons in their parents’ basements.
Clearly, then, the question isn’t whether this technology is going to
come. The question now is how we stop ourselves using it to destroy each
other.
It’s April 2019. Chris Anderson, the head of TED, and Nick Bostrom — one
of the founders, back in 1998, of the World Transhumanist Association —
are on stage in conversation in Vancouver. Bostrom, bespectacled and
bookish, is now an academic philosopher at the University of Oxford and
something of a big name public intellectual.
Anderson and Bostrom are discussing the technocalypse. Bostrom, while he
still thinks radical human enhancement is a fundamentally good idea, has
become noticeably more pessimistic in recent years about the chances of
our using transformative technology responsibly — and thinks it’s quite
plausible we’ll simply end up using it to wipe each other out. So what,
Anderson asks, can we do?
Bostrom presents four options. The first, simply banning or restricting
scientific research, he says is neither desirable nor realistic. The
second, killing or incarcerating those most likely to commit atrocities,
is unlikely to be 100 per cent foolproof. Our best shot at survival, he
says, is to combine options three and four: world government plus
individual, micro-level surveillance, quite literally all of us being
watched, all of the time, by superintelligent monitoring devices.
Julian Savulescu, a bioethicist and colleague of Bostrom’s at Oxford, is
somewhat less gloomy. While he readily admits that humans are, to quote
the title of one of his books, 'unfit for the future', he has a far more
ambitious solution: to bioengineer us to be not just stronger and
smarter, but more ethical beings, thanks to 'moral enhancement' pills.
Savulescu gives two examples of ways we already know chemicals can
change our behaviour: the hormone oxytocin, which is known to boost
empathy, and the drug Ritalin, which every day helps millions of people
with ADHD to control their impulses. Both are obviously, as they stand,
blunt tools — but it follows, Savulescu argues, that as our
understanding of neurochemistry improves, we ought to be able to design
ever-more precise pills, perhaps even tailored to the shortcomings of
each individual.
But surely no matter how refined we make these drugs, they’d only really
be addressing our surface behaviour, not improving our underlying
morality itself? A supercharged version of Ritalin might give us the
patience of a Buddhist monk, but it couldn’t help us to answer a
question like 'is it morally legitimate to edit the genes of a human
embryo to make it superintelligent?'. Nor could super-empathy pills tell
us how to treat a cyborg with an IQ of a million.
Let’s imagine for a moment, though, that the thing we call our moral
code — our ethical beliefs and values — really is just the result of
chemicals sloshing around in our brains. And let’s imagine that it
really will be possible, therefore, one day to pop a pill that erases
our entire belief system and replaces it with a 'better' one.
Even granted all that, to design such pills, you’d still need a clear
idea of what moral code you wanted to promote. From what I can tell,
transhumanists don’t. Nor, more critically, do they seem to have any
secure philosophical basis for saying what a 'better' or 'worse' moral
code would even look like.
It isn’t that transhumanists don’t talk in terms of good and bad. They
do — a lot. Transhumanism, they argue, would allow us to 'flourish',
make our lives vastly 'more worthwhile', unleash our 'cosmic potential'.
But whenever you try to track their philosophical steps and find out
where they think these words get their meaning from, the intellectual
footprints just seem to disappear.
A paper by Nick Bostrom called 'Transhumanist Values', for instance,
promised to explain where transhumanist values came from, only to end up
going round in circles. The title of one section, for example, is: 'The
core transhumanist value: exploring the posthuman realm'. In other
words, a transhumanist’s core value is… being a transhumanist.
But then transhumanists can’t, I finally realised, tell us where
morality comes from, because by the logic of their own philosophical
convictions, morality shouldn’t exist.
Central to transhumanism, after all, is the idea that humans are purely
material beings — that, as Max More puts it in The Philosophy of
Transhumanism, 'our thinking, feeling selves are essentially physical
processes'. This is, of course, why transhumanists are so confident that
we can upgrade ourselves. But such cold materialism can’t give any good
explanation for why we ought to do so. If we really are wholly material,
ethics becomes disposable.
Transhumanists don’t seem troubled by, or even aware of, this glaring
intellectual problem. Most people in modern society, after all, share
transhumanism’s materialist assumptions about reality. Most people,
therefore, struggle to explain where their sense of right and wrong
comes from. But because our lives are, in historical terms, relatively
comfortable, we simply look the other way and pretend none of us has
noticed. It’s what the atheist philosopher Alex Rosenberg calls 'nice
nihilism' — life is ultimately meaningless, but, since it’s more
pleasant for us, we can nonetheless agree just to behave as if it weren’t.
In that sense, transhumanism captures the philosophical mood of our age
perfectly. But the question is whether such 'nice' moral role-play can
last, especially when technology starts radically changing our abilities
and powers. We live in an anomalous moment in history: we think we’ve
moved beyond the superstitions of our past, but we’re still really
subsisting on the residual moral instincts of traditions we’ve otherwise
done away with.
And if anything’s going to shake us from our zombie state and challenge
us with questions that can’t just be answered by being 'nice', it’s
transhumanism.
Even if we 'nice nihilists' aren’t willing to follow through the logical
implications of our materialism, a superintelligent AGI, being more
intellectually consistent than us, surely would. And there’s absolutely
no reason why it shouldn’t simply dispense altogether with the foolish
moralities of the humans that invented it.
Hugo de Garis, a former AI researcher turned author, was the most
fascinating character I came across in the movement — a Nietzschean
tragedy of a man, willing to stare unflinchingly at the potential horror
of what he was doing and seemingly paying for it with his sanity.
His most famous book, The Artilect War, centres on a single, bleak
prediction: that the second half of this century will see a global war
between ‘Cosmists’ who want to create superintelligent, godlike machines
he calls 'artilects', and 'Terrans' who want to stop the Cosmists at all
costs. 'The Cosmists will want to build artilects,' de Garis writes,
'because to them it will be a religion, a scientist’s religion that is
compatible with modern scientific knowledge. Not to do so would be a
tragedy on a cosmic scale to them.' The Terrans, meanwhile, will argue —
quite correctly, he says — that artilects will almost certainly wipe out
the humans that created them.
Terrans will decide, therefore, that the only solution is to exterminate
Cosmists before they get their way. Terrans will see Cosmists as
man-killers, and Cosmists will see Terrans as god-killers. The result
will be a catastrophe costing billions of lives — what de Garis calls a
'gigadeath'.
All this made de Garis, to use his word, 'schizophrenic'. 'Since
ultimately, I am a Cosmist,' he writes, 'I do not want to stop my work.
I think it would be a cosmic tragedy if humanity freezes evolution at
the puny human level, when we could build artilects with godlike powers.
However, I am not a 100 per cent Cosmist. I shudder at the prospect of
gigadeath…. I lie awake at night trying to find a realistic scenario
that could avoid ‘gigadeath.’ I have not succeeded, which makes me feel
most pessimistic.'
De Garis’s book really hammered something home. Even if you try your
utmost to live according to the logic of materialism — even if you
believe at a rational level that morality is nothing but an illusory
social construct — you cannot extract from your experience of reality
the fundamental sense that things matter. De Garis, who sees traditional
religion as a hopeless superstition, thinks building artilects matters
profoundly — perhaps even more than the survival of mankind. 'The
prospect of building godlike creatures,' he writes, 'fills me with a
sense of religious awe that goes to the very depth of my soul and
motivates me powerfully to continue, despite the possible horrible
negative consequences.' And he’s not alone. Wood thinks 'human
flourishing' matters. Savulescu thinks 'becoming better' matters.
Bostrom thinks 'valuable experiences' matter.
Morality is, in this sense, as irrefutable a dimension of experience as
space or time — we might disagree over specific cases of right and
wrong, but none of us can shake the underlying intuition that reality
contains a moral dimension in which we orientate ourselves.
But then smuggled into transhumanism are, when you think about it, all
sorts of claims that can’t be reconciled with its underlying materialism.
Take, for instance, its rather remarkable faith in the power of human
reason. If the human brain really is just the freak result of some cells
stumbling, by chance, on ways of combining, surviving, and reproducing,
then it would be bizarre to think it would have got anywhere close to
perceiving the deepest truths of the universe — and odder still to think
it would be capable of devising a superintelligence that really could
crack the code of reality once and for all.
In the few weeks I spent writing up this article, Elon Musk released a
video of a monkey playing Pong with just its brain. Chinese and American
scientists announced the creation of the first mixed human-monkey
embryo. This stuff is coming rapidly and we need to be prepared.
So do we try to direct the course of this technology — or do we ban it,
like those calling for a treaty to protect the endangered human being?
Do we trust that decent, well-intentioned men like Wood will be able to
keep their hands on the controls, or do we conclude, like the computing
pioneer Bill Joy, that some knowledge will always be too dangerous for humans?
Hurry up and decide. We don’t have long.
https://www.spectator.co.uk/article/we-need-to-talk-about-transhumanism