Scare Mongering?


Daniel C.

Dec 29, 2008, 12:54:34 AM
to diy...@googlegroups.com
From this page: http://tinyurl.com/aywlq6

"Certainly there are enough twelve year old kids making computer
viruses to show what can happen."

This concern does not carry across - at least currently - to the DIY
bio community. 12 year old kids "making" computer viruses are
actually using premade virus construction kits which are written by
more experienced programmers. The kits can be downloaded anonymously
from the internet by anyone who can find them, and the viruses they
make are all basically the same with minor variations. All of the
viruses made this way are already detectable by most commercial virus
scanners because the degree of variation with a kit like this is
fairly limited. Doing this with biological viruses would be
significantly complicated by the requirement to purchase and order -
presumably by mail - a lot of equipment, set up a laboratory, learn to
use the equipment, etc. etc. None of these things are easy for
adults; they would be nearly impossible for an adolescent.

"imagine a twelve year old kid spreading a computer virus that gets
into 3D printers and has them print out spider-like robots at night
with venom injectors or knives that murder people in their beds."

This is so ridiculous that it almost doesn't need to be responded to.
If anyone really is worried about something like this happening I urge
them to look into the current state of the art in robotics and AI
programming, and then compare their findings to the technology that
would be required to make a murderous knife-wielding robot.

-DTC

Bryan Bishop

Dec 29, 2008, 1:09:04 AM
to diy...@googlegroups.com, kan...@gmail.com
On Sun, Dec 28, 2008 at 11:54 PM, Daniel C. wrote:
> From this page: http://tinyurl.com/aywlq6

.. a link to one of Paul's posts. I agree, it's scaremongering.

> "Certainly there are enough twelve year old kids making computer
> viruses to show what can happen."
>
> This concern does not carry across - at least currently - to the DIY
> bio community. 12 year old kids "making" computer viruses are
> actually using premade virus construction kits which are written by

Don't be so sure; I was once a 12-year-old making viruses, playing
around with buffer overflows and the crap that counts as 'quality
code'. Though the majority, yes, are using kits.

> significantly complicated by the requirement to purchase and order -
> presumably by mail - a lot of equipment, set up a laboratory, learn to
> use the equipment, etc. etc. None of these things are easy for
> adults; they would be nearly impossible for an adolescent.

Wait, what? Most people find it's the younger children who understand
computers better than the older people -- usually this is because the
young people have so much free time on their hands to just sit and
tinker with technical stuff all day for hours on end. Ordering from a
list of materials using the internet is something that a lot of kids
know how to do, "just enter the credit card number and press go". I
don't think an argument of "it's too complex" is worthwhile here -
it's becoming increasingly simpler with technology.

> "imagine a twelve year old kid spreading a computer virus that gets
> into 3D printers and has them print out spider-like robots at night
> with venom injectors or knives that murder people in their beds."
>
> This is so ridiculous that it almost doesn't need to be responded to.
> If anyone really is worried about something like this happening I urge
> them to look into the current state of the art in robotics and AI
> programming, and then compare their findings to the technology that
> would be required to make a murderous knife-wielding robot.

What, like this?

http://en.wikipedia.org/wiki/Military_robot

And what, rapid prototyping printers like these?

http://reprap.org/
http://fabathome.org/

Even those that do electric circuitry?

http://heybryan.org/books/Manufacturing/rapid_prototyped_electronic_circuits/report.html

As for the robotic controller aspects, there's software like OpenCV
for facial recognition, and lots of fancy algorithms that you can play
with (also in a simulator to get some variation and maybe improvement)
to get the controller working properly with the available legs and
actuators and such.

But still, I agree it's scaremongering. I replied to his email by
pointing out that talking about the relative comparison of risks is
somewhat inappropriate because there are similar risks directly from
nature itself. But let's lock down nature: stop all bacteria from
operating! That'll do the trick (erm, no -- it'll do *something*, but
something even far worse...).

- Bryan
http://heybryan.org/
1 512 203 0507

Daniel C.

Dec 29, 2008, 1:33:12 AM
to diy...@googlegroups.com
On Sun, Dec 28, 2008 at 11:09 PM, Bryan Bishop <kan...@gmail.com> wrote:
> Wait, what? Most people find it's the younger children that understand
> computers more than the older people -- usually this is because the
> young people have so much free time on their hands to just sit and
> tinker with technical stuff all day for hours on end. Ordering from a
> list of materials using the internet is something that a lot of kids
> know how to do, "just enter the credit card number and press go". I
> don't think an argument of "it's too complex" is worthwhile here -
> it's becoming increasingly simpler with technology.

Sure, ordering equipment is easy. Even learning the requisite biology
to engineer a new strain of bacteria would probably be within the
reach of a smart 12 year old. But setting up the laboratory -
presumably in their parents' house - and actually carrying it out,
without their parents noticing? Certainly one can postulate a
situation in which it could happen, but one can postulate a lot of
things that aren't worth worrying about.

> http://en.wikipedia.org/wiki/Military_robot

Making dangerous robots is easy. You could even make them with a
rapid prototyping printer. But...

> As for the robotic controller aspects, there's software like OpenCV

this is where it all breaks down. Don't have time just now to go into
detail (and probably no-one cares anyway) but suffice it to say that
the technology required to make a robot that can hunt through a
person's home and then attack them does not exist -- or if it does
(haven't read up on the most recent DARPA tests on autonomous land
vehicles) it requires parts and systems that a prefab printer cannot
create.

-DTC

JonathanCline

Dec 29, 2008, 3:17:13 AM
to DIYbio
On Dec 29, 12:09 am, "Bryan Bishop" <kanz...@gmail.com> wrote:
> On Sun, Dec 28, 2008 at 11:54 PM, Daniel C. wrote:
> > From this page: http://tinyurl.com/aywlq6
>
> .. a link to one of Paul's posts. I agree, it's scaremongering.

Whether he is trolling or honestly fearful doesn't affect the
underlying issues.

The facts are, the GMO corn accidentally escaped while it was still in
beta testing despite the best intentions of the designers.

Gewin V (2003) Genetically Modified Corn— Environmental Benefits and
Risks. PLoS Biol 1(1): e8 doi:10.1371/journal.pbio.0000008

http://biology.plosjournals.org/perlserv/?request=get-document&doi=10.1371%2Fjournal.pbio.0000008


The first official computer worm was an experiment by a grad student
who made a mistake, and it subsequently infected many machines.

http://en.wikipedia.org/wiki/Morris_worm

No one currently knows how to engineer a permanent self-destruct
(suicide gene) into a GMO, correct? It can get dropped through
mutation. That's the feedback I've received. The ability to
reproduce can be stopped, although that is a difficult limitation to
work with. I am also interested in the answer to the question of how
to properly contain GMOs in the future, as are most of us. That future
won't happen for several years, because nearly everything dies before
'escaping'. Arguing with guys online is probably about as productive
as arguing years ago about command-line-only -vs- windowed operating
systems. I think we know who won, regardless of the efforts of those
who attempted to halt technological progress. The effort is better
spent on engineering better safety into the system.

Engineers have safety as part of their curriculum - usually starting
with http://en.wikipedia.org/wiki/Tacoma_Narrows_Bridge ("oops") - I
don't think bioengineers do. What happens when a Tacoma Narrows
Bridge mistake occurs as
a communicable system or self-reproducing system? It seems this is a
question which is currently undergoing research. As well as the
question of how to properly beta test. My naive guess is that we
start having more allergies? Though, I haven't seen many papers on
it, so maybe it's an area that needs more funding, or I'm looking in
the wrong place (links should go in a FAQ). With the new openness of
bio, Paul can start his own project to solve the research question so
the answer will be ready sooner rather than later. If he is trolling
or fearful then a logical discussion won't change his opinion.

Keep in mind that biobrick.org describes a biohacker who is self-
regulating, in terms of safety and ethics. The world financial
markets recently proved that a market cannot be self-regulating for
safety even though the market might drastically harm itself if it
doesn't properly regulate itself. Now everyone is going bankrupt even
though they balanced their own books and should have known better than
to employ ridiculous amounts of financial leverage. To this end,
Greenspan said, "I made a mistake [in assuming individuals/
corporations would self-regulate their behavior to avoid accidental
harm, even self-destruction, due to lack of safety]." Oops. 50
years of market self-regulating theory refuted. A market can't be
trusted to be self-regulating for safety.

The 'scaremongers' have many uneducated points and exaggerations and
misplaced analogies, though their fear is based on fuzzy observational
evidence. Many people already want to drink the biobeer - who's to
say some people wouldn't have strange allergic reactions? Are the
'scaremongers', in general, pro- or anti- animal testing for new
products? If against, then I've always wondered what or who they
thought the new products could be tested on, if not people themselves.

You might want to point out that the entire purpose of the biohack
melamine detection project is that the agricultural market is
obviously not regulating itself, and needs an external safety
mechanism (aka: me, creating a project to possibly detect the crap in
their products). The milk producers taint their milk for profit,
regardless of the fact that this will harm their own children --
amazingly dumb, right? In order to verify what I buy is safe, I'm
using biotech as a safety control. If current safety controls
actually worked (i.e. if current test methods were successful) then
the harmful product should have been caught far before hitting
consumer shelves. So obviously someone or something is not doing the
proper job. The FDA previously set up additional controls, and still
months later found melamine in eggs on consumer store shelves; again
not doing a good enough job. It's a tough problem, as the scale is
huge. What's good enough? For the telephone company, 99.999% ("5
nines") is considered good enough. For agriculture, I don't know
their safety criteria, however the FDA ex-chairman stated that less
than 1% of food imports are ever visually inspected, and much less
than half of that is ever chemically tested. If 12 year old
biohackers can design biobricks to test for everything harmful,
allergic, or fattening in 20 years, I'll be very happy. The
applications necessitate the technology, and we are all too linked
into the global economy to do anything different. I wonder how Paul
would respond to this idea -- if he's trolling, maybe he will just
spam it. Or how he would solve the problem -- other than permanently
abstaining from milk or milking every cow himself. The same idea goes
for other detection prospects: the assumption is that we will be able
to detect & nullify the effects of the biological Tacoma Narrows
Bridge mistake, when it happens 20 years from now, prior to any real
damage being done. That's the same technology which will find & fix
the next rapid bacterial bug which doesn't respond to available
antibiotics, so the technology better be ready.
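The "5 nines" benchmark mentioned above corresponds to very little allowed downtime. A quick back-of-the-envelope calculation (not from the post, just an illustration of the figure):

```python
# "Five nines" availability: 99.999% uptime.
availability = 0.99999
minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a non-leap year

# Allowed downtime is the complement of availability.
downtime_minutes = (1 - availability) * minutes_per_year
print(round(downtime_minutes, 2))  # about 5.26 minutes of downtime per year
```

That is, a five-nines system may be unavailable for only about five minutes a year, which shows how far short a sub-1% inspection rate falls by comparison.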



## Jonathan Cline
## jcl...@ieee.org
## Mobile: +1-805-617-0223
########################

Daniel C.

Dec 29, 2008, 12:03:46 PM
to diy...@googlegroups.com
On Mon, Dec 29, 2008 at 1:17 AM, JonathanCline <jnc...@gmail.com> wrote:
> (A long and insightful email)

You present several valid concerns -- a small subset of everything
that could conceivably go wrong, I'm sure. But it's important to keep
any discussion of risks focused on hazards that are real.

-DTC

Guido D. Núñez-Mujica

Dec 29, 2008, 12:34:04 PM
to diy...@googlegroups.com
And insisting so much on hypothetical risks will increase public
resistance. Rational risk assessment is necessary, sure, but dwelling
so much on far-fetched events will make them look like a certainty
rather than a very unlikely one.

Jeff Gritton

Dec 29, 2008, 1:48:40 PM
to diy...@googlegroups.com
Hazards that are "real"? Someone needs to watch Jurassic Park. A risk
assessment approach that limits itself only to risks that are real or
sufficiently likely seems like a template for a disaster story. If you
don't like fiction, then go read "When Genius Failed," the story of
Long-Term Capital Management.

Guido D. Núñez-Mujica

Dec 29, 2008, 2:59:58 PM
to diy...@googlegroups.com
I love science fiction.
And yet, I go back to my analogy with cooking. When was the last time
you called the fire department and a team of doctors just in case you
got badly burned while cooking your breakfast? Yet the risk is
there. We choose to emphasize how good eggs taste, instead of skin
graft procedures. That is my point. I am not saying that risk should
be dismissed, rather that we must not emphasize what is not likely; if
we do, people will assume it is certain.

Kay Aull

Dec 29, 2008, 3:35:22 PM
to DIYbio
There's a place for considering low-probability, high-risk events.
Just not to the exclusion of everything else.

For instance, I chose to leave my house today and go to work. While
on this epic 200-yard journey, I was exposed to a number of dangers -
serial killers, meteor strikes, velociraptors, etc. I would have been
safer had I remained in my apartment, preferably under the bed.

However, I bravely soldiered on. Why? Maybe I'm a radical with no
regard for my own safety. Or maybe I weighed the risk of velociraptor
vs. the reward of building things the world needs (and getting paid
for it). Today, the reward won. I may be eaten on the way home, but
I've made my choice.

That doesn't mean I reject risk mitigation strategies - I look both
ways before crossing the street, as cars are the most likely source of
trouble on my commute and are also easily avoided. I reject the call
for absolute assurance of safety before proceeding. Inaction is no
safeguard either; the velociraptors may find you under the bed, and
then what do you do?

- Kay

Terry Parks

Dec 29, 2008, 4:02:01 PM
to diy...@googlegroups.com
I've been designing microprocessors for 25 years. Humans invented the transistor, the logic gate, and computer architecture as we know it. Still, this highly evolved technology, whose principles of operation we understand, is woefully imperfect. There is no microprocessor that worked correctly the first time it was fabricated. We have simulators which will duplicate the errors found in silicon; the errors were just not discovered before fabrication. The saving grace is that bad chips are inert: usually they mostly work like they are supposed to but get some operations incorrect; sometimes they are just rocks (or pieces of glass).

The stakes in the biotech arena are much higher. It seems clear that since we cannot correctly engineer devices which we invented (transistors) that the likelihood that we can always correctly engineer life forms that we don't even really understand is zero.

You are then left with some argument that your failures are either inviable or harmless to other life forms. I have no idea how to calculate the probability of that, but I would like to see it done ...

Terry Parks

Nadeem Mazen

Dec 29, 2008, 4:03:05 PM
to diy...@googlegroups.com
Hilariously put. Two claws up, would maull* again.

Although I disagree a bit. As a biologist**, I'm going to go ahead and say that not sufficiently understanding a given problem at hand or the impact of neat stuff like gene therapy has been a long road of fail for a bunch of really smart people. Not that anyone on this list is (or even could be) making viruses in their basement - but still, I think an overview of past fail in video form would be a great jumping-off point for a generally awesome and well-intentioned movement like this.

Here's why: very quickly we are going to realize that we are smart enough to come up with low-cost solutions for high rpm centrifuges, amongst other things, and the question won't be "how can I make a yoghurt culture that turns green," but "what do I have the tools to do that might be useful?***"
I think at that point I will hide under the bed if we (DIYers) haven't developed a culture of fear and restraint. I think the time to discuss our future direction as a movement and our future hopes (and fears) as individuals is now, while we're stuck messing with E. coli or something.

I would not, by any stretch, put the chances of future DIYbio technology having the potential for harm in the same space as raptors and fatal car accidents.  

-Nadeem
 


*see what I did there? with raptors...and your name...yeah.
**I feel like saying "As a biologist" is a really douchey way to establish credibility. What I mean to say is "as someone who's been in a biological engineering program for 2 years and still knows almost nothing," but that doesn't roll off the tongue in an introductory sentence.
*** the answer will eventually be "anything I please"

Daniel Wexler

Dec 29, 2008, 5:38:30 PM
to diy...@googlegroups.com
When recombinant DNA technology was first developed in the mid-'70s, the government's first response to public concerns was a 2-year moratorium on genetic engineering, which was lifted as the public gradually became acclimated to the idea. This does not mean it is without risk, but the benefits far outweigh those risks.

Daniel Wexler

Dec 29, 2008, 5:48:02 PM
to diy...@googlegroups.com
I'd like to see someone engineer a true-blue rose...

Alec Nielsen

Dec 29, 2008, 6:06:34 PM
to diy...@googlegroups.com
Florigene and Suntory have engineered a true blue rose, and will be selling them this coming year. Really cool, but it would be even cooler for a DIY planthacker to be able to do the same.
 
Alec

T

Dec 29, 2008, 9:40:45 PM
to DIYbio
Just a simple thing like a high-temperature furnace would be enough to
render null and dead any biological specimen, be it bacterial or viral.
I'm talking on the order of 1,500 F or more. Pretty easy to generate
with an induction furnace.

Kay Aull

Dec 30, 2008, 5:00:53 PM
to DIYbio
Terry Parks: "It seems clear that since we cannot correctly engineer
devices which we invented (transistors) that the likelihood that we
can always correctly engineer life forms that we don't even really
understand is zero."

This is correct. Biology is an utter kludge - held together by safety
pins and duct tape, for billions of years. You want to re-engineer
that, you're in for a treat. The dependencies we can see are nasty,
and explained only by the logic of evolutionary convenience; the
dependencies we can't see, because they're unknown, undocumented, or
documented incorrectly, are even worse.

We're still trying. And failing most of the time.

The argument, however, is this - because we don't fully understand
biology, we're open to a catastrophic failure. You think that you've
made a virus that cures cancer, but instead it mutates and turns
people into zombies. That kind of thing.

Also, if you install a chess AI on your laptop and then throw it down
the stairs, your damaged hard drive might mutate into Skynet.
Fortunately, it's much easier to accidentally break things than it is
to accidentally give them evil superpowers.

Murphy's Law still holds in biology. For instance, engineered
organisms are going to get loose. This happens naturally too -
mutations and gene swapping are common. Nearly all of these variants
will be selected against, but a few will survive. Our current GMOs
are low-risk for even that. Agricultural traits (e.g., large fruits
or herbicide resistance) are liabilities in the wild. So are
bacterial party tricks like GFP. People are working on designs that
fail safely, or rely on rare metabolites not found in the wild, and so
on, but our best defense remains that life doesn't want to do most of
the things we engineer it for.

There's also malware. Yes, there will be black hat biohackers making
weapons. The Soviets were doing it in the '80s, Aum Shinrikyo
apparently tried in the '90s, and it's gotten much easier since then.
(No, I'm not doing it. Bad cop, no donut.) But to put it bluntly -
there are easier ways to kill people. If you've got a doomsday cult
willing to spend a hundred million dollars to make ebolapox, they can
do it. Most disgruntled teenagers will buy a gun instead.


Nadeem Mazen: "...the question won't be 'how can I make a yoghurt
culture that turns green,' but 'what do I have the tools to do that
might be useful?'"

That has always been the question. However, right now the answer is
"make green yogurt". =) And blue roses - but I'd bet the price tag on
that project is in the hundreds of millions by this point.

Terry Parks

Dec 30, 2008, 8:16:58 PM
to diy...@googlegroups.com


Kay Aull wrote:
> Terry Parks: "It seems clear that since we cannot correctly engineer
> devices which we invented (transistors) that the likelihood that we
> can always correctly engineer life forms that we don't even really
> understand is zero."
>
> This is correct. Biology is an utter kludge - held together by safety
> pins and duct tape, for billions of years. You want to re-engineer
> that, you're in for a treat. The dependencies we can see are nasty,
> and explained only by the logic of evolutionary convenience; the
> dependencies we can't see, because they're unknown, undocumented, or
> documented incorrectly, are even worse.
>
> We're still trying. And failing most of the time.
>
>
The real argument is about goodness = (probability of success *
potential benefit) / (probability of catastrophic failure * potential
cost). Since the potential cost is the cost of the destruction of all
life on earth - which I will equate with infinity for the sake of
argument - it seems that the probability of catastrophic failure has
to be brought to zero. It sounds to me like everybody expects nature,
or god, or just probabilities to ensure that their experiments do not
result in a really bad organism.
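Terry's ratio can be sketched numerically. This is a toy illustration only; the probabilities and dollar figures are made up, and the point is just that an infinite cost term drives the ratio to zero for any nonzero failure probability:

```python
def goodness(p_success, benefit, p_failure, cost):
    """Terry's risk ratio: (p_success * benefit) / (p_failure * cost)."""
    return (p_success * benefit) / (p_failure * cost)

# With a finite cost, even a tiny failure probability can look acceptable:
print(goodness(0.5, 1e6, 1e-6, 1e9))  # 500.0

# But if the potential cost is taken as infinite (destruction of all life),
# goodness collapses to zero for ANY nonzero failure probability:
print(goodness(0.5, 1e6, 1e-6, float("inf")))  # 0.0
```

Which is exactly Terry's point: with an unbounded downside, the only way to make the ratio non-zero is to drive the failure probability itself to zero.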
> The argument, however, is this - because we don't fully understand
> biology, we're open to a catastrophic failure. You think that you've
> made a virus that cures cancer, but instead it mutates and turns
> people into zombies. That kind of thing.
>
> Also, if you install a chess AI on your laptop and then throw it down
> the stairs, your damaged hard drive might mutate into Skynet.
> Fortunately, it's much easier to accidentally break things than it is
> to accidentally give them evil superpowers.
>
>
Hopefully my hard drive and my computer do not have access to any
weapon systems. I experiment constantly with the software on my
computer. I expect the only negative effects of my experimentation to
occur on my computer.
> Murphy's Law still holds in biology. For instance, engineered
> organisms are going to get loose. This happens naturally too -
> mutations and gene swapping are common. Nearly all of these variants
> will be selected against, but a few will survive. Our current GMOs
> are low-risk for even that. Agricultural traits (e.g., large fruits
> or herbicide resistance) are liabilities in the wild. So are
> bacterial party tricks like GFP. People are working on designs that
> fail safely, or rely on rare metabolites not found in the wild, and so
> on, but our best defense remains that life doesn't want to do most of
> the things we engineer it for.
>
Trusting the fate of all life on earth to Murphy's Law is totally
ridiculous.

Bryan Bishop

Dec 30, 2008, 8:45:37 PM
to diy...@googlegroups.com, kan...@gmail.com
On Tue, Dec 30, 2008 at 7:16 PM, Terry Parks <wiz...@centtech.com> wrote:
> Trusting the fate of all life on earth to Murphy's Law is totally
> ridiculous.

Here's my basic argument structure:

* x can probably cause y
* the mindset under which you consider the possibility of x is
actually also concerned with many things that can cause y
* from an engineering point of view, all of this adds up to 'bad'
because y is a SPOF
* so let's fix the SPOFyness.

Let's look at the situation from the point of view of an engineer.
What we have here is a giant planet with an open, exposed atmosphere
that I'm rather fond of and would prefer to keep. Because of the
self-replicating nature of organisms, Bad Stuff can rapidly propagate
through nearly any available medium and channel. This Bad Stuff can
result in global catastrophic risks, the likes of which make up the
reason for the existence of the Lifeboat Foundation. Other global
catastrophic risks have been discussed to death by the futurist
communities- asteroid impact, artificial intelligence, Drexlerian
molecular nanotech in the form of grey goo converting everything into
computronium, singularities, etc. etc. The common cause for all of
this is that each opportunity for augmenting an organism makes up this
giant, global SPOF, or Single Point of Failure.

http://en.wikipedia.org/wiki/Single_point_of_failure

"A Single Point of Failure, (SPOF), is a part of a system which, if it
fails, will stop the entire system from working. They are undesirable
in any system whose goal is high availability, be it a network,
software application or other industrial system."

"""
The assessment of a potentially single location of failure identifies
the critical components of a complex system, that would provoke a
total systems failure in case of malfunction. Highly reliable systems
may not rely on any such component.

The strategy to prevent from total systems failure is

1) Reduced Complexity

Complex systems shall be designed according to principles decomposing
complexity to the required level.

2) Redundancy

Redundant systems include a double instance for any critical component
with an automatic and robust switch or handle to turn control over to
the other well functioning unit (failover)

3) Diversity

Diversity design is a special redundancy concept that cares for the
doubling of functionality in completely different design setups of
components to decrease the probability that redundant components might
fail both at the same time under identical conditions.

4) Transparency

Whatever systems design will deliver, long term reliability is based
on transparent and comprehensive documentation.
"""

DIYbio has already been working on #1 (reduced complexity), somewhat
on #3 (though I don't know how we could promote even more diversity),
and definitely on #4 (transparency). The issue of **redundancy** is
one for the aerospace engineers. The NewSpace culture has been working
steadily on privatizing space, rocket engineering, and so on, and
we've been seeing XCOR, Armadillo Aerospace, Masten, Unreasonable
Rocket, SpaceX, FREDNET, and others take up some of the responsibility
here. In space, there's not a shared atmosphere, so the risk of
biological threats being communicable goes way down. It's important to
note that the solution sort of transcends the realm that diybio deals
with-- in other words, the "open atmosphere system" thingy is broken
by 'design'. Oh wait, we didn't design it in the first place ;-) so
why shock people and claim that we're the ones breaking something that
is broken by design?

Anyway, this is an obligatory reference:

The Lifeboat Foundation
http://lifeboat.com/ex/main
"""
The Lifeboat Foundation is a nonprofit nongovernmental organization
dedicated to encouraging scientific advancements while helping
humanity survive existential risks and possible misuse of increasingly
powerful technologies, including genetic engineering, nanotechnology,
and robotics/AI, as we move towards a technological singularity.

Lifeboat Foundation is pursuing a variety of options, including
helping to accelerate the development of technologies to defend
humanity, including new methods to combat viruses (such as RNA
interference and new vaccine methods), effective nanotechnological
defensive strategies, and even self-sustaining space colonies in case
the other defensive strategies fail.
"""

And their 'bioshield project', though I'm not optimistic-

http://lifeboat.com/ex/bio.shield
"""
Ray Kurzweil says "We have an existential threat now in the form of
the possibility of a bioengineered malevolent biological virus. With
all the talk of bioterrorism, the possibility of a bioengineered
bioterrorism agent gets little and inadequate attention. The tools and
knowledge to create a bioengineered pathogen are more widespread than
the tools and knowledge to create an atomic weapon, yet it could be
far more destructive. I'm on the Army Science Advisory Group (a board
of five people who advise the Army on science and technology), and the
Army is the institution responsible for the nation's bioterrorism
protection. Without revealing anything confidential, I can say that
there is acute awareness of these dangers, but there is neither the
funding nor national priority to address them in an adequate way."

Today more than a quarter of all deaths worldwide — 15 million each
year — are due to infectious diseases. These include 4 million from
respiratory infections, 3 million from HIV/AIDS, and 2 million from
waterborne diseases such as cholera. This is a continuing and
intolerable holocaust that, while sparing no class, strikes hardest at
the weak, the impoverished, and the young.

President Bush's plan to spend $7.1 billion on this threat which was
reduced to $2.3 billion by Congress is not nearly enough for a threat
that could easily cost hundreds of billions of dollars to the US alone
if it materialized, not to mention the damages to the rest of the
world. $2.3 billion is just $153 per person who WILL die this year due
to infectious diseases.

The new realities of terrorism and suicide bombers pull us one step
further. How would we react to the devastation caused by a virus or
bacterium or other pathogen unleashed not by the forces of nature, but
intentionally by man?

No intelligence agency, no matter how astute, and no military, no
matter how powerful and dedicated, can assure that a small terrorist
group using readily available equipment in a small and apparently
innocuous setting cannot mount a first-order biological attack. With
the rapid advancements in technology, we are rapidly moving from
having to worry about state-based biological programs to smaller
terrorist-based biological programs.

It's possible today to synthesize virulent pathogens from scratch, or
to engineer and manufacture prions that, introduced undetectably over
time into a nation's food supply, would after a long delay afflict
millions with a terrible and often fatal disease. It's a new world.

Though not as initially dramatic as a nuclear blast, biological
warfare is potentially far more destructive than the kind of nuclear
attack feasible at the operational level of the terrorist. And
biological war is itself distressingly easy to wage.

...

CODES OF CONDUCT

More generally, we think that the idea of codes of conduct for
biosecurity is somewhat misleading. Codes of conduct probably make
sense for biosafety, because in that case each biologist needs to be
continuously thinking about whether his or her experiment is being
done safely.

Biosecurity is different. The main thing we want to avoid here is
somebody doing an "experiment of concern" that makes weapons radically
easier or more effective. This is a one-time decision and most of the
knowledge needed to make that judgment does not really involve
biology. People have been building bioweapons for fifty years and if
you aren't part of that community it's very easy to guess wrong about
whether your experiment is harmless.

Some time ago, NIH funded a grant to improve our knowledge of how
toxic botox really is. Sounds fine. But the experimental method
involved figuring out how to stabilize ultrapure botox, which is
something that both the US and the USSR failed to do in the sixties.
Biologists can't reliably know this kind of history or what's
important; it's not their subject, and it's not reasonable to expect
every biologist to learn it.

So we think that the room for codes of conduct is pretty limited. Our
suggestions would be:

Get a sanity check. If you think that you have an experiment of
concern, then get qualified outside advice. Lifeboat Foundation
Scientific Advisory Board member Stephen M. Maurer is working with
people at Maryland, Duke, and Northwestern to set up a portal where
people can get this advice. We think a public pronouncement that you
should always get a qualified outside opinion is important and would
stop the practice of doing the experiment and then announcing it to
the AP.

Make the community more transparent. If you look at how US
intelligence went about deciding whether the Nazis had a bomb project,
they used the worldwide physics community to find out who had suddenly
stopped teaching or dropped out of sight. So if we can make science
communities more transparent, then that's presumably going to pay
dividends later on. You can imagine taking steps like holding
reunions; even a community web site with names would be good.

..

FIRST LEVEL BARRIER

We call for the development of a "first level barrier" by:

1) Stockpiling industrial disposable particulate respirators such as
the N95, N99, or N100 types. These masks reduce the wearer's chance of
being infected, or, if already infected, of spreading the infection to
other people. If a plague became serious, it would probably be best
for the government to run TV spots telling people that their best
chance of surviving a plague (natural or otherwise) is simply to stay
home. Which, by the way, is also the best way to stop transmission.

2) Installing intense UV field generators in the HVAC systems of
aircraft and other public places, as described in our report for
Virgin Atlantic.

3) Stockpiling antiviral drugs such as Tamiflu and antibiotics such as
Cipro.

TECHNOLOGIES TO COMBAT BIOLOGICAL VIRUSES

One technology to develop is RNAi-based viral suppression. Further
strategies for battling viral infections are also being developed by
biotech and pharma companies, such as research programs on the use of
decoy oligonucleotides, aptamers, and other small molecules such as
peptides and glycopeptides to inhibit viral fusion with human cell
membranes or to block viral function. These technologies are new and
largely unproven, so the definitive tests are still ahead.

Other technologies that should be developed include:

1) Development of rapid detection and identification technology: such
technologies are being developed based on electrostatic interactions
with unmodified gold nanoparticles, silicon transistors (also
described in DNA detection made easy), or DNA pairing with a
single-stranded DNA tethered to an enzyme which becomes activated upon
binding to the complementary strand.

2) Development of "smart" materials such as antiviral surface coatings
that are being tested for use in face masks and other applications.

3) Further advances in sequencing technologies, ultimately reaching a
target of full virus sequencing within hours. As mentioned, developing
a vaccine or other treatment for an unknown virus requires the rapid
sequencing of its entire DNA or RNA genome. Sequencing technology is
in widespread use and is constantly decreasing in cost per segment
sequenced as well as in the time taken for the sequencing.

The relevant outcomes of development in this field will reduce sample
preparation time as well as expand the diversity of materials useful
for isolating viruses to be sequenced (blood, saliva, skin, mucosa).
As faster sequencing and better sequence-assembly software are
constantly being developed, the need to undertake these measures
specifically becomes less pressing. Examples of emerging rapid
sequencing technologies include nanopore-based sequencing, sequencing
based on nano-scale electronic and photonic effects, and sequencing
performed using microarray-based fluorescently-tagged polymerase and
nucleotides.

4) Software-based treatment design. A longer term and expensive
(though ultimately valuable) avenue of research, which would be useful
in a variety of medicinal applications, would be the development of a
comprehensive software system able to analyze the genetic makeup of a
virus as well as the proteins it expresses (its proteome), which could
provide specific epitopic or conformational targets to interfere with
the production, processing and function of these molecules.

The initial identification of a virus's susceptibilities would help
determine the most likely effective antiviral treatments based on DNA,
RNA, or protein-based interference strategies. Software-based
strategies should also allow identification of the optimal protein
sequences to use as a vaccine, and should accelerate the "good guys'"
response in the "arms race" as further bioengineered, malicious
pathogens are developed.

Because it would be suicidal for a terrorist group or nation to use
airborne infectious viruses, they may decide to use engineered
bacteria or prions instead. (Although suicidal terrorists do exist!)
To combat these threats, we propose frequent testing of the water
supply, not just for known bacteria but for the biologically necessary
consensus DNA sequences that would be present even in engineered
organisms. All known toxin-producing sequences should be tested for as
well.

We also propose more extensive testing of the meat supply for prion
sequences and we are definitely against the current government
regulations which prohibit meat processors from doing extra prion
tests at their own expense! This testing would be expensive but we are
currently doing way too little of it. Additionally, testing air in
cities would be useful.

Note that technologies like PCR get cheaper every day and large scale
testing of this kind would further reduce the per test cost.

We support development of the prion blood test being developed by
Claudio A. Soto's group. This new test is a million times more
sensitive than conventional antibody-based techniques for detecting
prions.

CONCLUSION

It would be more cost effective if those funding the BioShield set
specific goals and gave prize money to the people/organizations that
accomplished them than simply funding research without such goals.

We propose that we take the measure of this threat and make
preparations today to engage it with the force and knowledge adequate
to throw it back wherever and however it may strike. It is time to
accelerate the development of antiviral and antibacterial technology
for the human population. The way to combat this serious and
ever-growing threat is to develop broad tools to destroy viruses and
bacteria. We have tools such as those based on RNA interference that
can block gene expression. We can now sequence the genes of a new
virus in a matter of days, so our goal is within reach!

We call for the creation of new technologies and the enhancement of
existing technologies to increase our abilities to detect, identify,
and model any emerging or newly identified infective agent, present or
future, natural or otherwise — we need to accelerate the expansion of
our capacity to engineer vaccines for immunization, and explore the
feasibility of other medicinals to cure or circumvent infections, and
to manufacture, distribute, and administer what we need in a timely
and effective manner that protects us all from the threat of
bioengineered malevolent viruses and microbial organisms. Time is
running out.

These goals have been endorsed by Bill Joy and Ray Kurzweil in the New
York Times op-ed Recipe for Destruction and by U.S. Senator Bill
Frist.

The time for action is now!
"""

One of their other projects is 'space habitats': "To build fail-safes
against global existential risks by encouraging the spread of
sustainable human civilization beyond Earth." This is somewhat related
to O'Neill's work, OSCOMAK, etc.
http://lifeboat.com/ex/space.habitats

michael taylor

unread,
Dec 31, 2008, 12:17:55 AM12/31/08
to diy...@googlegroups.com
On Tue, Dec 30, 2008 at 8:16 PM, Terry Parks <wiz...@centtech.com> wrote:
>
> Trusting the fate of all life on earth to Murphy's Law is totally
> ridiculous.

We cannot legislate or regulate nature, and that's essentially how
nature works anyway. People who perform biological and genetic
research, whether they are professionals in corporate or government
laboratories or amateurs in self-funded makeshift labs, have the same
innate desire for self-preservation, and the same probability of
making mistakes that could lead to releasing a GMO into the wild. The
facilities and resources differ, but the underlying fundamentals are
unchanged.

>> There's also malware. Yes, there will be black hat biohackers making
>> weapons. The Soviets were doing it in the '80s, Aum Shinrikyo
>> apparently tried in the '90s, and it's gotten much easier since then.

Richard Preston conjectured in _The Demon In The Freezer_ [1] that it
would cost (from memory) approximately $5,000-10,000 (USD) to
minimally equip a lab for bioterrorism virology circa 2001, so let's
admit that bioterrorism is a realistic threat. But that has nothing to
do with DIYbio: the basic knowledge is already unclassified and widely
known around the world, taught in universities in nearly every
country. Completely stopping amateur biologists would have zero
relevant impact on stopping any motivated terrorist, foreign or
domestic.

Throughout the history of science there are cases of significant
contributions to scientific knowledge by amateurs, although we have
historically identified them not as amateurs but often as geniuses.
The fears appear to come in two basic forms. The first is a xenophobic
fear of the unknown, which appears to be most prevalent among the
general public and popular media. The second is a fear of change, of
the "rules of the game" in the scientific community being changed;
mathematics, for example, experienced a similar fear of change with
the introduction of computer-completed mathematical proofs (the four
colour theorem being the famous example [2]).

I think the part that disappoints me is the blissful assumption that
"professional" science is somehow better, for however you wish to
define or measure better, than "amateur" science, and that it is okay
for science to be conducted out of sight and out of public awareness,
except when rare ethical issues flare up, as with stem cell research
and genetically modified agriculture, where 'gut instinct' opinions
and decisions are typically made without understanding what is
actually being discussed. Imagine deciding what car to buy without
knowing what an automobile is.

I am interested in identifying concrete and practical concerns about
lab safety, but I admit that some of the "fear-mongering" rests on
vague, fanciful stories that seem contrived. I have trouble believing
that someone could follow instructions well enough to perform such
majestic feats of genetics without noticing any of the numerous safety
warnings on the various reagents, equipment, and third-party-supplied
organisms. My problem is, I can't believe that you could manage to
conduct any successful genetic alteration without acquiring the
necessary skills, tools, and understanding of sterilization and
contamination.

[1] Excerpt from The Demon In The Freezer,
<http://cryptome.info/0001/smallpox-wmd.htm>
[2] Four Colour (Map) Theorem
<http://www.math.gatech.edu/~thomas/FC/fourcolor.html> and article
Swart, ER (1980). "The philosophical implications of the four-color
problem". American Mathematical Monthly 87 (9): 697--702.
<http://www.joma.org/images/upload_library/22/Ford/Swart697-707.pdf>

-Michael

Daniel C.

unread,
Dec 31, 2008, 12:47:36 AM12/31/08
to diy...@googlegroups.com
On Tue, Dec 30, 2008 at 6:16 PM, Terry Parks <wiz...@centtech.com> wrote:
> Since the potential cost is the cost of the destruction of all
> life on earth

I am assuming, since this is a biology-themed mailing list, that the
hypothetical destroying agent would itself be alive, and therefore
would not in fact destroy all life on earth. Unless you posit that it
somehow self destructs after killing everything else.

-DTC

Message has been deleted
Message has been deleted

Kay Aull

unread,
Dec 31, 2008, 4:54:02 AM12/31/08
to DIYbio
On Dec 30, 8:16 pm, Terry Parks <wiz...@centtech.com> wrote:
> The real argument is about goodness = (probability of success *
> potential benefit)  / (probability of catastrophic failure * potential
> cost). Since the potential cost is the cost of the destruction of all
> life on earth - which I will equate with infinity for arguments
> purposes, It seems that the (probability of catastrophic failure) has to
> be brought to zero.

I agree with you - but the baseline (probability of catastrophic
failure) is not zero. Technology can protect us from existential
risks. Yes, it creates them too. But if it improves the odds
overall, then the technology is worth supporting.

goodness = sum( (incremental change in probability of foo) *
(consequences of foo) )
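
That sum can be sketched in a few lines of Python; the probabilities
and consequence values below are made-up placeholders for
illustration, not estimates:

```python
def goodness(outcomes):
    """Expected 'goodness' of a technology: sum over outcomes of
    (incremental change in probability) * (consequence value).
    Negative consequence values represent harms."""
    return sum(dp * value for dp, value in outcomes)

# Made-up placeholder numbers, purely for illustration:
synthetic_bio = [
    (+0.10, +100.0),  # eases pressure on food, water, and fuel supplies
    (+0.02, -400.0),  # something horrible gets loose
]
print(goodness(synthetic_bio))  # positive => worth supporting, on this model
```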

My instinct says that synthetic biology has positive goodness. The
benefits of reducing pressure on our food, water, and fuel supplies -
so we don't need to destroy the earth to get more, or fight wars over
what remains - is worth the risk of creating something horrible that
gets loose.

> Trusting the fate of all life on earth to Murphy's Law is totally
> ridiculous.

Bulletproof evidence or no, the fact that we're still alive in a
Murphy's Law-enabled biosphere is a darn good sign. Everything that
can go wrong will go wrong - and it's off to a fine start. We've got
exotic invasive species: rabbits, starlings, zebra mussels, kudzu.
We've got deadly bioweapons: black plague, anthrax, ebola, smallpox.
All of these are Bad Things. We've survived all that and worse.

And if we roll out the technology with humility and common sense, and/
or get lucky, we may not have to find out how wide that margin really
is.

Nick Taylor

unread,
Dec 31, 2008, 5:29:23 AM12/31/08
to diy...@googlegroups.com
> goodness = sum( (incremental change in likelihood of foo) *
> (consequences of foo) )

I think that formula is possibly a little misleading in its simplicity. If you're talking about self-replicating organisms, then the resulting ecosystems are (often delicate) balances of exponential forces. You don't necessarily get a stable outcome.


 
>> Trusting the fate of all life on earth to Murphy's Law is totally
>> ridiculous.

> Bulletproof evidence or no, the fact that we're still alive in a
> Murphy's Law-enabled biosphere is a darn good sign. Everything that
> can go wrong will go wrong - and it's already doing a fine job. We
> have exotic invasive species of all sorts: rabbits, starlings, zebra
> mussels, kudzu, and more. We have a quality assortment of bioweapons:
> black plague, anthrax, ebola, smallpox. These are all Bad Things.
> But we have handled all that and worse.

I'm not sure that the way that humanity "handled" the black death would be acceptable to most people.

If you are creating organisms that can out-compete native organisms which are part of an interdependent ecosystemic web, then that is potentially catastrophically dangerous to other parts of the system.

I'm from New Zealand - I'm less concerned about weaponising of the technology than interference with existing ecosystems - because I've seen at first hand the damage that this can do. We have sniffer dogs at our airports here, going over everyone who comes through customs, not for drugs or weapons, but to stop meat and fruit. To get into NZ, you have to sign a card saying that you haven't been on a foreign farm in the last couple of weeks.  

To say that "we've handled it" trivialises the phenomenal costs of "handling it"... the piles of burning cattle in England during the 2001 foot-and-mouth outbreak, 50 million people dead in 1918, the gargantuan effort it took to eradicate smallpox, or the fact that we're currently presiding over what is candidly known as the Holocene extinction.

For what it's worth, I think the cat is already out of the bag and we should probably be thinking in terms of how to survive a serious bio-cockup... but that's a big one... because let's face it, we aren't actually "handling things" as it is. We're seriously screwing everything up, and that's without people tinkering with randomly introducing new species into unsuspecting ecosystems.

Nick



Oh, Hi everyone btw.





Bryan Bishop

unread,
Dec 31, 2008, 12:48:29 PM12/31/08
to diy...@googlegroups.com, kan...@gmail.com
On Wed, Dec 31, 2008 at 4:29 AM, Nick Taylor <nick...@googlemail.com> wrote:
> For what it's worth, I think the cat is already out of the bag and we should
> probably be thinking in terms of how to survive a serious bio-cockup... but
> that's a big one... because let's face it, we aren't actually "handling
> things" as it is. We're seriously screwing everything up, and that's without
> people tinkering with randomly introducing new species into unsuspecting
> ecosystems.

I'd be interested in hearing your ideas on how to help things.
Immediately what comes to mind are the Biosphere projects, but these
were failures. The seed facilities planned for the moon are an
interesting idea, but that's only for seeded life. Re: survival, some
less interesting solutions IMHO would be the typical bunker and
"controlled atmospheric ventilation-exposure systems" (caves) where
you keep track of what buildings you have been in and try to maintain
full knowledge of what might be in your system (this was somewhat
mentioned in a recent email I sent re: the Lifeboat Foundation). But
personally I like living outside without suits :-/.

Ecological bootstrapping requires some serious study. :-)

Nick Taylor

unread,
Jan 1, 2009, 2:40:40 AM1/1/09
to diy...@googlegroups.com
> I'd be interested in hearing your ideas on how to help things.
> Immediately what comes to mind are the Biosphere projects, but these
> were failures. The seed facilities planned for the moon are an
> interesting idea, but that's only for seeded life. Re: survival, some
> less interesting solutions IMHO would be the typical bunker and
> "controlled atmospheric ventilation-exposure systems" (caves) where
> you keep track of what buildings you have been in and try to maintain
> full knowledge of what might be in your system (this was somewhat
> mentioned in a recent email I sent re: the Lifeboat Foundation). But
> personally I like living outside without suits :-/.

Ok - I haven't forgotten, but as I said this is a big one, and I don't have answers, just various observations etc. I'll need to give it a couple of days thought, and as this is new year's day, trains of thought are difficult to maintain.

 
> Ecological bootstrapping requires some serious study. :-)

Ecology is already a pretty good bootstrapper. If you get a little fish tank (or big vase), fill it up with water and drop a tablespoon of weed you've scooped off the surface of a pond into it, you'll get a thriving little ecosystem that will go for years - I gave up on my last one when the vase thing was 1/2 filled with sludge. 

And therein lies the rub I think. Ecology is already boot-strapping and is actually pretty hard to stop. Getting it so it's stable, and providing the various bits that are needed for the likes of we humans to survive is another matter altogether. Stability is the thing. A pandemic is basically just a de-stabilisation brought about by the introduction of a new species/variant.

There is some discussion over on Technium theorising about making a civilisation seed : http://www.kk.org/thetechnium/archives/2006/02/the_forever_boo.php

I guess a biosphere seed might be a similar sort of thing, only many dimensions more complicated. It might be best to find solutions that can be implemented before we get to that stage.

But as I say though, it's a big one, and I don't have answers. Just observations.




Nick





Bryan Bishop

unread,
Jan 1, 2009, 12:30:34 PM1/1/09
to diy...@googlegroups.com, kan...@gmail.com
On Thu, Jan 1, 2009 at 1:40 AM, Nick Taylor <nick...@googlemail.com> wrote:
>> I'd be interested in hearing your ideas on how to help things.
>> Immediately what comes to mind are the Biosphere projects, but these
>> were failures. The seed facilities planned for the moon are an
>> interesting idea, but that's only for seeded life. Re: survival, some
>> less interesting solutions IMHO would be the typical bunker and
>> "controlled atmospheric ventilation-exposure systems" (caves) where
>> you keep track of what buildings you have been in and try to maintain
>> full knowledge of what might be in your system (this was somewhat
>> mentioned in a recent email I sent re: the Lifeboat Foundation). But
>> personally I like living outside without suits :-/.
>
> Ok - I haven't forgotten, but as I said this is a big one, and I don't have
> answers, just various observations etc. I'll need to give it a couple of
> days thought, and as this is new year's day, trains of thought are difficult
> to maintain.

Understandably. :-)

>> Ecological bootstrapping requires some serious study. :-)
>
> Ecology is already a pretty good bootstrapper. If you get a little fish tank
> (or big vase), fill it up with water and drop a tablespoon of weed you've
> scooped off the surface of a pond into it, you'll get a thriving little
> ecosystem that will go for years - I gave up on my last one when the vase
> thing was 1/2 filled with sludge.
>
> And therein lies the rub I think. Ecology is already boot-strapping and is
> actually pretty hard to stop. Getting it so it's stable, and providing the
> various bits that are needed for the likes of we humans to survive is
> another matter altogether. Stability is the thing. A pandemic is basically
> just a de-stabilisation brought about by the introduction of a new
> species/variant.

Yes, that's for non-isolated systems though. All ecologies and
ecosystems are open to some extent, cite thermodynamics here and so
on, but when I talk about bootstrapping I'm sort of talking about
making life work in some environment that it has never worked in
before. Not just a tank, but say a completely isolated ecology that is
only able to be given some initial seeds to grow from. This is the
same idea that leads to thinking about space habitats, the Biosphere
projects, terraforming Mars, the moons or other planets, and so on.

> There is some discussion over on Technium theorising about making a
> civilisation seed :
> http://www.kk.org/thetechnium/archives/2006/02/the_forever_boo.php

Woah, you're now like my best friend or something for hitting on the
magic link :-). That's one of my favorite Kevin Kelly articles. I was
writing about it a few weeks ago:

http://groups.google.com/group/openmanufacturing/browse_frm/thread/4a6f612fb2d9069a/e4c375acce772250?lnk=gst&q=kk.org#e4c375acce772250

which I'll quote from below:
=================================

On Thu, Dec 18, 2008 at 3:13 PM, Bryan Bishop <kanz...@gmail.com> wrote:
> I first learned about Dave Gingery from Kevin Kelly:
> http://www.kk.org/thetechnium/archives/2007/03/bootstrapping_t.php
> (Another article of his worth reading and on topic is re:
> civilizations as creatures:
> http://www.kk.org/thetechnium/archives/2006/03/civilizations_a.php )

> "Recently a guy re-invented the fabric of industrial society in his
> garage. The late Dave Gingery was a midnight machinist in Springfield,
> Missouri who enjoyed the challenge of making something from nothing,
> or perhaps it is more accurate to say, making very much by leveraging
> the power of very little. Over years of tinkering, Gingery was able to
> bootstrap a full-bore machine shop from alley scraps. He made rough
> tools that made better tools, which then made tools good enough to
> make real stuff."

Hm. That second kk.org link, I think, is the wrong one. Let's try this one:

http://www.kk.org/thetechnium/archives/2006/02/the_forever_boo.php


"I've been thinking of civilization (the technium) as a life form, as
a self-replicating structure. I began to wonder what is the smallest
seed into which you could reduce the "genes" of civilization, and have
it unfold again, sufficient that it could also make another seed
again. That is, what is the smallest seed of the technium that is
viable? It must be a seed able to grow to reproduction age and express
itself as a full-fledged civilization and have offspring itself --
another replicating seed.


This seed would most likely be a library full of knowledge and perhaps
tools. Many libraries now contain a lot of what we know about our
culture and technology, and even a little bit of how to recreate it,
but this library would have to accurately capture all the essential
knowledge of cultural self-reproduction. It is important to realize
that this seed library is not the universal library of everything we
know. Rather, it is a kernel that contains that which cannot be
replicated and that which when expanded can recover what we know."


Anyway, somewhere in his bloggings he specifically relates the nucleus
of the civilization creature as the self-replicating library of tools,
information and culture, in the sense of von Neumann probes:


Implementation notes on von Neumann probes
http://heybryan.org/projects/atoms/ (ok, it's old)
"The basic idea of a von Neumann probe is to have a space-probe that
is able to navigate the galaxy and use self-replication (see RepRap
and bio). The probe would contain hundreds of thousands of digital
genomes (sequenced DNA), DNA synthesizers and sequencers, bacteria,
embryos, stem cells, copies of the Internet Archive and a significant
portion of the WWW in general, plus the immediate means and tools to
copy all of the information and create a material embodiment, kind of
like running an unzip utility on top of the thousands of exabytes
predicted to be in existence today. This would probably include many
people, societies, even entire civilizations if we can collect enough
data and begin to 'debug' civilization. The system might end up using
an ion drive and a hydrogen collector, with on-board nucleosynthesis
to create the biomolecules necessary for life, plus ways to attach to
asteroids and begin replicating and copying the data and
biomaterials."


von Neumann replicator award/prize:
http://www.chiark.greenend.org.uk/~douglasr/prize/


"The Prize


So what needs to be done to bring these two things together?


1) Show that 90 % of a self assembling robotic system can be
fabricated using a rapid prototyping system that can also self
replicate


2 )Show that 90 % of the assembly from parts of a rapid prototyping
system can be done by a robotic system that can also self assemble."


**but** Freitas clearly outlines the issue of closure engineering that
shouldn't be ignored in his KSRM book and AASM report:


http://groups.google.com/group/openmanufacturing/msg/4ff7a92e2425dde2
http://www.islandone.org/MMSG/aasm/AASM53.html#536
http://www.molecularassembler.com/KSRM/5.6.htm


Which I'll quote from again:


================


Fundamental to the problem of designing self-replicating systems is
the issue of closure.


In its broadest sense, this issue reduces to the following question:
Does system function (e.g., factory output) equal or exceed system
structure (e.g., factory components or input needs)? If the answer is
negative, the system cannot independently fully replicate itself; if
positive, such replication may be possible.


Consider, for example, the problem of parts closure. Imagine that the
entire factory and all of its machines are broken down into their
component parts. If the original factory cannot fabricate every one of
these items, then parts closure does not exist and the system is not
fully self-replicating.


In an arbitrary system there are three basic requirements to achieve closure:
Matter closure - can the system manipulate matter in all ways
necessary for complete self-construction?
Energy closure - can the system generate sufficient energy and in the
proper format to power the processes of self-construction?
Information closure - can the system successfully command and control
all processes required for complete self-construction?


Partial closure results in a system which is only partially
self-replicating. Some vital matter, energy, or information must be
provided from the outside or the machine system will fail to
reproduce. For instance, various preliminary studies of the matter
closure problem in connection with the possibility of "bootstrapping"
in space manufacturing have concluded that 90-96% closure is
attainable in specific nonreplicating production applications (Bock,
1979; Miller and Smith, 1979; O'Neill et al., 1980). The 4-10% that
still must be supplied sometimes are called "vitamin parts." These
might include hard-to-manufacture but lightweight items such as
microelectronics components, ball bearings, precision instruments and
others which may not be cost-effective to produce via automation
off-Earth except in the longer term. To take another example, partial
information closure would imply that factory-directive control or
supervision is provided from the outside, perhaps (in the case of a
lunar facility) from Earth-based computers programmed with
human-supervised expert systems or from manned remote teleoperation
control stations on Earth or in low Earth orbit.


The fraction of total necessary resources that must be supplied by
some external agency has been dubbed the "Tukey Ratio" (Heer, 1980).
Originally intended simply as an informal measure of basic materials
closure, the most logical form of the Tukey Ratio is computed by
dividing the mass of the external supplies per unit time interval by
the total mass of all inputs necessary to achieve self-replication.
(This is actually the inverse of the original version of the ratio.)
In a fully self-replicating system with no external inputs, the Tukey
Ratio thus would be zero (0%).
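
A trivial sketch of the ratio as defined above, with invented masses
(the 6% external fraction is only a placeholder, echoing the 4-10%
"vitamin parts" range mentioned above):

```python
def tukey_ratio(external_supply_mass, total_input_mass):
    """Fraction of the inputs needed for self-replication that must be
    supplied externally. 0.0 means full closure (no external inputs)."""
    return external_supply_mass / total_input_mass

# Hypothetical lunar factory: 6 tons of "vitamin parts" shipped from
# Earth out of 100 tons of total replication inputs per time interval.
print(tukey_ratio(6.0, 100.0))  # 0.06
```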


It has been pointed out that if a system is "truly isolated in the
thermodynamic sense and also perhaps in a more absolute sense (no
exchange of information with the environment) then it cannot be
self-replicating without violating the laws of thermodynamics"
(Heer,1980). While this is true, it should be noted that a system
which achieves complete "closure" is not "closed" or "isolated" in the
classical sense. Materials, energy, and information still flow into
the system which is thermodynamically "open"; these flows are of
indigenous origin and may be managed autonomously by the SRS itself
without need for direct human intervention.


Closure theory. For replicating machine systems, complete closure is
theoretically quite plausible; no fundamental or logical
impossibilities have yet been identified. Indeed, in many areas
automata theory already provides relatively unambiguous conclusions.
For example, the theoretical capability of machines to perform
"universal computation" and "universal construction" can be
demonstrated with mathematical rigor (Turing, 1936; von Neumann, 1966;
see also sec. 5.2), so parts assembly closure is certainly
theoretically possible.


An approach to the problem of closure in real engineering systems is
to begin with the issue of parts closure by asking the question: can a
set of machines produce all of its elements? If the manufacture of
each part requires, on average, the addition of >1 new parts to
produce it, then an infinite number of parts are required in the
initial system and complete closure cannot be achieved. On the other
hand, if the mean number of new parts per original part is <1, then
the design sequence converges to some finite ensemble of elements and
bounded replication becomes possible.
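Under the simplifying assumption that each new part type drags in a fixed average number r of further new part types, this convergence argument reduces to a geometric series, which is finite exactly when r < 1. A toy sketch (the function name and numbers are illustrative):

```python
def ensemble_size(initial_parts, r):
    """Total part types needed for closure if each new part type requires,
    on average, r additional new part types: N * (1 + r + r^2 + ...).

    Returns infinity when r >= 1, i.e. the design sequence diverges.
    """
    if r >= 1.0:
        return float("inf")
    return initial_parts / (1.0 - r)  # closed form of the geometric sum

print(ensemble_size(100, 0.5))  # 200.0 -- closure converges
print(ensemble_size(100, 1.1))  # inf -- no finite ensemble exists
```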


The central theoretical issue is: can a real machine system itself
produce and assemble all the kinds of parts of which it is comprised?
In our generalized terrestrial industrial economy manned by humans the
answer clearly is yes, since "the set of machines which make all other
machines is a subset of the set of all machines" (Freitas et
al., 1981). In space a few percent of total system mass could feasibly
be supplied from Earth-based manufacturers as "vitamin parts."
Alternatively, the system could be designed with components of very
limited complexity (Heer, 1980). The minimum size of a self-sufficient
"machine economy" remains unknown.


===
===


According to the NASA study final report [2]: "In actual practice, the
achievement of full closure will be a highly complicated, iterative
engineering design process.* Every factory system, subsystem,
component structure, and input requirement must be carefully matched
against known factory output capabilities. Any gaps in the
manufacturing flow must be filled by the introduction of additional
machines, whose own construction and operation may create new gaps
requiring the introduction of still more machines. The team developed
a simple iterative procedure for generating designs for engineering
systems which display complete closure. The procedure must be
cumulatively iterated, first to achieve closure starting from some
initial design, then again to eliminate overclosure to obtain an
optimized design. Each cycle is broken down into a succession of
subiterations which ensure qualitative, quantitative, and throughput
closure. In addition, each subiteration is further decomposed into
design cycles for each factory subsystem or component." A few
subsequent attempts to apply closure analysis have concentrated
largely on qualitative materials closure in machine replicator systems
while de-emphasizing quantitative and nonmaterials closure issues
[1128], or have considered closure issues only in the more limited
context of autocatalytic chemical networks [2367, 2686]. However, Suh
[1160] has presented a systematic approach to manufacturing system
design wherein a hierarchy of functional requirements and design
parameters can be evaluated, yielding a "functionality matrix" (Figure
3.61) that can be used to compare structures, components, or features
of a design with the functions they perform, with a view to achieving
closure.
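The gap-filling iteration described in the report can be caricatured as a fixed-point computation over a dependency graph: add whatever machines the current design cannot build, then re-check, until no gaps remain. The sketch below is a toy model, not the study's actual procedure; the machine names and dependencies are invented for illustration:

```python
def close_design(initial_machines, requires):
    """Iteratively add machines until every machine in the design can be
    produced by the design itself (qualitative parts closure).

    `requires` maps each machine to the set of machines needed to build
    it; machines absent from `requires` need nothing beyond the base set.
    """
    design = set(initial_machines)
    while True:
        gaps = set()
        for m in design:
            gaps |= requires.get(m, set()) - design
        if not gaps:       # fixed point reached: no remaining gaps
            return design
        design |= gaps     # fill the gaps, then re-check (new machines
                           # may themselves create new gaps)

# Hypothetical machine dependencies, purely for illustration:
deps = {
    "lathe": {"mill", "furnace"},
    "mill": {"furnace"},
    "robot": {"lathe", "chip_fab"},
}
print(sorted(close_design({"robot"}, deps)))
# ['chip_fab', 'furnace', 'lathe', 'mill', 'robot']
```

Eliminating overclosure (the optimization pass the report mentions) would be a second loop that removes machines no remaining machine depends on.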


* To get a sense of the complex iterative nature of closure
engineering, the reader should ponder the design process that he or
she might undertake in order to generate the following full-closure
self-referential "pangram" [2687] (a sentence using all 26 letters at
least once), written by Lee Sallows and reported by Hofstadter [260]:
"Only the fool would take trouble to verify that his
sentence was composed of ten a's, three b's, four c's, four d's,
forty-six e's, sixteen f's, four g's, thirteen h's, fifteen i's, two
k's, nine l's, four m's, twenty-five n's, twenty-four o's, five p's,
sixteen r's, forty-one s's, thirty-seven t's, ten u's, eight v's, four
x's, eleven y's, twenty-seven commas, twenty-three apostrophes, seven
hyphens, and, last but not least, a single !" Self-enumerating
sentences like these are also called "Sallowsgrams" [2687] and have
been generated in English, French, Dutch, and Japanese using
iterative computer programs.
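Generating such a sentence requires an iterative fixed-point search, but the verification step the "fool" performs is straightforward counting. A minimal checker (this verifies claimed counts; it does not generate Sallowsgrams, and the toy claim below is not the pangram itself):

```python
from collections import Counter

def verify_counts(sentence, claimed):
    """Check whether `sentence` actually contains the claimed number of
    each character (case-insensitive)."""
    actual = Counter(sentence.lower())
    return all(actual[ch] == n for ch, n in claimed.items())

# Toy claim, for illustration only:
print(verify_counts("banana", {"a": 3, "b": 1, "n": 2}))  # True
print(verify_counts("banana", {"a": 2}))                  # False
```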


Partial closure results in a system that is only partially
self-replicating. With partial closure, the machine system will fail
to self-replicate if some vital matter, energy, or information input
is not provided from the outside. For instance, various preliminary
studies [2688-2690] of the materials closure problem in connection
with the possibility of macroscale "bootstrapping" in space
manufacturing have concluded that 90-96% closure is attainable in
specific nonreplicating manufacturing applications. The 4-10% that
still must be supplied are sometimes called "vitamin parts." (The
classic example of self-replication without complete materials
closure: Humans self-reproduce but must take in vitamin C, whereas
most other self-reproducing vertebrates can make their own vitamin C
[2691].) In the case of macroscale replicators, vitamin parts might
include hard-to-manufacture but lightweight items such as
microelectronics components, ball bearings, precision instruments, and
other parts which might not be cost-effective to produce via
automation off-Earth except in the longer term. To take another
example, partial information closure might imply that factory control
or supervision is provided from the outside, perhaps (in the case of a
lunar facility) from Earth-based computers programmed with
human-supervised expert systems or from manned remote teleoperation
control stations located on Earth or in low Earth orbit.


Regarding closure engineering, Friedman [573] observes that "if 96%
closure can really be attained for the lunar solar cell example, it
would represent a factor of 25 less material that must be expensively
transported to the moon. However, ...a key factor ... which deserves
more emphasis [is] the ratio of the weight of a producing assembly to
the product assembly. For example, the many tons of microchip
manufacturing equipment required to produce a few ounces of microchips
makes this choice a poor one – at least early in the evolution – for
self-replication, thus making microelectronics the top of everyone's
list of 'vitamin parts'."
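Friedman's factor of 25 is just the reciprocal of the uncovered fraction, 1/(1 - 0.96). A one-line check (the function name is assumed for illustration):

```python
def transport_reduction(closure):
    """Factor by which imported mass shrinks at a given closure fraction,
    relative to importing everything: 1 / (1 - closure)."""
    if not 0.0 <= closure < 1.0:
        raise ValueError("closure must be in [0, 1)")
    return 1.0 / (1.0 - closure)

print(round(transport_reduction(0.96), 6))  # 25.0 -- the lunar solar cell example
print(round(transport_reduction(0.90), 6))  # 10.0 -- the low end of the 90-96% range
```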


================
================


Here's to cramming everything into as small a space as possible.

Nick Taylor

Jan 1, 2009, 11:48:46 PM1/1/09
to diy...@googlegroups.com
I threw a bunch of initial (and vaguely scrambled) thoughts at this here blog, because there's a bit much to fit in an email.

http://www.genomicon.com/2009/01/here-comes-the-flood-diy-bio/

Don't worry, there are pictures etc :)





n