"Certainly there are enough twelve year old kids making computer
viruses to show what can happen."
This concern does not carry across - at least currently - to the DIY
bio community. Twelve-year-old kids "making" computer viruses are
actually using premade virus-construction kits written by more
experienced programmers. The kits can be downloaded anonymously
from the internet by anyone who can find them, and the viruses they
make are all basically the same with minor variations. All of the
viruses made this way are already detectable by most commercial virus
scanners because the degree of variation with a kit like this is
fairly limited. Doing this with biological viruses would be
significantly complicated by the requirement to order - presumably by
mail - a lot of equipment, set up a laboratory, learn to use the
equipment, and so on. None of these things are easy for
adults; they would be nearly impossible for an adolescent.
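The point about kit-made viruses being easy to detect can be shown with a toy sketch (everything here is hypothetical and only models the argument, not real malware): a kit that only varies a small payload leaves an invariant core that a signature scanner will always match.

```python
# Toy model: a "construction kit" that only varies a payload string
# leaves an invariant core, so a single signature catches every variant.
# All names and strings are invented for illustration.

INVARIANT_CORE = b"KIT_LOADER_v1"  # code every kit output shares

def kit_build(payload: bytes) -> bytes:
    """Assemble a 'virus' from the fixed kit core plus a variable payload."""
    return INVARIANT_CORE + b"|" + payload

def scanner_flags(sample: bytes, signatures=(INVARIANT_CORE,)) -> bool:
    """Signature scanner: flag if any known signature appears in the sample."""
    return any(sig in sample for sig in signatures)

variants = [kit_build(p) for p in (b"popup", b"delete_files", b"spam")]
assert all(scanner_flags(v) for v in variants)  # every variant is caught
```

The limited "degree of variation with a kit" is exactly why one signature suffices; defeating the scanner would require rewriting the core, which is the part the twelve-year-old didn't write.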
"imagine a twelve year old kid spreading a computer virus that gets
into 3D printers and has them print out spider-like robots at night
with venom injectors or knives that murder people in their beds."
This is so ridiculous that it almost doesn't need to be responded to.
If anyone really is worried about something like this happening I urge
them to look into the current state of the art in robotics and AI
programming, and then compare their findings to the technology that
would be required to make a murderous knife-wielding robot.
-DTC
.. a link to one of Paul's posts. I agree, it's scaremongering.
> "Certainly there are enough twelve year old kids making computer
> viruses to show what can happen."
>
> This concern does not carry across - at least currently - to the DIY
> bio community. 12 year old kids "making" computer viruses are
> actually using premade virus construction kits which are written by
Don't be so sure - I was once a 12 year old making viruses myself,
playing around with buffer overflows and the crap that counts as
'quality code'. Though the majority, yes, are using kits.
> significantly complicated by the requirement to purchase and order -
> presumably by mail - a lot of equipment, set up a laboratory, learn to
> use the equipment, etc. etc. None of these things are easy for
> adults; they would be nearly impossible for an adolescent.
Wait, what? Most people find that younger children understand
computers better than older people do -- usually because the young
have so much free time on their hands to just sit and tinker with
technical stuff for hours on end. Ordering from a
list of materials using the internet is something that a lot of kids
know how to do: "just enter the credit card number and press go". I
don't think an argument of "it's too complex" holds up here -
technology is making it simpler all the time.
> "imagine a twelve year old kid spreading a computer virus that gets
> into 3D printers and has them print out spider-like robots at night
> with venom injectors or knives that murder people in their beds."
>
> This is so ridiculous that it almost doesn't need to be responded to.
> If anyone really is worried about something like this happening I urge
> them to look into the current state of the art in robotics and AI
> programming, and then compare their findings to the technology that
> would be required to make a murderous knife-wielding robot.
What, like this?
http://en.wikipedia.org/wiki/Military_robot
And what, rapid prototyping printers like these?
http://reprap.org/
http://fabathome.org/
Even those that do electric circuitry?
http://heybryan.org/books/Manufacturing/rapid_prototyped_electronic_circuits/report.html
As for the robotic controller aspects, there's software like OpenCV
for facial recognition, and lots of fancy algorithms that you can play
with (also in a simulator to get some variation and maybe improvement)
to get the controller working properly with the available legs and
actuators and such.
But still, I agree it's scaremongering. I replied to his email by
pointing out that talking about the relative comparison of risks is
somewhat inappropriate because there are similar risks directly from
nature itself. But let's lock down nature - stop all bacteria from
operating! That'll do the trick (erm, not-- it'll do *something*, but
something even far worse..).
Sure, ordering equipment is easy. Even learning the requisite biology
to engineer a new strain of bacteria would probably be within the
reach of a smart 12 year old. But setting up the laboratory -
presumably in their parents' house - and actually carrying it out
without their parents noticing? Certainly one can postulate a
situation in which it could happen, but one can postulate a lot of
things that aren't worth worrying about.
> http://en.wikipedia.org/wiki/Military_robot
Making dangerous robots is easy. You could even make them with a
rapid prototyping printer. But...
> As for the robotic controller aspects, there's software like OpenCV
this is where it all breaks down. Don't have time just now to go into
detail (and probably no-one cares anyway) but suffice it to say that
the technology required to make a robot that can hunt through a
person's home and then attack them does not exist -- or if it does
(haven't read up on the most recent DARPA tests on autonomous land
vehicles) it requires parts and systems that a prefab printer cannot
create.
-DTC
You present several valid concerns -- a small subset of everything
that could conceivably go wrong, I'm sure. But it's important to keep
any discussion of risks focused on hazards that are real.
-DTC
Here's my basic argument structure:
* x can probably cause y
* the mindset under which you consider the possibility of x is
actually also concerned with many things that can cause y
* from an engineering point of view, all of this adds up to 'bad'
because y is a SPOF
* so let's fix the SPOFyness.
Let's look at the situation from the point of view of an engineer.
What we have here is a giant planet with an open, exposed atmosphere
that I'm rather fond of and would prefer to keep. Because of the
self-replicating nature of organisms, Bad Stuff can rapidly propagate
through nearly any available medium and channel. This Bad Stuff can
result in global catastrophic risks, the likes of which make up the
reason for the existence of the Lifeboat Foundation. Other global
catastrophic risks have been discussed to death by the futurist
communities- asteroid impact, artificial intelligence, Drexlerian
molecular nanotech in the form of grey goo converting everything into
computronium, singularities, and so on. The common thread in all of
this is that our single shared biosphere makes up one giant, global
SPOF, or Single Point of Failure: every opportunity for augmenting an
organism is an opportunity to take the whole system down.
http://en.wikipedia.org/wiki/Single_point_of_failure
"A Single Point of Failure, (SPOF), is a part of a system which, if it
fails, will stop the entire system from working. They are undesirable
in any system whose goal is high availability, be it a network,
software application or other industrial system."
"""
The assessment of a potentially single location of failure identifies
the critical components of a complex system, that would provoke a
total systems failure in case of malfunction. Highly reliable systems
may not rely on any such component.
The strategy to prevent from total systems failure is
1) Reduced Complexity
Complex systems shall be designed according to principles decomposing
complexity to the required level.
2) Redundancy
Redundant systems include a double instance for any critical component
with an automatic and robust switch or handle to turn control over to
the other well functioning unit (failover)
3) Diversity
Diversity design is a special redundancy concept that cares for the
doubling of functionality in completely different design setups of
components to decrease the probability that redundant components might
fail both at the same time under identical conditions.
4) Transparency
Whatever systems design will deliver, long term reliability is based
on transparent and comprehensive documentation.
"""
DIYbio has already been working on #1 (reduced complexity), somewhat
on #3 (though I don't know how we could promote even more diversity),
and definitely on #4 (transparency). The issue of **redundancy** is
one for the aerospace engineers. The NewSpace culture has been working
steadily on privatizing space, rocket engineering, and so on, and
we've been seeing XCOR, Armadillo Aerospace, Masten, Unreasonable
Rocket, SpaceX, FREDNET, and others take up some of the responsibility
here. In space, there's not a shared atmosphere, so the risk of
biological threats being communicable goes way down. It's important to
note that the solution sort of transcends the realm that diybio deals
with-- in other words, the "open atmosphere system" thingy is broken
by 'design'. Oh wait, we didn't design it in the first place ;-) so
why shock people and claim that we're the ones breaking something that
is broken by design?
Anyway, this is an obligatory reference:
The Lifeboat Foundation
http://lifeboat.com/ex/main
"""
The Lifeboat Foundation is a nonprofit nongovernmental organization
dedicated to encouraging scientific advancements while helping
humanity survive existential risks and possible misuse of increasingly
powerful technologies, including genetic engineering, nanotechnology,
and robotics/AI, as we move towards a technological singularity.
Lifeboat Foundation is pursuing a variety of options, including
helping to accelerate the development of technologies to defend
humanity, including new methods to combat viruses (such as RNA
interference and new vaccine methods), effective nanotechnological
defensive strategies, and even self-sustaining space colonies in case
the other defensive strategies fail.
"""
And their 'bioshield project', though I'm not optimistic-
http://lifeboat.com/ex/bio.shield
"""
Ray Kurzweil says "We have an existential threat now in the form of
the possibility of a bioengineered malevolent biological virus. With
all the talk of bioterrorism, the possibility of a bioengineered
bioterrorism agent gets little and inadequate attention. The tools and
knowledge to create a bioengineered pathogen are more widespread than
the tools and knowledge to create an atomic weapon, yet it could be
far more destructive. I'm on the Army Science Advisory Group (a board
of five people who advise the Army on science and technology), and the
Army is the institution responsible for the nation's bioterrorism
protection. Without revealing anything confidential, I can say that
there is acute awareness of these dangers, but there is neither the
funding nor national priority to address them in an adequate way."
Today more than a quarter of all deaths worldwide — 15 million each
year — are due to infectious diseases. These include 4 million from
respiratory infections, 3 million from HIV/AIDS, and 2 million from
waterborne diseases such as cholera. This is a continuing and
intolerable holocaust that, while sparing no class, strikes hardest at
the weak, the impoverished, and the young.
President Bush's plan to spend $7.1 billion on this threat which was
reduced to $2.3 billion by Congress is not nearly enough for a threat
that could easily cost hundreds of billions of dollars to the US alone
if it materialized, not to mention the damages to the rest of the
world. $2.3 billion is just $153 per person who WILL die this year due
to infectious diseases.
The new realities of terrorism and suicide bombers pull us one step
further. How would we react to the devastation caused by a virus or
bacterium or other pathogen unleashed not by the forces of nature, but
intentionally by man?
No intelligence agency, no matter how astute, and no military, no
matter how powerful and dedicated, can assure that a small terrorist
group using readily available equipment in a small and apparently
innocuous setting cannot mount a first-order biological attack. With
the rapid advancements in technology, we are rapidly moving from
having to worry about state-based biological programs to smaller
terrorist-based biological programs.
It's possible today to synthesize virulent pathogens from scratch, or
to engineer and manufacture prions that, introduced undetectably over
time into a nation's food supply, would after a long delay afflict
millions with a terrible and often fatal disease. It's a new world.
Though not as initially dramatic as a nuclear blast, biological
warfare is potentially far more destructive than the kind of nuclear
attack feasible at the operational level of the terrorist. And
biological war is itself distressingly easy to wage.
...
CODES OF CONDUCT
More generally, we think that the idea of codes of conduct for
biosecurity is somewhat misleading. Codes of conduct probably make
sense for biosafety, because in that case each biologist needs to be
continuously thinking about whether his or her experiment is being
done safely.
Biosecurity is different. The main thing we want to avoid here is
somebody doing an "experiment of concern" that makes weapons radically
easier or more effective. This is a one-time decision and most of the
knowledge needed to make that judgment does not really involve
biology. People have been building bioweapons for fifty years and if
you aren't part of that community it's very easy to guess wrong about
whether your experiment is harmless.
Some time ago, NIH funded a grant to improve our knowledge of how
toxic botox really is. Sounds fine. But the experimental method
involved figuring out how to stabilize ultrapure botox, which is
something that the US and USSR both failed to do in the sixties.
Biologists can't reliably know this kind of history or what's
important, it's not their subject and its not reasonable for every
biologist to learn it.
So we think that the room for codes of conduct is pretty limited. Our
suggestions would be:
Get a sanity check. If you think that you have an experiment of
concern, then get qualified outside advice. Lifeboat Foundation
Scientific Advisory Board member Stephen M. Maurer is working with
people at Maryland, Duke, and Northwestern to set up a portal where
people can get this advice. We think a public pronouncement that you
should always get a qualified outside opinion is important and would
stop the practice of doing the experiment and then announcing it to
the AP.
Make the community more transparent. If you look at how US
intelligence went about deciding whether the Nazis had a bomb project,
they used the worldwide physics community to find out who had suddenly
stopped teaching or dropped out of sight. So if we can make science
communities more transparent, then that's presumably going to pay
dividends later on. You can imagine taking steps like holding
reunions, even a community web site with names would be good.
..
FIRST LEVEL BARRIER
We call for the development of a "first level barrier" by 1)
Stockpiling industrial disposable particulate respirators such as the
N95, N99, or N100 types. This mask will prevent the user from being
infected, or if already infected, from spreading it to other people.
If a plague became serious, it would probably be best for the
government to put on TV spots that tell people that your best chance
of surviving a plague (natural or otherwise) is to simply stay home.
Which, by the way, is also the best way to stop transmission. 2)
Installation of intense UV field generators in the HVAC system of
aircraft and other public places as described in our report for Virgin
Atlantic. 3) Stockpiling of antiviral drugs such as Tamiflu and
antibiotics such as Cipro.
TECHNOLOGIES TO COMBAT BIOLOGICAL VIRUSES
One technology to develop is RNAi-based viral suppression. Also,
further strategies for battling viral infections are being developed
by biotech and pharma companies, such as research programs focusing on
the use of decoy oligonucleotides, aptamers, and other small molecules
such as peptides and glycopeptides to inhibit viral fusion with human
cell membranes or function. These technologies are new and largely
unproven so the important and definitive times are ahead.
Other technologies that should be developed include:
1) Development of rapid detection and identification technology:
technologies such as these are being developed, based on electrostatic
interactions with unmodified gold nanoparticles, silicon transistors
(also described in DNA detection made easy), or DNA pairing with a
single strand DNA tethered to an enzyme which becomes activated upon
binding to the complementary strand.
2) Development of "smart" materials such as antiviral surface coatings
that are being tested for use in face masks and other applications.
3) Further advances in sequencing technologies, ultimately reaching a
target of full virus sequencing within hours. As mentioned,
identification of the virus used for the development of a vaccine or
other treatment for an unknown virus necessitates the rapid sequencing
of its entire DNA or RNA genome. Sequencing technology is in
widespread use and is constantly decreasing in cost per segment
sequenced as well as in the time taken for the sequencing.
The relevant outcomes of development in this field will reduce sample
preparation time as well as expand the diversity of materials useful
for isolation of viruses to be sequenced (blood, saliva, skin,
mucosa). As faster sequencing times and better sequence assembly
software are constantly being developed the need for these measures to
be specifically undertaken is of lesser importance. Examples for
emerging rapid sequencing technologies include nanopore-based
sequencing, sequencing based on nano-scale electronic and photonic
effects, and sequencing performed using microarray-based
fluorescently-tagged polymerase and nucleotides.
4) Software-based treatment design. A longer term and expensive
(though ultimately valuable) avenue of research, which would be useful
in a variety of medicinal applications, would be the development of a
comprehensive software system able to analyze the genetic makeup of a
virus as well as the proteins it expresses (its proteome), which could
provide specific epitopic or conformational targets to interfere with
the production, processing and function of these molecules.
The initial identification of a virus's susceptibilities would help
determine the most likely effective antiviral treatments based on DNA,
RNA, or protein-based interference strategies. Software-based
strategies should also allow the identification of the optimal protein
sequences to be used as a vaccine, and be able to accelerate the "good
guys" response in the "arms race" as further bioengineered, malicious
pathogens are developed.
Because it would be suicidal for a terrorist group or nation to use
airborne infectious viruses, they may decide to use engineered
bacteria or prions instead. (Although suicidal terrorists do exist!)
To combat these threats, we propose frequent testing of the water
supply, not just for known bacteria but for the biologically necessary
consensus DNA sequences that would be present even in engineered
organisms. All known toxin-producing sequences should be tested for as
well.
We also propose more extensive testing of the meat supply for prion
sequences and we are definitely against the current government
regulations which prohibit meat processors from doing extra prion
tests at their own expense! This testing would be expensive but we are
currently doing way too little of it. Additionally, testing air in
cities would be useful.
Note that technologies like PCR get cheaper every day and large scale
testing of this kind would further reduce the per test cost.
We support development of the prion blood test being developed by
Claudio A. Soto's group. This new test is a million times more
sensitive than conventional antibody-based techniques for detecting
prions.
CONCLUSION
It would be more cost effective if those funding the BioShield set
specific goals and gave prize money to the people/organizations that
accomplished them than simply funding research without such goals.
We propose that we take the measure of this threat and make
preparations today to engage it with the force and knowledge adequate
to throw it back wherever and however it may strike. It is time to
accelerate the development of antiviral and antibacterial technology
for the human population. The way to combat this serious and
ever-growing threat is to develop broad tools to destroy viruses and
bacteria. We have tools such as those based on RNA interference that
can block gene expression. We can now sequence the genes of a new
virus in a matter of days, so our goal is within reach!
We call for the creation of new technologies and the enhancement of
existing technologies to increase our abilities to detect, identify,
and model any emerging or newly identified infective agent, present or
future, natural or otherwise — we need to accelerate the expansion of
our capacity to engineer vaccines for immunization, and explore the
feasibility of other medicinals to cure or circumvent infections, and
to manufacture, distribute, and administer what we need in a timely
and effective manner that protects us all from the threat of
bioengineered malevolent viruses and microbial organisms. Time is
running out.
These goals have been endorsed by Bill Joy and Ray Kurzweil in the New
York Times op-ed Recipe for Destruction and by U.S. Senator Bill
Frist.
The time for action is now!
"""
One of their other projects is 'space habitats' -- "To build
fail-safes against global existential risks by encouraging the spread
of sustainable human civilization beyond Earth." -- which is somewhat
related to O'Neill's work, OSCOMAK, etc.
http://lifeboat.com/ex/space.habitats
We cannot legislate or regulate nature; that is essentially how
nature itself works. People who perform biology and genetic research -
whether professionals in corporate or government laboratories or
amateurs in self-funded makeshift labs - have the same innate desire
for self-preservation, and the same probability of making mistakes
that could lead to releasing a GMO into the wild. The facilities and
resources differ, but the underlying fundamentals are unchanged.
>> There's also malware. Yes, there will be black hat biohackers making
>> weapons. The Soviets were doing it in the '80s, Aum Shinrikyo
>> apparently tried in the '90s, and it's gotten much easier since then.
Richard Preston conjectured in _The Demon In The Freezer_ [1] that it
would cost (from memory) approximately $5,000-10,000 (USD) to
minimally equip a lab for bioterrorism virology circa 2001, so let's
admit bioterrorism is a realistic threat. But that has nothing to do
with DIYbio: the basic knowledge is already unclassified and widely
known around the world, in universities in nearly every country.
Completely stopping amateur biologists would have zero relevant
impact on stopping any motivated terrorists, foreign or domestic.
Throughout the history of science there are cases of significant
contributions to scientific knowledge by amateurs, although we have
historically identified them not as amateurs but as geniuses.
The fears appear to take two basic forms. The first is a xenophobic
fear of the unknown, which appears to be most prevalent among the
general public and popular media. The second is a fear of change - of
the "rules of the game" of the scientific community being changed.
Mathematics, as an example, experienced a similar fear of change with
the introduction of computer-completed mathematical proofs (the four
colour theorem being the famous example [2]).
I think the part that disappoints me is the blissful assumption that
"professional" science is somehow better - for however you wish to
define or measure "better" - than "amateur" science, and that it is
okay for science to be conducted out of sight and out of public
awareness, except when rare ethical issues flare up, like stem cell
research and genetically modified agriculture, where 'gut instinct'
opinions and decisions are typically made without understanding what
is actually being talked about. Imagine deciding what car to buy
without knowing what an automobile is.
I am interested in identifying concrete and practical concerns about
lab safety, but I admit that I find the "fear-mongering" - vague,
fanciful stories - contrived. I have trouble believing that someone
could follow instructions well enough to perform such majestic feats
of genetics without noticing any of the numerous safety warnings on
the various reagents, equipment, and third-party-supplied organisms.
My problem is, I can't believe that you could manage to conduct any
successful genetic alterations without acquiring the necessary
skills, tools, and understanding of sterilization and contamination.
[1] Excerpt from The Demon In The Freezer,
<http://cryptome.info/0001/smallpox-wmd.htm>
[2] Four Colour (Map) Theorem
<http://www.math.gatech.edu/~thomas/FC/fourcolor.html> and article
Swart, ER (1980). "The philosophical implications of the four-color
problem". American Mathematical Monthly 87 (9): 697--702.
<http://www.joma.org/images/upload_library/22/Ford/Swart697-707.pdf>
-Michael
I am assuming, since this is a biology-themed mailing list, that the
hypothetical destroying agent would itself be alive, and therefore
would not in fact destroy all life on earth. Unless you posit that it
somehow self destructs after killing everything else.
-DTC
Nick
I'd be interested in hearing your ideas on how to help things.
Immediately what comes to mind are the Biosphere projects, but these
were failures. The seed facilities planned for the moon are an
interesting idea, but that's only for seeded life. Re: survival, some
less interesting solutions IMHO would be the typical bunker and
"controlled atmospheric ventillation-exposure systems" (caves) where
you keep track of what buildings you have been in and try to maintain
full knowledge of what might be in your system (this was somewhat
mentioned in a recent email I sent re: the Lifeboat Foundation). But
personally I like living outside without suits :-/.
Ecological bootstrapping requires some serious study. :-)
Understandably. :-)
>> Ecological bootstrapping requires some serious study. :-)
>
> Ecology is already a pretty good bootstrapper. If you get a little fish tank
> (or big vase), fill it up with water and drop a tablespoon of weed you've
> scooped off the surface of a pond into it, you'll get a thriving little
> ecosystem that will go for years - I gave up on my last one when the vase
> thing was 1/2 filled with sludge.
>
> And therein lies the rub I think. Ecology is already boot-strapping and is
> actually pretty hard to stop. Getting it so it's stable, and providing the
> various bits that are needed for the likes of we humans to survive is
> another matter altogether. Stability is the thing. A pandemic is basically
> just a de-stabilisation brought about by the introduction of a new
> species/variant.
Yes, that's for non-isolated systems though. All ecologies and
ecosystems are open to some extent, cite thermodynamics here and so
on, but when I talk about bootstrapping I'm sort of talking about
making life work in some environment that it has never worked in
before. Not just a tank, but say a completely isolated ecology that is
only able to be given some initial seeds to grow from. This is the
same idea that leads to thinking about space habitats, the Biosphere
projects, terraforming Mars, the moons or other planets, and so on.
> There is some discussion over on Technium theorising about making a
> civilisation seed :
> http://www.kk.org/thetechnium/archives/2006/02/the_forever_boo.php
Woah, you're now like my best friend or something for hitting on the
magic link :-). That's one of my favorite Kevin Kelly articles. I was
writing about it a few weeks ago:
which I'll quote from below:
=================================
On Thu, Dec 18, 2008 at 3:13 PM, Bryan Bishop <kanz...@gmail.com> wrote:
> I first learned about Dave Gingery from Kevin Kelly:
> http://www.kk.org/thetechnium/archives/2007/03/bootstrapping_t.php
> (Another article of his worth reading and on topic is re:
> civilizations as creatures:
> http://www.kk.org/thetechnium/archives/2006/03/civilizations_a.php )
> "Recently a guy re-invented the fabric of industrial society in his
> garage. The late Dave Gingery was a midnight machinist in Springfield,
> Missouri who enjoyed the challenge of making something from nothing,
> or perhaps it is more accurate to say, making very much by leveraging
> the power of very little. Over years of tinkering, Gingery was able to
> bootstrap a full-bore machine shop from alley scraps. He made rough
> tools that made better tools, which then made tools good enough to
> make real stuff."
Hm. That second kk.org link, I think, is the wrong one. Let's try this one:
http://www.kk.org/thetechnium/archives/2006/02/the_forever_boo.php
"I've been thinking of civilization (the technium) as a life form, as
a self-replicating structure. I began to wonder what is the smallest
seed into which you could reduce the "genes" of civilization, and have
it unfold again, sufficient that it could also make another seed
again. That is, what is the smallest seed of the technium that is
viable? It must be a seed able to grow to reproduction age and express
itself as a full-fledged civilization and have offspring itself --
another replicating seed.
This seed would most likely be a library full of knowledge and perhaps
tools. Many libraries now contain a lot of what we know about our
culture and technology, and even a little bit of how to recreate it,
but this library would have to accurately capture all the essential
knowledge of cultural self-reproduction. It is important to realize
that this seed library is not the universal library of everything we
know. Rather, it is a kernel that contains that which cannot be
replicated and that which when expanded can recover what we know."
Anyway, somewhere in his bloggings he specifically relates the nucleus
of the civilization creature as the self-replicating library of tools,
information and culture, in the sense of von Neumann probes:
Implementation notes on von Neumann probes
http://heybryan.org/projects/atoms/ (ok, it's old)
"The basic idea of a von Neumann probe is to have a space-probe that
is able to navigate the galaxy and use self-replication (see RepRap
and bio). The probe would contain hundreds of thousands of digital
genomes (sequenced DNA), DNA synthesizers and sequencers, bacteria,
embryos, stem cells, copies of the Internet Archive and a significant
portion of the WWW in general, plus the immediate means and tools to
copy all of the information and create a material embodiment, kind of
like running an unzip utility on top of the thousands of exabytes
predicted to be in existence today. This would probably include many
people, societies, even entire civilizations if we can collect enough
data and begin to 'debug' civilization. The system might end up using
an ion drive and a hydrogen collector, with on-board nucleosynthesis
to create the biomolecules necessary for life, plus ways to attach to
asteroids and begin replicating and copying the data and
biomaterials."
von Neumann replicator award/prize:
http://www.chiark.greenend.org.uk/~douglasr/prize/
"The Prize
So what needs to be done to bring these two things together?
1) Show that 90% of a self assembling robotic system can be
fabricated using a rapid prototyping system that can also self
replicate
2) Show that 90% of the assembly from parts of a rapid prototyping
system can be done by a robotic system that can also self assemble."
**but** Freitas clearly outlines the issue of closure engineering that
shouldn't be ignored in his KSRM book and AASM report:
http://groups.google.com/group/openmanufacturing/msg/4ff7a92e2425dde2
http://www.islandone.org/MMSG/aasm/AASM53.html#536
http://www.molecularassembler.com/KSRM/5.6.htm
Which I'll quote from again:
================
Fundamental to the problem of designing self-replicating systems is
the issue of closure.
In its broadest sense, this issue reduces to the following question:
Does system function (e.g., factory output) equal or exceed system
structure (e.g., factory components or input needs)? If the answer is
negative, the system cannot independently fully replicate itself; if
positive, such replication may be possible.
Consider, for example, the problem of parts closure. Imagine that the
entire factory and all of its machines are broken down into their
component parts. If the original factory cannot fabricate every one of
these items, then parts closure does not exist and the system is not
fully self-replicating.
In an arbitrary system there are three basic requirements to achieve closure:
Matter closure - can the system manipulate matter in all ways
necessary for complete self-construction?
Energy closure - can the system generate sufficient energy and in the
proper format to power the processes of self-construction?
Information closure - can the system successfully command and control
all processes required for complete self-construction?
Partial closure results in a system which is only partially
self-replicating. Some vital matter, energy, or information must be
provided from the outside or the machine system will fail to
reproduce. For instance, various preliminary studies of the matter
closure problem in connection with the possibility of "bootstrapping"
in space manufacturing have concluded that 90-96% closure is
attainable in specific nonreplicating production applications (Bock,
1979; Miller and Smith, 1979; O'Neill et al., 1980). The 4-10% that
still must be supplied sometimes are called "vitamin parts." These
might include hard-to-manufacture but lightweight items such as
microelectronics components, ball bearings, precision instruments and
others which may not be cost-effective to produce via automation
off-Earth except in the longer term. To take another example, partial
information closure would imply that factory-directive control or
supervision is provided from the outside, perhaps (in the case of a
lunar facility) from Earth-based computers programmed with
human-supervised expert systems or from manned remote teleoperation
control stations on Earth or in low Earth orbit.
The fraction of total necessary resources that must be supplied by
some external agency has been dubbed the "Tukey Ratio" (Heer, 1980).
Originally intended simply as an informal measure of basic materials
closure, the most logical form of the Tukey Ratio is computed by
dividing the mass of the external supplies per unit time interval by
the total mass of all inputs necessary to achieve self-replication.
(This is actually the inverse of the original version of the ratio.)
In a fully self-replicating system with no external inputs, the Tukey
Ratio thus would be zero (0%).
It has been pointed out that if a system is "truly isolated in the
thermodynamic sense and also perhaps in a more absolute sense (no
exchange of information with the environment) then it cannot be
self-replicating without violating the laws of thermodynamics"
(Heer, 1980). While this is true, it should be noted that a system
which achieves complete "closure" is not "closed" or "isolated" in the
classical sense. Materials, energy, and information still flow into
the system which is thermodynamically "open"; these flows are of
indigenous origin and may be managed autonomously by the SRS itself
without need for direct human intervention.
Closure theory. For replicating machine systems, complete closure is
theoretically quite plausible; no fundamental or logical
impossibilities have yet been identified. Indeed, in many areas
automata theory already provides relatively unambiguous conclusions.
For example, the theoretical capability of machines to perform
"universal computation" and "universal construction" can be
demonstrated with mathematical rigor (Turing, 1936; von Neumann, 1966;
see also sec. 5.2), so parts assembly closure is certainly
theoretically possible.
An approach to the problem of closure in real engineering systems is
to begin with the issue of parts closure by asking the question: can a
set of machines produce all of its elements? If the manufacture of
each part requires, on average, the addition of >1 new parts to
produce it, then an infinite number of parts are required in the
initial system and complete closure cannot be achieved. On the other
hand, if the mean number of new parts per original part is <1, then
the design sequence converges to some finite ensemble of elements and
bounded replication becomes possible.
The central theoretical issue is: can a real machine system itself
produce and assemble all the kinds of parts of which it is comprised?
In our generalized terrestrial industrial economy manned by humans the
answer clearly is yes, since "the set of machines which make all other
machines is a subset of the set of all machines" (Freitas et
al., 1981). In space a few percent of total system mass could feasibly
be supplied from Earth-based manufacturers as "vitamin parts."
Alternatively, the system could be designed with components of very
limited complexity (Heer, 1980). The minimum size of a self-sufficient
"machine economy" remains unknown.
===
===
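(Aside, before the second excerpt: the convergence criterion above -
mean number of new parts per original part < 1 - is just a geometric
series. Here's a quick toy sketch; the parameter r is my own
illustrative knob, not a number from the report:)

```python
# Toy model of parts closure: start with one part; fabricating each
# part requires, on average, r new kinds of parts. The total distinct
# parts behave like the series sum of r**k, which converges iff r < 1.

def total_parts(r, max_generations=1000):
    """Approximate size of the parts ensemble after repeated expansion."""
    total, frontier = 0.0, 1.0
    for _ in range(max_generations):
        total += frontier
        frontier *= r          # each part spawns r new parts on average
        if frontier < 1e-12:   # series has effectively converged
            break
    return total

print(total_parts(0.5))   # -> about 2.0, i.e. 1/(1 - 0.5): bounded ensemble
print(total_parts(1.1))   # -> astronomically large: no finite closure
```

So the knife-edge is exactly where Freitas puts it: below 1 you get a
finite, bounded design; at or above 1 the parts list never terminates.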
According to the NASA study final report [2]: "In actual practice, the
achievement of full closure will be a highly complicated, iterative
engineering design process.* Every factory system, subsystem,
component structure, and input requirement must be carefully matched
against known factory output capabilities. Any gaps in the
manufacturing flow must be filled by the introduction of additional
machines, whose own construction and operation may create new gaps
requiring the introduction of still more machines. The team developed
a simple iterative procedure for generating designs for engineering
systems which display complete closure. The procedure must be
cumulatively iterated, first to achieve closure starting from some
initial design, then again to eliminate overclosure to obtain an
optimized design. Each cycle is broken down into a succession of
subiterations which ensure qualitative, quantitative, and throughput
closure. In addition, each subiteration is further decomposed into
design cycles for each factory subsystem or component." A few
subsequent attempts to apply closure analysis have concentrated
largely on qualitative materials closure in machine replicator systems
while de-emphasizing quantitative and nonmaterials closure issues
[1128], or have considered closure issues only in the more limited
context of autocatalytic chemical networks [2367, 2686]. However, Suh
[1160] has presented a systematic approach to manufacturing system
design wherein a hierarchy of functional requirements and design
parameters can be evaluated, yielding a "functionality matrix" (Figure
3.61) that can be used to compare structures, components, or features
of a design with the functions they perform, with a view to achieving
closure.
* To get a sense of the complex iterative nature of closure
engineering, the reader should ponder the design process that he or
she might undertake in order to generate the following full-closure
self-referential "pangram" [2687] (a sentence using all 26 letters at
least once), written by Lee Sallows and reported by
Hofstadter [260]: "Only the fool would take trouble to verify that his
sentence was composed of ten a's, three b's, four c's, four d's,
forty-six e's, sixteen f's, four g's, thirteen h's, fifteen i's, two
k's, nine l's, four m's, twenty-five n's, twenty-four o's, five p's,
sixteen r's, forty-one s's, thirty-seven t's, ten u's, eight v's, four
x's, eleven y's, twenty-seven commas, twenty-three apostrophes, seven
hyphens, and, last but not least, a single !" Self-enumerating
sentences like these are also called "Sallowsgrams" [2687] and have
been generated in English, French, Dutch, and Japanese languages using
iterative computer programs.
Partial closure results in a system that is only partially
self-replicating. With partial closure, the machine system will fail
to self-replicate if some vital matter, energy, or information input
is not provided from the outside. For instance, various preliminary
studies [2688-2690] of the materials closure problem in connection
with the possibility of macroscale "bootstrapping" in space
manufacturing have concluded that 90-96% closure is attainable in
specific nonreplicating manufacturing applications. The 4-10% that
still must be supplied are sometimes called "vitamin parts." (The
classic example of self-replication without complete materials
closure: Humans self-reproduce but must take in vitamin C, whereas
most other self-reproducing vertebrates can make their own vitamin C
[2691].) In the case of macroscale replicators, vitamin parts might
include hard-to-manufacture but lightweight items such as
microelectronics components, ball bearings, precision instruments, and
other parts which might not be cost-effective to produce via
automation off-Earth except in the longer term. To take another
example, partial information closure might imply that factory control
or supervision is provided from the outside, perhaps (in the case of a
lunar facility) from Earth-based computers programmed with
human-supervised expert systems or from manned remote teleoperation
control stations located on Earth or in low Earth orbit.
Regarding closure engineering, Friedman [573] observes that "if 96%
closure can really be attained for the lunar solar cell example, it
would represent a factor of 25 less material that must be expensively
transported to the moon. However, ...a key factor ... which deserves
more emphasis [is] the ratio of the weight of a producing assembly to
the product assembly. For example, the many tons of microchip
manufacturing equipment required to produce a few ounces of microchips
makes this choice a poor one – at least early in the evolution – for
self-replication, thus making microelectronics the top of everyone's
list of 'vitamin parts'."
================
================
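To make the numbers concrete: the Tukey Ratio and Friedman's
factor-of-25 are one-line computations. (The kg figures below are made
up for illustration; only the 96% closure figure comes from the quote.)

```python
def tukey_ratio(external_mass, total_input_mass):
    """Mass of externally supplied inputs divided by total input mass
    per unit time; 0.0 means full closure. This is Freitas's inverted
    form of Heer's original ratio."""
    return external_mass / total_input_mass

# Friedman's lunar solar-cell point: at 96% materials closure, only 4%
# of input mass must ship from Earth -- a 1 / (1 - 0.96) = 25x
# reduction in transported material.
closure = 0.96
print(round(1.0 / (1.0 - closure)))   # -> 25

# Illustrative numbers: 1000 kg of total inputs, 40 kg "vitamin parts".
print(tukey_ratio(40.0, 1000.0))      # -> 0.04
```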
Here's to cramming everything into as small a space as possible.
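PS - the "iterative computer programs" line in the Sallows footnote is
fun to try. Here's a toy version that tracks only three letters; the
full 26-letter problem is far harder (Sallows reportedly built
dedicated hardware for it). The sentence template is mine, nothing
from Freitas or Sallows:

```python
# Toy fixed-point search for a self-enumerating sentence: plug counts
# into the template, recount, and repeat until the counts describe
# themselves (a fixed point) or the iteration falls into a cycle.

NUMBER_WORDS = {1: "one", 2: "two", 3: "three", 4: "four", 5: "five",
                6: "six", 7: "seven", 8: "eight", 9: "nine", 10: "ten",
                11: "eleven", 12: "twelve", 13: "thirteen",
                14: "fourteen", 15: "fifteen", 16: "sixteen",
                17: "seventeen", 18: "eighteen", 19: "nineteen",
                20: "twenty"}

def render(counts):
    e, s, t = (NUMBER_WORDS[c] for c in counts)
    return "this sentence contains %s e's, %s s's and %s t's" % (e, s, t)

def iterate(counts=(1, 1, 1), max_steps=100):
    seen = set()
    for _ in range(max_steps):
        text = render(counts)
        new = tuple(text.count(ch) for ch in "est")
        if new == counts:
            return text    # fixed point: the sentence is true of itself
        if new in seen:
            return None    # cycle detected: no fixed point on this orbit
        seen.add(new)
        counts = new
    return None

print(iterate())  # often None -- cycles are common, which is why real
                  # searches add randomized restarts or larger templates
```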