And so it begins.
This is exactly what I was on about a couple of months ago... corporate
media attacking with frames rather than facts.
If a Fox News headline ends with a question, the answer is always "no",
but it doesn't matter because what they are doing is taking advantage of
Persistence of Myths to establish an idea.
http://www.washingtonpost.com/wp-dyn/content/article/2007/09/03/AR2007090300933.html
Whether they're doing this deliberately, or merely because driving a
wedge between conservative and liberal moralities sells and gets
attention, is anyone's guess.
But there you go - if the results in the link above are any
indication, a massive percentage of the people who read this article
will (after a short while) believe that the WSJ said that DIYbio is a
threat to national security - regardless of what the article went on to say.
I may be mistaken on this one - someone please correct me if I am -
but it's possible that *not* doing a mad scientist-type laugh on
camera might be a good first step.
-Dan
The best succinct message I've heard, and one I think DIYbio groups
should repeat to the mass media at every opportunity, is this:
Mutation in the wild of viruses and bacteria is vastly more likely to
cause an actual pandemic or threat to national security than anything
a biohacker could ever do. Including amateurs in the biosciences, and
completely public sharing of all available information, is the surest
way that we'll find a cure to stop any potential malevolent mutation
as quickly as possible and with as few casualties as possible.
This is especially pertinent now that H1N1 swine flu has been in the spotlight.
(I think I actually heard this first via the television show
ReGenesis, alluding to the real-life debate about whether the 1918
Spanish flu virus gene sequence should have been posted freely online
or not.)
> That aside, I'm still working out what to do when reporters ask, "So
> is this a threat to national security?".
Answering the question reinforces the frame - so you need to get your
own frames in first (to do with protecting and empowering).
But what I'd say, once the question is asked...
the long version:
Terence Taylor, the Washington biosecurity expert, says "our best defence
against biological risks is the rapid dissemination of the life-sciences",
and "what we have to do, and this is so important in the approach that we
take, is to safeguard the advance of this technology".
http://www.genomicon.com/2009/03/nice-synthetic-bio-intro-vid/
the short version:
Our best defence against anything is to have a lot of smart people on
the ground.
Nick
If you can't twitter it, it's not succinct.
n
Fair enough. I'll attempt a "sound bite" version:
> That aside, I'm still working out what to do when reporters ask, "So
> is this a threat to national security?".
"The threat to national security is greatest from *naturally*
occurring mutations in pathogens. Without amateur biologists working
on all of the available data, we may not discover how to halt the
spread of those pathogens before they become epidemics -- epidemics
that could disrupt continuity of government."
ok, here's one that's actually tweetable:
"The threat to national security is greatest from *natural* mutations
in pathogens. Without amateurs, we may not discover the cure in
time."
You don't know that.
n
I never have liked talking about life in terms of stakes and risks.
I'm no finance quant. I have many friends who basically live like
wild-frontier anti-existential-risk bounty hunters, but I disagree
with their strategy of hunting down existential risks. Any
"existential risk" is by definition unacceptable. The way to prevent
total system failure - in this case, total system failure of life - is
not to track down every imaginable risk and try to minimize it, but
rather to engineer transparency in, and redundancy throughout. As we
have discussed on this list many times before, we have this giant
atmosphere which has allowed unfortunate biological incidents -
like the 1979 Sverdlovsk airborne anthrax leak. NASA and the
military have already solved this issue. I suppose it is unfair to say
that it was NASA or the military that first solved it, since
atmospherically sealed chambers have been around far longer than
submarines have existed. In a submarine, you are environmentally protected from
the atmosphere. The same goes for the astronauts in the International
Space Station or any other space-faring vehicle. I'm not so sure about
high-altitude planes. It is theoretically [er, it has happened a few
times already?] possible for extremophiles to survive in the harsh
environment of space, but their available means of transportation
are significantly reduced.
I think I can see which one sounds more reasonable: expecting
everybody to be perfect, always, to never do something stupid or
anything harmful, and never to be malicious - versus building our
infrastructure, engineering the solution to the problem, and
taking responsibility for our health and continued health.
I suppose I'm coming from a slightly different perspective, with a
background in open manufacturing, fab labs, open source hardware,
space shuttles, etc., so I can see why this isn't getting through to
anybody here. Still, it's not entirely hopeless - but there needs
to be a sufficient number of people who know what's going on so that
they might help out, even while everyone else is still
fearmongering, or something.
While I don't disagree, the typical response to that is "but then
something bad will happen and we'll all die". And you know what? They
are right - it's true that if nothing changes, nothing at all, then
infections can become worse, health can take a downward
dive, and you and everyone else eventually die. But instead of
telling them to sit down, how about you help them figure out how
not to be screwed by what they think they're going to be screwed by? I
don't know how to put it more simply. Sitting down is stagnant.
It is at least partly getting through.
What I'm seeing on this list are people, many of whom are dauntingly
smart within their fields but, outside of those fields, don't even
seem to know what the issues are.
Let's be real - DIYbio is not ever going to be 100% safe. You've got
humans involved - and my short tenure on this list has been a stark
education on how clueless smart people can actually be.
So yea - resilience is probably a more optimal focus than white-knuckle,
vice-like prevention of all anticipated THREATS!!! (and god-damn, aren't
we all sick of hearing about those). Minimise the risks we know about,
but bear in mind that there will be others that we don't know about.
And with regard to the initial drift of this thread - which is how to deal
with a media whose stock in trade is fearmongering... DIYbio is possibly
the most important resilience activity we can engage in - an educated
and experienced population of people who are best positioned to handle
unforeseen events.
A textbook invocation of Godwin's Law there then.
How is that being "real"? Nobody has ever said that anything can ever
be 100% safe. Instead, the focus should be on reliability of
repeatable instructions (protocols), especially for amateurs. Then,
you could set the DIYbio "brand" as specifically those reliable
instructions, or that reliable system of sharing DIY projects - the
fearmongering that you see around the issues of biology is then no
longer entirely applicable, because those "grey areas" of vagueness
are not actually defined within the confines of DIYbio as a brand or
community. The possibility of problems arising still exists (as
it always has), but how could the media possibly blame DIYbio for
that? I don't think this sacrifices any of the principles of DIYbio as
a brand name either. Admittedly, for the time being, there is no
community standardization of open source hardware projects, even
though there has been a flurry of activity in communicating about
thermocyclers and millifluidics.
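For concreteness, here's a rough sketch (purely hypothetical - it's Python,
every name in it is invented, and it isn't any existing DIYbio tool) of what
a shared protocol record might look like if safety notes were a first-class
field rather than an afterthought:

# Hypothetical sketch of a shareable protocol record; nothing here is a
# real DIYbio format, it's only to make "reliable, repeatable
# instructions with safety information built in" concrete.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProtocolStep:
    instruction: str        # what to do
    safety_note: str = ""   # e.g. "wear gloves", "don't pour this down the sink"

@dataclass
class Protocol:
    title: str
    author: str
    steps: List[ProtocolStep] = field(default_factory=list)

    def is_publishable(self) -> bool:
        # Naive rule for illustration: a protocol needs at least one step
        # and at least one explicit safety note before it gets shared.
        return bool(self.steps) and any(s.safety_note for s in self.steps)

gel = Protocol(
    title="Agarose gel electrophoresis",
    author="some amateur",
    steps=[
        ProtocolStep("Cast a 1% agarose gel in TAE buffer."),
        ProtocolStep("Stain and image the gel.",
                     safety_note="Wear gloves; dispose of the gel as solid "
                                 "waste, not down the sink."),
    ],
)
print("publishable:", gel.is_publishable())

The particular fields don't matter; the point is that a "brand" could be
defined by records like this being well-formed before anyone shares them.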
> So yea - resilience is probably a more optimal focus than white-knuckle,
> vice-like prevention of all anticipated THREATS!!! (and god-damn, aren't
> we all sick of hearing about those). Minimise the risks we know about,
> but bear in mind that there will be others that we don't know about.
That's not the point- the point is that it's a non-issue. DIYbio is
not an insurance company, it's not buying up people of Earth as
liability. That's completely wrong thinking, although an easy mistake
for the media to make.
It reflects reality?
> Nobody has ever said that anything can ever be 100% safe.
That is nonetheless, the spirit of such laws that (will) attempt to
regulate.
> Instead, the focus should be on reliability of
> repeatable instructions (protocols), especially for amateurs.
Well, that as well, but resilience (you know, the redundancy you talk
about?) is just as important, I think, for a whole variety of reasons.
>> So yea - resilience is probably a more optimal focus than white-knuckle,
>> vice-like prevention of all anticipated THREATS!!! (and god-damn, aren't
>> we all sick of hearing about those). Minimise the risks we know about,
>> but bear in mind that there will be others that we don't know about.
>
> That's not the point- the point is that it's a non-issue. DIYbio is
> not an insurance company, it's not buying up people of Earth as
> liability. That's completely wrong thinking, although an easy mistake
> for the media to make.
What do you mean it's not the point? It's your point.
a) "Instead, the focus should be on reliability of repeatable
instructions (protocols), especially for amateurs."
aka: minimising risks we know about.
b) "but rather to engineer transparency in, and redundancy
throughout"
aka: bearing in mind others we don't know about.
I'm saying that it wasn't a goal in the first place, so why would that
be "being real" if we weren't actually trying to add foam to
everything in the world so that people don't hit their heads and
bruise their bottoms?
>> Nobody has ever said that anything can ever be 100% safe.
>
> That is nonetheless, the spirit of such laws that (will) attempt to
> regulate.
That just means the legal system is broken.
>>> So yea - resilience is probably a more optimal focus than white-knuckle,
>>> vice-like prevention of all anticipated THREATS!!! (and god-damn, aren't
>>> we all sick of hearing about those). Minimise the risks we know about,
>>> but bear in mind that there will be others that we don't know about.
>>
>> That's not the point- the point is that it's a non-issue. DIYbio is
>> not an insurance company, it's not buying up people of Earth as
>> liability. That's completely wrong thinking, although an easy mistake
>> for the media to make.
>
> What do you mean it's not the point? It's your point.
No, it's not. Minimizing risk is not the point. In fact, I don't even
really work in terms of "risks" (as I mentioned previously).
> a) "Instead, the focus should be on reliability of repeatable
> instructions (protocols), especially for amateurs."
>
> aka: minimising risks we know about.
Reliability is less about "minimizing risks" (whatever that means) and
more about offering proofs of the correctness of certain programs
involving instruction generation or at least instruction display and
sharing protocols (thankfully we have a lot of previous work to build
on! that's why we don't have to use hyperterminal, or code in
assembly, or synthesize oligos by hand pipetting). Not anything about
risk minimization .. it's not a "risk" that your program or
infrastructure for diybio malfunctions if your method is correct- we
know that sharing is possible, our computers are doing it all the
time- passing protocols back and forth, etc. The malfunction is
inherently present in the larger system of something so easily
hackable and so "easily" turned into an inferno (just throw a giant
asteroid at Earth and it's all doom - though you'll soon find it's not so
easy to get hold of an asteroid at the moment). DIYbio is not an insurance
company: it is not responsible for the fact that a giant asteroid
could destroy the earth. Similarly, it is not responsible for lowering
the IQ required to do harmful things .. not because it's an
irresponsible organization, but because its mission isn't to fix
what's wrong with the world - its mission isn't to fix the fact that
an airborne virus could potentially flood the atmosphere. That's not
its mission, that's not its "problem" (even though its members would
gladly work towards solving that problem- indeed, some are interested
also in space habitation, etc. etc.).
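To make "correctness of the sharing protocol" slightly more concrete, here's
a throwaway Python sketch (the format is invented for illustration; it isn't
anything DIYbio actually uses) of the kind of property I mean: a protocol
that survives a serialize/parse round trip unchanged has been shared
reliably, and that property has nothing to do with "risk":

# Toy round-trip check on an invented protocol format; illustrative only.
import json

protocol = {
    "title": "Colony PCR",
    "steps": ["Pick a colony", "Run 30 cycles", "Run a gel"],
    "safety": ["Wear gloves", "Dispose of gels as solid waste"],
}

wire = json.dumps(protocol, sort_keys=True)  # what actually gets shared
received = json.loads(wire)                  # what the other side reconstructs

assert received == protocol, "the sharing mechanism corrupted the protocol"
print("round trip OK")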
> b) "but rather to engineer transparency in, and redundancy
> throughout"
>
> aka: bearing in mind others we don't know about.
No. That's not what I said. You fail at reading, please stop trying.
By now I have linked to the SPOF article on Wikipedia way too many
times- redundancy isn't about bearing in mind things we don't know
about, it's about good system design and reliability.
Thinking in terms of "risks" is risky. <-- that should put you into an
infinite regression loop, or something.
See next point, sans straw-man.
>>> Nobody has ever said that anything can ever be 100% safe.
>> That is nonetheless, the spirit of such laws that (will) attempt to
>> regulate.
>
> That just means the legal system is broken.
Perhaps. Still, there it is.
>> a) "Instead, the focus should be on reliability of repeatable
>> instructions (protocols), especially for amateurs."
>>
>> aka: minimising risks we know about.
>
> Reliability is less about "minimizing risks" (whatever that means) and
> more about offering proofs of the correctness of certain programs
> involving instruction generation or at least instruction display and
> sharing protocols (thankfully we have a lot of previous work to build
> on! that's why we don't have to use hyperterminal, or code in
> assembly, or synthesize oligos by hand pipetting). Not anything about
> risk minimization .
Oh you know, washing hands afterwards, not dumping the results of
certain experiments down the sink, wearing safety gear when appropriate
- I'm sure that there are aspects of these instructions (especially
for amateurs) that are as much about safety as reliability.
Or maybe I was wrong. Maybe your vision of these instructions
(especially for amateurs) doesn't mention safety information - although
this would surprise me, as the reason you originally made this point was
in the context of taking the wind out of fearmongering.
> DIYbio is not an insurance
> company: it is not responsible for the fact that a giant asteroid
> could destroy the earth. Similarly, it is not responsible for lowering
> the IQ required to do harmful things .. not because it's an
> irresponsible organization, but because its mission isn't to fix
> what's wrong with the world - its mission isn't to fix the fact that
> an airborne virus could potentially flood the atmosphere. That's not
> its mission, that's not its "problem" (even though its members would
> gladly work towards solving that problem- indeed, some are interested
> also in space habitation, etc. etc.).
Yea - you said that it's not an insurance company before. I don't recall
anyone ever saying that it was.
> No. That's not what I said. You fail at reading, please stop trying.
Woo, feisty. Careful Bryan.
This thread is about communicating through the filter of the media to
the general public. If you can't successfully get your point across to
me then you're sure as shit going to fuck it up with everyone else.
I like you Bryan - I think you're a tower of strength, to be honest...
one of those people whose levels of drive/ability leave me feeling
slightly gob-smacked - but don't assume for a second that your
communication skills are better than anyone else's comprehension skills.
> By now I have linked to the SPOF article on Wikipedia way too many
> times- redundancy isn't about bearing in mind things we don't know
> about, it's about good system design and reliability.
Cheers, I know what Single Point of Failure means - I worked in
infrastructure support (at levels varying from gofer to
head-of-department) in a previous life.
The reason it's good system design is precisely for the purpose of
reducing the impact of unanticipated fuckups.
n
Washing your hands doesn't stop a hand-washing-immune virus. Not
coming into contact with said virus will stop an infection from
happening, however.
> certain experiments down the sink, wearing safety gear when appropriate
> - I'm sure that there are aspects of these instructions (especially
> for amateurs) that are as much about safety as reliability.
I think you're mincing words. These instructions are not about risk
minimization .. they are for carrying out a specific experiment. While
there may be particular instructions for proper disposal of
equipment and reagents, this is hardly the same as promising to the
media that the world will be saved from doom and gloom, and indeed
even assuming for a second that DIYbio as an organization or as a
concept could ever possibly promise that is .. just silly.
> Or maybe I was wrong. Maybe your vision of these instructions
> (especially for amateurs) doesn't mention safety information - although
Meh. You're turning this into something it's not. You've read my
emails on this mailing list and many others that explicitly mention
safety information as an integral component. So I know you're well
aware of these concepts and ideas, but you're still being a jerk or
something? I don't know what's going on.
>> DIYbio is not an insurance
>> company: it is not responsible for the fact that a giant asteroid
>> could destroy the earth. Similarly, it is not responsible for lowering
>> the IQ required to do harmful things .. not because it's an
>> irresponsible organization, but because its mission isn't to fix
>> what's wrong with the world - its mission isn't to fix the fact that
>> an airborne virus could potentially flood the atmosphere. That's not
>> its mission, that's not its "problem" (even though its members would
>> gladly work towards solving that problem- indeed, some are interested
>> also in space habitation, etc. etc.).
>
> Yea - you said that it's not an insurance company before. I don't recall
> anyone ever saying that it was.
That's what the media is treating it as. "Look, they're doing stuff
that could potentially vaguely do something terrible to us all, but
they shouldn't because we're acting like they're supposed to be
offering us insurance from the bad things that happen as a result of
using something easily infected".
>> No. That's not what I said. You fail at reading, please stop trying.
>
> Woo, feisty. Careful Bryan.
What?
> This thread is about communicating through the filter of the media to
That's true.
> the general public. If you can't successfully get your point across to
> me then you're sure as shit going to fuck it up with everyone else.
Maybe you're just being stubborn.
> slightly gob-smacked - but don't assume for a second that your
> communication skills are better than anyone else's comprehension skills.
So what? That doesn't mean you get to ignore my emails but still
respond to them with the sort of response that smells of "tl;dr".
Anyway, maybe if it is truly impossible to comprehend my emails,
questions would be a good idea? I'm sure there's a solution that you
can assist in.
> The reason it's good system design is precisely for the purpose of
> reducing the impact of unanticipated fuckups.
No, they are anticipated failures. This is why redundancy is one of
the four points suggested in that article, for instance - because you
know of particular points of failure, you can remove them. Without
knowing them ("unanticipated fuckups"), how could you possibly solve
them? And yet in reality, we do know about these potential points of
failure. For instance, I like to complain about the atmosphere
because of airborne technologies (just because it's a good example).
This can be removed. It's not unanticipated - it's just that everyone is
too busy trying to install a Super Government to protect everything ever,
too busy to work on the actual engineering projects that need to take
place. Bah. Too busy fearmongering.
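To illustrate the "you know the failure point, so you remove it" part, here's
a toy Python sketch (a hypothetical DIY thermocycler controller - none of
this is from any real project): take three independent temperature readings
and use the median, so no single dead or lying sensor can ruin the run.

# Hypothetical example: median over redundant sensors removes a single
# point of failure that we already know about (a flaky thermistor).
import random
import statistics

def read_sensor(i: int) -> float:
    # Stand-in for real hardware; sensor 1 is simulated as broken.
    if i == 1:
        return float("nan") if random.random() < 0.5 else 999.0
    return 95.0 + random.uniform(-0.3, 0.3)   # plausible denaturation temp

def block_temperature() -> float:
    readings = [read_sensor(i) for i in range(3)]
    valid = [r for r in readings if r == r and 0.0 < r < 120.0]  # drop NaN/nonsense
    return statistics.median(valid)

print("agreed temperature: %.1f C" % block_temperature())

That's not "bearing in mind the unknown"; it's engineering out a failure
mode we already knew about.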
As you wish; as you were.
I'm not your enemy Bryan.
n