I want to write software that helps kill people


Zack Maril

Apr 6, 2013, 2:00:54 PM
to philosophy-in-a-...@googlegroups.com
https://gist.github.com/zmaril/5326884#file-softwarehelpskill-md

This is something I've been thinking about for a while now. I thought this would be a good place to post it because it details a philosophical problem I've encountered with open source software. Any and all comments are welcome. 
-Zack

Chris Anderson

Apr 6, 2013, 2:10:10 PM
to philosophy-in-a-...@googlegroups.com
The FSF and GNU are against field-of-use restrictions, and they have a philosophy section on their site. But I don't see anything in particular addressing this question.

On patents and field of use: http://www.gnu.org/philosophy/w3c-patent.html

Chris


--
Chris Anderson
http://jchrisa.net
http://www.couchbase.com

Greg Borenstein

Apr 6, 2013, 2:37:29 PM
to philosophy-in-a-...@googlegroups.com
Thoughtful analysis, Zack.

This resonates with something I brought up in the Welcome thread: the Precautionary Principle vs the Proactionary Principle.

The Precautionary Principle is one solution to the moral conundrum you bring up: it articulates the moral principle that policies and technologies should be proven not to cause harm before being released into the wild. <http://en.wikipedia.org/wiki/Precautionary_principle>

As your analysis of Palantir makes clear, the intricate and intimate relationship between things makes the Precautionary Principle very restrictive if you take it seriously -- especially in a field like technology, where objects are especially promiscuous. The interoperable nature of software (and especially open source software) means that the potential unanticipated harm caused by any project is nearly incalculable.

On the opposite side of the Precautionary Principle is the Proactionary Principle, the idea that the negative consequences of not acting are vast enough to require us to try to improve the world. Max More, the CEO of the Alcor Life Extension Foundation, makes an interesting case for this: <http://www.findtheconversation.com/episode-two-dr-max-more/>

I can see the arguments for and against both of these sides. But I find the framing of these issues a bit paralyzing: they raise the stakes of technological and social action or inaction to the apocalyptic.

One way out of this paralysis towards, at least, incremental action for improvement comes from Amory Lovins, founder of the Rocky Mountain Institute and an expert on efficiency and environmental science. In talking about environmentally friendly cars, Lovins says "it's cheapest to save energy at the wheels." <http://csi.gsb.stanford.edu/energy-efficiency-in-transportation-part-1> His point is that every unit of energy spent at the wheels in moving the car has to be stored and transported as fuel, converted into kinetic energy by the motor, transmitted to the wheels by gears and shafts, etc. All of these processes are inefficient. So, for every unit of energy you save at the wheels you eliminate many times that in overall energy spent by never having to waste it on that inefficiency.
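To make the arithmetic concrete, here's a rough sketch in Python (the per-stage efficiency figures are numbers I've made up purely for illustration, not Lovins's) of how losses compound between the fuel tank and the wheels:

    # Illustrative (assumed) per-stage efficiencies between fuel tank and wheels.
    stage_efficiencies = {
        "engine (fuel -> mechanical)": 0.25,
        "transmission and driveline": 0.85,
        "accessories and idling overhead": 0.90,
    }

    # Fraction of the fuel's energy that actually reaches the wheels.
    overall = 1.0
    for stage, efficiency in stage_efficiencies.items():
        overall *= efficiency

    print(f"Fraction of fuel energy reaching the wheels: {overall:.2f}")
    # Saving one unit at the wheels avoids 1/overall units of fuel upstream.
    print(f"Fuel energy saved per unit saved at the wheels: {1 / overall:.1f}")

With those made-up numbers, one unit saved at the wheels avoids roughly five units of fuel, which is the sense in which it's "cheapest to save energy at the wheels."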

I think a parallel way of thinking applies here. Trying to solve this problem solely at the level of open source software is like trying to improve car efficiency starting at the fuel instead of the wheels. Instead, broad, direct resistance to the kind of immoral state violence represented by Palantir would reduce these kinds of ethical dilemmas "at the wheels": not just for the authors of the open source software projects you cite, but for the operators of network infrastructure, the producers of cloth used in uniforms, the manufacturers of glass used in display screens, and all of us as citizens of democratic countries whose taxes go to pay for all of this.

-- Greg

Simon St.Laurent

Apr 6, 2013, 2:39:31 PM
to philosophy-in-a-...@googlegroups.com
I have similar issues. My primary way of addressing them has less to do with open source than with my choice of topics, though. I know that the military does use JavaScript and Web technologies, but it seems unlikely that anything I do in that field would help them in particular. Sooner or later I'll find out about militarized hypertext and have to shift to a different field, but for now...

Also, for my last book - Introducing Erlang - I added a section to the preface. Erlang is definitely a step closer toward such issues, as it's designed more or less to create "unkillable" software.  (The original use case was telephone switching.)

------------------------------------------------------------

Please Use It For Good

I'll let you determine what "good" means, but think about it.  Please try to use Erlang's power for projects that make the world a better place, or at least not a worse place.

------------------------------------------------------------

Thanks,
Simon St.Laurent
http://simonstl.com/

Jeff Lee

Apr 7, 2013, 4:47:32 AM
to philosophy-in-a-...@googlegroups.com
I'm guessing that a theme of this group will be whether technology has any inherent ethical content, or whether it is an entirely neutral sphere. For what it's worth, I think a large portion of the humanities canon leans toward the neutrality camp; i.e., that technology's ability to improve or destroy is simply a function of its being man's instrument.

While I like that idea quite a bit, it is a distant cousin of the guns-don't-kill-people, people-kill-people line of thinking, which has always made me pretty uncomfortable. If we in software are in the business of evaluating our complicity in Doing Evil, there's a reasonable argument for considering the purpose and applicability of specific technologies. I believe it isn't unreasonable to say that there is some moral distance between doing research on the chemistry of gunpowder and designing an actual gun. Likewise, I'd submit to you that there is a difference between developing software that Palantir happens to use and actually being Palantir. There's real validity to Zack's line of thinking, but pursued fairly, it implicates all of computer science, electronics, engineering, physics, and what have you.

On the level of "practical" philosophy, which is to say how one justifies getting up every day to continue to do one's job, I think the ethics of creating generic open source projects are sufficiently murky that it's mostly up to you to decide whether it's right or wrong. I wish saying that didn't feel like such a cop-out, but it's not a very tractable problem, and it's potentially larger than what someone can take responsibility for on an individual basis. I read something like this and immediately think of the Faust myth, which to my understanding is basically about modernity at large, and how the drive to enlighten and improve ends up burying a lot of people under the steamroller of progress, as defined by the winners. The more pessimistic wing of the aforementioned humanities canon says that if you give modern man tools, he'll probably end up using those tools to fuck a lot of things up. So it's no mystery that many modern fantasies of film and literature are bucolic in nature--everyone ends up abandoning the world of science and profits and goes to live on a farm or open a small used bookstore somewhere.

Somewhat tangentially, it's worth pointing out that you don't even need to draw the line at killing--your open source technology, or something else like it, is being used by very large institutions to violate people's privacy, or to convince people that they aren't worth anything unless they buy their way into a particular lifestyle. There's tremendous harm already being done that doesn't rise to the level of outright killing but is nonetheless leveraged through convenient, freely available software.

dan mcquillan

Apr 7, 2013, 5:09:44 AM
to philosophy-in-a-...@googlegroups.com
people on this thread might be interested in The Hacktivismo Enhanced-Source Software License Agreement, which combined the GPL with elements of the Universal Declaration of Human Rights. when i was at Amnesty i tried to persuade them to support this, as a symbolic gesture (it didn't happen). http://www.hacktivismo.com/about/hessla.php

i was also wondering if a tactic like Citizen Weapons Inspections could be applied to software companies. http://www.lasg.org/inspections/resources.htm

dan



Nick Novitski

Apr 7, 2013, 4:37:11 PM
to philosophy-in-a-...@googlegroups.com
Advances in technology disproportionately favor actors with wealth and actors with agility. To the degree that Bad People Who Do Bad Things are either rich in resources or have short OODA loops, they will use new things to their advantage.

They will do this, and I can't believe I have to say this, no matter what license terms you offer your freely available software under. They give even less of a shit about copyleft than they do about the Geneva Convention.

Whether your work is a net good to the world or not depends on marketing.  If you don't want organizations you don't like being the main beneficiaries of stuff you do, then find (or create) organizations you think will balance the scales the right way, and make them see how your Cool New Thing can help them fulfill their purpose.

dan mcquillan

Apr 7, 2013, 6:41:23 PM
to philosophy-in-a-...@googlegroups.com
good sentiments. i think you separate the stuff and the actors a bit too neatly, though. 

mca

Apr 7, 2013, 7:18:57 PM
to philosophy-in-a-...@googlegroups.com
the question i continue to ask myself is: "do my actions contribute to pain and suffering in the world?"

- do i create/maintain things designed to inflict pain/suffering?
- do i create/maintain things that are parts of things designed to...?
- do i work for a company that creates/maintains...?
- do i work for a company that is a parent/subsidiary...?
- do i work for a company that contributes to the pain and suffering of its employees?
- do i contribute to the pain and suffering of my fellow employees/supervisors/customers?
- do i purchase things from a company....?

certainly, it would be difficult for us to say "No" to all these questions, for all past time, and into the future.

there have been times when i was proud of my answers, and times when i was not. there are times when i realized, in hindsight, that i was uninformed, naive, and unrealistic.
 
i think the Q is not "if", but "when" and "what do i do about it." 
- we can change our behavior
- we can influence the behavior of others
- we can identify/report the behavior of ourselves and others
- we can offset the behavior of ourselves and others

what we cannot do is stop all causes of pain and suffering.

focus on what we can do and what we actually see, not on what we cannot do and/or what we imagine could happen at some point in time.

technology has changed quite a bit since i first started asking this Q. the details have changed, but not the question, not (at least for me) the possibilities.

in the end (for me) this is an iterative process. i ask, i act, i ask again, and so forth. 

i only hope i never stop.


Zack Maril

Apr 22, 2013, 10:22:45 AM
to philosophy-in-a-...@googlegroups.com
Thank you all for all the comments, suggestions and links! Sorry for
the delay in replying. Somewhat ironically, I've been hacking on some
crazy powerful open source[0], reading about "software" being used to
help kill people[1], and, recently, visiting Chernobyl[2]. This thread
of ideas is very much at the forefront of my mind and I'm still
sussing out much of what I've been thinking about.

The visit to Chernobyl, and learning more about the history of the accident, reinforced for me that "hell is paved with good intentions". We live and work in a system that is inherently unstable, fragile, and chaotic. While it's easy to pretend that, given a sufficient amount of forethought, I could predict the full consequences of my actions, I'd be lying most of the time. All of my actions have a certain amount of random noise in terms of how "bad" or "good" the consequences will be. From what I understand, Chernobyl was caused by multiple system failures during an experiment that would have led to cheaper/cleaner energy for all of Ukraine.

Another thought is that if I don't aim to do something "good"[4] for
the world, I probably won't. It's a lot easier to mess things up than
it is to build, i.e. it's easier to break a system than to build it to
resist breaking. If I merely say, I'll just not do "bad" things like
make weapon targeting systems or steal people's money, then the only
options for work I have left are "neutral" areas and "good" things.
I'd argue that even picking a "neutral" area of work would be the
wrong choice if I don't want to do bad things. The random noise
between my intentions and the outcomes means that anything in the
middle of "good" and "bad" could just as easily land in either moral
territory. A rough estimate of the "only do neutral" strategy, assuming a normal distribution of results[3], suggests that 50% of the time I'll be doing something that could be considered "bad".
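Here's a toy sketch of that estimate in Python (assuming, purely for illustration, that each project's outcome is my intention plus symmetric random noise; none of this is a real model):

    import random

    # Toy model: outcome = intention + noise. "Neutral" intention = 0;
    # outcomes below 0 count as "bad", above 0 as "good".
    # The normal distribution is an illustrative assumption, nothing more.
    random.seed(42)
    trials = 100_000
    bad = sum(1 for _ in range(trials) if random.gauss(0.0, 1.0) < 0)

    print(f"'Neutral' projects landing on the 'bad' side: {bad / trials:.2%}")

With a symmetric distribution this hovers around 50%; skewing it toward "bad", as footnote [3] suggests, pushes the fraction higher.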

Looking again at the correlation between my intentions and the
outcomes, we see that this correlation depends on how good I am at my
work in a number of ways. If I'm good at what I do, then I'll be more
effective at making "good" things happen. And if I'm "good" at what
I do, I can avoid the jobs that will involve me taking on "bad" or
"neutral" intentions, simply by virtue of having other options open to
me. Thus, the better at my work I am, the more likely I'll be
successful at what I attempt and the more likely I'll be able to work
on arguably "good" things. So, studying and leveling up as a developer
isn't just a matter of curiosity, it's a way of ensuring that I'll be
doing "good" things for the world.

Thanks again for all the comments. They've provided much food for
thought.

-Zack

[0]



[3] We know that it's easier to do "bad" things than "good" though, so the distribution would probably be skewed towards "bad", making the percentage much higher.

[4] I've been purposefully vague about what's "good" or "bad" because those words depend on your values, which depend on your parents' and friends' values, which depend on theirs, and so forth. I'm trying to find a general guiding path beyond what I happen to hold dearest to my heart today versus what I'll hold dearest tomorrow.




Ed Summers

Dec 17, 2013, 11:35:45 AM
to philosophy-in-a-...@googlegroups.com, thewi...@gmail.com
I was reminded of this post when wandering across Christopher Alexander's OOPSLA keynote from 1996:

    The Origins of Pattern Theory, the Future of the Theory, and the Generation of a Living World

Alexander's talk is one of the better pieces I've read about ethics and programming.

//Ed

mca

Dec 17, 2013, 11:41:44 AM
to philosophy-in-a-...@googlegroups.com, thewi...@gmail.com
yep - fantastic talk

"Please forgive me, I'm going to be very direct and blunt for a horrible second. It could be thought that the technical way in which you currently look at programming is almost as if you were willing to be “guns for hire.” In other words, you are the technicians. You know how to make the programs work. “Tell us what to do daddy, and we'll do it.” That is the worm in the apple."



mca


mca

Dec 18, 2013, 11:15:46 PM
to Zack Maril, philosophy-in-a-...@googlegroups.com

Audrey Watters delivered an excellent talk along similar lines at API Days Paris earlier this month.

Hopefully the video will be available soon.

On Dec 18, 2013 8:08 PM, "Zack Maril" <thewi...@gmail.com> wrote:
What an excellent talk. It's disappointing that these issues were raised only by someone outside the technical community, though. I've yet to see a really good public discussion of software and ethics from someone who is a practicing software professional.

Is software engineering generally only a high-paying job if you disregard ethical considerations?
-Zack