
It's becoming obvious that...


Dan Stromberg

Apr 4, 2001, 1:11:45 PM
...that there will eventually be a cross-platform framework for a Swiss-army
knife of worm-carried exploits. There's even a good chance that
it'll have some generic code that'll run on the various x86 platforms
like Linux and Microsoft, and perhaps some of the less popular x86
platforms as well (if people bother) - though it wouldn't have to; it
could just send infection code N ways for N OSes, and if N-1 don't
take, so what?

The question is, how long will various folks be able to cling to their
notion that automatic application of patches is a bad thing, in the
face of such a rapidly-spreading general-framework attack-worm?

It's entirely possible that the next generation of "proof of concept"
exploits will just be an addition to this general-framework
attack-worm, leaving basically no window of safety for you to find out
and get around to applying patches. Even vendors using automatic
patching procedures likely wouldn't be able to keep up.

I imagine the next step after that would be a worm that knows how to
"mate" with another worm, to carry the exploits of the two forward in
a single "child" worm. I'm guessing that even if some patent (maybe a
genetic algorithm/genetic programming patent) applies, that won't
matter much.
--
Dan Stromberg UCI/NACS/DCS

Bruno Wolff III

Apr 4, 2001, 1:47:36 PM
On 4 Apr 2001 17:11:45 GMT, Dan Stromberg <stro...@seki.acs.uci.edu> wrote:
>
>The question is, how long will various folks be able to cling to their
>notion that automatic application of patches is a bad thing, in the
>face of such a rapidly-spreading general-framework attack-worm?

Properly secured machines aren't going to be that vulnerable. For the few
times you do get hit, you have to weigh that against the chance of
automatically getting hosed if a patch is accidentally or maliciously
applied to your systems. An automated patch delivery system would be a great
vector for worms to use. You can be sure a lot of effort would be spent
trying to find ways to subvert it.

H C

Apr 4, 2001, 6:42:46 PM
> The question is, how long will various folks be able to cling to their
> notion that automatic application of patches is a bad thing, in the
> face of such a rapidly-spreading general-framework attack-worm?

Follow the Seven P's, my friend...

There is more to security than just applying patches. To top it off, lots of
the patches from Microsoft do the same thing an admin can do himself...create
or update a Registry key, set ACLs on a file or Reg key, etc.

Properly secured infrastructures, using a logical, documented approach, will
not be as susceptible to attacks and worms.

D. J. Bernstein

Apr 4, 2001, 8:11:38 PM
Dan Stromberg <stro...@seki.acs.uci.edu> wrote:
> Even the vendors using automatic patching procedures,
> likely wouldn't be able to keep up.

Good! They gave us bad software; they deserve to panic.

See the bottom of http://securesoftware.list.cr.yp.to/contributors.html
for further comments on this topic.

---Dan

Juergen P. Meier

Apr 6, 2001, 10:52:33 AM
On 5 Apr 2001 00:11:38 GMT, <d...@cr.yp.to> typed:

And what exactly does this have to do with this thread?
I would really appreciate it if you could explain.

>---Dan

juergen

--
J...@lrz.fh-muenchen.de
"This World is about to be Destroyed!"

Julian T. J. Midgley

Apr 6, 2001, 7:46:03 PM
In article <2001Apr500....@cr.yp.to>,

Yes, I've seen this before, and I think you might be missing the
point. Immediate disclosure (with no chance given to the authors to
release a patch /before/ the exploit is published) may worry the
authors, but it will worry the people who use the software concerned
even more.

In an ideal world, software would be bug-free; in the real world,
there will be errors. I agree absolutely that it is every
programmer's responsibility to program with security in mind, and to
endeavour to eradicate security holes from the software he releases.
In practice, people will make mistakes, and when they do, I would much
rather that the world was told of the exploit at the same time that a
patch was released than that the exploit be published first, with no
prior warning being given to the author.

The attitude you encourage with the securesoftware list is essentially
irresponsible, and is not likely to lead to our machines being more
secure, but rather, the converse.

The fact that you may have released a couple of software packages that
are well known for their security, and for which, to the best of my
knowledge, there have not yet been any security exploits discovered,
is not sufficient to deduce, as you seem to do, that it is practically
possible for all software to be released without any security holes at
all, or that all that is required to achieve this utopian state of
affairs is extra vigilance on the part of the programmers. In complex
systems, elements of software can be forced to interact in deleterious
ways that even a superbly competent programmer would not have thought
of at the time he wrote the code; when such problems are discovered,
it makes sense that the programmer be given the chance to correct them
before a new exploit script is placed in the hands of the
script-kiddies.

Certainly we should not be forgiving of programmers who are
demonstrably lax with regard to security, but we do ourselves, and the
software-using world in general, no favours at all if we are
irresponsibly hard on programmers who have genuinely made their best
efforts to make their software secure, but inadvertently slip up from
time to time.

Julian Midgley

--
Julian T. J. Midgley http://www.xenoclast.org

D. J. Bernstein

Apr 7, 2001, 5:17:54 AM
http://securesoftware.list.cr.yp.to

Julian T. J. Midgley <jt...@xenoclast.org> wrote:
> The attitude you encourage with the securesoftware list is essentially
> irresponsible, and is not likely to lead to our machines being more
> secure, but rather, the converse.

Full disclosure frightens you? Great! You now have more incentive to
support secure software.

> In an ideal world, software would be bug-free; in the real world,
> there will be errors.

Software can and should be structured so that the errors don't produce
security problems.

---Dan

Julian T. J. Midgley

Apr 8, 2001, 1:55:20 AM
In article <2001Apr709....@cr.yp.to>,

D. J. Bernstein <d...@cr.yp.to> wrote:
>http://securesoftware.list.cr.yp.to
>
>Julian T. J. Midgley <jt...@xenoclast.org> wrote:
>> The attitude you encourage with the securesoftware list is essentially
>> irresponsible, and is not likely to lead to our machines being more
>> secure, but rather, the converse.
>
>Full disclosure frightens you? Great! You now have more incentive to
>support secure software.

Don't put words in my mouth, Bernstein. Full disclosure doesn't
frighten me at all. Immediate full disclosure without prior
notification or attempted notification of the author is self-evidently
foolish.

>
>> In an ideal world, software would be bug-free; in the real world,
>> there will be errors.
>
>Software can and should be structured so that the errors don't produce
>security problems.

Doh! Whatever mechanism you choose for this structuring, it will
itself be vulnerable to errors (in both design and implementation).
Or have you recently invented the Human Who Cannot Err in your spare
time?

D. J. Bernstein

Apr 8, 2001, 11:23:19 PM
Julian T. J. Midgley <jt...@xenoclast.org> wrote:
> Immediate full disclosure without prior notification or attempted
> notification of the author is self-evidently foolish.

On the contrary. Immediate full disclosure, with a working exploit,
punishes the programmer for his bad code. He panics; he has to rush to
fix the problem; he loses users.

You're whining that punishment is painful. You're ignoring the effect
that punishment has on future behavior. It encourages programmers to
invest the time and effort necessary to eliminate security problems.

> > Software can and should be structured so that the errors don't produce
> > security problems.
> Doh! Whatever mechanism you choose for this structuring, it will
> itself be vulnerable to errors (in both design and implementation).

On the contrary. Automatic bounds checking, for example, is easy to get
right. A small amount of code, small enough to be bug-free, can protect
the entire system, if the system is structured properly.

---Dan
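As an illustration of the kind of small, centralized bounds-checking layer
being described above, here is a minimal C sketch. The names are hypothetical;
this is not code taken from qmail, djbdns or any other package. Every write
into the buffer goes through one function that re-checks capacity, so the
checking code stays small enough to audit:

#include <stdlib.h>
#include <string.h>

/* A growable byte buffer.  All writes go through buf_append(), which
   verifies capacity first, so callers can never overrun the array. */
struct buf {
    char *data;
    size_t len;    /* bytes in use */
    size_t cap;    /* bytes allocated */
};

/* Append n bytes; returns 0 on success, -1 on overflow or allocation failure. */
int buf_append(struct buf *b, const char *src, size_t n)
{
    if (n > (size_t)-1 - b->len)          /* total length would wrap around */
        return -1;
    if (b->cap - b->len < n) {            /* not enough room: grow */
        size_t newcap = b->len + n;
        if (newcap < b->cap * 2)
            newcap = b->cap * 2;
        char *p = realloc(b->data, newcap);
        if (!p)
            return -1;                    /* fail safely, never overflow */
        b->data = p;
        b->cap = newcap;
    }
    memcpy(b->data + b->len, src, n);
    b->len += n;
    return 0;
}

A caller starts with struct buf b = {0}; and appends parsed input through
buf_append() only; if the few lines above are right, hostile input cannot
write past the allocation, whatever the rest of the program does.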

Cor Gest jr

Apr 9, 2001, 1:24:52 AM
d...@cr.yp.to (D. J. Bernstein) writes:


> On the contrary. Immediate full disclosure, with a working exploit,
> punishes the programmer for his bad code. He panics; he has to rush to
> fix the problem; he loses users.
>
> You're whining that punishment is painful. You're ignoring the effect
> that punishment has on future behavior. It encourages programmers to
> invest the time and effort necessary to eliminate security problems.
>

> On the contrary. Automatic bounds checking, for example, is easy to get
> right. A small amount of code, small enough to be bug-free, can protect
> the entire system, if the system is structured properly.

The notify-and-wait-for-a-solution method obviously does not get the
wanted solutions to real-life, time-restricted problems, so the
sledge-hammer approach may well have the wanted effect.

The users aren't asking for the Holy Grail, mainly for some riddance of
security bugs - and yes, it costs, and yes, it hurts.
But if the companies keep still and deaf in the hope of propping up the
bottom line, soon enough no-one will have to bother about their
bottom line; there won't be anything left to bother about.

But nowadays the buyer is the beta-tester, ain't it..?
Just to save some pennies? You gamble the shop?

cor


--
(defvar my-computer '((OS . "GNU Emacs") (Boot-Loader . "GNU Linux")))
/* If GNU/LINUX has no solution, you've got the wrong problem */
/* Never install Slackware..........You might learn to use IT */
/* pa3...@amsat.org http://clsnet.dynip.nl */

Juergen P. Meier

Apr 9, 2001, 2:32:51 AM
On 9 Apr 2001 03:23:19 GMT, <d...@cr.yp.to> typed:

>Julian T. J. Midgley <jt...@xenoclast.org> wrote:
>> Immediate full disclosure without prior notification or attempted
>> notification of the author is self-evidently foolish.
>
>On the contrary. Immediate full disclosure, with a working exploit,
>punishes the programmer for his bad code. He panics; he has to rush to
>fix the problem; he loses users.

False. You do not punish the coder (who might deserve it); you
punish all the users. How very nice of you.

>You're whining that punishment is painful. You're ignoring the effect
>that punishment has on future behavior. It encourages programmers to
>invest the time and effort necessary to eliminate security problems.

I don't know in which society you were raised, but punishment is not the
only educational method (it's an extremely poor method when applied to
adult humans!).
In fact, I believe that guided, constructive criticism is a much
better-suited tool for teaching security awareness to all the programmers
out there.

[warning: public media colored hat talk mode ON]

Delayed full disclosure gives not only the developers time to react,
but also the users time to fix their production systems before the
exploit becomes public knowledge.
Note: this _only_ applies to exploits that are not already known
to the public (be it white, gray or black hats), since I fully agree
with you that there is no benefit whatsoever in delaying full
disclosure in that case.

I just disagree with you in that the potential risk that a hole just
discovered by a whitehat is already known to the black/grey hats is
less significant than the benefit of a little time to fix the problem
and deploy the fix on a large number of user installations.

>> > Software can and should be structured so that the errors don't produce
>> > security problems.
>> Doh! Whatever mechanism you choose for this structuring, it will
>> itself be vulnerable to errors (in both design and implementation).
>
>On the contrary. Automatic bounds checking, for example, is easy to get
>right. A small amount of code, small enough to be bug-free, can protect
>the entire system, if the system is structured properly.

Uh, do you really mean to claim that automatic bounds checking will make
code secure? I guess not, so please rephrase your statement.

Automatic bounds checks are nice, but usually of little use in unstructured
languages like C (and even C++, which has inherited all the bad things
from C).
Note: I'm a C programmer at heart, but I would never try to write code
with security requirements in this language - not unless I can do it on a
single page of C code (80 cols x 25 lines).

>---Dan

Juergen

H C

Apr 9, 2001, 6:53:35 AM
> On the contrary. Immediate full disclosure, with a working exploit,
> punishes the programmer for his bad code. He panics; he has to rush to
> fix the problem; he loses users.
>
> You're whining that punishment is painful. You're ignoring the effect
> that punishment has on future behavior. It encourages programmers to
> invest the time and effort necessary to eliminate security problems.

Interesting approach. I'd like to offer up another view...as someone who
writes software.

Let's take a hypothetical...someone writes some software, and it has a small
bug in the code that allows for an exploit. Now, the author is providing this
software for free or for a nominal shareware fee...which few people are
willing to pay anyway. The bug is disclosed, along with working code, without
any prior notification to the author.

The result then is that the author decides that it's not worth it to put in
the time and effort to please everyone...after all, it doesn't pay the bills.
Once this happens often enough, two conditions arise...on the one hand, some
authors decide to stop providing their code to the general public, and only
let trusted friends have a peek. On the other hand, companies that produce
code can now jack up the price b/c the code has to be reviewed multiple times
by experts to remove as many of the bugs as possible before initial release.

> On the contrary. Automatic bounds checking, for example, is easy to get
> right. A small amount of code, small enough to be bug-free, can protect
> the entire system, if the system is structured properly.

However, many of the software systems written today are so complex that
correcting a problem in one area invariably opens up a gaping hole in another.

Matt McLeod

Apr 9, 2001, 9:42:44 AM
In article <3AD194AF...@patriot.net>, H C <carv...@patriot.net> wrote:
>The result then is that the author decides that it's not worth it to put in
>the time and effort to please everyone...after all, it doesn't pay the
>bills. Once this happens often enough, two conditions arise...on the one
>hand, some authors decide to stop providing their code to the general
>public, and only let trusted friends have a peek.

Um. You're assuming that most of the people writing open-sourcish
stuff are doing it to pay the bills. They're not. Whatever the
effects of Dan's approach, I don't see it stopping people from
writing and releasing stuff.

>On the other hand, companies that produce code can now jack up the price
>b/c the code has to be reviewed multiple times by experts to remove as many
>of the bugs as possible before initial release.

Oh my! Software houses might have to run code reviews and maybe
not use the general public as a free beta-testing resource?
Shock! Horror! Film at 11!

Yes, I know that in the world of commercial software development
there is rarely enough time to properly design and implement a
product, let alone run reviews and test it. That this is so
does not make it inherently right.

I'm not sure that Dan has the right idea with full-disclosure-
without-notification (assuming that this is indeed what he's
suggesting), but suggesting that it would bring about the end
of the world is being just a teensy bit melodramatic.

--
+++ Out of Cheese Error +++ MELON MELON MELON +++

Richard L. Hamilton

Apr 9, 2001, 9:43:09 AM
In article <2001Apr903....@cr.yp.to>,

d...@cr.yp.to (D. J. Bernstein) writes:
> Julian T. J. Midgley <jt...@xenoclast.org> wrote:
>> Immediate full disclosure without prior notification or attempted
>> notification of the author is self-evidently foolish.
>
> On the contrary. Immediate full disclosure, with a working exploit,
> punishes the programmer for his bad code. He panics; he has to rush to
> fix the problem; he loses users.

I like negative reinforcement, but I don't like causing panic; it doesn't
do much for the quality of the results.

> You're whining that punishment is painful. You're ignoring the effect
> that punishment has on future behavior. It encourages programmers to
> invest the time and effort necessary to eliminate security problems.

Instead of playing at judge, jury, and executioner, how about:

* one week notice if no exploits known to be in the wild; in that
time, the maintainer should either get a patch out or at least
get an announcement of the problem out; if the latter, and the
exploit is not included, it had better include a workaround or
other defensive measure

* no notice if exploit is known to be in the wild, except that
publishing of the exploit is delayed one week (i.e. the first public
notice contains only a warning/workaround, or "don't use" if none is known)

* in any event, defer exploit release to avoid Thursday (day before
Friday for the Islamic folks), Friday, weekends, and holidays (and
day before) generally known to be widely observed or known by releaser
to apply in maintainer's country

* in the case of a maintainer that's generally agreed to have a poor
record of prior problems and responses, the week grace period may
go out the door, but for the sake of the rest of us, the avoidance
of pre-weekend/holiday release should still apply

Consequences and short time frames are important to apply pressure.
But issuing an open invitation to all the script kiddies of the world
prior to allowing at least limited time for a fix is plain irresponsible.

And for those who don't have a horrible record (as if anyone even
moderately prolific had a perfect one!), the simple courtesy of advance
notice provides a carrot (or at least the deferral of the application of
the stick). Even from the most cynical point of view, you don't blackmail
someone by taking out an ad and publishing whatever dirt you've got on
them right away; once you've done that, you've just tossed your leverage.
It's the _threat_ (applied in a consistent fashion to provide examples)
that does the job; the application as needed simply makes the point that
you aren't kidding around.

The notion of all info being made immediately available to everyone
sounds cute for a perfect world of perfect people. But in the real
world, it's irresponsible and unrealistic; apply the notion to something
other than programming that might get _you_ hurt and you might see my
point.

--
ftp> get |fortune
377 I/O error: smart remark generator failed

Bogonics: the primary language inside the Beltway

mailto:rlh...@mindwarp.smart.net http://www.smart.net/~rlhamil

D. J. Bernstein

Apr 10, 2001, 6:09:37 PM
Richard L. Hamilton <rlh...@smart.net> wrote:
> But issuing an open invitation to all the script kiddies of the world
> prior to allowing at least limited time for a fix is plain irresponsible.

The programmer should have fixed the problem before releasing the code.
The security hole is _his_ fault. It is not the messenger's fault.

> one week notice

No. Shielding bad programmers is shortsighted.

---Dan

Richard L. Hamilton

Apr 11, 2001, 2:16:04 AM
In article <2001Apr1022...@cr.yp.to>,

d...@cr.yp.to (D. J. Bernstein) writes:

How many non-trivial programs have you written and released?

What percentage of them would you be willing to bet your life
is not subject to exploits?

Until formal proofs of correctness become feasible for arbitrarily
complex software, the idea is risk _management_. Part of that
is certainly a better level of quality control and internal testing
than is presently commonplace. But that will not eliminate the
need for vigilance or bug fixes, and it will not preclude exploitable
bugs getting out in the wild. That being the case, we need approaches
that contribute to getting fixes promptly, without creating further
problems by rushing them out the door in panic mode. One thing we
need that we don't have is a little more candor on the part of
software creators and vendors, and I don't see that your take-no-prisoners
approach encourages that.

Russell Frame

Apr 11, 2001, 10:42:31 AM
It's asinine to say it's possible for software to be written bug-free.
Anyone saying such a thing has never written a line of code in their life.
All of the quality control in the world won't catch every error.

Is disclosure a good idea? Yes...it keeps pressure on coders to produce a
good product and to fix it when a problem is found.

Is immediate release of an advisory and exploit helpful to anyone without
giving the coder a chance to fix the problem? Of course not. There are a
lot of honest people out there just trying to make a living providing good
software. Do they make mistakes? Of course. Give them a few days to fix
it then release your exploit. The only people who disclose bugs without
giving the vendor a chance to fix them are egotistical little script kiddies
who want credit for a 'sploit before anyone else.

If the vendor can provide a solution...great, everyone has been served and
the bug and fix should be made public so everyone can patch their systems.
If the vendor ignores your notification, then again, release your advisory
so everyone can be aware they are running exploitable software that the
vendor doesn't want to support. At least the user base is then served.

Disclosure is good. Stating that vendors should never release software with
a bug, or with something that could become a bug in the future, is just stupid.


Matthew Kruk

Apr 11, 2001, 11:52:14 AM
Russell Frame wrote:
>
> It's asinine to say it's possible for software to be written bug-free.
> Anyone saying such a thing has never written a line of code in their life.
> All of the quality control in the world won't catch every error.
> ...

> Disclosure is good. Stating that vendors should never release software with
> a bug, or with something that could become a bug in the future, is just stupid.

I think the point is this: too much software is released in a state which would
have been considered "beta" in the past. Allow me to use some speculative
figures as an example: 10 years ago, software was released 90% "bug free"
whereas nowadays it might be 70% or less.

Maybe it's me but I get this feeling that software is rushed out the door and
then "we'll fix the bugs as users find them because now everyone has internet
access so they can apply the patches". The problem is that people do not
maintain their software (by applying patches, etc.) as well as they need
to - or as time permits! - if at all.

D. J. Bernstein

Apr 12, 2001, 5:28:40 PM
Richard L. Hamilton <rlh...@smart.net> wrote:
> How many non-trivial programs have you written and released?

Let's try two examples.

qmail is running 10% of the Internet's SMTP servers and has an even
larger percentage of the Internet's mail traffic. It is covered by a
$500 security guarantee: http://cr.yp.to/qmail/guarantee.html

djbdns is handling something like a million *.com domains. It is covered
by a $500 security guarantee: http://cr.yp.to/djbdns/guarantee.html

> would you be willing to bet your life

Bet my life? Wow. I thought we were discussing full disclosure, not the
electric chair.

---Dan

Barry Margolin

Apr 12, 2001, 6:26:11 PM
In article <2001Apr1221...@cr.yp.to>,

D. J. Bernstein <d...@cr.yp.to> wrote:
>Richard L. Hamilton <rlh...@smart.net> wrote:
>> How many non-trivial programs have you written and released?
>
>Let's try two examples.
>
>qmail is running 10% of the Internet's SMTP servers and has an even
>larger percentage of the Internet's mail traffic. It is covered by a
>$500 security guarantee: http://cr.yp.to/qmail/guarantee.html
>
>djbdns is handling something like a million *.com domains. It is covered
>by a $500 security guarantee: http://cr.yp.to/djbdns/guarantee.html

Is $500 supposed to be a lot of money? You're not offering $500 for each
bug (just to the first person to publish an exploit), or to each customer
impacted by the bug. But I admit that you're just one guy, not a big
corporation, so it's not pocket money, either.

Of course, the disclaimer that it has to be a hole "*in* qmail/djbdns"
weakens the guarantee a bit. A program doesn't run in a vacuum, it has to
deal with the environment, including the OS, network stack, libraries, etc.
One of your servers may be used as a conduit to exploit a system hole;
while it's not your fault, and there may be nothing you can do about it,
it's still the case that the system as a whole is less secure when that
server is running than if it weren't. (As they say, the only way to be
reasonably sure that a system is secure is to lock it away in a safe,
disconnect it from the network, and turn off the power.)

For instance, I think that some sendmail or BIND vulnerabilities have
actually been due to syslog() not doing bounds checking (I'm not totally
sure about this, but I do remember a number of exploits related to
syslog()). Although the fixes may have been implemented in the servers,
that's really a workaround for a system bug, not truly a bug in the
servers. But the CERT Advisories probably say sendmail/BIND in the Subject
lines, so they take the blame.

I'm not saying that sendmail and BIND haven't had their share of real
security bugs (they definitely have -- they're spaghetti code written
before this was as serious an issue), and I'm sure that djbdns and qmail
are orders of magnitude better in that respect.

--
Barry Margolin, bar...@genuity.net
Genuity, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

Richard L. Hamilton

Apr 13, 2001, 10:36:42 PM
In article <2001Apr1221...@cr.yp.to>,

d...@cr.yp.to (D. J. Bernstein) writes:

[sound of penny dropping]

Oh, you. The guy with the good software and the lousy attitude.
I should've known. You've certainly earned the right to whatever
attitude you want, but that doesn't make it right.

I think we want a similar result: much more secure and reliable software.
We differ as to how to achieve that result. Now that some of this comes
back to me, I realize you've been talking the same line for a while. Which
means there's not a bat's chance of either of us persuading the other.

Still, if it came to a contest where a significant component was
_gaining_cooperation_, I wonder which approach would work better...

Jimi Thompson

Apr 14, 2001, 1:17:26 AM
I'd like to meet the person who can work with, debug, and have perfectly
secure code at the end of, say, 3 million lines of code.


Juergen P. Meier

Apr 14, 2001, 6:32:44 AM
On Sat, 14 Apr 2001 00:17:26 -0500, <JI...@prodigy.net> typed:

>I'd like to meet the person that can work with, debug, and have perfectly
>secure code at the end of say 3 million lines of code.

That person probably does not exist (or if (s)he does, (s)he would have
immediately caused a singularity to form and swallow everything ;).

That's probably why djb prefers small, overseeable code sizes.

juergen

Julian T. J. Midgley

Apr 17, 2001, 7:29:31 AM
In article <2001Apr903....@cr.yp.to>,

D. J. Bernstein <d...@cr.yp.to> wrote:
>Julian T. J. Midgley <jt...@xenoclast.org> wrote:
>> Immediate full disclosure without prior notification or attempted
>> notification of the author is self-evidently foolish.
>
>On the contrary. Immediate full disclosure, with a working exploit,
>punishes the programmer for his bad code. He panics; he has to rush to
>fix the problem; he loses users.

You still don't get it, do you? It doesn't punish the programmer
nearly half so much as it punishes his users. The users aren't likely
to thank you if their systems get cracked as a result of your lack of
prior notification to the programmer. And given that practically
every program I can think of has had a security exploit published for
it at some time or other, this isn't likely to force the users to move
to some other programmer's software. It might result in them sending
you vitriolic email for gross irresponsibility, but I suspect that's
about it.


>You're whining that punishment is painful. You're ignoring the effect
>that punishment has on future behavior. It encourages programmers to
>invest the time and effort necessary to eliminate security problems.

Being notified of a security exploit in the usual way is quite
sufficient for the majority of programmers...

>> > Software can and should be structured so that the errors don't produce
>> > security problems.
>> Doh! Whatever mechanism you choose for this structuring, it will
>> itself be vulnerable to errors (in both design and implementation).
>
>On the contrary. Automatic bounds checking, for example, is easy to get
>right. A small amount of code, small enough to be bug-free, can protect
>the entire system, if the system is structured properly.

There is a damn sight more to writing secure software than automatic
bounds checking, as you well know...

A simple question for you:

"Are you infallible?"

(Yes/no answers accepted.)

Alun Jones

Apr 17, 2001, 9:59:32 AM
In article <vOVC6.11100$xA.17...@news2-win.server.ntlworld.com>,
jt...@xenoclast.org (Julian T. J. Midgley) wrote:
> In article <2001Apr903....@cr.yp.to>,
> D. J. Bernstein <d...@cr.yp.to> wrote:
> >Julian T. J. Midgley <jt...@xenoclast.org> wrote:
> >> Immediate full disclosure without prior notification or attempted
> >> notification of the author is self-evidently foolish.
> >
> >On the contrary. Immediate full disclosure, with a working exploit,
> >punishes the programmer for his bad code. He panics; he has to rush to
> >fix the problem; he loses users.
>
> You still don't get it, do you? It doesn't punish the programmer
> nearly half so much as it punishes his users. The users aren't likely
> to thank you if their systems get cracked as a result of your lack of
> prior notification to the programmer. And given that practically
> every program I can think of has had a security exploit published for
> it at some time or other, this isn't likely to force the users to move
> to some other programmers software. It might result them in sending
> you vitriolic email for gross irresponsibility, but I suspect that's
> about it.

Your point is correct that the users are more often the victims of early
publication (essentially, any form of publication that doesn't involve an
attempt to contact the programmer).

On the topic of punishing the programmer, however, it's worth noting the old
adage that every non-trivial program has bugs. Why would anyone feel there
is a need to "punish" the programmer? He's already got to acknowledge that
his program fouled up in some unforeseen way, he's got to track and fix the
problem, run more tests on it, and put a release out. In the process of
putting out the release, he also has to tell all of his users - even those
that wouldn't normally have been aware of the bug - that he is fixing a
security flaw, thus further affecting his relationship of trust with those
users.

And you (DJ) think the programmer needs _more_ punishment?

Now, I'm a strong supporter of _full_ disclosure, but my definition of
"full" appears to include something more than many others - I suggest that
"full" disclosure is not met until such time as the programmer is contacted.

There's any number of "security flaw mailing lists" out there, and each one
wants to have an edge over the others, and so will try and release
information to their own readers without distributing that information to
the other security lists. Is a programmer supposed to subscribe to, and
read, every mailing list? Even using tools to search for the name of his
program, this takes a substantial amount of time that is not available to
many operations.

If you're reporting a security flaw, it is your responsibility to make an
attempt to contact the programmer, to allow him/her to verify the report
(because you're going to look an idiot if he pops up on the list and says
"that problem was fixed two months ago", or even proves that your report is
incorrect), acknowledge it (yes, some of us do that), and set the wheels in
motion towards a fix and/or workaround. Nobody actually _requires_ that you
do that, of course, but it's nonetheless what you _should_ do. To not do
so is to support the hackers while undermining the users.

> >You're whining that punishment is painful. You're ignoring the effect
> >that punishment has on future behavior. It encourages programmers to
> >invest the time and effort necessary to eliminate security problems.
>
> Being notified of a security exploit in the usual way is quite
> sufficient for the majority of programmers...

I know it gets my arse in gear. I've canceled a vacation before now, just
because someone showed me a security flaw.

Partly, it's about the thought of losing users - not so much because I'll
lose their money (actually, once they've bought the software, there's no
more money I get from them), but more because I feel that the other software
that they might switch to is likely to be even less secure. But hey, I'm
somewhat conceited, myself. :-)

> A simple question for you:
>
> "Are you infallible?"

Some would say that DJ ought to be the Pope. ;-)

Alun.
~~~~

[Note that answers to questions in newsgroups are not generally
invitations to contact me personally for help in the future.]
--
Texas Imperial Software | Try WFTPD, the Windows FTP Server. Find us at
1602 Harvest Moon Place | http://www.wftpd.com or email al...@texis.com
Cedar Park TX 78613-1419 | VISA/MC accepted. NT-based sites, be sure to
Fax/Voice +1(512)378-3246 | read details of WFTPD Pro for NT.

D. J. Bernstein

Apr 17, 2001, 4:02:21 PM
http://securesoftware.list.cr.yp.to

Julian T. J. Midgley <jt...@xenoclast.org> wrote:
> You still don't get it, do you? It doesn't punish the programmer
> nearly half so much as it punishes his users. The users aren't likely
> to thank you if their systems get cracked as a result of your lack of
> prior notification to the programmer.

``You still don't get it, do you? By disclosing this vulnerability to
the public, you're punishing the users. They aren't likely to thank you
if their systems get cracked as a result of your irresponsible
publication of information. You have to keep the information secret!''

What opponents of full disclosure never seem to grasp is that the
vulnerability is THE PROGRAMMER'S FAULT. It isn't an act of God,
something to run away from. It is a human failure. It is something that
can and should be prevented.

Shooting the messenger is shortsighted. It shields programmers from
their failures. It reduces the incentives for programmers to do better.

> There is a damn sight more to writing secure software than automatic
> bounds checking, as you well know...

Which part of ``for example'' didn't you understand?

---Dan

Alun Jones

Apr 17, 2001, 5:06:53 PM
In article <2001Apr1720...@cr.yp.to>, d...@cr.yp.to (D. J.
Bernstein) wrote:
> http://securesoftware.list.cr.yp.to
>
> Julian T. J. Midgley <jt...@xenoclast.org> wrote:
> > You still don't get it, do you? It doesn't punish the programmer
> > nearly half so much as it punishes his users. The users aren't likely
> > to thank you if their systems get cracked as a result of your lack of
> > prior notification to the programmer.
>
> ``You still don't get it, do you? By disclosing this vulnerability to
> the public, you're punishing the users. They aren't likely to thank you
> if their systems get cracked as a result of your irresponsible
> publication of information. You have to keep the information secret!''

I must have missed the part where Julian said that you had to avoid
notifying the public. As far as I could see, he was merely suggesting that
you should notify the programmer first. In fact, I didn't see where he said
that you had to precede your public announcement by months, weeks, days,
hours or minutes.

> What opponents of full disclosure never seem to grasp is that the
> vulnerability is THE PROGRAMMER'S FAULT. It isn't an act of God,
> something to run away from. It is a human failure. It is something that
> can and should be prevented.

If it were something that could be prevented, then there would be flawless
software in public distribution. It can be limited, but I'd disagree that
it can be prevented. One means of limiting is to ensure that software
development is never a "finished" process, but a continual loop of
enhancement and improvement.

If you are a proponent of full disclosure, then you should allow for
disclosure to the programmer - otherwise, it is only _partial_ disclosure
that you advocate.

> Shooting the messenger is shortsighted. It shields programmers from
> their failures. It reduces the incentives for programmers to do better.

Who is suggesting shooting the messenger? Did I miss a post somewhere? I
thought Julian was suggesting that the programmer should be 'messaged' first
- not that the programmer should attempt to ensure that the message doesn't
ever get out.

Dave Gough

Apr 17, 2001, 5:15:31 PM

On a different, and vaguely legalistic, note: immediately publishing an
exploit for a security hole, and being the first to make such a hole
public, may open you to legal attacks for negligence and, depending on the
application's function, even criminal charges. This may seem far-fetched,
but remember that McD's was still held responsible for some twit who
couldn't hold on to their coffee. (I hope that has since been overturned.
I was unable to follow the news for a bit.)

> Shooting the messenger is shortsighted. It shields programmers from
> their failures. It reduces the incentives for programmers to do better.
> What opponents of full disclosure never seem to grasp is that the
> vulnerability is THE PROGRAMMER'S FAULT. It isn't an act of God,
> something to run away from. It is a human failure. It is something that
> can and should be prevented.

Even though I agree that shooting the messenger is wrong in many ways, I
also don't think the messenger should walk up and shoot the recipient,
either. Yes, it's the programmer's fault, but look at reality. Is the
lightning bolt that strikes you as you walk down the street your fault?
Is it your fault that you are almost forced to buy a Windows product
because the software you want to run is only available on that platform?
(As a note, and not an attack on Microsoft, the software choices between
Windows platforms and other platforms are almost uniformly stacked
against other platforms, save for server software.)

And in the case of a program co-authored by several programmers, some of
whom may no longer be working with the product in question, how is it
handled?

I understand the need for a full review process for all software; the
company I work for does little else but software verification, but it seems
that you have to be writing software for avionics or similar embedded and
black-box systems before anyone thinks that enough time needs to be spent
to assure that there are no bugs.

Even if a gun were placed against every programmer's head that would
automatically fire through his left temple in the event a security bug
manifested, you'd still wind up with security holes. (And a lot of dead
programmers. This might be a good thing (my asking price for employers
would go up) but would probably clutter the streets a bit.)

At least it's good to see that there are still some crusaders in the
world. Just don't go Don Quixote.

--

Dave Gough
System Administrator
-----------
ACE Computer Engineering, Inc.
W. Melbourne, FL 32904


Theo de Raadt

Apr 17, 2001, 6:51:03 PM
Dave Gough <da...@ace-comp.com> writes:

> On a Different, and vaguely legalistic approach, immediately publishing
> an exploit for a security hole, and then being the first to make such a
> hole public, may open you to legal attacks for Negligence, and depending
> on the application's function, even criminal charges.

I bet all the full disclosure people living outside the USA are peeing their
pants right about now... because they are laughing so hard.

D. J. Bernstein

Apr 17, 2001, 11:39:39 PM
Dave Gough <da...@ace-comp.com> wrote:
> may open you to legal attacks for Negligence, and depending
> on the application's function, even criminal charges.

Under what law? DMCA has an exception for security testing. I don't see
any basis for a negligence claim.

Anyway, I'm not intimidated by legal threats, and I don't require that
contributors to the securesoftware mailing list identify themselves.
See http://securesoftware.list.cr.yp.to/contributors.html.

---Dan

Julian T. J. Midgley

Apr 18, 2001, 6:16:57 AM
In article <2001Apr1720...@cr.yp.to>,

D. J. Bernstein <d...@cr.yp.to> wrote:
>Julian T. J. Midgley <jt...@xenoclast.org> wrote:
>> You still don't get it, do you? It doesn't punish the programmer
>> nearly half so much as it punishes his users. The users aren't likely
>> to thank you if their systems get cracked as a result of your lack of
>> prior notification to the programmer.
>
>``You still don't get it, do you? By disclosing this vulnerability to
>the public, you're punishing the users. They aren't likely to thank you
>if their systems get cracked as a result of your irresponsible
>publication of information. You have to keep the information secret!''

What is it with you and putting words in people's mouths? I never
said anywhere that one should keep the information secret - I merely
said that one should inform the programmer first and give them a
reasonable chance to release a patch /before/ you go public with the
details. I am a proponent of full-disclosure a la Bugtraq.

Given that you seem so eminently fallible when it comes to reading
other people's writing, I find it hard to believe that you think it is
possible for people to infallibly write secure software.

>What opponents of full disclosure never seem to grasp is that the
>vulnerability is THE PROGRAMMER'S FAULT. It isn't an act of God,
>something to run away from. It is a human failure. It is something that
>can and should be prevented.

Nor am I an opponent of full-disclosure. Last time I looked, Bugtraq
was a full disclosure mailing list, and I fully support it. I am an
opponent of /immediate/ full-disclosure, which is an important
distinction.

I agree that these bugs should be prevented, where possible, through
education, etc; however, I disagree with your assertion that it is
possible for all programmers always to write software that is
perfectly secure.

If history teaches us one thing, it's that human error cannot be
absolutely prevented except by absolutely removing the human from the
process under consideration.

Finally, you still haven't addressed the key issue, which is that full
disclosure without prior notification of the programmer punishes the
users of the software far more than the programmer himself. There is
only one (or a small number of) programmer(s), but there may be tens
or hundreds of thousands of users. Many of these users could be
running the software to support commercial operations; if their
machines go down through being cracked, they lose money by the hour.
These are the people you are seeking to protect in your crusade for
secure software, yet it is these same people who are hurt most by your
proposed solution, since they are the ones placed most at risk of
compromise.

I note that:

1. Most programmers respond rapidly and positively to notifications
of security holes in their software.

2. Prior notification enables them to release a fix before or at the
same time as the announcement of the exploit, reducing the risk of
their users being cracked.

3. Failing to notify the programmers in advance harms the users
considerably more than it harms the programmer, and does not
motivate him to write more secure software any more than full
disclosure with prior notification would have done.

We both agree that secure software is an excellent goal. If you could
explain what the weaknesses are of the prior notification model for
full disclosure, then you might start to convince me that your mailing
list is a good idea. You will have to think of something other than
"immediate public notification motivates the programmer to write
secure software by punishing him", because I don't believe you have a
shred of evidence to support that conclusion; if you do, please be so
good as to present it...

Alun Jones

Apr 18, 2001, 9:58:12 AM
In article <tQdD6.235$a46....@news2-win.server.ntlworld.com>,
jt...@xenoclast.org (Julian T. J. Midgley) wrote:
> What is it with you and putting words in people's mouths? I never
> said anywhere that one should keep the information secret - I merely
> said that one should inform the programmer first and give them a
> reasonable chance to release a patch /before/ you go public with the
> details. I am a proponent of full-disclosure a la Bugtraq.

While I agree with much of what you're saying here, it's worth noting that
there are some strong disputes as to what is "a reasonable chance", i.e.
what delay one should have between notifying the programmer, and releasing
the details to the public.

Further, in releasing the details to the public at the earliest possible
time, you don't give slow programmers the ability to pretend that they
respond quickly to reports of security flaws. The counter-balance, of
course, is that if you don't notify programmers when you notify the public,
you make even fast, dedicated programmers appear slow and unreliable.

> Nor am I an opponent of full-disclosure. Last time I looked, Bugtraq
> was a full disclosure mailing list, and I fully support it. I am an
> opponent of /immediate/ full-disclosure, which is an important
> distinction.

My personal feeling is that Bugtraq is a partial disclosure mailing list.
There is no requirement to report vulnerabilities to the programmer at the
same time as you publish them through Bugtraq - and as my own recent history
shows, you can be reading and _actively_ searching Bugtraq for mention of
keywords related to your software, and _still_ not receive notice that a
vulnerability has been posted against your software until months later.

> Finally, you still haven't addressed the key issue, which is that full
> disclosure without prior notification of the programmer punishes the
> users of the software far more than the programmer himself. There is
> only one (or a small number of) programmer(s), but there may be tens
> or hundreds of thousands of users. Many of these users could be
> running the software to support commercial operations; if their
> machines go down through being cracked, they lose money by the hour.

There is a _slim_ possibility that public disclosure allows a member of the
public to suggest a workaround or fix - for instance, the recent DoS attack
against multiple FTP servers, where the sequence "LIST
*/../*/../*/../*/../*/../*/../*" caused resource depletion and CPU hogging;
in this case, any FTP server that allows user-supplied filters on command
strings could provide a workaround simply by disallowing any argument with
the sequence "/.." in it (which isn't such a bad idea, anyway).

Such a workaround could be applied faster than it takes many larger
companies to come out with an acceptably official statement on the bug.
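To make that concrete, the filter can be as small as a check like the
following, run before the argument ever reaches the globbing code. This is a
hypothetical sketch, not code from WFTPD or any other actual server, and the
function name is invented:

#include <string.h>

/* Reject FTP command arguments containing parent-directory steps.
   Patterns such as "*/../*/../*/../*" are refused before the
   filename-globbing code - where the CPU and memory exhaustion
   happens - ever sees them.  Returns 1 if the argument looks safe. */
static int list_arg_is_safe(const char *arg)
{
    if (strstr(arg, "/..") != NULL)     /* "/.." anywhere in the string */
        return 0;
    if (strncmp(arg, "..", 2) == 0)     /* leading ".." with no slash */
        return 0;
    return 1;
}

A command handler would then answer something like "550 Permission denied."
when list_arg_is_safe() returns 0, instead of expanding the pattern.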

> These are the people you are seeking to protect in your crusade for
> secure software, yet it is these same people who are hurt most by your
> proposed solution, since they are the ones placed most at risk of
> compromise.

The users are definitely hurt, though, in the vast majority of cases, if the
programmer is not given an early opportunity to fix the bug.

> I note that:
>
> 1. Most programmers respond rapidly and positively to notifications
> of security holes in their software.

Agreed. Even if you believe that most programmers are evil, how do you tell
the evil ones from the good, if you don't allow the good ones an ability to
respond?

> 2. Prior notification enables them to release a fix before or at the
> same time as the announcement of the exploit, reducing the risk of
> their users being cracked.

_Any_ notification would be better than the scheme we have at present, where
it is left up to the whim of the bug finder as to whether they should
contact the programmer or not - and with a premium put on being the first to
discover a new bug, who's going to bother trying to track down the
programmer, when it's quicker just to post to the mailing lists? This is
why the mailing lists have a responsibility to allow vendors to receive
individual notices of reports against their software.

If you maintain a vulnerability database, indexed on vendor name, or
product, then there is no reason why you shouldn't have a facility for
automatically contacting the vendor when a new vulnerability is reported (or
a modification is made to an old vulnerability).
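As a sketch of what such a facility might look like - hypothetical record
layout and mailer path, not a description of how Bugtraq or SecurityFocus
actually operate - the database's insert/update path only needs a hook along
these lines:

#include <stdio.h>

/* Hypothetical record as it might sit in a vulnerability database. */
struct vuln_report {
    const char *product;       /* e.g. "ExampleFTPD 2.1" */
    const char *vendor_email;  /* contact address on file for that product */
    const char *summary;       /* one-line description of the report */
};

/* Called whenever a report is added or modified: mail the vendor of
   record.  Returns 0 on success, -1 if the mailer could not be run. */
int notify_vendor(const struct vuln_report *r)
{
    FILE *mailer = popen("/usr/sbin/sendmail -t", "w");
    if (!mailer)
        return -1;
    fprintf(mailer, "To: %s\n", r->vendor_email);
    fprintf(mailer, "Subject: vulnerability report filed against %s\n\n",
            r->product);
    fprintf(mailer, "A report has been added or updated for your product:\n\n");
    fprintf(mailer, "%s\n", r->summary);
    return pclose(mailer) == 0 ? 0 : -1;
}

The point is only that notification costs the list operator one database
trigger, rather than a manual search for the programmer by every bug finder.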

> 3. Failing to notify the programmers in advance harms the users
> considerably more than it harms the programmer, and does not
> motivate him to write more secure software any more than full
> disclosure with prior notification would have done.

Actually, I'd say it harms the users and the programmer - anything that
harms the users harms the programmer. Prior notification (or even
simultaneous, or slightly delayed notification) may not provide motivation,
but it definitely provides opportunity. Without timely notification, the
first a programmer is likely to hear of the bug is when his users start
chewing his ear off about a reported vulnerability. Sure, it gives him a
jump-start, but then so do electrodes to the testicles. Timely
notification allows the programmer to say "yes, we're aware of that, and
we're working towards a fix - in the meantime, a temporary workaround could
be ..."

Worth noting, too, is that the first user to call the programmer about a
vulnerability is not generally the first user to read about the
vulnerability. Because of recent problems getting Bugtraq and SecurityFocus
on track with a vulnerability report, I've lost several sales. I can just
hear the chorus of "ahh, poor baby". However, those sales were lost because
of a vulnerability that was patched and released six weeks _before_ the
vulnerability was posted to Bugtraq, and added to SecurityFocus'
vulnerability database. In other words, unconscionably out-dated
information (close to a lie, essentially) was being used to reject the
software: a vulnerability that was no longer present was being used as the
excuse to reject it.

How often does this happen to other companies? It's happened _twice_ to me,
and I've only had four vulnerabilities reported against my software. That's
not a good success rate on the part of Bugtraq / SecurityFocus or its
posters, and it suggests that the use of security tools that merely compare
the vulnerability database against version numbers is a substantially flawed
method of attempting to protect your systems.

> We both agree that secure software is an excellent goal. If you could
> explain what the weaknesses are of the prior notification model for
> full disclosure, then you might start to convince me that your mailing
> list is a good idea. You will have to think of something other than
> "immediate public notification motivates the programmer to write
> secure software by punishing him", because I don't believe you have a
> shred of evidence to support that conclusion; if you do, please be so
> good as to present it...

I don't understand why DJ believes that further punishment is necessary.
It's an embarrassment to find a bug in your code - and a public
embarrassment, and loss of sales, to have a security flaw as a result of
that bug.

Sure, immediate public notification may motivate an idle programmer to
improve his software; but immediate notification of the programmer _allows_
an interested, active programmer to improve his software.

Dave Sill

Apr 18, 2001, 2:45:24 PM
al...@texis.com (Alun Jones) writes:

> I don't understand why DJ believes that further punishment is necessary.
> It's an embarrassment to find a bug in your code - and a public
> embarrassment, and loss of sales, to have a security flaw as a result of
> that bug.

Obviously the current deterrent is insufficient to prevent shoddy,
insecure software from being distributed, as the abundance of shoddy,
insecure software demonstrates.

--
Dave Sill <MaxFr...@sws5.ctd.ornl.gov> <http://web.infoave.net/~dsill>
Oak Ridge National Lab, Workstation Support
<http://www.lifewithqmail.org>: almost everything you always wanted to know.

Alun Jones

Apr 18, 2001, 5:11:59 PM
In article <wx0eluq...@sws5.ctd.ornl.gov>, Dave Sill
<MaxFr...@sws5.ctd.ornl.gov> wrote:
> al...@texis.com (Alun Jones) writes:
>
> > I don't understand why DJ believes that further punishment is necessary.
> > It's an embarrassment to find a bug in your code - and a public
> > embarrassment, and loss of sales, to have a security flaw as a result of
> > that bug.
>
> Obviously the current deterrent is insufficient to prevent shoddy,
> insecure software from being distributed, as the abundance of shoddy,
> insecure software demonstrates.

On the contrary, only a relative minority of software titles are
distributed shoddy and insecure. However, that minority comes from a few
companies that have the majority of installations. From this you perceive
that most programmers are as bad as the few who make the most sales; but
it's a wrong perception.

D. J. Bernstein

Apr 18, 2001, 11:23:11 PM
Julian T. J. Midgley <jt...@xenoclast.org> wrote:
> If you could explain what the weaknesses are of the prior notification
> model for full disclosure

Same as the weaknesses of hiding the information. It shields programmers
from the consequences of their mistakes. It makes many programmers think
that they don't have to worry about security until something goes wrong.
It reduces the incentive to create secure software.

> Finally, you still haven't addressed the key issue, which is that full
> disclosure without prior notification of the programmer punishes the
> users of the software far more than the programmer himself.

``Finally, you still haven't addressed the key issue, which is that
disclosing vulnerabilities to the public punishes the users of the
software far more than the programmer himself. You have to keep the
information secret!''

> What is it with you and putting words in people's mouths?

I am not putting words in your mouth. I am drawing an analogy, to point
out the irrationality of your thought process.

---Dan

Juergen P. Meier

Apr 19, 2001, 1:40:09 AM
On 19 Apr 2001 03:23:11 GMT, <d...@cr.yp.to> typed:

>Julian T. J. Midgley <jt...@xenoclast.org> wrote:
>> If you could explain what the weaknesses are of the prior notification
>> model for full disclosure
>
>Same as the weaknesses of hiding the information. It shields programmers
>from the consequences of their mistakes. It makes many programmers think
>that they don't have to worry about security until something goes wrong.
>It reduces the incentive to create secure software.

You still don't understand the consequences the fourth dimension (time)
has for those of us living in space-time?
_Delayed_ disclosure is not the same as hiding, since it still gives
the programmers a deadline after which the information goes public,
patch or no patch.
This forces the programmers to learn secure programming, but at the
same time gives them the chance to do so without putting the
majority of their users in danger.

>> Finally, you still haven't addressed the key issue, which is that full
>> disclosure without prior notification of the programmer punishes the
>> users of the software far more than the programmer himself.
>
>``Finally, you still haven't addressed the key issue, which is that
>disclosing vulnerabilities to the public punishes the users of the
>software far more than the programmer himself. You have to keep the
>information secret!''

Sometimes I think you are either a politician or a press reporter/editor.
Would you please spare all of us these useless rephrasings that put a
completely different meaning into the original?
You just show off your complete and utter inability to grasp the
original statement!

Please answer the _Original_ question!

>> What is it with you and putting words in people's mouths?
>
>I am not putting words in your mouth. I am drawing an analogy, to point
>out the irrationality of your thought process.

Let _me_ rephrase this:
``I am not putting words in your mouth, I am putting letters in your
quotes, to delude the readers and avoid having to respond to your original
question.''

You just make a clown out of yourself if this is the way in which you
take part in a discussion.

>---Dan

juergen.

Julian T. J. Midgley

unread,
Apr 19, 2001, 5:07:13 AM4/19/01
to
>al...@texis.com (Alun Jones) writes:
>
>> I don't understand why DJ believes that further punishment is necessary.
>> It's an embarrassment to find a bug in your code - and a public
>> embarrassment, and loss of sales, to have a security flaw as a result of
>> that bug.
>
>Obviously the current deterrent is insufficient to prevent shoddy,
>insecure software from being distributed, as the abundance of shoddy,
>insecure software demonstrates.

Ah, but do you really believe that DJB's proposed mailing list will
actually encourage people to write more secure software, and
do more good than harm? And if so, what evidence have you to suggest
that programmers respond positively to the sort of 'encouragement' he
proposes?

If anything, the list will result in less secure software, as people
are forced/scared into rushing out patches without proper checking...

Julian

Julian T. J. Midgley

unread,
Apr 19, 2001, 5:25:08 AM4/19/01
to
In article <2001Apr1903...@cr.yp.to>,

D. J. Bernstein <d...@cr.yp.to> wrote:
>Julian T. J. Midgley <jt...@xenoclast.org> wrote:
>> If you could explain what the weaknesses are of the prior notification
>> model for full disclosure
>
>Same as the weaknesses of hiding the information. It shields programmers
>from the consequences of their mistakes.

Not for very long, it doesn't...

> It makes many programmers think that they don't have to worry about
> security until something goes wrong.

I doubt very much that any such programmer will be in any way
differently motivated by your proposed mailing list.

>It reduces the incentive to create secure software.

It is the usual practice for the discoverer of the exploit to give the
programmer some grace period in which to develop a patch before he
[the discoverer] posts the information publicly. In practice, this
grace period seems to vary between 30 seconds and a few weeks,
depending on the protagonists involved; a week is typical, however.
If the programmer doesn't succeed in releasing a patch in time, the
public is still informed.

Since full disclosure does take place (after the grace period), the
programmer has plenty of incentives to write secure software. If he
doesn't respond with a patch, he will certainly suffer a loss of
reputation. Similarly, if every fifth posting to BugTraq concerns one
of his pieces of software, people will tend to associate him with
insecure software, and be less inclined to use his stuff if they can
possibly avoid it.

Importantly, this arrangement also does a reasonable job of trying to
protect the users of the software; in most cases, the announcement of
the exploit is concurrent with the announcement of the patch for it.

>> Finally, you still haven't addressed the key issue, which is that full
>> disclosure without prior notification of the programmer punishes the
>> users of the software far more than the programmer himself.
>
>``Finally, you still haven't addressed the key issue, which is that
>disclosing vulnerabilities to the public punishes the users of the
>software far more than the programmer himself. You have to keep the
>information secret!''

I'm beginning to lose my patience with you. There's a world of
difference between 'keeping information secret', and releasing it to
the world at the same time as the patch the programmer has had time to
create.

And of course, the above paragraph still doesn't answer my questions:

1. Do you, or do you not believe that immediate full disclosure
punishes the user of the software more than it punishes the
programmer? (The users may suffer financial losses, the programmer
is unlikely to suffer anything more than loss of reputation; and if
he responds rapidly, he isn't even likely to suffer that.)

2. What evidence have you that programmers will respond positively to
the sort of 'encouragement' you propose to give them? Who will
write more secure software when they are forced to rush out patches
without time for thought and testing?

>> What is it with you and putting words in people's mouths?
>
>I am not putting words in your mouth. I am drawing an analogy, to point
>out the irrationality of your thought process.

Well, you aren't doing a very good job of it. The analogy is
incorrect, and you've failed to tackle any of the questions I posed to you
in my previous posting.

Julian

Dave Sill

unread,
Apr 19, 2001, 9:18:24 AM4/19/01
to
al...@texis.com (Alun Jones) writes:

> On the contrary, there's a relative minority of software titles that are
> distributed as shoddy and insecure.

Your standards are obviously much lower than mine. Give me a few
examples of titles you consider robust and secure. There should be
plenty since they're in the majority.

> However, that minority is from a few
> companies that have the majority of installations.

I wasn't thinking primarily of commercial software.

> Your perception from
> this is clear: that most programmers are as bad as the few who make the
> most sales; but it's a wrong perception.

You don't know enough about the nature of my perceptions to explain
how they're wrong.

Dave Sill

unread,
Apr 19, 2001, 10:14:46 AM4/19/01
to
jt...@xenoclast.org (Julian T. J. Midgley) writes:

> Ah, but do you really believe that DJB's proposed mailing list will
> actually encourage people to write more secure software, and
> do more good than harm?

Yes, definitely. And although DJB encourages immediate public
disclosure, there's nothing stopping one from contacting the author
before posting a message to the securesoftware list.

> And if so, what evidence have you to suggest
> that programmers respond positively to the sort of 'encouragement' he
> proposes?

I know from experience that people are more likely to be careful when
they're held responsible for the consequences of their actions.

> If anything, the list will result in less secure software, as people
> are forced/scared into rushing out patches without proper checking...

There's clearly a balance to be struck in releasing bugfixes: a quick
fix might introduce new problems or not entirely fix the old problem,
and a thoroughly tested fix might take so long to develop that the bug
will be widely exploited before the fix is released. I don't think
competent, conscientious developers will have any trouble releasing
good fixes in a timely manner, and the pressure to release timely
fixes will further encourage more secure initial releases.

Julian T. J. Midgley

unread,
Apr 19, 2001, 10:43:21 AM4/19/01
to
In article <wx0d7a9...@sws5.ctd.ornl.gov>,

Dave Sill <MaxFr...@sws5.ctd.ornl.gov> wrote:
>jt...@xenoclast.org (Julian T. J. Midgley) writes:
>
>> Ah, but do you really believe that DJB's proposed mailing list will
>> actually encourage people to write more secure software, and
>> do more good than harm?
>
>Yes, definitely. And although DJB encourages immediate public
>disclosure, there's nothing stopping one from contacting the author
>before posting a message to the securesoftware list.

Ah - if one does that [contacts the author first], then I have no
objections whatever to his list. My objections are purely to
DJB's proposal that no such advance notice be given.

>> And if so, what evidence have you to suggest
>> that programmers respond positively to the sort of 'encouragement' he
>> proposes?
>
>I know from experience that people are more likely to be careful when
>they're held responsible for the consequences of their actions.

Lists such as BugTraq already have the same effect.

Message has been deleted

Ken Hagan

unread,
Apr 19, 2001, 11:53:11 AM4/19/01
to
"Leonard R. Budney" <lbudney-...@nb.net> wrote...
>
> This mentality is unique to software engineering, and it's wrong.
> Suppose we took this attitude toward mechanical engineering!
> ``Telling people that the bridge is unsafe is bad; we should at
> least give the authorities time to quietly fix the bridge. After
> all, people have to get to work, and we should meanwhile keep a
> good thought: after all, it _may_ not collapse, killing everyone
> on it!''

But very few people respond to a "weak bridge" notice by
deliberately driving a 40 ton lorry over it, or strapping
explosives to the supports. People *do* attack software.

The problem seems to be that security holes in software are much
more likely to be "fallen into" if a bad guy knows where to push.
Therefore, broadcasting the necessary knowledge significantly
increases the likelihood of failure. Against this, we have the
possibly very high costs that users can incur even if the hole
is "fallen into" at random. (You don't *need* a bad guy.)

Can we compromise on "immediate notification of the existence of
the weakness", to be followed later by "providing the details of
the exploit"? This lets users protect themselves, and minimises
the help that we give to the bad guys.


Message has been deleted

Mike O'Connor

unread,
Apr 19, 2001, 3:52:37 PM4/19/01
to
In article <2001Apr1720...@cr.yp.to>,

D. J. Bernstein <d...@cr.yp.to> wrote:
:What opponents of full disclosure never seem to grasp is that the
:vulnerability is THE PROGRAMMER'S FAULT. It isn't an act of God,
:something to run away from. It is a human failure. It is something that
:can and should be prevented.

You assert that 'It' is a human failure, that:

a) can be prevented, and
b) should be prevented.

Based on your previous sentences, I believe that your 'It' is the act
of a programmer generating vulnerable code that makes it into programs
released to the general public.

Am I stating your views fairly and accurately?

:Shooting the messenger is shortsighted. It shields programmers from
:their failures. It reduces the incentives for programmers to do better.

Do you think that full disclosure causes programmers to generate and
release less vulnerable code? Is full disclosure intended to facilitate:

a) actually preventing human failure, or
b) blaming someone for human failure

I have seen plenty of evidence for 'b'. Yes, full disclosure can lead
to a whole lot of blame. This may or may not be a productive exercise.

I have not seen evidence to suggest that full disclosure inspires 'a'
-- programmers writing/releasing less vulnerable code. Do you have
evidence indicating otherwise? Sure, a programmer may address the
vulnerability at hand. For all we know, they may well have done so
without full disclosure. But does full disclosure cause programmers
and their employers to learn or evolve? If so, how? There's people
and books and such out there to teach how to write code. There's not
a lot that teaches how to write invulnerable code. So much of the
foundation technologies prevalent in the computing industry are not
designed with security as a primary criterion.

There are other reasons for and against full disclosure, but I have
not seen evidence to suggest that it leads toward programmers writing
better code. Dan, if that's your biggest reason for full disclosure,
it would help to show some evidence that full disclosure actually works
to do this. Otherwise, your efforts may come across as a righteous
and unproductive blame game.

--
Michael J. O'Connor | WWW: http://dojo.mi.org/~mjo/ | Email: m...@dojo.mi.org
Royal Oak, Michigan | (has my PGP & Geek Code info) | Phone: +1 248-427-4481

Alun Jones

unread,
Apr 19, 2001, 6:03:14 PM4/19/01
to
In article <m3ae5c2...@peregrine.swoop.local>, lbudney...@nb.net
(Leonard R. Budney) wrote:
> If stupid admins don't read bugtraq, then they deserve to be owned, and
> then fired. If authors of security-critical software don't, then they
> deserve to suffer an angry backlash.

Sorry, Len, but you seem to assume that reading Bugtraq catches all the
vulnerabilities posted to Bugtraq.

This is, clearly, not the case.

As I noted earlier, I subscribe to Bugtraq, I read Bugtraq, and I scan every
Bugtraq posting automatically for the string "FTP", in a case-insensitive
manner, so that I should catch any and all mentions of my software.
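
[For illustration, a minimal sketch in C of the kind of case-insensitive
keyword scan described above: it reads a message on stdin and prints any
line mentioning "ftp". The fixed keyword and the idea of piping each list
message through a small standalone filter are assumptions for the example,
not a description of the actual setup.]

/* scanftp.c - flag any line of a message that mentions "ftp",
 * regardless of case.  Exit status 0 means at least one match. */
#include <stdio.h>
#include <string.h>
#include <ctype.h>

int main(void)
{
    char line[4096];
    int hit = 0;

    while (fgets(line, sizeof line, stdin) != NULL) {
        char lower[4096];
        size_t i;

        /* Lowercase a copy of the line so strstr() becomes case-insensitive. */
        for (i = 0; line[i] != '\0' && i < sizeof lower - 1; i++)
            lower[i] = (char)tolower((unsigned char)line[i]);
        lower[i] = '\0';

        if (strstr(lower, "ftp") != NULL) {
            fputs(line, stdout);   /* show the matching line */
            hit = 1;
        }
    }
    return hit ? 0 : 1;
}

[One would run it over each saved list message, e.g. "./scanftp < message.txt",
and only read messages whose exit status indicates a match.]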

Last week, someone notified me that they were rejecting my software on the
grounds that it had an unfixed vulnerability. Again, going back through
Bugtraq showed no mention of this. Visiting the SecurityFocus vulnerability
database, on the other hand, showed that a Bugtraq posting had apparently
been made, listing a bug against my software. The bug had been fixed over
six weeks prior to its being posted to Bugtraq, and the person announcing
the bug had not bothered to inform me.

You seem to think that I deserved to lose a sale, based on the fact that I
fixed a bug, before it was reported to me, the bug was then "posted to
Bugtraq", and entered into the SecurityFocus database - but somehow,
mysteriously, the posting to Bugtraq never actually made it to me, a
subscriber to Bugtraq. Three months after I fixed the bug, and released the
fixed version, I am told I lost a sale because of the supposed presence of
the bug.

Perhaps, if Bugtraq were a reliable source of information, I wouldn't have
much problem with your assertion. However, it quite clearly isn't. When
you rely on an unreliable source of security information, then your security
is just as shoddy as when you rely on unreliable programs.

Now, my own situation listed above could have been fixed in one of two ways:
1. Bugtraq could post every item to its regular mailing list.
2. SecurityFocus could notify vendors when they update the vulnerability
database.

Apparently, item 1 is not happening - why on earth do you have a problem
with item 2?

As a final note, there are a large number of security-related mailing lists.
Perhaps Bugtraq should be considered "essential reading", but which others
should I also read? How many others? How much time should I be expected to
spend reading through pages of other people's bugs, in the vain hope that
one of them might relate to me? How much time should I be required to
forego from my development work?

Apparently, the current solution is quite happy to take punitive measures
against _all_ programmers, not just those that are assessed as having
vulnerable software.

Alun Jones

unread,
Apr 19, 2001, 6:03:16 PM4/19/01
to
In article <5VxD6.244$yb1....@news2-win.server.ntlworld.com>,
jt...@xenoclast.org (Julian T. J. Midgley) wrote:
> If anything, the list will result in less secure software, as people
> are forced/scared into rushing out patches without proper checking...

This is always the problem with bug fixing - when every line of code changed
has a chance of introducing or exposing another bug, you have to seriously
weigh whether fixing a bug will be likely to open up a whole can of worms.
It takes me days to test each release of my software; if I release a fix in
the same week as a bug is reported, then I haven't tested the software
adequately. Isn't untested code just as bad as poorly-written code?

I hope DJB isn't advocating that it's a good idea to rush-release untested
code!

Alun Jones

unread,
Apr 19, 2001, 6:03:18 PM4/19/01
to
In article <0104191952...@dojo.mi.org>, Mike O'Connor
<m...@dojo.mi.org> wrote:
> There are other reasons for and against full disclosure, but I have
> not seen evidence to suggest that it leads toward programmers writing
> better code. Dan, if that's your biggest reason for full disclosure,
> it would help to show some evidence that full disclosure actually works
> to do this. Otherwise, your efforts may come across as a righteous
> and unproductive blame game.

Worse, I've seen some indications that the full disclosure game is being
used as a marketing gimmick - each time you fix a bug in your own software,
especially if it relates to underlying OS behaviour, try it against your
competition, and then post the vulnerability (ideally with a note that
you've tested it against your own software, which has not proven to be
vulnerable).

If you have time, or a spare person or two, it's a great game, and because
nobody's required to contact the vendors involved, if your competition
hasn't subscribed to the right mailing list yet, you can make miles of
marketing ground from it.

Mike O'Connor

unread,
Apr 19, 2001, 6:10:32 PM4/19/01
to
In article <5gJD6.167558$lj4.5...@news6.giganews.com>,
Alun Jones <al...@texis.com> wrote:
:In article <0104191952...@dojo.mi.org>, Mike O'Connor
:<m...@dojo.mi.org> wrote:
:> There are other reasons for and against full disclosure, but I have
:> not seen evidence to suggest that it leads toward programmers writing
:> better code. Dan, if that's your biggest reason for full disclosure,
:> it would help to show some evidence that full disclosure actually works
:> to do this. Otherwise, your efforts may come across as a righteous
:> and unproductive blame game.
:
:Worse, I've seen some indications that the full disclosure game is being
:used as a marketing gimmick - each time you fix a bug in your own software,
:especially if it relates to underlying OS behaviour, try it against your
:competition, and then post the vulnerability (ideally with a note that
:you've tested it against your own software, which has not proven to be
:vulnerable).
:
:If you have time, or a spare person or two, it's a great game, and because
:nobody's required to contact the vendors involved, if your competition
:hasn't subscribed to the right mailing list yet, you can make miles of
:marketing ground from it.

All this may be true, but I don't want it to distract from my initial line
of questioning. I'm trying to sort out Dan's motivations for wanting to do
what he's doing, because his reasons as I understand them don't make sense
to me. I'm not trying to bring up every other reason that full disclosure
is good or bad or whatever.

Message has been deleted
Message has been deleted

Theo de Raadt

unread,
Apr 19, 2001, 7:27:56 PM4/19/01
to
Mike O'Connor <m...@dojo.mi.org> writes:

While I do not agree with much of what is being said here on both sides, I
want to add my view.

> You assert that 'It' is a human failure, that:
>
> a) can be prevented, and
> b) should be prevented.
>
> Based on your previous sentences, I believe that your 'It' is the act
> of a programmer generating vulnerable code that makes it into programs
> released to the general public.
>
> Am I stating your views fairly and accurately?

I believe that both (a) and (b) are true, and that it is human failure
which can be prevented.

> :Shooting the messenger is shortsighted. It shields programmers from
> :their failures. It reduces the incentives for programmers to do better.
>
> Do you think that full disclosure causes programmers to generate and
> release less vulnerable code? Is full disclosure intended to facilitate:
>
> a) actually preventing human failure, or
> b) blaming someone for human failure
>
> I have seen plenty of evidence for 'b'. Yes, full disclosure can lead
> to a whole lot of blame. This may or may not be a productive exercise.

I believe it does actually cause (a).

Once people get a kick in the ass, many do try harder. (Perhaps you don't?)

If they don't get a kick in the ass, they do not try harder.

For instance, Microsoft is trying to do better at security because
they've been slaughtered in the press, largely because of the effects
of full disclosure.

> I have not seen evidence to suggest that full disclosure inspires 'a'
> -- programmers writing/releasing less vulnerable code.

And I think you are either blind, or lying.

> Do you have
> evidence indicating otherwise?

I think there is much evidence.

> Sure, a programmer may address the
> vulnerability at hand. For all we know, they may well have done so
> without full disclosure. But does full disclosure cause programmers
> and their employers to learn or evolve?

Yes it does.

> If so, how?

People try to avoid public humiliation.

> There's people
> and books and such out there to teach how to write code.

There are? I've never read one, and I seem to be doing a pretty good
job. Because if our OpenBSD group makes a mistake in security, it
bears down pretty hard upon us. Since we are terrified of humiliation,
we try harder.

I think that everyone can bear the same standard.

Including you.


> There's not
> a lot that teaches how to write invulnerable code.

The threat of public humiliation may.

> So much of the
> foundation technologies prevalent in the computing industry are not
> designed with security as a primary criterion.

That is an utter cop-out. You should be ashamed.

> There are other reasons for and against full disclosure, but I have
> not seen evidence to suggest that it leads toward programmers writing
> better code.

What??? You are trying to claim that not one
SINGLE case exists of full disclosure causing even just ONE programmer
to write better code?

Liar -- there's TONS of evidence that it has caused improvements!

> Dan, if that's your biggest reason for full disclosure,
> it would help to show some evidence that full disclosure actually works
> to do this. Otherwise, your efforts may come across as a righteous
> and unproductive blame game.

Actually, I don't think Dan has to prove anything. If anything, you need
to prove that full disclosure harms code quality, and causes new holes to
be introduced.

Because if full disclosure causes just one developer to audit their
code and fix one security hole proactively, Dan is right and you are
wrong. If it has, then full disclosure has improved code.

The fact is, full disclosure is here to stay, and you are the one
trying to pick violets out of your butt.

--
This space not left unintentionally unblank. der...@openbsd.org
Open Source means some restrictions apply, limits are placed, often quite
severe. Free Software has _no_ serious restrictions. OpenBSD is Free Software.

Ken Hagan

unread,
Apr 20, 2001, 5:43:00 AM4/20/01
to
"Leonard R. Budney" <lbudney...@nb.net> wrote...
>
> Some sound arguments exist on both sides. I tend to favor immediate
> posting of (1) the hole, (2) an exploit, and (3) a workaround or
> fix, with a CC to the author. I respect folks who decide to hold the
> exploit for later.

Yes, I missed "providing a work around", probably because I thought
it would make the exploit obvious. On reflection, it should nearly
always be possible to offer a workaround that keeps the exploit
non-obvious.

> Len
> (Whose past employer finally installed a firewall, after ignoring my
> dire warnings, because a script kiddie did some $30K of damage.)

This is a fair point. I talked about "bad guys", and there is an
assumption in many circles that this means evil competitors or
organised criminals. The danger from unwashed hordes of children
is just as great. The figure is interesting too. I bet all sorts
of companies could lose 30K in disruption, and I bet the firewall
cost less than a tenth of that. We're talking about fairly casual
crime here -- a sort of "surf-by shooting".


Julian T. J. Midgley

unread,
Apr 20, 2001, 5:43:41 AM4/20/01
to
In article <cr8yo7...@zeus.theos.com>,

Theo de Raadt <der...@zeus.theos.com> wrote:
>Mike O'Connor <m...@dojo.mi.org> writes:
>
>While I do not agree with much of what is being said here on both sides, I
>want to add my view.
>
[SNIP]

>
>Once people get a kick in the ass, many do try harder. (Perhaps you don't?)
>
>If they don't get a kick in the ass, they do not try harder.

They get a kick in the arse when they get an email from me saying
"your program is vulnerable to attack thusly; please provide a fix- I
shall be publishing details of the problem to Bugtraq one week from
now, whether you've released a fix or not". But at least this way
(provided they release a fix on time) there doesn't exist a period
between discovery of the problem and its announcement to the rest of
humanity (and the script kiddies).

>For instance, Microsoft is trying to do better at security because
>they've been slaughtered in the press, largely because of the effects
>of full disclosure.

Yes - the sort of full disclosure for which Bugtraq is famed, which
(usually, though not always) involves prior notification of the
author. Given that we can see that Bugtraq works (as you essentially
admit), there is no requirement for an additional mailing list which
has as one of its tenets that one should not attempt to contact the
author before going public.

It's worth noting that the argument here is between two different
types of 'full disclosure' - 'immediate full disclosure' (DJB's
preferred option), and 'full disclosure with prior notification' (a la
Bugtraq). Often in this thread, however, 'full disclosure' on its own
has been used synonymously with 'immediate full disclosure', and I
believe this was the sense in which Mike O'Connor was using it in his
previous post. He was not advocating that there should be /no/ full
disclosure, merely that /immediate/ full disclosure was flawed.

Julian

Mike O'Connor

unread,
Apr 20, 2001, 2:48:56 AM4/20/01
to
In article <cr8yo7...@zeus.theos.com>,
Theo de Raadt <der...@zeus.theos.com> wrote:
:Mike O'Connor <m...@dojo.mi.org> writes:
:
:While I do not agree with much of what is being said here on both sides, I
:want to add my view.
:
:> You assert that 'It' is a human failure, that:
:>
:> a) can be prevented, and
:> b) should be prevented.
:>
:> Based on your previous sentences, I believe that your 'It' is the act
:> of a programmer generating vulnerable code that makes it into programs
:> released to the general public.
:>
:> Am I stating your views fairly and accurately?
:
:I believe that both (a) and (b) are true, and that it is human failure
:which can be prevented.

That's fine. I was asking Dan what his views were.

:> :Shooting the messenger is shortsighted. It shields programmers from
:> :their failures. It reduces the incentives for programmers to do better.
:>
:> Do you think that full disclosure causes programmers to generate and
:> release less vulnerable code? Is full disclosure intended to facilitate:
:>
:> a) actually preventing human failure, or
:> b) blaming someone for human failure
:>
:> I have seen plenty of evidence for 'b'. Yes, full disclosure can lead
:> to a whole lot of blame. This may or may not be a productive exercise.
:
:I believe it does actually cause (a).
:
:Once people get a kick in the ass, many do try harder. (Perhaps you don't?)
:
:If they don't get a kick in the ass, they do not try harder.

Does "try harder" equate to actually doing something about it? If
so, what specific actions are taken that would not otherwise be taken?
And how effective are they?

:For instance, Microsoft is trying to do better at security because
:they've been slaughtered in the press, largely because of the effects
:of full disclosure.

Are they succeeding? For someone of their size and scope, how long do
we observe before we can say one way or the other? How does the increase
in their security efforts compare to their increase in coding efforts
that don't have to do with security?

:> I have not seen evidence to suggest that full disclosure inspires 'a'
:> -- programmers writing/releasing less vulnerable code.
:
:And I think you are either blind, or lying.

I don't perceive myself to be lying to anyone else. Call me blind,
then, or lying to myself. I don't have all the answers, and often
feel fortunate when I can ask the right questions.

:> Do you have
:> evidence indicating otherwise?
:
:I think there is much evidence.

Show me. I see a lot of security holes in the same products again
and again, despite full disclosure having been in force for a while.
And these products even grow in market share -- IE and Outlook are
great examples here. I'm trying to sort out what full disclosure
really translates into. I can clearly see the blame game. I can
see other facets. I don't see where it leads programmers to write
better code. Buffer overruns were the subject of a very public,
fully disclosed exploit over a decade ago, yet are still a factor today.

:> Sure, a programmer may address the
:> vulnerability at hand. For all we know, they may well have done so
:> without full disclosure. But does full disclosure cause programmers
:> and their employers to learn or evolve?
:
:Yes it does.
:
:> If so, how?
:
:People try to avoid public humiliation.

To quote Yoda: "Do, or do not. There is no try."

:> There's people
:> and books and such out there to teach how to write code.
:
:There are? I've never read one, and I seem to be doing a pretty good
:job. Because if our OpenBSD group makes a mistake in security, it
:bears down pretty hard upon us. Since we are terrified of humiliation,
:we try harder.
:
:I think that everyone can bear the same standard.
:
:Including you.

The notion that everyone can bear terror is demonstrably false. Some
people even die from terror. You might feel that everyone _should_ bear
the same standard, but that would be a different statement.

For humiliation to work as a motivating factor, pride needs to be
involved. Isn't pride one of the deadly sins?

It's unclear to me that you and the fine folks working on OpenBSD
worked any less hard before you were established enough to be a target
of full disclosure than you do currently.

:> There's not
:> a lot that teaches how to write invulnerable code.
:
:The threat of public humiliation may.

Oh really? Why shouldn't the lesson be "don't release software",
"let's make disclosure a crime", or any number of other lessons?

Humiliation may teach that someone should do something, but not how
to do it.

:> So much of the
:> foundation technologies prevalent in the computing industry are not
:> designed with security as a primary criterion.
:
:That is an utter cop-out. You should be ashamed.

It is a truth, whether you choose to accept it or not. My thought
when writing that was simply that there is not much in the environment
that one can point at and say "this is secure", so learn from it and
build upon it. Where does your humiliated programmer turn to so they
can get better? By the time you get up to the popular higher-level
languages riding atop application environments and OSes and such,
which is the entry point for most programmers, you need to assume that
everything else around you is crap and reinvent the wheel to be
invulnerable. Instead of the classic "Hello, world!" program, you
teach them to codify a much more daunting "Fuck off, world!" program.

:> There are other reasons for and against full disclosure, but I have
:> not seen evidence to suggest that it leads toward programmers writing
:> better code.
:
:What??? You are trying to claim that not one
:SINGLE case exists of full disclosure causing even just ONE programmer
:to write better code?
:
:Liar -- there's TONS of evidence that it has caused improvements!

The big example you came up with was Microsoft. This scares me.

:> Dan, if that's your biggest reason for full disclosure,
:> it would help to show some evidence that full disclosure actually works
:> to do this. Otherwise, your efforts may come across as a righteous
:> and unproductive blame game.
:
:Actually, I don't think Dan has to prove anything. If anything, you need
:to prove that full disclosure harms code quality, and causes new holes to
:be introduced.

I didn't say anything about full disclosure harming quality. I made a
conscious effort to not cloud my questions to Dan by interjecting my
views on full disclosure.

:Because if full disclosure causes just one developer to audit their
:code and fix one security hole proactively, Dan is right and you are
:wrong. If it has, then full disclosure has improved code.

Ahhh... so finally, after a lot of personal invective, you say words
that address "how does full disclosure lead to the prevention of human
failure". So, full disclosure causes someone to perform a code audit
when they might otherwise not. The obvious follow-on question to me is:

Are code audits effective in preventing (not "just" minimizing) human
failure?

There was an awful lot of auditing of the relevant code for things
like the recent ftpd glob vulnerability. Lots of folks spent time and
energy looking at the code. Human failure was not prevented here.

Would more auditing have helped? Better code auditing? Application
auditing? Less complexity and less functionality? What else is there
besides auditing, because it wasn't enough to prevent failure?

:The fact is, full disclosure is here to stay, and you are the one
:trying to pick violets out of your butt.

Did I ever say that full disclosure was NOT here to stay?

Julian T. J. Midgley

unread,
Apr 20, 2001, 5:50:25 AM4/20/01
to
In article <hxTD6.736$991....@news6-win.server.ntlworld.com>,

Julian T. J. Midgley <jt...@xenoclast.org> wrote:

>
>They get a kick in the arse when they get an email from me saying
>"your program is vulnerable to attack thusly; please provide a fix- I
>shall be publishing details of the problem to Bugtraq one week from
>now, whether you've released a fix or not". But at least this way
>(provided they release a fix on time) there doesn't exist a period
>between discovery of the problem and its announcement to the rest of
>humanity (and the script kiddies).

Sorry - I'll complete/correct that sentence:

But at least this way (provided they release a fix on time) there
doesn't exist a period between the announcement of the problem and the
announcement of the fix when all the black hats know how to exploit
it, but the sysadmins have no means of patching it. (For certain
services, it is often not an option just to disable it, and the cost
of moving to an alternative piece of software is often prohibitive (in
any event, there is no such cost if the release of the exploit and fix
are simultaneous).)

Message has been deleted

Mike O'Connor

unread,
Apr 20, 2001, 5:58:45 AM4/20/01
to
In article <hxTD6.736$991....@news6-win.server.ntlworld.com>,
Julian T. J. Midgley <jt...@xenoclast.org> wrote:
:Yes - the sort of full disclosure for which Bugtraq is famed, which
:(usually, though not always) involves prior notification of the
:author. Given that we can see that Bugtraq works (as you essentially
:admit), there is no requirement for an additional mailing list which
:has as one of its tenets that one should not attempt to contact the
:author before going public.
:
:It's worth noting that the argument here is between two different
:types of 'full disclosure' - 'immediate full disclosure' (DJB's
:preferred option), and 'full disclosure with prior notification' (a la
:Bugtraq). Often in this thread, however, 'full disclosure' on its own
:has been used synomously with 'immediate full disclosure', and I
:believe this was the sense in which Mike O'Connor was using it in his
:previous post. He was not advocating that there should be /no/ full
:disclosure, merely that /immediate/ full disclosure was flawed.

Good, good point! I don't consider Bugtraq "full disclosure". I was
talking of it in DJB terms.

And, I'm specifically arguing about what full disclosure does as far
as making programmers write less vulnerable code. I definitely have
views on what good disclosure models are and aren't, but I'm trying
to keep that independent of my line of questioning. All I'll say for
now is that the old CERT model of "wait interminably for the vendor
before releasing the advisory" had some obvious problems.

Ken Hagan

unread,
Apr 20, 2001, 8:49:20 AM4/20/01
to
"Leonard R. Budney" <lbudney...@nb.net> wrote...
>
> The danger from unwashed hordes of children is minimal. They
> don't know until everyone knows, and once everyone knows, there
> is no excuse for ever being caught with your pants down.
>

Isn't there? I can see two reasons to suppose that the kiddies
will know before you do...

1 Script kiddies can automate the process of "Here is an
exploit, search the internet for a vulnerable computer",
but you can't automate the process of "keep up to date
by following enough newsgroups and web-sites".

2 An exploit publicised at 6pm, or on Saturday, probably
won't be patched until the following morning, or Monday.
Guess when the script kiddies are awake? Even if they kept
normal office hours, the Net is world-wide and runs 24/7.

...and one reason (that you implied elsewhere in your post) for
optimism.

i Going off-line takes a lot less time than creating a script.

I think the sheer numbers of script kiddies makes them a risk,
even when the windows of opportunity are small.


Alun Jones

unread,
Apr 20, 2001, 10:12:33 AM4/20/01
to
In article <m38zkw4...@peregrine.swoop.local>, lbudney...@nb.net
(Leonard R. Budney) wrote:
> Note that the lion's share of this "required reading" is required of
> admins. As a vendor, your primary job is to do your own work right the
> first time. Your second job is to fix problems quickly, and notify your
> customers promptly, to minimize damage from failure at job #1. Your third
> job is to stay current on potential problems, to improve in your craft.

Okay, now, unless you're the second coming, I'd suggest that even you have
problems with "do your own work right the first time" from time to time. Do
you write code? Can you truly claim to have never written a program with a
bug in it? Can you even claim that you are fully aware of all of the
implications of every system call or third-party library that you use?

My "second job", as you put it, is quite clearly hampered if nobody tells me
when I've slipped up on job one.

> And by the way, if you fixed an exploitable hole in your own software,
> why didn't you post that information to bugtraq? If you didn't, you are
> partly to blame here--the guy who found the hole apparently didn't know
> that he was running insecure software (yours) and that he should have
> installed the fix.

Uh, dude? I did post that information to bugtraq, when I first fixed the
bug. Someone later came and posted (apparently in some manner that alerted
the SecurityFocus vulnerability database, but whizzed straight past my
mailbox) an 'update' to the bug report, claiming that the bug was still
active, even after it had been fixed.

> > Apparently, the current solution is quite happy to take punitive
> > measures against _all_ programmers, not just those that are assessed
> > as having vulnerable software.
>

> Maybe I'm confused here. Didn't you say there was a hole in your
> software, until you fixed it? And didn't you essentially admit that
> you quietly fixed the hole, but left users of old software vulnerable
> by failing to announce the fix? If so, don't be so quick to distance
> yourself from other people who also write exploitable software.

Don't read into my words things that I didn't put there.

Yes, there was a hole until I fixed it. But then, can you claim with 100%
surety that you have no holes in _your_ software?

No, I don't leave my users vulnerable by failing to announce fixes. Fixes
_are_ announced, regularly. I personally feel that Bugtraq is a farce, and
that my customers should get word of any fix from _me_, not from some
unreliable third party.

I cannot distance myself from "other people who also write exploitable
software". No human being, prone to error, accident, oversight, misfortune,
brain farts, etc, can do so. There are, undoubtedly, bugs in my software,
and some of them may be exploitable. I do my damnedest to weed out as many
as I can in the development and testing phases, and if any are discovered
subsequently, I make sure they are stomped on quickly.

You seem to be under the impression that it is possible to guarantee that a
program is not exploitable. Either you are naive, or you are not a
programmer, or both.

Alun Jones

unread,
Apr 20, 2001, 10:12:45 AM4/20/01
to
In article <m366g04...@peregrine.swoop.local>, lbudney...@nb.net
(Leonard R. Budney) wrote:
> al...@texis.com (Alun Jones) writes:
>
> > ...each time you fix a bug in your own software...try it against your
> > competition, and then post the vulnerability (ideally with a note that
> > you've tested it against your own software, which has not proven to be
> > vulnerable).
>
> Note that this is extremely unethical, and should harm you in the end. If
> you "fix a vulnerability" in your own software, then you are obligated to
> announce the hole and the fix, for the sake of your users.

Len, you did a very bad thing there.

You quoted my words dreadfully out of their context, giving the impression
that I advocate such unethical procedures. Please, apologise.

Alun Jones

unread,
Apr 20, 2001, 10:13:00 AM4/20/01
to
In article <cr8yo7...@zeus.theos.com>, Theo de Raadt
<der...@zeus.theos.com> wrote:
> People try to avoid public humiliation.

If public humiliation is what it's about, then why not take photographs of
the programmer, and paste his head onto an embarrassing picture?

No, bug reporting should not be about humiliation or punishment. It should
be about ensuring that the code base becomes more secure.

The mere notice of an unfixed bug that a developer has been made aware of is
sufficient to achieve that aim, if such notice is available to his users.
If the programmer is not made aware of bug reports, then all it means when a
bug report is unaddressed for a long time is that the programmer doesn't
subscribe to that particular mailing list. This may mean nothing more
sinister than that the mailing list is unpopular - perhaps even only at that
vendor.

> There are? I've never read one, and I seem to be doing a pretty good
> job. Because if our OpenBSD group makes a mistake in security, it
> bears down pretty hard upon us. Since we are terrified of humiliation,
> we try harder.
>
> I think that everyone can bear the same standard.

Personally my standard is to work hard to provide secure code. I don't
_need_ the threat of humiliation to do that. Are you really saying that you
wouldn't give a fig for your users' security if it weren't for the fact that
someone might publicly "out" you? That's a poor attitude. I write security
into my software. I don't have to be threatened in order to do so.

> The threat of public humiliation may.

I'd hate for _my_ users to think that the only thing keeping me honest was
the threat of public humiliation. I'd prefer them to know me better, and
realise that I write security into the code from a basis of truly wanting
the code to be secure.

> > So much of the
> > foundation technologies prevalent in the computing industry are not
> > designed with security as a primary criterion.
>
> That is an utter cop-out. You should be ashamed.

Is it untrue? Is sprintf not a hugely unsafe function? Is it not taught by
example in all kinds of programming classes?

[For those of you not willing to wait for the answer, no it's not untrue,
sprintf (and many other 'standard' functions) are security flaws waiting to
happen, and most programming classes are not even slightly connected with
the concept that your code may come under attack, and/or be called on to
read incorrectly formatted input.]
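
[To make the sprintf point concrete, a minimal sketch in C contrasting the
unbounded call with the bounded snprintf (standardized in C99, though not yet
universally available); the buffer size and the greeting format are made up
for the example.]

#include <stdio.h>

void greet_unsafe(const char *name)
{
    char buf[32];
    /* If name is longer than the remaining space, this writes past
     * the end of buf - the classic overflow waiting to happen. */
    sprintf(buf, "Hello, %s!\n", name);
    fputs(buf, stdout);
}

void greet_safer(const char *name)
{
    char buf[32];
    /* snprintf() never writes more than sizeof buf bytes; overly long
     * input is truncated instead of trashing the stack. */
    snprintf(buf, sizeof buf, "Hello, %s!\n", name);
    fputs(buf, stdout);
}

int main(void)
{
    greet_unsafe("world");   /* harmless only because the input is short */
    greet_safer("world");
    return 0;
}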

> What??? You are trying to claim that not one
> SINGLE case exists of full disclosure causing even just ONE programmer
> to write better code?
>
> Liar -- there's TONS of evidence that it has caused improvements!

Sadly, there's also tons of evidence that vendors actively ignore full
disclosure mailing lists. And, personally, I write better code because I
want my programs to work better - be faster, more secure, more reliable,
etc. Not because someone posted a claimed exploit to Bugtraq.

> Actually, I don't think Dan has to prove anything. If anything, you need
> to prove that full disclosure harms code quality, and causes new holes to
> be introduced.

Far from it. I think the point is that full disclosure requires the vendor
to be notified, otherwise it benefits the hackers, and not the users.

> Because if full disclosure causes just one developer to audit their
> code and fix one security hole proactively, Dan is right and you are
> wrong. If it has, then full disclosure has improved code.

Full disclosure doesn't cause anyone to do anything proactively. Full
disclosure can only cause _reactive_ activity.

> The fact is, full disclosure is here to stay, and you are the one
> trying to pick violets out of your butt.

Where do you see full disclosure? I only see partial disclosure. Perhaps
my view is slightly less encumbered.

Message has been deleted
Message has been deleted

Tom Holub

unread,
Apr 20, 2001, 6:18:21 PM4/20/01
to
In article <m3snj3c...@peregrine.swoop.local>,
Leonard R. Budney <lbudney...@nb.net> wrote:
)
)Do you know of any large installation where IT is closed all weekend?

Every university in the world?
-Tom

Message has been deleted

Tom Holub

unread,
Apr 20, 2001, 9:00:40 PM4/20/01
to
In article <m3lmovc...@peregrine.swoop.local>,

Leonard R. Budney <lbudney...@nb.net> wrote:
)do...@best.com (Tom Holub) writes:
)
)> In article <m3snj3c...@peregrine.swoop.local>,

)> Leonard R. Budney <lbudney...@nb.net> wrote:
)> )
)> )Do you know of any large installation where IT is closed all weekend?
)>
)> Every university in the world?
)
)Hmmm...maybe you know something I don't know.

Obviously.

)If email or the web server
)goes down on Saturday, are you telling me that it stays down till 9am
)on Monday?

There are over 35,000 hosts at Berkeley, with no firewall. Perhaps
10% of those are Unix boxes of some sort (say, 3500). Of those, only
a small fraction are email or web servers; most are desktop machines
for faculty or grad students, or are in public or research labs. So
there's a huge number of machines, let's say 3000, which could easily
be broken into over a weekend without anyone important noticing.

Berkeley is not unusual in this; all major research universities have
large numbers of non-mission-critical boxes vulnerable on the public
network. It is the nature of academia.

Of course, I'm only counting Unix boxes in the above; there are
obviously vulnerabilities afoot for the 20,000+ Windows machines as
well.
-Tom

Message has been deleted

Tom Holub

unread,
Apr 20, 2001, 10:21:57 PM4/20/01
to
In article <m3itjzc...@peregrine.swoop.local>,

Leonard R. Budney <lbudney...@nb.net> wrote:
)do...@best.com (Tom Holub) writes:
)> There are over 35,000 hosts at Berkeley, with no firewall.
)
)That's their decision. It's a stupid decision, of course. But they're
)adults; having made that decision, they are responsible for the
)consequences. They have nobody to cry to if they're compromised through
)any one of their 35,000 vulnerabilities.

The point (which you will continue to refuse to get) is that you are
basing your logic on a state of affairs which simply doesn't exist.
The reality is that there are an unlimited number of vulnerable hosts
sitting out there on the public Internet, which are run not by
"incompetent" admins, but by typical users who don't have security as
their top priority, and immediate full disclosure increases the risk
for this (enormous) class of machines.

I'm sure you'll go do whatever you want to, anyway. Ideologues are
like that.
-Tom

Alun Jones

unread,
Apr 20, 2001, 10:36:41 PM4/20/01
to
In article <m3elune...@peregrine.swoop.local>, lbudney...@nb.net
(Leonard R. Budney) wrote:
> It's not their job to tell you. They do, generally, out of self-interest.
> Which is great--but why are you whining as if it's their moral duty to
> support your software? Ultimately, that responsibility rests on your
> shoulders and nobody else's.

Okay, excuse me for wanting clarification for this.

You view reporting bugs to the vendor as "support" for that software? No
wonder I seem to misunderstand your point of view.

No, if I find a bug in someone else's software, I view it as my primary
responsibility to tell that person. It's only a secondary responsibility to
tell anyone else.

> Then that's between you and the false poster. False reports have been
> posted before to bugtraq, and will be again. If his irresponsible conduct
> provably cost you money, sue him. But I'm not sure how you construe the
> *misuse* of a notification mechanism into an argument against the intended
> use of that mechanism.

I don't. I've never said that people shouldn't make public disclosure of
security vulnerabilities. I'm all for it. However, I feel that if your
goal is to provide more secure systems, then that has to be accompanied by
disclosure to the vendor. Otherwise, you're simply telling people that
things are broken, rather than allowing the problems to be fixed quickly.

In other words, I'm not saying Bugtraq should be torn down - I'm saying that
the systems built off of it that confer vulnerability tracking by vendor
should automatically contact the vendor - because it's _quite_ abundantly
clear that people posting to Bugtraq are not contacting vendors.

Message has been deleted

D. J. Bernstein

unread,
Apr 21, 2001, 12:37:38 AM4/21/01
to
Juergen P. Meier <J...@lrz.fh-muenchen.de> wrote:
> This forces the programmers to learn secure programming, but at the
> same time gives them the chance to do so without putting the
> majority of his users in danger.

The programmer has _already_ put the users in danger, by providing bad
software. That's not acceptable. He should have learned about security
_before_ writing security-critical software.

---Dan

D. J. Bernstein

unread,
Apr 21, 2001, 2:43:18 AM4/21/01
to
Alun Jones <al...@texis.com> wrote:
> No, bug reporting should not be about humiliation or punishment. It
> should be about ensuring that the code base becomes more secure.

Fixing all the existing security problems is not enough.

Programmers have to stop creating new security problems. _You_ have to
stop creating new security problems.

I want you to be under so much pressure that you will invest the time
necessary to avoid security problems. If you can't handle the pressure,
you shouldn't be writing network software.

> Full disclosure doesn't cause anyone to do anything proactively.

That simply isn't true. Fear is a wonderful motivator.

---Dan

Theo de Raadt

unread,
Apr 21, 2001, 3:20:18 AM4/21/01
to

Unfortunately, I must agree with Dan (I really don't want to).

If you cannot write bug-free software which will be used in security
sensitive situations, please get out of the business.

Message has been deleted

Sriranga Veeraraghavan

unread,
Apr 21, 2001, 6:01:52 PM4/21/01
to
d...@cr.yp.to (D. J. Bernstein) writes:

> The programmer has _already_ put the users in danger, by providing
> bad software. That's not acceptable. He should have learned about
> security _before_ writing security-critical software.

So how does one go about learning to write secure software?

This is an honest question and not a troll/flame.

I'm concerned about writing good secure code, but I have found very
few programming books (esp. network programming) that cover secure
programming techniques. Even Stevens doesn't seem to cover it well
(perhaps I'm just not looking hard enough).

As an aside, I've read the secure programming faq and several of the
references listed there in, but it is still not clear to me what sort
of things I should avoid in my programs. If you know of a good web
page that covers this I would appreciate a pointer.

TIA,

----ranga

Juergen P. Meier

unread,
Apr 21, 2001, 6:13:44 PM4/21/01
to
On 21 Apr 2001 04:37:38 GMT, <d...@cr.yp.to> typed:

Why not kill all programmers on earth (including you) and burn
all programs (since they are insecure and put all users in danger)?

Well, I thought this practice had been erased from Western
culture sometime in the 17th century...

OK, now I'm being polemic...

Now for a civilized reply:

The programmer was _not aware_ of the hole in his program.
That's the whole point!
I doubt there are many programmers out there who deliberately put
holes in their programs.
So your talk of punishing the programmer for his bad code
is pretty useless.

>---Dan

Full disclosure is what I want.
But I would rather give the programmer a few days to get a _working_
and _tested_ fix before going public.

I've seen enough fixes that break other things because they were
rushed.

juergen

--
J...@lrz.fh-muenchen.de
"This World is about to be Destroyed!"

Juergen P. Meier

unread,
Apr 21, 2001, 6:25:15 PM4/21/01
to
On Fri, 20 Apr 2001 09:58:45 GMT, <m...@dojo.mi.org> typed:

>:Yes - the sort of full disclosure for which Bugtraq is famed, which
>:(usually, though not always) involves prior notification of the
>:author. Given that we can see that Bugtraq works (as you essentially
>:admit), there is no requirement for an additional mailing list which
>:has as one of its tenets that one should not attempt to contact the
>:author before going public.
>:
>:It's worth noting that the argument here is between two different
>:types of 'full disclosure' - 'immediate full disclosure' (DJB's
>:preferred option), and 'full disclosure with prior notification' (a la
>:Bugtraq). Often in this thread, however, 'full disclosure' on its own
>:has been used synomously with 'immediate full disclosure', and I
>:believe this was the sense in which Mike O'Connor was using it in his
>:previous post. He was not advocating that there should be /no/ full
>:disclosure, merely that /immediate/ full disclosure was flawed.
>
>Good, good point! I don't consider Bugtraq "full disclosure". I was
>talking of it in DJB terms.

Thanks for the clarification here!

>And, I'm specifically arguing about what full disclosure does as far
>as making programmers write less vulnerable code. I definitely have
>views on what good disclosure models are and aren't, but I'm trying
>to keep that independent of my line of questioning. All I'll say for
>now is that the old CERT model of "wait interminably for the vendor
>before releasing the advisory" had some obvious problems.

Yes, I fully agree with you here.
Delayed full disclosure without any form of time limit, just waiting
indefinitely for the vendor to produce a fix, is as bad as no disclosure
at all, because with some vendors it's just that: no disclosure at all.

The preferred scheme on bugtraq (it's not enforced there, you can still
use the DJB model [heck, I should rather say immediate disclosure ;]
on bugtraq if you wish, you'll just have to ignore the comments of
other posters calling you irresponsible) is to give the vendor
a predefined, fixed time to get in gear and produce a working fix
before you post it on the list for the entire public to read.

I still believe that the delayed-with-time-limit full disclosure
model is superior to the immediate one in the long run,
as it not only forces the vendors to fix their bugs, but also
does not discourage programmers (especially those writing free [beer]
products) from continuing their work at all.

regards,

Juergen

Message has been deleted

Julian T. J. Midgley

unread,
Apr 21, 2001, 9:09:04 PM4/21/01
to
In article <cwv8e7...@zeus.theos.com>,

Theo de Raadt <der...@zeus.theos.com> wrote:
>d...@cr.yp.to (D. J. Bernstein) writes:

>> That simply isn't true. Fear is a wonderful motivator.
>
>Unfortunately, I must agree with Dan (I really don't want to).
>
>If you cannot write bug-free software which will be used in security
>sensitive situations, please get out of the business.

ROTFL

Splendid. You will now publish here a list of all programmers known
never to have written a program with a bug[1] in it. We will then
only use software written by those programmers. We will then have no
need of any form of security mailing list, since we know all our
trusted programmers to be infallible.

Of course, we'll be somewhat limited in our choice of software, but
that's a small price to pay for ultimate security. "Hello, world!",
anyone?

Go on then, publish your list...

Julian


[1] NB. That reads 'bug' not 'security bug'; if a programmer can write
a program containing a bug of any sort, then he cannot possibly be
conscientious enough for us to be certain that he won't introduce a
security bug into his software at a later date.

Julian T. J. Midgley

unread,
Apr 21, 2001, 9:37:54 PM4/21/01
to
In article <m3ae59g...@peregrine.swoop.local>,

Leonard R. Budney <lbudney...@nb.net> wrote:
>J...@lrz.fh-muenchen.de (Juergen P. Meier) writes:
>> Why not kill all programmers on earth (including you) and burn
>> all programs (since they are insecure and put all users in danger.)
>
>If embarrassment == death to you, then you're in the wrong business.

>
>> The programmer was _not aware_ of the hole in his program.
>> Thats the whole point!
>
>If you don't know how to write secure programs, you shouldn't be doing
>it. That's the whole point!

You don't know how to write absolutely secure programs, even Dan
Bernstein (despite his relative success to date) doesn't know how to
write absolutely secure programs. "How to write secure programs" is
fundamentally unknowable, unless you are writing in a language in
which security can be evaluated mathematically (ML might possibly
qualify; at least the language itself is provably sound
mathematically; I suspect, however, that the concept of 'security' is
not mathematically provable, in which case even ML won't help you).
"How to try to write secure programs" is not unknowable, neither is
"how to write the most secure program one possibly can", but neither
equates to absolute security.

There may exist methods of compromise which even the most competent
security expert is not yet aware of, or interactions between
components which are sufficiently complex that they will not occur to
the programmer at the time of writing, however well versed in the art
of writing secure programs he is.

In addition to these, there of course exist the many known failures
(such as buffer overflows, running with unnecessary privileges, etc.)
which programmers continue to fall foul of, though they ought to know
better; improved education and more security-related resources are the
ways to tackle these; discouraging people from writing software at all
is not. (I would rather have an extremely useful program that contains
a couple of security flaws that get ironed out after they are
discovered, than not have the program at all. For genuinely useful
software, risk analysis will typically reveal that the cost of not
having the software at all is greater than the cost associated with
compromise in the event of its containing a security hole, assuming
the systems concerned are appropriately backed up and competently
administered.)
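
To make the "unnecessary privileges" point concrete, here is a minimal
C sketch of mine, added purely for illustration and not code from any
program discussed in this thread: a daemon that needs root only to open
some privileged resource should shed root immediately afterwards and
confirm the drop cannot be reversed. The "nobody" account is just an
assumption for the example.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pwd.h>
#include <grp.h>

/* Switch from root to an unprivileged account. Order matters:
 * supplementary groups first, then gid, then uid. */
static int drop_privileges(const char *account)
{
    struct passwd *pw = getpwnam(account);

    if (pw == NULL)
        return -1;
    if (setgroups(0, NULL) != 0)   /* clear supplementary groups */
        return -1;
    if (setgid(pw->pw_gid) != 0)   /* group before user, or it will fail */
        return -1;
    if (setuid(pw->pw_uid) != 0)
        return -1;
    if (setuid(0) == 0)            /* the drop must be irreversible */
        return -1;
    return 0;
}

int main(void)
{
    /* ... open privileged resources (low ports, log files) here ... */

    if (drop_privileges("nobody") != 0) {
        fprintf(stderr, "cannot drop privileges, refusing to run\n");
        return EXIT_FAILURE;
    }

    /* ... only now start handling untrusted input ... */
    return EXIT_SUCCESS;
}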

Anyone who claims that he absolutely knows how to write secure
programs is most likely a charlatan, with an inadequate understanding
of security. If not, then he is a very rare genius indeed -
regrettably, sufficiently rare that we couldn't possibly rely on such
geniuses to write all our security related software for us; it's most
unlikely that even one such genius is currently alive.

Julian


NTRO...@mcsesuck.com

unread,
Apr 22, 2001, 12:38:54 AM4/22/01
to
Please kill this mental masturbation thread.

Dave Mundt

unread,
Apr 22, 2001, 2:07:10 AM4/22/01
to
Greetings and Salutations...

Sriranga Veeraraghavan <ra...@soda.csua.berkeley.edu> wrote:

>d...@cr.yp.to (D. J. Bernstein) writes:
>
>> The programmer has _already_ put the users in danger, by providing
>> bad software. That's not acceptable. He should have learned about
>> security _before_ writing security-critical software.
>
>So how does one go about learning to write secure software?
>

IMHO...look at enough "insecure" software to understand WHY it is
bad...then, don't do that.

>This is an honest question and not a troll/flame.
>
>I'm concerned about writing good secure code, but I have found very
>few programming books (esp. network programming) that cover secure
>programming techniques. Even Stevens doesn't seem to cover it well
>(perhaps I'm just not looking hard enough).
>
>As an aside, I've read the secure programming faq and several of the
>references listed there in, but it is still not clear to me what sort
>of things I should avoid in my programs. If you know of a good web
>page that covers this I would appreciate a pointer.
>
>TIA,
>
>----ranga
>

I suspect that "secure" programming techniques are pretty much the
same as "good" programming techniques... For me, some basic rules
include:
0) Determine HOW "secure" it needs to be...and what the user
needs to be kept away from.
1) NO implicit declarations, and strict type-checking (including
bounds checking of arrays/strings/etc).
2) Assume that the user is going to feed absolute trash to the
code...and write it so that it degrades gracefully, or ignores the
trash.
3) Comment, comment, comment...
4) Keep It Simple. If you find yourself saying "wouldn't it be
cool if it would do THIS???" more than a couple of times, then you
are getting caught up in "creeping featuritis" and need to step back
from the code.
5) Use a good testing protocol against the code before dumping it
on the general public. At the very least, after you have proved to
your satisfaction that it works the way you EXPECT it to work...hand
it to some folks totally unfamiliar with it, and let THEM break it.
6) In general, program defensively...for example...if you are
building a CGI that connects your web pages to a database, try
manually feeding a chunk of SQL code where the 'automated' query
parameters would be. Does your glue code deal with this, or does it
cheerfully feed it to the database software? What happens when your
program is expecting a 20-byte string and gets sent a 200,000-byte
string? (A small sketch of this case follows the list.)
7) As a part of this, spend time looking at and understanding
how users acquire privileges they should not...so you are not
repeating the mistakes of others...
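
And the sketch promised in point 6 - a C fragment of mine, illustrative
only and not taken from any real program - for the "expecting a 20-byte
string, sent a 200,000-byte string" case: read with an explicit bound,
treat an over-long line as trash, and reject it instead of copying it.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Read one line of untrusted input into a fixed-size buffer.
 * Returns 0 on success; -1 on EOF, read error, or an over-long line.
 * Anything past the bound is drained and rejected, never copied. */
static int read_bounded_line(FILE *in, char *buf, size_t bufsize)
{
    char *nl;
    int c;

    if (bufsize < 2 || fgets(buf, (int)bufsize, in) == NULL)
        return -1;

    nl = strchr(buf, '\n');
    if (nl == NULL) {
        while ((c = fgetc(in)) != EOF && c != '\n')
            ;                       /* drain the oversized line */
        return -1;
    }
    *nl = '\0';                     /* strip the newline */
    return 0;
}

int main(void)
{
    char field[22];                 /* 20 payload bytes + '\n' + NUL */

    if (read_bounded_line(stdin, field, sizeof field) != 0) {
        fprintf(stderr, "rejecting missing or oversized input\n");
        return EXIT_FAILURE;
    }
    printf("accepted: %s\n", field);
    return EXIT_SUCCESS;
}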
Regards
Dave Mundt


Remove the "REMOVE_THIS_" from my email address to get to me...
I hate Cullers who gather from newsgroups

Visit my home page at http://www.esper.com/xvart/index.html

Juergen P. Meier

unread,
Apr 22, 2001, 5:43:13 AM4/22/01
to
On 21 Apr 2001 22:55:04 -0400, <lbudney...@nb.net> typed:

>jt...@xenoclast.org (Julian T. J. Midgley) writes:
>>
>> Splendid. You will now publish here a list of all programmers known
>> never to have a written a program with a bug[1] in it.
>
>Not all bugs are security holes. So absolute perfection, whatever that
>is, is not the same goal as complete security. (Though they have much
>in common, I agree.)

All security bugs are bugs though.

>> We will then only use software written by those programmers.
>

>In practice, this is not true. Dan wrote qmail, djbdns, and publicfile,
>all of which are known *never* to have a security hole. However, their

Just because djb refuses to call his programming mistakes (see his
Changelogs; at least he does not try to cover them up) "bugs" or
"security flaws" does not make him a perfect programmer, which he most
certainly is not. His code is good, much better than most other code
(not a big surprise if one compares the number of active developers
contributing to his code (=1) and to other products (>1)).

Since he is a) a member of the human race and b) has written bugs
(whatever name he gives them) in the past, I do not count djb among
this list of perfect programmers, which still counts 0 (zero) names.

>penetration is surprisingly low, considering that fact. (Pleasantly high,
>but surprisingly low.)

Not much of a surprise if you look at the license, which is named
"not a license" [1], for example. Or if you consider the documentation
that is not included in the source archives. Or if you consider the
incompatibilities at the protocol level with other implementations
(publicfile, djbdns - qmail is an exception here).

>> [1] NB. That reads 'bug' not 'security bug'; if a programmer can write
>> a program containing a bug of any sort, then he cannot possibly be
>> conscientious enough for us to be certain that he won't introduce a
>> security bug into his software at a later date.
>

>When spewing non-sequitors, I say go for the gusto. Instead of "software
>bug", you should have said, "mistake". After all, anyone who would spill
>coffee on his shirt is bound to give away root eventually...

No. Your real-world analogy is wrong. Please don't use stupid analogies
from the real world, because they almost never work.

A bug is a bug. Someone who is able to make a non-security-related
bug is always someone who could have made that bug in a security-critical
part of the code.

You have discovered the true nature of all humanity: imperfection.

So any name that would appear on this list would most likely not be a
human being ;)

>Len.

[1] The 'terms of use' of djbdns, although titled "not to be a license",
is in fact (at least according to the legal system which applies to me)
a license, no matter what name djb gave it. (I'm German, so the European,
German and Bavarian legal systems apply, in that order of precedence.)


juergen


Juergen P. Meier

unread,
Apr 22, 2001, 8:01:29 AM4/22/01
to
On 22 Apr 2001 07:04:56 -0400, <lbudney...@nb.net> typed:
[...]

http://learn.to/quote

>> A bug is a bug.
>

>But many bugs do not give away elevated privileges. Security is not
>perfection (though that helps); it's being careful enough about the

If someone fails to avoid programming errors, there is nothing in this
world that could prevent this person from making those same programming
errors in security-critical parts of the code.

Yes, programmers should be careful when writing security-critical code,
but they should be careful when writing code anyway, because with
complex programs one can't easily say which part of the code touches
security and which part doesn't. (I think we agree on this part.)

>right things. That you simply don't understand this explains why some
>folks are saying you shouldn't be writing security-critical software
>at all.

Those folks _always_ have the choice not to use this code in their
security-critical environments!
There is _absolutely_ _no one_ who could force you to use it!
No force.

What you (and djb) propose (and what I strongly object to) is to prohibit
programmers from writing code at all.
You are not alone in this world, and your opinion is not the only valid one.
How can you (or djb) dare to claim jurisdiction over who should be
allowed to create programs and who should not? (The criteria don't matter.)

>> Someone who is able to make a non-security related Bug is always someone
>> who could have made this bug in a security critical part of the code.
>

>``Someone who is able to drop his keys is always someone who could have
>dropped his baby out an attic window.''

s/out an attic window/on the floor/

This happens much more frequently.
It's pretty unusual to hold babies out of attic windows. (At least where
I live.)

>Len.

BTW, you should really work on your quoting skills...
They're nearly as bad as djb's.

Julian T. J. Midgley

unread,
Apr 22, 2001, 8:26:06 AM4/22/01
to
In article <m38zkt8...@peregrine.swoop.local>,

Leonard R. Budney <lbudney...@nb.net> wrote:
>J...@lrz.fh-muenchen.de (Juergen P. Meier) writes:
>>
>> Just because djb refuses to call his programming mistakes..."security
>> flaws"...
>
>Put your money where your mouth is. If there's a security flaw, give the
>exploit.

>
>> A bug is a bug.
>
>But many bugs do not give away elevated privileges. Security is not
>perfection (though that helps); it's being careful enough about the
>right things. That you simply don't understand this explains why some
>folks are saying you shouldn't be writing security-critical software
>at all.

I see no evidence whatever that Juergen doesn't understand this;
instead, I see increasing evidence that you don't understand what it
is to be human. You are telling me that you know people who can
be relied upon *never* to make a mistake in a security-critical piece
of code? This is strange, because in most cases, the most secure
pieces of software are ones which have been thoroughly and carefully
audited by a /team/ of competent people; it is very rare indeed that
any such audit of a previously unaudited piece of software fails to
turn up bugs with security implications. And a team is essential, one
person just won't do, since it's vital that team members check each
others' work. If human beings had perfect memory, instant recall and
perfect concentration, then it might be possible to train one guy to do
this for himself...

Now the only way most open source programmers can get access to a
sufficiently numerous and competent audit team is to release their
software and get people to use it. Even if the programmer has been
painstaking in his attention to security related details, there will
likely be bugs found. This thread is about what the most appropriate
response is *when* those bugs are found, not *if*[1] they are found.

And in the end, it boils down to this: you, DJB and others believe
that not informing the author before notifying the world will
encourage people to write more secure software.

I and others believe that failing to notify the author
and give him a chance to fix the problem before publishing it to the
world places the users at unnecessary risk. I believe that it is
essential that you set a short (a week at the absolute maximum) time
limit before you publish details of the flaw.

Both groups believe that full disclosure (as opposed to the CERT-like
'we'll sit on it until the vendor bothers to send us the fix'
maybe-one-day disclosure) helps to increase the security of software
as a whole, by encouraging programmers to place a priority on
security, and helping to keep them up to date with new types of
attack, etc.

I don't think the shock value of not informing the programmer first
adds much to the encouragement factor, but it does add a great deal to
the risk to users. DJB and you disagree, but I don't think you
understand human motivations very well. People develop thick skins
quickly if beaten repeatedly with large sticks; they also tend to
become less co-operative.

Programmers are generally grateful to those who notify them of
security holes, and are keen to fix them; they often get involved in
correspondence with the discoverer of the hole, and learn from him.
They are unlikely to be grateful towards someone who doesn't have the
courtesy to inform them of the bug he's found in the software, nor are
they likely to learn as much from him.

Along the way, we seem to have become embroiled in a dispute about
whether or not it is possible for a human programmer to be able to
guarantee that his code is secure. I've never yet heard of
a human being who cannot and does not err, so I think it's as
unlikely here as it would be in any other sphere of human achievement. You
and DJB appear to believe that such people exist, but haven't yet been
successful in pointing to very many of them.

That about sums it up; it's clear that we're not going to agree on
this.

In closing, I note, thankfully, that there have been infinitely more
posts to Bugtraq in the last week than there have ever been to
secures...@list.cr.yp.to
(http://securesoftware.list.cr.yp.to/archives.html - I've also been
subscribed to the list for a week or two, and have yet to see a single
post).


Julian T. J. Midgley

unread,
Apr 22, 2001, 9:37:54 AM4/22/01
to
In article <m33db18...@peregrine.swoop.local>,

Leonard R. Budney <lbudney...@nb.net> wrote:
>jt...@xenoclast.org (Julian T. J. Midgley) writes:
>>
>> I've never yet heard of a case of a human being found who cannot and
>> does not err...
>
>You keep confusing security with inerrancy.

Not at all...

>> ...unlikely as it would be in any other sphere of human achievement.
>
>Bridge building, for example? It's a wonder anyone crosses a river and
>lives.

Tacoma Narrows mean anything to you? And for a more recent example,
what about the Millennium Footbridge across the Thames in London, which
had to be closed after a design failure?

As I've said before, bridge design (at least for non-innovative
designs) is relatively easy (the number of variables is relatively
small, the amount of data on the materials used is large, and
simulation is relatively straightforward).

But no one has yet written a piece of software that can tell you
whether or not your code is 'secure'. The complexity of reasonably
sized software programs is usually too great.

>Maybe the reason I can't sympathize with your viewpoint is that in my
>work, software *is* like building a bridge: with our customers,
>"software failure" can result in dumping 300 tons of molten steel down
>somebody's back. And if that happens we can't shrug our shoulders and
>say, "Hey, to err is human!"

Indeed. But, I bet your software lifecycle isn't:

Procedure A:
1. Programmer writes software and tests it himself.
2. He tells you it's ready.
3. You ship it, because he's so well trained you know he doesn't
make mistakes any more.

I'm quite sure you instead use procedure B: a sophisticated testing
and auditing process, involving several teams of people, code reviews,
etc. And I bet this process often uncovers bugs that are then
corrected *before* the code is released to your customers. And
furthermore, I'm certain that all this testing costs quite a
substantial quantity of money. If you don't, and actually use Procedure
A instead of B, then please tell the world so that some poor sod can
avoid having 300 tons of molten steel poured down his back.

An individual writing open source software typically doesn't have
access to the resources to check his code this thoroughly. His
resources are the open-source community, who will review and check his
code for him, if he's lucky - but they usually only do this once he's
released it, and not before. If he's already well known, he may be
able to gather a team together to review his code before he makes it
available, but probably only for projects acknowledged to be important
and security critical. Otherwise, part of the process for eliminating
the inevitable bugs in his software is discovery and announcement on
full disclosure mailing lists (with prior notification). The more
conscientious we encourage programmers to be, the fewer holes there
will be in the first release of any particular piece of software. But
until you invent an automatic (and 100% reliable) process for bug
checking, we will never have programmers that can reliably write
software without bugs (security-related or otherwise).

We have devised procedures for reducing the probability that a human's
error will go undiscovered (code reviews, testing, peer review, etc);
we've not yet devised a procedure for preventing the mistake being
made in the first place. We can use these procedures in designing the
software for the space shuttle, or for controlling steel factories,
and it costs us a great deal to do so; the closer you want to get to
zero probability of there existing a serious bug in your software, the
more you have to spend on testing, review, training, etc.

But a man writing software at home which he gives away without
expecting payment does not have the resources to spend as much time on
error checking and review (and in any event, has to enlist the help of
others; it's never safe to assume that you can reliably audit your own
code). So it's quite unreasonable of you to lack sympathy for him when
his errors are discovered, unless you wish to pay him for the cost of
running his software through the same procedures you use at work
(including paying for the requisite number of testers, programmers for code
review, etc).

D. J. Bernstein

unread,
Apr 22, 2001, 1:34:48 PM4/22/01
to
Julian T. J. Midgley <jt...@xenoclast.org> wrote:
> http://securesoftware.list.cr.yp.to/archives.html - I've also been
> subscribed to the list for a week or two, and have yet to see a single post

It's a new list. It'll need time to catch up to bugtraq; most bugtraq
contributors don't know about the list yet. But the advantages---clear
labelling of messages, fast distribution of messages, no messages on
commercial software---will propel it ahead of bugtraq in the long run.

---Dan

D. J. Bernstein

unread,
Apr 22, 2001, 2:41:34 PM4/22/01
to
Julian T. J. Midgley <jt...@xenoclast.org> wrote:
> But a man writing software at home which he gives away without
> expecting payment does not have the resources to spend as much time on
> error checking and review

Then he shouldn't be writing security-critical software. If he does so
anyway, and he creates security problems, he will be punished.

---Dan

Dave Mundt

unread,
Apr 22, 2001, 3:51:11 PM4/22/01
to
Greetings and Salutations...

d...@cr.yp.to (D. J. Bernstein) wrote:

>Julian T. J. Midgley <jt...@xenoclast.org> wrote:
>> But a man writing software at home which he gives away without
>> expecting payment does not have the resources to spend as much time on
>> error checking and review
>

Hum...this is a point...but, not a good one. The fact is that
sloppy programming is NEVER a bargain...even if it is "free".

>Then he shouldn't be writing security-critical software. If he does so
>anyway, and he creates security problems, he will be punished.
>
>---Dan

Hum...this is pretty draconian... (*smile*) I hope your kids don't
ever break any rules.
I also want to point out that if someone is leaping to a "free"
source for "security critical" software and simply running it, without
understanding what it is doing, or what its limitations might be,
perhaps part of the blame goes to the USER too...and so THEY should
share in the "punishment" too.


Julian T. J. Midgley

unread,
Apr 22, 2001, 6:20:25 PM4/22/01
to
In article <2001Apr2218...@cr.yp.to>,

Will he indeed - how exactly? If he is competent and careful and the
number of errors (security related and otherwise) in his code is
small, if he is timely in fixing them when they are discovered and
reported to him, and if the code he has written is of significant use
to people, then he will often generate a large base of contented
users. If some of those users wish to use his software in
environments where security is absolutely critical, then, given that
his code is open source, they are able to devote their resources to
conducting a thorough audit of the code to tighten its security still
further.

That's what actually happens in the real world, DJB. In your fantasy
land, people might immediately cease to use any piece of software
released with a security hole in it; back where the rest of us live,
people would rather have the software, and have the security hole
fixed.

If we all behaved towards programmers as you would have us behave, we
would not have a large volume of absolutely secure software (as you
seem to think we would), we would have instead a pitifully small
number of programmers and next to no software. There certainly
wouldn't be a free software movement (if you're not getting paid and
will immediately be ostracized for displaying any signs of the human
tendency to err, then why bother to go to the effort of writing
software at all).

Julian Midgley

Ken Hagan

unread,
Apr 23, 2001, 5:44:15 AM4/23/01
to
"Leonard R. Budney" <lbudney...@nb.net> wrote...
> "Ken Hagan" <K.H...@thermoteknix.co.uk> writes:
>
> > 1 Script kiddies can automate the process of "Here is an
> > exploit, search the internet for a vulnerable computer",
>
> They have to harvest the exploit, build it, and plug it into their
> scanner. The effort involved ~= the effort of plugging holes.

Well, one of them has to. The others can just copy it.

> > 2 An exploit publicised at 6pm, or on Saturday, probably
> > won't be patched until the following morning, or Monday.
>
> Do you know of any large installation where IT is closed all weekend?

Large as in multi-national? No. But I know of a lot of small
businesses who aren't offering on-line trading, but are using
a web site to publicise themselves and provide 24/7 support
to their (international) users. These people do tend to be
home in the evenings and at weekends. Sorry, but most businesses
can't afford the staffing that you seem to need. Are you saying
they should stay off the Internet?

> > I think the sheer numbers of script kiddies makes them a risk,
> > even when the windows of opportunity are small.
>
> Plugging a hole against one attack means plugging it against
> arbitrarily many repetitions of that attack.

Indeed. My point was that for many potential victims there is a
period of several hours when there are thousands of script kiddies
awake and the victim is asleep.


Mike O'Connor

unread,
Apr 23, 2001, 7:03:00 AM4/23/01
to
In article <JOIE6.6055$Cu.11...@news2-win.server.ntlworld.com>,

Julian T. J. Midgley <jt...@xenoclast.org> wrote:
:In article <2001Apr2218...@cr.yp.to>,

:D. J. Bernstein <d...@cr.yp.to> wrote:
:>Julian T. J. Midgley <jt...@xenoclast.org> wrote:
:>> But a man writing software at home which he gives away without
:>> expecting payment does not have the resources to spend as much time on
:>> error checking and review
:>
:>Then he shouldn't be writing security-critical software. If he does so
:>anyway, and he creates security problems, he will be punished.
:
:Will he indeed - how exactly? If he is competent and careful and the
:number of errors (security related and otherwise) in his code is
:small, if he is timely in fixing them when they are discovered and
:reported to him, and if the code he has written is of significant use
:to people, then he will often generate a large base of contented
:users. If some of those users wish to use his software in

Full disclosure events are usually against stuff that already has
a largish user base of some sort.

:environments where security is absolutely critical, then, given that
:his code is open source, they are able to devote their resources to
:conducting a thorough audit of the code to tighten its security still
:further.
:
:That's what actually happens in the real world, DJB. In your fantasy
:land, people might immediately cease to use any piece of software
:released with a security hole in it; back where the rest of us live,
:people would rather have the software, and have the security hole
:fixed.

The interesting thing is that society in general does not agree
that the releasing of persistently-vulnerable software ought to
penalize the programmers. Consider stuff like Outlook and IE, from
Microsoft. The market share grows and grows, despite the numerous,
very public, fully disclosed security gaffes. But, if you ask your
Joe-Blow-computer-user who was more the asshole, between:

a) a programmer writing a virus or other exploit to demonstrate
an Outlook hole, and
b) a programmer or other employee of Microsoft, involved with the
Outlook product

I bet they'd pick 'a', probably invoking the derogatory form of
'hacker'. They'll pick 'a' despite other forms of M$ sliminess.

Based on stuff like the above, I've concluded that in general, people
don't associate software security with software functionality (except
in evaluating a security product, or the security piece of a product).
Heck, there's still the notion that security is a 'piece'...

IM(very)HO this is a function of how security works in a society.
Much like on a typical PeeCee, windows in a house are an obvious
security hole. Laws say "don't break windows". Not perfect, but
accepted in society and pretty effective on the whole. When the
rarity happens and people break someone's window, let themselves
in, and steal stuff, the house is not generally chided for being
insecure. In your downtrodden urban areas (I live near Detroit),
some might say "put bars on those windows". That still doesn't
prevent breaking the glass, injecting some sort of toxin in the
air, then breaking in at leisure after everyone is unconscious or
dead. The "real world" could easily be as hostile an environment
as Internet software. The reason most folks don't dig themselves
into a mountainous bunker with a lifetime of supplies and ammo
involves society and laws and such being part of one's daily life.

This looks to be the framework that people are generally exposed
to when they think security on the Internet. Convince folks that
this framework doesn't make sense for Internet software, or it'll
be extended to Internet software just like everything else.

:If we all behaved towards programmers as you would have us behave, we
:would not have a large volume of absolutely secure software (as you
:seem to think we would), we would have instead a pitifully small
:number of programmers and next to no software. There certainly
:wouldn't be a free software movement (if you're not getting paid and
:will immediately be ostracized for displaying any signs of the human
:tendency to err, then why bother to go to the effort of writing
:software at all).

That's one option. The societal approach above is another. As I
have stated before, it's unclear just what lesson is supposed to
be taught by punishment.

--
Michael J. O'Connor | WWW: http://dojo.mi.org/~mjo/ | Email: m...@dojo.mi.org
Royal Oak, Michigan | (has my PGP & Geek Code info) | Phone: +1 248-427-4481

Alun Jones

unread,
Apr 23, 2001, 11:02:28 AM4/23/01
to
In article <2001Apr2106...@cr.yp.to>, d...@cr.yp.to (D. J.
Bernstein) wrote:
> Alun Jones <al...@texis.com> wrote:
> > No, bug reporting should not be about humiliation or punishment. It
> > should be about ensuring that the code base becomes more secure.
>
> Fixing all the existing security problems is not enough.
>
> Programmers have to stop creating new security problems. _You_ have to
> stop creating new security problems.

Well, duh. Of course it would be wonderful if we could all stop writing
code with bugs in. If you'd like to tell me how I can ensure that I never
make a typing mistake, I'd be grateful to hear it. If you'd like to tell me
how I can make absolutely certain that I'll never mis-analyse output from QA
tests, then please bestow your marvelous wisdom upon us all.

> I want you to be under so much pressure that you will invest the time
> necessary to avoid security problems. If you can't handle the pressure,
> you shouldn't be writing network software.

Sod that for a game of soldiers.

It may sound trite, but I want to provide security for my users because I
care about them and their systems. I think it says something rather sad
about you that you are motivated not by a desire to do good, but out of a
fear that someone may tell people that you're doing bad.

> > Full disclosure doesn't cause anyone to do anything proactively.
>
> That simply isn't true. Fear is a wonderful motivator.

For you, perhaps, Dan. Others of us actually care about our work, and take
pride in it. Maybe you require daily beatings in order to improve your
morale, but please don't assign such sensibilities to the rest of us.

And, once again, you miss my point. Fear makes you _re_act, this does not
mean that you are doing anything _pro_actively.

Alun.
~~~~

[Note that answers to questions in newsgroups are not generally
invitations to contact me personally for help in the future.]
--
Texas Imperial Software | Try WFTPD, the Windows FTP Server. Find us at
1602 Harvest Moon Place | http://www.wftpd.com or email al...@texis.com
Cedar Park TX 78613-1419 | VISA/MC accepted. NT-based sites, be sure to
Fax/Voice +1(512)378-3246 | read details of WFTPD Pro for NT.

Alun Jones

unread,
Apr 23, 2001, 11:02:25 AM4/23/01
to
In article <m3snj2z...@peregrine.swoop.local>, lbudney...@nb.net
(Leonard R. Budney) wrote:
> al...@texis.com (Alun Jones) writes:
> > (Leonard R. Budney) wrote:
> >> It's not their job to tell you. They do, generally, out of
> >> self-interest. Which is great--but why are you whining as if
> >> it's their moral duty to support your software? Ultimately, that
> >> responsibility rests on your shoulders and nobody else's.
> >
> > You view reporting bugs to the vendor as "support" for that software? No
> > wonder I seem to misunderstand your point of view.
>
> What's so hard to understand? Nobody owes you anything. They can use your
> software or not. If they don't like it, they can tell you or not. If they
> decide that it's simply crap, they can throw it away, with or without
> telling you. They may *choose* to tell you--but not because they love you.
> It's because they hope you'll give them something they can actually use.

Len, I'm intrigued as to why you think that these same people that don't owe
the vendor anything should feel somehow that they owe Bugtraq and other
partial disclosure lists anything.

Alun Jones

unread,
Apr 23, 2001, 11:02:31 AM4/23/01
to
In article <m3hezh8...@peregrine.swoop.local>, lbudney...@nb.net
(Leonard R. Budney) wrote:
> jt...@xenoclast.org (Julian T. J. Midgley) writes:
> >
> > Splendid. You will now publish here a list of all programmers known
> > never to have a written a program with a bug[1] in it.
>
> Not all bugs are security holes. So absolute perfection, whatever that
> is, is not the same goal as complete security. (Though they have much
> in common, I agree.)

If you acknowledge that programs have bugs that the programmer is not aware
of, then you must also acknowledge that programs have security bugs that the
programmer is not aware of. After all, if there might be a rat in your
basement, can you assure me that it's not a brown rat?

> > We will then only use software written by those programmers.
>

> In practice, this is not true. Dan wrote qmail, djbdns, and publicfile,
> all of which are known *never* to have a security hole. However, their
> penetration is surprisingly low, considering that fact. (Pleasantly high,
> but surprisingly low.)

You are willing to state that none of Dan's programs have ever had any
security flaws in them whatever?

Wow, even Dan hasn't (yet) gone that far.
