
Security bugs and disclosure


Mike Shaver

Mar 23, 2000
There's been some discussion of what Mozilla should do with security
bugs, and their fixes.

The following text comes from bug 28387:

------- Additional Comments From sha...@mozilla.org 2000-03-23 09:01 -------

Why is this bug Netscape-confidential?

------- Additional Comments From nor...@netscape.com 2000-03-23 09:47 -------

I've made this bug Netscape confidential as I have all open security
exploits against the browser.

I hope you'll agree that having open exploits visible to all comers is
not the right policy as more people begin using mozilla. I'll also
agree that Netscape-only isn't quite right--ideally it would be a set
of trusted people in and out of Netscape that would have visibility
access to open security exploits. It's a pain for instance that Georgi
Guninski, who works from Bulgaria and thus doesn't have Netscape-only
access, can't even see the bugs he's found.

Once exploits have been fixed and the fixes have had time to propagate
to enough users, then the bugs should be open for all to view. I had
been switching bugs over from Netscape Confidential to open once they
were fixed, but now even after I fix bugs in the tip I've been leaving
them Netscape Confidential since the bugs are present in the
soon-to-be-released beta.

We could push mozilla.org to create a new category of visibility for
security bugs and let them maintain a list of trusted viewers. I
hadn't pushed for this because I wasn't sure the extra effort was
worth the gain.

------- Additional Comments From sha...@mozilla.org 2000-03-23 10:01 -------

I think it's inappropriate for Netscape to be given special privileges
WRT security bugs. What about other users of the code, who might also
be shipping product or beta that could contain these bugs? They can't
even _know_ about this bug, or understand the motivation for code that
you're checking into the Mozilla tree -- which they might or might not
want in their product -- with the current setting. Why should the
Mozilla community trust Netscape if Netscape doesn't trust the
community? (And why is it OK for _anyone_ at Netscape -- plus myself
and perhaps a handful of other ex-Netscapers -- to see this? Why do
they deserve special trust, just because of their email address?)

I think that Netscape has ample opportunity to decide whether to fix
these bugs in their beta, given that there are known fixes in hand. If
that is too much risk for the Netscape beta managers to take, then that
is their decision, and it shouldn't impact the rest of the Mozilla
community.

I personally believe that as soon as there is a fix, workaround or
piece of user advice (``don't add bookmarks for javascript: URLs from
untrusted sites'') that's sufficient to prevent or limit the danger, we
shouldn't be restricting it at all. (I think that restriction of
vulnerability information should only occur in very dangerous, very
specific cases, and it should probably involve discussion with
st...@mozilla.org: this bug wouldn't be such a case, to my mind, but
I'd have been happy to debate it with others.)

I also believe that people who are reporting bugs in Mozilla should be
reporting them through Bugzilla. If Georgi wants to report
Netscape-beta bugs to Netscape, that's his choice, but I think he
should be working in Bugzilla like the rest of our bug reporters.

------- Additional Comments From nor...@netscape.com 2000-03-23 11:06 -------

Shaver: Yes, start a discussion on n.p.m.security. I agree that
security bugs should have an audience wider than Netscape, I just think
that there needs to be some secrecy surrounding known exploits in
software that is in use. That secrecy benefits mozilla users and
mozilla contributors and isn't a Netscape-specific benefit.

-----------------------------------------------------------------

I think we agree on the following points:
- keep security bugs relatively quiet (Security Group Only) until
a fix is found, tested and committed
- Security Group needs to be different than ``Netscape only''

What's left to decide is, at least:
- how is the Security Group populated?

- what do we do once a fix is found?

- does our policy differ when we get to a ``production release'',
versus our current not-yet-beta state?

I'll post with my thoughts on these topics in a reply, hopefully today.

Mike

--
256708.87 208576.27

Kevin Hecht

Mar 23, 2000

Does this procedure change, and if so how, if the bug is first
discovered by someone posting an exploit to a web site, to Bugtraq, or
alerting CNET and other media, rather than going through Netscape
channels? If a bug becomes known to Netscape at the same time it does to
the whole world, as it often does, is there any sound reason why you would
not want external contributors to help you fix it so long as it involves
open source code? In some cases, the same people who find it may be the
ones who help you fix it or provide a patch.

There has been a history at Netscape of failing to respond to
security-related bug reports from the outside (in fact, there are
complaints about this on Bugtraq even in the past few days with regards
to an unacknowledged exploit in Enterprise Server - see
http://www.securityfocus.com/templates/archive.pike?list=1&date=2000-03-15&msg=38D2173D...@relaygroup.com).
My concern is that there seems to be no obvious way yet to advise
Netscape in a confidential manner about Mozilla security bugs (at least
on the mozilla.org site) when one is found, and given the spotty history
of acknowledging past problems, this shouldn't be ignored.

In the case of security bugs where there is a known workaround but no
permanent fix yet, I strongly agree that disclosure of such a workaround
(and public bug acknowledgement) should be made available immediately.

> - Security Group needs to be different than ``Netscape only''
>
> What's left to decide is, at least:
> - how is the Security Group populated?
>
> - what do we do once a fix is found?
>
> - does our policy differ when we get to a ``production release'',
> versus our current not-yet-beta state?
>
> I'll post with my thoughts on these topics in a reply, hopefully today.
>
> Mike

I'll probably want to comment on those, but I'll wait to see what others
have to say first ...
--
Kevin Hecht, Netscape Champion
College of Commerce and Finance, Villanova University
khec...@idt.net Kevin...@villanova.edu
http://idt.net/~khecht19/

Mike Shaver

Mar 23, 2000
Kevin Hecht wrote:
> Does this procedure change, and if so how, if the bug is first
> discovered by someone posting an exploit to a web site, to Bugtraq, or
> alerting CNET and other media, rather than going through Netscape
> channels?

There should be no ``Netscape channels'' for Mozilla security bugs. If
someone reports a bug to Netscape about their branded derivative, then I
presume that Netscape would report that vulnerability to Mozilla if it
was in common code. People should be reporting Mozilla bugs to the
Mozilla community (or some designated subset thereof), not to one
particular contributor/consumer.

(They would always have the right not to disclose those things, I guess,
but that would be absurdly bad community spirit. Let's not even go
there.)

If the exploit is public, I don't think we need to go to any lengths to
protect our own discussions of it, especially because more open
discussion might help us get to a better fix, sooner.

More in my self-reply, when I finish with it.

Mike

--
264567.81 216020.61

Dan Mosedale

Mar 24, 2000
Mike Shaver wrote:
>
> I think we agree on the following points:
> - keep security bugs relatively quiet (Security Group Only) until
> a fix is found, tested and committed
> - Security Group needs to be different than ``Netscape only''
>
> What's left to decide is, at least:
> - how is the Security Group populated?

Elsewhere, someone suggested that the security group could be the same
as the group of people who can confirm bugs. This doesn't seem too
unreasonable.

> - what do we do once a fix is found?

Open up permissions on the bug to the general public after the next
milestone release containing the fix?

> - does our policy differ when we get to a ``production release'',
> versus our current not-yet-beta state?

Dan

Daniel Veditz

Mar 24, 2000
Mike Shaver wrote:

>
> Kevin Hecht wrote:
> > Does this procedure change, and if so how, if the bug is first
> > discovered by someone posting an exploit to a web site, to Bugtraq, or
> > alerting CNET and other media, rather than going through Netscape
> > channels?
>
> There should be no ``Netscape channels'' for Mozilla security bugs. If
> someone reports a bug to Netscape about their branded derivative, then I
> presume that Netscape would report that vulnerability to Mozilla if it
> was in common code. People should be reporting Mozilla bugs to the
> Mozilla community (or some designated subset thereof), not to one
> particular contributor/consumer.
>
> (They would always have the right not to disclose those things, I guess,
> but that would be absurdly bad community spirit. Let's not even go
> there.)

No, let's. I can nearly guarantee (based on past behavior) that unless
mozilla designates a small trusted group of security-concerned people
Netscape will never divulge information about a non-public exploit until 1)
they have a fix *AND* 2) a release or patch containing the fix is
available. Microsoft does the same thing. It is simply irresponsible to
expose your customers to UNNECESSARY risks from script kiddies who would
run with that information.

> If the exploit is public, I don't think we need to go to any lengths to
> protect our own discussions of it, especially because opener discussion
> might help us get to a better fix, sooner.

Define "public". Merely being found by a mozilla community member does not
count, as most responsible security-hole finders want to give the affected
developers a chance to respond and/or fix before exposing it to the world.
Such people usually bring in the press only as leverage to move
recalcitrant vendors because they, too, understand the goodness of trying
to get a fix before letting the cat out of the bag.

If there isn't a way to report these security holes privately to
mozilla.org I'm betting many of these folks will report them quietly via
e-mail to netscape first. And as mentioned Netscape will probably try to
keep the info under their hat until there's a fix.

This is kind of a bummer because Netscape has limited resources devoted to
fixing security holes. And one of those people was just promoted to
managing the entire javascript team, no doubt reducing the amount of time
he can devote to security work. Security experts working alone or for other
contributing companies are not able to help out nor protect their customers
when they don't know about these exploits.

What can mozilla.org do to assuage the fears of contributing companies like
Netscape that will encourage them to share security information more
broadly, but in a controlled way?

What can mozilla.org do to assuage the fears of responsible security-hole
finders that will encourage them to share the knowledge of exploits with a
group of mozilla developers rather than just with the main vendor (at this
point), Netscape?

Key words: assuage fears and encourage. Netscape has been burned too many
times on security holes; you aren't going to dictate to them. If they have
to I have no doubt they will pull security bugs out of bugzilla and set up
a parallel system if bugzilla proves too leaky. And that would be a bummer
and inconvenience for everyone.

-Dan Veditz

Oleg Rekutin

Mar 26, 2000
IMHO, all security bugs should be fully disclosed and open. In addition,
a post to n.p.m.security (and Bugtraq or something?) should be made
every time a security bug is found, to alert the developers outside.

By opening the bugs like that, yes, you do permit people to exploit the
bugs before the fix is found, and even after that to exploit users of
older versions of Mozilla (because they haven't updated to a fixed
version yet).

That is the only disadvantage of opening up security bugs. Now, is that
disadvantage really that bad? I don't think so. The average user Joe
usually doesn't randomly surf Geocities, encountering malicious web
pages. Such a user usually browses major web sites that are quite
unlikely to contain malicious code. Now, an advanced user or a heavy
surfer usually understands that there is some risk in doing
"promiscious" or "unsafe" surfing (ehh, hard to define, but let's assume
that "unsafe" surfing is surfing on supsicious web pages potentially
containing malicious code).

Furthermore, in the past, when a security bug was found in Netscape Nav,
it was usually left open until the next version came out. Which was not
a day after. It wasn't even a week after. It was just that, a new
version. And because Netscape kept it all quiet, people that were
looking to exploit bugs usually cared enough to monitor security mailing
lists and newsgroups for discoveries of Netscape Nav bugs, meanwhile the
unalerted users continued on browsing, without any care. And perhaps,
due to the slow release process, it was OK for Netscape to keep security
bugs this quiet. But not with Mozilla.

We are stepping into open source here. And that means open bugs,
especially security bugs (although I am fine with closed
Netscape-branded-version-specific bugs). One of the constantly touted
benefits of open source is really quick bug-fix time (and it's not a
myth). Security patches or temporary workarounds for applications are
usually available within hours, a few days at the latest. Remember the
OOB DoS attack problem a few years ago? Fixes for Linux and FreeBSD were
issued in a few hours, while Microsoft took about three to four days to
release a fix of the problem.

Now for the benefits of having publicly open security bugs. Once a
security problem is found and is publicly announced, people who really
care and otherwise would not have contributed to the project (like an
experienced programmer who normally doesn't have the time) would
actually help out with the bug, quickly providing a fix or a
workaround. It's a huge benefit. Netscape itself benefits from this, as the
bugs in its browser are plugged lightning fast now, using developers
outside of Netscape's own developer force, thus freeing them to do other
things. Netscape also gains a good sales point, stemming from the fact
that Microsoft takes days to release patches for IE.

Security through obscurity doesn't work well. Obscurity means it costs
some effort to "find". This effort is a barrier, making software more
difficult to exploit. However, this effort is a much greater barrier for
fixing things.

So open up those bugs! You *will* benefit!


My humble opinion,

Oleg.

Mike Shaver

Mar 27, 2000
Daniel Veditz wrote:
> No, let's. I can nearly guarantee (based on past behavior) that unless
> mozilla designates a small trusted group of security-concerned people
> Netscape will never divulge information about a non-public exploit until 1)
> they have a fix *AND* 2) a release or patch containing the fix is
> available.

I really hope that Netscape realizes that they don't have a monopoly on
security reports or fixes. Georgi used to, IIRC, post to bugtraq before
fixed versions were available, and today we have reports of privacy
issues related to cache control
(http://www.linuxcare.com.au/mbp/meantime/) -- will Netscape even bother
to ship an update to address them?

> Microsoft does the same thing.

Sure, but Microsoft is the only distribution point for IE. We're in a
different world.

> It is simply irresponsible to
> expose your customers to UNNECESSARY risks from script kiddies who would
> run with that information.

So it would be ``simply irresponsible'', then, to ship a beta with known
security holes, rather than respinning to take in-the-tree-and-tested
fixes that would protect said customers? And yet I will defend to the
death Netscape's right to decide what's in their software.

What would Netscape say if mozilla.org told them ``don't release your
6.09 update with mention of the fixed holes, because Really Slow Vendor
Inc. hasn't published their fix yet''? Surely they would think it
unreasonable to wait for some slower vendor to take an unbounded amount
of time in this area.

> Define "public". Merely being found by a mozilla community member does not
> count, as most responsible security-hole finders want to give the affected
> developers a chance to respond and/or fix before exposing it to the world.

``The affected developers''. Does the fact that I choose to build my
own Mozilla -- because I'm on a platform that Netscape doesn't care
about, or because I want MathML -- make me any less entitled to
knowledge about vulnerabilities in the software that I'm running?

WRT public: bugtraq is public. A web page is public. If you can get
the information without being a member of
mozilla-securit...@mozilla.org or discovering it
independently, it's public.

> If there isn't a way to report these security holes privately to
> mozilla.org I'm betting many of these folks will report them quietly via
> e-mail to netscape first. And as mentioned Netscape will probably try to
> keep the info under their hat until there's a fix.

Hey, check it out! I'm proposing just such a system!

I'm suggesting that we have a set of people who handle incoming security
reports (The Security Group), and that this set is smaller than
all-the-people-who-can-reach-bugzilla.

Do you have suggestions as to what the criteria should be for membership
in the Security Group? (Criteria that are likely to be rejected
out-of-hand include ``Netscape employee'', ``paid to work on Mozilla'',
``signed a Security Group Membership form'', I think.)

> Security experts working alone or for other
> contributing companies are not able to help out nor protect their customers
> when they don't know about these exploits.

Well, yes, that's sort of my point. And I don't think that Netscape
should control that information, because, as Kevin points out, their
record on acknowledging and repairing security holes doesn't indicate
the kind of high-speed response that the open source community has come
to expect.

In fact, responsiveness to security and privacy concerns may well be a
market-differentiator for Mozilla derivatives, leading to a world where
people get most of their browser from www.wecareaboutyoursecurity.com
and just grab the Netscape AIM/PSM bits from the Netscape-branded
packages (if they trust them at all).

> What can mozilla.org do to assuage the fears of responsible security-hole
> finders that will encourage them to share the knowledge of exploits with a
> group of mozilla developers rather than just with the main vendor (at this
> point), Netscape?

I don't know how to answer that question, until someone clearly
articulates where those fears come from, preferably as ultimate rather
than proximate causes.

I believe that people report security bugs to Netscape because only
Netscape can effect a fix for its users; that will no longer be the case
for the vast majority of bugs in Netscape 6, where J. Random could
respin a layout.xpi with the appropriate fix and get new bits into the
hands of any interested user long before Netscape even admits that
there's a problem.

(Currently, of course, the main -- nay, only -- vendor of Mozilla is
Mozilla itself, so there's obviously some mechanism for motivating
people to report to another entity. Otherwise, Netscape wouldn't have
these reports, right?)

Mike

--
264485.57 162416.43

Frank Hecker

Mar 27, 2000
Mike Shaver wrote:
> I'm suggesting that we have a set of people who handle incoming security
> reports (The Security Group), and that this set is smaller than
> all-the-people-who-can-reach-bugzilla.

I agree with this proposal.

> Do you have suggestions as to what the criteria should be for membership
> in the Security Group? (Criteria that are likely to be rejected
> out-of-hand include ``Netscape employee'', ``paid to work on Mozilla'',
> ``signed a Security Group Membership form'', I think.)

Well, here are some suggestions:

1. Start with an initial "Mozilla security group" composed of
contributors vouched for by the appropriate module owner(s) most
concerned with and knowledgeable about Mozilla security issues.

2. Expand that group over time based on recommendations from those
already in the group, whether on a consensus basis or through some sort
of semi-formal voting by people in the group (e.g., require a two-thirds
or three-fourths supermajority for approval).

I wouldn't immediately dismiss the idea of having new people sign some
sort of form in addition to whatever else they're required to do (just
as people must do prior to getting CVS check-in privileges). However the
fact remains that the only way such a group is going to work in practice
is if the people in it are trusted on a personal level by the other
people in the group, and if the group as a whole is trusted by Mozilla
vendors (including Netscape).

> In fact, responsiveness to security and privacy concerns may well be a
> market-differentiator for Mozilla derivatives, leading to a world where
> people get most of their browser from www.wecareaboutyoursecurity.com
> and just grab the Netscape AIM/PSM bits from the Netscape-branded
> packages (if they trust them at all).

I don't know about this particular scenario, but it's certainly true
that some businesses and individuals will want rapid responses to
Mozilla security issues and will be willing to pay for that in one way
or another. I think we should definitely encourage businesses that want
to provide such services, whether they do it independently or under
contract to Mozilla vendors.

> > What can mozilla.org do to assuage the fears of responsible security-hole
> > finders that will encourage them to share the knowledge of exploits with a
> > group of mozilla developers rather than just with the main vendor (at this
> > point), Netscape?
>
> I don't know how to answer that question, until someone clearly
> articulates where those fears come from, preferably as ultimate rather
> than proximate causes.

I studied Latin in high school, but you lost me a little bit on this
sentence :-)

In any case, I think that people are worried about scenarios where some
random person "out there" has a) access to Mozilla security bug reports
and b) a willingness to exploit them for malicious purposes. Even if the
group of people able to view security reports were restricted in some
way, some people would still worry about scenarios where a malicious
person managed to get into that group in some way.

That's why I think it comes back to trusting people on a personal level.
I think the level of trust should be comparable to that required of a
person given check-in privileges, because the levels of risk are at
least roughly comparable. (A malicious person checking in code can
actually do just as much or even more damage by inserting trojans, but
there's at least the potential for it to be caught through review.)
That's why I favor a model based upon some reasonable level of personal
trust being established, as opposed to either a semi-automated granting
of access or a mere signing of forms in the absence of any personal
relationships.

Frank
--
Frank Hecker work: http://www.collab.net/
fr...@collab.net home: http://www.hecker.org/

Daniel Veditz

Mar 27, 2000
Mike Shaver wrote:
>
> I really hope that Netscape realizes that they don't have a monopoly on
> security reports or fixes. Georgi used to, IIRC, post to bugtraq before
> fixed versions were available,

Yes, bugfinders usually posted to bugtraq to pressure Netscape and/or
Microsoft, but they also usually gave the company a week's or a few
days' head start at finding a fix before going public.

Nowhere did I say it was bad to use public disclosure as a lever to budge
recalcitrant vendors. In fact, sadly, it is probably more the threat of bad
press that fuels the urgency of security patches than concern with
customers' safety. Giving notice that in X days the hole goes public would
seem to be a responsible blend of motivating the vendor, warning the
public, and minimizing the amount of time the public is exposed.

> and today we have reports of [4.x bug] -- will Netscape even bother
> to ship an update to address them?

I'm not posting here as a Netscape spokesperson, ask them what they'll do.

> > If there isn't a way to report these security holes privately to
> > mozilla.org I'm betting many of these folks will report them quietly via
> > e-mail to netscape first. And as mentioned Netscape will probably try to
> > keep the info under their hat until there's a fix.
>
> Hey, check it out! I'm proposing just such a system!
>

> I'm suggesting that we have a set of people who handle incoming security
> reports (The Security Group), and that this set is smaller than
> all-the-people-who-can-reach-bugzilla.

Sounds great, I guess we are in violent agreement then. I thought you were
proposing that all security bugs should be 100% public bugs.

There is a case to be made for full openness on an open source
developer-oriented project. But I fear it would drive Netscape to hoard
information (they apparently felt badly burned by the way Sun handled Java
security hole knowledge, and some of those folks are still around). Until
Netscape is no longer the 800-lb gorilla on the project that is something
to worry about.

> Do you have suggestions as to what the criteria should be for membership
> in the Security Group? (Criteria that are likely to be rejected
> out-of-hand include ``Netscape employee'', ``paid to work on Mozilla'',
> ``signed a Security Group Membership form'', I think.)

I don't know why you reject your last item, since that's pretty close
to how CVS access is handled. I was imagining responsible security
developers could nominate others for inclusion in the group. That
leaves the problem of how to seed the initial group and it might not
scale fast enough.

The major problem with any scheme: there is no way to judge newcomers
who could potentially be brilliant contributors or malicious
ne'er-do-wells. Any amount of process that could weed out the latter
could easily discourage the former.

One drawback to the current bugzilla implementation is that bug
reporters can be blocked from their own bugs. That seems silly to me and
should be fixed. If someone feels the need to add information they don't
want the reporter to see they should open a new bug and make the old bug
depend on it. In an "open" bug system a reporter should always have the
right to see what's going on with what they've reported.

> (Currently, of course, the main -- nay, only -- vendor of Mozilla is
> Mozilla itself, so there's obviously some mechanism for motivating
> people to report to another entity. Otherwise, Netscape wouldn't have
> these reports, right?)

I think the current mozilla security bugs are in Netscape hands because
most of them have been found by testers hired by Netscape, and Netscape
prefers to handle them that way given the lack of other options. They
*could* have been filed in the internal "bugsplat" database, so I'd say
that's at least a hint that if there were a middle ground between Netscape
private and completely public, it would have been used.

Plus Norris said that explicitly in the bug that started this thread :-)

-Dan Veditz

Matthew Copeland

Mar 28, 2000
As a systems administrator and security auditor, I would be kind of
irritated if netscape went to hoarding security-related bugs, for
multiple reasons.

1. If a bug exists and one person has found it, so can another. This
means that even though you may keep a bug private, someone else could
find it and start to exploit it. By keeping the bug private, you are
placing me at risk, because I don't have knowledge of the bug. That's
irritating.

2. You have gone open source. You are going to lose the advantages of
open source if you start hiding bugs from everyone. Bugs get fixed
faster the more knowledge there is about them, because it provides
motivation to developers.

3. Netscape-confidential sounds to me like the classic case of security
through obscurity. It doesn't work. It's bad. We all know it. Don't do
it. Every good security analyst and crypto expert will tell you that
security through obscurity is no security at all. Someone else will
find the bug. Admittedly, it's nice to be able to go to your boss and
say we have found a security bug, but don't worry, we have already
fixed it. The problem is that you have now placed your customers in a
higher state of risk, because the customer has no knowledge of the bug,
and thus can't provide advisories internally, and the bug probably
won't get publicized enough for them to know they need to upgrade.

I respect netscape, and I think that mozilla/navigator/communicator are
great, but I think obscuring security-related bugs is a mistake.

Matthew M. Copeland
matt...@designlab.ukans.edu

> I think we agree on the following points:
> - keep security bugs relatively quiet (Security Group Only) until
> a fix is found, tested and committed
> - Security Group needs to be different than ``Netscape only''
>
> What's left to decide is, at least:
> - how is the Security Group populated?
>

> - what do we do once a fix is found?
>

> - does our policy differ when we get to a ``production release'',
> versus our current not-yet-beta state?
>

> I'll post with my thoughts on these topics in a reply, hopefully today.
>
> Mike
>

> --
> 256708.87 208576.27

Zac Spitzer

Mar 28, 2000
http://slashdot.org/article.pl?sid=00/03/28/0054248&mode=flat

--
"This behavior is by design." microsoft on ASP errors
Try a mozilla milestone today! - http://www.mozilla.org

Gervase Markham

Mar 28, 2000
Frank, Dan, Mike: you have exactly the right attitude here. For goodness
sake don't let rabid Slashdot weenies (or even seemingly-more-sensible
arguments) convince you otherwise. :-)

Gerv

R. Saravanan

Mar 28, 2000
Here's a suggestion:

Let us assume that there is a bugzilla category called Mozilla-confidential
which restricts visibility of the bug to the Mozilla Security group.

Let the reporter of each security bug in bugzilla exercise the right to
make the bug Mozilla-confidential. If the reporter happens to be a "free
spirit", he/she may choose not to do this and make the bug public. But then
then he/she already has the ability to that in a different forum; so it
doesn't matter.

If the bug reporter happens to be a corporate employee and happens to find
the bug on corporate time, then the corporation may instruct the employee
to always make security bugs Mozilla-confidential. So be it.

This way each individual bug reporter, and not some netscape or mozilla
employee, gets to decide whether he/she is happy with the whole
Mozilla-confidential category.

Saravanan

PS As suggested earlier, reporters should always be able to view their
own bugs.

Frank Hecker

Mar 28, 2000

Well, I should say here that some of the arguments presented on Slashdot
are in fact not bogus at all, and deserve a serious response. I'll try
to give my personal response to those arguments below, and illustrate
what I think the real debatable issues are and what possible approaches
we have to addressing them. (I apologize in advance for the length of
this as well as meandering somewhat in the course of making my points.)

First, to clear away some of the misconceptions that found their way
onto Slashdot:

To my mind the true debate here is not about whether AOL/Netscape should
keep Mozilla-related security bugs to itself. Mozilla is an open source
project, and Bugzilla is a resource for that project as a whole, not
just for AOL/Netscape. If the Bugzilla database is going to contain
information that is not available to everyone in the Mozilla project
then the justifications for imposing such restrictions have to be pretty
compelling.

Security-related bugs are a case where I believe you can't justify
restricting Mozilla bug information to a single vendor (whether
AOL/Netscape or whoever). For example, another company basing their own
products on Mozilla has just as much claim as AOL/Netscape to have
access to Mozilla security bug information in order to protect and
provide for their own customers. A similar argument would apply in the
case of developers creating their own version of Mozilla for
distribution -- for example, if the MathML developers were to create and
distribute a custom version of Mozilla to the mathematical community.

So I don't think the real argument is about restricting access to
security bugs only to AOL/Netscape. I think the debate is rather about
a) whether security bugs in Bugzilla should be fully public or
restricted to some smaller group; and b) if restricted to a smaller
group, how that group should be chosen. (Or to put it another and I
think a better way, how could any particular individual get themselves
admitted to that smaller group?)

I also think some people are misinterpreting Bruce Schneier's remarks at

http://www.counterpane.com/crypto-gram-0002.html#PublicizingVulnerabilities

as being in favor of full disclosure under any circumstances. Schneier
writes "In general, I am in favor of the full-disclosure movement", but
then later goes on to write "I believe in giving the vendor advance
notice" to allow them some reasonable amount of time to fix the problem.
(Schneier's idea of "reasonable" appears to be more than a week but at
most a month.) So IMO Schneier can't be represented as advocating for an
absolute requirement that vendors fully and publicly disclose security
bugs as soon as they receive reports of them; otherwise what would be
the point of giving the vendor advance notice?

Schneier is in effect saying that the vendor of the software is in a
special position relative to anyone else, and is justified to some
degree in concealing information about security-related bugs until they
can be fixed. It's not a justification that allows concealing security
bugs forever, but it does allow concealing them for some reasonable
period.

Now, the problem with applying Schneier's argument in this case (and
here we're getting into what I think are the real issues) is that in the
Mozilla project we're not dealing with proprietary software supplied by
an single identifiable vendor; we're talking about an open source
project where in effect anyone and everyone can potentially be a Mozilla
"vendor": In the proprietary world vendors are special because a) only
they can fix bugs and b) only they are (ultimately) responsible for
supporting users of their software. In the open source world anyone can
potentially fix bugs, and anyone can distribute versions of the software
to end users. So either there is no "vendor" in Schneier's sense, or
anyone can be a "vendor"; in either case it's hard to see how Schneier's
ideas of "giving the vendor advance notice" would apply.

The discussion in the previous paragraph leads directly to two different
arguments for full disclosure of security bugs, arguments that IMO are
reasonable and deserve to be addressed.

The first argument goes somewhat like the following: given that a)
anyone can (potentially) fix security bugs in an open source project
like Mozilla, and b) we collectively have an obligation to maximize the
chances that security bugs will be fixed, therefore we have an
obligation to immediately and fully disclose information on security
bugs to as many people as possible, because only in that way can we
maximize the probability that the problems will be fixed.

I don't accept that particular argument in the general case, because it
doesn't take into account the fact that with security bugs there are
real risks, and that disclosing details of bugs can potentially increase
those risks. These risks go beyond just not having the software work, or
even suffering from simple denial-of-service attacks, for example by
malicious people putting up web sites that are designed to crash
Mozilla; with major security bugs you have a risk that users' personal
data will be compromised and altered, and that their systems will be
subverted for malicious purposes (e.g., through trojans).

If you expose details of such security bugs to more people you increase
the probability that they will be fixed but you also increase the
probability that they will be exploited by people previously unaware of
the bugs. (And here I should say that I don't accept as a general truth
the statement that everyone who can and will exploit a bug already knows
about it by the time it's reported to the developers.) IMO those risks
(of the bugs being exploited vs. not being fixed) need to be balanced; I
can't say exactly what the balance point is, but I believe it's
reasonable to assume that there's some point in disclosure (below
disclosing to everyone) beyond which you could significantly increase
the risk of exploitation without significantly increasing the chance
that the bug will be fixed.

Of course, we don't know what that point is exactly. We can't say for
certain, for example, that there is a group of exactly 10, or 20, or 100
Mozilla developers that is the optimum audience for Mozilla security bug
information, because no one else outside that group is likely to fix
those bugs. But IMO we still have an obligation to maximise the chances
that bugs will be fixed. So if we accept the idea of limiting access to
security bug information, then IMO we also have an obligation to a)
ensure that people who have the ability to fix Mozilla security bugs are
able to join the group without undue hassle and b) put some sort of
reasonable time limit on how long security bug information is not
publicly disclosed. These two policies help ensure that the initial
group includes the people most likely able to fix the bug, and that the
bug will likely be fixed in any event even if the initial group is
unable to do so.

The second argument for full disclosure goes as follows: There are
sysadmins and other people who are responsible for a user community that
would be using Mozilla, and who have the means, the knowledge, and the
motivation to help fix Mozilla security problems. Don't they have a
reasonable claim to be able to view information on reported Mozilla
security problems, arising from the responsibility that they owe to
their users, and that we owe to them as representatives of those users?

I think the answer has to be, yes, they do have some claim to view those
security bug reports. If you accept that argument, then one can go on to
make the subsequent argument that since anyone in the world could
potentially be in the position of having some reasonable claim on seeing
Mozilla security bug reports, then the only justifiable policy is to
make the bug reports fully public as soon as they are received into
Bugzilla.

Again, I don't believe that this argument for full disclosure is fully
convincing. I believe the population of sysadmins supporting Mozilla,
distributors of Mozilla-based products, and other people responsible for
Mozilla users is a proper subset of the total Mozilla user population,
and you can make a case for limiting information on security bugs to
that subset. Of course, as with the case of people who can fix security
bugs, we have no foolproof way of determining exactly who should be in
that group or not. Since we still IMO have an obligation to sysadmins,
Mozilla distributors, etc., we should take the same approach as we
should for developers: provide some reasonable way for motivated people
to become part of the "inner" group allowed to see security bug reports,
and set some reasonable time limits on how long information is
restricted to the group.

A final argument for less than full disclosure of security bug reports
is that given by Dan Veditz: that mandating full and immediate
disclosure for security bug reports placed into Bugzilla is likely to
encourage Mozilla vendors (including AOL/Netscape, but also potentially
others) to bypass the Bugzilla mechanisms in handling security-related
bugs and to handle that information internally and not make it available
to other interested parties in the Mozilla project.

I believe that it's in everyone's interest that Bugzilla be used as a
common repository for bug information by all parties involved with
Mozilla development. I also believe that when deciding on a policy you
have to consider the likely consequences of adopting it. Even though
mandating absolute full disclosure of security bugs can be justified by
arguments I myself can potentially accept (e.g., by the arguments I've
given above), I believe that the consequences of trying to force full
disclosure are in practice likely to lead to a situation where we end
up with less disclosure rather than more.

Therefore I'm willing to make the compromise of limiting full disclosure
of security bugs in some way, as long as we follow the general
guidelines I've mentioned above:

* Information on security bugs is not limited to any particular vendor.

* There is some reasonable way for people to apply and be approved for
access to the information on the same basis as the others already "in
the know".

* There is some mechanism to make full public disclosure of the
information after a reasonable amount of time.

I'll leave it to others to make more detailed proposals on how this
might be accomplished.

Mike Shaver

Mar 28, 2000
Daniel Veditz wrote:
> Nowhere did I say it was bad to use public disclosure as a lever to budge
> recalcitrant vendors.

One of my motivations here is to avoid having mozilla.org be, or appear
as, a recalcitrant vendor.

> I'm not posting here as a Netscape spokesperson, ask them what they'll do.

I wish someone _would_ post here with Netscape's position, because
otherwise we're just waist-deep in conjecture. Of course, Netscape and
the other vendors might be perfectly content to let you and I decide the
policy, so they're going to keep quiet. Dare to dream.

> Sounds great, I guess we are in violent agreement then. I thought you were
> proposing that all security bugs should be 100% public bugs.

I think I need to make a new post with a more concrete proposal, because
we're getting lost in the sparring. =)

> (they apparently felt badly burned by the way Sun handled Java
> security hole knowledge, and some of those folks are still around).

Well, if Netscape is willing to paint mozilla.org with the
Sun-past-behaviour brush, we're doomed anyway.

> Until
> Netscape is no longer the 800-lb gorilla on the project, that is something
> to worry about.

Providing special consideration to Netscape unnecessarily lengthens that
period, I think, but that's another discussion entirely, and not one
that's appropriate for .security.

> > ``signed a Security Group Membership form''
>
> I don't know why you reject [a form], since that's pretty close
> to how CVS access is handled.

I guess. My concern is that I don't necessarily believe that someone
should have to give up their real identity to be a part of that
notification group. Lots of proven security folks (Solar Designer,
*Hobbit*, etc.) do their business under pseudonyms, and I wouldn't want
to exclude them out of hand.

And if you're the IS manager for Microsoft, or you work on Opera, maybe
you don't want to reveal that fact, but you could probably contribute
significantly.

(I have these concerns about CVS access as well, FWIW.)

> The major problem with any scheme: no way to judge newcomers who could
> potentially be brilliant contributors or malicious neer-do-wells. Any
> amount of process that could weed out the latter could easily discourage
> the former.

Do you have any experience with people infiltrating
developer-notification lists for malicious purposes? We've got enough
real problems to deal with here, let's not borrow trouble. =)

> One current drawback to the current bugzilla implementation is that bug
> reporters can be blocked from their own bugs. That seems silly to me and
> should be fixed. If someone feels the need to add information they don't
> want the reporter to see they should open a new bug and make the old bug
> depend on it. In an "open" bug system a reporter should always have the
> right to see what's going on with what they've reported.

Yeah, you should file a bug about that. =)

Mike

--
352041.11 234463.58

Mike Shaver

Mar 28, 2000
- Create secu...@mozilla.org, with the initial members being myself,
Norris and Brendan. People can be added to that list by approval of the
existing list members. This group is basically responsible for
determining that something is, in fact, a security bug, and getting in
contact with the appropriate people to find a fix (module owners,
etc.). They're probably also the people who determine how to describe
the severity of the bug, etc.

It is _not_ my intent that the secu...@mozilla.org group be populated
with reps from all the vendors, because then we have to decide what
constitutes a ``vendor'', and why those characteristics make the group
special, and nobody has the time.

- Create a ``Security Sensitive'' bugzilla group, into which anyone can
move bugs. Those bugs are visible only to the reporter, the security
group, and those added to the cc: lines. (This will permit the security
group, the reporter, and the pertinent developers to discuss things
within the bug.)
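
To make the intended access rule concrete, here is a minimal sketch of
the visibility check, written as illustrative JavaScript rather than as
Bugzilla's actual (Perl) implementation; the field and group names are
invented for the example:

  // Illustrative sketch only: the field and group names are invented,
  // and this is not Bugzilla's actual (Perl) code.
  function canSeeBug(bug, user) {
    // Bugs outside the Security Sensitive group remain world-visible.
    if (!bug.securitySensitive)
      return true;
    // The reporter always retains access to their own report.
    if (user.email === bug.reporter)
      return true;
    // Anyone on the cc: list (e.g. the pertinent developers) can see it.
    if (bug.ccList.includes(user.email))
      return true;
    // Otherwise, visibility is limited to the security group.
    return user.groups.includes("security");
  }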

Once a security bug is reported, mozilla.org should immediately post a
workaround if one can be determined, and if such posting does not give
excessive detail about the vulnerability in question. ``Make the
following change to your security policy for untrusted sites'' would be
a good workaround to post, while ``do not open messages with more than
255 attachments'' might not.

At the same time, a message should be sent to mozilla-security and
mozilla-security-announce, stating that a vulnerability has been found,
the approximate severity of the vulnerability, platforms affected, and
how long we expect it to take to find and commit a fix. This will allow
vendors and other interested parties to make decisions about release
delays and prepare for redeployment of the appropriate components.

Example:

``mozilla.org has been made aware of a security vulnerability in the
Mozilla software, which affects the 5.0 and 5.5 source trees, on all
platforms. This is a critical vulnerability, and may allow an attacker
to execute arbitrary code on the user's computer. mozilla.org expects
to have a fix for this available by April 2nd. Users should add the
following line to the end of their policy.js file to protect themselves
until the fix is available:
policy("default", "security.event.cross_domain", "false");
''

Once a fix is found and committed, a followup message should be posted,
giving the details of the fix, and how to verify that the vulnerability
is repaired. Ideally, mozilla.org should provide .xpis or something to
allow users to upgrade easily at the same time. This is also the point
where the bug is marked public again, likely.
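
To make the easy-upgrade idea concrete, here is a rough sketch of an
install.js for such a fix .xpi, in the XPInstall style of the time. The
package name, version, and file name are invented, and the exact call
signatures are reconstructed from memory, so treat this as a sketch
rather than a verified API reference:

  // install.js -- hypothetical XPInstall script for a security-fix .xpi.
  // All names and the version string below are invented for illustration.
  initInstall("Mozilla layout security fix",  // display name
              "mozilla/layout-fix",           // registry name
              "5.0.0.1");                     // version of this fix

  // Stage the rebuilt component over the vulnerable one.
  var folder = getFolder("Components");
  setPackageFolder(folder);
  addFile("gklayout.dll");                    // hypothetical fixed binary

  // Commit only if every staged operation succeeded.
  if (getLastError() == SUCCESS)
    performInstall();
  else
    cancelInstall();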

Vendors who are concerned about the amount of time it takes to spin and
test their own versions should probably consider staying very close to
the Mozilla tree, so that you can just use the Mozilla-provided .xpi
files until your custom versions are ready, similar to how Microsoft
often releases initial patches very quickly, with a caveat that they
were not exhaustively tested, and then will update them if QA finds
anything disastrous. In the presence of a workaround, that seems
sufficient.

Mike

--
353506.39 235854.74

Frank Hecker

Mar 28, 2000, to Mike Shaver
Mike Shaver wrote:
> - Create secu...@mozilla.org, with the initial members being myself,
> Norris and Brendan. People can be added to that list by approval of the
> existing list members. This group is basically responsible for
> determining that something is, in fact, a security bug, and getting in
> contact with the appropriate people to find a fix (module owners,
> etc.). They're probably also the people who determine how to describe
> the severity of the bug, etc.

OK, so the question then becomes: How do we maximize the chances of
getting people involved who can actually fix the bug? For example, if
someone has expertise in fixing security bugs, how can they put
themselves in a position where their talents will be applied to this?
Presumably your answer is that they win the trust of the module owners
and other parties consulted by the core secu...@mozilla.org group, in
the same manner that anyone else becomes trusted to be involved with
active Mozilla development.

> It is _not_ my intent that the secu...@mozilla.org group be populated
> with reps from all the vendors, because then we have to decide what
> constitutes a ``vendor'', and why those characteristics make the group
> special, and nobody has the time.

Understood. You're in essence proposing an informal mechanism to
accomplish the same goal (of widening participation), since developers
at a given vendor could certainly work to put themselves in a position
where they were copied on security problems based on their perceived
ability to help with them.

> - Create a ``Security Sensitive'' bugzilla group, into which anyone can
> move bugs. Those bugs are visible only to the reporter, the security
> group, and those added to the cc: lines. (This will permit the security
> group, the reporter, and the pertinent developers to discuss things
> within the bug.)

To respond to the comment from R. Saravanan, certainly it's the
reporter's option to not classify a particular bug as security
sensitive, or to disclose it publicly through other forums. However by
allowing anyone to move a bug into the security sensitive category,
there's the potential to cause disagreements about previously reported
bugs "disappearing" from public scrutiny.

Also, did you really mean "anyone" here, as in "anyone with a Bugzilla
login" could classify bugs as security sensitive? Or are you proposing
some other criteria?

> Once a security bug is reported, mozilla.org should immediately post a
> workaround if one can be determined, and if such posting does not give
> excessive detail about the vulnerability in question. ``Make the
> following change to your security policy for untrusted sites'' would be
> a good workaround to post, while ``do not open messages with more than
> 255 attachments'' might not.
>
> At the same time, a message should be sent to mozilla-security and
> mozilla-security-announce, stating that a vulnerability has been found,
> the approximate severity of the vulnerability, platforms affected, and
> how long we expect it to take to find and commit a fix. This will allow
> vendors and other interested parties to make decisions about release
> delays and prepare for redeployment of the appropriate components.

By mozilla-security I presume you mean the existing mailing list
bidirectionally gatewayed to the netscape.public.mozilla.security
newsgroup. I also presume that you're proposing creation of a new
mozilla-security-announce mailing list, since I don't believe one
currently exists. Would this mailing list also be gatewayed to a
newsgroup, e.g., netscape.public.mozilla.security.announce?

> Example:
>
> ``mozilla.org has been made aware of a security vulnerability in the
> Mozilla software, which affects the 5.0 and 5.5 source trees, on all
> platforms. This is a critical vulnerability, and may allow an attacker
> to execute arbitrary code on the user's computer. mozilla.org expects
> to have a fix for this available by April 2nd. Users should add the
> following line to the end of their policy.js file to protect themselves
> until the fix is available:
> policy("default", "security.event.cross_domain", "false");
> ''
>
> Once a fix is found and committed, a followup message should be posted,
> giving the details of the fix, and how to verify that the vulnerability
> is repaired. Ideally, mozilla.org should provide .xpis or something to
> allow users to upgrade easily at the same time. This is also the point
> where the bug is marked public again, likely.

I'm a little unclear about this. Are you proposing that "vendors"
(however defined) don't get advance notice of the fix itself, unless
they're actually involved in fixing the problem and are thus copied on
the bug from the start? I understand your reasoning here and below, but
I suspect that major "vendors" will lobby very hard to get some sort of
head start. (Of course, this goes back to the point I made above about
their developers putting themselves in a position to be involved.)

> Vendors who are concerned about the amount of time it takes to spin and
> test their own versions should probably consider staying very close to
> the Mozilla tree, so that you can just use the Mozilla-provided .xpi
> files until your custom versions are ready, similar to how Microsoft
> often releases initial patches very quickly, with a caveat that they
> were not exhaustively tested, and then will update them if QA finds
> anything disastrous. In the presence of a workaround, that seems
> sufficient.

That's all my comments for now. In general I think it's a reasonable
proposal, but I'd like to hear others' opinions before making up my mind
once and for all.

Frank Hecker

Mar 28, 2000, to Mike Shaver, st...@mozilla.org
Frank Hecker wrote:
> OK, so the question then becomes: How do we maximize the chances of
> getting people involved who can actually fix the bug? For example, if
> someone has expertise in fixing security bugs, how can they put
> themselves in a position where their talents will be applied to this?
> Presumably your answer is that they win the trust of the module owners
> and other parties consulted by the core secu...@mozilla.org group, in
> the same manner that anyone else becomes trusted to be involved with
> active Mozilla development.

To prevent misunderstanding I should clarify what I mean by the phrase
"trusted to be involved with active Mozilla development". I mean trusted
enough to have their patches accepted into the core Mozilla source tree,
or to be given check-in access to that source tree. (Of course anyone
can do Mozilla development on their own private tree without asking
anyone for permission.) Just as there's a trust-based "barrier to entry"
to someone getting patches accepted or obtaining check-in access, in
your proposal there would be a comparable barrier to entry to getting
included in discussions of security-related bugs.

R. Saravanan

Mar 28, 2000
I think only the reporter, individual or institutional, should have the right
to classify the bug as being security-sensitive. If the security group
thinks a bug shouldn't be public, they have to make a case for it and request
the reporter to re-classify it. I think this will work in most cases.
If the reporter is obstinate and refuses, then he would precisely be the kind
of person who would be willing to use a different forum to air his
complaints. I think it would be better to have all this "dirty linen" washed
in the bugzilla forum than in a different forum.

Saravanan

Norris Boyd

Mar 28, 2000, to Mike Shaver
This sounds good to me for the most part. Thanks for thinking about the
issues and posting this proposal.

It sounds like a lot of work to do the notification of the bugs. Would
it be acceptable to you to (a) wait until, say, M16, before beginning
the notification process, and (b) then only do notification for more
serious problems? My concern here is that during development I'm still
dealing with a lot of security bugs and all the extra reporting
requirements would only slow me down in fixing them. Once we have more
users then the reporting requirements become more important.

--Norris

Mike Shaver wrote:
>
> - Create secu...@mozilla.org, with the initial members being myself,
> Norris and Brendan. People can be added to that list by approval of the
> existing list members. This group is basically responsible for
> determining that something is, in fact, a security bug, and getting in
> contact with the appropriate people to find a fix (module owners,
> etc.). They're probably also the people who determine how to describe
> the severity of the bug, etc.
>
> It is _not_ my intent that the secu...@mozilla.org group be populated
> with reps from all the vendors, because then we have to decide what
> constitutes a ``vendor'', and why those characteristics make the group
> special, and nobody has the time.
>
> - Create a ``Security Sensitive'' bugzilla group, into which anyone can
> move bugs. Those bugs are visible only to the reporter, the security
> group, and those added to the cc: lines. (This will permit the security
> group, the reporter, and the pertinent developers to discuss things
> within the bug.)
>
> Once a security bug is reported, mozilla.org should immediately post a
> workaround if one can be determined, and if such posting does not give
> excessive detail about the vulnerability in question. ``Make the
> following change to your security policy for untrusted sites'' would be
> a good workaround to post, while ``do not open messages with more than
> 255 attachments'' might not.
>
> At the same time, a message should be sent to mozilla-security and
> mozilla-security-announce, stating that a vulnerability has been found,
> the approximate severity of the vulnerability, platforms affected, and
> how long we expect it to take to find and commit a fix. This will allow
> vendors and other interested parties to make decisions about release
> delays and prepare for redeployment of the appropriate components.
>
> Example:
>
> ``mozilla.org has been made aware of a security vulnerability in the
> Mozilla software, which affects the 5.0 and 5.5 source trees, on all
> platforms. This is a critical vulnerability, and may allow an attacker
> to execute arbitrary code on the user's computer. mozilla.org expects
> to have a fix for this available by April 2nd. Users should add the
> following line to the end of their policy.js file to protect themselves
> until the fix is available:
> policy("default", "security.event.cross_domain", "false");
> ''
>
> Once a fix is found and committed, a followup message should be posted,
> giving the details of the fix, and how to verify that the vulnerability
> is repaired. Ideally, mozilla.org should provide .xpis or something to
> allow users to upgrade easily at the same time. This is also the point
> where the bug is marked public again, likely.
>
> Vendors who are concerned about the amount of time it takes to spin and
> test their own versions should probably consider staying very close to
> the Mozilla tree, so that you can just use the Mozilla-provided .xpi
> files until your custom versions are ready, similar to how Microsoft
> often releases initial patches very quickly, with a caveat that they
> were not exhaustively tested, and then will update them if QA finds
> anything disastrous. In the presence of a workaround, that seems
> sufficient.
>

> Mike
>
> --
> 353506.39 235854.74


Mike Shaver

Mar 28, 2000, 3:00:00 AM
to
"R. Saravanan" wrote:
>
> I think only the reporter, individual or institutional, should have the right
> to classify the bug as being security-sensitive. If the security group
> thinks a bug shouldn't be public, they have to make a case for it and request
> the reporter to re-classify it. I think this will work in most cases.
> If the reporter is obstinate and refuses, then he would precisely be the kind
> of person who would be willing to use a different forum to air his
> complaints. I think it would be better to have all this "dirty linen" washed
> in the bugzilla forum than in a different forum.

Eggs-actly.

Mike

--
381597.37 258992.02

Mike Shaver

Mar 28, 2000, 3:00:00 AM
to
Frank Hecker wrote:
> I'm a little unclear about this. Are you proposing that "vendors"
> (however defined) don't get advance notice of the fix itself, unless
> they're actually involved in fixing the problem and are thus copied on
> the bug from the start? I understand your reasoning here and below, but
> I suspect that major "vendors" will lobby very hard to get some sort of
> head start. (Of course, this goes back to the point I made above about
> their developers putting themselves in a position to be involved.)

If you can give a meaningful definition of "vendor" for the Mozilla
context, we could discuss some sort of advance notification.

Mike

--
381609.43 259002.70

mitchell baker

Mar 28, 2000, 3:00:00 AM
to
We should also be sure that the proposal applies to security bugs and not to
contributor confidential information in general. We want to encourage security bugs to
be handled through bugzilla. Confidential information of a contributor is
different; we want this data to migrate away from bugzilla into other repositories.

Mitchell

Mike Shaver

Mar 28, 2000, 3:00:00 AM
to
Norris Boyd wrote:
> It sounds like a lot of work to do the notification of the bugs. Would
> it be acceptable to you to (a) wait until, say, M16, before beginning
> the notification process, and (b) then only do notification for more
> serious problems? My concern here is that during development I'm still
> dealing with a lot of security bugs and all the extra reporting
> requirements would only slow me down in fixing them.

Arguably, until there's a .0 release, we don't have to be as circumspect
about the privacy and notification process. (At least, I'd be willing
to make that argument.)

Users of development or pre-release builds are assuming a variety of
system-damage and security risks -- for a long time we didn't have any
capabilities at all! -- and I think that's a good way to be able to go
faster during development.

The general open-source practice seems to be:
- some sort of fix-then-notify process, similar to the one I've
proposed, for release/production versions, and
- no such process for development/alpha/beta/pre-release versions,
because, as you mention, it's too expensive to track all bugs with
security ramifications during development.

If we decide that we want to notify for pre-Mozilla-5.0 releases[*],
then I'm willing to take some of that notification burden on; much of it
will be boilerplate.

[*] If it turns out that a vendor/downstream-consumer wants to start
being quieter about security bugs before the Mozilla 5.0 ``release'',
perhaps because they ship a .0 of their own based on pre-.0 Mozilla
code, then that should be entertained, but the vendor in question should
be ready to take on the burden of that process.

I had thought that you wanted to keep security bugs quiet already, but
if it turns out that you don't, then I'm happy to delay throwing the
security-notification switch until there's a .0 Mozilla release, or
someone steps up and asks for it to be thrown.

> Once we have more
> users then the reporting requirements become more important.

I would think that the ``secrecy'' and ``reporting'' requirements stem
from the same root cause, the need to protect users from malicious
attackers, and would therefore scale at approximately the same rate. Do
you disagree?

Mike

--
381632.67 259023.78

John Gardiner Myers

Mar 28, 2000, 3:00:00 AM
to

Mike Shaver wrote:
> - Create a ``Security Sensitive'' bugzilla group, into which anyone can
> move bugs. Those bugs are visible only to the reporter, the security
> group, and those added to the cc: lines.

I believe you need to also include the assignee and QA contact.

Mike Shaver

Mar 29, 2000, 3:00:00 AM
to

The QA contact for a security-sensitive bug should probably be someone
in the security group, but yes, you're right.
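
In code terms, the visibility rule as amended in this exchange might
look like the following sketch (C++; the types and names are invented
for illustration, and real Bugzilla group handling is more involved):

    #include <set>
    #include <string>

    struct Bug {
        std::string reporter, assignee, qa_contact;
        std::set<std::string> cc;
    };

    // Visible to the reporter, assignee, QA contact, anyone on the
    // cc: list, and members of the security group -- no one else.
    bool CanSeeSecurityBug(const Bug& bug,
                           const std::set<std::string>& security_group,
                           const std::string& user) {
        return user == bug.reporter ||
               user == bug.assignee ||
               user == bug.qa_contact ||
               bug.cc.count(user) > 0 ||
               security_group.count(user) > 0;
    }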

Mike

--
407394.06 279100.47

cad2123

Mar 29, 2000, 3:00:00 AM
to
Zac Spitzer wrote:
> http://slashdot.org/article.pl?sid=00/03/28/0054248&mode=flat

After reading those comments and these articles as a passive spectator,
I have an opinion for Mozilla:
Open bugs are patched fast. Therefore, open bugs offer a smaller
time-window for exploit. At the most, an exploit could last one month
(the lifespan of a milestone).
Some malicious people read the bug reports anyway, regardless of the
fact that they were "closed". Therefore, closed bugs did not reduce the
possibility of exploit. They may have reduced the number of people
immediately accessing the exploit, but that information can be rapidly
e-mailed or posted to "the underground", *without alerting* the
developers, administrators, or users who are at risk. I do not believe
security information should be more readily available to malicious
coders than to potential fixers. Ignorance is only bliss until it kills
you.
Note that this opinion is for Mozilla, given the circumstances
surrounding it, and not meant to be applied to other projects. Facts
vary from case to case, so the conclusions should also be able to vary.

Mike Shaver

Mar 29, 2000, 3:00:00 AM
to
cad2123 wrote:
> Some malicious people read the bug reports anyway, regardless of the
> fact that they were "closed".

Huh? Are you characterizing _me_ as malicious? I don't think even jar
would go that far. =)

Mike

--
450503.46 314885.07

Gervase Markham

Mar 29, 2000, 3:00:00 AM
to
> Open bugs are patched fast. Therefore, open bugs offer a smaller
> time-window for exploit.

So all those people across the world with no familiarity with the Moz
codebase will speed up the process of finding and fixing bugs, then,
compared with those people who know it well?

> At the most, an exploit could last one month
> (the lifespan of a milestone).

One month! <shiver>

> Some malicious people read the bug reports anyway, regardless of the
> fact that they were "closed".

Rubbish. Far fewer, at any rate. If this were true, no-one would ever
employ a "keep it restricted while we are fixing it" policy and, as was
pointed out, most people, including the Apache group, do.

Gerv

cad2123

Mar 30, 2000, 3:00:00 AM
to

OK. What about other people building their own software off the Mozilla
source? We leave them in the dark until the next milestone release, and
then _their_ users have to wait for _those_ builds to be patched? And
you shiver at having bugs floating around for only a month...
The relevant fix should be added to the disclosure when it's found (and
disclosed).

Mike Shaver

Mar 30, 2000, 3:00:00 AM
to
cad2123 wrote:
> The relevant fix should be added to the disclosure when it's found (and
> disclosed.)

Yes. Disclose when you have a fix. I'm thinking that the Cone of
Silence would be down for a handful of days, maybe a week for an
especially hard bug, not multiple weeks or a month.

Mike

--
530281.20 382913.14

Norris Boyd

Mar 30, 2000, 3:00:00 AM
to Mike Shaver
If you're willing to help out with the notification process, then perhaps it
makes sense to start the process before a vendor ships. That way we get a
chance to iron out problems while it's less critical.

I would, however, like to avoid publicizing security holes in the
soon-to-be-released Netscape beta before they are fixed with the next beta.
Even though it's pre-release software, we hope people will be using it for
daily work (I am), and I believe we have a responsibility to users to hold
off making the actual exploits public until the fixed version is available.

--Norris

Bob Lord

Mar 30, 2000, 3:00:00 AM
to
Frank Hecker wrote:

> Therefore I'm willing to make the compromise of limiting full disclosure
> of security bugs in some way, as long as we follow the general
> guidelines I've mentioned above:
>
> * Information on security bugs is not limited to any particular vendor.
>
> * There is some reasonable way for people to apply and be approved for
> access to the information on the same basis as the others already "in
> the know".
>
> * There is some mechanism to make full public disclosure of the
> information after a reasonable amount of time.

Playing devil's advocate for a moment, I have a few questions about the forming
"limited-disclosure" idea.

Can the chosen security-experts keep a secret? See
http://www.mozillazine.org/talkback.html?article=1278 as an example of how easy
it is for information to leak out. I don't think this matter is an isolated
incident. Information wants to be free. ;-)

It seems prudent to check in security flaw fixes with comments about the nature
of the bug. Those comments will help prevent other developers from re-opening
holes. Once we do that, the cat is out of the bag *long* before the end-user
gets a fixed version of the software. Won't the people interested in writing
exploits have roughly the same amount of time to do their evil, regardless of
how long the security-experts mull over the fix?

Vendors and project teams who are not on the security-experts list cannot
evaluate the flaw if they don't know about it. Won't there someday be dozens
of projects and vendors who rely on code from Mozilla? They won't all be on
the same ship schedule, and will all want to handle the flaw in their own way
(e.g. shipping anyway, stopping shipment, etc.). That tells me the fixes have
to be checked-in immediately (so again, the secret gets out early).

Also, how would they join this list? Once the number of security-experts gets
large, won't it be even harder to keep information private, even for a couple
of days?

Will the benefits of limited-disclosure exceed the costs in most cases? In the
end, how does it protect end-users from harm given that the cat will almost
always be out of the bag much sooner than anyone would like?

-Bob

Mike Shaver

Mar 30, 2000, 3:00:00 AM
to
Bob Lord wrote:
> Playing devil's advocate for a moment, I have a few questions about the forming
> "limited-disclosure" idea.

As I have come to expect, I agree with Bob.

As soon as we have a fix, we need to be open about it. As soon as we
know of a flaw, we need to announce that we know what the flaw is, and
how to protect yourself until there's a fix.

Mike

--
543914.36 395451.02

Mitch Stoltz

Mar 30, 2000, 3:00:00 AM
to
Other people making use of the Mozilla source, if they have a trustworthy
representative, could be added to the security group. I'm sure any major
user of Mozilla code that itself has a large user base would have a
representative on the security group - if they're known to us and contribute
to Mozilla.
-Mitch

> OK. What about other people building their own software off the Mozilla
> source? We leave them in the dark until the next milestone release, and
> then _their_ users have to wait for _those_ builds to be patched? And
> you shiver at having bugs floating around for only a month...

Mike Shaver

Mar 30, 2000, 3:00:00 AM
to
Gervase Markham wrote:
> You are using a tighter definition of "restricted" than I was. My
> version of "restricted" approximates to "everyone who has Bugzilla bug
> editing privs" or something like that.

It might as well be public, then. Not that I don't trust the people
with the ability to edit a bug, but a secret isn't a secret if 200
people know.

Mike

--
549118.06 399850.73

Jim Roskind

Mar 30, 2000, 3:00:00 AM
to
Bob Lord wrote:

> It seems prudent to check in security flaw fixes with comments about the nature
> of the bug. Those comments will help prevent other developers from re-opening
> holes. Once we do that, the cat is out of the bag *long* before the end-user
> gets a fixed version of the software. Won't the people interested in writing
> exploits have roughly the same amount of time to do their evil, regardless of
> how long the security-experts mull over the fix?

Nope. The time line is very different for attackers and repair folk when all you
have is a bug fix, especially when the fix of interest is not isolated as a
"security fix" via the checkin comment.

Most security fixes I've been involved with consist of tying together a series of
flaws. It is rarely the case that a single item remains an undocumented linchpin
that must be heavily documented to preclude a repeat event. For the bug to
"reappear," a code change would have to be made to deliberately worsen the quality
of the code in some area.

EXAMPLE: The most common case of a security flaw is a buffer overwrite bug. I don't
think it is critical to say in a checkin comment: "this is a buffer overwrite bug,
with significant security implications, so please don't change back to a
fixed-size, overwritable buffer." The typical fix is to say something like "use
dynamic buffer size for foo." It is often a difficult task to figure out *how* an
attacker would controllably populate the given buffer. That is typically the topic
of a discussion in a security bug. The history of the bug typically involves
having developed the code for one purpose, and then expanding its role, without
updating its capabilities (e.g., handling of attacker-specified character strings,
rather than only some internal field names). The fix involves updating code to
serve the now larger purpose, and regressions (in this example: going back to a
static buffer size :-( ) are IMO quite unusual (it is just poor for all
situations... so why would the code regress this way??).
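
To make the pattern concrete, here is a minimal C++ sketch of the class
of bug Jim describes -- a fixed-size buffer written for short internal
names, later fed attacker-controlled input -- and the "use dynamic
buffer size" fix. The function names are invented; this is not actual
Mozilla code:

    #include <cstring>
    #include <string>

    // Before: safe while `name` was always a short internal field
    // name; exploitable once callers start passing attacker-supplied
    // strings.
    void CopyFieldNameUnsafe(char* out, const char* name) {
        char buf[32];
        strcpy(buf, name);   // writes past buf when strlen(name) >= 32
        strcpy(out, buf);
    }

    // After: the buffer sizes itself to the input, so the overflow
    // cannot occur no matter who controls `name`.
    std::string CopyFieldNameSafe(const char* name) {
        return std::string(name);  // std::string allocates to fit
    }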

My experience has been that security bugs get fixed, and no one even notices them.

As far as shipping a product with bugs, and feeling bad.... Every product I know of
(sendmail?? Windows? Linux? Word? IE? Navigator? Java?) has a large number of
security bugs. We just don't *yet* know about them. I use all these products
knowing they have security bugs (unless I'm deluding myself). My only hope is that
when the bugs get found, they are fixed before being widely publicized amongst
hackers. I think an admin is fooling him/herself if they claim they would "stop
using a product instantly" if it had a security bug (unless they are already
restricting themselves to non-computer products).

A fix is NEVER as easy as changing a line of code, and shipping. A lot of testing
is needed. Testing takes time. If you don't take that time, then additional bugs,
often worse than the security bug, appear. I want vendors to have enough time to
fix each bug, and fix it well, and not damage the rest of the product in the fix
(before I take the fix). I don't wish to force the hand of the vendor so strongly
that a mistake is made, and I have to suffer.

Bottom line: I support disclosure after the major vendors have shipped. I support
earlier disclosure if you (the holder of the intellectual property surrounding the
security flaw) feel the vendor(s) are being too slow about handling the problem,
and believe publicity is the "only way to get a fix" developed or released.

As a final point in the "reality" check: I've worked on projects where a
fix-respin release cycle took in excess of a week. I've been in a position where
bugs arrived at shorter intervals than a week. If I ever took the stance: "don't
ship if you have a known bug" I'd be forced to sit on a pile of bug fixes, shipping
nothing, for many months. This would help no one. It would be especially painful
to the folks that are desperate for *specific* bug fixes ASAP. In the end, a
reasonable vendor "bundles" a pile of fixes into a release, qualifies that release,
and ships it, knowing additional bugs have already been found. This is the ONLY way
to progress with finite development and testing resources. I can act differently
when I'm talking about a "printf hello world" program, but I can't do anything else
when I'm talking about significant software, developed at net-speed (which can't
make use of "formal proof methods").

Thanks for listening,

Jim

--
-- My views are mine, not Netscape's --
Jim Roskind fax: 650.428.4058
j...@netscape.com voice: 650.937.2546
--------------------------------------------------------------------------
PGP 2.6.2 Key fingerprint=0E 2A B2 35 01 9B 5C 58 2D 52 05 9A 3D 9B 84 DB

Mike Shaver

Mar 30, 2000, 3:00:00 AM
to
Norris Boyd wrote:
> If you're willing to help out with the notification process, then perhaps it
> makes sense to start the process before a vendor ships. That way we get a
> chance to iron out problems while it's less critical.

It might make sense, but my vote is still to wait until a .0 release
before we throw the notification-and-quiet-until-we-have-a-fix switch.

To be honest, I'd be more willing to endorse flipping the switch now if
Netscape were to take all known security fixes in beta1, and open up the
related bug reports, but that's a personal opinion and may not reflect
that of mozilla.org.

> I would however, like to avoid publicizing security holes in the
> soon-to-be-released Netscape beta before they are fixed with the next beta.

I don't think that's acceptable, and here's why:

- that imposes a month+ wait (has Netscape even announced a date for
beta2?) on public discussion of bugs that are already fixed in the CVS
tree, during which period more code will be written and audited. That
writing and auditing will benefit from knowledge and discussion of
security bugs, which will in turn reduce the number of bugs we have to
fix later (before or after release).

- there will be other organizations with release/deployment schedules,
so what do we say to the administrator from Podunk U who only wants to
deploy annually? When Red Hat wants us to wait for RH7.0? We will
always be in the middle of someone's release schedule, and I think we
best serve everyone by being open about having a fix, and giving enough
information for a vendor/deployer/user to decide for themselves what to
do about the bug.

- Netscape has done security-related releases before, on shorter
time-frames than a month, so I'm not exactly sure why it's not possible
to spin beta1b with a security fix in it. It's obviously up to Netscape
to decide how to serve its customers, but there is automation in place
to produce daily-update .xpis on the commercial tree now, and it would
seem to be well-suited for respinning layout.xpi to patch something
important.

- The cat is, as Bob Lord mentioned, out of the bag. There are public,
queryable databases that contain ample information for getting
information about fixes, and their root causes.

> Even though it's pre-release software, we hope people will be using it for
> daily work (I am), and I believe we have a responsiblity to users to hold
> off making the actual exploits public until the fixed version is available.

If that responsibility to users exists, though, why not hold the beta to
take the fixes that are known and in the tree, or make components
available for update when fixes are found? It doesn't take Steve
Bellovin to follow bonsai for checkins related to security-component
bugs, or just read CVS logs for the usual-suspect files, so they're very
likely to be found.

What about telling users of beta1 what the known-before-we-shipped-it
security flaws are, so that they can decide if they want to use it or
not? That would seem the sporting thing to do.

Mike

--
544615.51 396110.95

Mike Shaver

Mar 30, 2000, 3:00:00 AM
to
Jim Roskind wrote:
> Nope. The time line is very different for attackers and repair folk when all you
> have is a bug fix, especially when the fix of interest is not isolated as a
> "security fix" via the checkin comment.

Given only a little bit of knowledge about the project (who the people
are that fix security bugs, how those bugs are classified) and the
ability to pull bug numbers out of CVS messages, I bet you could
_automate_ the process of tracking, with some false positives, when
security-related checkins happen.
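
For illustration, a rough C++ sketch of that automation: scan checkin
comments for bug numbers and flag any that reference bugs a Bugzilla
query refuses to show, since a restricted bug attached to a fresh
checkin is a strong hint of a security fix. The input file name and the
list of restricted bug IDs here are assumptions, not real data:

    #include <fstream>
    #include <iostream>
    #include <regex>
    #include <set>
    #include <string>

    int main() {
        // Assumed input: one CVS checkin comment per line.
        std::ifstream log("checkin-comments.txt");
        // Assumed: bug numbers that Bugzilla declined to display.
        const std::set<int> restricted = {28387, 31648};
        const std::regex bugref("[Bb]ug\\s*#?([0-9]+)");

        std::string line;
        while (std::getline(log, line)) {
            for (std::sregex_iterator it(line.begin(), line.end(), bugref), end;
                 it != end; ++it) {
                int bug = std::stoi((*it)[1].str());
                if (restricted.count(bug))
                    std::cout << "possible security checkin: bug " << bug
                              << " -- " << line << "\n";
            }
        }
        return 0;
    }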

> EXAMPLE The most common case of a security flaw is a buffer overwrite bug.

As far as Mozilla goes, we'll have to take your word for it, because
Netscape is still hoarding security bug information, and mozilla.org
hasn't exercised eminent domain (with or without the 5th Amendment's
requisite compensation). =)

Assuming that Netscape isn't willing to share that information with the
community at large right now, could you summarize the kinds of bugs
(buffer overflow, content having improper access to other content,
insufficient restriction on remote script access to browser facilities,
etc.) that are in the system now?

(``Most common'' in the global software space probably isn't as
interesting as ``most common in Mozilla''. The latter, at least, offers
promising leads as to areas and patterns that should receive special
attention. If buffer overflows, as an example, prove to be dominant,
then maybe we should look at something like StackGuard to mitigate the
effects of such bugs.)

> I don't
> think it is critical to say in a checkin comment: "this is a buffer overwrite bug,
> with significant security implications, so please don't change back to a
> fixed-size, overwritable buffer." The typical fix is to say something like "use
> dynamic buffer size for foo."

Then how do I, as a vendor, know that it's a patch that I want to respin
my release for, or maybe one that I will rush my next service pack out
for?

The crux of the issue here is: how do you let users and vendors make
informed decisions about how to deal with known (to mozilla.org)
security/privacy issues, without letting ``bad people'' know? I don't
think you can.

> are IMO quite unusual (it is just poor for all
> situations... so why would the code regress this way??).

Well, regressions are one thing, and having someone else write code like
that in another place is another thing. ``Oh, I didn't realize that an
attacker could get a string into our certificate data whose length
didn't match the other field!''

> My only hope is that
> when the bugs get found, they are fixed before being widely publicized amongst
> hackers.

What good is a fix if you don't know to get it?

> I think an admin is fooling him/herself if they claim they would "stop
> using a product instantly" if it had a security bug (unless they are already
> restricting themselves to non-computer products).

No, but I can clearly picture an organization deciding to use stock
Mozilla with Netscape component-plugins, rather than the Netscape bundle
as a whole, if they believed that they could get security fixes more
quickly that way. (Or to use Opera or IE, if neither Mozilla nor
Netscape can keep up, and security matters to the person/organization in
question. This is one of the reasons that you see people moving away
from NES: they can't get ``adequate'' security responses from
Netscape[*]. I'd prefer not to have that happen to Mozilla, for the
strongest possible meaning of ``prefer''.)

[*]
http://www.securityfocus.com/templates/archive.pike?list=1&date=2000-03-15&msg=38D2173D...@relaygroup.com

Netscape isn't the only company ``guilty'' of this, it's just the one
that seems most relevant to our discussion. There are those who choose
one Linux distribution over another because of differing responsiveness
to security issues (real or perceived).

> A fix is NEVER as easy as changing a line of code, and shipping. A lot of testing
> is needed. Testing takes time. If you don't take that time, then additional bugs,
> often worse than the security bug, appear.

Something I've seen Microsoft do is put up an initial `quick-fix', with
caveats that it hasn't been exhaustively tested, and then put an updated
patch up if they find further problems. Would that not work? In fact,
for vendors that stay with the Mozilla codebase, they could _all_ use
stock-Mozilla update .xpis as the quick fix, until they build and brand
their own, super-vendor-tested fix.

This gives the users choice (workaround, Mozilla .xpi, vendor quick fix,
wait it out for vendor-blessed fix), and choice is what we're all about.

> I want vendors to have enough time to
> fix each bug, and fix it well, and not damage the rest of the product in the fix
> (before I take the fix).

I'm talking about the case where we have a fix, and it is tested
adequately-enough to get into the production-release branch of Mozilla
CVS. What sort of testing is being done for existing security bugs?

Mike

--
551516.72 401652.40

Jim Roskind

Mar 30, 2000, 3:00:00 AM
to
Mike Shaver wrote:

> Jim Roskind wrote:
> > EXAMPLE The most common case of a security flaw is a buffer overwrite bug.
>
> As far as Mozilla goes, we'll have to take your word for it,

My comment was about bugs appearing on the internet, on lists such as bugtraq.

> Then how do I, as a vendor, know that it's a patch that I want to respin
> my release for, or maybe one that I will rush my next service pack out
> for?

Wait for a certified "good release." I do this for my OS, and I do it for emacs. I try to work with vendors that
provide frequent accumulated patches, and I count on the QA process to assure I'm getting the best build available
at the time.

As I said, I know that every release has bugs. I'm just looking for a release that has cured the ones that are
known and curable at some point in time.

> The crux of the issue here is: how do you let users and vendors make
> informed decisions about how to deal with known (to mozilla.org)
> security/privacy issues, without letting ``bad people'' know? I don't
> think you can.

You are correct. We can't figure out who is "Good" and who is "Bad." If you have a system which relies on such
factors, you'll probably doom yourself to having a "bad" system :-/. It will be especially bad if you think that
you can propose a system based on voting to select a good or bad person. (See US election history for lots of
counter examples of voter decided trust ;-) ). Trust amongst developers with significant motivations to not
violate the trust seems like the only workable solution.

> > are IMO quite unusual (it is just poor for all
> > situations... so why would the code regress this way??).
>
> Well, regressions are one thing, and having someone else write code like
> that in another place is another thing. ``Oh, I didn't realize that an
> attacker could get a string into our certificate data whose length
> didn't match the other field!''

I agree, and that is why I'd like to be in a position to be able to easily disclose bugs after the fact (after
significant field distribution of a release). If there is a threat of premature disclosure when bugs are placed in
a location that is convenient for post-mortem disclosure, then contributors will be forced into a closed bug
tracking system.

> > My only hope is that
> > when the bugs get found, they are fixed before being widely publicized amongst
> > hackers.
>
> What good is a fix if you don't know to get it?

Many releases appear with release notes that include statements like "includes numerous bug resolutions, including
improvement in areas of security." That should be enough motivation if you trust the vendor. If you don't trust
the vendor, you should not take binary under any circumstances. Knowing that a binary is based (again... trust
that factoid) on an open source effort, there is more reason to believe that bugs will have been exposed, and
resolved.

> > I think an admin is fooling him/herself if they claim they would "stop
> > using a product instantly" if it had a security bug (unless they are already
> > restricting themselves to non-computer products).
>
> No, but I can clearly picture an organization deciding to use stock
> Mozilla with Netscape component-plugins, rather than the Netscape bundle
> as a whole, if they believed that they could get security fixes more
> quickly that way.

OK. I can believe such organizations exist. It is really hard to qa all sorts of combinations... and most folks
prefer to rely on external qa... but for folks that like to live on the edge, do a lot of qa, fall off the edge now
and then, they can certainly go for that direction. IMO, 99% of the mozilla based browser users will not want to
play that game. Hopefully mozilla won't be exposing and patching security bugs on a daily basis... as the cost of
constant upgrading would make its use prohibitive :-(.

> > A fix is NEVER as easy as changing a line of code, and shipping. A lot of testing
> > is needed. Testing takes time. If you don't take that time, then additional bugs,
> > often worse than the security bug, appear.
>
> Something I've seen Microsoft do is put up an initial `quick-fix', with
> caveats that it hasn't been exhaustively tested, and then put an updated
> patch up if they find further problems. Would that not work? In fact,
> for vendors that stay with the Mozilla codebase, they could _all_ use
> stock-Mozilla update .xpis as the quick fix, until they build and brand
> their own, super-vendor-tested fix.

With Microsoft's massive engineering and test work force, I bet they can test a product pretty well in a very short
period of time. I've also seen them mess up, and push out numerous fixes to security fire drills (often with the
"bug" not fixed entirely in an early release). Considering the value of a PR fiasco to Microsoft, they are willing
to put a ton of money behind such efforts (*when* the topic is about to go public). I would hazard to guess that
when the press is not screaming for the fix (or the bug-reporter is not threatening instant release of the info),
then they are allowed the luxury of efficiency, and they bundle patches together, and ship them as a qualified
group. To do otherwise would probably be cost/resource prohibitive even to Microsoft.

> > I want vendors to have enough time to
> > fix each bug, and fix it well, and not damage the rest of the product in the fix
> > (before I take the fix).
>
> I'm talking about the case where we have a fix, and it is tested
> adequately-enough to get into the production-release branch of Mozilla
> CVS. What sort of testing is being done for existing security bugs?

Just because a bug fix can land in a production release branch does not mean the product should ship based on that
version. I'm beating a dead horse here... but you've got to do a lot of testing. It takes time. There is no
substitute. IF the production branch was sooooo great, then you would not need a branch. You'd only need a
static tag ;-).

Mike Shaver

Mar 30, 2000, 3:00:00 AM
to
Jim Roskind wrote:
>
> Mike Shaver wrote:
>
> > Jim Roskind wrote:
> > > EXAMPLE The most common case of a security flaw is a buffer overwrite bug.
> >
> > As far as Mozilla goes, we'll have to take your word for it,
>
> My comment was about bugs appearing on the internet, on lists such as bugtraq.

Well, I'm still interested in Mozilla security bugs, for what are likely
obvious reasons. Can you share the requested information about them?
(I can view the bugs myself, but I know that you don't want me to, so
I'm trying to respect that until we figure out what the proper
resolution is, however much it pains me to have the world be the way it
is right now.)

> > Then how do I, as a vendor, know that it's a patch that I want to respin
> > my release for, or maybe one that I will rush my next service pack out
> > for?
>
> Wait for a certified "good release." I do this for my OS, and I do it for emacs.

So why isn't Netscape waiting for a Mozilla-certified good release?

Note that I'm asking about how _vendors_ (meaning: people who consume
and distribute Mozilla source) will know. How does IBM decide if it
wants the fix for #28387 in its April build of
Mozilla-for-OS/2-preview-1? How does Red Hat decide if it wants to take
the fix for #32088 in a Mozilla build for the next Powertools CD? How
do I, as someone who builds his own browser periodically to test and
hack and assist the rest of the Mozilla community, _including_Netscape_,
know if I want to update my tree to protect myself (get the fix for
#31648), rather than waiting until I'm finished debugging the current
leak with Bruce?

Should any of these groups have to play mother-may-I with Netscape, or
any other vendor? I say no, predictably.

> I try to work with vendors that provide frequent accumulated patches

How would you feel about a vendor that knew about security bugs in its
product, and had fixes for them, but chose to ship not only without the
fixes but also without honest disclosure of those known flaws? I
wouldn't want to deal with such a vendor, if I could avoid it, but you
and I have been known to have differing opinions on such matters.

More importantly, I don't want Mozilla to be such a vendor.

(What if you want MathML or IRC, or you're on Linux/PPC or OpenVMS?
Netscape's not interested in giving you appropriate builds with their
great QA and branding and stuff, but you should be interested in waiting
until they're ready to tell you that you've been running with this
vulnerability for a month, and thanks for your patience? Seems pretty
ridiculous to me.)

> Many releases appear with release notes that include statements like "includes
> numerous bug resolutions, including improvement in areas of security."
> That should be enough motivation if you trust the vendor.

...and if you know what the vendor considers to be areas of security.
What if I am looking for an upgrade to fix the meantime cache-tracking
thing? I just grab a bunch of vendor binaries, and hope?

> Knowing that a binary is based (again... trust
> that factoid) on an open source effort, there is more reason to believe
> that bugs will have been exposed, and
> resolved.

Pretty deep irony there, Jim. You can believe that the bugs will have
been exposed and resolved, but you can't actually find out, because
we're an open source project that won't tell you when we expose and
resolve security bugs. I'm not sure how trusting I'd be, and there are
those who consider me a pretty big Mozilla fan.

> OK. I can believe such organizations exist. It is really hard to qa
> all sorts of combinations... and most folks prefer to rely on external qa...
> but for folks that like to live on the edge, do a lot of qa, fall off the
> edge now and then, they can certainly go for that direction.

I'm not sure I understand your point. Are you saying that Mozilla will
have inherently weaker QA than Netscape? That's a pretty interesting
assertion, but one that we should debate in another forum. (Are you
coming to the party?)

> With Microsoft's massive engineering and test work force, I bet they can test
> a product pretty well in a very short period of time.

How many people do we need to do the same thing?

What if Microsoft (or IBM -- hey, they're pretty big) becomes a Mozilla
vendor? Do we make them wait for Netscape (or Bob And Joe Browser Inc.)
to rev their branded version of the fix? Would Netscape be willing to
ship layout-with-fix.xpi if IBM built it? What if Mozilla built it?

There's lots of other stuff in your post that I disagree with, some
quite violently, but I'm trying to stay focussed on the parts that are
most important to me. I'm sure we'll revisit the other issues later. =/

Mike

--
561690.72 411234.68

Christopher Blizzard

Mar 31, 2000, 3:00:00 AM
to mozilla-...@mozilla.org

I have to agree with Mike and Bob. If there's a security problem that
we have some relief for, be it a fix or a workaround, we need to let
people know about it. We need to let those who use mozilla and derive
their products from mozilla make their own risk assessment.

--Chris

--
------------
Christopher Blizzard
http://people.redhat.com/blizzard/
I don't pretend to have all the answers. I don't pretend to even
know what the questions are. Hey, where am I?
------------


Christopher Blizzard

Mar 31, 2000, 3:00:00 AM
to mozilla-...@mozilla.org
Jim Roskind wrote:
>
> Bob Lord wrote:
>
> > It seems prudent to check in security flaw fixes with comments about the nature
> > of the bug. Those comments will help prevent other developers from re-opening
> > holes. Once we do that, the cat is out of the bag *long* before the end-user
> > gets a fixed version of the software. Won't the people interested in writing
> > exploits have roughly the same amount of time to do their evil, regardless of
> > how long the security-experts mull over the fix?
>
> Nope. The time line is very different for attackers and repair folk when all you
> have is a bug fix, especially when the fix of interest is not isolated as a
> "security fix" via the checkin comment.
>

That's not true in my world. In most cases in which someone reports a
bug in a public forum, whether or not there's a fix, I'll get something
in my hands in a matter of hours or minutes. Sometimes I get fixes
before the exploits are available.

[ long talk about buffer overruns deleted ]

>
> My experience has been that security bugs get fixed, and no one even notices them.
>

Are you talking about your vendors, your own work, or the people that
work for you? In any case, those vendors, those programmers, or you are
doing a disservice by not revealing that users might be at risk.

> As far as shipping a product with bugs, and feeling bad.... Every product I know of
> (sendmail?? Windows? Linux? Word? IE? Navigator? Java?) has a large number of
> security bugs. We just don't *yet* know about them. I use all these products
> knowing they have security bugs (unless I'm deluding myself). My only hope is that
> when the bugs get found, they are fixed before being widely publicized amongst
> hackers. I think an admin is fooling him/herself if they claim they would "stop
> using a product instantly" if it had a security bug (unless they are already
> restricting themselves to non-computer products).

I have seen companies that I work for just stop using a product or
program because of a security problem or repeated chronic security
problems. I've done it myself.

I agree that you are fooling yourself if you think that any products
that you are using are totally secure.

>
> A fix is NEVER as easy as changing a line of code, and shipping. A lot of testing
> is needed. Testing takes time. If you don't take that time, then additional bugs,
> often worse than the security bug, appear. I want vendors to have enough time to
> fix each bug, and fix it well, and not damage the rest of the product in the fix
> (before I take the fix). I don't wish to force the hand of the vendor so strongly
> that a mistake is made, and I have to suffer.
>
> Bottom line: I support disclosure after the major vendors have shipped. I support
> earlier disclosure if you (the holder of the intellectual property surrounding the
> security flaw) feel the vendor(s) are being too slow about handling the problem,
> and believe publicity is the "only way to get a fix" developed or released.
>

Who gets to decide who the major vendors are? Who gets to make the
assessment about what is a good amount of time between a fix and when
a fix should be released?

For Mozilla we have to make those fixes available as soon as possible
because we don't make "product" releases. We need to make the people
using that source code aware of the problem and let them make their own
QA assessments about the quality of the fix as it relates to their
needs, product, and timeline. Netscape is one of those consumers.

If Netscape-the-company knows about a security problem and they choose
to hold onto the information that the bug exists or the code for the fix
for an entire release cycle then they are being a bad citizen.
Individual consumers of the Mozilla code base should be able to make
their own decisions about what they want to do when a problem is
discovered.

> As a final point in the "reality" check: I've worked on projects where a
> fix-respin release cycle took in excess of a week. I've been in a position where
> bugs arrived at shorter intervals than a week. If I ever took the stance: "don't
> ship if you have a known bug" I'd be forced to sit on a pile of bug fixes, shipping
> nothing, for many months. This would help no one. It would be especially painful
> to the folks that are desperate for *specific* bug fixes ASAP. In the end, a
> reasonable vendor "bundles" a pile of fixes into a release, qualifies that release,
> and ships it, knowing additional bugs have already been found. This is the ONLY way
> to progress with finite development and testing resources. I can act differently
> when I'm talking about a "printf hello world" program, but I can't do anything else
> when I'm talking about significant software, developed at net-speed (which can't
> make use of "formal proof methods").
>

That's fine but you shouldn't hold other people or companies hostage to
your decisions about how important a security related bug fix is.

Christopher Blizzard

Mar 31, 2000, 3:00:00 AM
to mozilla-...@mozilla.org
Mike Shaver wrote:
>
> Something I've seen Microsoft do is put up an initial `quick-fix', with
> caveats that it hasn't been exhaustively tested, and then put an updated
> patch up if they find further problems. Would that not work? In fact,
> for vendors that stay with the Mozilla codebase, they could _all_ use
> stock-Mozilla update .xpis as the quick fix, until they build and brand
> their own, super-vendor-tested fix.

Just another addition to this - If there's a security problem in a
version of Netscape's released code and people feel that they aren't
getting proper relief from Netscape then other people will come in to
fill the gap. That doesn't have to be Mozilla or other vendors.
Individuals will do it. I've seen it happen.

Jerry Baker

Apr 4, 2000, 3:00:00 AM
to
Jim Roskind wrote:
>
> Many releases appear with release notes that include statements like "includes numerous bug resolutions, including
> improvement in areas of security."

As a side note not totally related to the subject, but related to the
above statement:

If you go into the Netscape.Communicator group, Netscape.navigator,
etc., one of the jokes when someone asks what a new version of Netscape
has changed is just what you mention here. Many people post right before
a point release of Netscape asking whether it will fix bug x or bug y.
Someone always has to tell them, "I don't know because the release notes
suck". My impression of release notes that say, "various bugfixes" is
that the vendor is too lazy or does not have enough people to document
their software. It makes it appear as if someone just changes the title
of the release notes to reflect the version number and puts it out.

Just my opinions.


--
Jerry Baker

PGP Mail Preferred
Key: http://pgpkeys.mit.edu:11371/pks/lookup?op=get&search=0x2A5E91C6

jesus X

Apr 9, 2000, 3:00:00 AM
to
I'm a non developer, but jumping in (late) anyway. I 'care' about Mozilla as
any person can care about a program, and this is a big issue.

Jim Roskind wrote:

> > The crux of the issue here is: how do you let users and vendors make
> > informed decisions about how to deal with known (to mozilla.org)
> > security/privacy issues, without letting ``bad people'' know? I don't
> > think you can.
>
> You are correct. We can't figure out who is "Good" and who is "Bad."

Actually, you can, but it takes separating the good people from the bad in a
way that is severely restrictive, such as only giving a few people access to
the information, as NS is doing, except this is a bad method.

At this point in development, despite its being fairly advanced along the
path, and rapidly approaching a point of completion, Mozilla is not yet
ready for prime time, and as such, no one expects it to be bulletproof. The
best way in this instance, an open source project, is to keep ALL the bugs
open, including security. There aren't enough people using Mozilla yet to
make it attractive to a malicious hacker, save a few real rogues (the real
rogues will always be there, for any product, no matter what, so we can
ignore them here, just like the crook who's not looking for a quick take,
but a professional thief who IS GOING to do their crime regardless), so
showing the bad points should not have any real effect. If the fear is that
it's a REAL security hazard that won't be fixed by the time a real v1.0
release is out, then quite frankly, the release should be held. Security
bugs are different from regular bugs in that they really do prohibit proper
use. A cursor failing to change back 100% of the time after mouseovers on a
screen (like on my NS4.72 in the mail/news window) is nothing, but an open
window on the first floor is a problem, a true stopper.

> I agree, and that is why I'd like to be in a position to be able to easily
> disclose bugs after the fact (after significant field distribution of a
> release).

Either I'm not reading this right, or it's insane. I'm reading it as, "I
would like to not reveal the security bugs until a critical mass of people
have the product." This is the same as saying, "I'd like to maximize
potential damage, and really put the screws to my development team to fix
this thing PDQ." You want bugs fixed BEFORE 'significant field distribution'
if possible.

> If there is a threat of premature disclosure when bugs are placed in
> a location that is convenient for post-mortem disclosure, then
> contributors will be forced into a closed bug tracking system.

Again, I have to be reading this backwards... If bugs are released publicly,
they are scrutinized more, and the fixes are, by environmental factors,
forced to be better in the end. This is just a function of natural
selection, which exists in every arena, including computer software (it's
not who's the best, it's who can survive, hence explaining the dominance of
Windows over OS/2 and other GOOD OSes).

> Many releases appear with release notes that include statements like
> "includes numerous bug resolutions, including improvement in areas of
> security." That should be enough motivation if you trust the vendor.

How do you know if you trust the vendor if they refuse to tell you what they
have done? How can you trust a source that won't tell you why they should be
trusted? When I see or hear about any new release of a program that I use, I
look at the release notes first. And on more than one occasion, from a
trusted vendor, I have decided not to update then because I don't think it's
worth it, or I don't think it's ready, or I don't like the way it's been
fixed, or I just don't need it, etc. If there are no notes, it's a toss up.
I'll either DL it and test it, or I'll just wait. More often than not, I
wait, because if I can't see a problem, and they're not telling me what they
fixed, it seems like just a waste of time to get a fix for a problem I don't
have.

Give me a reason to trust, before assuming I trust.

> If you don't trust the vendor, you should not take binary under any
> circumstances.

I don't trust Microsoft, and yet, sometimes there's a patch I feel is
worthwhile or needed. Sometimes trust is not a factor, or a smaller factor
compared to need.

> Knowing that a binary is based (again... trust that factoid) on an open
> source effort, there is more reason to believe that bugs will have been
> exposed, and resolved.

Exactly! And yet, in THIS project, they're being hidden!

> Hopefully mozilla won't be exposing and patching security bugs on a daily
> basis... as the cost of constant upgrading would make its use prohibitive

And hopefully, Netscape will share its security bug list with Mozilla and
the community AT LARGE so as to both FIX them and prevent redundant code.

> With Microsoft's massive engineering and test work force, I bet they can
> test a product pretty well in a very short period of time.

And yet sometimes, it looks like it was tested by chimps. :)

> I've also seen them mess up, and push out numerous fixes to security fire
> drills (often with the "bug" not fixed entirely in an early release).

Really? When? :)

> Considering the value of a PR fiasco to Microsoft, they are willing
> to put a ton of money behind such efforts (*when* the topic is about to
> go public).

And yet, in the case of L0pht exposing vulnerabilities, they just throw PR
money at it to lie some more... Very strange use of cash...

> I would hazard to guess that when the press is not screaming for the fix
> (or the bug-reporter is not threatening instant release of the info),
> then they are allowed the luxury of efficiency, and they bundle patches
> together, and ship them as a qualified group.

And sometimes the bug is then exploited in the interim. It happens.

> To do otherwise would probably be cost/resource prohibitive even to
> Microsoft..

True, but then you have to set priority levels. Security and normal
operation of the program are both Priority 1 issues.

> -- My views are mine, not Netscape's --

My views are someone else's entirely, and the devil made me do it. =-]

Grey Hodge
--
jesus X
email [ jesusx @ who.net ] [ jesusx @ bluephone.net ]
web [ - offline - ]
tag [ But it's a KILLER bunny rabbit! ]
warning [ Everything not Strictly Forbidden is now Mandatory. ]

cad2123

Apr 9, 2000, 3:00:00 AM
to
jesus X wrote:
[... ... ...]

> > I agree, and that is why I'd like to be in a position to be able to easily
> > disclose bugs after the fact (after significant field distribution of a
> > release).
>
> Either I'm not reading this right, or it's insane. I'm reading it as, "I
> would like to not reveal the security bugs until a critical mass of people
> have the product." This is the same as saying, "I'd like to maximize
> potential damage, and really put the screws to my development team to fix
> this thing PDQ." You want bugs fixed BEFORE 'significant field distribution'
> if possible.
>

Disclosure is not fixing... I presume this means "significant field
distribution of the fixed version". (That would make the most sense,
anyway.)

> > If there is a threat of premature disclosure when bugs are placed in
> > a location that is convenient for post-mortem disclosure, then
> > contributors will be forced into a closed bug tracking system.
>
> Again, I have to be reading this backwards... If bugs are released publicly,
> they are scrutinized more, and the fixes are, by environmental factors,
> forced to be better in the end. This is just a function of natural
> selection, which exists in every arena, including computer software [...]

I agree. If you know Attack X works, and it's hastily patched ("Hey, it
just needs to defend against X..."), you open the door for people to find
variant attacks. Like the Hotmail/IE5 Javascript attacks posted on
Rootshell.
And then there's the issue best illustrated by the Virus Checker version
history (Amiga):
16 October. Fixed a major bug in detecting bootblock viruses.
18 October. Fixed bugs *caused by fixing the last bug.* [emphasis mine]
(I did that from memory, so it's probably inaccurate, but that's the
gist of it.)
Two versions from one bug. Visibility is, in general, a good thing... I
hear time and again that open source is more reliable because of
scrutiny. Is the mantra wrong, or is Netscape just denying it?

[... ...]


> > Knowing that a binary is based (again... trust that factoid) on an open
> > source effort, there is more reason to believe that bugs will have been
> > exposed, and resolved.
>
> Exactly! And yet, in THIS project, they're being hidden!
>
> > Hopefully mozilla won't be exposing and patching security bugs on a daily
> > basis... as the cost of constant upgrading would make its use prohibitive
>
> And hopefully, Netscape will share its security bug list with Mozilla and
> the community AT LARGE so as to both FIX them and prevent redundant code.
>

This is especially handy for bugs in interactions. If JavaScript in
MailNews starts popping up the Composer, you get your choice of areas:
- JavaScript
- MailNews
- XPSomethingOrOther
- Composer (unlikely...)
Deflecting the bug to the wrong module people would be
counterproductive.

> > With Microsoft's massive engineering and test work force, I bet they can
> > test a product pretty well in a very short period of time.
>
> And yet sometimes, it looks like it was tested by chimps. :)
>

I can almost imagine:

(Scene: a Death Star-like command center, in an underground cavern, deep
below Redmond WA.)
Minion: It fixes TearDrop, it causes Netscape's toolbar to be rendered
as bright pink, and it makes Sun's Java fill free hard disk space
with the phrase, "BWAHAHAHAHA---L'etat, c'est Microsoft!"
Sith Lord: Excellent. IE and our JVM are fine?
Minion: Yes, m'lord. Also, Hotbar's torture chamber graphics are
unharmed.
Sith Lord: Beautiful. Release it, and make sure they know Microsoft
takes security seriously.
Minion: Right, m'lord. Are we fixing that kernel bug that randomly
crashes SUNWAMD.DLL when links are clicked in IE5?
Sith Lord: No. Let those that access from a Solaris-based network
suffer. Their anger and despair will fuel our rise to world
domination! BWA HA HA HA HA HA! Even Linus will be turned to the
Dark Side! HAA HA HA HA HA HA...
(curtain falls)

Okay, so I got WAY off topic. Humor'll do that to you.

[... ... ... ...]

Mike Shaver

Apr 10, 2000, 3:00:00 AM
to
jesus X wrote:
> Actually, you can, but it takes separating the good people from the bad in a
> way that is severely restrictive, such as only giving a few people access to
> the information, as NS is doing, except this is a bad method.

NS is restricting the information to quite a few people, actually: all
bugzilla account holders with @netscape.com addresses, which is likely
more than 100.

Mike

--
19134.97 37.58

Stuart Ballard

Apr 11, 2000, 3:00:00 AM
to
Sorry to jump in late on this thread, but I haven't seen this point made
anywhere...

What advantages are seen in making this "security" group different than
the group of people who can check in code? After all... if someone with
checkin privileges wants to sabotage the security of the browser, they
have plenty of other ways to do it! (e.g. write a large-ish fix to a
complex bug or a cool feature, and include in the middle of it a
potential buffer overrun - that would probably get past code review).

Note that (many/most?) Netscape engineers are already in this group, so
the people who can currently see it will still be able to.

Also, since you have a physical signature for every person with checkin
privilege, you can add a clause requiring them to make some sort of a
statement that they won't widely publicise security-related bugs. Sort
of a semi-NDA (although hopefully not quite that strong...)

Thoughts?

Stuart.

PS Presumably there would also be some mechanism for one of these
"security" people to add someone outside the security group as a "cc" on
one of these bugs. That way if someone feels that a particular person's
expertise would be valuable, and they are trusted, but they aren't in
the group, they can still be given access.

jesus X

Apr 12, 2000, 3:00:00 AM
to
Mike Shaver wrote:

>
> jesus X wrote:
> > Actually, you can, but it takes separating the good people from the bad in a
> > way that is severely restrictive, such as only giving a few people access to
> > the information, as NS is doing, except this is a bad method.
>
> NS is restricting the information to quite a few people, actually: all
> bugzilla account holders with @netscape.com addresses, which is likely
> more than 100.

Ok, well this reinforces my point in another manner. This means they have no
problem keeping it in house, but refuse to release it to the community, thus
defeating the whole idea of open source.

I am all in favor of the NPL, giving them special rights to the product.
After all, they are giving support, engineers, MONEY to run things, and
started the whole deal. Credit is being given to contributors, so I'm
perfectly ok with the NPL. But withholding security bugs that need fixing
has nothing to do with them trying to push a commercial product. This deals
with FIXING that commercial product. In essence, they're refusing to accept
free help.

0 new messages