
Full Disclosure!


Eddy Nigg

Jan 2, 2009, 10:38:10 PM
to
Before anybody else does it, I prefer to post it myself :-)

http://blog.phishme.com/2009/01/nobody-is-perfect/
http://schmoil.blogspot.com/2009/01/nobody-is-perfect.html

For the interested, StartCom is currently checking if I can release our
internal "critical event report" of this event to the public (there
might be some internal information which should not be disclosed).

--
Regards

Signer: Eddy Nigg, StartCom Ltd.
Jabber: star...@startcom.org
Blog: https://blog.startcom.org

Ben Bucksch

Jan 3, 2009, 12:31:38 AM
to Eddy Nigg
On 03.01.2009 04:59, Eddy Nigg wrote:
> The report is available from here: https://blog.startcom.org/?p=161

That's surely interesting, but the report does not contain any details
of interest.
It only says:

"The attack ... involved proxying, intercepting all communication from
and to the browser and eventually modification of the browser response
to the server. A tool like
http://www.owasp.org/index.php/Category:OWASP_WebScarab_Project was used
for the attack."

That's all it says about the problem. Which tells me nothing, other than
that the *user's* browser might have been involved in some critical
verification steps.

Other info:
"Only low-assurance Class 1 certificates were involved."
He passed all your tests and you only noticed because he tried to get a
cert for verisign/paypal.com, which are on your blacklist. While that's
a good idea, it obviously wouldn't have prevented registration of other
targets.

So, what really happened and why? How is the browser at all relevant to
the verification steps?

Ben

Eddy Nigg

Jan 3, 2009, 1:18:02 AM
to
On 01/03/2009 07:31 AM, Ben Bucksch:

I can give some more information on this. While this attack is perhaps
trivial for a hacker, it's maybe not overly obvious. In this respect, we
actually have more protections in place, and attacks on DNS and mail
servers, but also on the web interfaces otherwise, are anticipated.
That's because, as I also said previously, bugs do happen, as many
developers here can confirm. The attack was foiled by the assumption
that it must have some value for the attacker and that attackers usually
make a mistake at some point. This is not the first time attempts to
circumvent our various procedures have happened; it is, however, the
first time somebody actually succeeded to some degree.

Another word before disclosing some more. You, and also Mike Zusman in
his article, asked what would have happened with smaller targets.
Rightly so, unknown sites are not protected to the same degree as
high-profile targets. However, StartCom has a responsible policy in
place which disallows wildcards and multiple domains in the Class 1
settings and prevents other potentially misleading issuance such as
paypa1.com or micr0soft.com, just to mention a few; financial
institutions must provide identity documents; and, most important,
certificates are valid for one year only - no matter which validation
level the subscriber enjoys. Manual verifications are also performed on
an ongoing basis. With this we try to counter some misuse as well.

The attack was performed by using the tool mentioned above or a modified
version of the browser. By hooking this tool in between the server and
the browser, the tool allows changing the values coming to and from the
browser. With it, he was able to change some values sent in the POST
response to ones of his liking. The validations wizard allows for a
selection of a few possible email addresses considered suitable for
administrative purposes or as listed in the whois records of the domain
name. The flaw was that insufficient verification of the response at
the server side was performed, allowing him to validate the domain by
using a different email address than the validations wizard actually
provided. The value of the selection was changed in transit after
performing the selection at the browser. I hope that clarifies the
details which aren't outlined in the report.
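
To illustrate the class of bug - as a minimal, hypothetical sketch with
invented names, not our actual code - the difference between the flawed
and the fixed behavior is roughly this:

WHOIS_CONTACTS = {"example.com": ["admin@example.com"]}  # stand-in for a whois lookup

def offered_addresses(domain):
    # Addresses the validation wizard is allowed to offer for this domain.
    generic = [name + "@" + domain for name in ("hostmaster", "postmaster", "webmaster")]
    return generic + WHOIS_CONTACTS.get(domain, [])

def send_validation_code(email):
    print("(would email a validation code to %s)" % email)

# Flawed handler: trusts whatever address comes back in the POST, so an
# intercepting proxy (or any hand-crafted request) can substitute its own.
def handle_selection_flawed(domain, posted_email):
    send_validation_code(posted_email)

# Fixed handler: the posted selection is re-checked against the list the
# server itself generated, so a tampered POST is rejected.
def handle_selection_fixed(domain, posted_email):
    if posted_email not in offered_addresses(domain):
        raise ValueError("submitted address was not offered for this domain")
    send_validation_code(posted_email)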

Additionally, all steps of the subscribers are always logged (yes, every
click of it) and we have records about every validation, about which
email address was used for it, about failed attempts, etc. With those
records we could re-validate all certificates very quickly. The only
ones which failed were those of domains which have since been retired.
Expired certificates were not tested.

Ian G

Jan 3, 2009, 9:43:47 AM
to mozilla's crypto code discussion list
On 3/1/09 04:38, Eddy Nigg wrote:
> Before anybody else does it, I prefer to post it myself :-)
>
> http://blog.phishme.com/2009/01/nobody-is-perfect/
> http://schmoil.blogspot.com/2009/01/nobody-is-perfect.html
>
> For the interested, StartCom is currently checking if I can release our
> internal "critical event report" of this event to the public (there
> might be some internal information which should not be disclosed).


Leaving aside the details of this "disclosed exploit demo" ... and with
a nod to the benefit to the community of such a disclosure ... it is
useful to examine the MOTIVE for doing this.

What incentive exists for a CA in disclosing an apparent weakness?

* In the open source world, we would say, the code is there for us
to share and improve the code, and the weaknesses are, as a consequence
of the model, revealed. In the open source world, we grasp this nettle
and turn it into an advantage.

* But in the closed source world, other dynamics work. A seller of
proprietary product will suppress any report of weakness, as this will
cause the buying public to become suspicious, and buy some other
supplier's product.

We've seen both sides over the last 2-3 weeks.

So I guess there are two questions:

1. do we want to live in the world of open disclosure,
or the world of pretty facades?

2. if the former, how do we create the incentives
such that all prefer to disclose up front?

iang

Eddy Nigg

Jan 3, 2009, 10:48:59 AM
to
On 01/03/2009 04:43 PM, Ian G:

>
> What incentive exists for a CA in disclosing an apparent weakness?

Quite frankly, none.

>
> We've seen both sides over the last 2-3 weeks.

Not entirely correct...but...

>
> So I guess there are two questions:
>
> 1. do we want to live in the world of open disclosure,
> or the world of pretty facades?
>
> 2. if the former, how do we create the incentives
> such that all prefer to disclose up front?
>

...I wouldn't be willing to disclose each and every detail of code,
preventive measures, controls and procedures and possible events. But
since there was not much left to hide about our incident, the cat is
out of the bag anyway, and since the event has been dealt with
correctly IMO and the vulnerability neutralized, there was no problem
providing some details about it now. Better than having rumors and
people guessing...

However, depending on the severity, reporting and disclosing shouldn't
be merely optional. But I'm not sure whether that can be enforced.

David E. Ross

Jan 3, 2009, 11:02:21 AM
to

To a large extent, I addressed this issue from the standpoint of an
outsider discovering a problem more than three years ago in my editorial
at <http://www.rossde.com/editorials/edtl_shootmsngr.html>. See also my
editorial at <http://www.rossde.com/editorials/CalOaksBank.html> (two
years ago) for a comparison of going public versus staying private when
an outsider discovers such a problem.

--

David E. Ross
<http://www.rossde.com/>

Q: What's a President Bush cocktail?
A: Business on the rocks.

Ben Bucksch

Jan 3, 2009, 11:16:56 AM
to
On 03.01.2009 07:18, Eddy Nigg wrote:
> The validations wizard allows for a selection of a few possible email
> addresses considered suitable for administrative purposes or as listed
> in the whois records of the domain name. The flaw was that insufficient
> verification of the response at the server side was performed, allowing
> him to validate the domain by using a different email address than the
> validations wizard actually provided.

Ah, I see.

(no information follows, just opinion)

Yes, that is (just?) a bug. It does mean that a developer didn't think
correctly - it's a general rule in security to validate all input,
distrust all other parties, and this was not done here. I'd check
similar code near there, and the other code of that developer, but IIRC
you wrote that you did that at least to some degree and rectified other
potential problems.

I was already scared that you let the user's browser do the domain
validation, and let the browser report "yes, the verification passed",
or something like that. Yes, that would have been incredibly stupid,
but given what we learned recently about some other CAs... This bug is
not too far from that, but at least not that obviously stupid; it could
really have been just an oversight by the developer in question, and his
reviewer.

Ben

Paul Hoffman

Jan 3, 2009, 11:16:39 AM
to mozilla's crypto code discussion list
Why is this relevant to this mailing list?

--Paul Hoffman

Ben Bucksch

Jan 3, 2009, 11:32:19 AM
to
On 03.01.2009 16:48, Eddy Nigg wrote:
> ...I wouldn't be willing to disclose each and every detail of code,
> preventive measures, controls and procedures and possible events.

Well, I think this might be a good idea, though. I could even go so far
as to demand that all operations of the CA, including all processes in
all detail, and the actual day-to-day operations, need to be open to
everybody, both over the Internet and in real life. Anybody can just
walk in the CA's office and watch anybody there working. All is entirely
open to anybody. Only the private keys of the CA and the rest rooms are
kept hidden.

I think that would improve operation quite a lot. The processes would
need to be water-proof and correct, just like a cryptographic algorithm
needs to withstand public scrutiny. (And most actually do have
weaknesses at first which are rooted out by the public review. This, as
experience shows, outweighs the advantage that attackers get by knowing
the algorithm. The algo just needs to be strong enough. I think you can
create strong CA processes, too.) Also, the day-to-day operations could
be observed by anybody, to see whether they match the declared processes
and whether the declared processes still show holes in practice, e.g. a
lack of diligence when verifying signatures.
(A regular and unannounced audit - of *all* parts of the processes, no
matter if RA or not - by a third party would also be mandatory.)

The problems we see are in no small part because CAs declare their
operations to be their private little business. Well, it isn't; it's our
responsibility. The browser gives the CAs special status, and the CAs
only exist because browsers invented this whole concept, so we get to
say how a CA has to operate.


Daniel Veditz

Jan 3, 2009, 1:16:34 PM
to
Paul Hoffman wrote:
> Why is this relevant to this mailing list?

Doesn't it go along with the other "are CA's trustworthy?" threads?

Nelson B Bolyard

Jan 3, 2009, 2:03:08 PM
to mozilla's crypto code discussion list
Eddy Nigg wrote, On 2009-01-02 22:18:

> The attack was performed by using the tool mentioned above or a modified
> version of the browser. By hooking this tool in between the server and
> the browser, the tool allows changing the values coming to and from the
> browser.

I hate to say it, but it's possible for the browser user to change those
values without either (a) modifying the browser or (b) using some proxy
tool. So let me ask: Did Mike Zusman confirm that he was using such a
tool? Or is that merely an assumption?

> With it, he was able to change some values sent in the POST
> response to ones of his liking. The validations wizard allows for a
> selection of a few possible email addresses considered suitable for
> administrative purposes or as listed in the whois records of the domain
> name. The flaw was that insufficient verification of the response at
> the server side was performed, allowing him to validate the domain by
> using a different email address than the validations wizard actually
> provided. The value of the selection was changed in transit after
> performing the selection at the browser.

But that server input verification flaw is fixed now, right?

Eddy Nigg

Jan 3, 2009, 2:01:16 PM
to
On 01/03/2009 06:16 PM, Ben Bucksch:

>
> Yes, that is (just?) a bug. It does mean that a developer didn't think
> correctly - it's a general rule in security to validate all input,
> distrust all other parties, and this was not done here.

Correct. Actually it was done, but something in the verification wasn't
done correctly. It was simply a bug, as indeed can happen.

> I'd check
> similar code near there, and the other code of that developer, but IIRC
> you wrote that you did that at least to some degree and rectified other
> potential problems.

Also correct. Any potential source of input was reviewed and corrected
where needed.

>
> I was already scared that you let the user's browser do the domain
> validation, and let the browser report "yes, the verification passed",
> or something like that.

LOL

> Yes, that would have been incredibly stupid,
> but given what we learned recently about some other CAs... This bug is
> not too far from that, but at least not that obviously stupid; it could
> really have been just an oversight by the developer in question, and his
> reviewer.

No, no...it's very far from that, Ben. With CertStar there were no
validations at all. They simply didn't exist. That's a far cry from a
bug in the POST response verification. More than that, the other layers
of defense did exactly what they were supposed to do. The staff reacted
incredibly fast and awareness was high as well. Within minutes of the
failed attempt to obtain a certificate for verisign.com, the "attacker"
was banned from the StartCom network and issuance of the high-profile
certificate was prevented in the first place.

Flaws and even human error can happen - I'm certain that ours wouldn't
be the first even if we didn't know about it. But comparing this to
non-existent validation and non-existent control over the third party
who's supposed to validate doesn't really cut it.

Eddy Nigg

Jan 3, 2009, 2:03:46 PM
to
On 01/03/2009 09:03 PM, Nelson B Bolyard:

> I hate to say it, but it's possible for the browser user to change those
> values without either (a) modifying the browser or (b) using some proxy
> tool.

I don't know another way, but I'm glad to learn how.

> So let me ask: Did Mike Zusman confirm that he was using such a
> tool?

Yes

>
> But that server input verification flaw is fixed now, right?
>

Correct, as also stated in the event report.

Eddy Nigg

Jan 3, 2009, 2:54:12 PM
to
On 01/03/2009 06:32 PM, Ben Bucksch:

> Well, I think this might be a good idea, though. I could even go so far
> as to demand that all operations of the CA, including all processes in
> all detail, and the actual day-to-day operations, need to be open to
> everybody, both over the Internet and in real life. Anybody can just
> walk in the CA's office and watch anybody there working. All is entirely
> open to anybody. Only the private keys of the CA and the rest rooms are
> kept hidden.

Haha :-)

Actually, exactly the opposite is true...NOBODY can walk into the CA's
offices without proper identification, permission and an obvious need to
do so.

But aren't auditors the eye of the public performing and recording those
operations? I mean, it's rather boring to watch some CA employee
staring at a screen and it wouldn't provide much insight either. Nor
would anybody be allowed to view the details (privacy), so...

>
> I think that would improve operation quite a lot. The processes would
> need to be water-proof and correct, just like a cryptographic algorithm
> needs to withstand public scrutiny. (And most actually do have
> weaknesses at first which are rooted out by the public review. This, as
> experience shows, outweighs the advantage that attackers get by knowing
> the algorithm. The algo just needs to be strong enough. I think you can
> create strong CA processes, too.)

I certainly agree with the latter.

> (A regular and unannounced audit - of *all* parts of the processes, no
> matter if RA or not - by a third party would also be mandatory.)

Yes, this could be interesting indeed.

Ben Bucksch

Jan 3, 2009, 3:03:53 PM
to
On 03.01.2009 20:01, Eddy Nigg wrote:
> the other layers of defense

Please don't call the blacklist a real "layer of defense". If he hadn't
tried to get a cert for paypal.com, it would have worked. All layers
failed. Please be honest enough with yourself to admit that, so that you
can try to find new layers or checks.

> the "attacker" was banned from the StartCom network

FWIW, I think it's a bit of an overreaction to immediately revoke *all* his certs.

> The staff reacted incredibly fast and awareness was high as well.

Yes, that surprised me, too.

Even during the day, it was fairly fast. But it was in the middle of the
night (midnight and later), right? Were the times local time (Israel) or
UTC?

Ben Bucksch

Jan 3, 2009, 3:07:43 PM
to
On 03.01.2009 20:03, Eddy Nigg wrote:
> On 01/03/2009 09:03 PM, Nelson B Bolyard:
>> I hate to say it, but it's possible for the browser user to change those
>> values without either (a) modifying the browser or (b) using some proxy
>> tool.
>
> I don't know another way, but I'm glad to learn how.

You can pretend to be a browser and do it by hand.
We regularly do that when we alter Google query URLs. Modifying a POST
is a bit harder, but not much different conceptually. I'm sure you use
cookies and stuff, but that's not hard either (see wget etc., I can even
do it in telnet). If you use JS to verify that it's a browser, that's
kind of silly and locks some users out.
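
For illustration only - the URL, field names, cookie and token below are
all made up, nothing here is specific to StartCom - "doing it by hand"
is roughly this much Python:

import urllib.parse
import urllib.request

form_fields = {
    "domain": "example.com",
    "email_choice": "attacker@elsewhere.example",  # a value the real form never offered
    "csrf_token": "copied-from-the-served-page",
}
data = urllib.parse.urlencode(form_fields).encode()

req = urllib.request.Request(
    "https://ca.example/validate/select-email",    # hypothetical endpoint
    data=data,
    headers={
        "Cookie": "session=copied-from-the-browser",
        "User-Agent": "Mozilla/5.0",               # "pretend to be a browser"
    },
)
# urllib.request.urlopen(req)  # the server must reject this; the client never will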

Eddy Nigg

Jan 3, 2009, 3:39:04 PM
to
On 01/03/2009 10:03 PM, Ben Bucksch:

> On 03.01.2009 20:01, Eddy Nigg wrote:
>> the other layers of defense
>
> Please don't call the blacklist a real "layer of defense". If he hadn't
> tried to get a cert for paypal.com, it would have worked. All layers
> failed. Please be honest enough with yourself to admit that, so that you
> can try to find new layers or checks.

How can you say that? First of all, it was a certificate for verisign.com
and not paypal.com, even though he could have tried that too (and
failed). Nor is it a blacklist per se, but rather a quite intelligent
flagging and review system. It's one of the layers of defense...

...if you remember the phishing attempt made on some small American
financial institution with a cert from GeoTrust: our flagging system was
greatly improved as a direct result of what we learned from that event.
That is, it defends not only against a bug, but also against attempts
similar to the one just mentioned. It of course includes high-profile
targets, but not only those.
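
Purely for illustration - this is not our actual implementation, and the
list and rules below are invented - the idea behind such a flagging rule
is roughly:

HIGH_PROFILE = {"paypal.com", "verisign.com", "microsoft.com", "mozilla.org"}
CONFUSABLES = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s"})

def needs_manual_review(domain):
    # Flag requests for domains that match, or are confusable with,
    # high-profile names; these go to review instead of automatic issuance.
    normalized = domain.lower().translate(CONFUSABLES)
    return domain.lower() in HIGH_PROFILE or normalized in HIGH_PROFILE

print(needs_manual_review("example.com"))   # False - proceeds automatically
print(needs_manual_review("paypa1.com"))    # True  - flagged for review
print(needs_manual_review("verisign.com"))  # True  - flagged for review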

The only alternative would be manual verification of each and every
certificate, which in my opinion isn't very efficient for domain validation.

But for comparison, where was the layer of defense in the other recent
event? A blacklist could have prevented that, don't you think?

>
> FWIW, I think it's a bit of an overreaction to immediately revoke *all*
> his certs.

Well, there were exactly two. It was the correct response.

>
>> The staff reacted incredibly fast and awareness was high as well.
>
> Yes, that surprised me, too.
>
> Even during the day, it was fairly fast. But it was in the middle of the
> night (midnight and later), right? Were the times local time (Israel) or
> UTC?
>

Yes, local time.

Nelson B Bolyard

Jan 3, 2009, 4:33:47 PM
to mozilla's crypto code discussion list
Eddy Nigg wrote, On 2009-01-02 22:18:
> [...] The flaw was that insufficient verification of the response at
> the server side was performed, allowing him to validate the domain by
> using a different email address than the validations wizard actually
> provided. [...]

>
> Additionally, all steps of the subscribers are always logged (yes, every
> click of it) and we have records about every validation, about which
> email address was used for it, about failed attempts, etc. With those
> records we could re-validate all certificates very quickly.

Do your records include the email addresses that were actually used by
your servers in the course of validation?

Can you search those records to see if any other certs were ever issued
after using an email address that was "a different email address than the
validations wizard actually provided" ?

I think a check of that magnitude is an appropriate response to this event.
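
As a sketch of the kind of sweep I have in mind - assuming, purely
hypothetically, that each validation record stores both the addresses
that were offered and the address actually used; the record format here
is invented:

import csv

def find_suspect_validations(path):
    # Yield records where the address used was not among those offered.
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            offered = set(row["offered_addresses"].split(";"))
            if row["address_used"] not in offered:
                yield row["domain"], row["address_used"], row["timestamp"]

# for domain, used, ts in find_suspect_validations("validation_log.csv"):
#     print("%s: %s validated via unoffered address %s" % (ts, domain, used))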

Eddy Nigg

Jan 3, 2009, 4:38:23 PM
to
On 01/03/2009 11:33 PM, Nelson B Bolyard:

>> Additionally, all steps of the subscribers are always logged (yes, every
>> click of it) and we have records about every validation, about which
>> email address was used for it, about failed attempts, etc. With those
>> records we could re-validate all certificates very quickly.
>
> Do your records include the email addresses that were actually used by
> your servers in the course of validation?

Yes. That was also the reason why we could pinpoint the attempt as
fraudulent within seconds...as such, we wouldn't prevent Verisign
from getting a cert from us and/or testing our systems if the request is
legitimate.

> Can you search those records to see if any other certs were ever issued
> after using an email address that was "a different email address than the
> validations wizard actually provided" ?

Yes.

> I think a check of that magnitude is an appropriate response to this event.

This is exactly what we did.

Nelson B Bolyard

Jan 3, 2009, 4:54:45 PM
to mozilla's crypto code discussion list
Eddy Nigg wrote, On 2009-01-03 11:03:
> On 01/03/2009 09:03 PM, Nelson B Bolyard:
>> I hate to say it, but it's possible for the browser user to change those
>> values without either (a) modifying the browser or (b) using some proxy
>> tool.
>
> I don't know another way, but I'm glad to learn how.

It's pretty easy to alter a downloaded form by saving the page containing
that form to a local file (File->Save Page as), then edit the file, then
use a file:// URL to visit the edited file and continue the session with
the edited form. There are countermeasures and counter-counter measures
to this sort of thing. There are still other ways to achieve this.
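
One countermeasure of that sort, as a hypothetical sketch rather than
any CA's actual mechanism: the server can sign the set of choices it
rendered into the form, so an edited local copy becomes detectable. The
authoritative fix is still to re-check the submitted selection on the
server side.

import hashlib
import hmac

SECRET = b"server-side-secret"  # placeholder; a real deployment manages this key properly

def sign_choices(choices):
    msg = "|".join(sorted(choices)).encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify_choices(choices, signature):
    return hmac.compare_digest(sign_choices(choices), signature)

offered = ["hostmaster@example.com", "postmaster@example.com"]
token = sign_choices(offered)          # embedded in the form as a hidden field
assert verify_choices(offered, token)  # untampered form round-trips fine
assert not verify_choices(offered + ["attacker@evil.example"], token)  # edited form fails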

Nelson B Bolyard

Jan 3, 2009, 4:59:39 PM
to mozilla's crypto code discussion list
Eddy Nigg wrote, On 2009-01-03 11:01:
> On 01/03/2009 06:16 PM, Ben Bucksch:

>> Yes, that would have been incredibly stupid,


>> but given what we learned recently about some other CAs... This bug is
>> not too far from that, but at least not that obviously stupid; it could
>> really have been just an oversight by the developer in question, and his
>> reviewer.
>
> Nono...it's very far from that, Ben. With certstar there were no
> validations at all. It didn't exist. That's a far cry from a bug in the
> post response verification.

As I understand it, Eddy, the situation with CertStar was a bug which
caused the code to simply fail to invoke the facilities that do the DV
validation (or verification, or whatever the right term is for that).
The input, which was the DNS name that should have been validated,
wasn't checked. As I understand it based on messages I have read, the
facilities existed to do the check, but a small bug kept them from being
invoked, a small bug that was (reportedly) easily and quickly fixed.

In StartCom's case, likewise, an important input was not checked. It
was the email address to be used, rather than the DNS name, that wasn't
checked. But either way, the result was that a check was not performed,
and consequently, a cert was issued for a domain name that was not
properly under the control of the party to whom it was issued. Thus,
these two events appear to me to be failings of a comparable magnitude.

It's true that exploiting one of these required a little more work on
the part of the "attacker" than the other. One required nothing more
than that the attacker type in the DNS name he did not control, while
the other required that the attacker alter the form to make it include
an email address that had not been present as received from the CA/RA,
but both are well within the scope of things that most serious attackers
can readily do, as recent events have shown.

Both of these "bugs" might have been, but were not, detected until a
researcher/attacker found them and reported them. I have no evidence
that either failing was intentional. They were just bugs. One was
perhaps less obvious than the other, but both had consequences that
were potentially of similar magnitude, IMO.

Eddy Nigg

Jan 3, 2009, 5:25:53 PM
to
On 01/03/2009 11:54 PM, Nelson B Bolyard:

Oh well, that wouldn't work to start with...

Nelson B Bolyard

Jan 3, 2009, 5:44:10 PM
to mozilla's crypto code discussion list

That's good to read, Eddy. I had not understood that from your previous
messages on this subject. Thank you for clearing that up for me.

Nelson B Bolyard

Jan 3, 2009, 5:46:35 PM
to mozilla's crypto code discussion list
Eddy Nigg wrote, On 2009-01-03 14:25:
> On 01/03/2009 11:54 PM, Nelson B Bolyard:
>> Eddy Nigg wrote, On 2009-01-03 11:03:
>>> On 01/03/2009 09:03 PM, Nelson B Bolyard:
>>>> I hate to say it, but it's possible for the browser user to change those
>>>> values without either (a) modifying the browser or (b) using some proxy
>>>> tool.
>>> I don't know another way, but I'm glad to learn how.
>> It's pretty easy to alter a downloaded form by saving the page containing
>> that form to a local file (File->Save Page as), then edit the file, then
>> use a file:// URL to visit the edited file and continue the session with
>> the edited form. There are countermeasures and counter-counter measures
>> to this sort of thing. There are still other ways to achieve this.
>
> Oh well, that wouldn't work to start with...

Because ?

If you check the referrer URL, that can be faked, too.

Eddy Nigg

Jan 3, 2009, 5:50:50 PM
to
On 01/03/2009 11:59 PM, Nelson B Bolyard:

>
> As I understand it, Eddy, the situation with CertStar was a bug which
> caused the code to simply fail to invoke the facilities that do the DV
> validation (or verification, or whatever the right term is for that).

If that were correct, just a walk-through without further testing would
have revealed that. That's not even called debugging, that's a check one
does to see if the program works at all.

Additionally, I'm not even blaming CertStar for this; the failure is
clearly that of Comodo. Had they taken out one certificate from them,
they would have found that something important was missing.

> The input, which was the DNS name that should have been validated,
> wasn't checked. As I understand it based on messages I have read, the
> facilities existed to do the check, but a small bug kept them from being
> invoked, a small bug that was (reportedly) easily and quickly fixed.

Both assumptions are in my opinion incorrect, as even days later
somebody from Mozilla could still see the "renew" pages. I've made a
screenshot of that one too.

> In StartCom's case, likewise, an important input was not checked. It
> was the email address to be used, rather than the DNS name, that wasn't
> checked.

Not entirely correct, but similar.

> But either way, the result was that a check was not performed,
> and consequently, a cert was issued for a domain name that was not
> properly under the control of the party to whom it was issued. Thus,
> these two events appear to me to be failings of a comparable magnitude.

Yes. But with all due respect, the results of the two events are quite different.

>
> It's true that exploiting one of these required a little more work on
> the part of the "attacker" than the other. One required nothing more
> than that the attacker type in the DNS name he did not control, while
> the other required that the attacker alter the form to make it include
> an email address that had not been present as received from the CA/RA,
> but both are well within the scope of things that most serious attackers
> can readily do, as recent events have shown.

It required proxying all the responses from and to the server. A simple
form edit as you suggest wouldn't work.

> Both of these "bugs" might have been, but were not, detected until a
> researcher/attacker found them and reported them.

Wrong! We (the system, actually) detected it and took action within less
than ten minutes. Their response came after more than three hours and
only after a posting to this list. I think there is a clear difference,
both in the protection which prevented issuance of the higher-value
certificate (the same would have been true for many other high-profile
sites, including Mozilla) and in the overall response immediately
thereafter.

> I have no evidence
> that either failing was intentional. They were just bugs.

As I see it, our case indeed was a bug, the Comodo case was negligence.

> One was
> perhaps less obvious than the other, but both had consequences that
> were potentially of similar magnitude, IMO.

I think the constellation and policies are inherently different between
Comodo and StartCom. It's not by chance that StartCom doesn't have a
stipulation for RAs. Many other policy decisions and implementations are
entirely different (intentionally). May I quote Mike Zusman:

"But, there are at least two types of CAs. One type treats SSL
certificates as a cash cow, pushing signed certificates out the door,
and counting the money. The second type is like StartCom. This second
type understands that trust comes before money and that trusted CAs are
a critical piece of Internet infrastructure."

If you don't see the difference between the two events I can't help you
(and I'm not talking about the money stuff he mentioned).

Jan Schejbal

Jan 3, 2009, 6:11:36 PM
to
>Why is this relevant to this mailing list?

Because there was a security failure in one of the Firefox trusted CAs
allowing anyone to get fake certificates. This event and the reaction
of the CA are important to determine if the CA is (still) trustworthy.
It's the same as the Comodo thing. Just with a way better reaction and
without the dodgy background of dozens of resellers doing (or, in at
least one case, not doing) the Domain Verification.

Greetings
Jan


--
Please avoid sending mails, use the group instead.
If you really need to send me an e-mail, mention "FROM NG"
in the subject line, otherwise my spam filter will delete your mail.
Sorry for the inconvenience, thank the spammers...

Eddy Nigg

Jan 3, 2009, 6:20:57 PM
to
On 01/04/2009 12:46 AM, Nelson B Bolyard:

Because! :-)

> If you check the referrer URL, that can be faked, too.

I know that too :S

Ian G

Jan 3, 2009, 9:51:36 PM
to mozilla's crypto code discussion list
It was written:

> But aren't auditors the eye of the public performing and recording those
> operations?


That's one theory. Here is another: Who is the client of the auditor?
The auditor has a duty to the client that (arguably) outweighs the
duty to anyone else.

You might not agree to the above characterisation. But, try this test:
can you draw a line from the auditor to the public?

iang

David E. Ross

Jan 4, 2009, 1:54:13 PM
to

The line from auditor to the public has been drawn in the courts, where
lawsuits against auditors by investors injured by corporate fraud have
been successful.

--
David E. Ross
<http://www.rossde.com/>

Go to Mozdev at <http://www.mozdev.org/> for quick access to
extensions for Firefox, Thunderbird, SeaMonkey, and other
Mozilla-related applications. You can access Mozdev much
more quickly than you can Mozilla Add-Ons.

Ben Bucksch

Jan 4, 2009, 2:26:49 PM
to
On 04.01.2009 19:54, David E. Ross wrote:
> The line from auditor to the public has been drawn in the courts,
> where lawsuits against auditors by investors injured by corporate
> fraud have been successful.

Yes.

But as Ian pointed out, and you can see in the audit documents, e.g.
<https://cert.webtrust.org/SealFile?seal=798&file=pdf>, the assurances
and assertions made by the auditors are rather weak.

I don't know what the audits in the case of e.g. public stock companies
and IPOs, which you are probably referring to, assert, and whether
certain assertions are *required* by law, e.g. by the rather strict US
stock market laws and SEC regulations.

Therefore, I'd say that we should mandate assertions in the CA audits
which are actually worth something, including in court. I.e. when the
auditor didn't do its job, it must be possible to sue them (and the CA)
for damages and win.

What I have seen in the last 2 weeks was extremely sobering. An audit
which doesn't check the actual verifications done by the CA is entirely
worthless.
See also "PositiveSSL is not valid for browsers", towards the end. I
think that CPS should never have passed the audit.

Ben

Paul Hoffman

Jan 4, 2009, 3:32:06 PM
to mozilla's crypto code discussion list
At 12:11 AM +0100 1/4/09, Jan Schejbal wrote:
>>Why is this relevant to this mailing list?
>
>Because there was a security failure in one of the Firefox trusted CAs allowing anyone to get fake certificates. This event and the reaction of the CA are important to determine if the CA is (still) trustworthy. It's the same as the Comodo thing. Just with a way better reaction and without the dodgy background of dozens of resellers doing (or, in at least one case, not doing) the Domain Verification.

Sorry, but I don't see that listed as a topic for discussion on the mailing list's information page <https://lists.mozilla.org/listinfo/dev-tech-crypto>.

I propose that Mozilla form a new mailing list, dev-policy-trustanchors. The topics for that list would include:

- All new trust anchors being added to the Mozilla trust anchor pile
- Proposals for changes to the Mozilla trust anchor policy
- Complaints about particular participants in the current trust anchor pile
- Discussion of the UI aspects of the PKI in various Mozilla software

Topics that would still be germane for dev-tech-crypto would include

- Questions on how to add or remove trust anchors from various Mozilla software (without any discussion of why someone wants to do it)
- Discussion of how to implement alternate UI schemes for PKI (that is, what hooks are available in NSS for detecting positive and negative results)

All of Eddy's recent threads (being slimed by a Comodo reseller, finding a reseller that doesn't do domain validation, advertising that he had a domain validation bug but fixed it) would all be appropriate on the new list.

The current list is way too unfocused. People asking actual tech questions get drowned out by threads that have literally nothing to do with crypto but everything to do with policy.

Thoughts?

--Paul Hoffman

Eddy Nigg

Jan 4, 2009, 3:41:00 PM
to
On 01/04/2009 10:32 PM, Paul Hoffman:

> The current list is way too unfocused. People asking actual tech questions get drowned out by threads that have literally nothing to do with crypto but everything to do with policy.
>
> Thoughts?
>

+1 from me.

Justin Dolske

Jan 4, 2009, 6:05:44 PM
to
On 1/4/09 12:32 PM, Paul Hoffman wrote:

> I propose that Mozilla form a new mailing list, dev-policy-trustanchors.

Yes. I'd also very much like to see this split. I'm interested in the
technical side of things, but not so much the policy stuff (and,
frankly, the incessant bickering and advocacy that goes along with it).

Maybe policy-crypto, instead of policy-trustanchors? That might be a bit
more discoverable, makes the connection to dev-crypto clearer, and is a
more general policy-vs-tech split. Usenet-wise, I'd think it should
probably be mozilla.policy.crypto (or mozilla.policy.trustanchors),
instead of in the .dev hierarchy.

Justin

Nelson B Bolyard

Jan 4, 2009, 6:36:36 PM
to mozilla's crypto code discussion list
Paul Hoffman wrote, On 2009-01-04 12:32:
> I propose that Mozilla form a new mailing list, dev-policy-trustanchors.

> The current list is way too unfocused. People asking actual tech


> questions get drowned out by threads that have literally nothing to do
> with crypto but everything to do with policy.
>
> Thoughts?

Did you mean to start a new thread? Doing so requires more than merely
changing the subject. You must post a message that is not a reply to do so.

1. In my view, there are 3 broad categories of discussion that go on in
this list. They are (in no particular order):

a) technical discussion about NSS, JSS and PSM code and protocols
(primarily of interest to developers, IMO)
b) root CA certs and related policy
c) UI/GUI ("ooey gooey" :) for crypto and certs in Mozilla products.

One of those clearly belongs in Mozilla's "developer technology"
hierarchy. It's less clear that the other two belong there.

2. As moderator of the dev-tech-crypto mailing list, I receive an email
each and every time someone subscribes or unsubscribes. Every month the
list receives a certain number of subscriptions and unsubscriptions,
with the result that the list has steadily grown at a rate of 1-3 a month
for a long time. When the volume of non-developer discussions greatly
increased (approximately in September or October), we saw an increase in
the number of monthly unsubscriptions. It reached (and briefly surpassed)
the rate of subscriptions. But the number of subscriptions rose again in
November, and since December 21, it has suddenly jumped up.

I think those observations suggest that the discussion of non-developer
topics such as root certs and browser UI has increased the level of
participation (even if mostly passive) in the subject of cryptographic
security in Mozilla products, which is good, but that has come at a cost
to the level of participation by those who were primarily interested in
developer topics.

I think both groups (developers and non-developers) might be better
served by separating the discussions into separate lists. But developers
may be very interested in both classes of topics and may not wish to
subscribe to yet another list to follow both.

3. I wonder if the non-developer topics are already within the scope of
another extant low-traffic list, namely dev-security (a.k.a.
mozilla.dev.security), except that I think the new list does not belong
in the "dev" hierarchy.

Eddy Nigg

Jan 4, 2009, 7:00:53 PM
to
On 01/05/2009 01:36 AM, Nelson B Bolyard:

> 3. I wonder if the non-developer topics are already within the scope of
> another extant low-traffic list, namely dev-security (a.k.a.
> mozilla.dev.security), except that I think the new list does not belong
> in the "dev" hierarchy.

Ahhhh, dev.security...yes, the forgotten stepchild of crypto. At times
we used to post there (and cross-post to crypto), and I don't know why
crypto became the de-facto list for all CA/SSL/policy related issues.

BTW, I unsubscribed from all Mozilla mailing lists and now use the
fabulous NNTP support of Thunderbird. Lightweight, fast and without the
headache of passwords and subscription preferences. Nelson, you can
deduct one of the "unsubscribes", which was me moving to the news reader.
I wish more mailing lists had NNTP support; I can only recommend it.

Ian G

Jan 4, 2009, 7:01:37 PM
to mozilla's crypto code discussion list
On 4/1/09 21:32, Paul Hoffman wrote:

> I propose that Mozilla form a new mailing list, dev-policy-trustanchors. The topics for that list would include:
>
> - All new trust anchors being added to the Mozilla trust anchor pile
> - Proposals for changes to the Mozilla trust anchor policy
> - Complaints about particular participants in the current trust anchor pile
> - Discussion of the UI aspects of the PKI in various Mozilla software


I agree in principle. I would suggest "policy-ca" or "ca-policy" being
anything in or around the CA policy, as that is the name of the thing.

Comments:

1. I don't think the discussions here are anything to do with dev.
2. trustanchors seems a too precise term, and I would prefer to see
it dropped (for liability reasons).
3. I would love to see real discussion of the UI aspects. I have no
idea how to talk to those people, they should be here.
4. This topic is also about legal relationships. Calling it "policy"
tends to sweep the liabilities under the carpet.


> Topics that would still be germane for dev-tech-crypto would include
>
> - Questions on how to add or remove trust anchors from various Mozilla software (without any discussion of why someone wants to do it)
> - Discussion of how to implement alternate UI schemes for PKI (that is, what hooks are available in NSS for detecting positive and negative results)


Agreed. It would be nice if we could do that.

> All of Eddy's recent threads (being slimed by a Comodo reseller, finding a reseller that doesn't do domain validation, advertising that he had a domain validation bug but fixed it) would all be appropriate on the new list.
>

> The current list is way too unfocused. People asking actual tech questions get drowned out by threads that have literally nothing to do with crypto but everything to do with policy.
>
> Thoughts?


Absolutely, +1.

iang

Nelson B Bolyard

Jan 4, 2009, 7:35:41 PM
to mozilla's crypto code discussion list
Ian G wrote, On 2009-01-04 16:01:
> On 4/1/09 21:32, Paul Hoffman wrote:
>
>> I propose that Mozilla form a new mailing list,
>> dev-policy-trustanchors. The topics for that list would include:
>>
>> - All new trust anchors being added to the Mozilla trust anchor pile
>> - Proposals for changes to the Mozilla trust anchor policy
>> - Complaints about particular participants in the current trust anchor pile
>> - Discussion of the UI aspects of the PKI in various Mozilla software
>
> I agree in principle. I would suggest "policy-ca" or "ca-policy" being
> anything in or around the CA policy, as that is the name of the thing.
>
> Comments:
>
> 1. I don't think the discussions here are anything to do with dev.
> 2. trustanchors seems a too precise term, and I would prefer to see
> it dropped (for liability reasons).
> 3. I would love to see real discussion of the UI aspects. I have no
> idea how to talk to those people, they should be here.
> 4. This topic is also about legal relationships. Calling it "policy"
> tends to sweep the liabilities under the carpet.

There's no mozilla.policy hierarchy. So I'm searching for ideas for a
good hierarchy for these discussions. Here are some ideas. How about:

mozilla.security.CA
mozilla.security.UI
mozilla.security.pki

others?

Paul Hoffman

Jan 4, 2009, 7:45:51 PM
to mozilla's crypto code discussion list

+1 to .CA or .PKI, -1 to .UI. There is more to the security UI than PKIX, and there is much more to trust anchors than UI.

Kyle Hamilton

Jan 4, 2009, 8:48:35 PM
to mozilla's crypto code discussion list

Paul, I believe you're correct, but I also believe that Ian was
suggesting an AND, not an OR.

.CA (I'd actually suggest 'trustanchors') for trust anchor
inclusion/exclusion discussion. +1 under either name.
.UI for user interface issues (and ho boy there are many of them). +1 to this.

Interestingly, I can't really see much reason for a .pki (except
possibly as an adjunct to .UI, since any changes to the PKI should
only be brought about due to usability requirements, and since .pki
would be more of a technical discussion of the changes that need to be
made... eventually [hopefully] leading to spec documents that code can
be written to adhere to). +0, but I'm willing to listen to arguments
for and against.

-Kyle H

Gervase Markham

Jan 5, 2009, 9:56:45 AM
to
Eddy Nigg wrote:
> As I see it, our case indeed was a bug, the Comodo case was negligence.

There is no clear line between one and the other. You are saying the
Comodo case was negligence because the bug was so obvious that they
should have spotted it. But the obviousness of bugs is a sliding scale.
If the flaw in the StartCom system could have been found by employing an
experienced web-app white-hat hacker, does that make it negligence for
you not to have done so?

I am not saying the two incidents were the same - I think every incident
has to be assessed individually. I am just saying that you cannot make
such a division so quickly and easily.

Gerv

Eddy Nigg

Jan 5, 2009, 11:08:52 AM
to
On 01/05/2009 04:56 PM, Gervase Markham:

> I am not saying the two incidents were the same - I think every incident
> has to be assessed individually. I am just saying that you cannot make
> such a division so quickly and easily.
>

Not quickly and easily - agreed. And every incident needs to be
assessed on its own merits; that's what I said too. Nelson suggested
that both were "just" flaws and it sounded like the matter could be put
to rest now.

No excuses for having a flaw: StartCom treated the incident as a
"critical event" which required full reporting on the events and their
resolution. It was certainly not taken lightly, even though the event
itself was handled excellently (IMO) and the systems proved themselves
to a great extent. However, I'm very certain that flaws do happen, here
and elsewhere; just look at the critical bugs Firefox has every now and
then, despite great QA and thousands of eyes looking at the code and
testing. What matters is what is done about it and how to prevent it if
possible. Reporting, alertness and a correct response are also crucial
for such events.

Now, this issue is quite different from that of Comodo, since StartCom
has no stipulation for RAs. As a matter of fact, I'm proposing that
Comodo perform domain and email validations themselves, while being
fully aware that flaws can happen even in their systems. The issue I'm
seeing with Comodo is policy- and implementation-wise - besides the poor
performance (or was it negligence?) of the CertStar reseller. In that,
the two CAs differ greatly in many ways, including the events
themselves, the reporting and their resolution.

Therefore we can't just lump all failures together, and as you correctly
stated, there is no clear line between one and the other. This is what I
was saying.

Paul Hoffman

Jan 5, 2009, 11:44:12 AM
to mozilla's crypto code discussion list
At 6:08 PM +0200 1/5/09, Eddy Nigg wrote:
>Therefore we can't just lump all failures together, and as you correctly stated, there is no clear line between one and the other. This is what I was saying.

What you said was "As I see it, our case indeed was a bug, the Comodo case was negligence". That seems like you were making a pretty clear line. Are you now denying that you said it, or denying that there is a clear line between bugs and negligence?

At 2:56 PM +0000 1/5/09, Gervase Markham wrote:
>I am just saying that you cannot make
>such a division so quickly and easily.

Sure he can: he does it all the time. :-)

Eddy Nigg

Jan 5, 2009, 12:03:51 PM
to
On 01/05/2009 06:44 PM, Paul Hoffman:

> At 6:08 PM +0200 1/5/09, Eddy Nigg wrote:
>> Therefore we can't just lump all failures together, and as you correctly stated, there is no clear line between one and the other. This is what I was saying.
>
> What you said was "As I see it, our case indeed was a bug, the Comodo case was negligence". That seems like you were making a pretty clear line. Are you now denying that you said it, or denying that there is a clear line between bugs and negligence?
>

There may be a clear line between bugs and negligence; it doesn't have
to be one, however. A bug can be just a bug, but a bug can also be the
result of negligence...Concerning the former, I already answered.

Ben Bucksch

Jan 5, 2009, 3:12:21 PM
to
On 05.01.2009 01:00, Eddy Nigg wrote:
> Ahhhh, dev.security...yes, the forgotten stepchild of crypto. At times
> we used to post there (and cross-post to crypto), and I don't know why
> crypto became the de-facto list for all CA/SSL/policy related issues.

Because crypto (including CAs) is just a small and very special part of
security.
For me, security is mostly about preventing others from taking over my
computer. Apart from the updater depending on SSL, this has nothing to
do with crypto, but everything to do with buffer overflows, the JS
sandbox/caps etc.
In other words, crypto is about secure transfer. Security is about
firewalling/protecting my own premises.


FWIW, I read both via NNTP.

Ben Bucksch

Jan 5, 2009, 3:14:16 PM
to
On 05.01.2009 01:35, Nelson B Bolyard wrote:
> There's no mozilla.policy hierarchy.

It can be created.
There's already a mozilla.governance, which would fit there, too.

Wan-Teh Chang

Jan 5, 2009, 4:35:29 PM
to mozilla's crypto code discussion list
On Sun, Jan 4, 2009 at 12:32 PM, Paul Hoffman <phof...@proper.com> wrote:
>
> I propose that Mozilla form a new mailing list, dev-policy-trustanchors. The topics for that list would include:
>
> - All new trust anchors being added to the Mozilla trust anchor pile
> - Proposals for changes to the Mozilla trust anchor policy
> - Complaints about particular participants in the current trust anchor pile
> - Discussion of the UI aspects of the PKI in various Mozilla software

The first three topics are appropriate for the proposed new mailing list.
(I would use "root CAs" instead of "trust anchors" in the mailing list's
name because "trust anchors" sounds a little too technical.)

The fourth topic is not related to trust anchor policy. So I'd propose
that it stay in this mailing list even though it is not strictly speaking
related to crypto either.

I'm reading this mailing list using a mail program that supports
threaded discussions, so all the discussions about root CAs
don't prevent me from answering the real crypto questions. I
don't need the proposed new mailing list, but I don't object
to it either.

Wan-Teh

Paul Hoffman

Jan 5, 2009, 5:17:17 PM
to mozilla's crypto code discussion list
At 1:35 PM -0800 1/5/09, Wan-Teh Chang wrote:
>On Sun, Jan 4, 2009 at 12:32 PM, Paul Hoffman <phof...@proper.com> wrote:
>>
>> I propose that Mozilla form a new mailing list, dev-policy-trustanchors. The topics for that list would include:
>>
>> - All new trust anchors being added to the Mozilla trust anchor pile
>> - Proposals for changes to the Mozilla trust anchor policy
>> - Complaints about particular participants in the current trust anchor pile
>> - Discussion of the UI aspects of the PKI in various Mozilla software
>
>The first three topics are appropriate for the proposed new mailing list.
>(I would use "root CAs" instead of "trust anchors" in the mailing list's
>name because "trust anchors" sounds a little too technical.)

I beg to differ here. There has been a lot of discussion of allowing people to add self-signed certs that are not CAs to their list of trusted CAs. Those would be roots, but they would not be CAs. They are, in fact, trust anchors.

>The fourth topic is not related to trust anchor policy.

Somewhat true, but they are a direct outgrowth of it. Note that I said "the UI aspects of the PKI", not "the UI aspects of security".

>So I'd propose
>that it stay in this mailing list even though it is not strictly speaking
>related to crypto either.

It is far less related to crypto than it is to trust anchor policy.

>I'm reading this mailing list using a mail program that supports
>threaded discussions, so all the discussions about root CAs
>don't prevent me from answering the real crypto questions. I
>don't need the proposed new mailing list, but I don't object
>to it either.

You are missing the parts where there are actual technical questions or assertions in the middle of threads that started as trust anchor rants.

Daniel Veditz

Jan 5, 2009, 5:59:07 PM
to
Paul Hoffman wrote:
> You are missing the parts where there are actual technical questions
> or assertions in the middle of threads that started as trust anchor
> rants.

Requesting actual details in the middle of a long ranty thread is a good
way to get missed no matter what newsgroup or topic.

Julien R Pierre - Sun Microsystems

Jan 7, 2009, 9:30:05 PM
to mozilla's crypto code discussion list
Paul Hoffman wrote:
> At 12:11 AM +0100 1/4/09, Jan Schejbal wrote:
>>> Why is this relevant to this mailing list?
>> Because there was a security failure in one of the Firefox trusted CAs allowing anyone to get fake certificates. This event and the reaction of the CA are important to determine if the CA is (still) trustworthy. It's the same as the Comodo thing. Just with a way better reaction and without the dodgy background of dozens of resellers doing (or, in at least one case, not doing) the Domain Verification.
>
> Sorry, but I don't see that listed as a topic for discussion on the mailing list's information page <https://lists.mozilla.org/listinfo/dev-tech-crypto>.
>
> I propose that Mozilla form a new mailing list, dev-policy-trustanchors. The topics for that list would include:
>
> - All new trust anchors being added to the Mozilla trust anchor pile
> - Proposals for changes to the Mozilla trust anchor policy
> - Complaints about particular participants in the current trust anchor pile
> - Discussion of the UI aspects of the PKI in various Mozilla software

I would be in favor of having a separate group/list to discuss the first
three issues above.

Regarding UI, it's a bit less clear where the discussion of that
belongs. I think given that developer questions are usually lower
traffic, maybe it's OK to have the UI and developer questions remain
together in one single list.

Julien R Pierre - Sun Microsystems

Jan 7, 2009, 9:34:29 PM
to
Paul,

Paul Hoffman wrote:
> At 1:35 PM -0800 1/5/09, Wan-Teh Chang wrote:
>> On Sun, Jan 4, 2009 at 12:32 PM, Paul Hoffman <phof...@proper.com> wrote:
>>> I propose that Mozilla form a new mailing list, dev-policy-trustanchors. The topics for that list would include:
>>>
>>> - All new trust anchors being added to the Mozilla trust anchor pile
>>> - Proposals for changes to the Mozilla trust anchor policy
>>> - Complaints about particular participants in the current trust anchor pile
>>> - Discussion of the UI aspects of the PKI in various Mozilla software
>> The first three topics are appropriate for the proposed new mailing list.
>> (I would use "root CAs" instead of "trust anchors" in the mailing list's
>> name because "trust anchors" sounds a little too technical.)
>
> I beg to differ here. There has been a lot of discussion of allowing people to add self-signed certs that are not CAs to their list of trusted CAs. Those would be roots, but they would not be CAs. They are, in fact, trust anchors.

The PKI UI in Mozilla clients is not just about selecting trust anchors
and using self-signed certs. It has many other functions - backing
up/restoring your own certs and keys, etc.

And it's a bit difficult to separate all the cert management from
PKCS#11 token issues, since certs live in tokens by definition.

So, I think the UI issues should remain together in this list with NSS
issues.

Gervase Markham

Jan 12, 2009, 6:01:05 AM
to
Paul Hoffman wrote:
> I propose that Mozilla form a new mailing list,
> dev-policy-trustanchors. The topics for that list would include:
>
> - All new trust anchors being added to the Mozilla trust anchor pile
> - Proposals for changes to the Mozilla trust anchor policy
> - Complaints about particular participants in the current trust anchor pile
> - Discussion of the UI aspects of the PKI in various Mozilla software

Ignoring the exact choice of name for a moment, we are looking into
setting up a list for security policy topics. Watch this space.

Gerv

Michael Ströder

Jan 14, 2009, 9:35:11 AM
to
David E. Ross wrote:
> On 1/3/2009 6:51 PM, Ian G wrote:
>> It was written:
>>> But aren't auditors the eye of the public performing and recording those
>>> operations?
>>
>> That's one theory. Here is another: Who is the client of the auditor?
>> The auditor has a duty to the client that (arguably) outweighs the
>> duty to anyone else.
>>
>> You might not agree to the above characterisation. But, try this test:
>> can you draw a line from the auditor to the public?
>>
>
> The line from auditor to the public has been drawn in the courts, where
> lawsuits against auditors by investors injured by corporate fraud have
> been successful.

But unfortunately this likely does not apply to IT security audits.

Ciao, Michael.

Ian G

Jan 14, 2009, 12:49:11 PM
to mozilla's crypto code discussion list

I would agree with that. In my conflicted opinion [1], but from some
research:

By law and custom, the "attest function" is only defined
for opinions over financial statements by licensed
and/or qualified accountants.

The "attest function" is what an auditor does when stating an opinion
over the finances of a company.


1. From my notes: I found no law or case law that nails this down, but
there is dictum ("non-binding opinion") that is careful to draw a line
between financial audits and any other role. In _Rampell_ [2]:

"...While others may provide tax services or bookkeeping services,
"licensees of the board of accountancy" alone perform the 'attest'
function, which refers to the process by which "licensees" audit
financial statements and express opinions as to those financial
statements. Those audits are relied on not only by the clients on whose
financial matters audits are performed but upon a host of other
individuals and entities who may rely on the information in making their
own economic decisions. Audited statements are relied upon by banks,
other creditors, and investors ... In short, the use of financial
statements attested by "licensees" is so frequently used in our economic
system as to be indispensable..."


2. This issue is also the subject of wider and frequent public debate
over financial statements, auditors and the progression to general
consulting; and the obvious conflicts this generates.


3. I think, again in only my opinion, Mozilla was correct to have made
an implied decision not to seek "attest function" audits. Not that it
matters so much to Mozilla, but it would be a serious concern for a
public company (e.g., Microsoft) which has an interest in preserving the
value of its attest financial audit.


4. Even if we were to see this "constraint" changed to include the
attest function and/or fiduciary duty, I wonder how realistic it would
be? Who's going to sue a big4 auditor because their opinion sucks? How
much luck do they have in the financial sphere on this question, anyway?


5. A better strategy for Mozilla might be to figure out what the current
standard-in-practice is, and figure out ways of either improving it, or
adjusting the relying party behavior to cope with any weaknesses.

iang

[1] Speaking as a non-financial auditor, I'm obviously conflicted, so
someone else should research the position of the stakeholders and the
case law and challenge it.

[2] 1991 court decision in Florida, Department of Professional
Regulation, Board of Accountancy v. Rampell, District Court of Appeal,
Fourth District, No. 89-2668) decided October 16, 1991.

Michael Ströder

unread,
Jan 14, 2009, 2:56:47 PM1/14/09
to
Ian G wrote:
> On 14/1/09 15:35, Michael Ströder wrote:
>> David E. Ross wrote:
>>> On 1/3/2009 6:51 PM, Ian G wrote:
>>>> It was written:
>>>>> But aren't auditors the eye of the public performing and recording
>>>>> those
>>>>> operations?
>>>> That's one theory. Here is another: Who is the client of the auditor?
>>>> The auditor has a duty to the client that (arguably) outweighs the
>>>> duty to anyone else.
>>>>
>>>> You might not agree to the above characterisation. But, try this test:
>>>> can you draw a line from the auditor to the public?
>>>>
>>> The line from auditor to the public has been drawn in the courts, where
>>> lawsuits against auditors by investors injured by corporate fraud have
>>> been successful.
>>
>> But unfortunately this likely does not apply to IT security audits.
>
> I would agree with that. In my conflicted opinion [1], but from some
> research:
> [..long notes deleted which I agree with..]

> Who's going to sue a big4 auditor because their opinion sucks? How
> much luck do they have in the financial sphere on this question, anyway?

That's exactly the point. And most of the time the auditor is paid by the
CA (or any other organization) he audits.

The only way for Mozilla to enforce its policy is to possibly remove
trust flags in case of known violation of the Mozilla CA policy.

Ciao, Michael.

Gervase Markham

unread,
Jan 16, 2009, 1:02:10 AM1/16/09
to
Eddy Nigg wrote:
> On 01/05/2009 01:36 AM, Nelson B Bolyard:
>> 3. I wonder if the non-developer topics are already within the scope of
>> another extant low-traffic list, namely dev-security (a.k.a.
>> mozilla.dev.security), except that I think the new list does not belong
>> in the "dev" hierarchy.
>
Ahhhh, dev.security... yes, the forgotten stepchild of crypto. At times
we used to post there (and cross-post to crypto), and I don't know why
crypto became the de facto list for all CA/SSL/policy-related issues.

Possibly because it's where announcements have historically been made
about the discussion period for included CAs.

Such discussions are both political and technical, and so range across
the two lists.

> BTW, I unsubscribed from all Mozilla mailing lists and use now the
> fabulous NNTP support of Thunderbird. Lightweight, fast and without the
> headache of passwords and subscription preferences. Nelson you can
> deduct one of the "unsubscribes" which was me moving to the news reader.
> I wish there were more mailing lists with NNTP support, I can only
> recommend it.

This sort of thing is why we tried very hard, and continue to try to
make sure that all Mozilla communication channels are available via
HTTP, NNTP and SMTP :-) Different people have very different
preferences. I'm an NNTP person myself.

Gerv

Gervase Markham

unread,
Jan 16, 2009, 1:05:33 AM1/16/09
to
Nelson B Bolyard wrote:
> 3. I wonder if the non-developer topics are already within the scope of
> another extant low-traffic list, namely dev-security (a.k.a.
> mozilla.dev.security), except that I think the new list does not belong
> in the "dev" hierarchy.

In an ideal world, it wouldn't, but it does seem to me that the upside
of a somewhat more accurate list name has to balance against the
downside of creating Yet Another List.

If we were to create another one, it would be only to solve this
problem, which would mean we'd need a new hierarchy - e.g.
mozilla.policy. But security and crypto are really the only areas of
Mozilla policy which have anything like as much debate as this. So it
would be a fairly empty hierarchy.

So, I'm currently minded to just take steps to move all these
discussions to mozilla.dev.security. But I'm happy to hear objections.

Gerv

Paul Hoffman

unread,
Jan 16, 2009, 11:33:39 AM1/16/09
to mozilla's crypto code discussion list

I'm happy to hear that you're happy to hear. :-) Objection.

Security has two very distinct aspects, policy and technology, that have less overlap than is commonly thought. People mash them together because they are so happy to think that someone wants to hear their security concern that they'll just say what they think wherever they can. The very significant downside of this mixing is that security technology moves forward much slower than it should.

Security implementers rarely want to talk about policy after a month or two. People who are concerned with security policy can go on for years (well, decades for some of us).

Having a separate policy list would help the technology folks focus on what they do best. It would also help the policy people keep their discussion out of bits-on-the-wire and up in the "what should we be doing" layer.

--Paul Hoffman

Ian G

unread,
Jan 17, 2009, 6:42:48 AM1/17/09
to mozilla's crypto code discussion list
On 16/1/09 17:33, Paul Hoffman wrote:
> At 6:05 AM +0000 1/16/09, Gervase Markham wrote:
> I'm happy to hear that you're happy to hear. :-) Objection.
>
> Security has two very distinct aspects, policy and technology, that have less overlap than is commonly thought. People mash them together because they are so happy to think that someone wants to hear their security concern that they'll just say what they think wherever they can. The very significant downside of this mixing is that security technology moves forward much slower than it should.
>
> Security implementers rarely want to talk about policy after a month or two. People who are concerned with security policy can go on for years (well, decades for some of us).
>
> Having a separate policy list would help the technology folks focus on what they do best. It would also help the policy people keep their discussion out of bits-on-the-wire and up in the "what should we be doing" layer.


I agree with Paul. The separation of policy from technology is also a
question of professionalism and responsibility.

iang

Gervase Markham

unread,
Jan 26, 2009, 11:20:10 PM1/26/09
to
Paul Hoffman wrote:
> Having a separate policy list would help the technology folks focus
> on what they do best. It would also help keep the policy people keep
> their discussion out of bits-on-the-wire and up in the "what should
> we be doing" layer.

OK, then.
https://bugzilla.mozilla.org/show_bug.cgi?id=475473
filed to create mozilla.dev.security.policy. And please let's not have a
bikeshed discussion about the name.

Gerv

Ben Bucksch

unread,
Jan 27, 2009, 5:45:06 AM1/27/09
to
On 14.01.2009 18:49, Ian G wrote:
> In _Rampell_ [2]:
> "... Those audits are relied on not only by the clients on whose
> financial matters audits are performed but upon a host of other
> individuals and entities who may rely on the information in making
> their own economic decisions. Audited statements are relied upon by
> banks, other creditors, and investors ... In short, the use of
> financial statements attested by "licensees" is so frequently used in
> our economic system as to be indispensable..."

The same could be said about IT security review - including source code
review, processes and server operations. Even the exact same words, if
you replace "financial"/"economic" with "security" and "banks, other
creditors, and investors" with "all the customers".

Ian G

unread,
Jan 27, 2009, 1:05:27 PM1/27/09
to mozilla's crypto code discussion list
Hi Ben,

There is a danger in editing out the subtleties, and then discovering it
says what we want :) That "..." is significant!


Going back to my quote, and adding my emphasis:

==================


1. From my notes: I found no law or case law that nails this down, but
there is dictum ("non-binding opinion") that is careful to draw a line
between financial audits and any other role. In _Rampell_ [2]:

"...*While others may provide* tax services or bookkeeping services,
"*licensees of the board of accountancy*" alone perform the 'attest'

function, which refers to the process by which "licensees" audit

*financial statements* and express opinions as to those financial
statements. Those audits are relied on not only by the clients on whose

financial matters audits are performed but upon a host of other
individuals and entities who may rely on the information in making their
own economic decisions. Audited statements are relied upon by banks,
other creditors, and investors ... In short, the use of financial
statements attested by "licensees" is so frequently used in our economic
system as to be indispensable..."

==================


Read that first sentence very carefully. The judge is saying, no. Not
only is this area reserved for accountants, they cannot use their
special "audit powers" outside the strict role [1], and the public
cannot rely on the "attest function" outside the strict financial audit.

The reason he is saying the latter is that if the special power were
broadened, the function of the audit *over financial statements* would
be weakened, and that is too indispensable to our economic system to be
permitted [2].

In other news, a new US Treasury secretary has been sworn in. He
probably gets the job of managing the largest financial handout in
history, to the most bankrupt industry with the richest employees,
and the most auditors.

I wonder if the Senate thought of asking him how the attest
function and the broadening role of accountancy firms in consulting
affected the financial crisis [3]?

iang,
auditor, but not lawyer, nor accountant, nor licensed to the attest
function in any relevant state!

[1] remembering this is unreliable on several counts

* I don't know law,
* a new court might think of things differently,
* the above is not a precedent, just an opinion called "dictum"
* a state law sometimes does think differently
* indeed this guy says some of that directly
http://www.allbusiness.com/accounting/methods-standards/368326-1.html
* I think I quoted from him, and he may have snipped important parts!
* there are other countries involved
* I'm an auditor, so I am biased
* I'm not an accountant, so I am biased......


[2] Even if unreliable, it does tell us what we might want to do. In order
for the public to rely on a qualified and licensed *accountant*
performing the *attest function* in a systems audit of computers with no
financial component in sight, we would have to get the judge to agree to
things like:

a) *accountancy* is a good basis for judging computers, crypto,
law, contracts, software design, the Internet, etc.

b) the board of accountancy is a good test of people who can do the
above,

c) the board of accountancy wants this massive expansion,

d) the accountants want this massive expansion,

e) the use of the accountants in this area does not weaken the
power of the attest function in any way over financial statements.

f) or, the weakening of finance is ok because we strengthen the
security field?

and perhaps some more.


[3] Of the above, a), b), c) and f) are points reasonable people might
discuss without reaching agreement. d) is a slam-dunk: the whole history
of the accounting field in the last 20-30 years screams "YES!". e) is
also an easy call: the whole history of the financial crisis screams
something too :)

Frank Hecker

unread,
Feb 4, 2009, 1:27:05 PM2/4/09
to
Eddy Nigg wrote:
> On 01/03/2009 05:38 AM, Eddy Nigg:
>> Before anybody else does, I prefer from posting it myself :-)
>>
>> http://blog.phishme.com/2009/01/nobody-is-perfect/
>> http://schmoil.blogspot.com/2009/01/nobody-is-perfect.html
>>
>> For the interested, StartCom is currently checking if I can release our
>> internal "critical event report" of this event to the public (there
>> might be some internal information which should not be disclosed).
>>
>
> The report is available from here: https://blog.startcom.org/?p=161

(I'm continuing going back through old threads, including reading the
messages in this one I hadn't previously read. My apologies for being
somewhat abbreviated in my comments, and for not responding to every
point raised; I thought it was more important that I get my thoughts out
there and close out these issues, rather than wait for time I won't have
to do longer posts.)

My overall comments:

1. I appreciate your being proactive in posting about the StartCom
problems that were discovered and getting them fixed in a timely manner.
I wish more CAs would be more forthcoming about things like this.

2. I understand that what happened in the case of StartCom was not
exactly the same as what happened in the case of Comodo/CertStar.
However it's part of web security basics to assume that whatever a
client sends to a server is untrusted and must be (re)verified on the
server side to forestall potential attacks (e.g., SQL injection, etc.)
So IMO you get points for prompt disclosure and fixes, but in the end
you messed up just like Comodo and CertStar did.

3. To paraphrase what Nelson (?) wrote, "bugs happen". I don't think the
PKI/CA system is so fragile that it necessarily comes tumbling down
whenever a CA or RA makes a mistake. (If it really is that fragile then
we have bigger problems than those we're discussing here.) From a policy
point of view I think our interest is having CAs acknowledge problems
and fix them in a timely manner, both in terms of revoking certs when
needed and also in terms of addressing any underlying root causes.

4. In line with the previous point, I am not planning to recommend
removal of StartCom's root over this incident, both because the issue
been addressed by StartCom and also because it appears to be an isolated
incident that does not indicate any larger problem of incompetence or
maliciousness.

Frank

--
Frank Hecker
hec...@mozillafoundation.org

Frank Hecker

unread,
Feb 4, 2009, 1:37:10 PM2/4/09
to

Gerv, thanks for handling this. For the record, I'm happy with moving
policy discussions to a separate group. However, bug 475473 implies that
the new group is up and running, yet I can't find it either in
Thunderbird (i.e., via NNTP) or on the Google Groups site.

Johnathan Nightingale

unread,
Feb 4, 2009, 1:54:52 PM2/4/09
to mozilla's crypto code discussion list


I think that bug isn't resolved yet because Google Groups has been
acting up a bit lately. Another recent newsgroup creation
(mozilla.dev.tree-management) was finally picked up about a week after
creation, but messages still aren't appearing there.

Cheers,

Johnathan

---
Johnathan Nightingale
Human Shield
joh...@mozilla.com

Frank Hecker

unread,
Feb 4, 2009, 2:11:54 PM2/4/09
to
Johnathan Nightingale wrote re bug 475473:

> I think that bug isn't resolved yet because google groups has been
> acting up a bit lately. Another recent newsgroup creation,
> (mozilla.dev.tree-management) was finally picked up about a week after
> creation, but messages still aren't appearing there.

OK, thanks for the info. I guess we'll just wait for this to resolve
itself, then we can verify that the new group is operating properly (and
the mailing list also) and then make an announcement in m.d.t.crypto and
m.d.security.

Eddy Nigg

unread,
Feb 4, 2009, 3:00:19 PM2/4/09
to
On 02/04/2009 08:27 PM, Frank Hecker:

> 2. I understand that what happened in the case of StartCom was not
> exactly the same as what happened in the case of Comodo/CertStar.
> However it's part of web security basics to assume that whatever a
> client sends to a server is untrusted and must be (re)verified on the
> server side to forestall potential attacks (e.g., SQL injection, etc.)

Correct. Since there wasn't much interest in this issue I haven't kept
updating on this. It appears, according to our records, that the time
window for this error was also limited, and that some changes in the
code (incidentally made for EV certs) allowed for this type of attack.
As you correctly state, lots of potential attacks are taken into
consideration, validated and monitored. We have also improved the
procedures for such changes.
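
To make that concrete, here is a rough Python sketch of the kind of
server-side re-check being discussed. It is purely illustrative and not
StartCom's actual code; the function, session fields and domain names
are all invented for the example.

# Illustrative only: re-verify client-supplied values against server-side
# records, rather than trusting whatever arrives in the POST body.

def issue_certificate(session: dict, posted_domain: str) -> str:
    """Issue a cert only for a domain this account has actually validated.

    A client-side proxy can rewrite the POSTed domain, but it cannot
    rewrite the server's own record of which domains were validated.
    """
    if posted_domain not in session.get("validated_domains", set()):
        raise PermissionError("domain was never validated for this account")
    # Placeholder for the real signing step.
    return "certificate issued for " + posted_domain

# Example: an account that validated only attacker.example cannot obtain
# a certificate for another domain by tampering with the POST data.
session = {"validated_domains": {"attacker.example"}}
print(issue_certificate(session, "attacker.example"))       # succeeds
try:
    issue_certificate(session, "some-other-domain.example")  # rejected
except PermissionError as err:
    print("rejected:", err)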


> So IMO you get points for prompt disclosure and fixes, but in the end
> you messed up just like Comodo and CertStar did.

Nonono :-)

I see the main differences as follows, and I believe they are mainly
policy-related (and allow me to comment on this since you made the
comparison).

StartCom requires domain control validation performed in a very specific
way. This is a matter of choice and policy. StartCom doesn't out-source
the validation nor use RAs, hence doesn't have to implement controls
for RAs and third-party validations.

Now, due to the validations StartCom DOES perform and due to the careful
recording of all data, StartCom had the ability to KNOW exactly which
email address was used to validate a domain. Due to other protection
layers, including staff awareness, this incident's potential damage was
kept to a minimum.
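
For illustration only, a minimal Python sketch of email-based
domain-control validation with record keeping, roughly along the lines
described above. The allowed address list, token format and log
structure are all assumptions, not StartCom's implementation.

import secrets
from datetime import datetime, timezone

# Assumption: the challenge may only go to an administrative address at
# the domain being validated.
ALLOWED_LOCAL_PARTS = {"postmaster", "hostmaster", "webmaster"}

validation_log = []  # stands in for a durable audit store


def start_domain_validation(domain: str, chosen_address: str) -> str:
    """Record which address was used and return the challenge token."""
    local_part, _, mail_domain = chosen_address.partition("@")
    if mail_domain != domain or local_part not in ALLOWED_LOCAL_PARTS:
        raise ValueError("challenge must go to an admin address at the domain")
    token = secrets.token_urlsafe(16)
    validation_log.append({
        "domain": domain,
        "email": chosen_address,   # kept on record, so the CA knows exactly
        "token": token,            # which address validated which domain
        "issued": datetime.now(timezone.utc).isoformat(),
        "confirmed": False,
    })
    return token  # would be mailed to chosen_address, never shown on the form


def confirm_domain_validation(domain: str, token: str) -> bool:
    """Mark the validation confirmed only if the mailed token comes back."""
    for entry in validation_log:
        if entry["domain"] == domain and entry["token"] == token:
            entry["confirmed"] = True
            return True
    return False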

Unfortunately, at CertStar there was no domain validation at all; such
steps simply didn't exist. And Comodo apparently failed to verify how
CertStar performs validation.

Does Comodo know how the domain name was validated? Do they have any
records of which email addresses were used, or any other evidence of how
the supposed validation was done? Which protective layers prevented
issuance for a high-profile domain? Or anything?

From where I stand, I see clear differences between the two cases, both
policy- and implementation-wise. The differences can have consequences,
because a fix in a program is one thing, a fix in a policy quite another.

(As such, neither event came close to a situation which isn't
correctable.)

> 3. To paraphrase what Nelson (?) wrote, "bugs happen". I don't think the
> PKI/CA system is so fragile that it necessarily comes tumbling down
> whenever a CA or RA makes a mistake. (If it really is that fragile then
> we have bigger problems than those we're discussing here.) From a policy
> point of view I think our interest is having CAs acknowledge problems
> and fix them in a timely manner, both in terms of revoking certs when
> needed and also in terms of addressing any underlying root causes.

Frank, of course every effort has to be made to prevent any failures,
but it's a reality that vulnerabilities and bugs can happen. The MD5
collision and the Debian weak keys are just two more examples. It
really matters how prepared a CA is to act under such circumstances. It
matters which additional protection layers exist and which conflicts of
interest would prevent a CA from acting upon a known failure or
vulnerability. In this respect I hope that StartCom served as an example
to follow, even under the circumstances of a negative event. Or perhaps
because of it, since the real test happens under such circumstances and
not when everything is rosy ;-)

Eddy Nigg

unread,
Feb 4, 2009, 6:28:06 PM2/4/09
to
On 02/04/2009 09:11 PM, Frank Hecker:

> OK, thanks for the info. I guess we'll just wait for this to resolve
> itself, then we can verify that the new group is operating properly (and
> the mailing list also) and then make an announcement in m.d.t.crypto and
> m.d.security.
>

Seems to work here. Cross-posting to m.d.s.policy.

But it seems messages are slow in appearing anyway, no matter which list...

Ian G

unread,
Feb 5, 2009, 8:14:14 AM2/5/09
to mozilla's crypto code discussion list, dev-se...@lists.mozilla.org
Excellent, OK, so I went here:

https://lists.mozilla.org/listinfo/dev-security

and subscribed. I guess it is up to each person to do that.

Now, the list charter! As a starting point:

==================
a. Discussion on security policy, governance, directions and
architecture in common for Mozilla products, as managed by Mozilla
Foundation and implemented by various groups in the Mozilla family.

b. Responsibility for the management and improvement of the Mozilla CA
policy, including:

i. making changes to the policy
ii. managing the root list
iii. dealing with problems

This list is led by the security module owner, as appointed by Mozilla
Foundation.
==================

Perhaps follow up to that should be only on the list itself. Or perhaps
not :)

iang

Eddy Nigg

unread,
Feb 5, 2009, 8:22:32 AM2/5/09
to
On 02/05/2009 03:14 PM, Ian G:

> Excellent, OK, so I went here:
>
> https://lists.mozilla.org/listinfo/dev-security
>
> and subscribed. I guess it is up to each person to do that.
>

Ian, this is the wrong list. The new list is called dev.security.policy,
not dev.security.

It seems that the new list doesn't show up at listinfo. Perhaps try with
the NNTP reader.

Ian G

unread,
Feb 5, 2009, 10:44:12 AM2/5/09
to mozilla's crypto code discussion list
On 5/2/09 14:22, Eddy Nigg wrote:
> On 02/05/2009 03:14 PM, Ian G:
>> Excellent, OK, so I went here:
>>
>> https://lists.mozilla.org/listinfo/dev-security
>>
>> and subscribed. I guess it is up to each person to do that.
>>
>
> Ian, this is the wrong list. The new list is called dev.security.policy,
> not dev.security.


Ouch. Blew that one!


> It seems that the new list doesn't show up at listinfo. Perhaps try with
> the NNTP reader.


OK, I'll wait. I don't have an NNTP reader, or don't know what one is.
Is it something in Firefox or Thunderbird?

iang

Frank Hecker

unread,
Feb 5, 2009, 12:34:53 PM2/5/09
to
Ian G wrote:
> OK, I'll wait. I don't have an NNTP reader, or don't know what one is.

We'll forgive you the confusion. It's like saying "HTTP reader" instead
of "browser" :-)

> Is it something in Firefox or Thunderbird?

You can read Mozilla newsgroups in Thunderbird by creating a "newsgroup"
account, specifying news.mozilla.org as your server, and then
subscribing to the mozilla.* groups you want to read.

As it happens, I tried just now to subscribe to the
mozilla.dev.security.policy newsgroup, and lo and behold it is now
present in the list and appears to be working.

The corresponding dev-secur...@mozilla.org list doesn't yet show
up in the list of mailing lists at lists.mozilla.org, though it does
have its own page now:

https://lists.mozilla.org/listinfo/dev-security-policy

so perhaps it's working as well. (I don't read these forums via email,
perhaps you or someone else can try subscribing.)

Finally, Google Groups does not appear to know about the new group yet:

http://groups.google.com/group/mozilla.dev.security.policy

so there's no easy way to post URLs for forum threads.

Given the problems we've had in getting this group up and running, I'd
prefer to wait a few more days before we start switching policy-related
discussions over to it.

Gervase Markham

unread,
Feb 5, 2009, 1:36:32 PM2/5/09
to
Eddy Nigg wrote:
>> So IMO you get points for prompt disclosure and fixes, but in the end
>> you messed up just like Comodo and CertStar did.
>
> Nonono :-)
>
> I see the main differences as followed and I believe the main
> differences are policy wise (and allow me to comment on this since you
> made the comparison).

Eddy: I don't think Frank is saying that you made the _same_ mistakes as
CertStar (out-sourcing validation etc. etc.), but that you made
_a_mistake_, just like they did. He then goes on to make the point that
making a mistake is not the end of the world.

Gerv

Eddy Nigg

unread,
Feb 5, 2009, 1:54:54 PM2/5/09
to
On 02/05/2009 08:36 PM, Gervase Markham:

> Eddy: I don't think Frank is saying that you made the _same_ mistakes as
> CertStar (out-sourcing validation etc. etc.), but that you made
> _a_mistake_, just like they did. He then goes on to make the point that
> making a mistake is not the end of the world.

Right, and I stated something similar. I want to confirm that we viewed
this mistake as severe and acted upon it accordingly.

There's a point I'm trying to make, with and without relation to any
shortcomings on StartCom's side, which I hope I'll be able to address
in due time. Nevertheless, thanks for clarifying.

Ian G

unread,
Feb 5, 2009, 1:55:34 PM2/5/09
to mozilla's crypto code discussion list
On 5/2/09 18:34, Frank Hecker wrote:
> Ian G wrote:
>> OK, I'll wait. I don't have an NNTP reader, or don't know what one is.
>
> We'll forgive you the confusion. It's like saying "HTTP reader" instead
> of "browser" :-)

Oh, it's newsgroup reader, got it, thanks.


>> Is it something in Firefox or Thunderbird?
>
> You can read Mozilla newsgroups in Thunderbird by creating a "newsgroup"
> account, specifying news.mozilla.org as your server, and then
> subscribing to the mozilla.* groups you want to read.


OK, I tried that and it blew up my Thunderbird. I'm running the beta
3.0b1, probably just too brave of me. Another furfy is that I'm trying
to set it to localhost:port to go out through tunnels. For some strange
reason, it got itself in a real mess, and started adding multiple
accounts with different names every time I tried to set the port number
in the account.

If I get another chance I'll try and do more investigation, but am busy
now. I never could handle these newfangled newsgroup forums :)


...

> The corresponding dev-secur...@mozilla.org list doesn't yet show
> up in the list of mailing lists at lists.mozilla.org, though it does
> have its own page now:
>
> https://lists.mozilla.org/listinfo/dev-security-policy
>
> so perhaps it's working as well. (I don't read these forums via email,
> perhaps you or someone else can try subscribing.)


Yes, tried that, am subscribed, I'll await someone else's post this time.

> Given the problems we've had in getting this group up and running, I'd
> prefer to wait a few more days before we start switching policy-related
> discussions over to it.


OK, so we all know we should do that ... (and there is no automatic
copying of everyone to the new list).


iang

Ian G

unread,
Feb 9, 2009, 11:45:37 AM2/9/09
to mozilla's crypto code discussion list
On 5/2/09 18:34, Frank Hecker wrote:

> https://lists.mozilla.org/listinfo/dev-security-policy
>
> so perhaps it's working as well. (I don't read these forums via email,
> perhaps you or someone else can try subscribing.)

Yes, email is working fine. Dunno about the rest.

> Given the problems we've had in getting this group up and running, I'd
> prefer to wait a few more days before we start switching policy-related
> discussions over to it.


Np. I've posted something soft, hopefully non-controversial and
non-critical: a suggestion on the list charter.

iang

Ben Bucksch

unread,
Feb 9, 2009, 2:15:50 PM2/9/09
to
On 09.02.2009 17:45, Ian G wrote:
> I've posted something ... hopefully non-controversial ...: a
> suggestion on the list charter.

That was a good one.

Ian G

unread,
Feb 9, 2009, 2:23:06 PM2/9/09
to mozilla's crypto code discussion list


It didn't last more than 30 seconds :-) Oh well, I suppose the list
will be active some time.

iang

Kyle Hamilton

unread,
Feb 9, 2009, 9:55:26 PM2/9/09
to mozilla's crypto code discussion list
Can we please have someone at Mozilla light a fire under the sysadmin
staff to get this working?

-Kyle H

> --
> dev-tech-crypto mailing list
> dev-tec...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-tech-crypto
>

Johnathan Nightingale

unread,
Feb 9, 2009, 9:58:03 PM2/9/09
to mozilla's crypto code discussion list
This isn't a problem with our IT folks; they typically solve their part
in record time. Google Groups has been having trouble lately picking
up both this and another group that was recently created. We've
contacted them about it, but we don't really want a bunch of people
posting threads in a group that's not yet indexed by the dominant
newsgroup search provider.

I imagine updates will be posted in the bug as they become available,
though.

Cheers,

Johnathan

Kyle Hamilton wrote:
> Can we please have someone at Mozilla light a fire under the sysadmin
> staff to get this working?
>
> -Kyle H
>
> On Mon, Jan 26, 2009 at 8:20 PM, Gervase Markham <ge...@mozilla.org> wrote:
