What can Diaspora learn about security from Microsoft?

j...@qworky.net

Oct 7, 2010, 12:59:00 PM
to diaspora-dev
Some suggestions so far: train the developers, inspect the code, do
threat modeling, use the tools — and develop ones that don’t exist,
include security in the software lifecycle, be wary of legacy code,
and reach out to the security community.

More at http://www.talesfromthe.net/jon/?p=1940 ... feedback welcome!

jon

Sarah Mei

Oct 7, 2010, 3:08:46 PM
to diaspo...@googlegroups.com
It's certainly an interesting approach to security. It has some
factual inaccuracies though that undermine the suggestions. I suggest
reading The Cathedral & The Bazaar for an alternative approach to
security that's more in line with open source.
http://bit.ly/p2vHe

j...@qworky.net

Oct 7, 2010, 8:41:33 PM
to diaspora-dev
Thanks for the feedback, Sarah. What are the factual inaccuracies?
I'm happy to fix them ...

I like a lot of things about the Cathedral and the Bazaar, but "many
eyes make bugs shallow" is only true for security if the eyes know
what they're looking for -- and even then it doesn't provide a
solution for architectural and protocol issues. Sendmail is a classic
example of a situation where bugs continued to be unearthed after
years and years of scrutiny. The open source projects I know of
that take security really seriously use a lot of these same practices
as Microsoft does. That said, I probably should pay more attention to
the differences between the environments in the presentation; I kind
of hand-waved at it in the blog post. So, good point.

jon


On Oct 7, 12:08 pm, Sarah Mei <sarah...@gmail.com> wrote:
> It's certainly an interesting approach to security. It has some
> factual inaccuracies though that undermine the suggestions. I suggest
> reading The Cathedral & The Bazaar for an alternative approach to
> security that's more in line with open source. http://bit.ly/p2vHe

PeterH

Oct 8, 2010, 8:54:22 PM
to diaspora-dev
The biggest lesson on security I see looking at Microsoft is how not
to do it.
Security needs to be designed into the core system from the start.
Adding security after the fact, like many Microsoft products do, just
doesn't work. Permissions need to be in the hands of resource access
modules that present consistent controls to everything, not handled
piecemeal at the user interface layer.
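
For instance, here's a minimal sketch of that idea (Python, with
hypothetical names; not from any actual codebase): one gate that every
code path must call to reach a resource, so a permission check can
never be forgotten at the UI layer.

    class ResourceGate:
        """Single choke point for resource access (illustrative only)."""

        def __init__(self, acl, store):
            self.acl = acl      # e.g. {("photo", 42): {"alice", "bob"}}
            self.store = store  # e.g. {("photo", 42): b"...bytes..."}

        def fetch(self, user, kind, res_id):
            # The check lives here, once, for every caller: web UI,
            # API, and background jobs alike.
            if user not in self.acl.get((kind, res_id), set()):
                raise PermissionError(f"{user} may not read {kind} {res_id}")
            return self.store[(kind, res_id)]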

On Oct 7, 9:59 am, "j...@qworky.net" <j...@qworky.net> wrote:
> Some suggestions so far: train the developers, inspect the code, do
> threat modeling, use the tools — and develop ones that don’t exist,
> include security in the software lifecycle, be wary of legacy code,
> and reach out to the security community.
>

Dan White

unread,
Oct 9, 2010, 10:24:59 PM10/9/10
to diaspo...@googlegroups.com
On 08/10/10 17:54 -0700, PeterH wrote:
>The biggest lesson on security I see looking at Microsoft is how not
>to do it.
>Security needs to be designed into the core system from the start.
>Adding security after the fact, like many Microsoft products do, just
>doesn't work. Permissions need to be in the hands of resource access
>modules that present consistent controls to everything, not handled
>piecemeal at the user interface layer.

+1

Implementing a secure protocol (which places strict demands on any
implementation) helps. Using existing open source libraries and daemons,
rather than reinventing the wheel, is a really smart way to approach
security enlightenment.

Postfix is a great model. Break your system up into components with
clean interfaces. Implement those components as distinct processes and
use unix domain sockets (with kernel-level security), for example, to
facilitate message passing between them.

For example, one way to approach the problem of giving a system access to
private keys is to hide them behind a daemon accessible via a unix domain
socket. Don't give the web user direct access to the private keys. Just
let the web user pass messages into the socket and retrieve signed
messages out of it, or pass encrypted messages in and retrieve decrypted
messages. A compromise of the web server account would not expose any
private keys (unless the system is rooted).
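
Here's a minimal Python sketch of that shape. The path and names are
made up, and an HMAC stands in for real public-key signing; the point
is only that the secret never leaves the daemon's process.

    import hashlib
    import hmac
    import os
    import socketserver

    SOCKET_PATH = "/var/run/signer.sock"  # hypothetical location
    SECRET = os.urandom(32)  # in practice, loaded from root-only storage

    class SignHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # The client writes its message, then closes its write side.
            message = self.rfile.read()
            tag = hmac.new(SECRET, message, hashlib.sha256).hexdigest()
            self.wfile.write(tag.encode())

    if os.path.exists(SOCKET_PATH):
        os.unlink(SOCKET_PATH)
    with socketserver.UnixStreamServer(SOCKET_PATH, SignHandler) as server:
        # Filesystem permissions on the socket, not application code,
        # decide who may talk to the daemon.
        os.chmod(SOCKET_PATH, 0o660)
        server.serve_forever()

The web process just connects, sends a message, and reads the
signature back; even if that account is compromised, an attacker can
request signatures but can't walk off with the key.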

--
Dan White

Sarah Mei

Oct 10, 2010, 1:27:41 AM
to diaspo...@googlegroups.com
> What are the factual inaccuracies? I'm happy to fix them ...

Here's the main one I noticed:

"As Wayne Ariola pointed out in a comment, Diaspora’s current
"find-and-fix" approach mirrors industry mindset … and it doesn’t
work."

It does, actually, over time. You mentioned sendmail as a
counterexample, but mail servers, in general, are hard. They are still
fixing critical bugs in Exchange and it's, what, 15 years old? There
are a lot of rock-solid open source projects that got that way BECAUSE
they had thousands of people looking at the code.

But - and this is key - it doesn't happen overnight. The first release
of Apache had huge security holes, but these days it's the canonical
example of a secure open source project.

> I like a lot of things about the Cathedral and the Bazaar, but "many
> eyes make bugs shallow" is only true for security if the eyes know
> what they're looking for -- and even then it doesn't provide a
> solution for architectural and protocol issues.

Having done both styles of development (at Microsoft, no less, in my
first job out of college - if we ever have beers together I'll totally
tell you stories...) I think you may be making some assumptions here
that don't hold in open source.

First, you're assuming that there is an extremely limited supply of
people who can help with security. Certainly the number of folks who
understand complete end-to-end web application security is very small.
But almost everyone who's written a complex web app knows *something*.

For example, I know how to set up user accounts in a Rails app so that
people can't see other people's data. Other people know a lot about
encryption, or identity, or protocol design (given the number of
messages on this list!). Everyone knows *something*, and if you give
them all access to the code, their collective knowledge is more than
any one person - or any four people.
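
For example, the gist of that user-scoping trick, sketched here in
plain Python with SQLite standing in for a real Rails stack
(illustrative only, not Diaspora's code):

    import sqlite3

    def posts_for(db: sqlite3.Connection, current_user_id: int):
        # The WHERE clause is the access control: there is deliberately
        # no code path that fetches posts without a user id attached.
        return db.execute(
            "SELECT id, body FROM posts WHERE user_id = ?",
            (current_user_id,),
        ).fetchall()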

So while your suggestions make sense for a closed-source project, they
do not for diaspora, IMO. However, I think there are open-source
equivalents in some cases that could be useful:

1. Your suggestion: training. Typically open source folks prefer to
learn by doing, which is why they went into open source in the first
place. And we don't need to train anyone, really, because our
community already includes all the knowledge we need.

2. Your suggestion: code reviews. The diaspora guys do some pair
programming, which is continuous code review. But more importantly,
anyone can review the code at any time. It's one of the most-watched
repos on github. You bet your ass that code's being reviewed
constantly.

3. Your suggestion: threat modeling. I don't see a lot of value in
that, but maybe that's because it all seems obvious to me. The beauty
of open source, though, is that if YOU think it's valuable, you can do
it and post your results. (Side note - anyone else find the threat
modeling card game a little creepy? :)

4. Your suggestion: use the tools. I totally agree with this
suggestion, but I think they just need to actually test-drive their
development (instead of just writing tests first, or backfilling tests
later).

5. Your suggestion: include security in the product lifecycle. Perhaps
the open source equivalent of this is "be mindful of security, and fix
issues when they're raised." Unlike a traditional project, there's no
product lifecycle, no Gantt chart, no 2-year schedule. And since all
bugs are shallow given a large enough community, and our community is
large enough, I don't think we need more process around the lifecycle.
As Evan Phoenix is fond of saying, "premature process is the root of
all frustration" in open source projects.

6. Your suggestion: be wary of legacy code. Very true. Code becomes
legacy the instant you commit. Keep the code clean, constantly
refactor, and write tests, so that refactoring is possible.

7. Your suggestion: reach out to the security community. What better
way to reach out to the security community than to hand them your
code? Done.

I'm not saying that the diaspora code, or the team's process, is
perfect. I'd like to see more test-driving, and more community
engagement.

But if they can get there, and get folks to fix the stuff they know
how to fix, we'll end up with an awesome, and sufficiently secure,
project. Eventually. :)

Sarah

j...@qworky.net

Oct 10, 2010, 2:23:48 PM
to diaspora-dev
Excellent feedback all around, thanks very much for taking the time.
The revised draft incorporates a lot of these suggestions as well as
feedback from Hacker News -- http://www.talesfromthe.net/jon/?p=1967

> > "As Wayne Ariola pointed out in a comment, Diaspora’s current
> > "find-and-fix" approach mirrors industry mindset … and it doesn’t
> > work."
>
> It does, actually, over time.

Well, admittedly 'work' is ambiguous. But the contrast between
Postfix (designed from the beginning with security in mind) and
sendmail or Exchange (not so much) is exactly what I'm talking
about. And Apache did a full redesign as part of its evolution, so
again it looks to me like an argument against find-and-fix.

> 5. Your suggestion: include security in the product lifecycle. Perhaps
> the open source equivalent of this is "be mindful of security, and fix
> issues when they're raised"

I changed it to "Include security in the software engineering
process" to avoid the connotations of 'lifecycle' ... whether or not
it's heavyweight or documented, there's a process in place in every
open source and commercial project. And when I look at the open
source projects that are known for their security, they don't wait
for issues to be raised and then fix them; they think proactively.

> (Side note - anyone else find the threat modeling card game a little creepy? :)

Well, it is from Microsoft ... what did you think was creepy about it?

jon

j...@qworky.net

Oct 14, 2010, 11:53:14 AM
to diaspora-dev
I'm giving the talk tomorrow at Microsoft's Blue Hat security
conference, and so was at their schmoozefest last night getting
various people's feedback, both on this specific topic and on Diaspora
more generally. Most of the speakers at Blue Hat aren't from MS, and
there are plenty of attendees from other companies as well, so it was
interesting to hear what people had to say.

Even in the security community, people agreed that Diaspora made the
right choice by initially focusing on functionality rather than
security, and everybody hopes that the project will succeed. That
said, there's a lot of skepticism ... well, y'know, it's their job to
be cynical.

What's encouraging, though, is that everybody sees Diaspora's value.
The general feeling is that, one way or another, the time is right for
Facebook alternatives. Whether or not it's Diaspora that cracks
through to mainstream adoption, people see it as a great learning
experience and a proof by example of just how eager people are for an
alternative.

jon

PS: Last-minute feedback still welcome -- no guarantees I can
incorporate it, but I'll do my best. http://www.talesfromthe.net/jon/?p=1967