I can see your point, but OTOH it's also good for the network to be
tested with more than a few test users, so maybe a balance needs to be
struck somewhere? I'm thinking of setting up a server and telling a few
geek friends of mine that they can sign up if they like. But I take your
point that this isn't the time to start inviting everyone I know.
Hear, hear! The only point I disagree on is that folks should not be
offering seed hosting. They should. There are many of us out here who
are not developers per se (business analysts, PMs, independent tech
consultants, bloggers) but are very interested in seeing this project
succeed and in understanding how to set it up, configure it, use it, so
we can hit the ground running somewhere around RC2 to start
evangelizing, recommending and supporting it. So we can become really
good issue reporters. So we can be the link between the
average-everyday-end-user community and the devs.
That said, anyone offering a hosted seed server at this juncture should
do so with CAVEATS in all caps.
kazar
http://ade.pt
PS: tech writers, too
A system that proposes to support privacy presumably rests on security.
The central issue many people are concerned about (reading comments
elsewhere) is that security is not an "add-on". Example from:
http://news.ycombinator.com/item?id=1696396
http://news.ycombinator.com/item?id=1699641
"Privacy was the key kickoff point in the first place. You can't have good
privacy without good security. When these are your primary reasons for
getting started, your 'user experience' has to entail security. ... As
someone else said yesterday, having an 'HTTP' for social media would be more
important than having an 'Apache' for social media."
Ideally (though few manage this), security needs to be woven intrinsically
and mutually throughout an entire endeavor at all levels of the social
process, and from beginning to end, from recruitment to developer training
to coding standards to code reviews (or whatever works) to archiving
procedures to product announcements to bug fix procedures to communications
with the public, as well as at all levels of the code itself, the tests, and
so on. In many situations, security is like a chain -- any weak link
makes it fail. The less a project embodies this end-to-end security ethic,
the more constant vigilance or constant exercise of power is required of
everyone involved in it (extrinsic security and/or unilateral security).
Related:
http://www.google.com/search?q="security+is+not+an+add-on"
http://en.wikipedia.org/wiki/Computer_security
"In the real world, the most security comes from operating systems where
security is not an add-on."
For examples of this thinking in action, see, on intrinsic security at a
physical level:
http://en.wikipedia.org/wiki/Brittle_Power
"According to the authors, a resilient energy system is feasible, costs
less, works better, is favoured in the market, but is rejected by U.S. policy."
And on mutual security at a social level:
http://www.beyondintractability.org/audio/morton_deutsch/?nid=2430
"And that better way of relating involves having a sense that one can only
have security if there's mutual security."
So, in that sense, security is cultural. If you try to bolt on security
after the fact (like trying to use a big military to defend long oil supply
lines instead of having local power sources like solar panels, or trying to
be the one who has all the power and whom everyone is afraid of rather than
being the one who has a lot of friends who all share power and look out for
each other), you end up spending a lot of time, money, and lives on
"security" and you possibly still end up insecure. :-(
Asking users via email to be careful or not to use code that is released
is an example of relying on "extrinsic" security instead of "intrinsic"
security. I'm not saying it's not a reasonable request (sure it is, and I've
made such requests myself in other contexts) -- I'm just placing it as an
example on this intrinsic vs. extrinsic spectrum.
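To make that distinction concrete in code (a toy sketch of my own, in
Python, not anything from the Diaspora codebase; the function names are
made up for illustration): the first version below merely asks callers to
be careful, while the second enforces the property in the code itself, so
no warning is needed:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, bio TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'hello')")

    def find_user_extrinsic(name):
        """Extrinsic security: the safety lives in a warning, not in the code.
        WARNING: callers must never pass untrusted input!"""
        return conn.execute(
            "SELECT * FROM users WHERE name = '%s'" % name).fetchall()

    def find_user_intrinsic(name):
        """Intrinsic security: the parameterized query makes SQL injection
        impossible here, no matter what callers do."""
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (name,)).fetchall()

    hostile = "x' OR '1'='1"
    print(find_user_extrinsic(hostile))  # matches every row; the warning did nothing
    print(find_user_intrinsic(hostile))  # safely matches nothing

The point is the same as with the email to users: a warning shifts the
burden of vigilance onto everyone downstream, while the intrinsic version
removes the need for vigilance entirely.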
Unfortunately, intended or not, the first Diaspora release has been taken
by some people as a statement about the culture of Diaspora development
as regards end-to-end security, even if it was not an intentional statement
or even if it is perhaps not an accurate assessment relative to intent or
plans. So, it is going to take a bit of work to recover from that, but no
doubt it can be done by showing steady progress toward creating a developer
culture that has a security mindset woven throughout it.
So how does one get security in practice, assuming one wants to do it
end-to-end? What engineering attitude is best to cultivate within that
mindset?
Often, the best security is just simplicity.
See:
http://www.google.com/search?q="security+through+simplicity"
http://www.smh.com.au/articles/2003/09/17/1063625084009.html
"Bruce Schneier is one of the world's best known and most pragmatic security
experts. He is also a man of considerable breadth of knowledge, if one were
to judge from his latest book, Beyond Fear. ... Security, explains Schneier,
is not something that one thinks of in absolute terms; it is a series of
trade-offs. ... There are no sacred cows here. Schneier categorically
separates the sheep from the goats with no dogma guiding his reason. There's
plenty of commonsense buried within the tome - people, not technology, are
the greatest asset when it comes to security; simplicity, not complexity,
makes us safer. To quote him: "Good security systems usually involve
technology and people working together, but the people have to run the
technology, not vice versa.""
More on that book:
"Beyond Fear: Thinking Sensibly about Security in an Uncertain World"
http://www.schneier.com/book-beyondfear.html
While most engineering is about making tradeoffs ("better, cheaper, sooner,
pick two", as when designing a new laptop), the most innovative
solutions let you get, say, both security and functionality in some
context without tradeoffs. But those are rare innovations and may require a
lot of deep thinking that leads to insight into the situation (or
alternatively, they come from one lucky person out of a million thinking
about the problem and finding inspiration by crossing ideas
from some other situation or otherwise seeing the problem from a unique
perspective). In practice, we end up making tradeoffs until we have such an
insight (or learn of an insight that others have had previously).
So, the issue moving forward on an end-to-end security path may not be about
creating lots of complicated code. It may be about creating some simple (but
elegant) code (or even just a standard for communicating, as in the comment
I quoted above) that makes some appropriate tradeoffs between security and
convenience (or pioneers some new innovation not requiring tradeoffs).
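As a toy illustration of that last point (again my own sketch in Python,
not a proposal for Diaspora's actual protocol; the shared secret and
message are made up): two seeds that already share a secret can
authenticate messages with a few lines of standard-library code, trading
the convenience of full public-key infrastructure for an amount of code
small enough to actually audit:

    import hmac, hashlib

    SHARED_SECRET = b"agreed-upon-out-of-band"  # hypothetical pre-shared key

    def sign(message):
        """Return a hex MAC binding the message to the shared secret."""
        return hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()

    def verify(message, mac):
        """Constant-time comparison avoids leaking a timing side channel."""
        return hmac.compare_digest(sign(message), mac)

    msg = b"status update from seed A"
    tag = sign(msg)
    assert verify(msg, tag)
    assert not verify(b"tampered message", tag)

The tradeoff is explicit: exchanging the secret out of band is
inconvenient and does not scale, but every line above is simple enough to
review -- which is exactly the "security through simplicity" idea.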
--Paul Fernhout
http://www.pdfernhout.net/
====
The biggest challenge of the 21st century is the irony of technologies of
abundance in the hands of those thinking in terms of scarcity.