What does security mean to Habari?


Owen Winkler

Jan 29, 2007, 4:38:20 PM
to habar...@googlegroups.com
We've batted this around a few times, and I think it's worth hashing
out completely. I'm not going to advocate any specific functionality
in this message, so people looking to pick a fight here about an
"auto-disable" feature should look elsewhere.

I am under the impression that many open source packages (and in
specific, blog applications) do not go through the process of a
thorough security evaluation. I don't want Habari to be one of those
packages.

I am not a security expert, but I have yet to see any list members
present their opinions as such, so I am hoping to throw my
understanding of security into the discussion so that we can hopefully
move forward in a positive direction.

What is security in Habari?

There are many different applications of security in a web product:

* Prevent unauthorized access to site data.
* Prevent access to change site content.
* Prevent data loss.
* Prevent unintentional distribution of personal details.
* Prevent use of server resources for unintended purposes.

Of course, the easiest ones to realize are the first two. For the
simplest example of security to employ - preventing unauthorized
access to site data - Habari has few capabilities. Currently the
entire public-facing site is open to anyone who happens by. Maybe
this isn't so bad, but note that entrypoints to sensitive areas of the
site (XMLRPC, admin login, APP link) are exposed by the public site.
We can assume that these individual points will be secured by some
other mechanism, which brings us to the second bullet item in the
list.

Habari currently has an authentication system in place to prevent
unauthorized access to edit content. This system is not complete
though, since any user with a login currently has access to change
anything on the site, including software options. The system also
employs different security systems in different places. The admin
area uses a form-based non-SSL login, and Atom authentication takes
place using a server authentication technique. Both of these areas
are also accessible by providing cookies that are not (to my
knowledge) one-off entry tokens, and would continue to work
indefinitely. Some of that seems like a bad idea.

Being that our security for viewing entrypoints (like in APP, you only
see what's editable if you're logged in) is only as good as the measures we
employ to authenticate them, we discover a kind of interdependence of
security between levels. If a user account is compromised on today's
revision of Habari, all of the site options are available for editing.
Is it possible to find the database password via this compromise? I
don't know, but it seems worth evaluating.

This type of interdependence is evident throughout the security of the
system. Assuming you can coax a database password out of the admin by
forging some cookies, who knows how far up the security chain that
password could get you? A compromised password discovered by
scripting an attack on a vulnerable entrypoint could indeed lead to a
full server compromise.

Is there evidence that this has happened? What is the rate of
incidence? I have no idea. It may never have happened, since there
is no centralized reporting to catalog these kinds of incidents. I'm
sure that our sites are not all safe simply because we don't know of
anyone that it has happened to. I've outlined in my prior paragraph a
decent enough example of how someone could compromise a whole - if not
very effectively managed - server with some spare time and a script or
two. I don't believe that allowing known vectors of possible security
breach to persist based on lack of data on their exploitation is a
wise course of action.

Data loss is something that Habari doesn't handle at all. Assuming
that you lose data somehow by breach of security (not by accidentally
clicking "delete"), server error (yes, a failing hard drive is a
security issue), or compromised site, Habari doesn't offer much in the
way of restoring your site to the way it was before the incident.
From talking with people on IRC, it seems their general impression is
that this functionality should be outside the purview of the core
software, but it is my personal hope that we take Habari to the next
level for blog software and cover all of these security issues within
the core product, at least to the level that we have a plan to offer
users, if not some software solution.

Assuming someone gains access to the database, and our prior means of
security have failed, have we taken the steps necessary to ensure that
any personal details associated with the site cannot be used?
Like most web-based applications, we've been storing hashed passwords
in the database. This does a fairly good job of preventing casual
extraction of passwords from our stored data. This seems important
because, as anecdotal evidence tells us, novice users use the same
password everywhere, and so getting the password out of our database
would unlock servers net-wide.

Habari uses SHA-1 to hash passwords. Researchers recently showed that
SHA-1 is weaker than it was designed to be - collisions can be found
far faster than brute force should allow - raising the concern that a
password of sorts could eventually be recovered from an SHA-1 hash.
The IRC discussions about what to do based on this news were fairly
lengthy, as it seemed a viable vector to break Habari's security, but
there is no alternative that is as easily deployed.

This brings us to a truism of security. The more convenience you get,
the less security you have. Allowing open access to a site leaves
that site less secure. Posting your server password on the
server's publicly exposed page may make it easier to remember when
you're logging in, but it's definitely not secure. For absolute
security, you would need to unplug your machine, which isn't
practical. We need to find a good balance between what is practical
and what is acceptable from a security standpoint.

Taking the SHA-1 hashes as an example, an alternative is to use
SHA-256. SHA-256 is not supported natively by the PHP versions many
hosts still run, so it requires an additional extension or a slower
userland implementation. Should we require users to install those
additional libraries, or is SHA-1 good enough for our purposes
presently? It's an interesting question. How convenient should it be
to use Habari?
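
To make the trade-off concrete, here is a rough sketch of what a
hashing helper could look like - it prefers SHA-256 when PHP's hash
extension is present and falls back to salted SHA-1 otherwise. None of
this is existing Habari code; the function names and storage format
are hypothetical, and a per-user random salt is assumed either way:

    <?php
    // Hash a password with a per-user salt, preferring SHA-256 when the
    // hash extension is available and falling back to SHA-1 otherwise.
    // The stored value records the algorithm and salt so it can be
    // verified later.
    function hash_password($password, $salt)
    {
        if (function_exists('hash')) {
            return 'sha256:' . $salt . ':' . hash('sha256', $salt . $password);
        }
        return 'sha1:' . $salt . ':' . sha1($salt . $password);
    }

    // Returns true if the password matches the stored value.
    function verify_password($password, $stored)
    {
        list($algo, $salt, $digest) = explode(':', $stored, 3);
        if ($algo == 'sha256') {
            // Requires the hash extension; verification fails without it.
            return function_exists('hash') && hash('sha256', $salt . $password) == $digest;
        }
        return sha1($salt . $password) == $digest;
    }

    // Usage; a real implementation would generate and store the salt per user.
    $salt   = substr(sha1(uniqid(mt_rand(), true)), 0, 8);
    $stored = hash_password('secret', $salt);
    var_dump(verify_password('secret', $stored));   // bool(true)

Even this small sketch shows the convenience cost: the verification
code has to know which algorithm produced each stored hash, and sites
without the extension silently stay on the weaker algorithm.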

One of the most common security issues these days is cross-site
scripting. These exploits are hard to explain and even harder to
adequately prevent. To what level should Habari expect to provide
protection from such a breach?
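
As one illustration - a minimal sketch only, not an existing Habari
API; the helper name out() is made up - the usual first line of
defense is to escape everything at output time:

    <?php
    // A minimal output-escaping helper: escape at output time, not at input.
    // ENT_QUOTES also escapes single quotes, so the result is safe inside
    // HTML attribute values delimited with either quote style.
    function out($value, $charset = 'UTF-8')
    {
        return htmlspecialchars($value, ENT_QUOTES, $charset);
    }

    // Example: a comment author name containing an injection attempt.
    $author = '<script>document.location="http://evil.example/";</script>';
    echo '<p>Comment by ' . out($author) . '</p>';
    // The tag is rendered as literal text instead of being executed.

Escaping only helps where it is applied consistently, which is exactly
why this is harder to prevent adequately than it is to describe.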

A cross-site scripting attack is likely not as directly devastating as
somehow gaining direct access to the server. Assuming the latter happens,
there's not much to do but activate the server's recovery plan. Being
proactive about security in Habari in this regard can give us an edge
that other packages don't have. It might be possible to do something
in the software to prevent a known issue from allowing a server to be
compromised. This could be done via an update, for instance.

Updates are interesting things. I've proffered several ideas
concerning updates on the IRC channel, and the opinion is
overwhelmingly that users should be responsible for applying their own
updates. When taking security into consideration, is that the best
method?

The example I used on IRC: How many people continue to drive their
cars while ignoring the "service engine" light? Similarly, would we
expect a user whose site is running known insecure software to
activate the update based on a blinking light? Typically, people
don't upgrade unless they have to because the process is painful (a
process that I hope Habari can correct). What incentive do they have
to obey the warning light?

I suggest that even faced with an impending, known threat, users would
avoid upgrading because they did not arrive at their admin console to
perform an update; they arrived at their console to use their
software. Even for this method to be effective, it assumes that
they're anywhere near their blog's admin console when the update
becomes available. Would they obey an email to update? Would they
receive that email? Lots of questions in this.

I'm not suggesting that every case is a critical emergency, either.
Still, I assume you can imagine a scenario where some new code is
introduced that unknowingly provides some access to high-level
passwords. The recent XMLRPC exploit in WordPress illustrates how new
code can unknowingly provide access to things it shouldn't. I am
fairly sure that they did not insert this code on purpose.

I'm not sure what the criteria are to differentiate between a critical
net-wide affecting issue and one that could cause a few posts to be
deleted. Sure, it's just blog software, but I think that people would
be wise to consider that any software you run on your server has the
potential to make available breaches in that server's security.

We are all also familiar with the bot-nets that send spam to our sites
by the thousands. These systems are not entirely systems hosted by
the spammers, but consist largely of compromised systems doing the
spammers' bidding. Suggesting that a single compromised site is
trivial is a fallacy, since it creates a new vector through which
additional sites can be compromised. Yes, your compromised site now
affects my secured one. No blog app is too small to bear strict
security scrutiny, and no single site is irrelevant when it's able to
be compromised.

What about privacy? Privacy is another aspect of convenience. It is
convenient for you that people not know where your site is. That's
fine. What happens to your privacy when you fail to update and your
server is compromised, allowing hackers to wantonly access and
distribute your private data? I specifically suggest that we do not
track any information except possibly in the aggregate (number of
update requests in total, for example) for the specific purpose of
appeasing these privacy issues. It seems quite a bit better to trust
someone you know with a little bit of information that they promise
not to use than it does to leave yourself open for attack by the
people who would abuse that information.

The key is the idea of an effectively managed server, and in
conjunction, effectively managed site software. Managing a server
effectively, using strong passwords or public keys, is something of
which I expect many readers of this list are capable. I believe that
most users of Habari will not be subscribed to the development mailing
list, and even that most subscribers have not been trained in
effectively securing their sites. I accuse most participants here of
having reasonable technical acumen, but consider: Have you received
security training, or is your knowledge anecdotal? Now consider the
common blog user's exposure to that training, anecdotal or otherwise,
and you might come to the same conclusion that I do: Most people
don't have the slightest clue about security.

Placing the security of a user's site in their own hands is a good idea
when they are aware of the implications and responsibilities. In most
cases, people are not aware. I wonder how many people who read this
list are perceptive of all of these issues. I know that I have even
left out some important issues (such as knowing that your software is
installed from a trusted source), so knowing just what's on this list
does not guarantee that your knowledge is complete.

So what can we do? We can help users make correct security choices.
We can make it possible for people not to have to think about whether
they want to accomplish a task at the expense of their site's
security. There should be no reason for users to have to decide to be
insecure or follow practices that would allow them to become insecure
to get what they want. Presenting a user with descriptive choices
that require a background in security implications will almost
certainly result in a decision based on convenience rather than
security.

I suggest that we build Habari in such a way that it's easier to do by
default what is right for security than it is to make a bad decision
at security's expense.

Owen

ringmaster

Jan 29, 2007, 4:53:49 PM
to habari-dev
An exercise:

Step 1-

Assume that there are a bunch of assets on your site/blog to which
people may want to gain access. These could be CPU time, bandwidth,
user IDs, passwords, ability to change the blog, whatever. List these
out.

Step 2-

There are ways for people to obtain those things listed in step 1.
List out every method there is to obtain each thing listed in step 1,
no matter how trivial or difficult. Do not pre-screen these.

Step 3-

For each item in step 2's list, present the mitigations that are in
place to prevent them from happening.

Step 4-

Evaluate whether the mitigations are enough, or whether more need to
be added. It's a good idea to have multiple defenses against items in
list 2, for the purpose of redundancy - if one prevention fails,
another is there to help.

Having these lists will help show where conscious decisions are made
to leave areas unsecured, and will give us a better idea of the
security protections that already exist.

Owen

Scott Merrill

Jan 29, 2007, 9:39:38 PM
to habar...@googlegroups.com
Owen Winkler wrote:
> I am under the impression that many open source packages (and in
> specific, blog applications) do not go through the process of a
> thorough security evaluation. I don't want Habari to be one of those
> packages.

I do not know any PHP security experts. I'd be interested in a formal
security review of any Habari release, though I worry that the results
would be depressing.

> What is security in Habari?

I would like to state the security mantra early in this discussion:

Security is inversely proportional to convenience.

The more you secure something, the less convenient it is to use.
The more convenient you make something, the harder it is to secure.

You alluded to this late in your message. I want it in the forefront of
everyone's minds as we embark on this discussion.

> Habari currently has an authentication system in place to prevent
> unauthorized access to edit content. This system is not complete
> though, since any user with a login currently has access to change
> anything on the site, including software options. The system also
> employs different security systems in different places. The admin
> area uses a form-based non-SSL login, and Atom authentication takes
> place using a server authentication technique. Both of these areas
> are also accessible by providing cookies that are not (to my
> knowledge) one-off entry tokens, and would continue to work
> indefinitely. Some of that seems like a bad idea.

The cookies expire after several months. Possibly a year. I forget.

> Being that our security for viewing entrypoints (like in APP, you only
> see what's editable if you're logged in) is only as good as the measures we
> employ to authenticate them, we discover a kind of interdependence of
> security between levels. If a user account is compromised on today's
> revision of Habari, all of the site options are available for editing.
> Is it possible to find the database password via this compromise? I
> don't know, but it seems worth evaluating.

Currently, all users are "administrators". Any compromise of an
administrator-level account should be considered catastrophic. How to
effectively gauge compromises of non-administrator accounts is a
slightly more tricky proposition. Purists will tell you that any
account compromise is catastrophic. Generally, I agree with this notion.

> This type of interdependence is evident throughout the security of the
> system. Assuming you can coax a database password out of the admin by
> forging some cookies, who knows how far up the security chain that
> password could get you? A compromised password discovered by
> scripting an attack on a vulnerable entrypoint could indeed lead to a
> full server compromise.

It's possible, yes. Security experts will tell you that your best
position is "security in depth": a variety of different, complementary
tools and techniques applied to make the bad guys' jobs harder.

We know that Habari requires certain components in order to execute: a
web server, a database engine, PHP, plus some filesystem space
somewhere. It is impractical to think that Habari should be responsible
for mitigating shortcomings in each of these other pieces.

> Is there evidence that this has happened? What is the rate of
> incidence? I have no idea. It may never have happened, since there
> is no centralized reporting to catalog these kinds of incidents. I'm
> sure that our sites are not all safe simply because we don't know of
> anyone that it has happened to. I've outlined in my prior paragraph a
> decent enough example of how someone could compromise a whole - if not
> very effectively managed - server with some spare time and a script or
> two. I don't believe that allowing known vectors of possible security
> breach to persist based on lack of data on their exploitation is a
> wise course of action.

It happens all the time in other aspects of engineering. One identifies
the threats, assesses how damaging they may be, and tries to determine
whether the time spent trying to mitigate them is justified by the
long-term benefit of that effort.

"Known vectors of possible security breach" is a rather vague language
construct. Do you mean "theoretical ways to access the blog"? We can
theorize about things all day long, and then spend inordinate amounts of
time trying to address them. Have we made Habari more secure? Perhaps,
but at what expense? What functionality did not get implemented because
we were busy trying to close holes that were unlikely to get
triggered, or that affected only a small percentage of users?

> Data loss is something that Habari doesn't handle at all. Assuming
> that you lose data somehow by breach of security (not by accidentally
> clicking "delete"), server error (yes, a failing hard drive is a
> security issue), or compromised site, Habari doesn't offer much in the
> way of restoring your site to the way it was before the incident.
> From talking with people on IRC, it seems their general impression is
> that this functionality should be outside the purview of the core
> software, but it is my personal hope that we take Habari to the next
> level for blog software and cover all of these security issues within
> the core product, at least to the level that we have a plan to offer
> users, if not some software solution.

I can only assume you're talking about regular backups. If you have
something else in mind, do please say it.

You and I both know from personal experience that blog backups are a can
of worms. Where do you store the temporary backup file(s) prior to
delivering them to the user? How do you deliver them to the user in
such a way as to prevent compromise? What data should be included in
the backup -- user accounts including passwords? just post data? what
about private posts?

Short of bundling Habari with strong cryptographic routines to secure
the data, and requiring GPG-signed email delivery of backups or an
SSL-aware web server, there's little we can do to truly cover all the
bases. Does this mean that we've failed our security goals? I don't
think so.

> Taking the SHA-1 hashes as an example, an alternative is to use
> SHA-256. SHA-256 is not supported natively by the PHP versions many
> hosts still run, so it requires an additional extension or a slower
> userland implementation. Should we require users to install those
> additional libraries, or is SHA-1 good enough for our purposes
> presently? It's an interesting question. How convenient should it be
> to use Habari?

String hashes are just one way to deal with securing data. Other
mechanisms surely exist. I don't have any experience with -- or indeed
knowledge of -- them.

> Updates are interesting things. I've proffered several ideas
> concerning updates on the IRC channel, and the opinion is
> overwhelmingly that users should be responsible for applying their own
> updates. When taking security into consideration, is that the best
> method?

Customers almost always resent manufacturers telling them that they know
better. Moreover, end-users find all sorts of clever ways to use
software beyond the original intention. A very practical example is
Matt Read's Habari-powered pastebin. I worry that any enforced security
paradigm might stifle such wonderful innovation.

> The example I used on IRC: How many people continue to drive their
> cars while ignoring the "service engine" light? Similarly, would we
> expect a user whose site is running known insecure software to
> activate the update based on a blinking light? Typically, people
> don't upgrade unless they have to because the process is painful (a
> process that I hope Habari can correct). What incentive do they have
> to obey the warning light?

People drive their cars with the "service engine" light on for a variety
of reasons. For one, planning a trip to the mechanic is often a pain in
the ass, and must be worked into one's schedule. Sometimes the light is
a false alarm -- confirmed by the mechanic -- so people leave it be.
Sometimes people are just lazy.

In the situations in which *I* have had personal experience, the warning
lights on an automobile's dashboard are just that: warnings. "Hey, get
this checked out soon." I've never seen a light that means "Your
vehicle is about to explode."

How would you feel if your car simply refused to start because the
engine's software detected some problem that _might_ trigger? You're
not informed. You're not given an option to get the car to the shop.
What if you're on the freeway when it happens? Are you thankful for
your car's overzealous safety?

> I suggest that even faced with an impending, known threat, users would
> avoid upgrading because they did not arrive at their admin console to
> perform an update; they arrived at their console to use their
> software. Even for this method to be effective, it assumes that
> they're anywhere near their blog's admin console when the update
> becomes available. Would they obey an email to update? Would they
> receive that email? Lots of questions in this.

What is our level of responsibility versus our level of culpability? If
we take real measures to try to alert people to a problem, are we at
fault if they fail to take action? Is your automobile manufacturer at
fault if you choose to neglect the warning light for months on end? Do
you speak ill of your brand of vehicle if you ignore the light for
months only to have the engine one day fail to start due to mechanical
problems?

> I'm not suggesting that every case is a critical emergency, either.
> Still, I assume you can imagine a scenario where some new code is
> introduced that unknowingly provides some access to high-level
> passwords. The recent XMLRPC exploit in WordPress illustrates how new
> code can unknowingly provide access to things it shouldn't. I am
> fairly sure that they did not insert this code on purpose.

I'm not familiar with the situation. I'm sure others aren't either. A
link to more information would be helpful.

> I'm not sure what the criteria are to differentiate between a critical
> net-wide affecting issue and one that could cause a few posts to be
> deleted. Sure, it's just blog software, but I think that people would
> be wise to consider that any software you run on your server has the
> potential to make available breaches in that server's security.

How far along the software stack must we try to protect our users?
Vulnerabilities in PHP, web servers, and database systems are well
beyond our expertise: none of us are security experts, as you pointed
out originally.

> What about privacy? Privacy is another aspect of convenience. It is
> convenient for you that people not know where your site is. That's
> fine. What happens to your privacy when you fail to update and your
> server is compromised, allowing hackers to wantonly access and
> distribute your private data? I specifically suggest that we do not
> track any information except possibly in the aggregate (number of
> update requests in total, for example) for the specific purpose of
> appeasing these privacy issues. It seems quite a bit better to trust
> someone you know with a little bit of information that they promise
> not to use than it does to leave yourself open for attack by the
> people who would abuse that information.

Habari can collect only aggregate data, but to suggest that this is
somehow "protecting" our customers' privacy is a joke: the server logs
will record IP addresses, user agents, and much more granular data about
visitors. To claim that Habari respects your privacy while the
infrastructure on which Habari executes does not is disingenuous.

> The key is the idea of an effectively managed server, and in
> conjunction, effectively managed site software. Managing a server
> effectively, using strong passwords or public keys, is something of
> which I expect many readers of this list are capable. I believe that
> most users of Habari will not be subscribed to the development mailing
> list, and even that most subscribers have not been trained in
> effectively securing their sites. I accuse most participants here of
> having reasonable technical acumen, but consider: Have you received
> security training, or is your knowledge anecdotal? Now consider the
> common blog user's exposure to that training, anecdotal or otherwise,
> and you might come to the same conclusion that I do: Most people
> don't have the slightest clue about security.

I submit to you that most users don't _care_ about security. The
effects are vague to them. The ramifications are baroque and unintuitive.
"So I just start over, big whoop." People are slowly becoming more
aware of the issues surrounding security, but it's still a foreign
concept for most people. Many people I know will happily trade security
in favor of convenience when it comes to computers and networking.

> So what can we do? We can help users make correct security choices.
> We can make it possible for people not to have to think about whether
> they want to accomplish a task at the expense of their site's
> security. There should be no reason for users to have to decide to be
> insecure or follow practices that would allow them to become insecure
> to get what they want. Presenting a user with descriptive choices
> that require a background in security implications will almost
> certainly result in a decision based on convenience rather than
> security.
>
> I suggest that we build Habari in such a way that it's easier to do by
> default what is right for security than it is to make a bad decision
> at security's expense.

This is a very sticky subject, and minds greater than ours have
wrestled with it for a lot longer than we have.

I don't disagree that it's a good idea to build Habari in such a way as
to make it easy to be secure-by-default. But there are _many_ factors
outside of our control that will always work against us.

* we cannot require our users to use https communications, so their
passwords will almost always be delivered in plaintext. Likewise their
cookies will be transmitted in the clear. Logging in while using a
public wireless network is all it takes to expose your credentials.

* we cannot require our users to use client-side SSL certificates,
one-time passwords, biometric authentication mechanisms, or anything
else that tries to prove to the server that the user is who they claim
to be.

* an overwhelming majority of our users will _not_ have physical control
of (let alone access to) their servers. If one does not have physical
control, all one's security efforts are for naught. Physical access to
the machine gives one full access to the contents of that machine. An
irate -- or even just thoughtless -- hosting provider employee can
trivially bypass all of our protections.

We've talked a lot on this list previously about how to make Habari
enjoy a pain-free upgrade. Discussions have included "in-place,
web-based upgrades", whereby Habari will download and extract itself for
you. The only reasonable way to accomplish such a thing is to relax
filesystem permissions on the Habari files. This is diametrically
opposed to any notion of "secure-by-default": any time the web server
has the opportunity to create new files on the filesystem, you are
introducing a possible attack vector. The same holds true for media
uploads, incidentally.

(It was also proposed to use the blog owner's FTP credentials to upload
the new version. This is as bad as -- if not worse than -- relaxed
filesystem permissions.)
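
For what it's worth, the trade-off is at least easy to surface. A
hypothetical admin-side check - the path names below are illustrative
only - could report when the web server is able to rewrite Habari's
own files:

    <?php
    // Report which of Habari's own files the web server could modify.
    // Convenient for in-place upgrades, but also a larger attack surface.
    $writable = array();
    foreach (array('index.php', 'system', 'user') as $path) {
        if (file_exists($path) && is_writable($path)) {
            $writable[] = $path;
        }
    }
    if (count($writable) > 0) {
        echo 'Note: the web server can write to: ' . implode(', ', $writable) . "\n";
    } else {
        echo "The web server cannot modify the installed files.\n";
    }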

--
GPG 9CFA4B35 | ski...@skippy.net | http://skippy.net/

Owen Winkler

Jan 29, 2007, 11:50:33 PM
to habar...@googlegroups.com
On 1/29/07, Scott Merrill <ski...@skippy.net> wrote:
>
> I would like to state the security mantra early in this discussion:
>
> Security is inversely proportional to convenience.
>
> The more you secure something, the less convenient it is to use.
> The more convenient you make something, the harder it is to secure.
>
> You alluded to this late in your message. I want it in the forefront of
> everyone's minds as we embark on this discussion.

Yes, it should be in the forefront of people's minds because as you
make things more convenient, you lose security. Does adding adequate
security make things less convenient? Sure. As surely as a login is
an inconvenient step in getting directly to work, not having it would
make you inadequately secure.

Striking the correct balance of security to convenience is likely the
primary goal of any discussion on security. So far, we have no such
equation of balance. If I suggest turning off every blog remotely when
we discover a misspelling, I have no idea if that is excessive
security for Habari, because our limits are not defined. What we're
going to do about security is unknown.

I wonder if paranoia about a kill-switch feature is even helpful to
bring attention to the real issues. It's one way to solve a certain
problem, and probably not the best way. But we have no limits, and we
need to know better what we should consider when deciding those
limits. Hopefully attention to the discussion sheds some light on
those issues.

> > Both of these areas
> > are also accessible by providing cookies that are not (to my
> > knowledge) one-off entry tokens, and would continue to work
> > indefinitely. Some of that seems like a bad idea.
>
> The cookies expire after several months. Possibly a year. I forget.

Cookies sent to a valid client that logs in may expire after a set
length of time, but the key sent to that client that allows access
will be the same key that is used in perpetuity. Because the key does
not change, it is more susceptible to being guessed. Expiring the key
would be an improvement.
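
To sketch what "expiring the key" could mean in practice - a sketch
only, assuming PHP's hash extension (hash_hmac()) is available and a
server-side secret exists; none of this is current Habari code:

    <?php
    // Build a login cookie that carries its own expiry time and an HMAC
    // signature: a captured value stops working after $lifetime seconds
    // and cannot be forged without the server-side secret.
    function make_auth_cookie($user_id, $secret, $lifetime = 86400)
    {
        $expires = time() + $lifetime;
        $payload = $user_id . '|' . $expires;
        return $payload . '|' . hash_hmac('sha1', $payload, $secret);
    }

    // Returns the user id if the cookie is genuine and unexpired, false otherwise.
    function check_auth_cookie($cookie, $secret)
    {
        $parts = explode('|', $cookie);
        if (count($parts) != 3) {
            return false;
        }
        list($user_id, $expires, $sig) = $parts;
        if ((int) $expires < time()) {
            return false;
        }
        $expected = hash_hmac('sha1', $user_id . '|' . $expires, $secret);
        return $sig == $expected ? $user_id : false;
    }

Whether a day, a week, or a year is the right lifetime is exactly the
kind of convenience-versus-security limit I'd like us to set
deliberately.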

I don't want to focus on specific issues of marginal improvement like
this - I would rather address a plan for how we would evaluate whether
Habari is adequately secure. We may yet determine that making this
key expire is more security than is necessary or convenient.

> > A compromised password discovered by
> > scripting an attack on a vulnerable entrypoint could indeed lead to a
> > full server compromise.
>
> It's possible, yes. Security experts will tell you that your best
> position is "security in depth": a variety of different, complementary
> tools and techniques applied to make the bad guys' jobs harder.
>
> We know that Habari requires certain components in order to execute: a
> web server, a database engine, PHP, plus some filesystem space
> somewhere. It is impractical to think that Habari should be responsible
> for mitigating shortcomings in each of these other pieces.

Similarly, those software packages are not going to account for
shortcomings in Habari. Who is to blame if Habari code,
by calling some function in PHP, allows unsecured access to the
filesystem in versions of PHP that are not appropriately patched? Can
we assume that PHP is the most up-to-date version and that our calls
will not cause these problems?

Are we not accountable at all, or do we deny accountability for these
security issues because we have a minimum version requirement that
includes that PHP security hole? Do we write workarounds in Habari
for known security issues on platforms that we are supposed to run on,
or do we avoid addressing these issues because they're impractical?
And if we require a specific minimum version of PHP, Apache, et al, to
run, do we *require* those versions to function, or do we allow users
to run the software with a gentle nudge (or any notice) to upgrade to
a more secure version of their system software?

I think it's ok to say we're not going to account for everything that
a server can leave open out of our control. I would like to codify in
what cases it's acceptable or required to code for security holes in
the system software, and what a minimum required version really means.
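
As a small illustration of the "gentle nudge versus hard requirement"
question, an installer check can do either; the version numbers below
are placeholders, not a proposal:

    <?php
    // Example installer check: refuse to run below a hard minimum PHP
    // version, and merely warn below a recommended one.
    define('MIN_PHP_VERSION', '5.1.0');
    define('RECOMMENDED_PHP_VERSION', '5.2.0');

    if (version_compare(PHP_VERSION, MIN_PHP_VERSION, '<')) {
        die('Habari requires PHP ' . MIN_PHP_VERSION . ' or later; this server runs ' . PHP_VERSION . '.');
    }
    if (version_compare(PHP_VERSION, RECOMMENDED_PHP_VERSION, '<')) {
        echo 'Warning: PHP ' . RECOMMENDED_PHP_VERSION . ' or later is recommended for current '
           . 'security fixes; this server runs ' . PHP_VERSION . ".\n";
    }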

> It happens all the time in other aspects of engineering. One identifies
> the threats, assesses how damaging they may be, and tries to determine
> whether the time spent trying to mitigate them is justified by the
> long-term benefit of that effort.

Then let's identify threats. It's something that hasn't been done
yet. I proposed a method to accomplish this as an addendum to my
original post.

> "Known vectors of possible security breach" is a rather vague language
> construct. Do you mean "theoretical ways to access the blog"? We can
> theorize about things all day long, and then spend inordinate amounts of
> time trying to address them. Have we made Habari more secure? Perhaps,
> but at what expense? What functionality did not get implemented because
> we were busy trying to close holes that were unlikely to get
> triggered, or that affected only a small percentage of users?

I don't remember suggesting that we close every conceivable
theoretical gap. I do think that listing them all, and deciding what
it would take to cover them is a fine plan. It would be well worth
our while to determine what the need is.

I'm not sure that the idea of unimplemented functionality due to time
constraints is a real argument since there isn't any real timetable
for the software release. Wouldn't the release date of the software
be constrained only by when it meets our security and functionality
requirements? If we were a large corporation with money riding on
release schedules, then it might be an issue. I'm still waiting for
my first check, if that's the case. ;)

The idea of not implementing all of the security we can from the
beginning doesn't sit well with me, though. It seems like we should
be thinking about security as we code new features. We shouldn't add
a feature, release, then backtrack to apply security to it unless the
flaw is revealed after passing whatever security testing we do before
release. All of the functionality we have to release should have all
of the security we're able to apply to it at release time, and that
implies that some kind of security audit needs to take place. More
discussion is required on this topic. How is it done? What are the
parameters? That sort of thing.

> > Data loss is something that Habari doesn't handle at all.
>

> I can only assume you're talking about regular backups. If you have
> something else in mind, do please say it.

Regular backups are a solution to the problem of data loss. I was
trying to avoid suggesting specific solutions, but yes, that's the
only one I can think of that would address recovery.

> You and I both know from personal experience that blog backups are a can
> of worms. Where do you store the temporary backup file(s) prior to
> delivering them to the user? How do you deliver them to the user in
> such a way as to prevent compromise? What data should be included in
> the backup -- user accounts including passwords? just post data? what
> about private posts?
>
> Short of bundling Habari with strong cryptographic routines to secure
> the data, and requiring GPG-signed email delivery of backups or an
> SSL-aware web server, there's little we can do to truly cover all the
> bases. Does this mean that we've failed our security goals? I don't
> think so.

All of your concerns are valid, and I'm glad that someone has finally
spoken them aloud. This is the point of opening this dialog - to get
people talking about making these decisions for our security, and to
have a record of the decisions and why we made them.

I don't propose to have The Answer to what we should do for recovery.
I have ideas, as I'm sure you do, even if the idea is "from our docs,
reference a shell-based cron script for off-site backup". We simply
need to discuss data recovery, how and whether Habari even addresses
it. It's ok if we don't include a backup mechanism, but I'd like to
offer reasons why and alternative answers.

> > Updates are interesting things. I've proffered several ideas
> > concerning updates on the IRC channel, and the opinion is
> > overwhelmingly that users should be responsible for applying their own
> > updates. When taking security into consideration, is that the best
> > method?
>
> Customers almost always resent manufacturers telling them that they know
> better. Moreover, end-users find all sorts of clever ways to use
> software beyond the original intention. A very practical example is
> Matt Read's Habari-powered pastebin. I worry that any enforced security
> paradigm might stifle such wonderful innovation.

Is the idea that customers will resent us a valid argument for leaving
the application insecure? To me, that sounds like saying my kid
resents me if I don't give her candy before bed, so I give it to her.

I'm curious how security could stifle innovation. I'm not doubtful
that it may be true, but I doubt that security would *by necessity*
stifle innovation. It seems like FUD that could be discovered to be
true or false by applying a security plan.

> People drive their cars with the "service engine" light on for a variety
> of reasons. For one, planning a trip to the mechanic is often a pain in
> the ass, and must be worked into one's schedule. Sometimes the light is
> a false alarm -- confirmed by the mechanic -- so people leave it be.
> Sometimes people are just lazy.
>
> In the situations in which *I* have had personal experience, the warning
> lights on an automobile's dashboard are just that: warnings. "Hey, get
> this checked out soon." I've never seen a light that means "Your
> vehicle is about to explode."

Indeed! People ignore upgrading because it's inconvenient. It must
be worked into a schedule, which makes it inopportune. Sometimes
upgrades fix things that would never apply to your circumstance and
aren't worthwhile. Other times, you just don't feel like upgrading.
There are different levels of severity that we would be able to
distinguish in Habari that a "service engine" light cannot, which
might help with these conditions.

No, the software may not explode before you get to upgrade it. But if
your site info is so critical, might you rather it protect itself than
possibly explode?

Running continuously on that warning light could be racking up the
potential service costs with every mile. My original point is that
people have grown so accustomed to the warnings that they don't really
heed them anymore, regardless of the level of peril.

> How would you feel if your car simply refused to start because the
> engine's software detected some problem that _might_ trigger? You're
> not informed. You're not given an option to get the car to the shop.
> What if you're on the freeway when it happens? Are you thankful for
> your car's overzealous safety?

Are you trying to equate the car not running or shutting down with the
auto-shutoff feature that someone (it wasn't me, by the way)
suggested? I don't think the analogy holds up there, if only because
the car's warning light is, as you've already pointed out, not an
indication that the car is going to explode.

If my car was primed to possibly explode if it ran, then yes, I would
want it to stop running or fail to start. I would be thankful for my
car's safety, especially after my mechanic was likely to mention that
I could have died had the car started.

That's why the idea of levels of severity is important. There are
different kinds of threats. The "start smoking profusely and making
awful noises" threat does not necessarily merit shutdown. The
"leaking gas on I-95" just might.

> What is our level of responsibility versus our level of culpability? If
> we take real measures to try to alert people to a problem, are we at
> fault if they fail to take action? Is your automobile manufacturer at
> fault if you choose to neglect the warning light for months on end? Do
> you speak ill of your brand of vehicle if you ignore the light for
> months only to have the engine one day fail to start due to mechanical
> problems?

No, I don't think we would be at fault. At the same time, I don't see
why we would simply display a warning when there are possibly other
means to employ (means that we should discuss the merits/flaws of
on-list) that would obviate both the warning message and the potential
damage.

> > I'm not suggesting that every case is a critical emergency, either.
> > Still, I assume you can imagine a scenario where some new code is
> > introduced that unknowingly provides some access to high-level
> > passwords. The recent XMLRPC exploit in WordPress illustrates how new
> > code can unknowingly provide access to things it shouldn't. I am
> > fairly sure that they did not insert this code on purpose.
>
> I'm not familiar with the situation. I'm sure others aren't either. A
> link to more information would be helpful.

Here's the link:
http://comox.textdrive.com/pipermail/wp-hackers/2007-January/010519.html

The idea that it's possible to introduce a critical security issue
with what seems like innocuous code is the more salient point, though.

> > I'm not sure what the criteria are to differentiate between a critical
> > net-wide affecting issue and one that could cause a few posts to be
> > deleted. Sure, it's just blog software, but I think that people would
> > be wise to consider that any software you run on your server has the
> > potential to make available breaches in that server's security.
>
> How far along the software stack must we try to protect our users?
> Vulnerabilities in PHP, web servers, and database systems are well
> beyond our expertise: none of us are security experts, as you pointed
> out originally.

I don't know. I think we would do well to discuss how far we should go.

In the paragraph you quote though (and the one that follows that one),
I suggest that even the smallest software application is responsible
for the security breaches it creates. Sure, PHP might have a security
flaw, but if the server contains no script through which that flaw can
be exploited, then where's the breach? Conversely, if there is such a
script, is the script responsible for the breach, even if it executes
via the broken PHP?

I don't know the answers to this. I seek the answer to how Habari
will handle these cases, and if it's not what I like personally (I
don't even know what that is right now), then so be it, but it will be
available historically for others to see how we've answered those
questions, too.

> > What about privacy? Privacy is another aspect of convenience. It is
> > convenient for you that people not know where your site is. That's
> > fine. What happens to your privacy when you fail to update and your
> > server is compromised, allowing hackers to wantonly access and
> > distribute your private data? I specifically suggest that we do not
> > track any information except possibly in the aggregate (number of
> > update requests in total, for example) for the specific purpose of
> > appeasing these privacy issues. It seems quite a bit better to trust
> > someone you know with a little bit of information that they promise
> > not to use than it does to leave yourself open for attack by the
> > people who would abuse that information.
>
> Habari can collect only aggregate data, but to suggest that this is
> somehow "protecting" our customers' privacy is a joke: the server logs
> will record IP addresses, user agents, and much more granular data about
> visitors. To claim that Habari respects your privacy while the
> infrastructure on which Habari executes does not is disingenuous.

Aggregate data itself doesn't protect anything, no. Of course, we can
tell the server/Apache not to log requests to the update scripts,
which will pretty much nullify every concern you've mentioned, and we
can still keep aggregate data. Suffice it to say that I believe there
are more things we can do to keep updates working for users who
require privacy; we don't have to throw our hands in the air in
futility.
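
For instance - a rough sketch with a hypothetical endpoint URL - the
client side of an update check can send nothing but the running
version string, which is all an aggregate count needs:

    <?php
    // A privacy-minded update check: send only the running version string,
    // nothing that identifies the site. Requires allow_url_fopen; the
    // endpoint URL and version value are placeholders.
    $current  = '0.1-alpha';
    $response = @file_get_contents(
        'http://example.org/habari-update-check?version=' . urlencode($current)
    );
    if ($response !== false && version_compare(trim($response), $current, '>')) {
        echo 'A newer Habari release (' . trim($response) . ") is available.\n";
    }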

Still, the main question regarding privacy and security has gone
unaddressed: Is it better (whether we log requests or not) to have a
known entity - us, Habari - with a public policy on private data keep
a little of that data, or to keep that data entirely private while
placing the whole burden of security on a site admin who would receive
no notification of the updates that would protect the actual private
data on their private server?

> I submit to you that most users don't _care_ about security. The
> effects are vague to them. The ramifications baroque, and unintuitive.
> "So I just start over, big whoop." People are slowly becoming more
> aware of the issues surrounding security, but it's still a foreign
> concept for most people. Many people I know will happily trade security
> in favor of convenience when it comes to computers and networking.

It's a shame that people feel that way. Even I feel that way
sometimes, so I know how it can be.

Yes- I think I am pursuing this particular topic a bit more
overzealously than I would for myself if I was just a user. I don't
see anyone really discussing security, whether hard or soft, and I
find that of reasonable concern. Maybe I should start a new "Where's
our logo?" thread just to ramp up traffic again?

> > I suggest that we build Habari in such a way that it's easier to do by
> > default what is right for security than it is to make a bad decision
> > at security's expense.
>
> This is a very sticky subject, and minds greater than ours have
> wrestled with it for a lot longer than we have.
>
> I don't disagree that it's a good idea to build Habari in such a way as
> to make it easy to be secure-by-default. But there are _many_ factors
> outside of our control that will always work against us.

It will be hard. I'll say it. It will be hard. I believe it will
also be worth working out.

All of the ideas you mentioned address specific concerns in a single
way. While they may be valid mitigation methods, we have not had an
opportunity to discover what the real reason to use any of them is.
Also, there may be alternatives we can present to users so that, for
example, if they don't have a biometric scanner, they could use some
other form of positive identification.

After we've listed what vectors exist and what ways are possible to
address them, then it will be a good time to think about which ideas
are practical, which ideas we can offer to users who want an
additional measure of security, and which ideas are destined for a
future wishlist or the dustbin.

I don't think that it's appropriate to cross off these ideas without
discussing them. For instance, why can't we require SSL for logins?
Plenty of hosts offer shared SSL directories for their customers.
This might be something that is too rare to have as part of our core
security strategy, but this is the first time I've heard anyone say
"we shouldn't require it."

These may be the first proposed Habari features about which anyone has
said outright, "We cannot do this." I'm not saying that's wrong, just
observing how accepting we've been of suggestions right up until we
hit security and the hint of a little inconvenience required to obtain
it.
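
For what it's worth, requiring SSL for logins where the host provides
it is a tiny check - a sketch only; it assumes the host serves the
same admin path over HTTPS:

    <?php
    // Redirect login requests arriving over plain HTTP to the HTTPS
    // equivalent, so credentials are never sent in the clear.
    if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] == 'off') {
        $target = 'https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'];
        header('Location: ' . $target, true, 301);
        exit;
    }

The hard part isn't the code; it's deciding whether enough hosts offer
SSL to make this a requirement rather than an option.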

Thanks, skippy, for your well-thought-out comments, as always.

Owen

Rich Bowen

Jan 30, 2007, 8:22:00 AM
to habar...@googlegroups.com

On Jan 29, 2007, at 21:39, Scott Merrill wrote:

>
> Owen Winkler wrote:
>> I am under the impression that many open source packages (and in
>> specific, blog applications) do not go through the process of a
>> thorough security evaluation. I don't want Habari to be one of those
>> packages.
>
> I do not know any PHP security experts. I'd be interested in a formal
> security review of any Habari release, though I worry that the results
> would be depressing.

*The* PHP security expert is Chris Shiflett. http://shiflett.org/
There's also John Coggeshall.

John expressed interest in what we were doing, when the subject came
up at ApacheCon. I haven't spoken with Chris about it, but I could
speak with him about it the next time I see him.

--
"Books to the ceiling, Books to the sky, My pile of books is a mile
high.
How I love them! How I need them! I'll have a long beard by the time
I read them." -- Arnold Lobel


Matthias Bauer

Jan 30, 2007, 8:34:25 AM
to habar...@googlegroups.com
On 30.01.2007 14:22 Rich Bowen wrote:

> *The* PHP security expert is Chris Shiflett. http://shiflett.org/
> There's also John Coggeshall.

And Stefan Esser, of hardened-php.net, php-security.org, and Suhosin.
Although I guess some might call him 'complicated', he knows his stuff.

-Matt

Scott Merrill

Jan 30, 2007, 7:55:32 PM
to habar...@googlegroups.com
ringmaster wrote:
> Step 1-
>
> Assume that there are a bunch of assets on your site/blog to which
> people may want to gain access. These could be CPU time, bandwidth,
> user IDs, passwords, ability to change the blog, whatever. List these
> out.

* passwords
* shell access
* CPU cycles
* names of other accounts on the server
* files in /home

> Step 2-
>
> There are ways for people to obtain those things listed in step 1.
> List out every method there is to obtain each thing listed in step 1,
> no matter how trivial or difficult. Do not pre-screen these.

1) boot the server using a LiveCD
2) guess an account name and password
3) trick a user to give you their password
4) capture a user's password with a keylogger attached to their keyboard
5) create a popular, useful plugin and embed a backdoor into it
6) offer to help someone install / configure their site, and retain
their account name and password
7) offer to help someone install / configure their site, and install a
backdoor in the process
8) capture a password or cookie via an unsecured WiFi login

I fully expect that there are lots of things I don't know about. For
example, I don't know much about buffer overflows, so I suspect that
it's possible to craft just the right packet to send to my server to
cause something to happen. I have no experience with this, and trust my
application providers (Debian, the Linux kernel developers, Apache, and
PHP) to do their best to keep me safe.

> Step 3-
>
> For each item in step 2's list, present the mitigations that are in
> place to prevent them from happening.

1) I have physical control over my server. It's in my house.
2) I use strong passwords wherever possible; and prefer client SSL
logins for authentication (ie: present something I have, not something I
know, for access)
3) I never give passwords to anyone.
4) I never log in to my blog on computers I do not physically control.
5) I rarely install whiz-bang plugins. I try to look over the plugins I
install, to make sure I understand what they do.
6) I do not need help with my site; and would never give someone else
access to it.
7) see above
8) I usually connect to my VPN at home when using public networks

> Step 4-
>
> Evaluate whether the mitigations are enough, or whether more need to
> be added. It's a good idea to have multiple defenses against items in
> list 2, for the purpose of redundancy - if one prevention fails,
> another is there to help.

I believe that the measures I have in place are sufficient for my level
of risk.
