This stuff is complicated


Josh Israel

Jan 21, 2013, 11:24:38 AM
to stanford-...@googlegroups.com
It seems like, if there's a common theme to a lot of the papers we've been reading, it's that we should improve security by having some more sophisticated way to control the propagation of privileges. I'm completely willing to grant that it's easier for a skilled developer to design a secure system with the ideas coming out of the papers we've read. But most developers are terrible. Also, I'd guess that a lot of security issues arise because developers didn't think of the implications of what they were doing. Mathematical soundness of a framework doesn't mean much when people don't understand the lattices they're creating. I mean, after an hour-long lecture from a professor whose name is on the paper, Stanford students (myself included) still needed a follow-up explanation to see how to build a conceptually simple thing like a jail in HiStar. I don't intend that as a criticism of anyone; I think it just demonstrates how complicated this gets.

So I guess I have 2 questions coming out of this line of thought:
1) When you incorrectly analyze the permission flow of your program in a capability/labels environment, how bad is the damage and how does it compare to an analogous mistake in a traditional permissions system?
2) Can/should we be trying to build something that the average developer can use? Or do we need completely different understandings of security for things written by experts than the typical programmer?

Eve Nakamura

Jan 21, 2013, 1:33:27 PM
to stanford-...@googlegroups.com
Most developers who don't work on security-critical systems probably don't think too much about security or information flow, and even if they do, there will always be bugs. Even the Jif paper had a bug in one of the code snippets presented. (I'm not sure if it was a typographical error, but bugs are bugs, and if bugs show up in papers, they are even more likely to show up in code written by average developers.)

1) I imagine that the damage caused by bugs or an incorrect analysis of information flow depends on how far the information can leak, and where it can flow when it does.

2) I don't know if this is helpful, but I'll speak as an average developer who works primarily in application development (nothing security-critical). No one in my group really thinks about security. (Hopefully our software won't do anything to upset the OS or other applications; if it does, then hopefully the OS will do something about it, somehow.) I don't know everything about the information flow within our software or the security of the various libraries that we use. If our software magically started needing security and everyone suddenly had to start actively contributing to making it secure, I doubt that anyone would be able to accomplish that without easily understandable APIs or some form of OS-enforced sandboxing or information flow control; anything that is too complicated will invariably result in incorrect usage and bugs. Identifying all of our input and output channels and securing them appropriately would require a lot of effort, but it's still possible. We definitely cannot realistically rewrite our entire application in a new language designed specially for security, however.

There are countless other applications out there that weren't designed with security in mind. But of course, developers who work on software with more security concerns would presumably have an easier time understanding and leveraging security mechanisms than developers who don't deal with security much. Even so, it's still really complicated, as you said.

Deian Stefan

Jan 21, 2013, 6:31:09 PM
to stanford-...@googlegroups.com
You both bring up very good points. Things are definitely complicated.

If you set up labels incorrectly or unintentionally give out privileges, you're exposing your system to the same kinds of attacks you'd see in non-IFC systems. Importantly, however, the damage from such an attack is usually more limited. For example, if Alice gives out her privileges to some process, the integrity of her data may be compromised, but the process cannot exfiltrate or corrupt Bob's data. More generally, IFC and capabilities let you compartmentalize your system in such a way that the compromise of one component will not take down the whole system.
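
To make that concrete, here's a tiny toy model of a secrecy check (plain Python, not the actual HiStar or LIO API; all the names are made up). The point is just that exporting data requires the privileges of every principal named in its label, so handing out Alice's privilege doesn't let the process touch Bob's data:

# Toy model of IFC secrecy labels (illustrative only, not a real API).

class Labeled:
    """A value tagged with the principals whose secrecy it carries."""
    def __init__(self, value, secrecy):
        self.value = value
        self.secrecy = frozenset(secrecy)

def can_export(data, privileges):
    """A process may export data (e.g., write it to the network) only if
    it holds the privilege of every principal in the data's secrecy label."""
    return data.secrecy <= set(privileges)

alice_secret = Labeled("alice's diary", {"alice"})
bob_secret   = Labeled("bob's diary",   {"bob"})

# Alice (perhaps unwisely) hands her privilege to some untrusted process.
process_privs = {"alice"}

print(can_export(alice_secret, process_privs))  # True:  Alice's data is exposed
print(can_export(bob_secret,   process_privs))  # False: Bob's data stays protected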

You're absolutely right in saying that most developers are going to write buggy and vulnerable code. So, how likely are you to get the compartmentalization right? This is a hard problem and a topic that more recent research is trying to address.
Without going into details about this stuff (I can point you to some papers, if you're interested), I want to point out that even "vanilla" HiStar gives you an advantage: if you have a security expert on your team who can specify global policies correctly, IFC lets you execute arbitrary code without having to worry that the bugs/vulns in the code will cause massive damage. Hence, you can hire one good guy and 100 average developers.
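
As a rough sketch of what I mean (again just illustrative Python, not the Hails or HiStar API; the names here are invented): the expert writes the export policy once, the data carries its owner with it, and the average developer's buggy code cannot release anything except through that single check.

# Toy sketch of "one expert, many average developers" (hypothetical names).

class LabeledValue:
    """Data that remembers its owner; application code is expected to go
    through trusted_export, the only function that releases the raw value."""
    def __init__(self, raw, owner):
        self._raw = raw
        self.owner = owner

def send(dest, payload):
    print(f"sent to {dest}: {payload}")

def trusted_export(dest, lv, policy):
    # Written once by the security expert; the single audited release point.
    if not policy(lv.owner, dest):
        raise PermissionError(f"policy: {lv.owner}'s data may not flow to {dest}")
    send(dest, lv._raw)

# The expert's global policy: data may only go back to its owner.
policy = lambda owner, dest: dest == owner

# The average developer's buggy code: tries to ship everything to an advertiser.
db = [LabeledValue("diary", "alice"), LabeledValue("tax forms", "bob")]
for lv in db:
    try:
        trusted_export("ads.example.com", lv, policy)
    except PermissionError as e:
        print(e)  # the global policy blocks the leak regardless of the app logic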

Of course, depending on the scenario (e.g., web apps) this may not be good enough. As we'll see in two classes, the Hails paper addresses precisely this concern: can we build a framework that the average developer, who is not a security expert, can use? [Disclosure: I'm an author on this paper.]