
Safety and security


Dan Sugalski

Mar 23, 2004, 2:38:00 PM
to perl6-i...@perl.org
Okay, we'll try this again... (darned cranky mail clients)

We've two big issues to deal with here--safety and security. While
related, they aren't the same, and they need different things done. As
far as I can see, we need four things:

1) An oploop that checks branch destinations for validity

2) Opcodes that check their parameters for basic sanity--valid
register numbers (0-31) and basically correct (i.e. non-NULL) register
contents

3) An oploop that checks basic quotas, mainly run time

4) Opcodes that check to see if you can actually do the thing you've requested

#s 1&2 are safety issues. #2, specifically, can be dealt with by the
opcode preprocessor, generating op bodies that do validity checking.
#1 needs a bounds-checking runloop, which we mostly have already. I'm
comfortable getting this done now, and this is what the framework
that's going in should be able to handle OK.
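
Purely to make #1 and #2 concrete, here is a rough C sketch of the kind
of checks meant -- the struct and function names are invented for
illustration and are not Parrot's actual internals:

    /* Illustrative only -- not Parrot's real structures or API. */
    typedef struct {
        long *code;          /* opcode stream                   */
        long  code_size;     /* number of slots in the stream   */
        void *pmc_reg[32];   /* pretend register file           */
    } Interp;

    /* #1: a bounds-checking oploop verifies every branch target lands
       inside the loaded opcode stream before jumping to it. */
    static long *
    check_branch_target(Interp *interp, long *dest)
    {
        if (dest < interp->code || dest >= interp->code + interp->code_size)
            return NULL;     /* real code would throw an exception */
        return dest;
    }

    /* #2: the sort of sanity check the opcode preprocessor could emit
       into op bodies: register number in 0-31, contents non-NULL. */
    static int
    check_register(Interp *interp, int regno)
    {
        return regno >= 0 && regno < 32 && interp->pmc_reg[regno] != NULL;
    }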

#s 3&4 deal with security. This... this is a dodgier issue.
Security's easy to get wrong and hard to get right. (Though quotas
are straightforward enough. Mostly) And once the framework's in
place, there's the issue of performance--how do we get good
performance in the common (insecure) case without sacrificing
security in the secure case?

At any rate, perl 5's Safe module is a good example of the Wrong Way
to do security, and as such we're going to take it as a cautionary
tale rather than a template. For security I want to go with an
explicit privilege model with privilege checking in parrot's
internals, rather than counting on op functions to Do The Right
Thing. That means that IO restrictions are imposed by the IO code,
not the IO ops, and suchlike stuff. Generally speaking, we're going
to emulate the VMS quota and privilege system, as it's reasonably
good as these things go.
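
To make "privilege checking in parrot's internals" concrete, a hedged C
sketch -- the privilege bits and function names below are invented for
illustration, not anything Parrot actually defines:

    /* Illustrative only -- invented privilege bits and names. */
    typedef unsigned long Privs;
    #define PRIV_FILE_READ   (1UL << 0)
    #define PRIV_FILE_WRITE  (1UL << 1)
    #define PRIV_NET_CONNECT (1UL << 2)

    typedef struct {
        Privs privs;    /* privileges the running code holds */
    } Interp;

    /* The check lives in the IO layer itself, so every path into file
       writing hits it -- not just the 'open' or 'write' opcodes. */
    static int
    io_open_for_write(Interp *interp, const char *path)
    {
        if (!(interp->privs & PRIV_FILE_WRITE))
            return -1;      /* real code would raise a security exception */
        (void)path;         /* ... actually open the file here ... */
        return 0;
    }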

If we're going to tackle this, though, we need to pull in some folks
who're actually competent at it before we do more than handwave about
the design.
--
Dan

--------------------------------------"it's like this"-------------------
Dan Sugalski even samurai
d...@sidhe.org have teddy bears and even
teddy bears get drunk

Jarkko Hietaniemi

Mar 23, 2004, 3:58:37 PM
to perl6-i...@perl.org, Dan Sugalski, perl6-i...@perl.org
> At any rate, perl 5's Safe module is a good example of the Wrong Way
> to do security, and as such we're going to take it as a cautionary
> tale rather than a template. For security I want to go with an
> explicit privilege model with privilege checking in parrot's
> internals, rather than counting on op functions to Do The Right
> Thing. That means that IO restrictions are imposed by the IO code,
> not the IO ops, and suchlike stuff. Generally speaking, we're going
> to emulate the VMS quota and privilege system, as it's reasonably
> good as these things go.

For people who are wondering what Dan has got in his pipe today:
http://www.sans.org/rr/papers/22/604.pdf
And here a bit about quotas:
http://h71000.www7.hp.com/DOC/72final/5841/5841pro_028.html#58_quotasprivilegesandprotecti
(I swear I didn't make up the URL, HP did)

oz...@algorithm.com.au

Mar 23, 2004, 8:36:47 PM
to d...@sidhe.org, perl6-i...@perl.org
On 24/03/2004, at 6:38 AM, Dan Sugalski wrote:

> At any rate, perl 5's Safe module is a good example of the Wrong Way
> to do security, and as such we're going to take it as a cautionary
> tale rather than a template. For security I want to go with an
> explicit privilege model with privilege checking in parrot's
> internals, rather than counting on op functions to Do The Right Thing.
> That means that IO restrictions are imposed by the IO code, not the IO
> ops, and suchlike stuff. Generally speaking, we're going to emulate
> the VMS quota and privilege system, as it's reasonably good as these
> things go.
>
> If we're going to tackle this, though, we need to pull in some folks
> who're actually competent at it before we do more than handwave about
> the design.

This is a question without a simple answer, but does Parrot provide an
infrastructure so that it would be possible to have proof-carrying[1]
Parrot bytecode? I'm of course not advocating that we should look into
proof-carrying code immediately, but I think it's important to realise
that PCC exists, and that Parrot should be forward-compatible with it,
if people want to put PCC concepts into Parrot at a later stage.

1. http://www.cs.princeton.edu/sip/projects/pcc/ -- Google around for
plenty of other links!


--
% Andre Pang : trust.in.love.to.save

Joe Schaefer

Mar 23, 2004, 5:48:45 PM
to perl6-i...@perl.org
d...@sidhe.org (Dan Sugalski) writes:

[...]

> #s 3&4 deal with security. This... this is a dodgier issue. Security's
> easy to get wrong and hard to get right. (Though quotas are
> straightforward enough. Mostly) And once the framework's in place,
> there's the issue of performance--how do we get good performance in
> the common (insecure) case without sacrificing security in the secure case?

You might wish to consider a modular design here, similar to linux 2.6's
security modules (LSM)

http://www.nsa.gov/selinux/papers/module/x47.html

IMO, the advantage would be that parrot apps will have a better idea
of what security model is appropriate. So if the modular security hooks
can be made cheap enough, the more vexing security/performance tradeoffs
can be left up to the parrot apps.

No clue how to achieve this, though -- just a thought from a member of
the peanut gallery.
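
Purely as an illustration of the shape of such hooks -- the names below
are invented, nothing like this exists in Parrot -- the idea is a small
table of app-supplied checks that the core consults before sensitive
operations, costing only a NULL test in the common unrestricted case:

    /* Sketch of LSM-style pluggable security hooks. Names invented. */
    typedef struct {
        int (*can_open)(const char *path, int flags);
        int (*can_spawn)(const char *cmd);
        int (*can_eval)(const char *source);
    } SecurityOps;

    static const SecurityOps *sec_ops;   /* NULL means "no policy loaded" */

    /* Called from the core IO code before opening a file. */
    static int
    security_check_open(const char *path, int flags)
    {
        if (sec_ops && sec_ops->can_open)
            return sec_ops->can_open(path, flags);
        return 1;                        /* insecure case: allow, cheaply */
    }
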
--
Joe Schaefer

Leopold Toetsch

Mar 24, 2004, 8:50:31 AM
to Dan Sugalski, perl6-i...@perl.org
Dan Sugalski <d...@sidhe.org> wrote:

> At any rate, perl 5's Safe module is a good example of the Wrong Way
> to do security, and as such we're going to take it as a cautionary
> tale rather than a template.

Ok. What about Ponie?

leo

Dan Sugalski

Mar 24, 2004, 9:21:55 AM
to l...@toetsch.at, perl6-i...@perl.org

What about it? Safe's one of those modules that's guaranteed to not
work under Ponie, as are a number of the B modules. That's OK.

Rafael Garcia-Suarez

Mar 24, 2004, 9:50:42 AM
to perl6-i...@perl.org
Dan Sugalski wrote in perl.perl6.internals :

> At 2:50 PM +0100 3/24/04, Leopold Toetsch wrote:
>>Dan Sugalski <d...@sidhe.org> wrote:
>>
>>> At any rate, perl 5's Safe module is a good example of the Wrong Way
>>> to do security, and as such we're going to take it as a cautionary
>>> tale rather than a template.
>>
>>Ok. What about Ponie?
>
> What about it? Safe's one of those modules that's guaranteed to not
> work under Ponie, as are a number of the B modules. That's OK.

Why?

OK, I understand that Ponie will compile Perl 5 source to parrot ops,
and that Safe's interface uses perl ops. However, it's a pure
compile-time module -- it hooks into the optree construction routines --
so it may be possible to have an equivalent of it under Ponie.

(not saying that this would be necessarily a good idea, though)

--
rgs

Dan Sugalski

Mar 24, 2004, 11:35:49 AM
to Rafael Garcia-Suarez, perl6-i...@perl.org

It may be possible, but I'd not count on it. And given how busted it
is, I think I'd actually prefer it not work.

Anything that twiddles deep in the internals of the interpreter is
going to fail, and there's not a whole lot we can do about that--our
internals look very different, and there's a lot that just can't be
emulated.

Dan Sugalski

Mar 24, 2004, 12:05:25 PM
to perl6-i...@perl.org
At 5:48 PM -0500 3/23/04, Joe Schaefer wrote:
>d...@sidhe.org (Dan Sugalski) writes:
>
>[...]
>
>> #s 3&4 deal with security. This... this is a dodgier issue. Security's
>> easy to get wrong and hard to get right. (Though quotas are
>> straightforward enough. Mostly) And once the framework's in place,
>> there's the issue of performance--how do we get good performance in
>> the common (insecure) case without sacrificing security in the secure case?
>
>You might wish to consider a modular design here, similar to linux 2.6's
>security modules (LSM)
>
> http://www.nsa.gov/selinux/papers/module/x47.html
>
>IMO, the advantage would be that parrot apps will have a better idea
>of what security model is appropriate.

Well... maybe.

Parrot apps don't get a whole lot of say here--this is more on the
order of OS level security. Not that it makes a huge difference, of
course.

I'm not familiar with the new linux system, and I'm not *going* to
get familiar enough with it to make any sensible decisions, so I
think I'd prefer to stick with a system I'm comfortable with and that
I know's got a solid background. (So at least any problems are a
matter of implementation rather than design -- those, at least, are
fixable)

Dan Sugalski

Mar 24, 2004, 12:06:21 PM
to oz...@algorithm.com.au, perl6-i...@perl.org
At 12:36 PM +1100 3/24/04, oz...@algorithm.com.au wrote:
>On 24/03/2004, at 6:38 AM, Dan Sugalski wrote:
>
>>At any rate, perl 5's Safe module is a good example of the Wrong
>>Way to do security, and as such we're going to take it as a
>>cautionary tale rather than a template. For security I want to go
>>with an explicit privilege model with privilege checking in
>>parrot's internals, rather than counting on op functions to Do The
>>Right Thing. That means that IO restrictions are imposed by the IO
>>code, not the IO ops, and suchlike stuff. Generally speaking, we're
>>going to emulate the VMS quota and privilege system, as it's
>>reasonably good as these things go.
>>
>>If we're going to tackle this, though, we need to pull in some
>>folks who're actually competent at it before we do more than
>>handwave about the design.
>
>This is a question without a simple answer, but does Parrot provide
>an infrastructure so that it would be possible to have
>proof-carrying[1] Parrot bytecode?

In the general sense, no. The presence of eval and the dynamic nature
of the languages we're looking at pretty much shoots down most of the
provable bytecode work. Unfortunately.

Garrett Goebel

Mar 24, 2004, 6:21:17 PM
to Dan Sugalski, perl6-i...@perl.org
Dan Sugalski wrote:
>
> If we're going to tackle this, though, we need to pull in some folks
> who're actually competent at it before we do more than handwave about
> the design.

A Language-Based Approach to Security (2000)
http://citeseer.ist.psu.edu/schneider00languagebased.html

Linux Security Modules: General Security Support for the Linux Kernel (2002)
http://citeseer.ist.psu.edu/wright02linux.html

The Three Security Architectures
http://www.canonical.org/%7Ekragen/3-sec-arch.html

Capability Security Model
http://www.erights.org/elib/capability/index.html
http://www.erights.org/elib/capability/duals/myths.html

Code Access Security in .Net
http://msdn.microsoft.com/msdnmag/issues/02/09/SecurityinNET/

Blogs of the primary CLR CAS developers
http://blogs.gotdotnet.com/gregfee
http://blogs.dotnetthis.com/Ivan/

Java vs. .NET Security, Parts 1 + 2
http://www.onjava.com/pub/a/onjava/2003/11/26/javavsdotnet.html
http://www.onjava.com/pub/a/onjava/2003/12/10/javavsdotnet.html

--
Garrett Goebel
IS Development Specialist

ScriptPro Direct: 913.403.5261
5828 Reeds Road Main: 913.384.1008
Mission, KS 66202 Fax: 913.384.2180
www.scriptpro.com garrett at scriptpro dot com

Steve Fink

Mar 24, 2004, 10:39:44 PM
to Dan Sugalski, oz...@algorithm.com.au, perl6-i...@perl.org
On Mar-24, Dan Sugalski wrote:
> At 12:36 PM +1100 3/24/04, oz...@algorithm.com.au wrote:
> >On 24/03/2004, at 6:38 AM, Dan Sugalski wrote:
> >
> >This is a question without a simple answer, but does Parrot provide
> >an infrastructure so that it would be possible to have
> >proof-carrying[1] Parrot bytecode?
>
> In the general sense, no. The presence of eval and the dynamic nature
> of the languages we're looking at pretty much shoots down most of the
> provable bytecode work. Unfortunately.

? I'm not sure if I understand why. (Though I should warn that I did
not read the referenced paper; my concept of PCC comes from reading a
single CMU paper on it a couple of years ago.) My understanding of PCC
is that it freely allows any arbitrarily complex code to be run, as
long as you provide a machine-interpretable (and valid) proof of its
safety along with it. Clearly, eval'ing arbitrary strings cannot be
proved to be safe, so no such proof can be provided (or if it is, it
will be discovered to be invalid). But that just means that you have to
avoid unprovable constructs in your PCC-boxed code.

Eval'ing a specific string *might* be provably safe, which means that
we should have a way for an external (untrusted) compiler to not only
produce bytecode, but also proofs of the safety of that bytecode. We'd
also need, of course, the trusted PCC-equipped bytecode loader to
verify the proof before executing the bytecode. (And we'd need that
anyway to load in and prove the initial bytecode.)

This would largely eliminate one of the main advantages of PCC, namely
that the expensive construction of a proof need not be paid at
runtime, only the relatively cheap proof verification. But if it is
only used for small, easily proven eval's, then it could still make
sense. The fun bit would be allowing the eval'ed code's proof to
reference aspects of the main program's proof. But perhaps the PCC
people have that worked out already?

Let me pause a second to tighten the bungee cord attached to my
desk -- all this handwaving, and I'm starting to lift off a little.

The next step into crazy land could be allowing the proofs to express
detailed properties of strings, such that they could prove that a
particular string could not possibly compile down to unsafe bytecode.
This would only be useful for very restricted languages, of course,
and I'd rather floss my brain with diamond-encrusted piano wire than
attempt to implement such a thing, but I think it still serves as a
proof of concept that Parrot and PCC aren't totally at odds.

Back to reality. I understand that many of Parrot's features would be
difficult to prove, but I'm not sure it's fundamentally any more
difficult than most OO languages. (I assume PCC allows you to punt on
proofs to some degree by inserting explicit checks for unprovable
properties, since then the guarded code can make use of those
properties to prove its own safety.)

oz...@algorithm.com.au

Mar 25, 2004, 1:34:22 AM
to st...@fink.com, d...@sidhe.org, perl6-i...@perl.org
On 25/03/2004, at 2:39 PM, Steve Fink wrote:

> On Mar-24, Dan Sugalski wrote:
>> At 12:36 PM +1100 3/24/04, oz...@algorithm.com.au wrote:
>>> On 24/03/2004, at 6:38 AM, Dan Sugalski wrote:
>>>
>>> This is a question without a simple answer, but does Parrot provide
>>> an infrastructure so that it would be possible to have
>>> proof-carrying[1] Parrot bytecode?
>>
>> In the general sense, no. The presence of eval and the dynamic nature
>> of the languages we're looking at pretty much shoots down most of the
>> provable bytecode work. Unfortunately.
>
> ? I'm not sure if I understand why. (Though I should warn that I did
> not read the referenced paper; my concept of PCC comes from reading a
> single CMU paper on it a couple of years ago.) My understanding of PCC
> is that it freely allows any arbitrarily complex code to be run, as
> long as you provide a machine-interpretable (and valid) proof of its
> safety along with it.

> Clearly, eval'ing arbitrary strings cannot be proved to be safe,

It can be safe. Normally, PCC works by certifying the code during
compilation, and attaching the machine-checkable certificate to the
resulting compiled code (be that bytecode, machine code or whatever).
At runtime, a certificate checker then validates the certificate
against the provided compiled code, to assure that what the certificate
says is true.

If you eval an arbitrary string, the compile/evaluate stages are more
closely linked: you effectively run the code (and thus check the
certificate) immediately after compilation.

The main requirement is that Parrot permits some sort of 'hooks' (see
the rough sketch after this list), so that

1. during compilation, a certificate of proof can be generated and
attached to the bytecode, and

2. before evaluation of the code, a certificate checker has to
validate the certificate against the code, and also that

3. Parrot's bytecode format must allow such a certificate to be
stored with the bytecode.
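
For hook 2, the loader-side check might look something like the C
sketch below -- every name here (the PackFile fields, verify_certificate)
is invented for illustration and is not Parrot's actual bytecode format
or API:

    /* Sketch: refuse to run bytecode whose certificate doesn't verify. */
    #include <stddef.h>

    typedef struct {
        const unsigned char *bytecode;
        size_t               bytecode_len;
        const unsigned char *cert;        /* proof/certificate segment */
        size_t               cert_len;
    } PackFile;

    /* Supplied by whichever proof checker is plugged in. */
    extern int verify_certificate(const unsigned char *code, size_t code_len,
                                  const unsigned char *cert, size_t cert_len);

    static int
    load_and_run(PackFile *pf)
    {
        if (pf->cert == NULL)
            return -1;                    /* no proof: refuse (or sandbox) */
        if (!verify_certificate(pf->bytecode, pf->bytecode_len,
                                pf->cert, pf->cert_len))
            return -1;                    /* proof doesn't match the code  */
        /* run_bytecode(pf->bytecode, pf->bytecode_len); */
        return 0;
    }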

> Eval'ing a specific string *might* be provably safe, which means that
> we should have a way for an external (untrusted) compiler to not only
> produce bytecode, but also proofs of the safety of that bytecode. We'd
> also need, of course, the trusted PCC-equipped bytecode loader to
> verify the proof before executing the bytecode. (And we'd need that
> anyway to load in and prove the initial bytecode anyway.)
>
> This would largely eliminate one of the main advantages of PCC, namely
> that the expensive construction of a proof need not be paid at
> runtime, only the relatively cheap proof verification.

If you are directly eval'ing an arbitrary string, then yes, you have to
generate the proof when you compile that string to PBC. But you can
also provide a program/subroutine/etc as PBC with a certificate already
attached.

> Back to reality. I understand that many of Parrot's features would be
> difficult to prove, but I'm not sure it's fundamentally any more
> difficult than most OO languages.

AFAIK (although I don't know that much :), the Java VM has been proved
secure to a large extent.

Joe Schaefer

Mar 24, 2004, 1:06:17 PM
to perl6-i...@perl.org
d...@sidhe.org (Dan Sugalski) writes:

> At 5:48 PM -0500 3/23/04, Joe Schaefer wrote:

[...]

> >IMO, the advantage would be that parrot apps will have a better idea
> >of what security model is appropriate.
>
> Well... maybe.
>
> Parrot apps don't get a whole lot of say here--this is more on the
> order of OS level security. Not that it makes a huge difference, of course.

To be specific, I was thinking about embedded parrot apps like mod_perl,
where it might be nice to enforce a security policy on a per-vhost
(virtual server) basis. That isn't something all parrot apps would
benefit from, of course.

--
Joe Schaefer

James Mastros

Mar 25, 2004, 6:39:21 AM
to perl6-i...@perl.org, perl6-i...@perl.org
oz...@algorithm.com.au wrote:
> It can be safe. Normally, PCC works by certifying the code during
> compilation, and attaching the machine-checkable certificate with the
> resulting compiled code (be that bytecode, machine code or whatever).
> During runtime, a certificate checker then validates the certificate
> against the provided compiled code, to assure that what the certificate
> says it's true.
Oh. In that case, the fact that it's "proof carrying" is just a
particular case of signed code. I think that's a solved problem in
parrot, at least from a design-of-bytecode perspective. It may have
become unsolved recently, though.

I thought proof-carrying code contained a proof, not a certificate.
(The difference is that a proof is verifiably true -- that is, its
givens match reality, and each step is valid. OTOH, a certificate is
something that we have to use judgment to decide if we want to trust or
not.)

> The main requirement is that Parrot permits some sort of 'hooks', so that
>
> 1. during compilation, a certificate of proof can be generated and
> attached with the bytecode, and
>
> 2. before evaluation of the code, a certificate checker has to
> validate the certificate against the code, and also that
>
> 3. Parrot's bytecode format must allow such a certificate to be
> stored with the bytecode.

I think we're done with step 3, but not 1 and 2.

> If you are directly eval'ing an arbitrary string, then yes, you have to
> generate the proof when you compile that string to PBC. But you can
> also provide a program/subroutine/etc as PBC with a certificate already
> attached.

Note that in the common case, there are no eval STRINGs (at runtime),
and thus all you have to do is prove that you don't eval STRING, which
should be a much easier proposition.

>> Back to reality. I understand that many of Parrot's features would be
>> difficult to prove, but I'm not sure it's fundamentally any more
>> difficult than most OO languages.
>
> AFAIK (although I don't know that much :), the Java VM has been proved
> secure to a large extent.

I suspect most code that wants to be provable will attempt to prove that
it does not use those features, rather than prove that it uses them safely.

(As pointed out in a deleted bit of the grandparent post, this may
consist of proving that it has a bit set in the header that says that it
shouldn't be allowed to eval string, which is easy to prove, since it's
a verifiable given.)

-=- James Mastros

Dan Sugalski

Mar 25, 2004, 9:03:37 AM
to Joe Schaefer, perl6-i...@perl.org

Ah, *that* is a different matter altogether.

I'm planning an alternate mechanism for that, though it may be a bit
much--rather than restricting the dangerous things, we make sure all
the dangerous things can be delegated to the embedder. So file
manipulation, mass memory allocation/deallocation, and real low-level
signal handling, for example, all get punted to the embedder, who can
then do whatever they want.

This means that when we go read some data from a file we call, say,
Parrot_read, which for the base parrot'll be just read, while for an
embedded parrot it may call some Apache thunking layer or something
instead.
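
A minimal C sketch of that delegation -- Parrot_read is the name used
above, but Parrot_set_read and the default implementation are invented
for illustration:

    /* The embedder may install its own low-level IO routine; the
       default just calls read(2). Sketch only. */
    #include <unistd.h>

    typedef long (*Parrot_read_fn)(int fd, void *buf, unsigned long len);

    static long
    default_read(int fd, void *buf, unsigned long len)
    {
        return (long)read(fd, buf, len);
    }

    static Parrot_read_fn parrot_read_impl = default_read;

    /* An embedder (an Apache thunking layer, say) swaps in its own. */
    void Parrot_set_read(Parrot_read_fn fn) { parrot_read_impl = fn; }

    long Parrot_read(int fd, void *buf, unsigned long len)
    {
        return parrot_read_impl(fd, buf, len);
    }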

Larry Wall

Mar 25, 2004, 1:32:02 PM
to perl6-i...@perl.org
Do bear in mind that Perl can execute bits of code as it's compiling,
so if a bit of code is untrustworthy, you shouldn't be compiling it
in the first place, unless you've prescanned it to reject C<use>,
C<BEGIN>, and other macro definitions, or (more usefully) have hooks
in the compiler to catch and validate those bits of code before
running them. Doesn't do you much good to disallow

eval 'system "rm -rf /"';

at run time if you don't also catch

BEGIN { system "rm -rf /"; }

at compile time...

(Sorry if I'm just pointing out the obvious.)

Larry

Rafael Garcia-Suarez

Mar 25, 2004, 2:22:50 PM
to perl6-i...@perl.org
Larry Wall wrote in perl.perl6.internals :

That's mostly what Perl 5's Safe is doing. Hence my previous comment.

The major flaw with this approach is that it's probably not going to
prevent
eval 'while(1){}'
or
eval '$x = "take this!" x 1_000_000'
or my personal favourite, the always funny
eval 'CORE::dump()'
unless you set up a very restrictive set of allowed ops.

(in each case, you abuse system resources: CPU, memory or ability to
send a signal. I don't know how to put restrictions on all of these
in the general case...)

Jarkko Hietaniemi

Mar 25, 2004, 4:35:59 PM
to perl6-i...@perl.org, Rafael Garcia-Suarez, perl6-i...@perl.org
Rafael Garcia-Suarez wrote:

> prevent
> eval 'while(1){}'
> or
> eval '$x = "take this!" x 1_000_000'

Or hog both (for a small while):

eval 'while(push@a,0){}'

> or my personal favourite, the always funny
> eval 'CORE::dump()'
> unless you set up a very restrictive set of allowed ops
>

Dan Sugalski

Mar 25, 2004, 4:40:33 PM
to Jarkko Hietaniemi, perl6-i...@perl.org, Rafael Garcia-Suarez, perl6-i...@perl.org
At 11:35 PM +0200 3/25/04, Jarkko Hietaniemi wrote:
>Rafael Garcia-Suarez wrote:
>
>> prevent
>> eval 'while(1){}'
>> or
>> eval '$x = "take this!" x 1_000_000'
>
>Or hog both (for a small while):
>
> eval 'while(push@a,0){}'

Which, if the interpreter's running with quotas, will be caught when
it either exceeds the allowable memory limits or CPU limits.

Yay, quotas! :)
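
To sketch where such quota checks could live -- the names, limits, and
crude CPU measure below are invented for illustration, not Parrot code:

    /* Quota checks in the allocator and runloop catch runaway memory
       or CPU use. Sketch only. */
    #include <stdlib.h>
    #include <time.h>

    typedef struct {
        size_t  mem_used;
        size_t  mem_limit;      /* bytes       */
        clock_t cpu_start;
        long    cpu_limit;      /* CPU seconds */
    } Quota;

    static void *
    quota_alloc(Quota *q, size_t size)
    {
        if (q->mem_used + size > q->mem_limit)
            return NULL;        /* real code: throw an exception */
        q->mem_used += size;
        return malloc(size);
    }

    /* Called every N ops from the runloop. */
    static int
    quota_cpu_ok(const Quota *q)
    {
        long used = (long)((clock() - q->cpu_start) / CLOCKS_PER_SEC);
        return used <= q->cpu_limit;
    }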

>> or my personal favourite, the always funny
>> eval 'CORE::dump()'
>> unless you set up a very restrictive set of allowed ops
>>
>> (in each case, you abuse system resources: CPU, memory or ability to
>> send a signal. I don't know how to put restrictions on all of these
>> in the general case...)

James Mastros

Mar 26, 2004, 8:57:12 AM
to perl6-i...@perl.org
Larry Wall wrote:
> Do bear in mind that Perl can execute bits of code as it's compiling,
> so if a bit of code is untrustworthy, you shouldn't be compiling it
> in the first place, unless you've prescanned it to reject C<use>,
> C<BEGIN>, and other macro definitions, or (more usefully) have hooks
> in the compiler to catch and validate those bits of code before
> running them.
In other words, the compiler must be sure to run immediate bits of code
with the same restrictions as it would run the real code.

This isn't a parrot issue per se; it's a compiler issue, and I don't
see how it requires additional mechanisms for parrot, unless possibly
it's running one pbc (the compiler itself) with one set of
restrictions/quotas, and another bytecode segment (pbc generated during
the compile) with another set.

I think we were planning on that anyway (to allow libraries to be more
trusted than the code that calls them, and callbacks to be less trusted).

-=- James Mastros

Dan Sugalski

Mar 26, 2004, 9:26:45 AM
to James Mastros, perl6-i...@perl.org

Yup. Subroutines and methods are privilege boundaries, and code with
extra rights may call into less privileged code safely. We need to
work out the mechanism though.
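
One possible shape of that mechanism, sketched in C purely for
illustration (all names invented; this is not the design):

    /* A sub call as a privilege boundary: the callee runs with the
       intersection of the caller's rights and its own declared rights,
       and the caller's set is restored on return. Sketch only. */
    typedef unsigned long Privs;

    typedef struct {
        Privs privs;            /* current effective privileges        */
    } Interp;

    typedef struct {
        Privs declared;         /* privileges the sub may hold at most */
        void (*body)(Interp *);
    } Sub;

    static void
    call_sub(Interp *interp, const Sub *sub)
    {
        Privs saved = interp->privs;
        interp->privs &= sub->declared;   /* less trusted code gets less */
        sub->body(interp);
        interp->privs = saved;            /* caller's rights come back   */
    }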

Larry Wall

Mar 26, 2004, 11:27:30 AM
to perl6-i...@perl.org
On Fri, Mar 26, 2004 at 09:26:45AM -0500, Dan Sugalski wrote:
: Yup. Subroutines and methods are privilege boundaries, and code with
: extra rights may call into less privileged code safely. We need to
: work out the mechanism though.

One thing you'll have to do in that case is disable the ability to peek
outward into your dynamic scope for various tidbits, such as $CALLER::_.

Larry
