Totally safe transmute


Neil Madden

17.03.2021, 12:05:25
to cap-...@googlegroups.com
Just came across this entertaining blog post about bypassing Rust’s unsafe mode and implementing “transmute” (reinterpret cast) entirely within “safe” code:


The trick is that it opens /proc/self/mem in write mode and uses it to adjust a tagged union behind the back of the language semantics. We all know that security usually depends on memory safety, so this is a nice illustration that memory safety itself depends on other security details (at least, on Linux). 
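
For a flavour of the trick, here is a minimal sketch (my own, not the post's code, which adjusts a tagged union; Linux-only, and best run as a debug build so the compiler doesn't constant-fold the value away):

// Safe Rust only: no `unsafe` block anywhere, yet we mutate our own memory.
use std::fs::OpenOptions;
use std::io::{Seek, SeekFrom, Write};

fn main() -> std::io::Result<()> {
    let x: u32 = 7;
    // Taking and casting a raw pointer is safe; only *dereferencing* needs `unsafe`.
    let addr = &x as *const u32 as u64;

    // /proc/self/mem lets a process write to its own address space,
    // behind the back of the language's aliasing and mutability rules.
    let mut mem = OpenOptions::new().write(true).open("/proc/self/mem")?;
    mem.seek(SeekFrom::Start(addr))?;
    mem.write_all(&42u32.to_ne_bytes())?;

    println!("x = {}", x); // prints 42, despite `x` being immutable
    Ok(())
}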

— Neil


John Kemp

17.03.2021, 14:27:11
to cap-...@googlegroups.com
Is it unexpected that a program can overwrite its own memory? Even in Rust it's possible to do this already by using ‘unsafe’ to call some C code that typecasts (for example). But programs wouldn’t work if they weren’t allowed to overwrite memory (how would they be loaded to run?)

This is a clever demonstration that it is possible to adhere nicely to your model (marking unsafe code), but still be blindsided by something completely outside of your model/abstraction that nonetheless provides your context (programs run on an OS).

- johnk


Neil Madden

17.03.2021, 16:09:30
to cap-...@googlegroups.com
On 17 Mar 2021, at 18:27, John Kemp <stable.p...@gmail.com> wrote:

> Is it unexpected that a program can overwrite its own memory?

In a language that claims to be memory-safe, yes. 

> Even in Rust it's possible to do this already by using ‘unsafe’ to call some C code that typecasts (for example).

But you’re explicitly stepping outside of the language abstractions to do that, which is what “unsafe” is for. As the article shows, you can easily ban the use of unsafe blocks in Rust to prevent this, but that is not sufficient. 

> But programs wouldn’t work if they weren’t allowed to overwrite memory (how would they be loaded to run?)
>
> This is a clever demonstration that it is possible to adhere nicely to your model (marking unsafe code), but still be blindsided by something completely outside of your model/abstraction that nonetheless provides your context (programs run on an OS).

My take on it is this: introductions to capability security often list memory safety as a key prerequisite, e.g. [1]. What this article shows is that in fact security is itself a prerequisite for memory safety. Only by controlling access to the environment can you defend such abstractions.


— Neil



John Kemp

17.03.2021, 16:51:20
to cap-...@googlegroups.com
On Mar 17, 2021, at 4:09 PM, Neil Madden <neil....@forgerock.com> wrote:

> On 17 Mar 2021, at 18:27, John Kemp <stable.p...@gmail.com> wrote:
>> Is it unexpected that a program can overwrite its own memory?
>
> In a language that claims to be memory-safe, yes.

Then no language (abstracted away from its physical computing environment) can ever be truly memory-safe, despite any such claim. That's not really a surprise, though.

>> Even in Rust it's possible to do this already by using ‘unsafe’ to call some C code that typecasts (for example).
>
> But you’re explicitly stepping outside of the language abstractions to do that, which is what “unsafe” is for. As the article shows, you can easily ban the use of unsafe blocks in Rust to prevent this, but that is not sufficient.

Indeed.

>> But programs wouldn’t work if they weren’t allowed to overwrite memory (how would they be loaded to run?)
>>
>> This is a clever demonstration that it is possible to adhere nicely to your model (marking unsafe code), but still be blindsided by something completely outside of your model/abstraction that nonetheless provides your context (programs run on an OS).
>
> My take on it is this: introductions to capability security often list memory safety as a key prerequisite, e.g. [1]. What this article shows is that in fact security is itself a prerequisite for memory safety. Only by controlling access to the environment can you defend such abstractions.


FWIW, we agree :) 

But models are only ever as secure as their contextual assumptions; the thing that’s so bad about this specific issue in Rust (if you ask me, a novice in the language) is that you can do this without explicitly _marking_ the code as unsafe (as you note above). I think (or thought) a key benefit of Rust is that you can audit the code (read it) and, in theory, see which parts are unsafe because they are marked as such. This example clearly demonstrates that’s not true, and makes Rust less useful than I thought.

- johnk



Alan Karp

17.03.2021, 16:59:25
to cap-...@googlegroups.com
John Kemp <stable.p...@gmail.com> wrote:

> But models are only ever as secure as their contextual assumptions; the thing that’s so bad about this specific issue in Rust (if you ask me, a novice in the language) is that you can do this without explicitly _marking_ the code as unsafe (as you note above). I think (or thought) a key benefit of Rust is that you can audit the code (read it) and, in theory, see which parts are unsafe because they are marked as such. This example clearly demonstrates that’s not true, and makes Rust less useful than I thought.

Or does it mean you have to audit for more than just use of the unsafe keyword?

--------------
Alan Karp

Tony Arcieri

17.03.2021, 17:02:24
to cap-...@googlegroups.com
On Wed, Mar 17, 2021 at 1:09 PM Neil Madden <neil....@forgerock.com> wrote:
> My take on it is this: introductions to capability security often list memory safety as a key prerequisite, e.g. [1]. What this article shows is that in fact security is itself a prerequisite for memory safety. Only by controlling access to the environment can you defend such abstractions.

Rust's `std` carries with it all of the ambient authority of system calls the underlying operating system provides.

While it's interesting that they built an abstraction that leverages this ambient authority in an unexpected way, I don't consider this any more problematic from a memory safety perspective than a "safe" program which can use std::process::Command to run "sudo dd if=/dev/urandom of=/dev/ram".
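
To make that concrete, a sketch of the sort of program meant here (mine, not from the post; obviously not something to actually run):

// Compiles as 100% "safe" Rust: the type system says nothing about the
// ambient authority this program exercises through the OS.
use std::process::Command;

fn main() {
    let status = Command::new("sudo")
        .args(["dd", "if=/dev/urandom", "of=/dev/ram"])
        .status(); // runs with whatever privileges the process happens to have
    println!("exited: {:?}", status);
}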

Getting OCap-like properties out of Rust requires #![no_std] and #![forbid(unsafe_code)] at a minimum. If there are any dependencies, they must follow this strategy too. The cargo-geiger tool can lint your dependency tree to ensure it has these properties, although it isn't airtight.
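
As a sketch of what that strategy looks like at the crate level (my illustration; the `checksum` function is just a stand-in for pure computation):

// lib.rs of a crate aiming for the properties described above.
#![no_std]              // no std: no files, sockets, process spawning, /proc/self/mem
#![forbid(unsafe_code)] // any `unsafe` in this crate is a hard compile error

/// With no_std and no unsafe, this crate cannot reach the OS on its own;
/// whatever authority it needs must be passed in explicitly by the caller.
pub fn checksum(data: &[u8]) -> u32 {
    data.iter()
        .fold(0u32, |acc, &b| acc.wrapping_mul(31).wrapping_add(u32::from(b)))
}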

There were a few other "escape hatches" that could still be used for ambient authority in such cases, but I have been participating in trying to close them:


--
Tony Arcieri

Kevin Reid

17.03.2021, 17:10:17
to cap-...@googlegroups.com
On Wed, Mar 17, 2021 at 1:51 PM John Kemp <stable.p...@gmail.com> wrote:
> But models are only ever as secure as their contextual assumptions; the thing that’s so bad about this specific issue in Rust (if you ask me, a novice in the language) is that you can do this without explicitly _marking_ the code as unsafe (as you note above). I think (or thought) a key benefit of Rust is that you can audit the code (read it) and, in theory, see which parts are unsafe because they are marked as such. This example clearly demonstrates that’s not true, and makes Rust less useful than I thought.

Accessing /proc/self/mem defeats the memory safety of Java, JavaScript, Python, etc. just as much as it defeats that of Rust; the unusual thing here is that Rust offers an intra-language mechanism (unsafe) to accomplish the same things without indirection. Breaking memory safety is a general hazard wherever there is some means of memory access outside of CPU instructions executed by the output of exactly one compiler purporting to provide memory safety:

• /proc/*/mem
• System calls
• Debuggers
• Dynamically loaded libraries
• …Probably many more examples I haven't thought of

It's not a flaw in a building design if you reach out a window and destroy its foundation. All programs depend on the platform they execute on, and very few platforms offer guarantees of process integrity without some exception.

John Kemp

17.03.2021, 17:48:06
to cap-...@googlegroups.com
On Mar 17, 2021, at 5:02 PM, Tony Arcieri <bas...@gmail.com> wrote:

> There were a few other "escape hatches" that could still be used for ambient authority in such cases, but I have been participating in trying to close them:


Be careful, you might turn Rust back into Haskell… ;)

- johnk

Bill Frantz

17.03.2021, 17:52:20
to cap-...@googlegroups.com
On 3/17/21 at 5:09 PM, kpr...@switchb.org (Kevin Reid) wrote:

>... Breaking memory safety is a general
>hazard wherever there is some means of memory access outside of CPU
>instructions executed by the output of exactly one compiler purporting to
>provide memory safety:
>
>• /proc/*/mem
>• System calls
>• Debuggers
>• Dynamically loaded libraries
>• …Probably many more examples I haven't thought of

Add in I/O channels and/or devices with DMA.

Some of this discussion reminds me of olden times, when people
were thinking about strict Harvard architecture as a solution to
attacks on systems. They were suggesting extending the
separation of data memory and instruction memory to the I/O
devices. The question is how do you write a compiler, a linker,
a loader with this kind of strict separation?

A similar question comes up when you consider interpreted
languages. One of my favorite examples is font libraries which
successfully performed attacks on computer systems through bugs
in the font interpreters.

Cheers - Bill

-----------------------------------------------------------------------
Bill Frantz        | If the site is supported by  | Periwinkle
(408)348-7900      | ads, you are the product.    | 150 Rivermead Road #235
www.pwpconsult.com |                              | Peterborough, NY 03458

John Kemp

17.03.2021, 18:14:56
to cap-...@googlegroups.com
Kevin,


On Mar 17, 2021, at 5:09 PM, Kevin Reid <kpr...@switchb.org> wrote:

> It's not a flaw in a building design if you reach out a window and destroy its foundation. All programs depend on the platform they execute on, and very few platforms offer guarantees of process integrity without some exception.

I completely agree with you. Rust is certainly no worse than any other programming language. And the explicit borrows, and the markings of mutation and unsafe code, are really very nice. “Safe” is less safe than I thought, but it sounds like Tony’s fix will help.

Finally, here’s a language/kernel that supports OS-level capabilities, directly, where you can reason about them within the language itself: http://mumble.net/~jar/pubs/secureos/secureos.html 

- johnk

Matt Rice

17.03.2021, 18:47:02
to cap-talk
On Wed, Mar 17, 2021 at 9:52 PM Bill Frantz <fra...@pwpconsult.com> wrote:
> On 3/17/21 at 5:09 PM, kpr...@switchb.org (Kevin Reid) wrote:
>
>> ... Breaking memory safety is a general hazard wherever there is some means of memory access outside of CPU instructions executed by the output of exactly one compiler purporting to provide memory safety:
>>
>> • /proc/*/mem
>> • System calls
>> • Debuggers
>> • Dynamically loaded libraries
>> • …Probably many more examples I haven't thought of
>
> Add in I/O channels and/or devices with DMA.
>
> Some of this discussion reminds me of olden times, when people were thinking about strict Harvard architecture as a solution to attacks on systems. They were suggesting extending the separation of data memory and instruction memory to the I/O devices. The question is how do you write a compiler, a linker, a loader with this kind of strict separation?


I personally think it's entirely reasonable/feasible given the oodles of memory available in post-olden times, if you do input I/O before the entry point and output I/O at exit, and hoard everything in memory during compilation/linking/loading. The scheduler needs to flip W^X bits (https://en.wikipedia.org/wiki/W%5EX) rather than use a strict Harvard architecture, though I suppose copying would suffice in lieu. I actually think that for a compiler/linker (or "transformational programs", as I believe Mark calls them) you can get away with a system almost devoid of system calls, fewer even than a capability system, at least if you are willing to limit yourself to transformational programs running on something resembling an old batch-processing OS.

I guess another way to state it: if you assume the program runs confined, doing I/O before and after processing, it requires no other external capabilities. Perhaps this is considered cheating; you might argue the system calls do exist (i.e. in the specific I/O processes), but given no capabilities the compilation process has no use for invoking any *shrug*...
 

Raoul Duke

17.03.2021, 18:47:17
to cap-...@googlegroups.com
hm, i feel like somebody should work on making a subset of languages
that is more capability safe.

(that's said in jest, cf. Joe-E et al. :-)

Tony Arcieri

17.03.2021, 18:56:46
to cap-...@googlegroups.com
On Wed, Mar 17, 2021 at 3:47 PM Raoul Duke <rao...@gmail.com> wrote:
> hm, i feel like somebody should work on making a subset of languages
> that is more capability safe.

If you'd like a subset of Rust that works like this, check out hacspec:


It's a work-in-progress Rust subset with formal semantics amenable to mechanical translation to F* (with the goal of supporting other formal verification-oriented languages like Coq and Crucible).

Ambient authority is inexpressible in this subset of the language. It's probably the closest thing to a truly "sandbox safe" subset of Rust.

--
Tony Arcieri

Alan Karp

17.03.2021, 19:12:35
to cap-...@googlegroups.com
A tamed library was needed for both Java (Joe-E) and JavaScript (Caja).  Why would Rust be any different?

--------------
Alan Karp

