--
You received this message because you are subscribed to the Google Groups "cap-talk" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cap-talk+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/cap-talk/38B53323-CD9F-4932-89B7-15EF9A6B91FF%40forgerock.com.
Is it unexpected that a program can overwrite its own memory?
Even in Rust it's possible to do this already by using ‘unsafe’ to call some C code that typecasts (for example).
But programs wouldn’t work if they weren’t allowed to overwrite memory (how would they be loaded to run?).

This is a clever demonstration that it is possible to adhere nicely to your model (marking unsafe code) but still be blindsided by something completely outside of your model/abstraction that nonetheless provides your context (programs run on an OS).
- johnk

On Mar 17, 2021, at 12:05 PM, Neil Madden <neil....@forgerock.com> wrote:

> Just came across this entertaining blog post about bypassing Rust’s unsafe mode and implementing “transmute” (reinterpret cast) entirely within “safe” code:
>
> The trick is that it opens /proc/self/mem in write mode and uses it to adjust a tagged union behind the back of the language semantics. We all know that security usually depends on memory safety, so this is a nice illustration that memory safety itself depends on other security details (at least, on Linux).
>
> — Neil
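The trick Neil describes can be sketched in a few lines of entirely “safe” Rust (a minimal sketch, assuming Linux; the `poke` helper and the variable names are illustrative, not taken from the original post):

```rust
use std::fs::OpenOptions;
use std::io::{Seek, SeekFrom, Write};

// Overwrite bytes of our own address space through /proc/self/mem.
// No `unsafe` block anywhere -- the kernel does the writing for us.
fn poke(addr: usize, bytes: &[u8]) -> std::io::Result<()> {
    let mut mem = OpenOptions::new().write(true).open("/proc/self/mem")?;
    mem.seek(SeekFrom::Start(addr as u64))?;
    mem.write_all(bytes)
}

fn main() -> std::io::Result<()> {
    let x: i32 = 1; // immutable, as far as the language is concerned
    let addr = &x as *const i32 as usize; // creating a raw pointer is safe
    poke(addr, &2i32.to_le_bytes())?;
    // On Linux this typically prints x = 2: the binding changed behind
    // the compiler's back, despite the program containing no `unsafe`.
    println!("x = {}", std::hint::black_box(x));
    Ok(())
}
```

The same idea works on a tagged union (as in the post) by overwriting the discriminant; an `i32` just keeps the sketch short.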
On Mar 17, 2021, at 4:09 PM, Neil Madden <neil....@forgerock.com> wrote:

On 17 Mar 2021, at 18:27, John Kemp <stable.p...@gmail.com> wrote:

> Is it unexpected that a program can overwrite its own memory?

In a language that claims to be memory-safe, yes.
> Even in Rust it's possible to do this already by using ‘unsafe’ to call some C code that typecasts (for example).

But you’re explicitly stepping outside of the language abstractions to do that, which is what “unsafe” is for. As the article shows, you can easily ban the use of unsafe blocks in Rust to prevent this, but that is not sufficient.
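For reference, banning unsafe blocks crate-wide is a single attribute; a minimal sketch:

```rust
#![forbid(unsafe_code)] // any `unsafe` block in this crate is a hard compile error

fn greeting() -> &'static str {
    // Uncommenting the next line would fail the build under forbid(unsafe_code):
    // unsafe { std::hint::unreachable_unchecked() }
    "no unsafe blocks here"
}

fn main() {
    println!("{}", greeting());
}
```

As the /proc/self/mem example shows, this bans the `unsafe` keyword but does nothing to close OS-level channels into the process's memory.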
> But programs wouldn’t work if they weren’t allowed to overwrite memory (how would they be loaded to run?)
>
> This is a clever demonstration that it is possible to adhere nicely to your model (marking unsafe code), but still be blindsided by something completely outside of your model/abstraction, but which provides you context (programs run on an OS).

My take on it is this: introductions to capability security often list memory-safety as a key prerequisite - eg [1]. What this article shows is that in fact security is itself a prerequisite for memory-safety. Only by controlling access to the environment can you defend such abstractions.
— Neil
> My take on it is this: introductions to capability security often list memory-safety as a key prerequisite - eg [1]. What this article shows is that in fact security is itself a prerequisite for memory-safety. Only by controlling access to the environment can you defend such abstractions.
But models are only ever as secure as their contextual assumptions. The thing that’s so bad about this specific issue in Rust (if you ask me, a novice in the language) is that you can do this without explicitly _marking_ the code as unsafe (as you note above). I thought a key benefit of Rust was that you could audit the code (read it) and, in theory, identify the unsafe parts because they are so marked. This example clearly demonstrates that’s not true, and makes Rust less useful than I thought.
On Mar 17, 2021, at 5:02 PM, Tony Arcieri <bas...@gmail.com> wrote:

There were a few other "escape hatches" that could still be used for ambient authority in such cases, but I have been participating in trying to close them:
On Mar 17, 2021, at 5:09 PM, Kevin Reid <kpr...@switchb.org> wrote:

It's not a flaw in a building design if you reach out a window and destroy its foundation. All programs depend on the platform they execute on, and very few platforms offer guarantees of process integrity without some exception.
On 3/17/21 at 5:09 PM, kpr...@switchb.org (Kevin Reid) wrote:
>... Breaking memory safety is a general
>hazard wherever there is some means of memory access outside of CPU
>instructions executed by the output of exactly one compiler purporting to
>provide memory safety:
>
>• /proc/*/mem
>• System calls
>• Debuggers
>• Dynamically loaded libraries
>• …Probably many more examples I haven't thought of
Add in I/O channels and/or devices with DMA.
Some of this discussion reminds me of olden times, when people
were thinking about strict Harvard architecture as a solution to
attacks on systems. They were suggesting extending the
separation of data memory and instruction memory to the I/O
devices. The question is how do you write a compiler, a linker,
a loader with this kind of strict separation?
A similar question comes up when you consider interpreted
languages. One of my favorite examples is malicious font
files that successfully attacked computer systems through
bugs in the font interpreters.
Cheers - Bill
-----------------------------------------------------------------------
Bill Frantz        | If the site is supported by | Periwinkle
(408)348-7900      | ads, you are the product.   | 150 Rivermead Road #235
www.pwpconsult.com |                             | Peterborough, NY 03458
hm, i feel like somebody should work on making a subset of languages
that is more capability safe.