On Fri, 15 Dec 2023 17:10:36 +0000, MitchAlsup wrote:
> Quadibloc wrote:
>> Is the CPU even the place for sandboxing? A genuinely effective sandbox
>> would involve a physical separation between the protected computer and
>> the one connected to the Internet, after all. But that isn't
>> convenient...
>
> With 10-cycle context switches, you can run the sandbox where those
> cores are only provided indirect access to the internet through an
> IPI-like mechanism (also 10-cycles).
>
> When a real hard context switch remains 10 cycles, you can run the
> secure sandbox under a different HyperVisor that provides no illusion
> of internet access. Still 10 cycles.
It's certainly true that faster context switches let you switch
contexts more often with less loss of performance. One obvious
mechanism, used by some old computers, is to keep a separate set of
registers for the 'other' state instead of saving and restoring
registers to and from memory. So, yes, we certainly know how to do
that.
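To make that banked-register idea concrete, here's a rough software
model of it in C. It is purely illustrative - real machines do the
bank select in the register file itself, and names like reg_bank_t
and switch_context() are mine, not anybody's actual design.

#include <stdint.h>
#include <stdio.h>

#define NUM_REGS 32

/* One architectural register set ("context"). */
typedef struct {
    uint64_t gpr[NUM_REGS];   /* general-purpose registers */
    uint64_t pc;              /* program counter */
} reg_bank_t;

/* Two banks: one for the "good" context, one for the sandbox. */
static reg_bank_t bank[2];
static int current = 0;

/* A "context switch" is just selecting the other bank: no loads or
 * stores of register state to memory at all. */
static reg_bank_t *switch_context(void)
{
    current ^= 1;
    return &bank[current];
}

int main(void)
{
    bank[0].pc = 0x1000;      /* pretend entry points */
    bank[1].pc = 0x2000;
    printf("now running at %#llx\n",
           (unsigned long long)switch_context()->pc);  /* 0x2000 */
    printf("now running at %#llx\n",
           (unsigned long long)switch_context()->pc);  /* 0x1000 */
    return 0;
}

The point is that the "switch" is a bank select rather than a burst
of loads and stores to memory, which is what makes figures like 10
cycles plausible.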
To me, though, the problem is that of course you can have context A
and context B, but how do we actually contain the untrusted context,
so that it can't just find some unanticipated vulnerability that lets
it affect, or spy on, the privileged context?
It's in a different address space, because the MMU put it there? Great,
that _should_ work, but we've already had cases of malware surmounting
the barriers between virtual machines under a hypervisor, for example.
If using a *virtual machine* won't give you a secure sandbox, then I
despair of anything else giving you a secure sandbox on a single computer.
And, worse, even a perfectly secure sandbox wouldn't completely solve
the security problem, because one of the things you use the Internet
for is to download programs to run on your "good" computer. So the
sandbox runs JavaScript and the like, but there's still a link
between it and the computer we want to keep secure, so that the
downloads can get through.
And the stuff that's downloaded can be corrupted - on the server,
or inside the sandbox where the bad things are locked up!
These meditations lead me to the conclusion that there's no
really simple solution that can easily be seen to be perfect.
To give computers even a fighting chance to be more secure, then,
it seems the result will have to be more like this:
- Even the "good" computer, the one that runs the programs that need
its full speed and power, will still need to run antivirus programs
and have mitigations. Perimeter security that does away with the need
for this is not possible.
- The Internet-facing computer needs to be made highly resistant
to being compromised. An old idea from the early Bell Electronic
Switching System seems appropriate here: this computer should be
physically unable to ever write to the memory it uses for (primary)
executable code, that is, the programs that handle Internet access,
its own local operating system, and so on; instead, that computer's
software gets loaded into its memory by the "good" computer shielded
behind it (a toy model of this load-and-lock arrangement is sketched
a little further down).
It still gets to run "secondary" executable code - JavaScript and
the like - and because that's so dangerous, the Internet-facing
computer, rather than being treated as the sandbox itself, should
still have the kind of software sandboxing that processors provide
nowadays. But, wait - we already know those sandboxes _can_ be
compromised. So, while what I've described so far creates a second
line of defense, which is nice, it's not really a "solution", since
the lines of defense can be compromised one at a time.
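For concreteness, here's a toy software model of the load-and-lock
arrangement I have in mind for the Internet-facing computer's program
store. It is only a sketch under my own assumptions: program_store,
wp_latched and the rest are invented names, and in real hardware the
latch would be a physical write-protect line, not a boolean.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PROGRAM_STORE_SIZE 4096

/* The front-end machine's program store, as seen from the "good"
 * computer over a dedicated load link. */
static uint8_t program_store[PROGRAM_STORE_SIZE];
static bool    wp_latched = false;  /* models a write-protect pin */

/* Only the "good" computer calls this, and only before the latch
 * closes; the front-end CPU has no write path to the store at all. */
static bool load_frontend_image(const uint8_t *image, size_t len)
{
    if (wp_latched || len > sizeof program_store)
        return false;
    memcpy(program_store, image, len);
    wp_latched = true;              /* stays set until a hard reset */
    return true;
}

int main(void)
{
    const uint8_t image[] = { 0x90, 0x90, 0xC3 };  /* placeholder */
    printf("first load:  %s\n",
           load_frontend_image(image, sizeof image) ? "ok" : "rejected");
    printf("second load: %s\n",
           load_frontend_image(image, sizeof image) ? "ok" : "rejected");
    return 0;
}

The point is simply that writes arrive only over the load link, and
only before the latch closes; after that, the store is strictly
read/execute-only as far as the front end is concerned.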
So you also need one other thing: a good mechanism, external to the
Internet-facing computer, which can detect if something has been
compromised on that computer. For example, a technique like rowhammer
could compromise the program memory in it that it doesn't even have a
read connection to! (Of course, using special memory that resists
rowhammer, even if it is slower and/or more expensive, is another
measure that I think is warranted here.)
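As a sketch of what that external check might look like - again
purely my illustration, with FNV-1a standing in for a real
cryptographic hash such as SHA-256, and the front end's program store
modelled as a plain buffer read back over the load link:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* FNV-1a, standing in for a real cryptographic hash (e.g. SHA-256). */
static uint64_t digest(const uint8_t *buf, size_t len)
{
    uint64_t h = 0xcbf29ce484222325ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= buf[i];
        h *= 0x100000001b3ULL;
    }
    return h;
}

int main(void)
{
    /* Stands in for the front end's program store, as read back by
     * the "good" computer over the load link. */
    uint8_t store[64] = { 0x90, 0x90, 0xC3 };

    /* Digest recorded by the "good" computer at load time. */
    uint64_t expected = digest(store, sizeof store);

    store[17] ^= 0x04;  /* simulate a rowhammer-style single-bit flip */

    bool intact = (digest(store, sizeof store) == expected);
    printf("front-end image intact: %s\n",
           intact ? "yes" : "no - reload it");
    return 0;
}

The "good" computer would run such a check periodically and reload
the front end whenever the digest stops matching.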
So the idea, I guess, so far is this:
Routine stuff - the JavaScript that runs to give web sites more
functionality - stays outside the "real" computer, which is a large
reduction in the risk.
Some functionality requires things to go from the Internet to the
"good" computer to be executed, but if the amount of that is limited,
then one can restrict it to trusted sources: e.g. reputable game
publishers, or trusted repositories of software for download.
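One obvious way to enforce "trusted sources" on the "good" computer
is to refuse to install anything that doesn't verify against a pinned
publisher key. Here is a minimal sketch using libsodium's detached
Ed25519 signatures; the pinned key, its placeholder bytes, and the
function name are my own inventions, and real key distribution and
revocation are the hard parts I'm waving away.

#include <sodium.h>
#include <stdbool.h>
#include <stddef.h>

/* Pinned public key of a trusted publisher (placeholder bytes). */
static const unsigned char trusted_pk[crypto_sign_PUBLICKEYBYTES] = {0};

bool package_is_trusted(const unsigned char *pkg, size_t pkg_len,
                        const unsigned char sig[crypto_sign_BYTES])
{
    if (sodium_init() < 0)
        return false;  /* library failed to start: refuse to install */
    return crypto_sign_verify_detached(sig, pkg, pkg_len,
                                       trusted_pk) == 0;
}

A signature check like this proves only who published the package,
not that the package is safe.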
Restricting things this way keeps the danger down to a "dull roar";
no longer will sending a bad E-mail to a computer, or getting it to
look at a bad web site, let a miscreant take over the computer.
Social engineering, where a user is led to trust the _wrong_ software
repository, is of course still going to be possible. That couldn't be
eliminated without compromising the usability and power of the
computer (though for special purposes such a compromise may be
acceptable; e.g. locked-down Chromebooks or iOS devices lent out by
schools to their students and stuff like that).
So now you've heard my philosophy on security. (And that guy who
made philosophy a dirty word here disappeared for real about the time
of the spam onslaught. So that ill wind blew some good.)
John Savard