> Sorry about the overlong reply;
Back at you. :^)
> Wrt sandboxing; note that if we make the behavior dependent on the
> host "sseed" configuration, we're unnecessarily telling a VS/VU mode
> entity something about the host physical configuration; either
> directly or via execution latency. I don't like the implied mixing of
> the state of the virtual guest and the host configuration.
I understand, but we need to be realistic. (There's also a purist
information-leakage reason not to raise virtual instruction exceptions
when mseccfg.sseed = 0, which I get to later.)
According to what I've been told, for performance reasons it's
standard practice to run most guest virtual machines at least
partly "paravirtualized", which means the guest knows, somewhere in
its software, that it's inside a guest environment. It's true that
we've tried to make the RISC-V hypervisor hardware capable of truly
virtualizing a wide range of machines so the guest doesn't easily know.
But we should not imagine this is the typical use case. The vast
majority of folks who care about the performance of their guests run
them every day in paravirtualized environments, and thus regularly mix
"the state of the virtual guest and the host configuration", as you
say.
> I guess this is a security person in me talking; I'm automatically
> assuming that the guest/enclave/entity in VS/VU mode is not going to
> play along and will try to DoS the host or something :)
You mean, by deviously making SBI calls instead of executing CSR
accesses to seed that get trapped and emulated? What's the difference
for security? Why should we assume SBI calls are less secure?
> > If we can assume such an SBI function exists, then I would expect a
> > guest OS running in a virtual machine (VS mode) also to invoke this
> > function in most circumstances, obviating any concerns about the
> > difference in performance between illegal and virtual instruction
> > traps.
>
> If we could assume that, then we could have the entropy source to be
> M-mode only.
I don't follow that statement exactly. The reason we support giving
S-mode direct access to CSR seed is to improve the speed for S mode,
not because we're incapable of inventing an SBI function for passing
entropy down to the OS.
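To be concrete about what that direct access buys: something like the
loop below, run entirely in S mode with no trap on the common path.
This is only a rough sketch; read_seed_csr() is a placeholder for a
single read-write access of the seed CSR (Zkr requires a read-write
form such as csrrw), and the OPST/ES16 encoding is the one in the
entropy-source draft.

    #include <stdint.h>

    uint32_t read_seed_csr(void);  /* placeholder: csrrw rd, seed, zero */

    /* Poll the seed CSR until 16 fresh entropy bits are available. */
    int get_16_entropy_bits(uint16_t *out)
    {
        for (;;) {
            uint32_t s    = read_seed_csr();
            uint32_t opst = s >> 30;    /* seed[31:30] */
            if (opst == 2) {            /* ES16: 16 bits in seed[15:0] */
                *out = (uint16_t)(s & 0xFFFF);
                return 0;
            }
            if (opst == 3)              /* DEAD: unrecoverable fault */
                return -1;
            /* BIST or WAIT: try again (a real driver might back off). */
        }
    }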
> We added the S-mode support in order for (Linux) kernels
> to be able to access the entropy source directly and created virtual
> entropy sources to facilitate emulation. An SBI function is not
> emulation.
But why should I care that an SBI function isn't emulation? I think
we're back to what I said before: It sounds like you're assuming that
because we _can_ emulate accesses to the seed CSR to provide entropy
bits, we _must_ emulate accesses to the seed CSR to provide entropy
bits. I strongly disagree.
If the argument is supposed to be that perfect security dictates that
guests not learn they're running in a virtual machine, that argument
appears to me to be deflated by the reality that guests are almost
always handed this knowledge in practice.
> In any case, I think making the VS/VU exception type dependent on
> the host mseccfg.sseed setting would seem to lead to more, not less,
> complicated hypervisor implementation (necessitating emulation via
> M-mode, perhaps not frequently, but still).
Emulation in M mode is not how it would work when mseccfg.sseed = 0.
Rather, the illegal instruction trap handler at M level would notice
that the exception occurred in VS/VU mode and would then delegate the
exception to HS mode, by software. If M level is emulating CSR seed
for HS mode, the exception delegated to HS mode would be a virtual
instruction exception; else it would be an illegal instruction
exception. The real handling of the exception for VS/VU mode would be
done by the hypervisor in HS mode as usual.
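In rough code, and ignoring the status-register bookkeeping (sstatus,
hstatus.SPV/SPVP, htval/htinst, and so on), the redirection step might
look like the sketch below. The csr_read()/csr_write() helpers and the
CSR_*/CAUSE_* names are just placeholders for illustration, not any
particular implementation:

    #define CAUSE_ILLEGAL_INSN  2    /* illegal instruction exception */
    #define CAUSE_VIRTUAL_INSN  22   /* virtual instruction exception */

    /* M-level handler: an illegal instruction trap on a seed access
     * was taken from VS/VU mode; hand the exception to HS mode. */
    void redirect_seed_trap_to_hs(int m_emulates_seed_for_hs)
    {
        /* Forward the faulting PC and instruction bits to HS mode. */
        csr_write(CSR_SEPC,  csr_read(CSR_MEPC));
        csr_write(CSR_STVAL, csr_read(CSR_MTVAL));

        /* If M level is emulating the seed CSR for HS mode, present
         * the guest's access as a virtual instruction exception;
         * otherwise HS mode sees the same illegal instruction
         * exception real hardware would have raised. */
        csr_write(CSR_SCAUSE, m_emulates_seed_for_hs
                                  ? CAUSE_VIRTUAL_INSN
                                  : CAUSE_ILLEGAL_INSN);

        /* Resume at HS mode's trap vector (direct mode assumed);
         * the hypervisor then handles the exception for its VS/VU
         * guest as usual. */
        csr_write(CSR_MEPC, csr_read(CSR_STVEC) & ~(unsigned long)3);
        /* ...then clear mstatus.MPV, set mstatus.MPP to S, and mret. */
    }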
But this all presupposes both that mseccfg.sseed = 0 and that we told
the guest OS the seed CSR is implemented for it to use. In almost all
cases, what really should happen instead is that the hypervisor tells
the guest OS that seed isn't available to it (which is the truth), so
the guest can make more efficient SBI calls directly to the hypervisor.
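To make that concrete, here's the choice I'd expect a guest kernel to
make once, up front. Nothing below is a real, standardized interface:
sbi_get_entropy() stands in for whatever SBI entropy function we might
define, and seed_csr_is_advertised() stands in for however the guest
learns whether seed was made available to it (e.g. from the ISA
description it was handed at boot).

    #include <stdint.h>

    int  seed_csr_is_advertised(void);    /* placeholder */
    int  poll_seed_csr(uint16_t *out);    /* placeholder (earlier sketch) */
    long sbi_get_entropy(uint16_t *out);  /* hypothetical SBI wrapper */

    long guest_get_seed_material(uint16_t *out)
    {
        if (seed_csr_is_advertised()) {
            /* seed is implemented (or emulated) for this guest:
             * direct CSR access, no trap on the common path. */
            return poll_seed_csr(out);
        }
        /* seed isn't available to us -- the truth -- so ask the
         * hypervisor via one SBI call instead of trapping and being
         * emulated on every seed access. */
        return sbi_get_entropy(out);
    }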
> I still can't see many advantages in making the behavior of VS/VU
> dependent on global mseccfg.sseed. [...]
The main reason for insisting that we always raise an illegal
instruction exception when mseccfg.sseed = 0 is to address the
situation where M level leaves mseccfg.sseed = 0 and _doesn't_ emulate
the seed CSR for HS mode, instead telling the OS that Zkr isn't
implemented. HS mode isn't supposed to see virtual instruction
exceptions from VS/VU mode for features that aren't even implemented
for HS mode. Ironically, given the thread of this conversation,
raising a virtual instruction exception in this circumstance risks
leaking to the OS the tiny bit of information that M level is lying
to it about Zkr. And although I've insisted that most users dismiss
these tiny leakages as insignificant, we've nevertheless tried to be
consistent and avoid such leakage in hardware for those who may care.
For readers who detect an inconsistency in my position, note this
difference: In one case it's the hardware leaking the information
(to be avoided); in the other, the small leakage occurs due to choices
freely made by users and their software (allowed, and common for
performance reasons, as I've said).
> I am also left wondering about the interpretation of mseccfg.useed
> in case "sseed" is not virtualized. If VS/VU are set to just invoke
> a virtual instruction exception, "useed" handling is as easy.
I don't understand this paragraph. There's no intersection between
the setting of mseccfg.useed, which affects only U mode (not VU mode),
and whether the exception raised from VS/VU mode should be the illegal
instruction or virtual instruction exception.
Regards,
- John Hauser