[PATCH 00/11] Xvisor Sstc support for host and other improvements


Anup Patel

Oct 11, 2022, 12:10:01 PM
to xvisor...@googlegroups.com, Anup Patel
This series adds Sstc support to the host timer driver and makes various
improvements/preparations required for implementing SBI nested acceleration.

These patches can also be found in riscv_sstc_v1 branch at:
https://github.com/avpatel/xvisor-next.git

Anup Patel (11):
CORE: Add vmm_scheduler_irq_regs() function
CORE: Add endianness helper macros for long
RISC-V: Improve SRET based nested world-switch
RISC-V: Make function to emulate SRET instruction as global
RISC-V: Combine SBI extension handler output parameters into a struct
RISC-V: Add regs_updated flag in struct cpu_vcpu_sbi_return
RISC-V: Add cpu_vcpu_sbi_xlate_error() helper function
RISC-V: Change the SBI specification version to v1.0 for guest
RISC-V: Extend ISA parsing to detect Sstc extension
RISC-V: Add CSR defines for Sstc extension
DRIVERS: riscv_timer: Use Sstc extension when available

arch/riscv/cpu/generic/cpu_init.c | 2 +
arch/riscv/cpu/generic/cpu_vcpu_nested.c | 30 ++++++++++---
arch/riscv/cpu/generic/cpu_vcpu_sbi.c | 44 ++++++++++++++++---
arch/riscv/cpu/generic/cpu_vcpu_sbi_base.c | 25 +++++------
arch/riscv/cpu/generic/cpu_vcpu_sbi_hsm.c | 11 +++--
arch/riscv/cpu/generic/cpu_vcpu_sbi_legacy.c | 17 +++----
arch/riscv/cpu/generic/cpu_vcpu_sbi_replace.c | 28 +++++-------
arch/riscv/cpu/generic/cpu_vcpu_sbi_xvisor.c | 9 ++--
arch/riscv/cpu/generic/cpu_vcpu_trap.c | 4 +-
arch/riscv/cpu/generic/include/cpu_hwcap.h | 1 +
arch/riscv/cpu/generic/include/cpu_vcpu_sbi.h | 19 +++++---
.../riscv/cpu/generic/include/cpu_vcpu_trap.h | 2 +
.../cpu/generic/include/riscv_encoding.h | 8 ++++
core/include/vmm_host_io.h | 8 ++++
core/include/vmm_scheduler.h | 3 ++
core/vmm_scheduler.c | 7 +++
drivers/clocksource/riscv_timer.c | 37 ++++++++++++++--
17 files changed, 183 insertions(+), 72 deletions(-)

--
2.34.1

Anup Patel

Oct 11, 2022, 12:10:03 PM
to xvisor...@googlegroups.com, Anup Patel
For the SBI nested acceleration extension, we will need a way to retrieve
a pointer to the registers saved on the stack, so let us add
vmm_scheduler_irq_regs() for this purpose.
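
For context, a hypothetical nested-acceleration handler could pick up
the saved register pointer roughly like this (sketch only; assumes the
caller runs in a context where the pointer has been saved at IRQ entry):

    arch_regs_t *regs = vmm_scheduler_irq_regs();

    if (regs) {
            /* inspect or update the trapped VCPU state on the stack */
    }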

Signed-off-by: Anup Patel <apa...@ventanamicro.com>
---
core/include/vmm_scheduler.h | 3 +++
core/vmm_scheduler.c | 7 +++++++
2 files changed, 10 insertions(+)

diff --git a/core/include/vmm_scheduler.h b/core/include/vmm_scheduler.h
index 77f1bf2a..96545100 100644
--- a/core/include/vmm_scheduler.h
+++ b/core/include/vmm_scheduler.h
@@ -64,6 +64,9 @@ int vmm_scheduler_set_hcpu(struct vmm_vcpu *vcpu, u32 hcpu);
/** Enter IRQ Context (Must be called from somewhere) */
void vmm_scheduler_irq_enter(arch_regs_t *regs, bool vcpu_context);

+/** Retrieve register pointer saved for IRQ/Normal Context */
+arch_regs_t *vmm_scheduler_irq_regs(void);
+
/** Exit IRQ Context (Must be called from somewhere) */
void vmm_scheduler_irq_exit(arch_regs_t *regs);

diff --git a/core/vmm_scheduler.c b/core/vmm_scheduler.c
index 08215ca5..76b6a868 100644
--- a/core/vmm_scheduler.c
+++ b/core/vmm_scheduler.c
@@ -832,6 +832,13 @@ void vmm_scheduler_irq_enter(arch_regs_t *regs, bool vcpu_context)
schedp->yield_on_irq_exit = FALSE;
}

+arch_regs_t *vmm_scheduler_irq_regs(void)
+{
+ struct vmm_scheduler_ctrl *schedp = &this_cpu(sched);
+
+ return schedp->irq_regs;
+}
+
void vmm_scheduler_irq_exit(arch_regs_t *regs)
{
struct vmm_scheduler_ctrl *schedp = &this_cpu(sched);
--
2.34.1

Anup Patel

Oct 11, 2022, 12:10:05 PM
to xvisor...@googlegroups.com, Anup Patel
For RISC-V, we need endianness conversion macros for the long data type,
so let us add these macros directly in vmm_host_io.h of the core code.
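
As a usage sketch (hypothetical values; the macros simply pick the
32-bit or 64-bit converters based on ARCH_BITS_PER_LONG):

    unsigned long host_val = 0x11223344UL;

    /* store in little-endian layout regardless of host endianness */
    unsigned long le_val = vmm_cpu_to_le_long(host_val);

    /* convert back when reading */
    unsigned long host_again = vmm_le_long_to_cpu(le_val);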

Signed-off-by: Anup Patel <apa...@ventanamicro.com>
---
core/include/vmm_host_io.h | 8 ++++++++
1 file changed, 8 insertions(+)

diff --git a/core/include/vmm_host_io.h b/core/include/vmm_host_io.h
index 06a81ab6..9842206e 100644
--- a/core/include/vmm_host_io.h
+++ b/core/include/vmm_host_io.h
@@ -56,6 +56,14 @@

#define vmm_be64_to_cpu(data) arch_be64_to_cpu(data)

+#if ARCH_BITS_PER_LONG == 32
+#define vmm_cpu_to_le_long(__val) vmm_cpu_to_le32(__val)
+#define vmm_le_long_to_cpu(__val) vmm_le32_to_cpu(__val)
+#else
+#define vmm_cpu_to_le_long(__val) vmm_cpu_to_le64(__val)
+#define vmm_le_long_to_cpu(__val) vmm_le64_to_cpu(__val)
+#endif
+
/** I/O space access functions (Assumed to be Little Endian) */
static inline u8 vmm_inb(unsigned long port)
{
--
2.34.1

Anup Patel

Oct 11, 2022, 12:10:09 PM
to xvisor...@googlegroups.com, Anup Patel
We improve SRET based nested world-switch in the following ways:
1) Add more comments in cpu_vcpu_nested_hext_csr_rmw() for the
case when hstatus.SPV is updated
2) Re-enable SRET trapping for virtual-HS mode when we transition
from virt=ON to virt=OFF in cpu_vcpu_nested_set_virt() because
virtual-HS mode might execute SRET immediately without changing
the state of the hstatus.SPV bit (see the sketch below).
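
The resulting SRET trapping policy can be summarized as follows (a
sketch mirroring the diff below, where trap_sret stands for the
hstatus.VTSR bit programmed into regs->hstatus):

    if (virt)
            trap_sret = (npriv->hstatus & HSTATUS_VTSR) ? TRUE : FALSE;
    else
            trap_sret = (npriv->hstatus & HSTATUS_SPV) ? TRUE : FALSE;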

Signed-off-by: Anup Patel <apa...@ventanamicro.com>
---
arch/riscv/cpu/generic/cpu_vcpu_nested.c | 30 ++++++++++++++++++------
1 file changed, 23 insertions(+), 7 deletions(-)

diff --git a/arch/riscv/cpu/generic/cpu_vcpu_nested.c b/arch/riscv/cpu/generic/cpu_vcpu_nested.c
index b8284724..1c7871a8 100644
--- a/arch/riscv/cpu/generic/cpu_vcpu_nested.c
+++ b/arch/riscv/cpu/generic/cpu_vcpu_nested.c
@@ -1044,10 +1044,18 @@ int cpu_vcpu_nested_hext_csr_rmw(struct vmm_vcpu *vcpu, arch_regs_t *regs,
HSTATUS_GVA;
if (wr_mask & HSTATUS_SPV) {
/*
- * Enable (or Disable) host SRET trapping for
- * virtual-HS mode. This will be auto-disabled
- * by cpu_vcpu_nested_set_virt() upon SRET trap
- * from virtual-HS mode.
+ * If hstatus.SPV == 1 then enable host SRET
+ * trapping for the virtual-HS mode which will
+ * allow host to do nested world-switch upon
+ * next SRET instruction executed by the
+ * virtual-HS-mode.
+ *
+ * If hstatus.SPV == 0 then disable host SRET
+ * trapping for the virtual-HS mode which will
+ * ensure that host does not do any nested
+ * world-switch for SRET instruction executed
+ * by virtual-HS mode for general interrupt and
+ * trap handling.
*/
regs->hstatus &= ~HSTATUS_VTSR;
regs->hstatus |= (new_val & HSTATUS_SPV) ?
@@ -1570,11 +1578,19 @@ skip_csr_update:
}
}

- /* Update host SRET and VM trapping */
+ /* Update host SRET trapping */
regs->hstatus &= ~HSTATUS_VTSR;
- if (virt && (npriv->hstatus & HSTATUS_VTSR)) {
- regs->hstatus |= HSTATUS_VTSR;
+ if (virt) {
+ if (npriv->hstatus & HSTATUS_VTSR) {
+ regs->hstatus |= HSTATUS_VTSR;
+ }
+ } else {
+ if (npriv->hstatus & HSTATUS_SPV) {
+ regs->hstatus |= HSTATUS_VTSR;
+ }
}
+
+ /* Update host VM trapping */
regs->hstatus &= ~HSTATUS_VTVM;
if (virt && (npriv->hstatus & HSTATUS_VTVM)) {
regs->hstatus |= HSTATUS_VTVM;
--
2.34.1

Anup Patel

Oct 11, 2022, 12:10:09 PM
to xvisor...@googlegroups.com, Anup Patel
The function to emulate the SRET instruction will be shared by instruction
trap-n-emulate and the SBI nested acceleration extension, so let us make
it a global function.
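
For context, the upcoming nested acceleration code will then be able
to invoke SRET emulation directly, roughly (hypothetical call site):

    /* emulate SRET on behalf of the virtual-HS mode */
    rc = cpu_vcpu_sret_insn(vcpu, regs, insn);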

Signed-off-by: Anup Patel <apa...@ventanamicro.com>
---
arch/riscv/cpu/generic/cpu_vcpu_trap.c | 4 ++--
arch/riscv/cpu/generic/include/cpu_vcpu_trap.h | 2 ++
2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/cpu/generic/cpu_vcpu_trap.c b/arch/riscv/cpu/generic/cpu_vcpu_trap.c
index a2085a08..32bcc6dd 100644
--- a/arch/riscv/cpu/generic/cpu_vcpu_trap.c
+++ b/arch/riscv/cpu/generic/cpu_vcpu_trap.c
@@ -732,7 +732,7 @@ static int csr_insn(struct vmm_vcpu *vcpu, arch_regs_t *regs, ulong insn)
return VMM_OK;
}

-static int sret_insn(struct vmm_vcpu *vcpu, arch_regs_t *regs, ulong insn)
+int cpu_vcpu_sret_insn(struct vmm_vcpu *vcpu, arch_regs_t *regs, ulong insn)
{
bool next_virt;
unsigned long vsstatus, next_sepc, next_spp;
@@ -1102,7 +1102,7 @@ static const struct system_opcode_func system_opcode_funcs[] = {
{
.mask = INSN_MASK_SRET,
.match = INSN_MATCH_SRET,
- .func = sret_insn,
+ .func = cpu_vcpu_sret_insn,
},
{
.mask = INSN_MASK_WFI,
diff --git a/arch/riscv/cpu/generic/include/cpu_vcpu_trap.h b/arch/riscv/cpu/generic/include/cpu_vcpu_trap.h
index 4d2d1b25..e77168d5 100644
--- a/arch/riscv/cpu/generic/include/cpu_vcpu_trap.h
+++ b/arch/riscv/cpu/generic/include/cpu_vcpu_trap.h
@@ -59,6 +59,8 @@ void cpu_vcpu_redirect_smode_trap(arch_regs_t *regs,
void cpu_vcpu_redirect_trap(struct vmm_vcpu *vcpu, arch_regs_t *regs,
struct cpu_vcpu_trap *trap);

+int cpu_vcpu_sret_insn(struct vmm_vcpu *vcpu, arch_regs_t *regs, ulong insn);
+
int cpu_vcpu_page_fault(struct vmm_vcpu *vcpu,
arch_regs_t *regs,
struct cpu_vcpu_trap *trap);
--
2.34.1

Anup Patel

Oct 11, 2022, 12:10:11 PM
to xvisor...@googlegroups.com, Anup Patel
Currently, we have two output parameters in an SBI extension handler,
namely "out_val" and "out_trap". If we add more output parameters
then the SBI extension handler prototype will keep growing.

To keep the SBI extension handler prototype fixed, we introduce a new
"struct cpu_vcpu_sbi_return" which combines all output parameters
of an SBI extension handler, and we pass a pointer to this new struct
as a parameter.
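
For illustration, a handler under the new prototype looks roughly like
this (vcpu_sbi_foo_ecall is a hypothetical extension, not part of this
series):

    static int vcpu_sbi_foo_ecall(struct vmm_vcpu *vcpu, unsigned long ext_id,
                                  unsigned long func_id, unsigned long *args,
                                  struct cpu_vcpu_sbi_return *out)
    {
            if (func_id != 0)
                    return SBI_ERR_NOT_SUPPORTED;

            /* all outputs now go through the struct */
            out->value = args[0];
            return 0;
    }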

Signed-off-by: Anup Patel <apa...@ventanamicro.com>
---
arch/riscv/cpu/generic/cpu_vcpu_sbi.c | 7 ++---
arch/riscv/cpu/generic/cpu_vcpu_sbi_base.c | 25 ++++++++---------
arch/riscv/cpu/generic/cpu_vcpu_sbi_hsm.c | 11 ++++----
arch/riscv/cpu/generic/cpu_vcpu_sbi_legacy.c | 17 +++++------
arch/riscv/cpu/generic/cpu_vcpu_sbi_replace.c | 28 ++++++++-----------
arch/riscv/cpu/generic/cpu_vcpu_sbi_xvisor.c | 9 +++---
arch/riscv/cpu/generic/include/cpu_vcpu_sbi.h | 12 +++++---
7 files changed, 53 insertions(+), 56 deletions(-)

diff --git a/arch/riscv/cpu/generic/cpu_vcpu_sbi.c b/arch/riscv/cpu/generic/cpu_vcpu_sbi.c
index 23a26b09..5603858d 100644
--- a/arch/riscv/cpu/generic/cpu_vcpu_sbi.c
+++ b/arch/riscv/cpu/generic/cpu_vcpu_sbi.c
@@ -70,7 +70,7 @@ int cpu_vcpu_sbi_ecall(struct vmm_vcpu *vcpu, ulong cause,
unsigned long extension_id = regs->a7;
unsigned long func_id = regs->a6;
struct cpu_vcpu_trap trap = { 0 };
- unsigned long out_val = 0;
+ struct cpu_vcpu_sbi_return out = { .value = 0, .trap = &trap };
bool is_0_1_spec = FALSE;
unsigned long args[6];

@@ -95,8 +95,7 @@ int cpu_vcpu_sbi_ecall(struct vmm_vcpu *vcpu, ulong cause,

ext = cpu_vcpu_sbi_find_extension(extension_id);
if (ext && ext->handle) {
- ret = ext->handle(vcpu, extension_id, func_id,
- args, &out_val, &trap);
+ ret = ext->handle(vcpu, extension_id, func_id, args, &out);
if (extension_id >= SBI_EXT_0_1_SET_TIMER &&
extension_id <= SBI_EXT_0_1_SHUTDOWN)
is_0_1_spec = TRUE;
@@ -118,7 +117,7 @@ int cpu_vcpu_sbi_ecall(struct vmm_vcpu *vcpu, ulong cause,
regs->sepc += 4;
regs->a0 = ret;
if (!is_0_1_spec)
- regs->a1 = out_val;
+ regs->a1 = out.value;
}

return 0;
diff --git a/arch/riscv/cpu/generic/cpu_vcpu_sbi_base.c b/arch/riscv/cpu/generic/cpu_vcpu_sbi_base.c
index f14eaef7..e5b05da8 100644
--- a/arch/riscv/cpu/generic/cpu_vcpu_sbi_base.c
+++ b/arch/riscv/cpu/generic/cpu_vcpu_sbi_base.c
@@ -29,37 +29,36 @@
#include <cpu_vcpu_sbi.h>
#include <riscv_sbi.h>

-static int vcpu_sbi_base_ecall(struct vmm_vcpu *vcpu,
- unsigned long ext_id, unsigned long func_id,
- unsigned long *args, unsigned long *out_val,
- struct cpu_vcpu_trap *out_trap)
+static int vcpu_sbi_base_ecall(struct vmm_vcpu *vcpu, unsigned long ext_id,
+ unsigned long func_id, unsigned long *args,
+ struct cpu_vcpu_sbi_return *out)
{
int ret = 0;
struct sbiret hret;

switch (func_id) {
case SBI_EXT_BASE_GET_SPEC_VERSION:
- *out_val = (CPU_VCPU_SBI_VERSION_MAJOR <<
- SBI_SPEC_VERSION_MAJOR_SHIFT) |
- CPU_VCPU_SBI_VERSION_MINOR;
+ out->value = (CPU_VCPU_SBI_VERSION_MAJOR <<
+ SBI_SPEC_VERSION_MAJOR_SHIFT) |
+ CPU_VCPU_SBI_VERSION_MINOR;
break;
case SBI_EXT_BASE_GET_IMP_ID:
- *out_val = CPU_VCPU_SBI_IMPID;
+ out->value = CPU_VCPU_SBI_IMPID;
break;
case SBI_EXT_BASE_GET_IMP_VERSION:
- *out_val = VMM_VERSION_MAJOR << 24 |
- VMM_VERSION_MINOR << 12 |
- VMM_VERSION_RELEASE;
+ out->value = VMM_VERSION_MAJOR << 24 |
+ VMM_VERSION_MINOR << 12 |
+ VMM_VERSION_RELEASE;
break;
case SBI_EXT_BASE_GET_MVENDORID:
case SBI_EXT_BASE_GET_MARCHID:
case SBI_EXT_BASE_GET_MIMPID:
hret = sbi_ecall(SBI_EXT_BASE, func_id, 0, 0, 0, 0, 0, 0);
ret = hret.error;
- *out_val = hret.value;
+ out->value = hret.value;
break;
case SBI_EXT_BASE_PROBE_EXT:
- *out_val = (cpu_vcpu_sbi_find_extension(args[0])) ? 1 : 0;
+ out->value = (cpu_vcpu_sbi_find_extension(args[0])) ? 1 : 0;
break;
default:
ret = SBI_ERR_NOT_SUPPORTED;
diff --git a/arch/riscv/cpu/generic/cpu_vcpu_sbi_hsm.c b/arch/riscv/cpu/generic/cpu_vcpu_sbi_hsm.c
index f965e6f0..f3e54d1c 100644
--- a/arch/riscv/cpu/generic/cpu_vcpu_sbi_hsm.c
+++ b/arch/riscv/cpu/generic/cpu_vcpu_sbi_hsm.c
@@ -29,10 +29,9 @@
#include <cpu_vcpu_sbi.h>
#include <riscv_sbi.h>

-static int vcpu_sbi_hsm_ecall(struct vmm_vcpu *vcpu,
- unsigned long ext_id, unsigned long func_id,
- unsigned long *args, unsigned long *out_val,
- struct cpu_vcpu_trap *out_trap)
+static int vcpu_sbi_hsm_ecall(struct vmm_vcpu *vcpu, unsigned long ext_id,
+ unsigned long func_id, unsigned long *args,
+ struct cpu_vcpu_sbi_return *out)
{
int rc;
u32 reg_flags = 0x0;
@@ -68,9 +67,9 @@ static int vcpu_sbi_hsm_ecall(struct vmm_vcpu *vcpu,
if (!rvcpu)
return SBI_ERR_INVALID_PARAM;
if (vmm_manager_vcpu_get_state(rvcpu) != VMM_VCPU_STATE_RESET)
- *out_val = SBI_HSM_STATE_STARTED;
+ out->value = SBI_HSM_STATE_STARTED;
else
- *out_val = SBI_HSM_STATE_STOPPED;
+ out->value = SBI_HSM_STATE_STOPPED;
break;
case SBI_EXT_HSM_HART_SUSPEND:
if (args[0] == SBI_HSM_SUSPEND_RET_DEFAULT) {
diff --git a/arch/riscv/cpu/generic/cpu_vcpu_sbi_legacy.c b/arch/riscv/cpu/generic/cpu_vcpu_sbi_legacy.c
index d25acf0b..57720c6d 100644
--- a/arch/riscv/cpu/generic/cpu_vcpu_sbi_legacy.c
+++ b/arch/riscv/cpu/generic/cpu_vcpu_sbi_legacy.c
@@ -35,10 +35,9 @@
#include <cpu_tlb.h>
#include <riscv_sbi.h>

-static int vcpu_sbi_legacy_ecall(struct vmm_vcpu *vcpu,
- unsigned long ext_id, unsigned long func_id,
- unsigned long *args, unsigned long *out_val,
- struct cpu_vcpu_trap *out_trap)
+static int vcpu_sbi_legacy_ecall(struct vmm_vcpu *vcpu, unsigned long ext_id,
+ unsigned long func_id, unsigned long *args,
+ struct cpu_vcpu_sbi_return *out)
{
u8 send;
u32 hcpu;
@@ -70,10 +69,11 @@ static int vcpu_sbi_legacy_ecall(struct vmm_vcpu *vcpu,
break;
case SBI_EXT_0_1_SEND_IPI:
if (args[0])
- hmask = __cpu_vcpu_unpriv_read_ulong(args[0], out_trap);
+ hmask = __cpu_vcpu_unpriv_read_ulong(args[0],
+ out->trap);
else
hmask = (1UL << guest->vcpu_count) - 1;
- if (out_trap->scause) {
+ if (out->trap->scause) {
break;
}
for_each_set_bit(i, &hmask, BITS_PER_LONG) {
@@ -97,10 +97,11 @@ static int vcpu_sbi_legacy_ecall(struct vmm_vcpu *vcpu,
case SBI_EXT_0_1_REMOTE_SFENCE_VMA:
case SBI_EXT_0_1_REMOTE_SFENCE_VMA_ASID:
if (args[0])
- hmask = __cpu_vcpu_unpriv_read_ulong(args[0], out_trap);
+ hmask = __cpu_vcpu_unpriv_read_ulong(args[0],
+ out->trap);
else
hmask = (1UL << guest->vcpu_count) - 1;
- if (out_trap->scause) {
+ if (out->trap->scause) {
break;
}
vmm_cpumask_clear(&cm);
diff --git a/arch/riscv/cpu/generic/cpu_vcpu_sbi_replace.c b/arch/riscv/cpu/generic/cpu_vcpu_sbi_replace.c
index 4c5cb44c..7502c0b2 100644
--- a/arch/riscv/cpu/generic/cpu_vcpu_sbi_replace.c
+++ b/arch/riscv/cpu/generic/cpu_vcpu_sbi_replace.c
@@ -33,10 +33,9 @@
#include <generic_mmu.h>
#include <riscv_sbi.h>

-static int vcpu_sbi_time_ecall(struct vmm_vcpu *vcpu,
- unsigned long ext_id, unsigned long func_id,
- unsigned long *args, unsigned long *out_val,
- struct cpu_vcpu_trap *out_trap)
+static int vcpu_sbi_time_ecall(struct vmm_vcpu *vcpu, unsigned long ext_id,
+ unsigned long func_id, unsigned long *args,
+ struct cpu_vcpu_sbi_return *out)
{
if (func_id != SBI_EXT_TIME_SET_TIMER)
return SBI_ERR_NOT_SUPPORTED;
@@ -56,10 +55,9 @@ const struct cpu_vcpu_sbi_extension vcpu_sbi_time = {
.handle = vcpu_sbi_time_ecall,
};

-static int vcpu_sbi_rfence_ecall(struct vmm_vcpu *vcpu,
- unsigned long ext_id, unsigned long func_id,
- unsigned long *args, unsigned long *out_val,
- struct cpu_vcpu_trap *out_trap)
+static int vcpu_sbi_rfence_ecall(struct vmm_vcpu *vcpu, unsigned long ext_id,
+ unsigned long func_id, unsigned long *args,
+ struct cpu_vcpu_sbi_return *out)
{
u32 hcpu;
struct vmm_vcpu *rvcpu;
@@ -163,10 +161,9 @@ const struct cpu_vcpu_sbi_extension vcpu_sbi_rfence = {
.handle = vcpu_sbi_rfence_ecall,
};

-static int vcpu_sbi_ipi_ecall(struct vmm_vcpu *vcpu,
- unsigned long ext_id, unsigned long func_id,
- unsigned long *args, unsigned long *out_val,
- struct cpu_vcpu_trap *out_trap)
+static int vcpu_sbi_ipi_ecall(struct vmm_vcpu *vcpu, unsigned long ext_id,
+ unsigned long func_id, unsigned long *args,
+ struct cpu_vcpu_sbi_return *out)
{
struct vmm_vcpu *rvcpu;
struct vmm_guest *guest = vcpu->guest;
@@ -197,10 +194,9 @@ const struct cpu_vcpu_sbi_extension vcpu_sbi_ipi = {
.handle = vcpu_sbi_ipi_ecall,
};

-static int vcpu_sbi_srst_ecall(struct vmm_vcpu *vcpu,
- unsigned long ext_id, unsigned long func_id,
- unsigned long *args, unsigned long *out_val,
- struct cpu_vcpu_trap *out_trap)
+static int vcpu_sbi_srst_ecall(struct vmm_vcpu *vcpu, unsigned long ext_id,
+ unsigned long func_id, unsigned long *args,
+ struct cpu_vcpu_sbi_return *out)
{
int ret;
struct vmm_guest *guest = vcpu->guest;
diff --git a/arch/riscv/cpu/generic/cpu_vcpu_sbi_xvisor.c b/arch/riscv/cpu/generic/cpu_vcpu_sbi_xvisor.c
index ef3b2664..3a4b20e9 100644
--- a/arch/riscv/cpu/generic/cpu_vcpu_sbi_xvisor.c
+++ b/arch/riscv/cpu/generic/cpu_vcpu_sbi_xvisor.c
@@ -35,15 +35,14 @@

#define SBI_EXT_XVISOR_ISA_EXT 0x0

-static int vcpu_sbi_xvisor_ecall(struct vmm_vcpu *vcpu,
- unsigned long ext_id, unsigned long func_id,
- unsigned long *args, unsigned long *out_val,
- struct cpu_vcpu_trap *out_trap)
+static int vcpu_sbi_xvisor_ecall(struct vmm_vcpu *vcpu, unsigned long ext_id,
+ unsigned long func_id, unsigned long *args,
+ struct cpu_vcpu_sbi_return *out)
{
switch (func_id) {
case SBI_EXT_XVISOR_ISA_EXT:
if (args[0] < RISCV_ISA_EXT_MAX) {
- *out_val = __riscv_isa_extension_available(
+ out->value = __riscv_isa_extension_available(
riscv_priv(vcpu)->isa,
args[0]);
} else {
diff --git a/arch/riscv/cpu/generic/include/cpu_vcpu_sbi.h b/arch/riscv/cpu/generic/include/cpu_vcpu_sbi.h
index 3a11e150..5de73aab 100644
--- a/arch/riscv/cpu/generic/include/cpu_vcpu_sbi.h
+++ b/arch/riscv/cpu/generic/include/cpu_vcpu_sbi.h
@@ -32,13 +32,17 @@ struct cpu_vcpu_trap;
#define CPU_VCPU_SBI_VERSION_MINOR 3
#define CPU_VCPU_SBI_IMPID 2

+struct cpu_vcpu_sbi_return {
+ unsigned long value;
+ struct cpu_vcpu_trap *trap;
+};
+
struct cpu_vcpu_sbi_extension {
unsigned long extid_start;
unsigned long extid_end;
- int (*handle)(struct vmm_vcpu *vcpu,
- unsigned long ext_id, unsigned long func_id,
- unsigned long *args, unsigned long *out_val,
- struct cpu_vcpu_trap *out_trap);
+ int (*handle)(struct vmm_vcpu *vcpu, unsigned long ext_id,
+ unsigned long func_id, unsigned long *args,
+ struct cpu_vcpu_sbi_return *out);
};

const struct cpu_vcpu_sbi_extension *cpu_vcpu_sbi_find_extension(
--
2.34.1

Anup Patel

Oct 11, 2022, 12:10:12 PM
to xvisor...@googlegroups.com, Anup Patel
The sync SRET call defined by the SBI nested acceleration extension will
directly update VCPU registers (including the sepc CSR). To implement the
sync SRET call, we add a "regs_updated" flag in "struct cpu_vcpu_sbi_return"
which, when set, tells cpu_vcpu_sbi_ecall() to not update VCPU registers.
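
A hypothetical handler using the flag would look roughly like this
(sketch only; vcpu_sbi_nacl_sync_sret is not part of this series):

    static int vcpu_sbi_nacl_sync_sret(struct vmm_vcpu *vcpu,
                                       unsigned long ext_id,
                                       unsigned long func_id,
                                       unsigned long *args,
                                       struct cpu_vcpu_sbi_return *out)
    {
            /* ... update regs->sepc and GPRs directly here ... */

            /* tell cpu_vcpu_sbi_ecall() to leave the registers alone */
            out->regs_updated = TRUE;
            return 0;
    }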

Signed-off-by: Anup Patel <apa...@ventanamicro.com>
---
arch/riscv/cpu/generic/cpu_vcpu_sbi.c | 5 +++--
arch/riscv/cpu/generic/include/cpu_vcpu_sbi.h | 1 +
2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/cpu/generic/cpu_vcpu_sbi.c b/arch/riscv/cpu/generic/cpu_vcpu_sbi.c
index 5603858d..7c9a1d2c 100644
--- a/arch/riscv/cpu/generic/cpu_vcpu_sbi.c
+++ b/arch/riscv/cpu/generic/cpu_vcpu_sbi.c
@@ -70,7 +70,8 @@ int cpu_vcpu_sbi_ecall(struct vmm_vcpu *vcpu, ulong cause,
unsigned long extension_id = regs->a7;
unsigned long func_id = regs->a6;
struct cpu_vcpu_trap trap = { 0 };
- struct cpu_vcpu_sbi_return out = { .value = 0, .trap = &trap };
+ struct cpu_vcpu_sbi_return out = { .value = 0, .trap = &trap,
+ .regs_updated = FALSE };
bool is_0_1_spec = FALSE;
unsigned long args[6];

@@ -106,7 +107,7 @@ int cpu_vcpu_sbi_ecall(struct vmm_vcpu *vcpu, ulong cause,
if (trap.scause) {
trap.sepc = regs->sepc;
cpu_vcpu_redirect_trap(vcpu, regs, &trap);
- } else {
+ } else if (!out.regs_updated) {
/* This function should return non-zero value only in case of
* fatal error. However, there is no good way to distinguish
* between a fatal and non-fatal errors yet. That's why we treat
diff --git a/arch/riscv/cpu/generic/include/cpu_vcpu_sbi.h b/arch/riscv/cpu/generic/include/cpu_vcpu_sbi.h
index 5de73aab..9b9f6894 100644
--- a/arch/riscv/cpu/generic/include/cpu_vcpu_sbi.h
+++ b/arch/riscv/cpu/generic/include/cpu_vcpu_sbi.h
@@ -35,6 +35,7 @@ struct cpu_vcpu_trap;
struct cpu_vcpu_sbi_return {
unsigned long value;
struct cpu_vcpu_trap *trap;
+ bool regs_updated;
};

struct cpu_vcpu_sbi_extension {
--
2.34.1

Anup Patel

Oct 11, 2022, 12:10:14 PM
to xvisor...@googlegroups.com, Anup Patel
We add a cpu_vcpu_sbi_xlate_error() helper function which can assist
upcoming SBI extension implementations (such as nested acceleration)
in converting Xvisor error codes to SBI specification error codes.
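
Typical usage would be (sketch; vmm_do_something() is a hypothetical
Xvisor call standing in for any core API that returns VMM_* errors):

    rc = vmm_do_something(vcpu, args[0]);
    if (rc != VMM_OK)
            return cpu_vcpu_sbi_xlate_error(rc);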

Signed-off-by: Anup Patel <apa...@ventanamicro.com>
---
arch/riscv/cpu/generic/cpu_vcpu_sbi.c | 34 +++++++++++++++++++
arch/riscv/cpu/generic/include/cpu_vcpu_sbi.h | 2 ++
2 files changed, 36 insertions(+)

diff --git a/arch/riscv/cpu/generic/cpu_vcpu_sbi.c b/arch/riscv/cpu/generic/cpu_vcpu_sbi.c
index 7c9a1d2c..ebe34465 100644
--- a/arch/riscv/cpu/generic/cpu_vcpu_sbi.c
+++ b/arch/riscv/cpu/generic/cpu_vcpu_sbi.c
@@ -123,3 +123,37 @@ int cpu_vcpu_sbi_ecall(struct vmm_vcpu *vcpu, ulong cause,

return 0;
}
+
+int cpu_vcpu_sbi_xlate_error(int xvisor_error)
+{
+ switch (xvisor_error) {
+ case VMM_OK:
+ return SBI_SUCCESS;
+
+ case VMM_ENOTAVAIL:
+ case VMM_ENOENT:
+ case VMM_ENOSYS:
+ case VMM_ENODEV:
+ case VMM_EOPNOTSUPP:
+ case VMM_ENOTSUPP:
+ return SBI_ERR_NOT_SUPPORTED;
+
+ case VMM_EINVALID:
+ return SBI_ERR_INVALID_PARAM;
+
+ case VMM_EACCESS:
+ return SBI_ERR_DENIED;
+
+ case VMM_ERANGE:
+ return SBI_ERR_INVALID_ADDRESS;
+
+ case VMM_EALREADY:
+ case VMM_EEXIST:
+ return SBI_ERR_ALREADY_AVAILABLE;
+
+ default:
+ break;
+ }
+
+ return SBI_ERR_FAILED;
+}
diff --git a/arch/riscv/cpu/generic/include/cpu_vcpu_sbi.h b/arch/riscv/cpu/generic/include/cpu_vcpu_sbi.h
index 9b9f6894..c5967507 100644
--- a/arch/riscv/cpu/generic/include/cpu_vcpu_sbi.h
+++ b/arch/riscv/cpu/generic/include/cpu_vcpu_sbi.h
@@ -52,4 +52,6 @@ const struct cpu_vcpu_sbi_extension *cpu_vcpu_sbi_find_extension(
int cpu_vcpu_sbi_ecall(struct vmm_vcpu *vcpu, ulong mcause,
arch_regs_t *regs);

+int cpu_vcpu_sbi_xlate_error(int xvisor_error);
+
#endif
--
2.34.1

Anup Patel

Oct 11, 2022, 12:10:17 PM
to xvisor...@googlegroups.com, Anup Patel
The SBI v1.0 specification is functionally the same as the SBI v0.3
specification, except that the SBI v1.0 specification went through
the full RISC-V International ratification process.

Let us change the SBI specification version to v1.0 for the guest.
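
With this change, a guest probing SBI_EXT_BASE_GET_SPEC_VERSION sees
the value below (same encoding as used by vcpu_sbi_base_ecall()):

    (1UL << SBI_SPEC_VERSION_MAJOR_SHIFT) | 0   /* major = 1, minor = 0 */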

Signed-off-by: Anup Patel <apa...@ventanamicro.com>
---
arch/riscv/cpu/generic/include/cpu_vcpu_sbi.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/cpu/generic/include/cpu_vcpu_sbi.h b/arch/riscv/cpu/generic/include/cpu_vcpu_sbi.h
index c5967507..6802b5f6 100644
--- a/arch/riscv/cpu/generic/include/cpu_vcpu_sbi.h
+++ b/arch/riscv/cpu/generic/include/cpu_vcpu_sbi.h
@@ -28,8 +28,8 @@
struct vmm_vcpu;
struct cpu_vcpu_trap;

-#define CPU_VCPU_SBI_VERSION_MAJOR 0
-#define CPU_VCPU_SBI_VERSION_MINOR 3
+#define CPU_VCPU_SBI_VERSION_MAJOR 1
+#define CPU_VCPU_SBI_VERSION_MINOR 0
#define CPU_VCPU_SBI_IMPID 2

struct cpu_vcpu_sbi_return {
--
2.34.1

Anup Patel

Oct 11, 2022, 12:10:18 PM
to xvisor...@googlegroups.com, Anup Patel
We extend ISA parsing to detect the Sstc extension from CPU DT nodes.
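
For example, a CPU DT node advertising Sstc would look roughly like
this (illustrative fragment, not taken from this series):

    cpu@0 {
            device_type = "cpu";
            compatible = "riscv";
            riscv,isa = "rv64imafdc_sstc";
    };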

Signed-off-by: Anup Patel <apa...@ventanamicro.com>
---
arch/riscv/cpu/generic/cpu_init.c | 2 ++
arch/riscv/cpu/generic/include/cpu_hwcap.h | 1 +
2 files changed, 3 insertions(+)

diff --git a/arch/riscv/cpu/generic/cpu_init.c b/arch/riscv/cpu/generic/cpu_init.c
index 0be32648..823d1f7f 100644
--- a/arch/riscv/cpu/generic/cpu_init.c
+++ b/arch/riscv/cpu/generic/cpu_init.c
@@ -115,6 +115,7 @@ int riscv_isa_populate_string(unsigned long xlen,

SET_ISA_EXT_MAP("smaia", RISCV_ISA_EXT_SMAIA);
SET_ISA_EXT_MAP("ssaia", RISCV_ISA_EXT_SSAIA);
+ SET_ISA_EXT_MAP("sstc", RISCV_ISA_EXT_SSTC);
#undef SET_ISA_EXT_MAP

return VMM_OK;
@@ -198,6 +199,7 @@ int riscv_isa_parse_string(const char *isa,

SET_ISA_EXT_MAP("smaia", RISCV_ISA_EXT_SMAIA);
SET_ISA_EXT_MAP("ssaia", RISCV_ISA_EXT_SSAIA);
+ SET_ISA_EXT_MAP("sstc", RISCV_ISA_EXT_SSTC);
#undef SET_ISA_EXT_MAP
}

diff --git a/arch/riscv/cpu/generic/include/cpu_hwcap.h b/arch/riscv/cpu/generic/include/cpu_hwcap.h
index 3fda16ed..c47dffd0 100644
--- a/arch/riscv/cpu/generic/include/cpu_hwcap.h
+++ b/arch/riscv/cpu/generic/include/cpu_hwcap.h
@@ -57,6 +57,7 @@
enum riscv_isa_ext_id {
RISCV_ISA_EXT_SSAIA = RISCV_ISA_EXT_BASE,
RISCV_ISA_EXT_SMAIA,
+ RISCV_ISA_EXT_SSTC,
RISCV_ISA_EXT_ID_MAX = RISCV_ISA_EXT_MAX,
};

--
2.34.1

Anup Patel

Oct 11, 2022, 12:10:20 PM
to xvisor...@googlegroups.com, Anup Patel
We add CSR defines for the registers added by the Sstc extension.

Signed-off-by: Anup Patel <apa...@ventanamicro.com>
---
arch/riscv/cpu/generic/include/riscv_encoding.h | 8 ++++++++
1 file changed, 8 insertions(+)

diff --git a/arch/riscv/cpu/generic/include/riscv_encoding.h b/arch/riscv/cpu/generic/include/riscv_encoding.h
index 5e85eaef..1514df11 100644
--- a/arch/riscv/cpu/generic/include/riscv_encoding.h
+++ b/arch/riscv/cpu/generic/include/riscv_encoding.h
@@ -494,6 +494,10 @@
/* Counter Overflow CSR */
#define CSR_SCOUNTOVF 0xda0

+/* Supervisor Time Compare (Sstc) */
+#define CSR_STIMECMP 0x14D
+#define CSR_STIMECMPH 0x15D
+
/* ===== Hypervisor-level CSRs ===== */

/* Hypervisor Trap Setup (H-extension) */
@@ -556,6 +560,10 @@
#define CSR_VSIEH 0x214
#define CSR_VSIPH 0x254

+/* Virtual Supervisor Time Compare (Sstc) */
+#define CSR_VSTIMECMP 0x24D
+#define CSR_VSTIMECMPH 0x25D
+
/* ===== Machine-level CSRs ===== */

/* Machine Information Registers */
--
2.34.1

Anup Patel

Oct 11, 2022, 12:10:22 PM
to xvisor...@googlegroups.com, Anup Patel
We should use the Sstc extension to trigger timer interrupts when the
underlying hardware supports it.
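
The benefit is that programming the next timer event no longer needs a
trap into M-mode firmware; roughly (both calls appear in the diff
below):

    /* without Sstc: SBI call into M-mode firmware */
    sbi_set_timer(next);

    /* with Sstc: direct S-mode CSR write, no trap */
    csr_write(CSR_STIMECMP, next);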

Signed-off-by: Anup Patel <apa...@ventanamicro.com>
---
drivers/clocksource/riscv_timer.c | 37 +++++++++++++++++++++++++++----
1 file changed, 33 insertions(+), 4 deletions(-)

diff --git a/drivers/clocksource/riscv_timer.c b/drivers/clocksource/riscv_timer.c
index caabcac9..78fe4017 100644
--- a/drivers/clocksource/riscv_timer.c
+++ b/drivers/clocksource/riscv_timer.c
@@ -103,8 +103,10 @@ static int __init riscv_timer_clocksource_init(struct vmm_devtree_node *node)
return rc;
}

- vmm_init_printf("riscv-timer: registered clocksource @ %ldHz\n",
- riscv_timer_hz);
+ vmm_init_printf("riscv-timer: registered clocksource @ %ldHz%s\n",
+ riscv_timer_hz,
+ (riscv_isa_extension_available(NULL, SSTC)) ?
+ " using Sstc" : "");
return VMM_OK;
}
VMM_CLOCKSOURCE_INIT_DECLARE(riscvclksrc, "riscv",
@@ -125,6 +127,22 @@ static int riscv_timer_set_next_event(unsigned long evt,
return VMM_OK;
}

+static int riscv_timer_sstc_set_next_event(unsigned long evt,
+ struct vmm_clockchip *unused)
+{
+ u64 next = get_cycles64() + evt;
+
+ csr_set(sie, SIE_STIE);
+#ifdef CONFIG_32BIT
+ csr_write(CSR_STIMECMP, (u32)next);
+ csr_write(CSR_STIMECMPH, (u32)(next >> 32));
+#else
+ csr_write(CSR_STIMECMP, next);
+#endif
+
+ return VMM_OK;
+}
+
static vmm_irq_return_t riscv_timer_handler(int irq, void *dev)
{
struct vmm_clockchip *cc = dev;
@@ -161,7 +179,11 @@ static int riscv_timer_startup(struct vmm_cpuhp_notify *cpuhp, u32 cpu)
cc->min_delta_ns = vmm_clockchip_delta2ns(0xF, cc);
cc->max_delta_ns = vmm_clockchip_delta2ns(0x7FFFFFFF, cc);
cc->set_mode = &riscv_timer_set_mode;
- cc->set_next_event = &riscv_timer_set_next_event;
+ if (riscv_isa_extension_available(NULL, SSTC)) {
+ cc->set_next_event = &riscv_timer_sstc_set_next_event;
+ } else {
+ cc->set_next_event = &riscv_timer_set_next_event;
+ }
cc->priv = NULL;

/* Register riscv timer clockchip */
@@ -171,7 +193,14 @@ static int riscv_timer_startup(struct vmm_cpuhp_notify *cpuhp, u32 cpu)
}

/* Ensure that timer interrupt bit is zero in the sip CSR */
- sbi_set_timer(U64_MAX);
+ if (riscv_isa_extension_available(NULL, SSTC)) {
+ csr_write(CSR_STIMECMP, -1UL);
+#ifdef CONFIG_32BIT
+ csr_write(CSR_STIMECMPH, -1UL);
+#endif
+ } else {
+ sbi_set_timer(U64_MAX);
+ }

/* Register irq handler for riscv timer */
rc = vmm_host_irq_register(IRQ_S_TIMER, "riscv-timer",
--
2.34.1

Anup Patel

Oct 17, 2022, 12:57:18 AM
to xvisor...@googlegroups.com, Anup Patel
On Tue, Oct 11, 2022 at 9:40 PM Anup Patel <apa...@ventanamicro.com> wrote:
>
> This series adds Sstc support for host timer driver and it does various
> improvements/preparations required for implementing SBI nested acceleration.
>
> These patches can also be found in riscv_sstc_v1 branch at:
> https://github.com/avpatel/xvisor-next.git
>
> Anup Patel (11):
> CORE: Add vmm_scheduler_irq_regs() function
> CORE: Add endianness helper macros for long
> RISC-V: Improve SRET based nested world-switch
> RISC-V: Make function to emulate SRET instruction as global
> RISC-V: Combine SBI extension handler output parameters into a struct
> RISC-V: Add regs_updated flag in struct cpu_vcpu_sbi_return
> RISC-V: Add cpu_vcpu_sbi_xlate_error() helper function
> RISC-V: Change the SBI specification version to v1.0 for guest
> RISC-V: Extend ISA parsing to detect Sstc extension
> RISC-V: Add CSR defines for Sstc extension
> DRIVERS: riscv_timer: Use Sstc extension when available

Applied this series to the xvisor-next repo

Regards,
Anup

>
> arch/riscv/cpu/generic/cpu_init.c | 2 +
> arch/riscv/cpu/generic/cpu_vcpu_nested.c | 30 ++++++++++---
> arch/riscv/cpu/generic/cpu_vcpu_sbi.c | 44 ++++++++++++++++---
> arch/riscv/cpu/generic/cpu_vcpu_sbi_base.c | 25 +++++------
> arch/riscv/cpu/generic/cpu_vcpu_sbi_hsm.c | 11 +++--
> arch/riscv/cpu/generic/cpu_vcpu_sbi_legacy.c | 17 +++----
> arch/riscv/cpu/generic/cpu_vcpu_sbi_replace.c | 28 +++++-------
> arch/riscv/cpu/generic/cpu_vcpu_sbi_xvisor.c | 9 ++--
> arch/riscv/cpu/generic/cpu_vcpu_trap.c | 4 +-
> arch/riscv/cpu/generic/include/cpu_hwcap.h | 1 +
> arch/riscv/cpu/generic/include/cpu_vcpu_sbi.h | 19 +++++---
> .../riscv/cpu/generic/include/cpu_vcpu_trap.h | 2 +
> .../cpu/generic/include/riscv_encoding.h | 8 ++++
> core/include/vmm_host_io.h | 8 ++++
> core/include/vmm_scheduler.h | 3 ++
> core/vmm_scheduler.c | 7 +++
> drivers/clocksource/riscv_timer.c | 37 ++++++++++++++--
> 17 files changed, 183 insertions(+), 72 deletions(-)
>
> --
> 2.34.1
>
> --
> You received this message because you are subscribed to the Google Groups "Xvisor Development" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to xvisor-devel...@googlegroups.com.
> To view this discussion on the web visit https://groups.google.com/d/msgid/xvisor-devel/20221011160950.263483-1-apatel%40ventanamicro.com.
Reply all
Reply to author
Forward
0 new messages