[PATCH 05/27] arm: Align description arch_paging_flush_cpu_caches with actual logic

Jan Kiszka

Aug 10, 2016, 3:29:18 AM8/10/16
to jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana, Marc Zyngier, Mark Rutland
Despite claiming to also perform a dcache invalidation,
arch_paging_flush_cpu_caches has only been doing a clean so far, probably
due to a mistake in the definition of DCCIMVAC (the encoding given under
that name is actually that of DCCMVAC).

However, there is no need to invalidate the caches here, so align the
sysreg definition and the comment with what the code already does.
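For reference, the three MVA-based dcache maintenance encodings involved
here - as they end up defined in sysregs.h by this series - are:

	#define DCIMVAC   SYSREG_32(0, c7, c6, 1)  /* invalidate by MVA to PoC */
	#define DCCMVAC   SYSREG_32(0, c7, c10, 1) /* clean by MVA to PoC */
	#define DCCIMVAC  SYSREG_32(0, c7, c14, 1) /* clean & invalidate by MVA to PoC */

The old DCCIMVAC define thus carried the c7, c10, 1 encoding, which is
actually DCCMVAC.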

CC: Marc Zyngier <marc.z...@arm.com>
CC: Mark Rutland <mark.r...@arm.com>
Signed-off-by: Jan Kiszka <jan.k...@siemens.com>
---
hypervisor/arch/arm/include/asm/paging.h | 4 ++--
hypervisor/arch/arm/include/asm/sysregs.h | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/hypervisor/arch/arm/include/asm/paging.h b/hypervisor/arch/arm/include/asm/paging.h
index a367e2c..4c2edba 100644
--- a/hypervisor/arch/arm/include/asm/paging.h
+++ b/hypervisor/arch/arm/include/asm/paging.h
@@ -207,8 +207,8 @@ static inline void arch_paging_flush_page_tlbs(unsigned long page_addr)
static inline void arch_paging_flush_cpu_caches(void *addr, long size)
{
do {
- /* Clean & invalidate by MVA to PoC */
- arm_write_sysreg(DCCIMVAC, addr);
+ /* Clean by MVA to PoC */
+ arm_write_sysreg(DCCMVAC, addr);
size -= cache_line_size;
addr += cache_line_size;
} while (size > 0);
diff --git a/hypervisor/arch/arm/include/asm/sysregs.h b/hypervisor/arch/arm/include/asm/sysregs.h
index 3011364..c19248e 100644
--- a/hypervisor/arch/arm/include/asm/sysregs.h
+++ b/hypervisor/arch/arm/include/asm/sysregs.h
@@ -103,7 +103,7 @@

#define ICIALLUIS SYSREG_32(0, c7, c1, 0)
#define ICIALLU SYSREG_32(0, c7, c5, 0)
-#define DCCIMVAC SYSREG_32(0, c7, c10, 1)
+#define DCCMVAC SYSREG_32(0, c7, c10, 1)
#define DCCSW SYSREG_32(0, c7, c10, 2)
#define DCCISW SYSREG_32(0, c7, c14, 2)

--
2.1.4

Jan Kiszka

Aug 10, 2016, 3:29:18 AM8/10/16
to jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana
HSR_MATCH_MCR_MRC and its 64-bit brother make dispatching trapped sysreg
accesses simpler. There will be more use cases soon.
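To illustrate the intended usage, here is a minimal sketch mirroring the
ACTLR check in the hunk below (note the argument order: crn, opc1, crm,
opc2):

	/* MCR/MRC p15, 0, <Rt>, c1, c0, 1 <=> ACTLR */
	if (HSR_MATCH_MCR_MRC(ctx->hsr, 1, 0, 0, 1)) {
		/* ... emulate the access ... */
	}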

Signed-off-by: Jan Kiszka <jan.k...@siemens.com>
---
hypervisor/arch/arm/include/asm/processor.h | 8 ++++++++
hypervisor/arch/arm/traps.c | 17 ++++-------------
2 files changed, 12 insertions(+), 13 deletions(-)

diff --git a/hypervisor/arch/arm/include/asm/processor.h b/hypervisor/arch/arm/include/asm/processor.h
index 9be4362..dc6f9bb 100644
--- a/hypervisor/arch/arm/include/asm/processor.h
+++ b/hypervisor/arch/arm/include/asm/processor.h
@@ -145,6 +145,14 @@
#define HSR_ICC_CV_BIT (1 << 24)
#define HSR_ICC_COND(icc) ((icc) >> 20 & 0xf)

+#define HSR_MATCH_MCR_MRC(hsr, crn, opc1, crm, opc2) \
+ (((hsr) & (BIT_MASK(19, 10) | BIT_MASK(4, 1))) == \
+ (((opc2) << 17) | ((opc1) << 14) | ((crn) << 10) | ((crm) << 1)))
+
+#define HSR_MATCH_MCRR_MRRC(hsr, opc1, crm) \
+ (((hsr) & (BIT_MASK(19, 16) | BIT_MASK(4, 1))) == \
+ (((opc1) << 16) | ((crm) << 1)))
+
#define EXIT_REASON_UNDEF 0x1
#define EXIT_REASON_HVC 0x2
#define EXIT_REASON_PABT 0x3
diff --git a/hypervisor/arch/arm/traps.c b/hypervisor/arch/arm/traps.c
index 5723f05..093d2f5 100644
--- a/hypervisor/arch/arm/traps.c
+++ b/hypervisor/arch/arm/traps.c
@@ -233,14 +233,11 @@ static int arch_handle_hvc(struct trap_context *ctx)

static int arch_handle_cp15_32(struct trap_context *ctx)
{
- u32 opc2 = ctx->hsr >> 17 & 0x7;
- u32 opc1 = ctx->hsr >> 14 & 0x7;
- u32 crn = ctx->hsr >> 10 & 0xf;
u32 rt = ctx->hsr >> 5 & 0xf;
- u32 crm = ctx->hsr >> 1 & 0xf;
u32 read = ctx->hsr & 1;

- if (opc1 == 0 && crn == 1 && crm == 0 && opc2 == 1) {
+ /* trapped by HCR.TAC */
+ if (HSR_MATCH_MCR_MRC(ctx->hsr, 1, 0, 0, 1)) { /* ACTLR */
/* Do not let the guest disable coherency by writing ACTLR... */
if (read) {
unsigned long val;
@@ -258,10 +255,8 @@ static int arch_handle_cp15_32(struct trap_context *ctx)
static int arch_handle_cp15_64(struct trap_context *ctx)
{
unsigned long rt_val, rt2_val;
- u32 opc1 = ctx->hsr >> 16 & 0x7;
u32 rt2 = ctx->hsr >> 10 & 0xf;
u32 rt = ctx->hsr >> 5 & 0xf;
- u32 crm = ctx->hsr >> 1 & 0xf;
u32 read = ctx->hsr & 1;

if (!read) {
@@ -270,16 +265,12 @@ static int arch_handle_cp15_64(struct trap_context *ctx)
}

#ifdef CONFIG_ARM_GIC_V3
- /* Trapped ICC_SGI1R write */
- if (!read && opc1 == 0 && crm == 12) {
+ /* trapped by HCR.IMO/FMO */
+ if (!read && HSR_MATCH_MCRR_MRRC(ctx->hsr, 0, 12)) { /* ICC_SGI1R */
arch_skip_instruction(ctx);
gicv3_handle_sgir_write((u64)rt2_val << 32 | rt_val);
return TRAP_HANDLED;
}
-#else
- /* Avoid `unused' warning... */
- crm = crm;
- opc1 = opc1;
#endif

return TRAP_UNHANDLED;
--
2.1.4

Jan Kiszka

Aug 10, 2016, 3:29:18 AM8/10/16
to jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana, Marc Zyngier, Mark Rutland
This series addresses a number of critical issues in the ARM support of
Jailhouse. Most outstanding is surely the so far missing interception of
set/way cache flushes, which is able to cause hypervisor state
corruption.

Moreover, this solves various cases where creating and destroying cells
or the hypervisor itself caused lock-ups or other errors. One of these
was related to Linux declaring HYP support unavailable after Jailhouse
ran once. Fixing it now requires a tiny patch to the root cell kernel
in order to export __boot_cpu_mode on ARM - unavoidable.

Even after digging deep into ARM caching architectures, and surely
understanding much more about them by now, I still don't feel fully safe
about this. Any reviews are welcome! This also applies to one aspect that
didn't make it into this series: there is no break-before-make on stage-2
page table changes yet, see also [1].

Marc & Mark, if anyone of you would have some time to look at least over
the assumptions and concepts I applied in the patches your are CC'ed,
that would be great. If you don't understand anything or lack context,
just let me know.

There will be a part II of changes, but those are mostly cosmetic and do
not fix additional bugs. I will send that later to ease the review of
this series.

Note that I didn't try any rebasing of ARM64 so far. Some issues
addressed here should be quite relevant for it as well...

Jan

[1] https://groups.google.com/d/msg/jailhouse-dev/c9Ier7mUNoI/YBirkGotBQAJ
(sigh, I miss gmane...)


CC: Marc Zyngier <marc.z...@arm.com>
CC: Mark Rutland <mark.r...@arm.com>

Jan Kiszka (27):
driver: Evaluate and maintain Linux' __boot_cpu_mode on ARM
driver: Flush D-cache after cell loading on ARMv7
core: Add MIN macro
arm: Rename arch_mmu API to arm_paging
arm: Align description arch_paging_flush_cpu_caches with actual logic
arm: Introduce and use arm_dcaches_flush
arm: Implement arm_cell_dcaches_flush
arm: Rename ESR to HSR
arm: Introduce and use sysreg matching macros
arm: Prepare for trapping virtual memory control registers
arm: Trap and emulate set/way cache flushes
arm: Build stage-2 page tables coherently
arm: Invalidate d-cache entries related to upcoming/vanishing cell
arm: Remove arch_cell_caches_flush and arch_cpu_icache_flush
arm: Move cell ID check to arm_paging_cell_init
arm: Remove return code from arm_paging_vcpu_init
arm: Refactor arch_cpu_tlb_flush to arm_paging_vcpu_flush_tlbs
arm: Simplify CPU shutdown
arm: Remove PC and CPSR from trap context
arm: Let arm_paging_vcpu_init take paging structure as argument
arm: Introduce CPU parking page
arm: Introduce infrastructure for reworked CPU management
arm: Reorder function to match final layout
arm: Refactor SMP setup and cleanup
arm: Switch to new CPU management services
arm: Move smc trampoline to exception.S
arm: Remove unused PSCI code, types and defines

driver/cell.c | 13 +-
driver/main.c | 18 ++
hypervisor/arch/arm/Makefile | 4 +-
hypervisor/arch/arm/caches.S | 8 -
hypervisor/arch/arm/control.c | 253 +++++++++++++++-------------
hypervisor/arch/arm/exception.S | 10 +-
hypervisor/arch/arm/gic-v2.c | 4 +-
hypervisor/arch/arm/include/asm/cell.h | 10 --
hypervisor/arch/arm/include/asm/control.h | 18 +-
hypervisor/arch/arm/include/asm/paging.h | 38 ++++-
hypervisor/arch/arm/include/asm/percpu.h | 32 +++-
hypervisor/arch/arm/include/asm/platform.h | 2 +-
hypervisor/arch/arm/include/asm/processor.h | 63 ++++---
hypervisor/arch/arm/include/asm/psci.h | 38 -----
hypervisor/arch/arm/include/asm/smp.h | 19 +--
hypervisor/arch/arm/include/asm/sysregs.h | 5 +-
hypervisor/arch/arm/include/asm/traps.h | 4 +-
hypervisor/arch/arm/irqchip.c | 2 +-
hypervisor/arch/arm/mmio.c | 23 ++-
hypervisor/arch/arm/mmu_cell.c | 101 ++++++-----
hypervisor/arch/arm/mmu_hyp.c | 15 ++
hypervisor/arch/arm/psci.c | 127 +++-----------
hypervisor/arch/arm/psci_low.S | 82 ---------
hypervisor/arch/arm/setup.c | 44 +++--
hypervisor/arch/arm/smp-sun7i.c | 25 ---
hypervisor/arch/arm/smp-tegra124.c | 25 ---
hypervisor/arch/arm/smp-vexpress.c | 91 +++++-----
hypervisor/arch/arm/smp.c | 23 ++-
hypervisor/arch/arm/traps.c | 218 ++++++++++++++++--------
hypervisor/include/jailhouse/utils.h | 1 +
30 files changed, 629 insertions(+), 687 deletions(-)
delete mode 100644 hypervisor/arch/arm/psci_low.S
delete mode 100644 hypervisor/arch/arm/smp-sun7i.c
delete mode 100644 hypervisor/arch/arm/smp-tegra124.c

--
2.1.4

Jan Kiszka

Aug 10, 2016, 3:29:18 AM8/10/16
to jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana
We need to clear the BOOT_CPU_MODE_MISMATCH flag from __boot_cpu_mode
when disabling Jailhouse. It is set by Linux when onlining a CPU while
Jailhouse is running, because we then let the guest start in SVC instead
of HYP mode. If we leave the flag set, the next physical CPU onlining
will not properly install the hypervisor vectors, and we will crash when
trying to enable Jailhouse again.

We now rely on a tiny upstream Linux patch to export __boot_cpu_mode to
GPL modules. As it thereby becomes available to us, we can also use it
(via is_hyp_mode_available) to check for the availability of the
hypervisor stub Linux should have installed, instead of simply crashing
the system when it is missing.
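For context, the helper boils down to the following check of
__boot_cpu_mode - a sketch from memory, see arch/arm/include/asm/virt.h
for the authoritative version:

	static inline bool is_hyp_mode_available(void)
	{
		/* booted in HYP mode, and all CPUs agreed on the mode */
		return ((__boot_cpu_mode & MODE_MASK) == HYP_MODE) &&
			!(__boot_cpu_mode & BOOT_CPU_MODE_MISMATCH);
	}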

Signed-off-by: Jan Kiszka <jan.k...@siemens.com>
---
driver/main.c | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)

diff --git a/driver/main.c b/driver/main.c
index 0a76e18..746de07 100644
--- a/driver/main.c
+++ b/driver/main.c
@@ -27,6 +27,9 @@
#include <asm/smp.h>
#include <asm/cacheflush.h>
#include <asm/tlbflush.h>
+#ifdef CONFIG_ARM
+#include <asm/virt.h>
+#endif

#include "cell.h"
#include "jailhouse.h"
@@ -188,6 +191,13 @@ static int jailhouse_cmd_enable(struct jailhouse_system __user *arg)
long max_cpus;
int err;

+#ifdef CONFIG_ARM
+ if (!is_hyp_mode_available()) {
+ pr_err("jailhouse: HYP mode not available\n");
+ return -ENODEV;
+ }
+#endif
+
fw_name = jailhouse_fw_name();
if (!fw_name) {
pr_err("jailhouse: Missing or unsupported HVM technology\n");
@@ -407,6 +417,14 @@ static int jailhouse_cmd_disable(void)
goto unlock_out;
}

+#ifdef CONFIG_ARM
+ /*
+ * This flag has been set when onlining a CPU under Jailhouse
+ * supervision into SVC instead of HYP mode.
+ */
+ __boot_cpu_mode &= ~BOOT_CPU_MODE_MISMATCH;
+#endif
+
atomic_set(&call_done, 0);
on_each_cpu(leave_hypervisor, NULL, 0);
while (atomic_read(&call_done) != num_online_cpus())
--
2.1.4

Jan Kiszka

Aug 10, 2016, 3:29:18 AM8/10/16
to jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana, Marc Zyngier, Mark Rutland
This generalizes arch_paging_flush_cpu_caches into arm_dcaches_flush, a
dcache flusher by MVA that can be instructed to clean, invalidate or do
both on a memory region mapped into the hypervisor address space.

arch_paging_flush_cpu_caches was already too large to be inlined, so
arm_dcaches_flush is implemented in mmu_hyp.c. This version also accounts
for regions whose size is not a multiple of the minimum cache line size.
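Usage then looks like this (see the paging.h hunk below; later patches
in the series also use DCACHE_INVALIDATE and DCACHE_CLEAN_AND_INVALIDATE):

	arm_dcaches_flush(addr, size, DCACHE_CLEAN);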

CC: Marc Zyngier <marc.z...@arm.com>
CC: Mark Rutland <mark.r...@arm.com>
Signed-off-by: Jan Kiszka <jan.k...@siemens.com>
---
hypervisor/arch/arm/include/asm/paging.h | 17 ++++++++++-------
hypervisor/arch/arm/include/asm/sysregs.h | 2 ++
hypervisor/arch/arm/mmu_hyp.c | 15 +++++++++++++++
3 files changed, 27 insertions(+), 7 deletions(-)

diff --git a/hypervisor/arch/arm/include/asm/paging.h b/hypervisor/arch/arm/include/asm/paging.h
index 4c2edba..1177023 100644
--- a/hypervisor/arch/arm/include/asm/paging.h
+++ b/hypervisor/arch/arm/include/asm/paging.h
@@ -175,6 +175,12 @@ struct per_cpu;

typedef u64 *pt_entry_t;

+enum dcache_flush {
+ DCACHE_CLEAN,
+ DCACHE_INVALIDATE,
+ DCACHE_CLEAN_AND_INVALIDATE,
+};
+
extern unsigned int cpu_parange;
extern unsigned int cache_line_size;

@@ -183,6 +189,8 @@ void arm_paging_cell_destroy(struct cell *cell);

int arm_paging_vcpu_init(struct per_cpu *cpu_data);

+void arm_dcaches_flush(void *addr, long size, enum dcache_flush flush);
+
/* return the bits supported for the physical address range for this
* machine; in arch_paging_init this value will be kept in
* cpu_parange for later reference */
@@ -203,15 +211,10 @@ static inline void arch_paging_flush_page_tlbs(unsigned long page_addr)
arm_write_sysreg(TLBIMVAH, page_addr & PAGE_MASK);
}

-/* Used to clean the PAGE_MAP_COHERENT page table changes */
+/* Used to clean the PAGING_COHERENT page table changes */
static inline void arch_paging_flush_cpu_caches(void *addr, long size)
{
- do {
- /* Clean by MVA to PoC */
- arm_write_sysreg(DCCMVAC, addr);
- size -= cache_line_size;
- addr += cache_line_size;
- } while (size > 0);
+ arm_dcaches_flush(addr, size, DCACHE_CLEAN);
}

#endif /* !__ASSEMBLY__ */
diff --git a/hypervisor/arch/arm/include/asm/sysregs.h b/hypervisor/arch/arm/include/asm/sysregs.h
index c19248e..83b5cff 100644
--- a/hypervisor/arch/arm/include/asm/sysregs.h
+++ b/hypervisor/arch/arm/include/asm/sysregs.h
@@ -103,8 +103,10 @@

#define ICIALLUIS SYSREG_32(0, c7, c1, 0)
#define ICIALLU SYSREG_32(0, c7, c5, 0)
+#define DCIMVAC SYSREG_32(0, c7, c6, 1)
#define DCCMVAC SYSREG_32(0, c7, c10, 1)
#define DCCSW SYSREG_32(0, c7, c10, 2)
+#define DCCIMVAC SYSREG_32(0, c7, c14, 1)
#define DCCISW SYSREG_32(0, c7, c14, 2)

#define TLBIALL SYSREG_32(0, c8, c7, 0)
diff --git a/hypervisor/arch/arm/mmu_hyp.c b/hypervisor/arch/arm/mmu_hyp.c
index 99b5fa5..bba54a7 100644
--- a/hypervisor/arch/arm/mmu_hyp.c
+++ b/hypervisor/arch/arm/mmu_hyp.c
@@ -369,3 +369,18 @@ int arch_unmap_device(void *vaddr, unsigned long size)
return paging_destroy(&hv_paging_structs, (unsigned long)vaddr, size,
PAGING_NON_COHERENT);
}
+
+void arm_dcaches_flush(void *addr, long size, enum dcache_flush flush)
+{
+ while (size > 0) {
+ /* clean / invalidate by MVA to PoC */
+ if (flush == DCACHE_CLEAN)
+ arm_write_sysreg(DCCMVAC, addr);
+ else if (flush == DCACHE_INVALIDATE)
+ arm_write_sysreg(DCIMVAC, addr);
+ else
+ arm_write_sysreg(DCCIMVAC, addr);
+ size -= MIN(cache_line_size, size);
+ addr += cache_line_size;
+ }
+}
--
2.1.4

Jan Kiszka

Aug 10, 2016, 3:29:19 AM8/10/16
to jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana, Marc Zyngier, Mark Rutland
This adds all the trap handling required once HCR.TVM is set. With that
bit set, a number of system control register writes trap to HYP mode, and
we need to execute them on behalf of the guest.

The only one we are actually interested in is SCTLR, so this trap is
already prepared for additional logic. The others are simply handled via
a macro that encapsulates the match as well as the register write.
TTBR0/1 may show up as either a 32-bit or a 64-bit write, depending on
whether LPAE is off or on.
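As an example of the macro approach, CP15_32_PERFORM_WRITE(2, 0, 0, 0)
in the hunk below roughly expands to:

	if (HSR_MATCH_MCR_MRC(hsr, 2, 0, 0, 0))	/* TTBR0 */
		arm_write_sysreg_32(0, c2, c0, 0, val);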

CC: Marc Zyngier <marc.z...@arm.com>
CC: Mark Rutland <mark.r...@arm.com>
Signed-off-by: Jan Kiszka <jan.k...@siemens.com>
---
hypervisor/arch/arm/traps.c | 102 +++++++++++++++++++++++++++++++++-----------
1 file changed, 77 insertions(+), 25 deletions(-)

diff --git a/hypervisor/arch/arm/traps.c b/hypervisor/arch/arm/traps.c
index 093d2f5..1c629fc 100644
--- a/hypervisor/arch/arm/traps.c
+++ b/hypervisor/arch/arm/traps.c
@@ -233,47 +233,99 @@ static int arch_handle_hvc(struct trap_context *ctx)

static int arch_handle_cp15_32(struct trap_context *ctx)
{
- u32 rt = ctx->hsr >> 5 & 0xf;
- u32 read = ctx->hsr & 1;
+ u32 hsr = ctx->hsr;
+ u32 rt = (hsr >> 5) & 0xf;
+ u32 read = hsr & 1;
+ unsigned long val;
+
+#define CP15_32_PERFORM_WRITE(crn, opc1, crm, opc2) ({ \
+ bool match = false; \
+ if (HSR_MATCH_MCR_MRC(hsr, crn, opc1, crm, opc2)) { \
+ arm_write_sysreg_32(opc1, c##crn, c##crm, opc2, val); \
+ match = true; \
+ } \
+ match; \
+})
+
+ if (!read)
+ access_cell_reg(ctx, rt, &val, true);

/* trapped by HCR.TAC */
if (HSR_MATCH_MCR_MRC(ctx->hsr, 1, 0, 0, 1)) { /* ACTLR */
/* Do not let the guest disable coherency by writing ACTLR... */
- if (read) {
- unsigned long val;
+ if (read)
arm_read_sysreg(ACTLR_EL1, val);
- access_cell_reg(ctx, rt, &val, false);
- }
- arch_skip_instruction(ctx);
-
- return TRAP_HANDLED;
}
+ /* all other regs are write-only / only trapped on writes */
+ else if (read) {
+ return TRAP_UNHANDLED;
+ }
+ /* trapped if HCR.TVM is set */
+ else if (HSR_MATCH_MCR_MRC(hsr, 1, 0, 0, 0)) { /* SCTLR */
+ // TODO: check if caches are turned on or off
+ arm_write_sysreg(SCTLR_EL1, val);
+ } else if (!(CP15_32_PERFORM_WRITE(2, 0, 0, 0) || /* TTBR0 */
+ CP15_32_PERFORM_WRITE(2, 0, 0, 1) || /* TTBR1 */
+ CP15_32_PERFORM_WRITE(2, 0, 0, 2) || /* TTBCR */
+ CP15_32_PERFORM_WRITE(3, 0, 0, 0) || /* DACR */
+ CP15_32_PERFORM_WRITE(5, 0, 0, 0) || /* DFSR */
+ CP15_32_PERFORM_WRITE(5, 0, 0, 1) || /* IFSR */
+ CP15_32_PERFORM_WRITE(6, 0, 0, 0) || /* DFAR */
+ CP15_32_PERFORM_WRITE(6, 0, 0, 2) || /* IFAR */
+ CP15_32_PERFORM_WRITE(5, 0, 1, 0) || /* ADFSR */
+ CP15_32_PERFORM_WRITE(5, 0, 1, 1) || /* AIFSR */
+ CP15_32_PERFORM_WRITE(10, 0, 2, 0) || /* PRRR / MAIR0 */
+ CP15_32_PERFORM_WRITE(10, 0, 2, 1) || /* NMRR / MAIR1 */
+ CP15_32_PERFORM_WRITE(13, 0, 0, 1))) { /* CONTEXTIDR */
+ return TRAP_UNHANDLED;
+ }
+
+ if (read)
+ access_cell_reg(ctx, rt, &val, false);
+
+ arch_skip_instruction(ctx);

- return TRAP_UNHANDLED;
+ return TRAP_HANDLED;
}

static int arch_handle_cp15_64(struct trap_context *ctx)
{
- unsigned long rt_val, rt2_val;
- u32 rt2 = ctx->hsr >> 10 & 0xf;
- u32 rt = ctx->hsr >> 5 & 0xf;
- u32 read = ctx->hsr & 1;
-
- if (!read) {
- access_cell_reg(ctx, rt, &rt_val, true);
- access_cell_reg(ctx, rt2, &rt2_val, true);
- }
+ u32 hsr = ctx->hsr;
+ u32 rt2 = (hsr >> 10) & 0xf;
+ u32 rt = (hsr >> 5) & 0xf;
+ u32 read = hsr & 1;
+ unsigned long lo, hi;
+
+#define CP15_64_PERFORM_WRITE(opc1, crm) ({ \
+ bool match = false; \
+ if (HSR_MATCH_MCRR_MRRC(hsr, opc1, crm)) { \
+ arm_write_sysreg_64(opc1, c##crm, ((u64)hi << 32) | lo); \
+ match = true; \
+ } \
+ match; \
+})
+
+ /* all regs are write-only / only trapped on writes */
+ if (read)
+ return TRAP_UNHANDLED;
+
+ access_cell_reg(ctx, rt, &lo, true);
+ access_cell_reg(ctx, rt2, &hi, true);

#ifdef CONFIG_ARM_GIC_V3
/* trapped by HCR.IMO/FMO */
- if (!read && HSR_MATCH_MCRR_MRRC(ctx->hsr, 0, 12)) { /* ICC_SGI1R */
- arch_skip_instruction(ctx);
- gicv3_handle_sgir_write((u64)rt2_val << 32 | rt_val);
- return TRAP_HANDLED;
- }
+ if (HSR_MATCH_MCRR_MRRC(ctx->hsr, 0, 12)) /* ICC_SGI1R */
+ gicv3_handle_sgir_write(((u64)hi << 32) | lo);
+ else
#endif
+ /* trapped if HCR.TVM is set */
+ if (!(CP15_64_PERFORM_WRITE(0, 2) || /* TTBR0 */
+ CP15_64_PERFORM_WRITE(1, 2))) /* TTBR1 */
+ return TRAP_UNHANDLED;

- return TRAP_UNHANDLED;
+ arch_skip_instruction(ctx);
+
+ return TRAP_HANDLED;
}

static const trap_handler trap_handlers[38] =
--
2.1.4

Jan Kiszka

Aug 10, 2016, 3:29:19 AM8/10/16
to jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana, Marc Zyngier, Mark Rutland
This flushes all dcache entries related to a specific cell by mapping
each physical RAM page of the cell into the hypervisor and then
performing the requested flush on the corresponding virtual address.
Those flushes are broadcast to all CPUs, thus the call only needs to be
performed once on any CPU in the system.

This pattern was bluntly stolen from KVM. It will serve as a building
block to emulate guest-issued set/way cache maintenance operations.
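Later patches in this series then use it like this, e.g. for set/way
emulation and when invalidating a created/destroyed cell's memory:

	arm_cell_dcaches_flush(this_cell(), DCACHE_CLEAN_AND_INVALIDATE);
	arm_cell_dcaches_flush(cell, DCACHE_INVALIDATE);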

CC: Marc Zyngier <marc.z...@arm.com>
CC: Mark Rutland <mark.r...@arm.com>
Signed-off-by: Jan Kiszka <jan.k...@siemens.com>
---
hypervisor/arch/arm/include/asm/paging.h | 1 +
hypervisor/arch/arm/mmu_cell.c | 35 ++++++++++++++++++++++++++++++++
2 files changed, 36 insertions(+)

diff --git a/hypervisor/arch/arm/include/asm/paging.h b/hypervisor/arch/arm/include/asm/paging.h
index 1177023..f2ee398 100644
--- a/hypervisor/arch/arm/include/asm/paging.h
+++ b/hypervisor/arch/arm/include/asm/paging.h
@@ -190,6 +190,7 @@ void arm_paging_cell_destroy(struct cell *cell);
int arm_paging_vcpu_init(struct per_cpu *cpu_data);

void arm_dcaches_flush(void *addr, long size, enum dcache_flush flush);
+void arm_cell_dcaches_flush(struct cell *cell, enum dcache_flush flush);

/* return the bits supported for the physical address range for this
* machine; in arch_paging_init this value will be kept in
diff --git a/hypervisor/arch/arm/mmu_cell.c b/hypervisor/arch/arm/mmu_cell.c
index 6bce1ab..baf9ba0 100644
--- a/hypervisor/arch/arm/mmu_cell.c
+++ b/hypervisor/arch/arm/mmu_cell.c
@@ -55,6 +55,41 @@ unsigned long arch_paging_gphys2phys(struct per_cpu *cpu_data,
return paging_virt2phys(&cpu_data->cell->arch.mm, gphys, flags);
}

+void arm_cell_dcaches_flush(struct cell *cell, enum dcache_flush flush)
+{
+ unsigned long vaddr = TEMPORARY_MAPPING_BASE +
+ this_cpu_id() * PAGE_SIZE * NUM_TEMPORARY_PAGES;
+ unsigned long region_addr, region_size, size;
+ struct jailhouse_memory const *mem;
+ unsigned int n;
+
+ for_each_mem_region(mem, cell->config, n) {
+ if (mem->flags & (JAILHOUSE_MEM_IO | JAILHOUSE_MEM_COMM_REGION))
+ continue;
+
+ region_addr = mem->phys_start;
+ region_size = mem->size;
+
+ while (region_size > 0) {
+ size = MIN(region_size,
+ NUM_TEMPORARY_PAGES * PAGE_SIZE);
+
+ /* cannot fail, mapping area is preallocated */
+ paging_create(&hv_paging_structs, region_addr, size,
+ vaddr, PAGE_DEFAULT_FLAGS,
+ PAGING_NON_COHERENT);
+
+ arm_dcaches_flush((void *)vaddr, size, flush);
+
+ region_addr += size;
+ region_size -= size;
+ }
+ }
+
+ /* ensure completion of the flush */
+ dmb(ish);
+}
+
int arm_paging_cell_init(struct cell *cell)
{
cell->arch.mm.root_paging = cell_paging;
--
2.1.4

Jan Kiszka

Aug 10, 2016, 3:29:19 AM8/10/16
to jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana
We now invalidate/clean the cell caches via MVA, and the driver performs
an MVA-based flush of everything it loads into a cell as well. So we can
safely drop arch_cell_caches_flush as well as arch_cpu_icache_flush,
which was only invoked by that function.

Signed-off-by: Jan Kiszka <jan.k...@siemens.com>
---
hypervisor/arch/arm/caches.S | 8 --------
hypervisor/arch/arm/control.c | 11 -----------
hypervisor/arch/arm/include/asm/cell.h | 6 ------
hypervisor/arch/arm/include/asm/control.h | 2 --
hypervisor/arch/arm/mmu_cell.c | 26 --------------------------
5 files changed, 53 deletions(-)

diff --git a/hypervisor/arch/arm/caches.S b/hypervisor/arch/arm/caches.S
index f965e6a..c71dea1 100644
--- a/hypervisor/arch/arm/caches.S
+++ b/hypervisor/arch/arm/caches.S
@@ -78,11 +78,3 @@ next_cache:
finish: isb
pop {r0-r11}
bx lr
-
- .global arch_cpu_icache_flush
-arch_cpu_icache_flush:
- dsb
- arm_write_sysreg(ICIALLU, r0) @ r0 value is ignored
- dsb
- isb
- bx lr
diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index 83add4c..245a87f 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -98,13 +98,6 @@ void arch_reset_self(struct per_cpu *cpu_data)
err = arm_paging_vcpu_init(cpu_data);
if (err)
printk("MMU setup failed\n");
- /*
- * On the first CPU to reach this, write all cell datas to memory so it
- * can be started with caches disabled.
- * On all CPUs, invalidate the instruction caches to take into account
- * the potential new instructions.
- */
- arch_cell_caches_flush(cell);

/*
* We come from the IRQ handler, but we won't return there, so the IPI
@@ -228,16 +221,12 @@ void arch_resume_cpu(unsigned int cpu_id)
/* CPU must be stopped */
void arch_park_cpu(unsigned int cpu_id)
{
- struct per_cpu *cpu_data = per_cpu(cpu_id);
-
/*
* Reset always follows park_cpu, so we just need to make sure that the
* CPU is suspended
*/
if (psci_wait_cpu_stopped(cpu_id) != 0)
printk("ERROR: CPU%d is supposed to be stopped\n", cpu_id);
- else
- cpu_data->cell->arch.needs_flush = true;
}

/* CPU must be stopped */
diff --git a/hypervisor/arch/arm/include/asm/cell.h b/hypervisor/arch/arm/include/asm/cell.h
index 696856a..305a2e8 100644
--- a/hypervisor/arch/arm/include/asm/cell.h
+++ b/hypervisor/arch/arm/include/asm/cell.h
@@ -15,22 +15,16 @@

#include <jailhouse/types.h>
#include <asm/smp.h>
-#include <asm/spinlock.h>

#ifndef __ASSEMBLY__

-#include <jailhouse/cell-config.h>
#include <jailhouse/paging.h>
-#include <jailhouse/hypercall.h>

/** ARM-specific cell states. */
struct arch_cell {
struct paging_structures mm;
struct smp_ops *smp;

- spinlock_t caches_lock;
- bool needs_flush;
-
u32 irq_bitmap[1024/32];

unsigned int last_virt_id;
diff --git a/hypervisor/arch/arm/include/asm/control.h b/hypervisor/arch/arm/include/asm/control.h
index c84a0e3..e901f83 100644
--- a/hypervisor/arch/arm/include/asm/control.h
+++ b/hypervisor/arch/arm/include/asm/control.h
@@ -25,9 +25,7 @@
#include <asm/percpu.h>

void arch_cpu_dcaches_flush(unsigned int action);
-void arch_cpu_icache_flush(void);
void arch_cpu_tlb_flush(struct per_cpu *cpu_data);
-void arch_cell_caches_flush(struct cell *cell);

void arch_handle_sgi(struct per_cpu *cpu_data, u32 irqn);
void arch_handle_trap(struct per_cpu *cpu_data, struct registers *guest_regs);
diff --git a/hypervisor/arch/arm/mmu_cell.c b/hypervisor/arch/arm/mmu_cell.c
index a6a03c9..83ad0f2 100644
--- a/hypervisor/arch/arm/mmu_cell.c
+++ b/hypervisor/arch/arm/mmu_cell.c
@@ -146,29 +146,3 @@ void arch_cpu_tlb_flush(struct per_cpu *cpu_data)
dsb(nsh);
cpu_data->flush_vcpu_caches = false;
}
-
-void arch_cell_caches_flush(struct cell *cell)
-{
- /* Only the first CPU needs to clean the data caches */
- spin_lock(&cell->arch.caches_lock);
- if (cell->arch.needs_flush) {
- /*
- * Since there is no way to know which virtual addresses have been used
- * by the root cell to write the new cell's data, a complete clean has
- * to be performed.
- */
- arch_cpu_dcaches_flush(CACHES_CLEAN_INVALIDATE);
- cell->arch.needs_flush = false;
- }
- spin_unlock(&cell->arch.caches_lock);
-
- /*
- * New instructions may have been written, so the I-cache needs to be
- * invalidated even though the VMID is different.
- * A complete invalidation is the only way to ensure all virtual aliases
- * of these memory locations are invalidated, whatever the cache type.
- */
- arch_cpu_icache_flush();
-
- /* ERET will ensure context synchronization */
-}
--
2.1.4

Jan Kiszka

Aug 10, 2016, 3:29:20 AM8/10/16
to jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana
It's sufficient to do this once during cell initialization. Make the
error traceable at this chance.

Signed-off-by: Jan Kiszka <jan.k...@siemens.com>
---
hypervisor/arch/arm/mmu_cell.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/hypervisor/arch/arm/mmu_cell.c b/hypervisor/arch/arm/mmu_cell.c
index 83ad0f2..1c055c6 100644
--- a/hypervisor/arch/arm/mmu_cell.c
+++ b/hypervisor/arch/arm/mmu_cell.c
@@ -92,6 +92,9 @@ void arm_cell_dcaches_flush(struct cell *cell, enum dcache_flush flush)

int arm_paging_cell_init(struct cell *cell)
{
+ if (cell->id > 0xff)
+ return trace_error(-E2BIG);
+
cell->arch.mm.root_paging = cell_paging;
cell->arch.mm.root_table =
page_alloc_aligned(&mem_pool, ARM_CELL_ROOT_PT_SZ);
@@ -114,10 +117,6 @@ int arm_paging_vcpu_init(struct per_cpu *cpu_data)
u64 vttbr = 0;
u32 vtcr = VTCR_CELL;

- if (cell->id > 0xff) {
- panic_printk("No cell ID available\n");
- return -E2BIG;
- }
vttbr |= (u64)cell->id << VTTBR_VMID_SHIFT;
vttbr |= (u64)(cell_table & TTBR_MASK);

--
2.1.4

Jan Kiszka

Aug 10, 2016, 3:29:20 AM8/10/16
to jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana
This allows manipulating them also from functions that do not have
access to the current exception context.
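For instance, skipping an instruction now reads and writes the banked
ELR_hyp register directly (as in the traps.c hunk below):

	u32 pc;

	arm_read_banked_reg(ELR_hyp, pc);
	pc += HSR_IL(ctx->hsr) ? 4 : 2;
	arm_write_banked_reg(ELR_hyp, pc);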

Signed-off-by: Jan Kiszka <jan.k...@siemens.com>
---
hypervisor/arch/arm/include/asm/traps.h | 2 --
hypervisor/arch/arm/mmio.c | 21 ++++++++++-------
hypervisor/arch/arm/traps.c | 42 ++++++++++++++++++---------------
3 files changed, 36 insertions(+), 29 deletions(-)

diff --git a/hypervisor/arch/arm/include/asm/traps.h b/hypervisor/arch/arm/include/asm/traps.h
index 1364eef..53684c0 100644
--- a/hypervisor/arch/arm/include/asm/traps.h
+++ b/hypervisor/arch/arm/include/asm/traps.h
@@ -29,8 +29,6 @@ enum trap_return {
struct trap_context {
unsigned long *regs;
u32 hsr;
- u32 cpsr;
- u32 pc;
};

typedef int (*trap_handler)(struct trap_context *ctx);
diff --git a/hypervisor/arch/arm/mmio.c b/hypervisor/arch/arm/mmio.c
index 6b72f3e..5f18507 100644
--- a/hypervisor/arch/arm/mmio.c
+++ b/hypervisor/arch/arm/mmio.c
@@ -28,30 +28,35 @@ static void arch_inject_dabt(struct trap_context *ctx, unsigned long addr)
unsigned int lr_offset;
unsigned long vbar;
bool is_thumb;
- u32 sctlr, ttbcr;
+ u32 sctlr, ttbcr, cpsr, pc;

arm_read_sysreg(SCTLR_EL1, sctlr);
arm_read_sysreg(TTBCR, ttbcr);

+ arm_read_banked_reg(ELR_hyp, pc);
+ arm_read_banked_reg(SPSR_hyp, cpsr);
+
/* Set cpsr */
- is_thumb = ctx->cpsr & PSR_T_BIT;
- ctx->cpsr &= ~(PSR_MODE_MASK | PSR_IT_MASK(0xff) | PSR_T_BIT
+ is_thumb = cpsr & PSR_T_BIT;
+ cpsr &= ~(PSR_MODE_MASK | PSR_IT_MASK(0xff) | PSR_T_BIT
| PSR_J_BIT | PSR_E_BIT);
- ctx->cpsr |= (PSR_ABT_MODE | PSR_I_BIT | PSR_A_BIT);
+ cpsr |= (PSR_ABT_MODE | PSR_I_BIT | PSR_A_BIT);
if (sctlr & SCTLR_TE_BIT)
- ctx->cpsr |= PSR_T_BIT;
+ cpsr |= PSR_T_BIT;
if (sctlr & SCTLR_EE_BIT)
- ctx->cpsr |= PSR_E_BIT;
+ cpsr |= PSR_E_BIT;
+
+ arm_write_banked_reg(SPSR_hyp, cpsr);

lr_offset = (is_thumb ? 4 : 0);
- arm_write_banked_reg(LR_abt, ctx->pc + lr_offset);
+ arm_write_banked_reg(LR_abt, pc + lr_offset);

/* Branch to dabt vector */
if (sctlr & SCTLR_V_BIT)
vbar = 0xffff0000;
else
arm_read_sysreg(VBAR, vbar);
- ctx->pc = vbar + 0x10;
+ arm_write_banked_reg(ELR_hyp, vbar + 0x10);

/* Signal a debug fault. DFSR layout depends on the LPAE bit */
if (ttbcr >> 31)
diff --git a/hypervisor/arch/arm/traps.c b/hypervisor/arch/arm/traps.c
index 29d6c6e..c7fb9e8 100644
--- a/hypervisor/arch/arm/traps.c
+++ b/hypervisor/arch/arm/traps.c
@@ -53,9 +53,11 @@ static bool arch_failed_condition(struct trap_context *ctx)
{
u32 class = HSR_EC(ctx->hsr);
u32 icc = HSR_ICC(ctx->hsr);
- u32 cpsr = ctx->cpsr;
- u32 flags = cpsr >> 28;
- u32 cond;
+ u32 cpsr, flags, cond;
+
+ arm_read_banked_reg(SPSR_hyp, cpsr);
+ flags = cpsr >> 28;
+
/*
* Trapped instruction is unconditional, already passed the condition
* check, or is invalid
@@ -95,8 +97,9 @@ static bool arch_failed_condition(struct trap_context *ctx)
static void arch_advance_itstate(struct trap_context *ctx)
{
unsigned long itbits, cond;
- unsigned long cpsr = ctx->cpsr;
+ u32 cpsr;

+ arm_read_banked_reg(SPSR_hyp, cpsr);
if (!(cpsr & PSR_IT_MASK(0xff)))
return;

@@ -113,21 +116,26 @@ static void arch_advance_itstate(struct trap_context *ctx)
cpsr &= ~PSR_IT_MASK(0xff);
cpsr |= PSR_IT_MASK(itbits);

- ctx->cpsr = cpsr;
+ arm_write_banked_reg(SPSR_hyp, cpsr);
}

void arch_skip_instruction(struct trap_context *ctx)
{
- u32 instruction_length = HSR_IL(ctx->hsr);
+ u32 pc;

- ctx->pc += (instruction_length ? 4 : 2);
+ arm_read_banked_reg(ELR_hyp, pc);
+ pc += HSR_IL(ctx->hsr) ? 4 : 2;
+ arm_write_banked_reg(ELR_hyp, pc);
arch_advance_itstate(ctx);
}

void access_cell_reg(struct trap_context *ctx, u8 reg, unsigned long *val,
bool is_read)
{
- unsigned long mode = ctx->cpsr & PSR_MODE_MASK;
+ u32 mode;
+
+ arm_read_banked_reg(SPSR_hyp, mode);
+ mode &= PSR_MODE_MASK;

switch (reg) {
case 0 ... 7:
@@ -178,9 +186,9 @@ void access_cell_reg(struct trap_context *ctx, u8 reg, unsigned long *val,
printk("WARNING: trapped instruction attempted to explicitly "
"access the PC.\n");
if (is_read)
- *val = ctx->pc;
+ arm_read_banked_reg(ELR_hyp, *val);
else
- ctx->pc = *val;
+ arm_write_banked_reg(ELR_hyp, *val);
break;
default:
/* Programming error */
@@ -193,9 +201,11 @@ static void dump_guest_regs(struct trap_context *ctx)
{
u8 reg;
unsigned long reg_val;
+ u32 pc, cpsr;

- panic_printk("pc=0x%08x cpsr=0x%08x hsr=0x%08x\n", ctx->pc, ctx->cpsr,
- ctx->hsr);
+ arm_read_banked_reg(ELR_hyp, pc);
+ arm_read_banked_reg(SPSR_hyp, cpsr);
+ panic_printk("pc=0x%08x cpsr=0x%08x hsr=0x%08x\n", pc, cpsr, ctx->hsr);
for (reg = 0; reg < 15; reg++) {
access_cell_reg(ctx, reg, &reg_val, true);
panic_printk("r%d=0x%08x ", reg, reg_val);
@@ -372,8 +382,6 @@ void arch_handle_trap(struct per_cpu *cpu_data, struct registers *guest_regs)
u32 exception_class;
int ret = TRAP_UNHANDLED;

- arm_read_banked_reg(ELR_hyp, ctx.pc);
- arm_read_banked_reg(SPSR_hyp, ctx.cpsr);
arm_read_sysreg(HSR, ctx.hsr);
exception_class = HSR_EC(ctx.hsr);
ctx.regs = guest_regs->usr;
@@ -384,7 +392,7 @@ void arch_handle_trap(struct per_cpu *cpu_data, struct registers *guest_regs)
*/
if (arch_failed_condition(&ctx)) {
arch_skip_instruction(&ctx);
- goto restore_context;
+ return;
}

if (trap_handlers[exception_class])
@@ -400,8 +408,4 @@ void arch_handle_trap(struct per_cpu *cpu_data, struct registers *guest_regs)
dump_guest_regs(&ctx);
panic_park();
}
-
-restore_context:
- arm_write_banked_reg(SPSR_hyp, ctx.cpsr);
- arm_write_banked_reg(ELR_hyp, ctx.pc);
}
--
2.1.4

Jan Kiszka

Aug 10, 2016, 3:29:20 AM8/10/16
to jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana, Marc Zyngier, Mark Rutland
Prior to running the guest with caches disabled, we have to make sure
that all stage-2 page tables are flushed to memory. The easiest way to
achieve this is by building the tables coherently.

CC: Marc Zyngier <marc.z...@arm.com>
CC: Mark Rutland <mark.r...@arm.com>
Signed-off-by: Jan Kiszka <jan.k...@siemens.com>
---
hypervisor/arch/arm/gic-v2.c | 4 ++--
hypervisor/arch/arm/mmu_cell.c | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/hypervisor/arch/arm/gic-v2.c b/hypervisor/arch/arm/gic-v2.c
index 3b0cbb0..bfeda90 100644
--- a/hypervisor/arch/arm/gic-v2.c
+++ b/hypervisor/arch/arm/gic-v2.c
@@ -192,7 +192,7 @@ static int gic_cell_init(struct cell *cell)
gicc_size, (unsigned long)gicc_base,
(PTE_FLAG_VALID | PTE_ACCESS_FLAG |
S2_PTE_ACCESS_RW | S2_PTE_FLAG_DEVICE),
- PAGING_NON_COHERENT);
+ PAGING_COHERENT);
if (err)
return err;

@@ -204,7 +204,7 @@ static int gic_cell_init(struct cell *cell)
static void gic_cell_exit(struct cell *cell)
{
paging_destroy(&cell->arch.mm, (unsigned long)gicc_base, gicc_size,
- PAGING_NON_COHERENT);
+ PAGING_COHERENT);
}

static void gic_adjust_irq_target(struct cell *cell, u16 irq_id)
diff --git a/hypervisor/arch/arm/mmu_cell.c b/hypervisor/arch/arm/mmu_cell.c
index baf9ba0..a6a03c9 100644
--- a/hypervisor/arch/arm/mmu_cell.c
+++ b/hypervisor/arch/arm/mmu_cell.c
@@ -38,14 +38,14 @@ int arch_map_memory_region(struct cell *cell,
*/

return paging_create(&cell->arch.mm, phys_start, mem->size,
- mem->virt_start, flags, PAGING_NON_COHERENT);
+ mem->virt_start, flags, PAGING_COHERENT);
}

int arch_unmap_memory_region(struct cell *cell,
const struct jailhouse_memory *mem)
{
return paging_destroy(&cell->arch.mm, mem->virt_start, mem->size,
- PAGING_NON_COHERENT);
+ PAGING_COHERENT);
}

unsigned long arch_paging_gphys2phys(struct per_cpu *cpu_data,
--
2.1.4

Jan Kiszka

Aug 10, 2016, 3:29:20 AM8/10/16
to jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana
Move the resetting of flush_vcpu_caches to the callers that actually
require it, then remove the misleading cpu_data argument (the service
only works for the calling CPU) and rename the function so that it is
consistent with the other cache flushing services.

Signed-off-by: Jan Kiszka <jan.k...@siemens.com>
---
hypervisor/arch/arm/control.c | 9 ++++++---
hypervisor/arch/arm/include/asm/control.h | 1 -
hypervisor/arch/arm/include/asm/paging.h | 9 +++++++++
hypervisor/arch/arm/include/asm/processor.h | 2 --
hypervisor/arch/arm/mmu_cell.c | 13 +------------
hypervisor/arch/arm/setup.c | 2 +-
6 files changed, 17 insertions(+), 19 deletions(-)

diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index 3c14c0d..494b160 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -131,8 +131,11 @@ static void arch_suspend_self(struct per_cpu *cpu_data)
{
psci_suspend(cpu_data);

- if (cpu_data->flush_vcpu_caches)
- arch_cpu_tlb_flush(cpu_data);
+ if (cpu_data->flush_vcpu_caches) {
+ arm_paging_vcpu_flush_tlbs();
+ dsb(nsh);
+ cpu_data->flush_vcpu_caches = false;
+ }
}

static void arch_dump_exit(struct registers *regs, const char *reason)
@@ -357,7 +360,7 @@ void arch_flush_cell_vcpu_caches(struct cell *cell)

for_each_cpu(cpu, cell->cpu_set)
if (cpu == this_cpu_id())
- arch_cpu_tlb_flush(per_cpu(cpu));
+ arm_paging_vcpu_flush_tlbs();
else
per_cpu(cpu)->flush_vcpu_caches = true;
}
diff --git a/hypervisor/arch/arm/include/asm/control.h b/hypervisor/arch/arm/include/asm/control.h
index e901f83..930f37e 100644
--- a/hypervisor/arch/arm/include/asm/control.h
+++ b/hypervisor/arch/arm/include/asm/control.h
@@ -25,7 +25,6 @@
#include <asm/percpu.h>

void arch_cpu_dcaches_flush(unsigned int action);
-void arch_cpu_tlb_flush(struct per_cpu *cpu_data);

void arch_handle_sgi(struct per_cpu *cpu_data, u32 irqn);
void arch_handle_trap(struct per_cpu *cpu_data, struct registers *guest_regs);
diff --git a/hypervisor/arch/arm/include/asm/paging.h b/hypervisor/arch/arm/include/asm/paging.h
index 72ae192..fc4103b 100644
--- a/hypervisor/arch/arm/include/asm/paging.h
+++ b/hypervisor/arch/arm/include/asm/paging.h
@@ -192,6 +192,15 @@ void arm_paging_vcpu_init(struct per_cpu *cpu_data);
void arm_dcaches_flush(void *addr, long size, enum dcache_flush flush);
void arm_cell_dcaches_flush(struct cell *cell, enum dcache_flush flush);

+static inline void arm_paging_vcpu_flush_tlbs(void)
+{
+ /*
+ * Invalidate all stage-1 and 2 TLB entries for the current VMID
+ * ERET will ensure completion of these ops
+ */
+ arm_write_sysreg(TLBIALL, 0);
+}
+
/* return the bits supported for the physical address range for this
* machine; in arch_paging_init this value will be kept in
* cpu_parange for later reference */
diff --git a/hypervisor/arch/arm/include/asm/processor.h b/hypervisor/arch/arm/include/asm/processor.h
index e32e768..1dcc3da 100644
--- a/hypervisor/arch/arm/include/asm/processor.h
+++ b/hypervisor/arch/arm/include/asm/processor.h
@@ -208,8 +208,6 @@ static inline bool is_el2(void)
return (psr & PSR_MODE_MASK) == PSR_HYP_MODE;
}

-#define tlb_flush_guest() arm_write_sysreg(TLBIALL, 1)
-
#endif /* !__ASSEMBLY__ */

#endif /* !_JAILHOUSE_ASM_PROCESSOR_H */
diff --git a/hypervisor/arch/arm/mmu_cell.c b/hypervisor/arch/arm/mmu_cell.c
index dae7db4..c9bd683 100644
--- a/hypervisor/arch/arm/mmu_cell.c
+++ b/hypervisor/arch/arm/mmu_cell.c
@@ -130,16 +130,5 @@ void arm_paging_vcpu_init(struct per_cpu *cpu_data)
* since they register themselves to the root cpu_set afterwards. It
* means that this unconditionnal flush is redundant on master CPU.
*/
- arch_cpu_tlb_flush(cpu_data);
-}
-
-void arch_cpu_tlb_flush(struct per_cpu *cpu_data)
-{
- /*
- * Invalidate all stage-1 and 2 TLB entries for the current VMID
- * ERET will ensure completion of these ops
- */
- tlb_flush_guest();
- dsb(nsh);
- cpu_data->flush_vcpu_caches = false;
+ arm_paging_vcpu_flush_tlbs();
}
diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index 6ee8c7c..73548a7 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -145,7 +145,7 @@ void arch_shutdown_self(struct per_cpu *cpu_data)
arm_write_sysreg(VTCR_EL2, 0);

/* Remove stage-2 mappings */
- arch_cpu_tlb_flush(cpu_data);
+ arm_paging_vcpu_flush_tlbs();

/* TLB flush needs the cell's VMID */
isb();
--
2.1.4

Jan Kiszka

Aug 10, 2016, 3:29:20 AM8/10/16
to jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana
Align with x86 and call the shutdown function on return from the disable
hypercall. All the special cases in arch_reset_self are no longer needed
since we only shut down with all CPUs assigned to the root cell (commit
e69075455bd1).

Signed-off-by: Jan Kiszka <jan.k...@siemens.com>
---
hypervisor/arch/arm/control.c | 25 +++----------------------
hypervisor/arch/arm/include/asm/percpu.h | 1 -
hypervisor/arch/arm/traps.c | 11 ++++++++---
3 files changed, 11 insertions(+), 26 deletions(-)

diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index 494b160..78c33ff 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -91,10 +91,8 @@ void arch_reset_self(struct per_cpu *cpu_data)
unsigned long reset_address;
struct cell *cell = cpu_data->cell;
struct registers *regs = guest_regs(cpu_data);
- bool is_shutdown = cpu_data->shutdown;

- if (!is_shutdown)
- arm_paging_vcpu_init(cpu_data);
+ arm_paging_vcpu_init(cpu_data);

/*
* We come from the IRQ handler, but we won't return there, so the IPI
@@ -102,11 +100,10 @@ void arch_reset_self(struct per_cpu *cpu_data)
*/
irqchip_eoi_irq(SGI_CPU_OFF, true);

- if (!is_shutdown)
- irqchip_cpu_reset(cpu_data);
+ irqchip_cpu_reset(cpu_data);

/* Wait for the driver to call cpu_up */
- if (cell == &root_cell || is_shutdown)
+ if (cell == &root_cell)
reset_address = arch_smp_spin(cpu_data, root_cell.arch.smp);
else
reset_address = arch_smp_spin(cpu_data, cell->arch.smp);
@@ -120,10 +117,6 @@ void arch_reset_self(struct per_cpu *cpu_data)
arm_write_banked_reg(ELR_hyp, reset_address);
arm_write_banked_reg(SPSR_hyp, RESET_PSR);

- if (is_shutdown)
- /* Won't return here. */
- arch_shutdown_self(cpu_data);
-
vmreturn(regs);
}

@@ -200,10 +193,6 @@ struct registers* arch_handle_exit(struct per_cpu *cpu_data,
panic_stop();
}

- if (cpu_data->shutdown)
- /* Won't return here. */
- arch_shutdown_self(cpu_data);
-
return regs;
}

@@ -393,12 +382,4 @@ void arch_panic_park(void)

void arch_shutdown(void)
{
- unsigned int cpu;
-
- /*
- * Let the exit handler call reset_self to let the core finish its
- * shutdown function and release its lock.
- */
- for_each_cpu(cpu, root_cell.cpu_set)
- per_cpu(cpu)->shutdown = true;
}
diff --git a/hypervisor/arch/arm/include/asm/percpu.h b/hypervisor/arch/arm/include/asm/percpu.h
index 0e9eac8..4220f9e 100644
--- a/hypervisor/arch/arm/include/asm/percpu.h
+++ b/hypervisor/arch/arm/include/asm/percpu.h
@@ -61,7 +61,6 @@ struct per_cpu {

bool flush_vcpu_caches;
int shutdown_state;
- bool shutdown;
unsigned long mpidr;
bool failed;
} __attribute__((aligned(PAGE_SIZE)));
diff --git a/hypervisor/arch/arm/traps.c b/hypervisor/arch/arm/traps.c
index fafdcd8..29d6c6e 100644
--- a/hypervisor/arch/arm/traps.c
+++ b/hypervisor/arch/arm/traps.c
@@ -222,11 +222,16 @@ static int arch_handle_smc(struct trap_context *ctx)
static int arch_handle_hvc(struct trap_context *ctx)
{
unsigned long *regs = ctx->regs;
+ unsigned long code = regs[0];

- if (IS_PSCI_32(regs[0]) || IS_PSCI_UBOOT(regs[0]))
+ if (IS_PSCI_32(code) || IS_PSCI_UBOOT(code)) {
regs[0] = psci_dispatch(ctx);
- else
- regs[0] = hypercall(regs[0], regs[1], regs[2]);
+ } else {
+ regs[0] = hypercall(code, regs[1], regs[2]);
+
+ if (code == JAILHOUSE_HC_DISABLE && regs[0] == 0)
+ arch_shutdown_self(this_cpu_data());
+ }

return TRAP_HANDLED;
}
--
2.1.4

Jan Kiszka

Aug 10, 2016, 3:29:20 AM8/10/16
to jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana, Marc Zyngier, Mark Rutland
We cannot allow a cell to perform cache maintenance on a set/way basis
natively. In the worst case, invalidation without cleaning, it can drop
unwritten hypervisor state. This was observed occasionally under heavy
CPU onlining/offlining workloads (and after fixing the other bugs in this
area). Moreover, set/way cleaning has an impact beyond the scope of the
cell and could thus delay time-sensitive workloads in other cells.

The approach implemented here is derived from KVM: on the first S/W
flush, perform a full flush via MVA on the guest address space. Then
enable TVM to catch the point when the guest enables or disables its
caches afterwards. At that point, redo the flush if the caches are
flipped from on to off, and turn TVM trapping off again. KVM also flushes
the caches again when enabling them, but there is no need for that, so we
can save this rather costly step.
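Condensed, the logic added to traps.c below is:

	/* DCISW/DCCSW/DCCISW trapped via HCR.TSW */
	if (!(hcr & HCR_TVM_BIT)) {
		arm_cell_dcaches_flush(this_cell(),
				       DCACHE_CLEAN_AND_INVALIDATE);
		arm_write_sysreg(HCR, hcr | HCR_TVM_BIT); /* start tracking */
	}

	/* SCTLR write trapped via HCR.TVM */
	if (SCTLR_C_AND_M_SET(val) != SCTLR_C_AND_M_SET(old_sctlr)) {
		if (SCTLR_C_AND_M_SET(old_sctlr)) /* caches went off */
			arm_cell_dcaches_flush(this_cell(),
					       DCACHE_CLEAN_AND_INVALIDATE);
		arm_write_sysreg(HCR, hcr & ~HCR_TVM_BIT); /* stop tracking */
	}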

CC: Marc Zyngier <marc.z...@arm.com>
CC: Mark Rutland <mark.r...@arm.com>
Signed-off-by: Jan Kiszka <jan.k...@siemens.com>
---
hypervisor/arch/arm/include/asm/processor.h | 3 +++
hypervisor/arch/arm/setup.c | 2 +-
hypervisor/arch/arm/traps.c | 26 +++++++++++++++++++++++++-
3 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/hypervisor/arch/arm/include/asm/processor.h b/hypervisor/arch/arm/include/asm/processor.h
index dc6f9bb..e32e768 100644
--- a/hypervisor/arch/arm/include/asm/processor.h
+++ b/hypervisor/arch/arm/include/asm/processor.h
@@ -64,6 +64,9 @@
#define SCTLR_AFE_BIT (1 << 29)
#define SCTLR_TE_BIT (1 << 30)

+#define SCTLR_C_AND_M_SET(sctlr) \
+ (((sctlr) & (SCTLR_C_BIT | SCTLR_M_BIT)) == (SCTLR_C_BIT | SCTLR_M_BIT))
+
/* Bits to wipe on cell reset */
#define SCTLR_MASK (SCTLR_M_BIT | SCTLR_A_BIT | SCTLR_C_BIT \
| SCTLR_I_BIT | SCTLR_V_BIT | SCTLR_WXN_BIT \
diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index 33ad299..c1bf0fd 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -53,7 +53,7 @@ int arch_cpu_init(struct per_cpu *cpu_data)
{
int err = 0;
unsigned long hcr = HCR_VM_BIT | HCR_IMO_BIT | HCR_FMO_BIT
- | HCR_TSC_BIT | HCR_TAC_BIT;
+ | HCR_TSC_BIT | HCR_TAC_BIT | HCR_TSW_BIT;

cpu_data->psci_mbox.entry = 0;
cpu_data->virt_id = cpu_data->cpu_id;
diff --git a/hypervisor/arch/arm/traps.c b/hypervisor/arch/arm/traps.c
index 1c629fc..fafdcd8 100644
--- a/hypervisor/arch/arm/traps.c
+++ b/hypervisor/arch/arm/traps.c
@@ -236,6 +236,7 @@ static int arch_handle_cp15_32(struct trap_context *ctx)
u32 hsr = ctx->hsr;
u32 rt = (hsr >> 5) & 0xf;
u32 read = hsr & 1;
+ u32 hcr, old_sctlr;
unsigned long val;

#define CP15_32_PERFORM_WRITE(crn, opc1, crm, opc2) ({ \
@@ -260,10 +261,33 @@ static int arch_handle_cp15_32(struct trap_context *ctx)
else if (read) {
return TRAP_UNHANDLED;
}
+ /* trapped by HCR.TSW */
+ else if (HSR_MATCH_MCR_MRC(hsr, 7, 0, 6, 2) || /* DCISW */
+ HSR_MATCH_MCR_MRC(hsr, 7, 0, 10, 2) || /* DCCSW */
+ HSR_MATCH_MCR_MRC(hsr, 7, 0, 14, 2)) { /* DCCISW */
+ arm_read_sysreg(HCR, hcr);
+ if (!(hcr & HCR_TVM_BIT)) {
+ arm_cell_dcaches_flush(this_cell(),
+ DCACHE_CLEAN_AND_INVALIDATE);
+ arm_write_sysreg(HCR, hcr | HCR_TVM_BIT);
+ }
+ }
/* trapped if HCR.TVM is set */
else if (HSR_MATCH_MCR_MRC(hsr, 1, 0, 0, 0)) { /* SCTLR */
- // TODO: check if caches are turned on or off
+ arm_read_sysreg(SCTLR_EL1, old_sctlr);
+
arm_write_sysreg(SCTLR_EL1, val);
+
+ /* Check if caches were turned on or off. */
+ if (SCTLR_C_AND_M_SET(val) != SCTLR_C_AND_M_SET(old_sctlr)) {
+ /* Flush dcaches again if they were enabled before. */
+ if (SCTLR_C_AND_M_SET(old_sctlr))
+ arm_cell_dcaches_flush(this_cell(),
+ DCACHE_CLEAN_AND_INVALIDATE);
+ /* Stop tracking VM control regs. */
+ arm_read_sysreg(HCR, hcr);
+ arm_write_sysreg(HCR, hcr & ~HCR_TVM_BIT);
+ }
} else if (!(CP15_32_PERFORM_WRITE(2, 0, 0, 0) || /* TTBR0 */
CP15_32_PERFORM_WRITE(2, 0, 0, 1) || /* TTBR1 */
CP15_32_PERFORM_WRITE(2, 0, 0, 2) || /* TTBCR */
--
2.1.4

Jan Kiszka

Aug 10, 2016, 3:29:20 AM8/10/16
to jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana, Marc Zyngier, Mark Rutland
All d-cache entries related to memory that a new cell will use or that a
destroyed cell was using are irrelevant now. Invalidate them so that
nothing leaks from/to other cells.

CC: Marc Zyngier <marc.z...@arm.com>
CC: Mark Rutland <mark.r...@arm.com>
Signed-off-by: Jan Kiszka <jan.k...@siemens.com>
---
hypervisor/arch/arm/control.c | 6 ++++++
1 file changed, 6 insertions(+)

diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index f9e117d..83add4c 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -378,6 +378,12 @@ void arch_flush_cell_vcpu_caches(struct cell *cell)

void arch_config_commit(struct cell *cell_added_removed)
{
+ /*
+ * We only need to flush caches for non-root cells and can ignore this
+ * call when being invoked during setup on the root cell.
+ */
+ if (cell_added_removed && cell_added_removed != &root_cell)
+ arm_cell_dcaches_flush(cell_added_removed, DCACHE_INVALIDATE);
}

void __attribute__((noreturn)) arch_panic_stop(void)
--
2.1.4

Jan Kiszka

Aug 10, 2016, 3:29:20 AM8/10/16
to jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana
It always returns 0 now.

Signed-off-by: Jan Kiszka <jan.k...@siemens.com>
---
hypervisor/arch/arm/control.c | 5 +----
hypervisor/arch/arm/include/asm/paging.h | 2 +-
hypervisor/arch/arm/mmu_cell.c | 4 +---
hypervisor/arch/arm/setup.c | 4 +---
4 files changed, 4 insertions(+), 11 deletions(-)

diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index 245a87f..3c14c0d 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -88,16 +88,13 @@ static void arch_reset_el1(struct registers *regs)

void arch_reset_self(struct per_cpu *cpu_data)
{
- int err = 0;
unsigned long reset_address;
struct cell *cell = cpu_data->cell;
struct registers *regs = guest_regs(cpu_data);
bool is_shutdown = cpu_data->shutdown;

if (!is_shutdown)
- err = arm_paging_vcpu_init(cpu_data);
- if (err)
- printk("MMU setup failed\n");
+ arm_paging_vcpu_init(cpu_data);

/*
* We come from the IRQ handler, but we won't return there, so the IPI
diff --git a/hypervisor/arch/arm/include/asm/paging.h b/hypervisor/arch/arm/include/asm/paging.h
index f2ee398..72ae192 100644
--- a/hypervisor/arch/arm/include/asm/paging.h
+++ b/hypervisor/arch/arm/include/asm/paging.h
@@ -187,7 +187,7 @@ extern unsigned int cache_line_size;
int arm_paging_cell_init(struct cell *cell);
void arm_paging_cell_destroy(struct cell *cell);

-int arm_paging_vcpu_init(struct per_cpu *cpu_data);
+void arm_paging_vcpu_init(struct per_cpu *cpu_data);

void arm_dcaches_flush(void *addr, long size, enum dcache_flush flush);
void arm_cell_dcaches_flush(struct cell *cell, enum dcache_flush flush);
diff --git a/hypervisor/arch/arm/mmu_cell.c b/hypervisor/arch/arm/mmu_cell.c
index 1c055c6..dae7db4 100644
--- a/hypervisor/arch/arm/mmu_cell.c
+++ b/hypervisor/arch/arm/mmu_cell.c
@@ -110,7 +110,7 @@ void arm_paging_cell_destroy(struct cell *cell)
page_free(&mem_pool, cell->arch.mm.root_table, ARM_CELL_ROOT_PT_SZ);
}

-int arm_paging_vcpu_init(struct per_cpu *cpu_data)
+void arm_paging_vcpu_init(struct per_cpu *cpu_data)
{
struct cell *cell = cpu_data->cell;
unsigned long cell_table = paging_hvirt2phys(cell->arch.mm.root_table);
@@ -131,8 +131,6 @@ int arm_paging_vcpu_init(struct per_cpu *cpu_data)
* means that this unconditionnal flush is redundant on master CPU.
*/
arch_cpu_tlb_flush(cpu_data);
-
- return 0;
}

void arch_cpu_tlb_flush(struct per_cpu *cpu_data)
diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index c1bf0fd..6ee8c7c 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -80,9 +80,7 @@ int arch_cpu_init(struct per_cpu *cpu_data)
/* Setup guest traps */
arm_write_sysreg(HCR, hcr);

- err = arm_paging_vcpu_init(cpu_data);
- if (err)
- return err;
+ arm_paging_vcpu_init(cpu_data);

err = irqchip_init();
if (err)
--
2.1.4

Jan Kiszka

Aug 10, 2016, 3:29:21 AM8/10/16
to jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana
Prepare for pointing a VCPU to paging structures different from those of
its owning cell.

Signed-off-by: Jan Kiszka <jan.k...@siemens.com>
---
hypervisor/arch/arm/control.c | 2 +-
hypervisor/arch/arm/include/asm/paging.h | 4 ++--
hypervisor/arch/arm/mmu_cell.c | 10 ++++------
hypervisor/arch/arm/setup.c | 2 +-
4 files changed, 8 insertions(+), 10 deletions(-)

diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index 78c33ff..3f5703c 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -92,7 +92,7 @@ void arch_reset_self(struct per_cpu *cpu_data)
struct cell *cell = cpu_data->cell;
struct registers *regs = guest_regs(cpu_data);

- arm_paging_vcpu_init(cpu_data);
+ arm_paging_vcpu_init(&cell->arch.mm);

/*
* We come from the IRQ handler, but we won't return there, so the IPI
diff --git a/hypervisor/arch/arm/include/asm/paging.h b/hypervisor/arch/arm/include/asm/paging.h
index fc4103b..1d6663e 100644
--- a/hypervisor/arch/arm/include/asm/paging.h
+++ b/hypervisor/arch/arm/include/asm/paging.h
@@ -171,7 +171,7 @@
#ifndef __ASSEMBLY__

struct cell;
-struct per_cpu;
+struct paging_structures;

typedef u64 *pt_entry_t;

@@ -187,7 +187,7 @@ extern unsigned int cache_line_size;
int arm_paging_cell_init(struct cell *cell);
void arm_paging_cell_destroy(struct cell *cell);

-void arm_paging_vcpu_init(struct per_cpu *cpu_data);
+void arm_paging_vcpu_init(struct paging_structures *pg_structs);

void arm_dcaches_flush(void *addr, long size, enum dcache_flush flush);
void arm_cell_dcaches_flush(struct cell *cell, enum dcache_flush flush);
diff --git a/hypervisor/arch/arm/mmu_cell.c b/hypervisor/arch/arm/mmu_cell.c
index c9bd683..d26c9d3 100644
--- a/hypervisor/arch/arm/mmu_cell.c
+++ b/hypervisor/arch/arm/mmu_cell.c
@@ -110,18 +110,16 @@ void arm_paging_cell_destroy(struct cell *cell)
page_free(&mem_pool, cell->arch.mm.root_table, ARM_CELL_ROOT_PT_SZ);
}

-void arm_paging_vcpu_init(struct per_cpu *cpu_data)
+void arm_paging_vcpu_init(struct paging_structures *pg_structs)
{
- struct cell *cell = cpu_data->cell;
- unsigned long cell_table = paging_hvirt2phys(cell->arch.mm.root_table);
+ unsigned long cell_table = paging_hvirt2phys(pg_structs->root_table);
u64 vttbr = 0;
- u32 vtcr = VTCR_CELL;

- vttbr |= (u64)cell->id << VTTBR_VMID_SHIFT;
+ vttbr |= (u64)this_cell()->id << VTTBR_VMID_SHIFT;
vttbr |= (u64)(cell_table & TTBR_MASK);

arm_write_sysreg(VTTBR_EL2, vttbr);
- arm_write_sysreg(VTCR_EL2, vtcr);
+ arm_write_sysreg(VTCR_EL2, VTCR_CELL);

/* Ensure that the new VMID is present before flushing the caches */
isb();
diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index 73548a7..8f9edca 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -80,7 +80,7 @@ int arch_cpu_init(struct per_cpu *cpu_data)
/* Setup guest traps */
arm_write_sysreg(HCR, hcr);

- arm_paging_vcpu_init(cpu_data);
+ arm_paging_vcpu_init(&root_cell.arch.mm);

Jan Kiszka

Aug 10, 2016, 3:29:21 AM8/10/16
to jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana
This prepares ARM for parking CPUs in guest mode instead of host mode.
It will allow reusing much of the x86 control logic for managing CPUs
that are in emulated shutdown or in a waiting state before a cell starts.

In fact, we could make this a generic pattern later on, also converting
the Intel-specific version to it.
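The parking page itself is nothing more than a pre-assembled
two-instruction idle loop (see the setup.c hunk below):

	1:	wfi
		b	1b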

Signed-off-by: Jan Kiszka <jan.k...@siemens.com>
---
hypervisor/arch/arm/include/asm/control.h | 3 +++
hypervisor/arch/arm/setup.c | 27 ++++++++++++++++++++++++---
2 files changed, 27 insertions(+), 3 deletions(-)

diff --git a/hypervisor/arch/arm/include/asm/control.h b/hypervisor/arch/arm/include/asm/control.h
index 930f37e..0727d90 100644
--- a/hypervisor/arch/arm/include/asm/control.h
+++ b/hypervisor/arch/arm/include/asm/control.h
@@ -22,8 +22,11 @@
#ifndef __ASSEMBLY__

#include <jailhouse/cell.h>
+#include <jailhouse/paging.h>
#include <asm/percpu.h>

+extern struct paging_structures parking_mm;
+
void arch_cpu_dcaches_flush(unsigned int action);

void arch_handle_sgi(struct per_cpu *cpu_data, u32 irqn);
diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index 8f9edca..51af423 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -1,7 +1,7 @@
/*
* Jailhouse, a Linux-based partitioning hypervisor
*
- * Copyright (c) Siemens AG, 2013
+ * Copyright (c) Siemens AG, 2013-2016
*
* Authors:
* Jan Kiszka <jan.k...@siemens.com>
@@ -20,7 +20,13 @@
#include <jailhouse/processor.h>
#include <jailhouse/string.h>

+static u32 __attribute__((aligned(PAGE_SIZE))) parking_code[PAGE_SIZE / 4] = {
+ 0xe320f003, /* 1: wfi */
+ 0xeafffffd, /* b 1b */
+};
+
unsigned int cache_line_size;
+struct paging_structures parking_mm;

static int arch_check_features(void)
{
@@ -41,9 +47,24 @@ static int arch_check_features(void)

int arch_init_early(void)
{
- int err = 0;
+ int err;

- if ((err = arch_check_features()) != 0)
+ err = arch_check_features();
+ if (err)
+ return err;
+
+ parking_mm.root_paging = cell_paging;
+ parking_mm.root_table =
+ page_alloc_aligned(&mem_pool, ARM_CELL_ROOT_PT_SZ);
+ if (!parking_mm.root_table)
+ return -ENOMEM;
+
+ err = paging_create(&parking_mm, paging_hvirt2phys(parking_code),
+ PAGE_SIZE, 0,
+ (PTE_FLAG_VALID | PTE_ACCESS_FLAG |
+ S2_PTE_ACCESS_RO | S2_PTE_FLAG_NORMAL),
+ PAGING_COHERENT);
+ if (err)
return err;

return arm_paging_cell_init(&root_cell);
--
2.1.4

Jan Kiszka

unread,
Aug 10, 2016, 3:29:21 AM8/10/16
to jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana
Prepare for a cleaner setup and cleanup of the SMP subsystem: introduce
a platform init function called once during setup as well as per-cell
init/exit functions. Those are stubbed for PSCI-based platforms and only
need to be implemented for vexpress so far.
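
A sketch of the stubbing pattern (taken from the weak defaults this
patch introduces in smp.c; platforms like vexpress override them with
strong definitions):

/* default for PSCI-based platforms - nothing to set up per cell */
void __attribute__((weak)) smp_cell_init(struct cell *cell)
{
}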

We emulate the previous invocation of the init op for the root cell
during setup via smp_init. Note that init was and is only called for the
root cell (a highly confusing interface).

One difference of the new code for vexpress is that we will now expose
the mailbox interface for kicking off secondary CPUs also to non-root
cells. So far we only offered PSCI emulation, but there is no point in
hiding the FLAGSSET register in an emulated form from them - they will
now be able to choose the wake-up method.
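
A hypothetical guest-side wake-up through this mailbox could then look
like this (illustration only - the constants follow the platform header
and the patch below, the function name is made up):

#define SYSREGS_BASE	0x1c010000
#define FLAGSSET	0x30

static void wake_secondaries(u32 entry)
{
	/* publish the entry point, then wake the spinning CPUs */
	*(volatile u32 *)(SYSREGS_BASE + FLAGSSET) = entry;
	asm volatile("dsb ish; sev" : : : "memory");
}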

Signed-off-by: Jan Kiszka <jan.k...@siemens.com>
---
hypervisor/arch/arm/control.c | 3 ++
hypervisor/arch/arm/include/asm/platform.h | 2 +-
hypervisor/arch/arm/include/asm/smp.h | 8 +++-
hypervisor/arch/arm/setup.c | 2 +-
hypervisor/arch/arm/smp-sun7i.c | 1 -
hypervisor/arch/arm/smp-tegra124.c | 1 -
hypervisor/arch/arm/smp-vexpress.c | 66 ++++++++++++++++++------------
hypervisor/arch/arm/smp.c | 18 +++++++-
8 files changed, 68 insertions(+), 33 deletions(-)

diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index 577306f..8bccdfa 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -489,6 +489,7 @@ int arch_cell_create(struct cell *cell)
}

register_smp_ops(cell);
+ smp_cell_init(cell);

return 0;
}
@@ -508,6 +509,8 @@ void arch_cell_destroy(struct cell *cell)
percpu->cpu_on_entry = PSCI_INVALID_ADDRESS;
}

+ smp_cell_exit(cell);
+
irqchip_cell_exit(cell);

arm_paging_cell_destroy(cell);
diff --git a/hypervisor/arch/arm/include/asm/platform.h b/hypervisor/arch/arm/include/asm/platform.h
index 1911ee0..5ae7882 100644
--- a/hypervisor/arch/arm/include/asm/platform.h
+++ b/hypervisor/arch/arm/include/asm/platform.h
@@ -48,7 +48,7 @@
# endif /* GIC */

# define MAINTENANCE_IRQ 25
-# define SYSREGS_BASE 0x1c010000
+# define SYSREGS_BASE ((void *)0x1c010000)

#endif /* CONFIG_MACH_VEXPRESS */

diff --git a/hypervisor/arch/arm/include/asm/smp.h b/hypervisor/arch/arm/include/asm/smp.h
index 00f1302..173908a 100644
--- a/hypervisor/arch/arm/include/asm/smp.h
+++ b/hypervisor/arch/arm/include/asm/smp.h
@@ -2,9 +2,11 @@
* Jailhouse, a Linux-based partitioning hypervisor
*
* Copyright (c) ARM Limited, 2014
+ * Copyright (c) Siemens AG, 2016
*
* Authors:
* Jean-Philippe Brucker <jean-phili...@arm.com>
+ * Jan Kiszka <jan.k...@siemens.com>
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
@@ -20,7 +22,6 @@ struct per_cpu;
struct cell;

struct smp_ops {
- int (*init)(struct cell *cell);
/* Returns an address */
unsigned long (*cpu_spin)(struct per_cpu *cpu_data);
};
@@ -30,5 +31,10 @@ extern const unsigned int smp_mmio_regions;
unsigned long arch_smp_spin(struct per_cpu *cpu_data, struct smp_ops *ops);
void register_smp_ops(struct cell *cell);

+int smp_init(void);
+
+void smp_cell_init(struct cell *cell);
+void smp_cell_exit(struct cell *cell);
+
#endif /* !__ASSEMBLY__ */
#endif /* !JAILHOUSE_ASM_SMP_H_ */
diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index 51af423..2e2bee2 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -124,7 +124,7 @@ int arch_init_late(void)
/* Platform-specific SMP operations */
register_smp_ops(&root_cell);

- err = root_cell.arch.smp->init(&root_cell);
+ err = smp_init();
if (err)
return err;

diff --git a/hypervisor/arch/arm/smp-sun7i.c b/hypervisor/arch/arm/smp-sun7i.c
index 0c7cd4f..6b03d5c 100644
--- a/hypervisor/arch/arm/smp-sun7i.c
+++ b/hypervisor/arch/arm/smp-sun7i.c
@@ -15,7 +15,6 @@
#include <asm/smp.h>

static struct smp_ops sun7i_smp_ops = {
- .init = psci_cell_init,
.cpu_spin = psci_emulate_spin,
};

diff --git a/hypervisor/arch/arm/smp-tegra124.c b/hypervisor/arch/arm/smp-tegra124.c
index 86017d3..63555da 100644
--- a/hypervisor/arch/arm/smp-tegra124.c
+++ b/hypervisor/arch/arm/smp-tegra124.c
@@ -15,7 +15,6 @@
#include <asm/smp.h>

static struct smp_ops tegra124_smp_ops = {
- .init = psci_cell_init,
.cpu_spin = psci_emulate_spin,
};

diff --git a/hypervisor/arch/arm/smp-vexpress.c b/hypervisor/arch/arm/smp-vexpress.c
index 07875fa..1750a41 100644
--- a/hypervisor/arch/arm/smp-vexpress.c
+++ b/hypervisor/arch/arm/smp-vexpress.c
@@ -2,33 +2,35 @@
* Jailhouse, a Linux-based partitioning hypervisor
*
* Copyright (c) ARM Limited, 2014
+ * Copyright (c) Siemens AG, 2016
*
* Authors:
* Jean-Philippe Brucker <jean-phili...@arm.com>
+ * Jan Kiszka <jan.k...@siemens.com>
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*/

#include <jailhouse/control.h>
-#include <jailhouse/printk.h>
-#include <jailhouse/processor.h>
-#include <asm/irqchip.h>
-#include <asm/paging.h>
+#include <jailhouse/mmio.h>
+#include <asm/control.h>
#include <asm/platform.h>
#include <asm/setup.h>
#include <asm/smp.h>

+#define VEXPRESS_FLAGSSET 0x30
+
const unsigned int smp_mmio_regions = 1;

-static const unsigned long hotplug_mbox = SYSREGS_BASE + 0x30;
+static unsigned long root_entry;

static enum mmio_result smp_mmio(void *arg, struct mmio_access *mmio)
{
struct per_cpu *cpu_data = this_cpu_data();
unsigned int cpu;

- if (mmio->address != (hotplug_mbox & PAGE_OFFS_MASK) || !mmio->is_write)
+ if (mmio->address != VEXPRESS_FLAGSSET || !mmio->is_write)
/* Ignore all other accesses */
return MMIO_HANDLED;

@@ -40,34 +42,16 @@ static enum mmio_result smp_mmio(void *arg, struct mmio_access *mmio)
return MMIO_HANDLED;
}

-static int smp_init(struct cell *cell)
-{
- void *mbox_page = (void *)(hotplug_mbox & PAGE_MASK);
- int err;
-
- /* Map the mailbox page */
- err = arch_map_device(mbox_page, mbox_page, PAGE_SIZE);
- if (err) {
- printk("Unable to map spin mbox page\n");
- return err;
- }
-
- mmio_region_register(cell, (unsigned long)mbox_page, PAGE_SIZE,
- smp_mmio, NULL);
- return 0;
-}
-
static unsigned long smp_spin(struct per_cpu *cpu_data)
{
/*
* This is super-dodgy: we assume nothing wrote to the flag register
* since the kernel called smp_prepare_cpus, at initialisation.
*/
- return mmio_read32((void *)hotplug_mbox);
+ return root_entry;
}

static struct smp_ops vexpress_smp_ops = {
- .init = smp_init,
.cpu_spin = smp_spin,
};

@@ -76,7 +60,6 @@ static struct smp_ops vexpress_smp_ops = {
* an access to the mbox from the primary.
*/
static struct smp_ops vexpress_guest_smp_ops = {
- .init = psci_cell_init,
.cpu_spin = psci_emulate_spin,
};

@@ -92,3 +75,34 @@ void register_smp_ops(struct cell *cell)
else
cell->arch.smp = &vexpress_guest_smp_ops;
}
+
+int smp_init(void)
+{
+ int err;
+
+ err = arch_map_device(SYSREGS_BASE, SYSREGS_BASE, PAGE_SIZE);
+ if (err)
+ return err;
+ root_entry = mmio_read32(SYSREGS_BASE + VEXPRESS_FLAGSSET);
+ arch_unmap_device(SYSREGS_BASE, PAGE_SIZE);
+
+ smp_cell_init(&root_cell);
+
+ return 0;
+}
+
+void smp_cell_init(struct cell *cell)
+{
+ mmio_region_register(cell, (unsigned long)SYSREGS_BASE, PAGE_SIZE,
+ smp_mmio, NULL);
+}
+
+void smp_cell_exit(struct cell *cell)
+{
+ unsigned int cpu;
+
+ for_each_cpu(cpu, cell->cpu_set) {
+ per_cpu(cpu)->cpu_on_entry = root_entry;
+ per_cpu(cpu)->cpu_on_context = 0;
+ }
+}
diff --git a/hypervisor/arch/arm/smp.c b/hypervisor/arch/arm/smp.c
index e7f3fef..1b43168 100644
--- a/hypervisor/arch/arm/smp.c
+++ b/hypervisor/arch/arm/smp.c
@@ -2,17 +2,18 @@
* Jailhouse, a Linux-based partitioning hypervisor
*
* Copyright (c) ARM Limited, 2014
+ * Copyright (c) Siemens AG, 2016
*
* Authors:
* Jean-Philippe Brucker <jean-phili...@arm.com>
+ * Jan Kiszka <jan.k...@siemens.com>
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*/

-#include <jailhouse/mmio.h>
+#include <asm/percpu.h>
#include <asm/smp.h>
-#include <asm/traps.h>

const unsigned int __attribute__((weak)) smp_mmio_regions;

@@ -29,3 +30,16 @@ unsigned long arch_smp_spin(struct per_cpu *cpu_data, struct smp_ops *ops)

return ops->cpu_spin(cpu_data);
}
+
+int __attribute__((weak)) smp_init(void)
+{
+ return psci_cell_init(&root_cell);
+}
+
+void __attribute__((weak)) smp_cell_init(struct cell *cell)
+{
+}
+
+void __attribute__((weak)) smp_cell_exit(struct cell *cell)
+{
+}
--
2.1.4

Jan Kiszka

unread,
Aug 10, 2016, 3:29:21 AM8/10/16
to jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana
The existing CPU management that provides reset, parking and PSCI
emulation is broken (full of races) and unfortunately unfixable. The
only option found was a complete rewrite, switching to the logic that
x86 also uses.

This prepares the switch by introducing the key services first. Those
are the event handling function check_events and additional helpers.
arm_cpu_park uses the new parking page to stop a CPU in guest mode.
cpu_reset is largely identical to arch_reset_el1 and partly to
arch_reset_self and will replace both. All services remain unused for
now (SGI_EVENT is dispatched but not yet raised).

One key difference of the new logic is that the tricky-to-understand
jumps from the PSCI suspension loop to different resumption points will
be gone. Instead, events signaled via a kick plus various flags in the
per_cpu data structure decide what the CPU will do next. Processing of
those flags takes place in check_events.

The flags are now generally protected by a spinlock, resolving many of
the subtle wake-up races of the old code.
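
As a minimal sketch (simplified - the names follow the patch below,
cell-set iteration and error handling omitted), signaling an event to a
remote CPU boils down to:

static void sketch_set_event(struct per_cpu *target, unsigned int cpu)
{
	spin_lock(&target->control_lock);
	target->reset = true;		/* or ->park / ->suspend_cpu */
	spin_unlock(&target->control_lock);

	/* SGI_EVENT makes the target CPU run check_events() */
	arm_cpu_kick(cpu);
}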

Signed-off-by: Jan Kiszka <jan.k...@siemens.com>
---
hypervisor/arch/arm/control.c | 174 ++++++++++++++++++++++++++++++
hypervisor/arch/arm/include/asm/control.h | 6 +-
hypervisor/arch/arm/include/asm/percpu.h | 27 +++++
3 files changed, 206 insertions(+), 1 deletion(-)

diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index 3f5703c..6f50afe 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -2,9 +2,11 @@
* Jailhouse, a Linux-based partitioning hypervisor
*
* Copyright (c) ARM Limited, 2014
+ * Copyright (c) Siemens AG, 2016
*
* Authors:
* Jean-Philippe Brucker <jean-phili...@arm.com>
+ * Jan Kiszka <jan.k...@siemens.com>
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
@@ -120,6 +122,86 @@ void arch_reset_self(struct per_cpu *cpu_data)
vmreturn(regs);
}

+static void cpu_reset(void)
+{
+ struct per_cpu *cpu_data = this_cpu_data();
+ struct cell *cell = cpu_data->cell;
+ struct registers *regs = guest_regs(cpu_data);
+ u32 sctlr;
+
+ /* Wipe all banked and usr regs */
+ memset(regs, 0, sizeof(struct registers));
+
+ arm_write_banked_reg(SP_usr, 0);
+ arm_write_banked_reg(SP_svc, 0);
+ arm_write_banked_reg(SP_abt, 0);
+ arm_write_banked_reg(SP_und, 0);
+ arm_write_banked_reg(SP_irq, 0);
+ arm_write_banked_reg(SP_fiq, 0);
+ arm_write_banked_reg(LR_svc, 0);
+ arm_write_banked_reg(LR_abt, 0);
+ arm_write_banked_reg(LR_und, 0);
+ arm_write_banked_reg(LR_irq, 0);
+ arm_write_banked_reg(LR_fiq, 0);
+ arm_write_banked_reg(R8_fiq, 0);
+ arm_write_banked_reg(R9_fiq, 0);
+ arm_write_banked_reg(R10_fiq, 0);
+ arm_write_banked_reg(R11_fiq, 0);
+ arm_write_banked_reg(R12_fiq, 0);
+ arm_write_banked_reg(SPSR_svc, 0);
+ arm_write_banked_reg(SPSR_abt, 0);
+ arm_write_banked_reg(SPSR_und, 0);
+ arm_write_banked_reg(SPSR_irq, 0);
+ arm_write_banked_reg(SPSR_fiq, 0);
+
+ /* Wipe the system registers */
+ arm_read_sysreg(SCTLR_EL1, sctlr);
+ sctlr = sctlr & ~SCTLR_MASK;
+ arm_write_sysreg(SCTLR_EL1, sctlr);
+ arm_write_sysreg(CPACR_EL1, 0);
+ arm_write_sysreg(CONTEXTIDR_EL1, 0);
+ arm_write_sysreg(PAR_EL1, 0);
+ arm_write_sysreg(TTBR0_EL1, 0);
+ arm_write_sysreg(TTBR1_EL1, 0);
+ arm_write_sysreg(CSSELR_EL1, 0);
+
+ arm_write_sysreg(CNTKCTL_EL1, 0);
+ arm_write_sysreg(CNTP_CTL_EL0, 0);
+ arm_write_sysreg(CNTP_CVAL_EL0, 0);
+ arm_write_sysreg(CNTV_CTL_EL0, 0);
+ arm_write_sysreg(CNTV_CVAL_EL0, 0);
+
+ /* AArch32 specific */
+ arm_write_sysreg(TTBCR, 0);
+ arm_write_sysreg(DACR, 0);
+ arm_write_sysreg(VBAR, 0);
+ arm_write_sysreg(DFSR, 0);
+ arm_write_sysreg(DFAR, 0);
+ arm_write_sysreg(IFSR, 0);
+ arm_write_sysreg(IFAR, 0);
+ arm_write_sysreg(ADFSR, 0);
+ arm_write_sysreg(AIFSR, 0);
+ arm_write_sysreg(MAIR0, 0);
+ arm_write_sysreg(MAIR1, 0);
+ arm_write_sysreg(AMAIR0, 0);
+ arm_write_sysreg(AMAIR1, 0);
+ arm_write_sysreg(TPIDRURW, 0);
+ arm_write_sysreg(TPIDRURO, 0);
+ arm_write_sysreg(TPIDRPRW, 0);
+
+ arm_write_banked_reg(SPSR_hyp, RESET_PSR);
+ arm_write_banked_reg(ELR_hyp, cpu_data->cpu_on_entry);
+
+ /* transfer the context that may have been passed to PSCI_CPU_ON */
+ regs->usr[1] = cpu_data->cpu_on_context;
+
+ arm_write_sysreg(VMPIDR_EL2, cpu_data->virt_id | MPIDR_MP_BIT);
+
+ arm_paging_vcpu_init(&cell->arch.mm);
+
+ irqchip_cpu_reset(cpu_data);
+}
+
static void arch_suspend_self(struct per_cpu *cpu_data)
{
psci_suspend(cpu_data);
@@ -131,6 +213,25 @@ static void arch_suspend_self(struct per_cpu *cpu_data)
}
}

+static void enter_cpu_off(struct per_cpu *cpu_data)
+{
+ cpu_data->park = false;
+ cpu_data->wait_for_poweron = true;
+}
+
+void arm_cpu_park(void)
+{
+ struct per_cpu *cpu_data = this_cpu_data();
+
+ spin_lock(&cpu_data->control_lock);
+ enter_cpu_off(cpu_data);
+ spin_unlock(&cpu_data->control_lock);
+
+ cpu_reset();
+ arm_write_banked_reg(ELR_hyp, 0);
+ arm_paging_vcpu_init(&parking_mm);
+}
+
static void arch_dump_exit(struct registers *regs, const char *reason)
{
unsigned long pc;
@@ -196,6 +297,15 @@ struct registers* arch_handle_exit(struct per_cpu *cpu_data,
return regs;
}

+void arm_cpu_kick(unsigned int cpu_id)
+{
+ struct sgi sgi = {};
+
+ sgi.targets = 1 << cpu_id;
+ sgi.id = SGI_EVENT;
+ irqchip_send_sgi(&sgi);
+}
+
/* CPU must be stopped */
void arch_resume_cpu(unsigned int cpu_id)
{
@@ -246,6 +356,62 @@ void arch_suspend_cpu(unsigned int cpu_id)
psci_wait_cpu_stopped(cpu_id);
}

+static void check_events(struct per_cpu *cpu_data)
+{
+ bool reset = false;
+
+ spin_lock(&cpu_data->control_lock);
+
+ do {
+ if (cpu_data->suspend_cpu)
+ cpu_data->cpu_suspended = true;
+
+ spin_unlock(&cpu_data->control_lock);
+
+ while (cpu_data->suspend_cpu)
+ cpu_relax();
+
+ spin_lock(&cpu_data->control_lock);
+
+ if (!cpu_data->suspend_cpu) {
+ cpu_data->cpu_suspended = false;
+
+ if (cpu_data->park) {
+ enter_cpu_off(cpu_data);
+ break;
+ }
+
+ if (cpu_data->reset) {
+ cpu_data->reset = false;
+ if (cpu_data->cpu_on_entry !=
+ PSCI_INVALID_ADDRESS) {
+ cpu_data->wait_for_poweron = false;
+ reset = true;
+ } else {
+ enter_cpu_off(cpu_data);
+ }
+ break;
+ }
+ }
+ } while (cpu_data->suspend_cpu);
+
+ if (cpu_data->flush_vcpu_caches) {
+ cpu_data->flush_vcpu_caches = false;
+ arm_paging_vcpu_flush_tlbs();
+ }
+
+ spin_unlock(&cpu_data->control_lock);
+
+ /*
+ * wait_for_poweron is only modified on this CPU, so checking outside of
+ * control_lock is fine.
+ */
+ if (cpu_data->wait_for_poweron)
+ arm_cpu_park();
+ else if (reset)
+ cpu_reset();
+}
+
void arch_handle_sgi(struct per_cpu *cpu_data, u32 irqn)
{
cpu_data->stats[JAILHOUSE_CPU_STAT_VMEXITS_MANAGEMENT]++;
@@ -257,6 +423,9 @@ void arch_handle_sgi(struct per_cpu *cpu_data, u32 irqn)
case SGI_CPU_OFF:
arch_suspend_self(cpu_data);
break;
+ case SGI_EVENT:
+ check_events(cpu_data);
+ break;
default:
printk("WARN: unknown SGI received %d\n", irqn);
}
@@ -309,6 +478,8 @@ int arch_cell_create(struct cell *cell)
* the cell set
*/
for_each_cpu(cpu, cell->cpu_set) {
+ per_cpu(cpu)->cpu_on_entry =
+ (virt_id == 0) ? 0 : PSCI_INVALID_ADDRESS;
per_cpu(cpu)->virt_id = virt_id;
virt_id++;
}
@@ -332,9 +503,12 @@ void arch_cell_destroy(struct cell *cell)

for_each_cpu(cpu, cell->cpu_set) {
percpu = per_cpu(cpu);
+
/* Re-assign the physical IDs for the root cell */
percpu->virt_id = percpu->cpu_id;
arch_reset_cpu(cpu);
+
+ percpu->cpu_on_entry = PSCI_INVALID_ADDRESS;
}

irqchip_cell_exit(cell);
diff --git a/hypervisor/arch/arm/include/asm/control.h b/hypervisor/arch/arm/include/asm/control.h
index 0727d90..d9346c7 100644
--- a/hypervisor/arch/arm/include/asm/control.h
+++ b/hypervisor/arch/arm/include/asm/control.h
@@ -14,7 +14,8 @@
#define _JAILHOUSE_ASM_CONTROL_H

#define SGI_INJECT 0
-#define SGI_CPU_OFF 1
+#define SGI_EVENT 1
+#define SGI_CPU_OFF 2

#define CACHES_CLEAN 0
#define CACHES_CLEAN_INVALIDATE 1
@@ -41,6 +42,9 @@ unsigned int arm_cpu_by_mpidr(struct cell *cell, unsigned long mpidr);
void __attribute__((noreturn)) vmreturn(struct registers *guest_regs);
void __attribute__((noreturn)) arch_shutdown_mmu(struct per_cpu *cpu_data);

+void arm_cpu_park(void);
+void arm_cpu_kick(unsigned int cpu_id);
+
#endif /* !__ASSEMBLY__ */

#endif /* !_JAILHOUSE_ASM_CONTROL_H */
diff --git a/hypervisor/arch/arm/include/asm/percpu.h b/hypervisor/arch/arm/include/asm/percpu.h
index 4220f9e..fc97002 100644
--- a/hypervisor/arch/arm/include/asm/percpu.h
+++ b/hypervisor/arch/arm/include/asm/percpu.h
@@ -59,7 +59,34 @@ struct per_cpu {
__attribute__((aligned(8))) struct psci_mbox psci_mbox;
struct psci_mbox guest_mbox;

+ /**
+ * Lock protecting CPU state changes done for control tasks.
+ *
+ * The lock protects the following fields (unless CPU is suspended):
+ * @li per_cpu::suspend_cpu
+ * @li per_cpu::cpu_suspended (except for spinning on it to become
+ * true)
+ * @li per_cpu::flush_vcpu_caches
+ */
+ spinlock_t control_lock;
+
+ /** Set to true for instructing the CPU to suspend. */
+ volatile bool suspend_cpu;
+ /** True if CPU is waiting for power-on. */
+ volatile bool wait_for_poweron;
+ /** True if CPU is suspended. */
+ volatile bool cpu_suspended;
+ /** Set to true for pending reset. */
+ bool reset;
+ /** Set to true for pending park. */
+ bool park;
+ /** Set to true for a pending TLB flush for the paging layer that does
+ * host physical <-> guest physical memory mappings. */
bool flush_vcpu_caches;
+
+ unsigned long cpu_on_entry;
+ unsigned long cpu_on_context;
+
int shutdown_state;
unsigned long mpidr;
bool failed;
--
2.1.4

Jan Kiszka

unread,
Aug 10, 2016, 3:29:22 AM8/10/16
to jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana
This patch became larger than usual, but I haven't found a way to make
it smaller. That is because it replaces the old, racy CPU
onlining/offlining/reset logic with the one derived from x86.

Along with this comes a simplification of the smp interface: we only
have to cater to those few SoCs without proper PSCI support, so it is
no longer required to add stubs for PSCI-based platforms. Consequently,
those for TK1 and Allwinner A20 are removed.

Now obsolete PSCI fragments will be purged in a separate patch.

Signed-off-by: Jan Kiszka <jan.k...@siemens.com>
---
hypervisor/arch/arm/Makefile | 2 -
hypervisor/arch/arm/control.c | 177 +++++-------------------------
hypervisor/arch/arm/include/asm/cell.h | 4 -
hypervisor/arch/arm/include/asm/control.h | 4 +-
hypervisor/arch/arm/include/asm/smp.h | 13 ---
hypervisor/arch/arm/irqchip.c | 2 +-
hypervisor/arch/arm/psci.c | 67 +++++------
hypervisor/arch/arm/setup.c | 5 +-
hypervisor/arch/arm/smp-sun7i.c | 24 ----
hypervisor/arch/arm/smp-tegra124.c | 24 ----
hypervisor/arch/arm/smp-vexpress.c | 51 +++------
hypervisor/arch/arm/smp.c | 17 +--
12 files changed, 76 insertions(+), 314 deletions(-)
delete mode 100644 hypervisor/arch/arm/smp-sun7i.c
delete mode 100644 hypervisor/arch/arm/smp-tegra124.c

diff --git a/hypervisor/arch/arm/Makefile b/hypervisor/arch/arm/Makefile
index 6a156a3..5e68b1a 100644
--- a/hypervisor/arch/arm/Makefile
+++ b/hypervisor/arch/arm/Makefile
@@ -27,5 +27,3 @@ obj-$(CONFIG_SERIAL_AMBA_PL011) += dbg-write-pl011.o
obj-$(CONFIG_SERIAL_8250_DW) += uart-8250-dw.o
obj-$(CONFIG_SERIAL_TEGRA) += uart-tegra.o
obj-$(CONFIG_MACH_VEXPRESS) += smp-vexpress.o
-obj-$(CONFIG_MACH_SUN7I) += smp-sun7i.o
-obj-$(CONFIG_MACH_TEGRA124) += smp-tegra124.o
diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index 8bccdfa..5d49116 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -20,108 +20,10 @@
#include <asm/irqchip.h>
#include <asm/platform.h>
#include <asm/processor.h>
+#include <asm/smp.h>
#include <asm/sysregs.h>
#include <asm/traps.h>

-static void arch_reset_el1(struct registers *regs)
-{
- u32 sctlr;
-
- /* Wipe all banked and usr regs */
- memset(regs, 0, sizeof(struct registers));
-
- arm_write_banked_reg(SP_usr, 0);
- arm_write_banked_reg(SP_svc, 0);
- arm_write_banked_reg(SP_abt, 0);
- arm_write_banked_reg(SP_und, 0);
- arm_write_banked_reg(SP_irq, 0);
- arm_write_banked_reg(SP_fiq, 0);
- arm_write_banked_reg(LR_svc, 0);
- arm_write_banked_reg(LR_abt, 0);
- arm_write_banked_reg(LR_und, 0);
- arm_write_banked_reg(LR_irq, 0);
- arm_write_banked_reg(LR_fiq, 0);
- arm_write_banked_reg(R8_fiq, 0);
- arm_write_banked_reg(R9_fiq, 0);
- arm_write_banked_reg(R10_fiq, 0);
- arm_write_banked_reg(R11_fiq, 0);
- arm_write_banked_reg(R12_fiq, 0);
- arm_write_banked_reg(SPSR_svc, 0);
- arm_write_banked_reg(SPSR_abt, 0);
- arm_write_banked_reg(SPSR_und, 0);
- arm_write_banked_reg(SPSR_irq, 0);
- arm_write_banked_reg(SPSR_fiq, 0);
-
- /* Wipe the system registers */
- arm_read_sysreg(SCTLR_EL1, sctlr);
- sctlr = sctlr & ~SCTLR_MASK;
- arm_write_sysreg(SCTLR_EL1, sctlr);
- arm_write_sysreg(CPACR_EL1, 0);
- arm_write_sysreg(CONTEXTIDR_EL1, 0);
- arm_write_sysreg(PAR_EL1, 0);
- arm_write_sysreg(TTBR0_EL1, 0);
- arm_write_sysreg(TTBR1_EL1, 0);
- arm_write_sysreg(CSSELR_EL1, 0);
-
- arm_write_sysreg(CNTKCTL_EL1, 0);
- arm_write_sysreg(CNTP_CTL_EL0, 0);
- arm_write_sysreg(CNTP_CVAL_EL0, 0);
- arm_write_sysreg(CNTV_CTL_EL0, 0);
- arm_write_sysreg(CNTV_CVAL_EL0, 0);
-
- /* AArch32 specific */
- arm_write_sysreg(TTBCR, 0);
- arm_write_sysreg(DACR, 0);
- arm_write_sysreg(VBAR, 0);
- arm_write_sysreg(DFSR, 0);
- arm_write_sysreg(DFAR, 0);
- arm_write_sysreg(IFSR, 0);
- arm_write_sysreg(IFAR, 0);
- arm_write_sysreg(ADFSR, 0);
- arm_write_sysreg(AIFSR, 0);
- arm_write_sysreg(MAIR0, 0);
- arm_write_sysreg(MAIR1, 0);
- arm_write_sysreg(AMAIR0, 0);
- arm_write_sysreg(AMAIR1, 0);
- arm_write_sysreg(TPIDRURW, 0);
- arm_write_sysreg(TPIDRURO, 0);
- arm_write_sysreg(TPIDRPRW, 0);
-}
-
-void arch_reset_self(struct per_cpu *cpu_data)
-{
- unsigned long reset_address;
- struct cell *cell = cpu_data->cell;
- struct registers *regs = guest_regs(cpu_data);
-
- arm_paging_vcpu_init(&cell->arch.mm);
-
- /*
- * We come from the IRQ handler, but we won't return there, so the IPI
- * is deactivated here.
- */
- irqchip_eoi_irq(SGI_CPU_OFF, true);
-
- irqchip_cpu_reset(cpu_data);
-
- /* Wait for the driver to call cpu_up */
- if (cell == &root_cell)
- reset_address = arch_smp_spin(cpu_data, root_cell.arch.smp);
- else
- reset_address = arch_smp_spin(cpu_data, cell->arch.smp);
-
- /* Set the new MPIDR */
- arm_write_sysreg(VMPIDR_EL2, cpu_data->virt_id | MPIDR_MP_BIT);
-
- /* Restore an empty context */
- arch_reset_el1(regs);
-
- arm_write_banked_reg(ELR_hyp, reset_address);
- arm_write_banked_reg(SPSR_hyp, RESET_PSR);
-
- vmreturn(regs);
-}
-
static void cpu_reset(void)
{
struct per_cpu *cpu_data = this_cpu_data();
@@ -202,17 +104,6 @@ static void cpu_reset(void)
irqchip_cpu_reset(cpu_data);
}

-static void arch_suspend_self(struct per_cpu *cpu_data)
-{
- psci_suspend(cpu_data);
-
- if (cpu_data->flush_vcpu_caches) {
- arm_paging_vcpu_flush_tlbs();
- dsb(nsh);
- cpu_data->flush_vcpu_caches = false;
- }
-}
-
static void enter_cpu_off(struct per_cpu *cpu_data)
{
cpu_data->park = false;
@@ -308,49 +199,48 @@ void arm_cpu_kick(unsigned int cpu_id)

void arch_suspend_cpu(unsigned int cpu_id)
{
- struct sgi sgi;
+ struct per_cpu *target_data = per_cpu(cpu_id);
+ bool target_suspended;

- if (psci_cpu_stopped(cpu_id))
- return;
+ spin_lock(&target_data->control_lock);

- sgi.routing_mode = 0;
- sgi.aff1 = 0;
- sgi.aff2 = 0;
- sgi.aff3 = 0;
- sgi.targets = 1 << cpu_id;
- sgi.id = SGI_CPU_OFF;
+ target_data->suspend_cpu = true;
+ target_suspended = target_data->cpu_suspended;

- irqchip_send_sgi(&sgi);
+ spin_unlock(&target_data->control_lock);

- psci_wait_cpu_stopped(cpu_id);
+ if (!target_suspended) {
+ arm_cpu_kick(cpu_id);
+
+ while (!target_data->cpu_suspended)
+ cpu_relax();
+ }
}

void arch_resume_cpu(unsigned int cpu_id)
{
- /*
- * Simply get out of the spin loop by returning to handle_sgi
- * If the CPU is being reset, it already has left the PSCI idle loop.
- */
- if (psci_cpu_stopped(cpu_id))
- psci_resume(cpu_id);
+ struct per_cpu *target_data = per_cpu(cpu_id);
+
+ /* take lock to avoid theoretical race with a pending suspension */
+ spin_lock(&target_data->control_lock);
+
+ target_data->suspend_cpu = false;
+
+ spin_unlock(&target_data->control_lock);
}

void arch_reset_cpu(unsigned int cpu_id)
{
- unsigned long cpu_data = (unsigned long)per_cpu(cpu_id);
+ per_cpu(cpu_id)->reset = true;

- if (psci_cpu_on(cpu_id, (unsigned long)arch_reset_self, cpu_data))
- printk("ERROR: unable to reset CPU%d (was running)\n", cpu_id);
+ arch_resume_cpu(cpu_id);
}

void arch_park_cpu(unsigned int cpu_id)
{
- /*
- * Reset always follows park_cpu, so we just need to make sure that the
- * CPU is suspended
- */
- if (psci_wait_cpu_stopped(cpu_id) != 0)
- printk("ERROR: CPU%d is supposed to be stopped\n", cpu_id);
+ per_cpu(cpu_id)->park = true;
+
+ arch_resume_cpu(cpu_id);
}

static void check_events(struct per_cpu *cpu_data)
@@ -417,9 +307,6 @@ void arch_handle_sgi(struct per_cpu *cpu_data, u32 irqn)
case SGI_INJECT:
irqchip_inject_pending(cpu_data);
break;
- case SGI_CPU_OFF:
- arch_suspend_self(cpu_data);
- break;
case SGI_EVENT:
check_events(cpu_data);
break;
@@ -488,7 +375,6 @@ int arch_cell_create(struct cell *cell)
return err;
}

- register_smp_ops(cell);
smp_cell_init(cell);

return 0;
@@ -504,7 +390,6 @@ void arch_cell_destroy(struct cell *cell)

/* Re-assign the physical IDs for the root cell */
percpu->virt_id = percpu->cpu_id;
- arch_reset_cpu(cpu);

percpu->cpu_on_entry = PSCI_INVALID_ADDRESS;
}
@@ -544,15 +429,7 @@ void __attribute__((noreturn)) arch_panic_stop(void)
__builtin_unreachable();
}

-void arch_panic_park(void)
-{
- /* Won't return to panic_park */
- if (phys_processor_id() == panic_cpu)
- panic_in_progress = 0;
-
- psci_cpu_off(this_cpu_data());
- __builtin_unreachable();
-}
+void arch_panic_park(void) __attribute__((alias("arm_cpu_park")));

void arch_shutdown(void)
{
diff --git a/hypervisor/arch/arm/include/asm/cell.h b/hypervisor/arch/arm/include/asm/cell.h
index 305a2e8..5413d30 100644
--- a/hypervisor/arch/arm/include/asm/cell.h
+++ b/hypervisor/arch/arm/include/asm/cell.h
@@ -13,9 +13,6 @@
#ifndef _JAILHOUSE_ASM_CELL_H
#define _JAILHOUSE_ASM_CELL_H

-#include <jailhouse/types.h>
-#include <asm/smp.h>
-
#ifndef __ASSEMBLY__

#include <jailhouse/paging.h>
@@ -23,7 +20,6 @@
/** ARM-specific cell states. */
struct arch_cell {
struct paging_structures mm;
- struct smp_ops *smp;

u32 irq_bitmap[1024/32];

diff --git a/hypervisor/arch/arm/include/asm/control.h b/hypervisor/arch/arm/include/asm/control.h
index d9346c7..794d7bf 100644
--- a/hypervisor/arch/arm/include/asm/control.h
+++ b/hypervisor/arch/arm/include/asm/control.h
@@ -15,7 +15,6 @@

#define SGI_INJECT 0
#define SGI_EVENT 1
-#define SGI_CPU_OFF 2

#define CACHES_CLEAN 0
#define CACHES_CLEAN_INVALIDATE 1
@@ -35,8 +34,9 @@ void arch_handle_trap(struct per_cpu *cpu_data, struct registers *guest_regs);
struct registers* arch_handle_exit(struct per_cpu *cpu_data,
struct registers *regs);
bool arch_handle_phys_irq(struct per_cpu *cpu_data, u32 irqn);
-void arch_reset_self(struct per_cpu *cpu_data);
+
void arch_shutdown_self(struct per_cpu *cpu_data);
+
unsigned int arm_cpu_by_mpidr(struct cell *cell, unsigned long mpidr);

void __attribute__((noreturn)) vmreturn(struct registers *guest_regs);
diff --git a/hypervisor/arch/arm/include/asm/smp.h b/hypervisor/arch/arm/include/asm/smp.h
index 173908a..42d4394 100644
--- a/hypervisor/arch/arm/include/asm/smp.h
+++ b/hypervisor/arch/arm/include/asm/smp.h
@@ -15,26 +15,13 @@
#ifndef JAILHOUSE_ASM_SMP_H_
#define JAILHOUSE_ASM_SMP_H_

-#ifndef __ASSEMBLY__
-
-struct mmio_access;
-struct per_cpu;
struct cell;

-struct smp_ops {
- /* Returns an address */
- unsigned long (*cpu_spin)(struct per_cpu *cpu_data);
-};
-
extern const unsigned int smp_mmio_regions;

-unsigned long arch_smp_spin(struct per_cpu *cpu_data, struct smp_ops *ops);
-void register_smp_ops(struct cell *cell);
-
int smp_init(void);

void smp_cell_init(struct cell *cell);
void smp_cell_exit(struct cell *cell);

-#endif /* !__ASSEMBLY__ */
#endif /* !JAILHOUSE_ASM_SMP_H_ */
diff --git a/hypervisor/arch/arm/irqchip.c b/hypervisor/arch/arm/irqchip.c
index 5f72cdb..fd9ae6f 100644
--- a/hypervisor/arch/arm/irqchip.c
+++ b/hypervisor/arch/arm/irqchip.c
@@ -169,7 +169,7 @@ int irqchip_cell_init(struct cell *cell)
* Permit direct access to all SGIs and PPIs except for those used by
* the hypervisor.
*/
- cell->arch.irq_bitmap[0] = ~((1 << SGI_INJECT) | (1 << SGI_CPU_OFF) |
+ cell->arch.irq_bitmap[0] = ~((1 << SGI_INJECT) | (1 << SGI_EVENT) |
(1 << MAINTENANCE_IRQ));

err = irqchip.cell_init(cell);
diff --git a/hypervisor/arch/arm/psci.c b/hypervisor/arch/arm/psci.c
index e0a6703..fc7c3f8 100644
--- a/hypervisor/arch/arm/psci.c
+++ b/hypervisor/arch/arm/psci.c
@@ -2,9 +2,11 @@
* Jailhouse, a Linux-based partitioning hypervisor
*
* Copyright (c) ARM Limited, 2014
+ * Copyright (c) Siemens AG, 2016
*
* Authors:
* Jean-Philippe Brucker <jean-phili...@arm.com>
+ * Jan Kiszka <jan.k...@siemens.com>
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
@@ -78,19 +80,37 @@ int psci_wait_cpu_stopped(unsigned int cpu_id)
static long psci_emulate_cpu_on(struct per_cpu *cpu_data,
struct trap_context *ctx)
{
+ struct per_cpu *target_data;
+ bool kick_cpu = false;
unsigned int cpu;
- struct psci_mbox *mbox;
+ long result;

cpu = arm_cpu_by_mpidr(cpu_data->cell, ctx->regs[1]);
if (cpu == -1)
/* Virtual id not in set */
return PSCI_DENIED;

- mbox = &(per_cpu(cpu)->guest_mbox);
- mbox->entry = ctx->regs[2];
- mbox->context = ctx->regs[3];
+ target_data = per_cpu(cpu);

- return psci_resume(cpu);
+ spin_lock(&target_data->control_lock);
+
+ if (target_data->wait_for_poweron) {
+ target_data->cpu_on_entry = ctx->regs[2];
+ target_data->cpu_on_context = ctx->regs[3];
+ target_data->reset = true;
+ kick_cpu = true;
+
+ result = PSCI_SUCCESS;
+ } else {
+ result = PSCI_ALREADY_ON;
+ }
+
+ spin_unlock(&target_data->control_lock);
+
+ if (kick_cpu)
+ arm_cpu_kick(cpu);
+
+ return result;
}

static long psci_emulate_affinity_info(struct per_cpu *cpu_data,
@@ -102,33 +122,8 @@ static long psci_emulate_affinity_info(struct per_cpu *cpu_data,
/* Virtual id not in set */
return PSCI_DENIED;

- return psci_cpu_stopped(cpu) ? PSCI_CPU_IS_OFF : PSCI_CPU_IS_ON;
-}
-
-/* Returns the secondary address set by the guest */
-unsigned long psci_emulate_spin(struct per_cpu *cpu_data)
-{
- struct psci_mbox *mbox = &(cpu_data->guest_mbox);
-
- mbox->entry = 0;
-
- /* Wait for emulate_cpu_on or a trapped mmio to the mbox */
- while (mbox->entry == 0)
- psci_suspend(cpu_data);
-
- return mbox->entry;
-}
-
-int psci_cell_init(struct cell *cell)
-{
- unsigned int cpu;
-
- for_each_cpu(cpu, cell->cpu_set) {
- per_cpu(cpu)->guest_mbox.entry = 0;
- per_cpu(cpu)->guest_mbox.context = 0;
- }
-
- return 0;
+ return per_cpu(cpu)->wait_for_poweron ?
+ PSCI_CPU_IS_OFF : PSCI_CPU_IS_ON;
}

long psci_dispatch(struct trap_context *ctx)
@@ -143,13 +138,7 @@ long psci_dispatch(struct trap_context *ctx)

case PSCI_CPU_OFF:
case PSCI_CPU_OFF_V0_1_UBOOT:
- /*
- * The reset function will take care of calling
- * psci_emulate_spin
- */
- arch_reset_self(cpu_data);
-
- /* Not reached */
+ arm_cpu_park();
return 0;

case PSCI_CPU_ON_32:
diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index 2e2bee2..1ac54d0 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -14,6 +14,7 @@
#include <asm/irqchip.h>
#include <asm/percpu.h>
#include <asm/setup.h>
+#include <asm/smp.h>
#include <asm/sysregs.h>
#include <jailhouse/control.h>
#include <jailhouse/paging.h>
@@ -76,7 +77,6 @@ int arch_cpu_init(struct per_cpu *cpu_data)
unsigned long hcr = HCR_VM_BIT | HCR_IMO_BIT | HCR_FMO_BIT
| HCR_TSC_BIT | HCR_TAC_BIT | HCR_TSW_BIT;

- cpu_data->psci_mbox.entry = 0;
cpu_data->virt_id = cpu_data->cpu_id;
cpu_data->mpidr = phys_processor_id();

@@ -121,9 +121,6 @@ int arch_init_late(void)
if (err)
return err;

- /* Platform-specific SMP operations */
- register_smp_ops(&root_cell);
-
err = smp_init();
if (err)
return err;
diff --git a/hypervisor/arch/arm/smp-sun7i.c b/hypervisor/arch/arm/smp-sun7i.c
deleted file mode 100644
index 6b03d5c..0000000
--- a/hypervisor/arch/arm/smp-sun7i.c
+++ /dev/null
@@ -1,24 +0,0 @@
-/*
- * Jailhouse, a Linux-based partitioning hypervisor
- *
- * Copyright (c) Siemens AG, 2014
- *
- * Authors:
- * Jan Kiszka <jan.k...@siemens.com>
- *
- * This work is licensed under the terms of the GNU GPL, version 2. See
- * the COPYING file in the top-level directory.
- */
-
-#include <jailhouse/cell.h>
-#include <asm/psci.h>
-#include <asm/smp.h>
-
-static struct smp_ops sun7i_smp_ops = {
- .cpu_spin = psci_emulate_spin,
-};
-
-void register_smp_ops(struct cell *cell)
-{
- cell->arch.smp = &sun7i_smp_ops;
-}
diff --git a/hypervisor/arch/arm/smp-tegra124.c b/hypervisor/arch/arm/smp-tegra124.c
deleted file mode 100644
index 63555da..0000000
--- a/hypervisor/arch/arm/smp-tegra124.c
+++ /dev/null
@@ -1,24 +0,0 @@
-/*
- * Jailhouse, a Linux-based partitioning hypervisor
- *
- * Copyright (c) Siemens AG, 2014, 2015
- *
- * Authors:
- * Jan Kiszka <jan.k...@siemens.com>
- *
- * This work is licensed under the terms of the GNU GPL, version 2. See
- * the COPYING file in the top-level directory.
- */
-
-#include <jailhouse/cell.h>
-#include <asm/psci.h>
-#include <asm/smp.h>
-
-static struct smp_ops tegra124_smp_ops = {
- .cpu_spin = psci_emulate_spin,
-};
-
-void register_smp_ops(struct cell *cell)
-{
- cell->arch.smp = &tegra124_smp_ops;
-}
diff --git a/hypervisor/arch/arm/smp-vexpress.c b/hypervisor/arch/arm/smp-vexpress.c
index 1750a41..4c11f44 100644
--- a/hypervisor/arch/arm/smp-vexpress.c
+++ b/hypervisor/arch/arm/smp-vexpress.c
@@ -27,7 +27,7 @@ static unsigned long root_entry;

static enum mmio_result smp_mmio(void *arg, struct mmio_access *mmio)
{
- struct per_cpu *cpu_data = this_cpu_data();
+ struct per_cpu *target_data, *cpu_data = this_cpu_data();
unsigned int cpu;

if (mmio->address != VEXPRESS_FLAGSSET || !mmio->is_write)
@@ -35,45 +35,24 @@ static enum mmio_result smp_mmio(void *arg, struct mmio_access *mmio)
return MMIO_HANDLED;

for_each_cpu_except(cpu, cpu_data->cell->cpu_set, cpu_data->cpu_id) {
- per_cpu(cpu)->guest_mbox.entry = mmio->value;
- psci_try_resume(cpu);
- }
+ target_data = per_cpu(cpu);

- return MMIO_HANDLED;
-}
+ arch_suspend_cpu(cpu);

-static unsigned long smp_spin(struct per_cpu *cpu_data)
-{
- /*
- * This is super-dodgy: we assume nothing wrote to the flag register
- * since the kernel called smp_prepare_cpus, at initialisation.
- */
- return root_entry;
-}
+ spin_lock(&target_data->control_lock);

-static struct smp_ops vexpress_smp_ops = {
- .cpu_spin = smp_spin,
-};
+ if (target_data->wait_for_poweron) {
+ target_data->cpu_on_entry = mmio->value;
+ target_data->cpu_on_context = 0;
+ target_data->reset = true;
+ }

-/*
- * Store the guest's secondaries into our PSCI, and wake them up when we catch
- * an access to the mbox from the primary.
- */
-static struct smp_ops vexpress_guest_smp_ops = {
- .cpu_spin = psci_emulate_spin,
-};
+ spin_unlock(&target_data->control_lock);

-void register_smp_ops(struct cell *cell)
-{
- /*
- * mach-vexpress only writes the SYS_FLAGS once at boot, so the root
- * cell cannot rely on this write to guess where the secondary CPUs
- * should return.
- */
- if (cell == &root_cell)
- cell->arch.smp = &vexpress_smp_ops;
- else
- cell->arch.smp = &vexpress_guest_smp_ops;
+ arch_resume_cpu(cpu);
+ }
+
+ return MMIO_HANDLED;
}

int smp_init(void)
@@ -104,5 +83,7 @@ void smp_cell_exit(struct cell *cell)
for_each_cpu(cpu, cell->cpu_set) {
per_cpu(cpu)->cpu_on_entry = root_entry;
per_cpu(cpu)->cpu_on_context = 0;
+ arch_suspend_cpu(cpu);
+ arch_reset_cpu(cpu);
}
}
diff --git a/hypervisor/arch/arm/smp.c b/hypervisor/arch/arm/smp.c
index 1b43168..f84ea94 100644
--- a/hypervisor/arch/arm/smp.c
+++ b/hypervisor/arch/arm/smp.c
@@ -12,28 +12,13 @@
* the COPYING file in the top-level directory.
*/

-#include <asm/percpu.h>
#include <asm/smp.h>

const unsigned int __attribute__((weak)) smp_mmio_regions;

-unsigned long arch_smp_spin(struct per_cpu *cpu_data, struct smp_ops *ops)
-{
- /*
- * Hotplugging CPU0 is not currently supported. It is always assumed to
- * be the primary CPU. This is consistent with the linux behavior on
- * most platforms.
- * The guest image always starts at virtual address 0.
- */
- if (cpu_data->virt_id == 0)
- return 0;
-
- return ops->cpu_spin(cpu_data);
-}
-
int __attribute__((weak)) smp_init(void)
{
- return psci_cell_init(&root_cell);
+ return 0;
}

void __attribute__((weak)) smp_cell_init(struct cell *cell)
--
2.1.4

Jan Kiszka

unread,
Aug 10, 2016, 3:29:22 AM8/10/16
to jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana
Drop PSCI functions as well as types and defines that are no longer (or
were never) used.

Signed-off-by: Jan Kiszka <jan.k...@siemens.com>
---
hypervisor/arch/arm/Makefile | 2 +-
hypervisor/arch/arm/include/asm/percpu.h | 4 --
hypervisor/arch/arm/include/asm/psci.h | 38 -----------------
hypervisor/arch/arm/psci.c | 60 ---------------------------
hypervisor/arch/arm/psci_low.S | 71 --------------------------------
5 files changed, 1 insertion(+), 174 deletions(-)
delete mode 100644 hypervisor/arch/arm/psci_low.S

diff --git a/hypervisor/arch/arm/Makefile b/hypervisor/arch/arm/Makefile
index 5e68b1a..6006b83 100644
--- a/hypervisor/arch/arm/Makefile
+++ b/hypervisor/arch/arm/Makefile
@@ -19,7 +19,7 @@ always := built-in.o
obj-y := entry.o dbg-write.o exception.o setup.o control.o lib.o
obj-y += traps.o mmio.o
obj-y += paging.o mmu_hyp.o mmu_cell.o caches.o
-obj-y += psci.o psci_low.o smp.o
+obj-y += psci.o smp.o
obj-y += irqchip.o gic-common.o
obj-$(CONFIG_ARM_GIC_V3) += gic-v3.o
obj-$(CONFIG_ARM_GIC) += gic-v2.o
diff --git a/hypervisor/arch/arm/include/asm/percpu.h b/hypervisor/arch/arm/include/asm/percpu.h
index fc97002..1293486 100644
--- a/hypervisor/arch/arm/include/asm/percpu.h
+++ b/hypervisor/arch/arm/include/asm/percpu.h
@@ -55,10 +55,6 @@ struct per_cpu {

bool initialized;

- /* The mbox will be accessed with a ldrd, which requires alignment */
- __attribute__((aligned(8))) struct psci_mbox psci_mbox;
- struct psci_mbox guest_mbox;
-
/**
* Lock protecting CPU state changes done for control tasks.
*
diff --git a/hypervisor/arch/arm/include/asm/psci.h b/hypervisor/arch/arm/include/asm/psci.h
index dc8c007..77e0bff 100644
--- a/hypervisor/arch/arm/include/asm/psci.h
+++ b/hypervisor/arch/arm/include/asm/psci.h
@@ -14,20 +14,11 @@
#define _JAILHOUSE_ASM_PSCI_H

#define PSCI_VERSION 0x84000000
-#define PSCI_CPU_SUSPEND_32 0x84000001
-#define PSCI_CPU_SUSPEND_64 0xc4000001
#define PSCI_CPU_OFF 0x84000002
#define PSCI_CPU_ON_32 0x84000003
#define PSCI_CPU_ON_64 0xc4000003
#define PSCI_AFFINITY_INFO_32 0x84000004
#define PSCI_AFFINITY_INFO_64 0xc4000004
-#define PSCI_MIGRATE_32 0x84000005
-#define PSCI_MIGRATE_64 0xc4000005
-#define PSCI_MIGRATE_INFO_TYPE 0x84000006
-#define PSCI_MIGRATE_INFO_UP_CPU_32 0x84000007
-#define PSCI_MIGRATE_INFO_UP_CPU_64 0xc4000007
-#define PSCI_SYSTEM_OFF 0x84000008
-#define PSCI_SYSTEM_RESET 0x84000009

/* v0.1 function IDs as used by U-Boot */
#define PSCI_CPU_OFF_V0_1_UBOOT 0x95c1ba5f
@@ -38,10 +29,6 @@
#define PSCI_INVALID_PARAMETERS (-2)
#define PSCI_DENIED (-3)
#define PSCI_ALREADY_ON (-4)
-#define PSCI_ON_PENDING (-5)
-#define PSCI_INTERNAL_FAILURE (-6)
-#define PSCI_NOT_PRESENT (-7)
-#define PSCI_DISABLED (-8)

#define PSCI_CPU_IS_ON 0
#define PSCI_CPU_IS_OFF 1
@@ -51,33 +38,8 @@

#define PSCI_INVALID_ADDRESS 0xffffffff

-#ifndef __ASSEMBLY__
-
-#include <jailhouse/types.h>
-
-struct cell;
struct trap_context;
-struct per_cpu;
-
-struct psci_mbox {
- unsigned long entry;
- unsigned long context;
-};
-
-void psci_cpu_off(struct per_cpu *cpu_data);
-long psci_cpu_on(unsigned int target, unsigned long entry,
- unsigned long context);
-bool psci_cpu_stopped(unsigned int cpu_id);
-int psci_wait_cpu_stopped(unsigned int cpu_id);
-
-void psci_suspend(struct per_cpu *cpu_data);
-long psci_resume(unsigned int target);
-long psci_try_resume(unsigned int cpu_id);

long psci_dispatch(struct trap_context *ctx);

-int psci_cell_init(struct cell *cell);
-unsigned long psci_emulate_spin(struct per_cpu *cpu_data);
-
-#endif /* !__ASSEMBLY__ */
#endif /* _JAILHOUSE_ASM_PSCI_H */
diff --git a/hypervisor/arch/arm/psci.c b/hypervisor/arch/arm/psci.c
index fc7c3f8..52a73e9 100644
--- a/hypervisor/arch/arm/psci.c
+++ b/hypervisor/arch/arm/psci.c
@@ -13,70 +13,10 @@
*/

#include <asm/control.h>
-#include <asm/percpu.h>
#include <asm/psci.h>
#include <asm/traps.h>
#include <jailhouse/control.h>

-void _psci_cpu_off(struct psci_mbox *);
-long _psci_cpu_on(struct psci_mbox *, unsigned long, unsigned long);
-void _psci_suspend(struct psci_mbox *, unsigned long *address);
-void _psci_suspend_return(void);
-
-void psci_cpu_off(struct per_cpu *cpu_data)
-{
- _psci_cpu_off(&cpu_data->psci_mbox);
-}
-
-long psci_cpu_on(unsigned int target, unsigned long entry,
- unsigned long context)
-{
- struct per_cpu *cpu_data = per_cpu(target);
- struct psci_mbox *mbox = &cpu_data->psci_mbox;
-
- return _psci_cpu_on(mbox, entry, context);
-}
-
-/*
- * Not a real psci_cpu_suspend implementation. Only used to semantically
- * differentiate from `cpu_off'. Return is done via psci_resume.
- */
-void psci_suspend(struct per_cpu *cpu_data)
-{
- psci_cpu_off(cpu_data);
-}
-
-long psci_resume(unsigned int target)
-{
- psci_wait_cpu_stopped(target);
- return psci_cpu_on(target, (unsigned long)&_psci_suspend_return, 0);
-}
-
-bool psci_cpu_stopped(unsigned int cpu_id)
-{
- return per_cpu(cpu_id)->psci_mbox.entry == PSCI_INVALID_ADDRESS;
-}
-
-long psci_try_resume(unsigned int cpu_id)
-{
- if (psci_cpu_stopped(cpu_id))
- return psci_resume(cpu_id);
-
- return -EBUSY;
-}
-
-int psci_wait_cpu_stopped(unsigned int cpu_id)
-{
- /* FIXME: add a delay */
- do {
- if (psci_cpu_stopped(cpu_id))
- return 0;
- cpu_relax();
- } while (1);
-
- return -EBUSY;
-}
-
static long psci_emulate_cpu_on(struct per_cpu *cpu_data,
struct trap_context *ctx)
{
diff --git a/hypervisor/arch/arm/psci_low.S b/hypervisor/arch/arm/psci_low.S
deleted file mode 100644
index 76eeaba..0000000
--- a/hypervisor/arch/arm/psci_low.S
+++ /dev/null
@@ -1,71 +0,0 @@
-/*
- * Jailhouse, a Linux-based partitioning hypervisor
- *
- * Copyright (c) ARM Limited, 2014
- *
- * Authors:
- * Jean-Philippe Brucker <jean-phili...@arm.com>
- *
- * This work is licensed under the terms of the GNU GPL, version 2. See
- * the COPYING file in the top-level directory.
- */
-
-#include <asm/head.h>
-#include <asm/psci.h>
-
- .global _psci_cpu_off
- /* r0: struct psci_mbox* */
-_psci_cpu_off:
- ldr r2, =PSCI_INVALID_ADDRESS
- /* Clear mbox */
- str r2, [r0]
- /*
- * No reordering against the ldr below for the PEs in our domain, so no
- * need for a barrier. Other CPUs will wait for an invalid address
- * before issuing a CPU_ON.
- */
-
- /* Wait for a CPU_ON call that updates the mbox */
-1: wfe
- ldr r1, [r0]
- cmp r1, r2
- beq 1b
-
- /* Jump to the requested entry, with a parameter */
- ldr r0, [r0, #4]
- bx r1
-
- .global _psci_cpu_on
- /* r0: struct psci_mbox*, r1: entry, r2: context */
-_psci_cpu_on:
- push {r4, r5, lr}
- /* strd needs to start with an even register */
- mov r3, r2
- mov r2, r1
- ldr r1, =PSCI_INVALID_ADDRESS
-
- ldrexd r4, r5, [r0]
- cmp r4, r1
- bne store_failed
- strexd r1, r2, r3, [r0]
- /* r1 contains the ex store flag */
- cmp r1, #0
- bne store_failed
-
- /*
- * Ensure that the stopped CPU can read the new address when receiving
- * the event.
- */
- dsb ish
- sev
- mov r0, #0
- pop {r4, r5, pc}
-
-store_failed:
- clrex
- mov r0, #PSCI_ALREADY_ON
- pop {r4, r5, pc}
-
- .global _psci_suspend_return
-_psci_suspend_return:
- bx lr
--
2.1.4

Jan Kiszka

unread,
Aug 10, 2016, 3:29:22 AM8/10/16
to jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana
psci_low.S is going to be obsoleted.

This comes with two indentation adjustments that didn't deserve a
separate commit.

Signed-off-by: Jan Kiszka <jan.k...@siemens.com>
---
hypervisor/arch/arm/exception.S | 10 ++++++++--
hypervisor/arch/arm/psci_low.S | 11 -----------
2 files changed, 8 insertions(+), 13 deletions(-)

diff --git a/hypervisor/arch/arm/exception.S b/hypervisor/arch/arm/exception.S
index 6701aac..4ae57c7 100644
--- a/hypervisor/arch/arm/exception.S
+++ b/hypervisor/arch/arm/exception.S
@@ -61,7 +61,7 @@ vmexit_common:
* Because the hypervisor may call vmreturn to reset the stack,
* arch_handle_exit has to return with the guest registers in r0
*/
-.globl vmreturn
+ .globl vmreturn
vmreturn:
mov sp, r0
add sp, #4
@@ -75,7 +75,13 @@ vmreturn:
* r0-r3: arguments
* r0: return value
*/
-.globl hvc
+ .globl hvc
hvc:
hvc #0
bx lr
+
+ .arch_extension sec
+ .globl smc
+smc:
+ smc #0
+ bx lr
diff --git a/hypervisor/arch/arm/psci_low.S b/hypervisor/arch/arm/psci_low.S
index 58bdc0a..76eeaba 100644
--- a/hypervisor/arch/arm/psci_low.S
+++ b/hypervisor/arch/arm/psci_low.S
@@ -13,17 +13,6 @@
#include <asm/head.h>
#include <asm/psci.h>

- .arch_extension sec
- .globl smc
- /*
- * Since we trap all SMC instructions, it may be useful to forward them
- * when it isn't a PSCI call. The shutdown code will also have to issue
- * a real PSCI_OFF call on secondary CPUs.
- */
-smc:
- smc #0
- bx lr
-
.global _psci_cpu_off
/* r0: struct psci_mbox* */
_psci_cpu_off:
--
2.1.4

Jan Kiszka

unread,
Aug 10, 2016, 3:29:22 AM8/10/16
to jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana
Non-functional change that matches ARM's arch_*_cpu function ordering
to that of x86. This helps when comparing code and will reduce the
churn of the following commits a bit.

Signed-off-by: Jan Kiszka <jan.k...@siemens.com>
---
hypervisor/arch/arm/control.c | 55 ++++++++++++++++++++-----------------------
1 file changed, 26 insertions(+), 29 deletions(-)

diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index 6f50afe..577306f 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -306,7 +306,25 @@ void arm_cpu_kick(unsigned int cpu_id)
irqchip_send_sgi(&sgi);
}

-/* CPU must be stopped */
+void arch_suspend_cpu(unsigned int cpu_id)
+{
+ struct sgi sgi;
+
+ if (psci_cpu_stopped(cpu_id))
+ return;
+
+ sgi.routing_mode = 0;
+ sgi.aff1 = 0;
+ sgi.aff2 = 0;
+ sgi.aff3 = 0;
+ sgi.targets = 1 << cpu_id;
+ sgi.id = SGI_CPU_OFF;
+
+ irqchip_send_sgi(&sgi);
+
+ psci_wait_cpu_stopped(cpu_id);
+}
+
void arch_resume_cpu(unsigned int cpu_id)
{
/*
@@ -317,18 +335,6 @@ void arch_resume_cpu(unsigned int cpu_id)
psci_resume(cpu_id);
}

-/* CPU must be stopped */
-void arch_park_cpu(unsigned int cpu_id)
-{
- /*
- * Reset always follows park_cpu, so we just need to make sure that the
- * CPU is suspended
- */
- if (psci_wait_cpu_stopped(cpu_id) != 0)
- printk("ERROR: CPU%d is supposed to be stopped\n", cpu_id);
-}
-
-/* CPU must be stopped */
void arch_reset_cpu(unsigned int cpu_id)
{
unsigned long cpu_data = (unsigned long)per_cpu(cpu_id);
@@ -337,23 +343,14 @@ void arch_reset_cpu(unsigned int cpu_id)
printk("ERROR: unable to reset CPU%d (was running)\n", cpu_id);
}

-void arch_suspend_cpu(unsigned int cpu_id)
+void arch_park_cpu(unsigned int cpu_id)
{
- struct sgi sgi;
-
- if (psci_cpu_stopped(cpu_id))
- return;
-
- sgi.routing_mode = 0;
- sgi.aff1 = 0;
- sgi.aff2 = 0;
- sgi.aff3 = 0;
- sgi.targets = 1 << cpu_id;
- sgi.id = SGI_CPU_OFF;
-
- irqchip_send_sgi(&sgi);
-
- psci_wait_cpu_stopped(cpu_id);
+ /*
+ * Reset always follows park_cpu, so we just need to make sure that the
+ * CPU is suspended
+ */
+ if (psci_wait_cpu_stopped(cpu_id) != 0)
+ printk("ERROR: CPU%d is supposed to be stopped\n", cpu_id);
}

static void check_events(struct per_cpu *cpu_data)
--
2.1.4

Ralf Ramsauer

unread,
Aug 10, 2016, 5:38:11 AM8/10/16
to Jan Kiszka, jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana
Hi Jan,

I'm curious, which upstream patch are you talking about? I tried to
export is_hyp_mode_available a couple of months ago, but it wasn't
accepted upstream.

Uhm - you don't check the kernel version, and the jailhouse driver
doesn't load on kernels without __boot_cpu_mode exported. Am I missing
something?

Ralf
Ralf Ramsauer
PGP: 0x8F10049B

Jan Kiszka

unread,
Aug 10, 2016, 6:31:20 AM8/10/16
to Ralf Ramsauer, jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana
On 2016-08-10 11:38, Ralf Ramsauer wrote:
> Hi Jan,
>
> I'm curious, which upstream patch patch are you talking about? I tried
> to export is_hyp_mode_available a couple of months ago but wasn't
> accepted upstream.

It's not upstream, I should probably reword this. There is also a typo
in the message.

"...
We now rely on a tiny patch against upstream Linux to export
__boot_cpu_mode to GPL modules. As this variable now becomes available
for us, we can also use it (via is_hyp_mode_available) to check for the
availability of the hypervisor stub Linux should have installed instead
of simply crashing the system when it is missing."
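
For illustration, a hypothetical sketch of such a check in a module
init path - is_hyp_mode_available() evaluates the exported
__boot_cpu_mode:

#include <linux/module.h>
#include <asm/virt.h>

static int __init demo_init(void)
{
	/* refuse to load when Linux did not boot in HYP mode */
	if (!is_hyp_mode_available())
		return -ENODEV;
	return 0;
}
module_init(demo_init);
MODULE_LICENSE("GPL");	/* __boot_cpu_mode is exported for GPL modules */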

Jan
Siemens AG, Corporate Technology, CT RDA ITP SES-DE
Corporate Competence Center Embedded Linux

Jan Kiszka

unread,
Aug 10, 2016, 6:33:36 AM8/10/16
to Ralf Ramsauer, jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana
On 2016-08-10 12:31, Jan Kiszka wrote:
> On 2016-08-10 11:38, Ralf Ramsauer wrote:
>> Hi Jan,
>>
>> I'm curious, which upstream patch patch are you talking about? I tried
>> to export is_hyp_mode_available a couple of months ago but wasn't
>> accepted upstream.
>
> It's not upstream, I should probably reword this. There is also a typo
> in the message.
>
> "...
> We now rely on a tiny patch against upstream Linux to export

We now require a tiny patch...

Jan Kiszka

unread,
Aug 15, 2016, 2:00:18 PM8/15/16
to jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana, Marc Zyngier, Mark Rutland
Seems like this isn't enough yet: when we reload a cell and then reset
it, an invalidation is also required. Otherwise the cell may pull stale
data from the previous session.

Current interfaces do not provide a sufficient hook for this. I need to
rework them / add a better one.

Jan

Jan Kiszka

unread,
Aug 16, 2016, 3:33:18 AM8/16/16
to jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana, Marc Zyngier, Mark Rutland
This adds a clean hook to inform the architectures about a complete
cell reset. Rather than counting arch_reset_cpu invocations, this hook
is a straightforward way to perform cell-wide reset steps. ARM will use
it; x86 has no need for it so far.

Signed-off-by: Jan Kiszka <jan.k...@siemens.com>
---
hypervisor/arch/arm/control.c | 4 ++++
hypervisor/arch/x86/control.c | 4 ++++
hypervisor/control.c | 2 ++
hypervisor/include/jailhouse/control.h | 11 +++++++++++
4 files changed, 21 insertions(+)

diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index f9e117d..4c5b63a 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -364,6 +364,10 @@ void arch_cell_destroy(struct cell *cell)
arm_paging_cell_destroy(cell);
}

+void arch_cell_reset(struct cell *cell)
+{
+}
+
/* Note: only supports synchronous flushing as triggered by config_commit! */
void arch_flush_cell_vcpu_caches(struct cell *cell)
{
diff --git a/hypervisor/arch/x86/control.c b/hypervisor/arch/x86/control.c
index 47a5a2f..46bf2cb 100644
--- a/hypervisor/arch/x86/control.c
+++ b/hypervisor/arch/x86/control.c
@@ -127,6 +127,10 @@ void arch_cell_destroy(struct cell *cell)
vcpu_cell_exit(cell);
}

+void arch_cell_reset(struct cell *cell)
+{
+}
+
void arch_config_commit(struct cell *cell_added_removed)
{
iommu_config_commit(cell_added_removed);
diff --git a/hypervisor/control.c b/hypervisor/control.c
index a266341..90d30c7 100644
--- a/hypervisor/control.c
+++ b/hypervisor/control.c
@@ -555,6 +555,8 @@ static int cell_start(struct per_cpu *cpu_data, unsigned long id)
cell->comm_page.comm_region.cell_state = JAILHOUSE_CELL_RUNNING;
cell->comm_page.comm_region.msg_to_cell = JAILHOUSE_MSG_NONE;

+ arch_cell_reset(cell);
+
for_each_cpu(cpu, cell->cpu_set) {
per_cpu(cpu)->failed = false;
arch_reset_cpu(cpu);
diff --git a/hypervisor/include/jailhouse/control.h b/hypervisor/include/jailhouse/control.h
index ffe1d09..bfc5cc9 100644
--- a/hypervisor/include/jailhouse/control.h
+++ b/hypervisor/include/jailhouse/control.h
@@ -249,6 +249,17 @@ int arch_cell_create(struct cell *cell);
void arch_cell_destroy(struct cell *cell);

/**
+ * Performs the architecture-specific steps for resetting a cell.
+ * @param cell Cell to be reset.
+ *
+ * @note This function shall not reset individual cell CPUs. Instead, this is
+ * triggered by the core via arch_reset_cpu().
+ *
+ * @see arch_reset_cpu
+ */
+void arch_cell_reset(struct cell *cell);
+
+/**
* Performs the architecture-specific steps for applying configuration changes.
* @param cell_added_removed Cell that was added or removed to/from the
* system or NULL.
--
2.1.4

Jan Kiszka

unread,
Aug 16, 2016, 3:33:34 AM8/16/16
to jailho...@googlegroups.com, Antonios Motakis, Claudio Fontana, Marc Zyngier, Mark Rutland
All d-cache entries related to memory that a cell will use after reset
or that a destroyed cell was using are now irrelevant. Invalidate them
so that nothing leaks from/to other cells or previous sessions of the
same cell.

CC: Marc Zyngier <marc.z...@arm.com>
CC: Mark Rutland <mark.r...@arm.com>
Signed-off-by: Jan Kiszka <jan.k...@siemens.com>
---
hypervisor/arch/arm/control.c | 3 +++
1 file changed, 3 insertions(+)

diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index 4c5b63a..8b34fcf 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -352,6 +352,8 @@ void arch_cell_destroy(struct cell *cell)
unsigned int cpu;
struct per_cpu *percpu;

+ arm_cell_dcaches_flush(cell, DCACHE_INVALIDATE);
+
for_each_cpu(cpu, cell->cpu_set) {
percpu = per_cpu(cpu);
/* Re-assign the physical IDs for the root cell */
@@ -366,6 +368,7 @@ void arch_cell_destroy(struct cell *cell)

void arch_cell_reset(struct cell *cell)
{
+ arm_cell_dcaches_flush(cell, DCACHE_INVALIDATE);
}

/* Note: only supports synchronous flushing as triggered by config_commit! */