[PATCH 00/13] preparatory patch series for Jailhouse AArch64 support


antonios...@huawei.com

Feb 24, 2016, 11:24:42 AM
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, mazda....@amd.com
From: Antonios Motakis <antonios...@huawei.com>


This patch series has been split off from the main Jailhouse for
AArch64 patch series, in order to keep each series shorter.

This series includes a few changes to the core in preparation for
the main patch series. In addition, most of the patches touch the
ARM AArch32 architecture port of Jailhouse: since the AArch64 port
attempts to share some code with AArch32, a few changes and code
moves are needed.


Antonios Motakis (11):
driver: ioremap the hypervisor firmware to any kernel address
core: panic_stop: check current cell has been initialized
core: reimplement page_alloc to allow aligned allocations
hypervisor: arm: pass SPIs with large ids to the root cell
hypervisor: arm: phys_processor_id should return logical id
hypervisor: arm: move arm_cpu_virt2phys to lib.c
hypervisor: arm: psci: support multiple affinity levels in MPIDR
hypervisor: arm: make IS_PSCI_FN macro more restrictive
hypervisor: arm: hide TLB flush behind a macro
hypervisor: arm: prepare port for 48 bit PARange support
hypervisor: arm: put the value of VTCR for cells in a define

Claudio Fontana (1):
core: lib: move memcpy implementation from the ARM port

Dmitry Voytik (1):
driver: sync I-cache, D-cache and memory

driver/cell.c | 7 ++
driver/main.c | 28 +++++++-
hypervisor/arch/arm/control.c | 12 ----
hypervisor/arch/arm/include/asm/control.h | 1 +
hypervisor/arch/arm/include/asm/irqchip.h | 2 +-
.../arch/arm/include/asm/jailhouse_hypercall.h | 1 +
hypervisor/arch/arm/include/asm/paging.h | 26 +++++--
hypervisor/arch/arm/include/asm/paging_modes.h | 2 +
hypervisor/arch/arm/include/asm/percpu.h | 11 +++
hypervisor/arch/arm/include/asm/processor.h | 2 +
hypervisor/arch/arm/include/asm/psci.h | 2 +-
hypervisor/arch/arm/irqchip.c | 3 +-
hypervisor/arch/arm/lib.c | 31 ++++++---
hypervisor/arch/arm/mmu_cell.c | 25 ++++---
hypervisor/arch/arm/paging.c | 81 ++++++++++++++++++++++
hypervisor/arch/arm/psci.c | 5 +-
hypervisor/arch/arm/setup.c | 1 +
hypervisor/arch/x86/apic.c | 2 +-
.../arch/x86/include/asm/jailhouse_hypercall.h | 3 +-
hypervisor/arch/x86/include/asm/paging.h | 3 +-
hypervisor/arch/x86/ioapic.c | 4 +-
hypervisor/arch/x86/svm.c | 6 +-
hypervisor/arch/x86/vmx.c | 2 +-
hypervisor/arch/x86/vtd.c | 12 ++--
hypervisor/control.c | 12 ++--
hypervisor/include/jailhouse/paging.h | 2 +-
hypervisor/lib.c | 12 ++++
hypervisor/mmio.c | 2 +-
hypervisor/paging.c | 58 +++++++++-------
hypervisor/pci.c | 9 +--
hypervisor/pci_ivshmem.c | 2 +-
hypervisor/setup.c | 2 +-
32 files changed, 274 insertions(+), 97 deletions(-)

--
2.7.0


antonios...@huawei.com

Feb 24, 2016, 11:24:43 AM
From: Antonios Motakis <antonios...@huawei.com>

At the moment the Linux driver maps the Jailhouse binary to
JAILHOUSE_BASE. The underlying assumption is that Linux may map the
firmware (in the Linux kernel address space) to the same virtual
address it has been built to run from.

This assumption is unworkable on ARMv8 processors running in AArch64
mode: kernel memory is allocated in a high address region that is
not addressable from EL2, where the hypervisor runs.

This patch removes the assumption by introducing the
JAILHOUSE_BORROW_ROOT_PT define, which describes the behavior of the
architectures supported so far.

We also turn the entry point in the header into an offset from the
Jailhouse load address, so we can enter the image regardless of
where it is mapped.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
driver/main.c | 21 ++++++++++++++++++---
.../arch/arm/include/asm/jailhouse_hypercall.h | 1 +
.../arch/x86/include/asm/jailhouse_hypercall.h | 3 ++-
hypervisor/setup.c | 2 +-
4 files changed, 22 insertions(+), 5 deletions(-)

diff --git a/driver/main.c b/driver/main.c
index 60b67bd..5644641 100644
--- a/driver/main.c
+++ b/driver/main.c
@@ -139,11 +139,14 @@ static void enter_hypervisor(void *info)
{
struct jailhouse_header *header = info;
unsigned int cpu = smp_processor_id();
+ int (*entry)(unsigned int);
int err;

+ entry = header->entry + (unsigned long) hypervisor_mem;
+
if (cpu < header->max_cpus)
/* either returns 0 or the same error code across all CPUs */
- err = header->entry(cpu);
+ err = entry(cpu);
else
err = -EINVAL;

@@ -178,7 +181,9 @@ static int jailhouse_cmd_enable(struct jailhouse_system __user *arg)
struct jailhouse_system *config;
struct jailhouse_memory *hv_mem = &config_header.hypervisor_memory;
struct jailhouse_header *header;
+#if JAILHOUSE_BORROW_ROOT_PT == 1
void __iomem *console = NULL;
+#endif
unsigned long config_size;
const char *fw_name;
long max_cpus;
@@ -235,8 +240,9 @@ static int jailhouse_cmd_enable(struct jailhouse_system __user *arg)
config_size >= hv_mem->size - hv_core_and_percpu_size)
goto error_release_fw;

- hypervisor_mem = jailhouse_ioremap(hv_mem->phys_start, JAILHOUSE_BASE,
- hv_mem->size);
+ hypervisor_mem = jailhouse_ioremap(hv_mem->phys_start,
+ JAILHOUSE_BORROW_ROOT_PT ?
+ JAILHOUSE_BASE : 0, hv_mem->size);
if (!hypervisor_mem) {
pr_err("jailhouse: Unable to map RAM reserved for hypervisor "
"at %08lx\n", (unsigned long)hv_mem->phys_start);
@@ -258,6 +264,7 @@ static int jailhouse_cmd_enable(struct jailhouse_system __user *arg)
}

if (config->debug_console.flags & JAILHOUSE_MEM_IO) {
+#if JAILHOUSE_BORROW_ROOT_PT == 1
console = ioremap(config->debug_console.phys_start,
config->debug_console.size);
if (!console) {
@@ -270,6 +277,10 @@ static int jailhouse_cmd_enable(struct jailhouse_system __user *arg)
/* The hypervisor has no notion of address spaces, so we need
* to enforce conversion. */
header->debug_console_base = (void * __force)console;
+#else
+ header->debug_console_base =
+ (void * __force) config->debug_console.phys_start;
+#endif
}

err = jailhouse_cell_prepare_root(&config->root_cell);
@@ -294,8 +305,10 @@ static int jailhouse_cmd_enable(struct jailhouse_system __user *arg)
goto error_free_cell;
}

+#if JAILHOUSE_BORROW_ROOT_PT == 1
if (console)
iounmap(console);
+#endif

release_firmware(hypervisor);

@@ -314,8 +327,10 @@ error_free_cell:

error_unmap:
vunmap(hypervisor_mem);
+#if JAILHOUSE_BORROW_ROOT_PT == 1
if (console)
iounmap(console);
+#endif

error_release_fw:
release_firmware(hypervisor);
diff --git a/hypervisor/arch/arm/include/asm/jailhouse_hypercall.h b/hypervisor/arch/arm/include/asm/jailhouse_hypercall.h
index 480f487..45e7a3d 100644
--- a/hypervisor/arch/arm/include/asm/jailhouse_hypercall.h
+++ b/hypervisor/arch/arm/include/asm/jailhouse_hypercall.h
@@ -37,6 +37,7 @@
*/

#define JAILHOUSE_BASE 0xf0000000
+#define JAILHOUSE_BORROW_ROOT_PT 1

#define JAILHOUSE_CALL_INS ".arch_extension virt\n\t" \
"hvc #0x4a48"
diff --git a/hypervisor/arch/x86/include/asm/jailhouse_hypercall.h b/hypervisor/arch/x86/include/asm/jailhouse_hypercall.h
index fe5f5d5..90b8cb7 100644
--- a/hypervisor/arch/x86/include/asm/jailhouse_hypercall.h
+++ b/hypervisor/arch/x86/include/asm/jailhouse_hypercall.h
@@ -36,7 +36,8 @@
* THE POSSIBILITY OF SUCH DAMAGE.
*/

-#define JAILHOUSE_BASE __MAKE_UL(0xfffffffff0000000)
+#define JAILHOUSE_BASE __MAKE_UL(0xfffffffff0000000)
+#define JAILHOUSE_BORROW_ROOT_PT 1

/*
* As this is never called on a CPU without VM extensions,
diff --git a/hypervisor/setup.c b/hypervisor/setup.c
index 0fb9f68..dc565ca 100644
--- a/hypervisor/setup.c
+++ b/hypervisor/setup.c
@@ -207,5 +207,5 @@ hypervisor_header = {
.signature = JAILHOUSE_SIGNATURE,
.core_size = (unsigned long)__page_pool - JAILHOUSE_BASE,
.percpu_size = sizeof(struct per_cpu),
- .entry = arch_entry,
+ .entry = arch_entry - JAILHOUSE_BASE,
};
--
2.7.0


antonios...@huawei.com

Feb 24, 2016, 11:24:47 AM
From: Antonios Motakis <antonios...@huawei.com>

Putting arm_cpu_virt2phys under arch/arm/lib.c is a little
more consistent (it can become friends with phys_processor_id),
and we will benefit from sharing the implementation with AArch64.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm/control.c | 12 ------------
hypervisor/arch/arm/lib.c | 13 +++++++++++++
2 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index 1c17c31..7810746 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -302,18 +302,6 @@ void arch_handle_sgi(struct per_cpu *cpu_data, u32 irqn)
}
}

-unsigned int arm_cpu_virt2phys(struct cell *cell, unsigned int virt_id)
-{
- unsigned int cpu;
-
- for_each_cpu(cpu, cell->cpu_set) {
- if (per_cpu(cpu)->virt_id == virt_id)
- return cpu;
- }
-
- return -1;
-}
-
/*
* Handle the maintenance interrupt, the rest is injected into the cell.
* Return true when the IRQ has been handled by the hyp.
diff --git a/hypervisor/arch/arm/lib.c b/hypervisor/arch/arm/lib.c
index 7fb42f9..96798a0 100644
--- a/hypervisor/arch/arm/lib.c
+++ b/hypervisor/arch/arm/lib.c
@@ -10,6 +10,7 @@
* the COPYING file in the top-level directory.
*/

+#include <jailhouse/control.h>
#include <jailhouse/processor.h>
#include <jailhouse/string.h>
#include <jailhouse/types.h>
@@ -20,3 +21,15 @@ int phys_processor_id(void)
{
return this_cpu_data()->cpu_id;
}
+
+unsigned int arm_cpu_virt2phys(struct cell *cell, unsigned int virt_id)
+{
+ unsigned int cpu;
+
+ for_each_cpu(cpu, cell->cpu_set) {
+ if (per_cpu(cpu)->virt_id == virt_id)
+ return cpu;
+ }
+
+ return -1;
+}
--
2.7.0


antonios...@huawei.com

Feb 24, 2016, 11:24:48 AM
From: Antonios Motakis <antonios...@huawei.com>

The current design of the cell configuration files defines the SPIs
to be passed to a cell as a 64-bit bitmap. In order to use Jailhouse
on targets that have SPI ids larger than 64, we need to work
around this limitation.

Pass large SPIs to the root cell for now. A permanent solution to
this problem will need to tackle the cell configuration format.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm/include/asm/irqchip.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hypervisor/arch/arm/include/asm/irqchip.h b/hypervisor/arch/arm/include/asm/irqchip.h
index 17ba90a..581c10f 100644
--- a/hypervisor/arch/arm/include/asm/irqchip.h
+++ b/hypervisor/arch/arm/include/asm/irqchip.h
@@ -108,7 +108,7 @@ static inline bool spi_in_cell(struct cell *cell, unsigned int spi)
u32 spi_mask;

if (spi >= 64)
- return false;
+ return (cell == &root_cell);
else if (spi >= 32)
spi_mask = cell->arch.spis >> 32;
else
--
2.7.0


antonios...@huawei.com

Feb 24, 2016, 11:24:48 AM
From: Antonios Motakis <antonios...@huawei.com>

PSCI actually identifies target CPUs by their MPIDR value, which may
differ from the logical id of the CPU. This patch is the first step
toward properly handling the CPU affinity levels in the MPIDR.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm/include/asm/control.h | 1 +
hypervisor/arch/arm/include/asm/percpu.h | 11 +++++++++++
hypervisor/arch/arm/lib.c | 12 ++++++++++++
hypervisor/arch/arm/psci.c | 5 ++---
hypervisor/arch/arm/setup.c | 1 +
5 files changed, 27 insertions(+), 3 deletions(-)

diff --git a/hypervisor/arch/arm/include/asm/control.h b/hypervisor/arch/arm/include/asm/control.h
index f050e76..f81f879 100644
--- a/hypervisor/arch/arm/include/asm/control.h
+++ b/hypervisor/arch/arm/include/asm/control.h
@@ -38,6 +38,7 @@ struct registers* arch_handle_exit(struct per_cpu *cpu_data,
bool arch_handle_phys_irq(struct per_cpu *cpu_data, u32 irqn);
void arch_reset_self(struct per_cpu *cpu_data);
void arch_shutdown_self(struct per_cpu *cpu_data);
+unsigned int arm_cpu_by_mpid(struct cell *cell, unsigned long mpid);

void __attribute__((noreturn)) vmreturn(struct registers *guest_regs);
void __attribute__((noreturn)) arch_shutdown_mmu(struct per_cpu *cpu_data);
diff --git a/hypervisor/arch/arm/include/asm/percpu.h b/hypervisor/arch/arm/include/asm/percpu.h
index 3ab3a68..9c06c67 100644
--- a/hypervisor/arch/arm/include/asm/percpu.h
+++ b/hypervisor/arch/arm/include/asm/percpu.h
@@ -32,6 +32,16 @@

struct pending_irq;

+union mpidr {
+ u32 val;
+ struct {
+ u8 aff0;
+ u8 aff1;
+ u8 aff2;
+ u8 pad1;
+ } f;
+};
+
struct per_cpu {
/* Keep these two in sync with defines above! */
u8 stack[PAGE_SIZE];
@@ -63,6 +73,7 @@ struct per_cpu {
bool flush_vcpu_caches;
int shutdown_state;
bool shutdown;
+ union mpidr mpidr;
bool failed;
} __attribute__((aligned(PAGE_SIZE)));

diff --git a/hypervisor/arch/arm/lib.c b/hypervisor/arch/arm/lib.c
index 96798a0..51d2fa6 100644
--- a/hypervisor/arch/arm/lib.c
+++ b/hypervisor/arch/arm/lib.c
@@ -14,6 +14,7 @@
#include <jailhouse/processor.h>
#include <jailhouse/string.h>
#include <jailhouse/types.h>
+#include <asm/control.h>
#include <asm/percpu.h>
#include <asm/sysregs.h>

@@ -33,3 +34,14 @@ unsigned int arm_cpu_virt2phys(struct cell *cell, unsigned int virt_id)

return -1;
}
+
+unsigned int arm_cpu_by_mpid(struct cell *cell, unsigned long mpid)
+{
+ unsigned int cpu;
+
+ for_each_cpu(cpu, cell->cpu_set)
+ if (mpid == (per_cpu(cpu)->mpidr.val & MPIDR_CPUID_MASK))
+ return cpu;
+
+ return -1;
+}
diff --git a/hypervisor/arch/arm/psci.c b/hypervisor/arch/arm/psci.c
index 3ebbd50..bc1297d 100644
--- a/hypervisor/arch/arm/psci.c
+++ b/hypervisor/arch/arm/psci.c
@@ -78,11 +78,10 @@ int psci_wait_cpu_stopped(unsigned int cpu_id)
static long psci_emulate_cpu_on(struct per_cpu *cpu_data,
struct trap_context *ctx)
{
- unsigned int target = ctx->regs[1];
unsigned int cpu;
struct psci_mbox *mbox;

- cpu = arm_cpu_virt2phys(cpu_data->cell, target);
+ cpu = arm_cpu_by_mpid(cpu_data->cell, ctx->regs[1]);
if (cpu == -1)
/* Virtual id not in set */
return PSCI_DENIED;
@@ -97,7 +96,7 @@ static long psci_emulate_cpu_on(struct per_cpu *cpu_data,
static long psci_emulate_affinity_info(struct per_cpu *cpu_data,
struct trap_context *ctx)
{
- unsigned int cpu = arm_cpu_virt2phys(cpu_data->cell, ctx->regs[1]);
+ unsigned int cpu = arm_cpu_by_mpid(cpu_data->cell, ctx->regs[1]);

if (cpu == -1)
/* Virtual id not in set */
diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index ef6f9e0..20a1384 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -56,6 +56,7 @@ int arch_cpu_init(struct per_cpu *cpu_data)

cpu_data->psci_mbox.entry = 0;
cpu_data->virt_id = cpu_data->cpu_id;
+ arm_read_sysreg(MPIDR_EL1, cpu_data->mpidr.val);

/*
* Copy the registers to restore from the linux stack here, because we
--
2.7.0


antonios...@huawei.com

Feb 24, 2016, 11:24:49 AM
From: Antonios Motakis <antonios...@huawei.com>

Currently, during a panic, panic_stop will print the current cell
on the CPU where the panic occurred. However, if the hypervisor
panics sufficiently early during initialization, we may end up in
a situation where the root cell has not been initialized yet. This
can easily cause a trap loop, making the panic output less useful.

The fix is simple enough: check that the cell has been initialized
before attempting to print its name.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/control.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/hypervisor/control.c b/hypervisor/control.c
index 94ed6c6..5220d9c 100644
--- a/hypervisor/control.c
+++ b/hypervisor/control.c
@@ -814,8 +814,12 @@ long hypercall(unsigned long code, unsigned long arg1, unsigned long arg2)
*/
void __attribute__((noreturn)) panic_stop(void)
{
- panic_printk("Stopping CPU %d (Cell: \"%s\")\n", this_cpu_id(),
- this_cell()->config->name);
+ panic_printk("Stopping CPU %d", this_cpu_id());
+
+ if (this_cell() && this_cell()->config)
+ panic_printk(" (Cell: \"%s\")", this_cell()->config->name);
+
+ panic_printk("\n");

if (phys_processor_id() == panic_cpu)
panic_in_progress = 0;
--
2.7.0


antonios...@huawei.com

Feb 24, 2016, 11:24:49 AM
From: Antonios Motakis <antonios...@huawei.com>

Hide the TLB flushes issued by the MMU code behind a macro, so we
can increase our chances of reusing some of this code.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm/include/asm/processor.h | 2 ++
hypervisor/arch/arm/mmu_cell.c | 2 +-
2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/hypervisor/arch/arm/include/asm/processor.h b/hypervisor/arch/arm/include/asm/processor.h
index c6144a7..907a28e 100644
--- a/hypervisor/arch/arm/include/asm/processor.h
+++ b/hypervisor/arch/arm/include/asm/processor.h
@@ -197,6 +197,8 @@ static inline bool is_el2(void)
return (psr & PSR_MODE_MASK) == PSR_HYP_MODE;
}

+#define tlb_flush_guest() arm_write_sysreg(TLBIALL, 1)
+
#endif /* !__ASSEMBLY__ */

#endif /* !_JAILHOUSE_ASM_PROCESSOR_H */
diff --git a/hypervisor/arch/arm/mmu_cell.c b/hypervisor/arch/arm/mmu_cell.c
index 05c7591..989d468 100644
--- a/hypervisor/arch/arm/mmu_cell.c
+++ b/hypervisor/arch/arm/mmu_cell.c
@@ -110,7 +110,7 @@ void arch_cpu_tlb_flush(struct per_cpu *cpu_data)
* Invalidate all stage-1 and 2 TLB entries for the current VMID
* ERET will ensure completion of these ops
*/
- arm_write_sysreg(TLBIALL, 1);
+ tlb_flush_guest();
dsb(nsh);
cpu_data->flush_vcpu_caches = false;
}
--
2.7.0


antonios...@huawei.com

Feb 24, 2016, 11:24:49 AM
From: Antonios Motakis <antonios...@huawei.com>

The function page_alloc allows us to allocate any number of
pages; however, the allocation is only guaranteed to be aligned
on a page boundary.

The new page_alloc implementation takes an extra bool align
parameter, which allows us to allocate N pages aligned to
N * PAGE_SIZE. N needs to be a power of two.

This will be used on the AArch64 port of Jailhouse to support
physical address ranges from 40 to 44 bits: in these
configurations, the initial page table level may take up
multiple consecutive pages.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm/include/asm/paging.h | 3 +-
hypervisor/arch/arm/irqchip.c | 3 +-
hypervisor/arch/arm/mmu_cell.c | 2 +-
hypervisor/arch/x86/apic.c | 2 +-
hypervisor/arch/x86/include/asm/paging.h | 3 +-
hypervisor/arch/x86/ioapic.c | 4 +--
hypervisor/arch/x86/svm.c | 6 ++--
hypervisor/arch/x86/vmx.c | 2 +-
hypervisor/arch/x86/vtd.c | 12 +++----
hypervisor/control.c | 4 +--
hypervisor/include/jailhouse/paging.h | 2 +-
hypervisor/mmio.c | 2 +-
hypervisor/paging.c | 58 ++++++++++++++++++--------------
hypervisor/pci.c | 9 ++---
hypervisor/pci_ivshmem.c | 2 +-
15 files changed, 63 insertions(+), 51 deletions(-)

diff --git a/hypervisor/arch/arm/include/asm/paging.h b/hypervisor/arch/arm/include/asm/paging.h
index 0372b2c..28ba3e0 100644
--- a/hypervisor/arch/arm/include/asm/paging.h
+++ b/hypervisor/arch/arm/include/asm/paging.h
@@ -18,7 +18,8 @@
#include <asm/processor.h>
#include <asm/sysregs.h>

-#define PAGE_SIZE 4096
+#define PAGE_SHIFT 12
+#define PAGE_SIZE (1 << PAGE_SHIFT)
#define PAGE_MASK ~(PAGE_SIZE - 1)
#define PAGE_OFFS_MASK (PAGE_SIZE - 1)

diff --git a/hypervisor/arch/arm/irqchip.c b/hypervisor/arch/arm/irqchip.c
index 2d7840e..ff3c88f 100644
--- a/hypervisor/arch/arm/irqchip.c
+++ b/hypervisor/arch/arm/irqchip.c
@@ -39,7 +39,8 @@ static int irqchip_init_pending(struct per_cpu *cpu_data)
struct pending_irq *pend_array;

if (cpu_data->pending_irqs == NULL) {
- cpu_data->pending_irqs = pend_array = page_alloc(&mem_pool, 1);
+ cpu_data->pending_irqs = pend_array =
+ page_alloc(&mem_pool, 1, 0);
if (pend_array == NULL)
return -ENOMEM;
} else {
diff --git a/hypervisor/arch/arm/mmu_cell.c b/hypervisor/arch/arm/mmu_cell.c
index 4885f8c..05c7591 100644
--- a/hypervisor/arch/arm/mmu_cell.c
+++ b/hypervisor/arch/arm/mmu_cell.c
@@ -58,7 +58,7 @@ unsigned long arch_paging_gphys2phys(struct per_cpu *cpu_data,
int arch_mmu_cell_init(struct cell *cell)
{
cell->arch.mm.root_paging = hv_paging;
- cell->arch.mm.root_table = page_alloc(&mem_pool, 1);
+ cell->arch.mm.root_table = page_alloc(&mem_pool, 1, 0);
if (!cell->arch.mm.root_table)
return -ENOMEM;

diff --git a/hypervisor/arch/x86/apic.c b/hypervisor/arch/x86/apic.c
index d3b4211..7560ac0 100644
--- a/hypervisor/arch/x86/apic.c
+++ b/hypervisor/arch/x86/apic.c
@@ -170,7 +170,7 @@ int apic_init(void)
apic_ops.send_ipi = send_x2apic_ipi;
using_x2apic = true;
} else if (apicbase & APIC_BASE_EN) {
- xapic_page = page_alloc(&remap_pool, 1);
+ xapic_page = page_alloc(&remap_pool, 1, 0);
if (!xapic_page)
return trace_error(-ENOMEM);
err = paging_create(&hv_paging_structs, XAPIC_BASE, PAGE_SIZE,
diff --git a/hypervisor/arch/x86/include/asm/paging.h b/hypervisor/arch/x86/include/asm/paging.h
index e90077b..064790c 100644
--- a/hypervisor/arch/x86/include/asm/paging.h
+++ b/hypervisor/arch/x86/include/asm/paging.h
@@ -16,7 +16,8 @@
#include <jailhouse/types.h>
#include <asm/processor.h>

-#define PAGE_SIZE 4096
+#define PAGE_SHIFT 12
+#define PAGE_SIZE (1 << PAGE_SHIFT)
#define PAGE_MASK ~(PAGE_SIZE - 1)
#define PAGE_OFFS_MASK (PAGE_SIZE - 1)

diff --git a/hypervisor/arch/x86/ioapic.c b/hypervisor/arch/x86/ioapic.c
index 82521fb..17b71a1 100644
--- a/hypervisor/arch/x86/ioapic.c
+++ b/hypervisor/arch/x86/ioapic.c
@@ -230,7 +230,7 @@ ioapic_get_or_add_phys(const struct jailhouse_irqchip *irqchip)
if (num_phys_ioapics == IOAPIC_MAX_CHIPS)
return trace_error(NULL);

- phys_ioapic->reg_base = page_alloc(&remap_pool, 1);
+ phys_ioapic->reg_base = page_alloc(&remap_pool, 1, 0);
if (!phys_ioapic->reg_base)
return trace_error(NULL);
err = paging_create(&hv_paging_structs, irqchip->address, PAGE_SIZE,
@@ -343,7 +343,7 @@ int ioapic_cell_init(struct cell *cell)
if (cell->config->num_irqchips > IOAPIC_MAX_CHIPS)
return trace_error(-ERANGE);

- cell->arch.ioapics = page_alloc(&mem_pool, 1);
+ cell->arch.ioapics = page_alloc(&mem_pool, 1, 0);
if (!cell->arch.ioapics)
return -ENOMEM;

diff --git a/hypervisor/arch/x86/svm.c b/hypervisor/arch/x86/svm.c
index 72df24b..5f38f7c 100644
--- a/hypervisor/arch/x86/svm.c
+++ b/hypervisor/arch/x86/svm.c
@@ -269,7 +269,7 @@ int vcpu_vendor_init(void)

/* Map guest parking code (shared between cells and CPUs) */
parking_pt.root_paging = npt_paging;
- parking_pt.root_table = parked_mode_npt = page_alloc(&mem_pool, 1);
+ parking_pt.root_table = parked_mode_npt = page_alloc(&mem_pool, 1, 0);
if (!parked_mode_npt)
return -ENOMEM;
err = paging_create(&parking_pt, paging_hvirt2phys(parking_code),
@@ -288,7 +288,7 @@ int vcpu_vendor_init(void)
msrpm[SVM_MSRPM_0000][MSR_X2APIC_ICR/4] = 0x02;
} else {
if (has_avic) {
- avic_page = page_alloc(&remap_pool, 1);
+ avic_page = page_alloc(&remap_pool, 1, 0);
if (!avic_page)
return trace_error(-ENOMEM);
}
@@ -303,7 +303,7 @@ int vcpu_vendor_cell_init(struct cell *cell)
u64 flags;

/* allocate iopm */
- cell->arch.svm.iopm = page_alloc(&mem_pool, IOPM_PAGES);
+ cell->arch.svm.iopm = page_alloc(&mem_pool, IOPM_PAGES, 0);
if (!cell->arch.svm.iopm)
return err;

diff --git a/hypervisor/arch/x86/vmx.c b/hypervisor/arch/x86/vmx.c
index 9b57d8c..0f7fb06 100644
--- a/hypervisor/arch/x86/vmx.c
+++ b/hypervisor/arch/x86/vmx.c
@@ -329,7 +329,7 @@ int vcpu_vendor_cell_init(struct cell *cell)
int err;

/* allocate io_bitmap */
- cell->arch.vmx.io_bitmap = page_alloc(&mem_pool, PIO_BITMAP_PAGES);
+ cell->arch.vmx.io_bitmap = page_alloc(&mem_pool, PIO_BITMAP_PAGES, 0);
if (!cell->arch.vmx.io_bitmap)
return -ENOMEM;

diff --git a/hypervisor/arch/x86/vtd.c b/hypervisor/arch/x86/vtd.c
index 18d6e4c..524777a 100644
--- a/hypervisor/arch/x86/vtd.c
+++ b/hypervisor/arch/x86/vtd.c
@@ -429,7 +429,7 @@ static int vtd_init_ir_emulation(unsigned int unit_no, void *reg_base)
unit->irt_entries = 2 << (unit->irta & VTD_IRTA_SIZE_MASK);

size = PAGE_ALIGN(sizeof(struct vtd_irte_usage) * unit->irt_entries);
- unit->irte_map = page_alloc(&mem_pool, size / PAGE_SIZE);
+ unit->irte_map = page_alloc(&mem_pool, size / PAGE_SIZE, 0);
if (!unit->irte_map)
return -ENOMEM;

@@ -465,7 +465,7 @@ int iommu_init(void)
return trace_error(-EINVAL);

int_remap_table =
- page_alloc(&mem_pool, PAGES(sizeof(union vtd_irte) << n));
+ page_alloc(&mem_pool, PAGES(sizeof(union vtd_irte) << n), 0);
if (!int_remap_table)
return -ENOMEM;

@@ -475,11 +475,11 @@ int iommu_init(void)
if (units == 0)
return trace_error(-EINVAL);

- dmar_reg_base = page_alloc(&remap_pool, units);
+ dmar_reg_base = page_alloc(&remap_pool, units, 0);
if (!dmar_reg_base)
return trace_error(-ENOMEM);

- unit_inv_queue = page_alloc(&mem_pool, units);
+ unit_inv_queue = page_alloc(&mem_pool, units, 0);
if (!unit_inv_queue)
return -ENOMEM;

@@ -673,7 +673,7 @@ int iommu_add_pci_device(struct cell *cell, struct pci_device *device)
context_entry_table =
paging_phys2hvirt(*root_entry_lo & PAGE_MASK);
} else {
- context_entry_table = page_alloc(&mem_pool, 1);
+ context_entry_table = page_alloc(&mem_pool, 1, 0);
if (!context_entry_table)
goto error_nomem;
*root_entry_lo = VTD_ROOT_PRESENT |
@@ -741,7 +741,7 @@ int iommu_cell_init(struct cell *cell)
return trace_error(-ERANGE);

cell->arch.vtd.pg_structs.root_paging = vtd_paging;
- cell->arch.vtd.pg_structs.root_table = page_alloc(&mem_pool, 1);
+ cell->arch.vtd.pg_structs.root_table = page_alloc(&mem_pool, 1, 0);
if (!cell->arch.vtd.pg_structs.root_table)
return -ENOMEM;

diff --git a/hypervisor/control.c b/hypervisor/control.c
index 5220d9c..a1e6a1b 100644
--- a/hypervisor/control.c
+++ b/hypervisor/control.c
@@ -183,7 +183,7 @@ int cell_init(struct cell *cell)
if (cpu_set_size > PAGE_SIZE)
return trace_error(-EINVAL);
if (cpu_set_size > sizeof(cell->small_cpu_set.bitmap)) {
- cpu_set = page_alloc(&mem_pool, 1);
+ cpu_set = page_alloc(&mem_pool, 1, 0);
if (!cpu_set)
return -ENOMEM;
} else {
@@ -386,7 +386,7 @@ static int cell_create(struct per_cpu *cpu_data, unsigned long config_address)
}

cell_pages = PAGES(sizeof(*cell) + cfg_total_size);
- cell = page_alloc(&mem_pool, cell_pages);
+ cell = page_alloc(&mem_pool, cell_pages, 0);
if (!cell) {
err = -ENOMEM;
goto err_resume;
diff --git a/hypervisor/include/jailhouse/paging.h b/hypervisor/include/jailhouse/paging.h
index 27286f0..cef4fcf 100644
--- a/hypervisor/include/jailhouse/paging.h
+++ b/hypervisor/include/jailhouse/paging.h
@@ -182,7 +182,7 @@ extern struct paging_structures hv_paging_structs;

unsigned long paging_get_phys_invalid(pt_entry_t pte, unsigned long virt);

-void *page_alloc(struct page_pool *pool, unsigned int num);
+void *page_alloc(struct page_pool *pool, unsigned int num, bool aligned);
void page_free(struct page_pool *pool, void *first_page, unsigned int num);

/**
diff --git a/hypervisor/mmio.c b/hypervisor/mmio.c
index 94dc286..6dbb12b 100644
--- a/hypervisor/mmio.c
+++ b/hypervisor/mmio.c
@@ -40,7 +40,7 @@ int mmio_cell_init(struct cell *cell)
pages = page_alloc(&mem_pool,
PAGES(cell->max_mmio_regions *
(sizeof(struct mmio_region_location) +
- sizeof(struct mmio_region_handler))));
+ sizeof(struct mmio_region_handler))), 0);
if (!pages)
return -ENOMEM;

diff --git a/hypervisor/paging.c b/hypervisor/paging.c
index 5f127d7..d58d677 100644
--- a/hypervisor/paging.c
+++ b/hypervisor/paging.c
@@ -91,38 +91,45 @@ static unsigned long find_next_free_page(struct page_pool *pool,
* Allocate consecutive pages from the specified pool.
* @param pool Page pool to allocate from.
* @param num Number of pages.
+ * @param align Pages should be aligned by num * PAGE_SIZE.
+ * In this case, num absolutely needs to be a power
+ * of 2, or a giant octopus will eat your machine.
*
* @return Pointer to first page or NULL if allocation failed.
*
* @see page_free
*/
-void *page_alloc(struct page_pool *pool, unsigned int num)
+void *page_alloc(struct page_pool *pool, unsigned int num, bool align)
{
- unsigned long start, last, next;
- unsigned int allocated;
+ unsigned long start, next, i;
+ /* the pool itself might not be aligned to our desired size */
+ unsigned long offset_mask = num - 1;
+ unsigned int offset = ((unsigned long) pool->base_address >> PAGE_SHIFT)
+ & offset_mask;

- start = find_next_free_page(pool, 0);
- if (start == INVALID_PAGE_NR)
- return NULL;
+ next = align ? offset : 0;

-restart:
- for (allocated = 1, last = start; allocated < num;
- allocated++, last = next) {
- next = find_next_free_page(pool, last + 1);
- if (next == INVALID_PAGE_NR)
- return NULL;
- if (next != last + 1) {
- start = next;
- goto restart;
- }
- }
+ while ((start = find_next_free_page(pool, next)) != INVALID_PAGE_NR) {
+
+ if (align && (start - offset) & offset_mask)
+ goto next_chunk; /* not aligned */

- for (allocated = 0; allocated < num; allocated++)
- set_bit(start + allocated, pool->used_bitmap);
+ for (i = start; i < start + num; i++)
+ if (test_bit(i, pool->used_bitmap))
+ goto next_chunk; /* not available */

- pool->used_pages += num;
+ for (i = start; i < start + num; i++)
+ set_bit(i, pool->used_bitmap);
+
+ pool->used_pages += num;
+
+ return pool->base_address + start * PAGE_SIZE;
+
+next_chunk:
+ next += align ? num - ((start - offset) & offset_mask) : 1;
+ }

- return pool->base_address + start * PAGE_SIZE;
+ return NULL;
}

/**
@@ -208,7 +215,7 @@ static int split_hugepage(const struct paging *paging, pt_entry_t pte,
flags = paging->get_flags(pte);

sub_structs.root_paging = paging + 1;
- sub_structs.root_table = page_alloc(&mem_pool, 1);
+ sub_structs.root_table = page_alloc(&mem_pool, 1, 0);
if (!sub_structs.root_table)
return -ENOMEM;
paging->set_next_pt(pte, paging_hvirt2phys(sub_structs.root_table));
@@ -277,7 +284,7 @@ int paging_create(const struct paging_structures *pg_structs,
pt = paging_phys2hvirt(
paging->get_next_pt(pte));
} else {
- pt = page_alloc(&mem_pool, 1);
+ pt = page_alloc(&mem_pool, 1, 0);
if (!pt)
return -ENOMEM;
paging->set_next_pt(pte,
@@ -489,7 +496,8 @@ int paging_init(void)
set_bit(n, mem_pool.used_bitmap);
mem_pool.flags = PAGE_SCRUB_ON_FREE;

- remap_pool.used_bitmap = page_alloc(&mem_pool, NUM_REMAP_BITMAP_PAGES);
+ remap_pool.used_bitmap =
+ page_alloc(&mem_pool, NUM_REMAP_BITMAP_PAGES, 0);
remap_pool.used_pages =
hypervisor_header.max_cpus * NUM_TEMPORARY_PAGES;
for (n = 0; n < remap_pool.used_pages; n++)
@@ -498,7 +506,7 @@ int paging_init(void)
arch_paging_init();

hv_paging_structs.root_paging = hv_paging;
- hv_paging_structs.root_table = page_alloc(&mem_pool, 1);
+ hv_paging_structs.root_table = page_alloc(&mem_pool, 1, 0);
if (!hv_paging_structs.root_table)
return -ENOMEM;

diff --git a/hypervisor/pci.c b/hypervisor/pci.c
index 2b08ef5..4fbe300 100644
--- a/hypervisor/pci.c
+++ b/hypervisor/pci.c
@@ -365,7 +365,7 @@ int pci_init(void)
end_bus = system_config->platform_info.x86.mmconfig_end_bus;
mmcfg_size = (end_bus + 1) * 256 * 4096;

- pci_space = page_alloc(&remap_pool, mmcfg_size / PAGE_SIZE);
+ pci_space = page_alloc(&remap_pool, mmcfg_size / PAGE_SIZE, 0);
if (!pci_space)
return trace_error(-ENOMEM);

@@ -572,7 +572,8 @@ static int pci_add_physical_device(struct cell *cell, struct pci_device *device)
err = arch_pci_add_physical_device(cell, device);

if (!err && device->info->msix_address) {
- device->msix_table = page_alloc(&remap_pool, size / PAGE_SIZE);
+ device->msix_table =
+ page_alloc(&remap_pool, size / PAGE_SIZE, 0);
if (!device->msix_table) {
err = trace_error(-ENOMEM);
goto error_remove_dev;
@@ -589,7 +590,7 @@ static int pci_add_physical_device(struct cell *cell, struct pci_device *device)
if (device->info->num_msix_vectors > PCI_EMBEDDED_MSIX_VECTS) {
pages = PAGES(sizeof(union pci_msix_vector) *
device->info->num_msix_vectors);
- device->msix_vectors = page_alloc(&mem_pool, pages);
+ device->msix_vectors = page_alloc(&mem_pool, pages, 0);
if (!device->msix_vectors) {
err = -ENOMEM;
goto error_unmap_table;
@@ -661,7 +662,7 @@ int pci_cell_init(struct cell *cell)
mmio_region_register(cell, mmcfg_start, mmcfg_size,
pci_mmconfig_access_handler, NULL);

- cell->pci_devices = page_alloc(&mem_pool, devlist_pages);
+ cell->pci_devices = page_alloc(&mem_pool, devlist_pages, 0);
if (!cell->pci_devices)
return -ENOMEM;

diff --git a/hypervisor/pci_ivshmem.c b/hypervisor/pci_ivshmem.c
index 5c1e6a7..e22beaf 100644
--- a/hypervisor/pci_ivshmem.c
+++ b/hypervisor/pci_ivshmem.c
@@ -448,7 +448,7 @@ int pci_ivshmem_init(struct cell *cell, struct pci_device *device)
/* this is the first endpoint, allocate a new datastructure */
for (ivp = &ivshmem_list; *ivp; ivp = &((*ivp)->next))
; /* empty loop */
- *ivp = page_alloc(&mem_pool, 1);
+ *ivp = page_alloc(&mem_pool, 1, 0);
if (!(*ivp))
return -ENOMEM;
ivshmem_connect_cell(*ivp, device, mem, 0);
--
2.7.0


antonios...@huawei.com

Feb 24, 2016, 11:24:50 AM2/24/16
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, mazda....@amd.com
From: Antonios Motakis <antonios...@huawei.com>

The previous version of the macro allowed more false positives
than necessary.

The SVC32 and SVC64 versions of the PSCI function ids differ only
in one bit. Fold this bit into the function id prefix before comparing.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm/include/asm/psci.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hypervisor/arch/arm/include/asm/psci.h b/hypervisor/arch/arm/include/asm/psci.h
index 43a9c65..ba0adac 100644
--- a/hypervisor/arch/arm/include/asm/psci.h
+++ b/hypervisor/arch/arm/include/asm/psci.h
@@ -46,7 +46,7 @@
#define PSCI_CPU_IS_ON 0
#define PSCI_CPU_IS_OFF 1

-#define IS_PSCI_FN(hvc) ((((hvc) >> 24) & 0x84) == 0x84)
+#define IS_PSCI_FN(hvc) ((((hvc) >> 24) | 0x40) == 0xc4)

#define PSCI_INVALID_ADDRESS 0xffffffff

--
2.7.0


antonios...@huawei.com

Feb 24, 2016, 11:24:51 AM2/24/16
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, mazda....@amd.com
From: Antonios Motakis <antonios...@huawei.com>

We can reuse the code under hypervisor/arch/arm/mmu_cell.c for the
AArch64 port, save for the value we use for the VTCR. In addition
to the flags set by the AArch32 port, AArch64 needs to set the size
of the address space to 40 bits; at least initially, until we
implement the new MMU features in ARMv8.

We put this value behind a define in asm/paging.h to allow this reuse.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm/include/asm/paging.h | 6 ++++++
hypervisor/arch/arm/mmu_cell.c | 7 +------
2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/hypervisor/arch/arm/include/asm/paging.h b/hypervisor/arch/arm/include/asm/paging.h
index 8bd9e7a..a1775ed 100644
--- a/hypervisor/arch/arm/include/asm/paging.h
+++ b/hypervisor/arch/arm/include/asm/paging.h
@@ -120,6 +120,12 @@
#define TCR_SL0_SHIFT 6
#define TCR_S_SHIFT 4

+#define VTCR_CELL (T0SZ | SL0 << TCR_SL0_SHIFT \
+ | (TCR_RGN_WB_WA << TCR_IRGN0_SHIFT) \
+ | (TCR_RGN_WB_WA << TCR_ORGN0_SHIFT) \
+ | (TCR_INNER_SHAREABLE << TCR_SH0_SHIFT)\
+ | VTCR_RES1)
+
/*
* Hypervisor memory attribute indexes:
* 0: normal WB, RA, WA, non-transient
diff --git a/hypervisor/arch/arm/mmu_cell.c b/hypervisor/arch/arm/mmu_cell.c
index c4aec96..bcce958 100644
--- a/hypervisor/arch/arm/mmu_cell.c
+++ b/hypervisor/arch/arm/mmu_cell.c
@@ -80,12 +80,7 @@ int arch_mmu_cpu_cell_init(struct per_cpu *cpu_data)
struct cell *cell = cpu_data->cell;
unsigned long cell_table = paging_hvirt2phys(cell->arch.mm.root_table);
u64 vttbr = 0;
- u32 vtcr = T0SZ
- | SL0 << TCR_SL0_SHIFT
- | (TCR_RGN_WB_WA << TCR_IRGN0_SHIFT)
- | (TCR_RGN_WB_WA << TCR_ORGN0_SHIFT)
- | (TCR_INNER_SHAREABLE << TCR_SH0_SHIFT)
- | VTCR_RES1;
+ u32 vtcr = VTCR_CELL;

/* We share page tables between CPUs, so we need to check
* that all CPUs support the same PARange. */
--
2.7.0


antonios...@huawei.com

Feb 24, 2016, 11:24:51 AM2/24/16
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, mazda....@amd.com
From: Antonios Motakis <antonios...@huawei.com>

We currently support 3 levels of page tables for a 39-bit PA range
on ARM. This patch implements support for 4-level page tables,
and for 3-level page tables with a concatenated level 1 root page
table.

On AArch32 we stick with the current restriction of building for
a 39 bit physical address space; however this change will allow
us to support a 40 to 48 bit PARange on AArch64.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm/include/asm/paging.h | 17 +++++-
hypervisor/arch/arm/include/asm/paging_modes.h | 2 +
hypervisor/arch/arm/mmu_cell.c | 16 ++++-
hypervisor/arch/arm/paging.c | 81 ++++++++++++++++++++++++++
4 files changed, 110 insertions(+), 6 deletions(-)

diff --git a/hypervisor/arch/arm/include/asm/paging.h b/hypervisor/arch/arm/include/asm/paging.h
index 28ba3e0..8bd9e7a 100644
--- a/hypervisor/arch/arm/include/asm/paging.h
+++ b/hypervisor/arch/arm/include/asm/paging.h
@@ -31,11 +31,13 @@
* by IPA[20:12].
* This would allows to cover a 4GB memory map by using 4 concatenated level-2
* page tables and thus provide better table walk performances.
- * For the moment, the core doesn't allow to use concatenated pages, so we will
- * use three levels instead, starting at level 1.
+ * For the moment, we will implement the first level for AArch32 using only
+ * one level.
*
- * TODO: add a "u32 concatenated" field to the paging struct
+ * TODO: implement larger PARange support for AArch32
*/
+#define ARM_CELL_ROOT_PT_SZ 1
+
#if MAX_PAGE_TABLE_LEVELS < 3
#define T0SZ 0
#define SL0 0
@@ -164,6 +166,15 @@

typedef u64 *pt_entry_t;

+extern unsigned int cpu_parange;
+
+/* cpu_parange initialized in arch_paging_init */
+static inline unsigned int get_cpu_parange(void)
+{
+ /* TODO: implement proper PARange support on AArch32 */
+ return 39;
+}
+
/* Only executed on hypervisor paging struct changes */
static inline void arch_paging_flush_page_tlbs(unsigned long page_addr)
{
diff --git a/hypervisor/arch/arm/include/asm/paging_modes.h b/hypervisor/arch/arm/include/asm/paging_modes.h
index 72950eb..14d4a26 100644
--- a/hypervisor/arch/arm/include/asm/paging_modes.h
+++ b/hypervisor/arch/arm/include/asm/paging_modes.h
@@ -16,6 +16,8 @@

/* Long-descriptor paging */
extern const struct paging arm_paging[];
+extern const struct paging arm_s2_paging_alt[];
+extern const struct paging *cell_paging;

#define hv_paging arm_paging

diff --git a/hypervisor/arch/arm/mmu_cell.c b/hypervisor/arch/arm/mmu_cell.c
index 989d468..c4aec96 100644
--- a/hypervisor/arch/arm/mmu_cell.c
+++ b/hypervisor/arch/arm/mmu_cell.c
@@ -57,8 +57,13 @@ unsigned long arch_paging_gphys2phys(struct per_cpu *cpu_data,

int arch_mmu_cell_init(struct cell *cell)
{
- cell->arch.mm.root_paging = hv_paging;
- cell->arch.mm.root_table = page_alloc(&mem_pool, 1, 0);
+ if (!get_cpu_parange())
+ return trace_error(-EINVAL);
+
+ cell->arch.mm.root_paging = cell_paging;
+ cell->arch.mm.root_table =
+ page_alloc(&mem_pool, ARM_CELL_ROOT_PT_SZ, 1);
+
if (!cell->arch.mm.root_table)
return -ENOMEM;

@@ -67,7 +72,7 @@ int arch_mmu_cell_init(struct cell *cell)

void arch_mmu_cell_destroy(struct cell *cell)
{
- page_free(&mem_pool, cell->arch.mm.root_table, 1);
+ page_free(&mem_pool, cell->arch.mm.root_table, ARM_CELL_ROOT_PT_SZ);
}

int arch_mmu_cpu_cell_init(struct per_cpu *cpu_data)
@@ -82,6 +87,11 @@ int arch_mmu_cpu_cell_init(struct per_cpu *cpu_data)
| (TCR_INNER_SHAREABLE << TCR_SH0_SHIFT)
| VTCR_RES1;

+ /* We share page tables between CPUs, so we need to check
+ * that all CPUs support the same PARange. */
+ if (cpu_parange != get_cpu_parange())
+ return trace_error(-EINVAL);
+
if (cell->id > 0xff) {
panic_printk("No cell ID available\n");
return -E2BIG;
diff --git a/hypervisor/arch/arm/paging.c b/hypervisor/arch/arm/paging.c
index 8fdd034..93b3ba4 100644
--- a/hypervisor/arch/arm/paging.c
+++ b/hypervisor/arch/arm/paging.c
@@ -12,6 +12,8 @@

#include <jailhouse/paging.h>

+unsigned int cpu_parange = 0;
+
static bool arm_entry_valid(pt_entry_t entry, unsigned long flags)
{
// FIXME: validate flags!
@@ -40,6 +42,20 @@ static bool arm_page_table_empty(page_table_t page_table)
return true;
}

+#if MAX_PAGE_TABLE_LEVELS > 3
+static pt_entry_t arm_get_l0_entry(page_table_t page_table, unsigned long virt)
+{
+ return &page_table[(virt & L0_VADDR_MASK) >> 39];
+}
+
+static unsigned long arm_get_l0_phys(pt_entry_t pte, unsigned long virt)
+{
+ if ((*pte & PTE_TABLE_FLAGS) == PTE_TABLE_FLAGS)
+ return INVALID_PHYS_ADDR;
+ return (*pte & PTE_L0_BLOCK_ADDR_MASK) | (virt & BLOCK_512G_VADDR_MASK);
+}
+#endif
+
#if MAX_PAGE_TABLE_LEVELS > 2
static pt_entry_t arm_get_l1_entry(page_table_t page_table, unsigned long virt)
{
@@ -59,6 +75,18 @@ static unsigned long arm_get_l1_phys(pt_entry_t pte, unsigned long virt)
}
#endif

+static pt_entry_t arm_get_l1_alt_entry(page_table_t page_table, unsigned long virt)
+{
+ return &page_table[(virt & BIT_MASK(48,30)) >> 30];
+}
+
+static unsigned long arm_get_l1_alt_phys(pt_entry_t pte, unsigned long virt)
+{
+ if ((*pte & PTE_TABLE_FLAGS) == PTE_TABLE_FLAGS)
+ return INVALID_PHYS_ADDR;
+ return (*pte & BIT_MASK(48,30)) | (virt & BIT_MASK(29,0));
+}
+
static pt_entry_t arm_get_l2_entry(page_table_t page_table, unsigned long virt)
{
return &page_table[(virt & L2_VADDR_MASK) >> 21];
@@ -110,6 +138,18 @@ static unsigned long arm_get_l3_phys(pt_entry_t pte, unsigned long virt)
.page_table_empty = arm_page_table_empty,

const struct paging arm_paging[] = {
+#if MAX_PAGE_TABLE_LEVELS > 3
+ {
+ ARM_PAGING_COMMON
+ /* No block entries for level 0! */
+ .page_size = 0,
+ .get_entry = arm_get_l0_entry,
+ .get_phys = arm_get_l0_phys,
+
+ .set_next_pt = arm_set_l12_table,
+ .get_next_pt = arm_get_l12_table,
+ },
+#endif
#if MAX_PAGE_TABLE_LEVELS > 2
{
ARM_PAGING_COMMON
@@ -144,6 +184,47 @@ const struct paging arm_paging[] = {
}
};

+const struct paging arm_s2_paging_alt[] = {
+ {
+ ARM_PAGING_COMMON
+ .page_size = 0,
+ .get_entry = arm_get_l1_alt_entry,
+ .get_phys = arm_get_l1_alt_phys,
+
+ .set_next_pt = arm_set_l12_table,
+ .get_next_pt = arm_get_l12_table,
+ },
+ {
+ ARM_PAGING_COMMON
+ /* Block entry: 2MB */
+ .page_size = 2 * 1024 * 1024,
+ .get_entry = arm_get_l2_entry,
+ .set_terminal = arm_set_l2_block,
+ .get_phys = arm_get_l2_phys,
+
+ .set_next_pt = arm_set_l12_table,
+ .get_next_pt = arm_get_l12_table,
+ },
+ {
+ ARM_PAGING_COMMON
+ /* Page entry: 4kB */
+ .page_size = 4 * 1024,
+ .get_entry = arm_get_l3_entry,
+ .set_terminal = arm_set_l3_page,
+ .get_phys = arm_get_l3_phys,
+ }
+};
+
+const struct paging *cell_paging;
+
void arch_paging_init(void)
{
+ cpu_parange = get_cpu_parange();
+
+ if (cpu_parange < 44)
+ /* 4 level page tables not supported for stage 2.
+ * We need to use multiple consecutive pages for L1 */
+ cell_paging = arm_s2_paging_alt;
+ else
+ cell_paging = arm_paging;
}
--
2.7.0


antonios...@huawei.com

Feb 24, 2016, 11:24:51 AM2/24/16
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, mazda....@amd.com
From: Antonios Motakis <antonios...@huawei.com>

Currently the function phys_processor_id identifies the current
CPU by reading the MPIDR register. However, on systems with
multiple implemented affinity levels, the scheme used by this
register is hierarchical and does not correspond to the logical
IDs allocated to CPUs.

Change phys_processor_id to return the logical CPU id, so we
don't run into problems when we start implementing support for
affinity levels.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm/lib.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/hypervisor/arch/arm/lib.c b/hypervisor/arch/arm/lib.c
index c96d18b..aa41de9 100644
--- a/hypervisor/arch/arm/lib.c
+++ b/hypervisor/arch/arm/lib.c
@@ -13,14 +13,12 @@
#include <jailhouse/processor.h>
#include <jailhouse/string.h>
#include <jailhouse/types.h>
+#include <asm/percpu.h>
#include <asm/sysregs.h>

int phys_processor_id(void)
{
- u32 mpidr;
-
- arm_read_sysreg(MPIDR_EL1, mpidr);
- return mpidr & MPIDR_CPUID_MASK;
+ return this_cpu_data()->cpu_id;
}

void *memcpy(void *dest, const void *src, unsigned long n)
--
2.7.0


antonios...@huawei.com

Feb 24, 2016, 11:25:07 AM2/24/16
to jailho...@googlegroups.com, Dmitry Voytik, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, mazda....@amd.com
From: Dmitry Voytik <dmitry...@huawei.com>

Synchronize the I-cache with the D-cache after loading the hypervisor
image or a cell image. This must be done on arm64 according to the
ARMv8 ARM; see page 1712, D3.4.6 "Non-cacheable accesses
and instruction caches".

This patch fixes coherency problems observed on real HW targets.
On x86 this operation is a NOP.

Signed-off-by: Dmitry Voytik <dmitry...@huawei.com>
---
driver/cell.c | 7 +++++++
driver/main.c | 7 +++++++
2 files changed, 14 insertions(+)

diff --git a/driver/cell.c b/driver/cell.c
index dc1b3c8..853b3bf 100644
--- a/driver/cell.c
+++ b/driver/cell.c
@@ -14,6 +14,7 @@
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
+#include <asm/cacheflush.h>

#include "cell.h"
#include "main.h"
@@ -323,6 +324,12 @@ static int load_image(struct cell *cell,
(void __user *)(unsigned long)image.source_address,
image.size))
err = -EFAULT;
+ /*
+ * ARMv8 requires to clean D-cache and invalidate I-cache for memory
+ * containing new instructions. On x86 this is a NOP.
+ */
+ flush_icache_range((unsigned long)(image_mem + page_offs),
+ (unsigned long)(image_mem + page_offs) + image.size);

vunmap(image_mem);

diff --git a/driver/main.c b/driver/main.c
index 5644641..0ce33a9 100644
--- a/driver/main.c
+++ b/driver/main.c
@@ -256,6 +256,13 @@ static int jailhouse_cmd_enable(struct jailhouse_system __user *arg)
header = (struct jailhouse_header *)hypervisor_mem;
header->max_cpus = max_cpus;

+ /*
+ * ARMv8 requires to clean D-cache and invalidate I-cache for memory
+ * containing new instructions. On x86 this is a NOP.
+ */
+ flush_icache_range((unsigned long)hypervisor_mem,
+ (unsigned long)(hypervisor_mem + header->core_size));
+
config = (struct jailhouse_system *)
(hypervisor_mem + hv_core_and_percpu_size);
if (copy_from_user(config, arg, config_size)) {
--
2.7.0


antonios...@huawei.com

Feb 24, 2016, 11:25:07 AM2/24/16
to jailho...@googlegroups.com, Claudio Fontana, jan.k...@siemens.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, mazda....@amd.com, Antonios Motakis
From: Claudio Fontana <claudio...@huawei.com>

Move the memcpy implementation from the ARM port to the
core library.

Signed-off-by: Claudio Fontana <claudio...@huawei.com>
Signed-off-by: Antonios Motakis <antonios...@huawei.com>
[antonios...@huawei.com: removed all signs of weakness!]
---
hypervisor/arch/arm/lib.c | 12 ------------
hypervisor/lib.c | 12 ++++++++++++
2 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/hypervisor/arch/arm/lib.c b/hypervisor/arch/arm/lib.c
index aa41de9..7fb42f9 100644
--- a/hypervisor/arch/arm/lib.c
+++ b/hypervisor/arch/arm/lib.c
@@ -20,15 +20,3 @@ int phys_processor_id(void)
{
return this_cpu_data()->cpu_id;
}
-
-void *memcpy(void *dest, const void *src, unsigned long n)
-{
- unsigned long i;
- const char *csrc = src;
- char *cdest = dest;
-
- for (i = 0; i < n; i++)
- cdest[i] = csrc[i];
-
- return dest;
-}
diff --git a/hypervisor/lib.c b/hypervisor/lib.c
index f2a27eb..bfa5647 100644
--- a/hypervisor/lib.c
+++ b/hypervisor/lib.c
@@ -32,3 +32,15 @@ int strcmp(const char *s1, const char *s2)
}
return *(unsigned char *)s1 - *(unsigned char *)s2;
}
+
+void *memcpy(void *dest, const void *src, unsigned long n)
+{
+ unsigned long i;
+ const char *csrc = src;
+ char *cdest = dest;
+
+ for (i = 0; i < n; i++)
+ cdest[i] = csrc[i];
+
+ return dest;
+}
--
2.7.0


Jan Kiszka

Mar 7, 2016, 2:58:40 AM3/7/16
to antonios...@huawei.com, jailho...@googlegroups.com, Claudio Fontana, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, mazda....@amd.com
Let's refactor at this chance:

const u8 *s = src;
u8 *d = dest;

while (n-- > 0)
*d++ = *s++;
return dest;

Ideally, "char" shouldn't be used for data, only for characters.

Jan

--
Siemens AG, Corporate Technology, CT RDA ITP SES-DE
Corporate Competence Center Embedded Linux

Jan Kiszka

Mar 7, 2016, 2:58:52 AM3/7/16
to antonios...@huawei.com, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, mazda....@amd.com
On 2016-02-24 17:23, antonios...@huawei.com wrote:
> From: Antonios Motakis <antonios...@huawei.com>
>
> The function page_alloc allows us to allocate any number of
> pages, however they will always be aligned on page boundaries.
>
> The new page_alloc implementation takes an extra bool align
> parameter, which allows us to allocate N pages that will be
> aligned by N * PAGE_SIZE. N needs to be a power of two.

Why did you drop the page_alloc_aligned interface? I was only asking for
a common core, not a common interface. Then your patch would be much
less invasive.

Antonios Motakis

Mar 7, 2016, 12:32:06 PM3/7/16
to Jan Kiszka, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, mazda....@amd.com


On 07-Mar-16 08:58, Jan Kiszka wrote:
> On 2016-02-24 17:23, antonios...@huawei.com wrote:
>> From: Antonios Motakis <antonios...@huawei.com>
>>
>> The function page_alloc allows us to allocate any number of
>> pages, however they will always be aligned on page boundaries.
>>
>> The new page_alloc implementation takes an extra bool align
>> parameter, which allows us to allocate N pages that will be
>> aligned by N * PAGE_SIZE. N needs to be a power of two.
>
> Why did you drop the page_alloc_aligned interface? I was only asking for
> a common core, not a common interface. Then your patch would be much
> less invasive.

I misunderstood then, I will split the interface again!

--
Antonios Motakis
Virtualization Engineer
Huawei Technologies Duesseldorf GmbH
European Research Center
Riesstrasse 25, 80992 München

Jan Kiszka

Mar 21, 2016, 7:27:13 AM3/21/16
to antonios...@huawei.com, jailho...@googlegroups.com, Dmitry Voytik, claudio...@huawei.com, jani.k...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, mazda....@amd.com
On 2016-02-24 17:23, antonios...@huawei.com wrote:
How is the situation on ARMv7? If it's the same, we should adjust
comments and log.

Jan Kiszka

Mar 21, 2016, 7:51:57 AM3/21/16
to antonios...@huawei.com, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, mazda....@amd.com
On 2016-02-24 17:23, antonios...@huawei.com wrote:
Let's save some #if's here: console will harmlessly remain NULL if the
block below is not present. Or does the very latest and greatest gcc
find some dead code then?

> unsigned long config_size;
> const char *fw_name;
> long max_cpus;
> @@ -235,8 +240,9 @@ static int jailhouse_cmd_enable(struct jailhouse_system __user *arg)
> config_size >= hv_mem->size - hv_core_and_percpu_size)
> goto error_release_fw;
>
> - hypervisor_mem = jailhouse_ioremap(hv_mem->phys_start, JAILHOUSE_BASE,
> - hv_mem->size);
> + hypervisor_mem = jailhouse_ioremap(hv_mem->phys_start,
> + JAILHOUSE_BORROW_ROOT_PT ?
> + JAILHOUSE_BASE : 0, hv_mem->size);

I suppose the code will look better formatted when factoring out some
remap_addr or so variable here.

> if (!hypervisor_mem) {
> pr_err("jailhouse: Unable to map RAM reserved for hypervisor "
> "at %08lx\n", (unsigned long)hv_mem->phys_start);
> @@ -258,6 +264,7 @@ static int jailhouse_cmd_enable(struct jailhouse_system __user *arg)
> }
>
> if (config->debug_console.flags & JAILHOUSE_MEM_IO) {
> +#if JAILHOUSE_BORROW_ROOT_PT == 1

Actually, #ifdef would be nicer (more common pattern).

> console = ioremap(config->debug_console.phys_start,
> config->debug_console.size);
> if (!console) {
> @@ -270,6 +277,10 @@ static int jailhouse_cmd_enable(struct jailhouse_system __user *arg)
> /* The hypervisor has no notion of address spaces, so we need
> * to enforce conversion. */
> header->debug_console_base = (void * __force)console;
> +#else
> + header->debug_console_base =
> + (void * __force) config->debug_console.phys_start;
> +#endif
> }
>
> err = jailhouse_cell_prepare_root(&config->root_cell);
> @@ -294,8 +305,10 @@ static int jailhouse_cmd_enable(struct jailhouse_system __user *arg)
> goto error_free_cell;
> }
>
> +#if JAILHOUSE_BORROW_ROOT_PT == 1
> if (console)
> iounmap(console);
> +#endif

If console remains NULL, you can keep the code without #if here as well.

Jan Kiszka

Mar 21, 2016, 7:58:24 AM3/21/16
to antonios...@huawei.com, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, mazda....@amd.com
On 2016-02-24 17:23, antonios...@huawei.com wrote:
Good point. I would just prefer:

struct cell *cell = this_cell();

panic_printk("Stopping CPU %d (Cell: \"%s\")\n", this_cpu_id(),
cell && cell->config ? cell->config->name : "<UNSET>");

Antonios Motakis

Mar 21, 2016, 8:31:21 AM3/21/16
to Jan Kiszka, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, mazda....@amd.com
No problem, if you don't mind the unused variable on ARMv8. I think we can get GCC to do our bidding :)

>
>> unsigned long config_size;
>> const char *fw_name;
>> long max_cpus;
>> @@ -235,8 +240,9 @@ static int jailhouse_cmd_enable(struct jailhouse_system __user *arg)
>> config_size >= hv_mem->size - hv_core_and_percpu_size)
>> goto error_release_fw;
>>
>> - hypervisor_mem = jailhouse_ioremap(hv_mem->phys_start, JAILHOUSE_BASE,
>> - hv_mem->size);
>> + hypervisor_mem = jailhouse_ioremap(hv_mem->phys_start,
>> + JAILHOUSE_BORROW_ROOT_PT ?
>> + JAILHOUSE_BASE : 0, hv_mem->size);
>
> I suppose the code will look better formatted when factoring out some
> remap_addr or so variable here.

Can do!

>
>> if (!hypervisor_mem) {
>> pr_err("jailhouse: Unable to map RAM reserved for hypervisor "
>> "at %08lx\n", (unsigned long)hv_mem->phys_start);
>> @@ -258,6 +264,7 @@ static int jailhouse_cmd_enable(struct jailhouse_system __user *arg)
>> }
>>
>> if (config->debug_console.flags & JAILHOUSE_MEM_IO) {
>> +#if JAILHOUSE_BORROW_ROOT_PT == 1
>
> Actually, #ifdef would be nicer (more common pattern).
>

Ack! (I guess this whole chunk will get a bit more readable).

Antonios Motakis

Mar 21, 2016, 8:31:23 AM3/21/16
to Jan Kiszka, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, mazda....@amd.com
Ack!

Jan Kiszka

Apr 29, 2016, 1:21:44 AM4/29/16
to antonios...@huawei.com, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, mazda....@amd.com
On 2016-02-24 17:23, antonios...@huawei.com wrote:
> From: Antonios Motakis <antonios...@huawei.com>
>
>
> This patch series has been split off from the main Jailhouse for
> AArch64 patch series, in order to keep each series shorter.
>
> In this series, a few changes are included to the core in
> preparation to the main patch series. In addition, most of the
> patches are on the ARM AArch32 architecture port of Jailhouse.
> Since the AArch64 port attempts to share some code with AArch32,
> a few changes and code moves are needed.
>

What is your plan to move forward? I had some comments on this part of
the series, so maybe you have already addressed them and could roll out a new
version? Or will you send a complete new round anyway in the near future?

Jan

Antonios Motakis

Apr 29, 2016, 9:31:48 AM4/29/16
to Jan Kiszka, jailho...@googlegroups.com, Dmitry Voytik, claudio...@huawei.com, jani.k...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, mazda....@amd.com
On ARMv7, the firmware does its own cache maintenance and doesn't rely on Linux maintaining the range containing the new cell code.

Also the firmware on ARMv7 does funky things, like running during one part of the initialization with the MMU turned off, and has to do some extra dcache maintenance to pull that off. So that flush keeps things simple on ARMv8, but on ARMv7 it would be just an extraneous flush, and I believe not sufficient to keep things safe.

>
> Jan

Antonios Motakis

Apr 29, 2016, 9:32:52 AM4/29/16
to Jan Kiszka, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, mazda....@amd.com
Hello,

I was focused on figuring out the instability issues you reported on your board, among other things. A few I have addressed already locally.

If you think this part of the series can be considered for upstreaming sooner than the rest (i.e. if you don't have many more issues with it), then I think it's best that I prepare a new version ahead of the other series.

Or, I can prepare a new version of all three series addressing all issues discovered so far (minus the instability issue which I still can't reproduce).

Jan Kiszka

Apr 29, 2016, 11:14:14 AM4/29/16
to Antonios Motakis, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, mazda....@amd.com
Let's put it like this: if a full update will likely be ready in a week
or so, then I can wait. If it will probably take longer, then let's
refresh the preparatory series first so that I can do my testing and
re-review of the changes. Just to parallelize things a bit.