[PATCH v3 05/13] core: Add support for aligned page allocation


antonios...@huawei.com

Jun 17, 2016, 3:11:27 PM
to jailho...@googlegroups.com, Jan Kiszka, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, mazda....@amd.com
From: Jan Kiszka <jan.k...@siemens.com>

Refactor page_alloc into page_alloc_internal, which accepts an additional
constraint on its allocation: align_mask. The allocated region now has
its start page chosen so that page_number & align_mask is zero. If no
alignment is required, align_mask just needs to be set to 0, which is
exactly what page_alloc does.

In addition, the new function page_alloc_aligned is introduced to return
page regions aligned according to their size (an allocation of num pages
is aligned on num * PAGE_SIZE). This implies that num needs to be a
power of two.
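
As a minimal standalone sketch of the mask arithmetic (illustrative
only, not part of the patch):

#include <assert.h>

int main(void)
{
	unsigned int num = 4;			/* must be a power of two */
	unsigned long align_mask = num - 1;	/* 0x3 for num == 4 */

	/* A candidate start page qualifies iff its number clears the
	 * mask, i.e. iff it is a multiple of num. */
	assert((8UL & align_mask) == 0);	/* page 8 is 4-aligned */
	assert((6UL & align_mask) != 0);	/* page 6 would be skipped */
	return 0;
}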

This will be used on the AArch64 port of Jailhouse to support physical
address ranges from 40 to 44 bits: in these configurations, the initial
page table level may take up multiple consecutive pages.

Based on patch by Antonios Motakis.

Signed-off-by: Jan Kiszka <jan.k...@siemens.com>
---
hypervisor/arch/arm/include/asm/paging.h | 3 +-
hypervisor/arch/x86/include/asm/paging.h | 3 +-
hypervisor/include/jailhouse/paging.h | 1 +
hypervisor/paging.c | 60 ++++++++++++++++++++++++++------
4 files changed, 55 insertions(+), 12 deletions(-)

diff --git a/hypervisor/arch/arm/include/asm/paging.h b/hypervisor/arch/arm/include/asm/paging.h
index 0372b2c..28ba3e0 100644
--- a/hypervisor/arch/arm/include/asm/paging.h
+++ b/hypervisor/arch/arm/include/asm/paging.h
@@ -18,7 +18,8 @@
#include <asm/processor.h>
#include <asm/sysregs.h>

-#define PAGE_SIZE 4096
+#define PAGE_SHIFT 12
+#define PAGE_SIZE (1 << PAGE_SHIFT)
#define PAGE_MASK ~(PAGE_SIZE - 1)
#define PAGE_OFFS_MASK (PAGE_SIZE - 1)

diff --git a/hypervisor/arch/x86/include/asm/paging.h b/hypervisor/arch/x86/include/asm/paging.h
index e90077b..064790c 100644
--- a/hypervisor/arch/x86/include/asm/paging.h
+++ b/hypervisor/arch/x86/include/asm/paging.h
@@ -16,7 +16,8 @@
#include <jailhouse/types.h>
#include <asm/processor.h>

-#define PAGE_SIZE 4096
+#define PAGE_SHIFT 12
+#define PAGE_SIZE (1 << PAGE_SHIFT)
#define PAGE_MASK ~(PAGE_SIZE - 1)
#define PAGE_OFFS_MASK (PAGE_SIZE - 1)

diff --git a/hypervisor/include/jailhouse/paging.h b/hypervisor/include/jailhouse/paging.h
index 27286f0..6c2555f 100644
--- a/hypervisor/include/jailhouse/paging.h
+++ b/hypervisor/include/jailhouse/paging.h
@@ -183,6 +183,7 @@ extern struct paging_structures hv_paging_structs;
unsigned long paging_get_phys_invalid(pt_entry_t pte, unsigned long virt);

void *page_alloc(struct page_pool *pool, unsigned int num);
+void *page_alloc_aligned(struct page_pool *pool, unsigned int num);
void page_free(struct page_pool *pool, void *first_page, unsigned int num);

/**
diff --git a/hypervisor/paging.c b/hypervisor/paging.c
index f24d56c..1f22887 100644
--- a/hypervisor/paging.c
+++ b/hypervisor/paging.c
@@ -89,32 +89,44 @@ static unsigned long find_next_free_page(struct page_pool *pool,

/**
* Allocate consecutive pages from the specified pool.
- * @param pool Page pool to allocate from.
- * @param num Number of pages.
+ * @param pool Page pool to allocate from.
+ * @param num Number of pages.
+ * @param align_mask Choose start so that start_page_no & align_mask == 0.
*
* @return Pointer to first page or NULL if allocation failed.
*
* @see page_free
*/
-void *page_alloc(struct page_pool *pool, unsigned int num)
+static void *page_alloc_internal(struct page_pool *pool, unsigned int num,
+ unsigned long align_mask)
{
- unsigned long start, last, next;
+ /* The pool itself might not be aligned as required. */
+ unsigned long aligned_start =
+ ((unsigned long)pool->base_address >> PAGE_SHIFT) & align_mask;
+ unsigned long next = aligned_start;
+ unsigned long start, last;
unsigned int allocated;

- start = find_next_free_page(pool, 0);
+restart:
+ /* Forward the search start to the next aligned page. */
+ if ((next - aligned_start) & align_mask)
+ next += num - ((next - aligned_start) & align_mask);
+
+ start = next = find_next_free_page(pool, next);
if (start == INVALID_PAGE_NR || num == 0)
return NULL;

-restart:
+ /* Enforce alignment (none if align_mask is 0). */
+ if ((start - aligned_start) & align_mask)
+ goto restart;
+
for (allocated = 1, last = start; allocated < num;
allocated++, last = next) {
next = find_next_free_page(pool, last + 1);
if (next == INVALID_PAGE_NR)
return NULL;
- if (next != last + 1) {
- start = next;
- goto restart;
- }
+ if (next != last + 1)
+ goto restart; /* not consecutive */
}

for (allocated = 0; allocated < num; allocated++)
@@ -126,6 +138,34 @@ restart:
}

/**
+ * Allocate consecutive pages from the specified pool.
+ * @param pool Page pool to allocate from.
+ * @param num Number of pages.
+ *
+ * @return Pointer to first page or NULL if allocation failed.
+ *
+ * @see page_free
+ */
+void *page_alloc(struct page_pool *pool, unsigned int num)
+{
+ return page_alloc_internal(pool, num, 0);
+}
+
+/**
+ * Allocate aligned consecutive pages from the specified pool.
+ * @param pool Page pool to allocate from.
+ * @param num Number of pages. Num needs to be a power of 2.
+ *
+ * @return Pointer to first page or NULL if allocation failed.
+ *
+ * @see page_free
+ */
+void *page_alloc_aligned(struct page_pool *pool, unsigned int num)
+{
+ return page_alloc_internal(pool, num, num - 1);
+}
+
+/**
* Release pages to the specified pool.
* @param pool Page pool to release to.
* @param page Address of first page.
--
2.8.0.rc3


antonios...@huawei.com

Jun 17, 2016, 3:11:27 PM
to jailho...@googlegroups.com, Claudio Fontana, jan.k...@siemens.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, mazda....@amd.com, Antonios Motakis
From: Claudio Fontana <claudio...@huawei.com>

Remove the memcpy implementation from the ARM port, and add a
generic version to the core library for all architectures.
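
As a quick host-side sanity check of the byte-wise semantics (a hedged
sketch, not part of the patch; note that memcpy assumes non-overlapping
regions):

#include <assert.h>
#include <string.h>

int main(void)
{
	const char src[] = "jailhouse";
	char dst[sizeof(src)];

	memcpy(dst, src, sizeof(src));	/* copies n bytes, one by one */
	assert(strcmp(dst, src) == 0);
	return 0;
}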

Signed-off-by: Claudio Fontana <claudio...@huawei.com>
Signed-off-by: Antonios Motakis <antonios...@huawei.com>
[antonios...@huawei.com: removed all signs of weakness!]
---
hypervisor/arch/arm/lib.c | 12 ------------
hypervisor/lib.c | 10 ++++++++++
2 files changed, 10 insertions(+), 12 deletions(-)

diff --git a/hypervisor/arch/arm/lib.c b/hypervisor/arch/arm/lib.c
index cf81117..038bf9a 100644
--- a/hypervisor/arch/arm/lib.c
+++ b/hypervisor/arch/arm/lib.c
@@ -22,15 +22,3 @@ unsigned long phys_processor_id(void)
arm_read_sysreg(MPIDR_EL1, mpidr);
return mpidr & MPIDR_CPUID_MASK;
}
-
-void *memcpy(void *dest, const void *src, unsigned long n)
-{
- unsigned long i;
- const char *csrc = src;
- char *cdest = dest;
-
- for (i = 0; i < n; i++)
- cdest[i] = csrc[i];
-
- return dest;
-}
diff --git a/hypervisor/lib.c b/hypervisor/lib.c
index f2a27eb..fc9af7a 100644
--- a/hypervisor/lib.c
+++ b/hypervisor/lib.c
@@ -32,3 +32,13 @@ int strcmp(const char *s1, const char *s2)
}
return *(unsigned char *)s1 - *(unsigned char *)s2;
}
+
+void *memcpy(void *dest, const void *src, unsigned long n)
+{
+ const u8 *s = src;
+ u8 *d = dest;
+
+ while (n-- > 0)
+ *d++ = *s++;
+ return dest;
+}
--
2.8.0.rc3


antonios...@huawei.com

Jun 17, 2016, 3:11:27 PM
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, mazda....@amd.com
From: Antonios Motakis <antonios...@huawei.com>

This patch series has been split off from the main Jailhouse for
AArch64 patch series, in order to keep each series shorter.

This series includes a few core changes in preparation for the
main patch series. Beyond that, most of the patches touch the ARM
(AArch32) architecture port of Jailhouse: since the AArch64 port
attempts to share some code with AArch32, a few changes and code
moves are needed.

Changes from v2:
- Many minor touch ups...
Changes from v1:
- Dropped TLB related patch from the series, as this part might
still need more significant changes
- Mostly minor code improvements

Antonios Motakis (10):
driver: ioremap the hypervisor firmware to any kernel address
core: make phys_processor_id() return unsigned long
core: panic_stop: print current cell only if it has been set
arm: pass SPIs with large ids to the root cell
arm: psci: support multiple affinity levels in MPIDR
arm: replace IS_PSCI_FN macro with more explicit versions
arm: move the handle_irq_route function to the GICv3 module
arm: prepare port for 48 bit PARange support
arm: put the value of VTCR for cells in a define
arm: hide TLB flush behind a macro

Claudio Fontana (1):
core: lib: replace ARM memcpy implementation with generic version

Dmitry Voytik (1):
driver: sync I-cache, D-cache and memory

Jan Kiszka (1):
core: Add support for aligned page allocation

driver/cell.c | 9 +++
driver/main.c | 25 ++++++-
hypervisor/arch/arm/gic-common.c | 43 +-----------
hypervisor/arch/arm/gic-v3.c | 40 +++++++++++
hypervisor/arch/arm/include/asm/control.h | 1 +
hypervisor/arch/arm/include/asm/gic_common.h | 1 -
hypervisor/arch/arm/include/asm/gic_v3.h | 3 +
.../arch/arm/include/asm/jailhouse_hypercall.h | 1 +
hypervisor/arch/arm/include/asm/paging.h | 28 ++++++--
hypervisor/arch/arm/include/asm/paging_modes.h | 5 +-
hypervisor/arch/arm/include/asm/percpu.h | 1 +
hypervisor/arch/arm/include/asm/processor.h | 2 +
hypervisor/arch/arm/include/asm/psci.h | 3 +-
hypervisor/arch/arm/irqchip.c | 2 +-
hypervisor/arch/arm/lib.c | 20 +++---
hypervisor/arch/arm/mmu_cell.c | 17 ++---
hypervisor/arch/arm/paging.c | 82 +++++++++++++++++++++-
hypervisor/arch/arm/psci.c | 5 +-
hypervisor/arch/arm/setup.c | 1 +
hypervisor/arch/arm/traps.c | 4 +-
hypervisor/arch/x86/apic.c | 4 +-
hypervisor/arch/x86/control.c | 2 +-
.../arch/x86/include/asm/jailhouse_hypercall.h | 3 +-
hypervisor/arch/x86/include/asm/paging.h | 3 +-
hypervisor/control.c | 4 +-
hypervisor/include/jailhouse/paging.h | 1 +
hypervisor/include/jailhouse/printk.h | 2 +-
hypervisor/include/jailhouse/processor.h | 2 +-
hypervisor/lib.c | 10 +++
hypervisor/paging.c | 60 +++++++++++++---
hypervisor/printk.c | 4 +-
hypervisor/setup.c | 2 +-
32 files changed, 291 insertions(+), 99 deletions(-)

--
2.8.0.rc3


antonios...@huawei.com

Jun 17, 2016, 3:11:33 PM
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, mazda....@amd.com
From: Antonios Motakis <antonios...@huawei.com>

Hide the TLB flushes issued by the MMU code behind a macro, so we can
increase our chances of reusing some of this code.
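
For illustration, a sketch of how a 64-bit port could later redefine
the macro while the MMU code stays shared; the AArch64 branch below is
an assumption for this example, not part of this series:

#ifdef __aarch64__
/* Hypothetical: invalidate all stage 1 and 2 entries for the current
 * VMID, inner shareable. */
#define tlb_flush_guest()	asm volatile("tlbi vmalls12e1is" ::: "memory")
#else
#define tlb_flush_guest()	arm_write_sysreg(TLBIALL, 1)
#endif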

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm/include/asm/processor.h | 2 ++
hypervisor/arch/arm/mmu_cell.c | 2 +-
2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/hypervisor/arch/arm/include/asm/processor.h b/hypervisor/arch/arm/include/asm/processor.h
index c6144a7..907a28e 100644
--- a/hypervisor/arch/arm/include/asm/processor.h
+++ b/hypervisor/arch/arm/include/asm/processor.h
@@ -197,6 +197,8 @@ static inline bool is_el2(void)
return (psr & PSR_MODE_MASK) == PSR_HYP_MODE;
}

+#define tlb_flush_guest() arm_write_sysreg(TLBIALL, 1)
+
#endif /* !__ASSEMBLY__ */

#endif /* !_JAILHOUSE_ASM_PROCESSOR_H */
diff --git a/hypervisor/arch/arm/mmu_cell.c b/hypervisor/arch/arm/mmu_cell.c
index d3031de..d16c5ea 100644
--- a/hypervisor/arch/arm/mmu_cell.c
+++ b/hypervisor/arch/arm/mmu_cell.c
@@ -107,7 +107,7 @@ void arch_cpu_tlb_flush(struct per_cpu *cpu_data)
* Invalidate all stage-1 and 2 TLB entries for the current VMID
* ERET will ensure completion of these ops
*/
- arm_write_sysreg(TLBIALL, 1);
+ tlb_flush_guest();
dsb(nsh);
cpu_data->flush_vcpu_caches = false;
}
--
2.8.0.rc3


antonios...@huawei.com

Jun 17, 2016, 3:11:34 PM
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, mazda....@amd.com
From: Antonios Motakis <antonios...@huawei.com>

We can reuse the code under hypervisor/arch/arm/mmu_cell.c for the
AArch64 port, save for the value we use for the VTCR. In addition to
the flags set by the AArch32 port, AArch64 will need to set the size
of the address space.

We put this behind a define in asm/paging.h to allow this reuse.
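
As a hedged illustration of what "set the size of the address space"
means (hypothetical helper; in VTCR_EL2, the T0SZ field encodes 64
minus the input address size in bits):

#include <stdio.h>

/* Hypothetical: derive the T0SZ field for a given IPA width. */
static unsigned int vtcr_t0sz(unsigned int ipa_bits)
{
	return 64 - ipa_bits;
}

int main(void)
{
	printf("T0SZ for a 40-bit IPA space: %u\n", vtcr_t0sz(40)); /* 24 */
	return 0;
}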

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm/include/asm/paging.h | 6 ++++++
hypervisor/arch/arm/mmu_cell.c | 7 +------
2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/hypervisor/arch/arm/include/asm/paging.h b/hypervisor/arch/arm/include/asm/paging.h
index 98fc343..0afbc86 100644
--- a/hypervisor/arch/arm/include/asm/paging.h
+++ b/hypervisor/arch/arm/include/asm/paging.h
@@ -120,6 +120,12 @@
#define TCR_SL0_SHIFT 6
#define TCR_S_SHIFT 4

+#define VTCR_CELL (T0SZ | SL0 << TCR_SL0_SHIFT \
+ | (TCR_RGN_WB_WA << TCR_IRGN0_SHIFT) \
+ | (TCR_RGN_WB_WA << TCR_ORGN0_SHIFT) \
+ | (TCR_INNER_SHAREABLE << TCR_SH0_SHIFT)\
+ | VTCR_RES1)
+
/*
* Hypervisor memory attribute indexes:
* 0: normal WB, RA, WA, non-transient
diff --git a/hypervisor/arch/arm/mmu_cell.c b/hypervisor/arch/arm/mmu_cell.c
index fb5ad83..d3031de 100644
--- a/hypervisor/arch/arm/mmu_cell.c
+++ b/hypervisor/arch/arm/mmu_cell.c
@@ -77,12 +77,7 @@ int arch_mmu_cpu_cell_init(struct per_cpu *cpu_data)
struct cell *cell = cpu_data->cell;
unsigned long cell_table = paging_hvirt2phys(cell->arch.mm.root_table);
u64 vttbr = 0;
- u32 vtcr = T0SZ
- | SL0 << TCR_SL0_SHIFT
- | (TCR_RGN_WB_WA << TCR_IRGN0_SHIFT)
- | (TCR_RGN_WB_WA << TCR_ORGN0_SHIFT)
- | (TCR_INNER_SHAREABLE << TCR_SH0_SHIFT)
- | VTCR_RES1;
+ u32 vtcr = VTCR_CELL;

if (cell->id > 0xff) {
panic_printk("No cell ID available\n");
--
2.8.0.rc3


antonios...@huawei.com

Jun 17, 2016, 3:11:36 PM
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, mazda....@amd.com
From: Antonios Motakis <antonios...@huawei.com>

We currently support 3 levels of page tables for a 39-bit PA range
on ARM. This patch implements support for 4-level page tables, as
well as 3-level page tables with a concatenated level-1 root page
table.

On AArch32 we stick with the current restriction of building for
a 39-bit physical address space; however, this change will allow
us to support a 40- to 48-bit PARange on AArch64.
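
A hedged sketch of the arithmetic behind the concatenated root table
(hypothetical helper; with a 4kB granule, one stage-2 level-1 table
resolves 39 bits, and up to 16 tables may be concatenated at the
initial lookup level):

#include <stdio.h>

/* Hypothetical: consecutive level-1 root pages needed when stage-2
 * translation starts at level 1. Beyond 43 bits the 16-table limit
 * is exceeded, so a level-0 (4-level) walk is required instead. */
static unsigned int root_pt_pages(unsigned int parange)
{
	if (parange <= 39)
		return 1;
	return 1U << (parange - 39);	/* 2..16 for 40..43 bits */
}

int main(void)
{
	printf("40-bit PARange: %u root pages\n", root_pt_pages(40));
	printf("43-bit PARange: %u root pages\n", root_pt_pages(43));
	return 0;
}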

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm/include/asm/paging.h | 19 +++++-
hypervisor/arch/arm/include/asm/paging_modes.h | 5 +-
hypervisor/arch/arm/mmu_cell.c | 8 ++-
hypervisor/arch/arm/paging.c | 82 +++++++++++++++++++++++++-
4 files changed, 104 insertions(+), 10 deletions(-)

diff --git a/hypervisor/arch/arm/include/asm/paging.h b/hypervisor/arch/arm/include/asm/paging.h
index 28ba3e0..98fc343 100644
--- a/hypervisor/arch/arm/include/asm/paging.h
+++ b/hypervisor/arch/arm/include/asm/paging.h
@@ -31,11 +31,13 @@
* by IPA[20:12].
* This would allows to cover a 4GB memory map by using 4 concatenated level-2
* page tables and thus provide better table walk performances.
- * For the moment, the core doesn't allow to use concatenated pages, so we will
- * use three levels instead, starting at level 1.
+ * For the moment, we will implement the first level for AArch32 using only
+ * one level.
*
- * TODO: add a "u32 concatenated" field to the paging struct
+ * TODO: implement larger PARange support for AArch32
*/
+#define ARM_CELL_ROOT_PT_SZ 1
+
#if MAX_PAGE_TABLE_LEVELS < 3
#define T0SZ 0
#define SL0 0
@@ -164,6 +166,17 @@

typedef u64 *pt_entry_t;

+extern unsigned int cpu_parange;
+
+/* return the bits supported for the physical address range for this
+ * machine; in arch_paging_init this value will be kept in
+ * cpu_parange for later reference */
+static inline unsigned int get_cpu_parange(void)
+{
+ /* TODO: implement proper PARange support on AArch32 */
+ return 39;
+}
+
/* Only executed on hypervisor paging struct changes */
static inline void arch_paging_flush_page_tlbs(unsigned long page_addr)
{
diff --git a/hypervisor/arch/arm/include/asm/paging_modes.h b/hypervisor/arch/arm/include/asm/paging_modes.h
index 72950eb..6634f9f 100644
--- a/hypervisor/arch/arm/include/asm/paging_modes.h
+++ b/hypervisor/arch/arm/include/asm/paging_modes.h
@@ -15,8 +15,7 @@
#include <jailhouse/paging.h>

/* Long-descriptor paging */
-extern const struct paging arm_paging[];
-
-#define hv_paging arm_paging
+extern const struct paging *hv_paging;
+extern const struct paging *cell_paging;

#endif /* !__ASSEMBLY__ */
diff --git a/hypervisor/arch/arm/mmu_cell.c b/hypervisor/arch/arm/mmu_cell.c
index 4885f8c..fb5ad83 100644
--- a/hypervisor/arch/arm/mmu_cell.c
+++ b/hypervisor/arch/arm/mmu_cell.c
@@ -57,8 +57,10 @@ unsigned long arch_paging_gphys2phys(struct per_cpu *cpu_data,

int arch_mmu_cell_init(struct cell *cell)
{
- cell->arch.mm.root_paging = hv_paging;
- cell->arch.mm.root_table = page_alloc(&mem_pool, 1);
+ cell->arch.mm.root_paging = cell_paging;
+ cell->arch.mm.root_table =
+ page_alloc_aligned(&mem_pool, ARM_CELL_ROOT_PT_SZ);
+
if (!cell->arch.mm.root_table)
return -ENOMEM;

@@ -67,7 +69,7 @@ int arch_mmu_cell_init(struct cell *cell)

void arch_mmu_cell_destroy(struct cell *cell)
{
- page_free(&mem_pool, cell->arch.mm.root_table, 1);
+ page_free(&mem_pool, cell->arch.mm.root_table, ARM_CELL_ROOT_PT_SZ);
}

int arch_mmu_cpu_cell_init(struct per_cpu *cpu_data)
diff --git a/hypervisor/arch/arm/paging.c b/hypervisor/arch/arm/paging.c
index 8fdd034..2ba7da6 100644
--- a/hypervisor/arch/arm/paging.c
+++ b/hypervisor/arch/arm/paging.c
@@ -12,6 +12,8 @@

#include <jailhouse/paging.h>

+unsigned int cpu_parange = 0;
+
static bool arm_entry_valid(pt_entry_t entry, unsigned long flags)
{
// FIXME: validate flags!
@@ -40,6 +42,20 @@ static bool arm_page_table_empty(page_table_t page_table)
return true;
}

+#if MAX_PAGE_TABLE_LEVELS > 3
+static pt_entry_t arm_get_l0_entry(page_table_t page_table, unsigned long virt)
+{
+ return &page_table[(virt & L0_VADDR_MASK) >> 39];
+}
+
+static unsigned long arm_get_l0_phys(pt_entry_t pte, unsigned long virt)
+{
+ if ((*pte & PTE_TABLE_FLAGS) == PTE_TABLE_FLAGS)
+ return INVALID_PHYS_ADDR;
+ return (*pte & PTE_L0_BLOCK_ADDR_MASK) | (virt & BLOCK_512G_VADDR_MASK);
+}
+#endif
+
#if MAX_PAGE_TABLE_LEVELS > 2
static pt_entry_t arm_get_l1_entry(page_table_t page_table, unsigned long virt)
{
@@ -59,6 +75,18 @@ static unsigned long arm_get_l1_phys(pt_entry_t pte, unsigned long virt)
}
#endif

+static pt_entry_t arm_get_l1_alt_entry(page_table_t page_table, unsigned long virt)
+{
+ return &page_table[(virt & BIT_MASK(48,30)) >> 30];
+}
+
+static unsigned long arm_get_l1_alt_phys(pt_entry_t pte, unsigned long virt)
+{
+ if ((*pte & PTE_TABLE_FLAGS) == PTE_TABLE_FLAGS)
+ return INVALID_PHYS_ADDR;
+ return (*pte & BIT_MASK(48,30)) | (virt & BIT_MASK(29,0));
+}
+
static pt_entry_t arm_get_l2_entry(page_table_t page_table, unsigned long virt)
{
return &page_table[(virt & L2_VADDR_MASK) >> 21];
@@ -109,7 +137,18 @@ static unsigned long arm_get_l3_phys(pt_entry_t pte, unsigned long virt)
.clear_entry = arm_clear_entry, \
.page_table_empty = arm_page_table_empty,

-const struct paging arm_paging[] = {
+const static struct paging arm_paging[] = {
+#if MAX_PAGE_TABLE_LEVELS > 3
+ {
+ ARM_PAGING_COMMON
+ /* No block entries for level 0, so no need to set page_size */
+ .get_entry = arm_get_l0_entry,
+ .get_phys = arm_get_l0_phys,
+
+ .set_next_pt = arm_set_l12_table,
+ .get_next_pt = arm_get_l12_table,
+ },
+#endif
#if MAX_PAGE_TABLE_LEVELS > 2
{
ARM_PAGING_COMMON
@@ -144,6 +183,47 @@ const struct paging arm_paging[] = {
}
};

+const static struct paging arm_s2_paging_alt[] = {
+ {
+ ARM_PAGING_COMMON
+ .get_entry = arm_get_l1_alt_entry,
+ .get_phys = arm_get_l1_alt_phys,
+
+ .set_next_pt = arm_set_l12_table,
+ .get_next_pt = arm_get_l12_table,
+ },
+ {
+ ARM_PAGING_COMMON
+ /* Block entry: 2MB */
+ .page_size = 2 * 1024 * 1024,
+ .get_entry = arm_get_l2_entry,
+ .set_terminal = arm_set_l2_block,
+ .get_phys = arm_get_l2_phys,
+
+ .set_next_pt = arm_set_l12_table,
+ .get_next_pt = arm_get_l12_table,
+ },
+ {
+ ARM_PAGING_COMMON
+ /* Page entry: 4kB */
+ .page_size = 4 * 1024,
+ .get_entry = arm_get_l3_entry,
+ .set_terminal = arm_set_l3_page,
+ .get_phys = arm_get_l3_phys,
+ }
+};
+
+const struct paging *hv_paging = arm_paging;
+const struct paging *cell_paging;
+
void arch_paging_init(void)
{
+ cpu_parange = get_cpu_parange();
+
+ if (cpu_parange < 44)
+ /* 4 level page tables not supported for stage 2.
+ * We need to use multiple consecutive pages for L1 */
+ cell_paging = arm_s2_paging_alt;
+ else
+ cell_paging = arm_paging;
}
--
2.8.0.rc3


Jan Kiszka

Jun 23, 2016, 4:27:50 AM
to antonios...@huawei.com, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, mazda....@amd.com
I've merged this series with two modifications (build fix for patch 8,
slight refactoring of patch 10) into next. I've also rebased my
wip/arm64 mirror with the remaining patches on top (trivial conflict in
.travis.yml). Still works on ARM, tests on the Seattle pending.

Still need to read the other two series.

Thanks,
Jan

--
Siemens AG, Corporate Technology, CT RDA ITP SES-DE
Corporate Competence Center Embedded Linux

Jan Kiszka

Jun 23, 2016, 2:38:04 PM
to antonios...@huawei.com, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, mazda....@amd.com
Bad news: there seems to be a regression with my branch compared to a
state from February: interrupts no longer work for the second NIC when
assigned to a non-root Linux cell. Does this work for you?

Jan Kiszka

Jun 23, 2016, 2:45:16 PM
to antonios...@huawei.com, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, mazda....@amd.com
Interestingly, the UART does generate some:

# cat /proc/interrupts
           CPU0       CPU1
  1:          0          0     GIC  29  Edge   arch_timer
  2:       1263       1307     GIC  30  Edge   arch_timer
  3:        104          0     GIC 360  Level  uart-pl011
  4:          0          0     GIC 354  Level  eth0-pcs
  5:          0          0     GIC 356  Level  eth0
  6:          0          0     GIC 373  Edge   eth0-TxRx-0
  7:          0          0     GIC 374  Edge   eth0-TxRx-1

Please cross-check, then we can decide who may do some bisecting... :-/