[RFCv7 01/45] driver: ioremap the hypervisor firmware to any kernel address


antonios...@huawei.com

Dec 18, 2015, 4:33:22 PM
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

At the moment the Linux driver maps the Jailhouse binary at
JAILHOUSE_BASE. The underlying assumption is that Linux can map the
firmware, in kernel address space, at the same virtual address it has
been built to run from.

This assumption is unworkable on ARMv8 processors running in AArch64
mode: kernel memory lives in a high address region that is not
addressable from EL2, where the hypervisor runs.

This patch removes the assumption by introducing the
JAILHOUSE_BORROW_ROOT_PT define, which captures the behavior of the
currently supported architectures: they initially borrow the root
cell's page tables, so the firmware has to be mapped at JAILHOUSE_BASE.

We also turn the entry point in the header into an offset from the
Jailhouse load address, so we can enter the image regardless of where
it is mapped.

On AArch64, JAILHOUSE_BASE will be the physical address the hypervisor
is loaded at, so Jailhouse runs identity-mapped in EL2. The Linux
driver sets the debug UART address accordingly.
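
For illustration only (this mirrors the driver/main.c hunk below and is
not part of the patch): with the header now carrying a relative entry
point, the driver derives a callable address from wherever it happened
to map the firmware:

    /* sketch; error handling omitted */
    int (*entry)(unsigned int cpu);

    entry = (int (*)(unsigned int))
            ((unsigned long)hypervisor_mem + header->entry);
    err = entry(cpu);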

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
driver/main.c | 20 ++++++++++++++------
.../arch/arm/include/asm/jailhouse_hypercall.h | 1 +
.../arch/x86/include/asm/jailhouse_hypercall.h | 3 ++-
hypervisor/setup.c | 2 +-
4 files changed, 18 insertions(+), 8 deletions(-)

diff --git a/driver/main.c b/driver/main.c
index 92b985a..f5ff1ce 100644
--- a/driver/main.c
+++ b/driver/main.c
@@ -139,11 +139,14 @@ static void enter_hypervisor(void *info)
{
struct jailhouse_header *header = info;
unsigned int cpu = smp_processor_id();
+ int (*entry)(unsigned int);
int err;

+ entry = header->entry + (unsigned long) hypervisor_mem;
+
if (cpu < header->max_cpus)
/* either returns 0 or the same error code across all CPUs */
- err = header->entry(cpu);
+ err = entry(cpu);
else
err = -EINVAL;

@@ -235,8 +238,9 @@ static int jailhouse_cmd_enable(struct jailhouse_system __user *arg)
config_size >= hv_mem->size - hv_core_and_percpu_size)
goto error_release_fw;

- hypervisor_mem = jailhouse_ioremap(hv_mem->phys_start, JAILHOUSE_BASE,
- hv_mem->size);
+ hypervisor_mem = jailhouse_ioremap(hv_mem->phys_start,
+ JAILHOUSE_BORROW_ROOT_PT ?
+ JAILHOUSE_BASE : 0, hv_mem->size);
if (!hypervisor_mem) {
pr_err("jailhouse: Unable to map RAM reserved for hypervisor "
"at %08lx\n", (unsigned long)hv_mem->phys_start);
@@ -258,8 +262,12 @@ static int jailhouse_cmd_enable(struct jailhouse_system __user *arg)
}

if (config->debug_uart.flags & JAILHOUSE_MEM_IO) {
- uart = ioremap(config->debug_uart.phys_start,
- config->debug_uart.size);
+ if (JAILHOUSE_BORROW_ROOT_PT)
+ uart = ioremap(config->debug_uart.phys_start,
+ config->debug_uart.size);
+ else
+ uart = (void *) config->debug_uart.phys_start;
+
if (!uart) {
err = -EINVAL;
pr_err("jailhouse: Unable to map hypervisor UART at "
@@ -294,7 +302,7 @@ static int jailhouse_cmd_enable(struct jailhouse_system __user *arg)
goto error_free_cell;
}

- if (uart)
+ if (uart && JAILHOUSE_BORROW_ROOT_PT)
iounmap(uart);

release_firmware(hypervisor);
diff --git a/hypervisor/arch/arm/include/asm/jailhouse_hypercall.h b/hypervisor/arch/arm/include/asm/jailhouse_hypercall.h
index 480f487..45e7a3d 100644
--- a/hypervisor/arch/arm/include/asm/jailhouse_hypercall.h
+++ b/hypervisor/arch/arm/include/asm/jailhouse_hypercall.h
@@ -37,6 +37,7 @@
*/

#define JAILHOUSE_BASE 0xf0000000
+#define JAILHOUSE_BORROW_ROOT_PT 1

#define JAILHOUSE_CALL_INS ".arch_extension virt\n\t" \
"hvc #0x4a48"
diff --git a/hypervisor/arch/x86/include/asm/jailhouse_hypercall.h b/hypervisor/arch/x86/include/asm/jailhouse_hypercall.h
index ed72b28..1f6b85a 100644
--- a/hypervisor/arch/x86/include/asm/jailhouse_hypercall.h
+++ b/hypervisor/arch/x86/include/asm/jailhouse_hypercall.h
@@ -36,7 +36,8 @@
* THE POSSIBILITY OF SUCH DAMAGE.
*/

-#define JAILHOUSE_BASE __MAKE_UL(0xfffffffff0000000)
+#define JAILHOUSE_BASE __MAKE_UL(0xfffffffff0000000)
+#define JAILHOUSE_BORROW_ROOT_PT 1

/*
* As this is never called on a CPU without VM extensions,
diff --git a/hypervisor/setup.c b/hypervisor/setup.c
index 2139148..f502219 100644
--- a/hypervisor/setup.c
+++ b/hypervisor/setup.c
@@ -205,5 +205,5 @@ hypervisor_header = {
.signature = JAILHOUSE_SIGNATURE,
.core_size = (unsigned long)__hv_core_end - JAILHOUSE_BASE,
.percpu_size = sizeof(struct per_cpu),
- .entry = arch_entry,
+ .entry = arch_entry - JAILHOUSE_BASE,
};
--
2.4.3.368.g7974889


antonios...@huawei.com

Dec 18, 2015, 4:33:22 PM
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

Ho ho ho, just in time for the holidays!

This patch series is an RFC towards AArch64 support in the Jailhouse
hypervisor. It applies on the latest next branch from upstream, and
can also be pulled from https://github.com/tvelocity/jailhouse.git
(branch arm64_v7)

The patch series includes contributions by Claudio Fontana, and
Dmitry Voytik.

This version of the patch series features significant progress over
the last one. Not only do we have working inmates now, and not only
can we demonstrate Linux inmates, we can showcase this on two targets:
besides the ARM Foundation ARMv8 model, we now include cell
configuration files for a real hardware target, the AMD Seattle
development board!

However, this series is still an RFC; these patches DO break ARMv7
temporarily, and might cause problems on other architectures as well.
The breakage is minor, and we are not very far from dropping the RFC tag :)

The patch series has a few distinct parts:

Changes from RFCv6:
- Probably too many to list here!
- Initial support for MPID affinity levels (as needed by PSCI)
- Working inmates
- Linux inmate support, by Dmitry Voytik!
- Improved/fixed cache coherency handling by Dmitry Voytik
- Support for the 4th level of page tables, allowing for a PARange of 40-48 bits
- Many fixes for issues discovered by running Jailhouse on the AMD Seattle

Changes from RFCv5:
- PSCI support
- Hypercalls to the hypervisor
- Hypervisor disable, and also return to Linux properly when
initialization fails
- More clean ups, clean ups, fixes
Contributions by Dmitry Voytik:
- Implement cache flushes, maintenance of the memory system
- Refactored a lot of trap handling code and other mmio bits
- Dump cell registers support for AArch64.

Changes from RFCv4:
- Stubs now use trace_error, or block, to make it more obvious when
we run into a missing stub during development.
- Working root cell! Thanks to working MMU mappings, and working
GICv2 handling.
- MMU mappings are being set up for the hypervisor (EL2), and for
the root cell (Stage 2 EL1).
- Reworked the JAILHOUSE_IOMAP_ADDR decoupling from JAILHOUSE_BASE
- Clean ups, clean ups, fixes

Still to be improved:
- GICv3 support
- SMMU support
- Fix AArch32 again. Minor breakage due to me recklessly using
division in paging.c
- Clean things up; there's a lot of room for refactoring to
share more code between AArch32 and AArch64

Epilogue:
I aimed to publish this version before the holiday period, since
it has been a long while since the last version was posted. Hopefully
now everyone can be up to date on the work around this port. Problems
with this RFC are bound to crop up nonetheless, and I'm looking
forward to getting feedback.

Happy holidays!


Antonios Motakis (36):
driver: ioremap the hypervisor firmware to any kernel address
core: implement page_alloc_aligned function
hypervisor: arm: pass SPIs with large ids to the root cell
hypervisor: arm: make IS_PSCI_FN macro more restrictive
hypervisor: arm: lib.c: change type of mpidr from u32 to unsigned long
hypervisor: arm: hide TLB flush behind a macro
hypervisor: arm: put the value of VTCR for cells in a define
hypervisor: arm64: add sysregs helper macros
hypervisor: arm64: add asm/processor.h header for AArch64
hypervisor: arm64: add definitions for the AArch64 page table format
hypervisor: arm64: spinlock implementation
hypervisor: arm64: add percpu.h header file
hypervisor: arm64: add cell.h header file
hypervisor: arm64: add jailhouse_hypercall.h header file
hypervisor: arm64: minimum stubs to allow building on AArch64
core: add root cell configuration for the ARMv8 Foundation model
hypervisor: arm64: root cell configuration for the AMD Seattle
hypervisor: arm64: implement entry code to the hypervisor firmware
hypervisor: arm64: initial exception handling and catch EL2 aborts
hypervisor: arm64: plug the hypervisor mmu code
hypervisor: arm64: implement support for PA range of up to 48 bits
hypervisor: arm64: handle accesses to emulated mmio regions
hypervisor: arm64: plug the irqchip and GICv2 code from AArch32
hypervisor: arm64: PSCI support for SMP on AArch64
hypervisor: arm/arm64: psci: support multiple affinity levels in MPIDR
hypervisor: arm64: reanimate the root cell back from the dead
hypervisor: arm64: handle hypercalls from the cells
hypervisor: arm64: hypervisor disable support
hypervisor: arm64: implement cell control infrastructure
inmates: arm64: port inmate demos from AArch32 to AArch64
hypervisor: arm/arm64: add workaround for large SPIs on AMD Seattle
hypervisor: arm64: add uart demo cell config for Foundation v8
hypervisor: arm64: gic inmate cell config for foundation-v8
hypervisor: arm64: UART demo cell config for the AMD Seattle
hypervisor: arm64: gic demo cell config for the AMD Seattle
hypervisor: arm64: add linux inmate cell config for AMD Seattle

Claudio Fontana (2):
core: lib: make generic memory ops weak
arm64: implement bitops

Dmitry Voytik (7):
driver: sync I-cache, D-cache and memory
hypervisor: arm64: add control.h header file
hypervisor: arm64: add types.h
hypervisor: arm64: dump stack on unhandled exceptions
arm64: add non-root linux support
hypervisor: arm64: add linux inmate cell config for foundation-v8
tools: arm64: add exception dump parser tool

ci/jailhouse-config-amd-seattle.h | 5 +
ci/jailhouse-config-foundation-v8.h | 5 +
ci/kernel-inmate-amd-seattle.dts | 150 +++++++++
ci/kernel-inmate-foundation-v8.dts | 101 ++++++
configs/amd-seattle-gic-demo.c | 55 ++++
configs/amd-seattle-linux-demo.c | 91 ++++++
configs/amd-seattle-uart-demo.c | 55 ++++
configs/amd-seattle.c | 163 ++++++++++
configs/foundation-v8-gic-demo.c | 55 ++++
configs/foundation-v8-linux-demo.c | 72 +++++
configs/foundation-v8-uart-demo.c | 55 ++++
configs/foundation-v8.c | 120 +++++++
driver/cell.c | 7 +
driver/main.c | 27 +-
hypervisor/Makefile | 4 +
hypervisor/arch/arm/control.c | 11 +
hypervisor/arch/arm/include/asm/control.h | 1 +
hypervisor/arch/arm/include/asm/irqchip.h | 17 +-
.../arch/arm/include/asm/jailhouse_hypercall.h | 1 +
hypervisor/arch/arm/include/asm/paging.h | 23 +-
hypervisor/arch/arm/include/asm/paging_modes.h | 1 +
hypervisor/arch/arm/include/asm/percpu.h | 13 +
hypervisor/arch/arm/include/asm/processor.h | 2 +
hypervisor/arch/arm/include/asm/psci.h | 4 +-
hypervisor/arch/arm/include/asm/uart_pl011.h | 2 +
hypervisor/arch/arm/lib.c | 14 +-
hypervisor/arch/arm/mmu_cell.c | 25 +-
hypervisor/arch/arm/paging.c | 81 +++++
hypervisor/arch/arm/psci.c | 9 +-
hypervisor/arch/arm64/Makefile | 26 ++
hypervisor/arch/arm64/asm-defines.c | 19 ++
hypervisor/arch/arm64/control.c | 344 +++++++++++++++++++++
hypervisor/arch/arm64/entry.S | 258 ++++++++++++++++
hypervisor/arch/arm64/exception.S | 96 ++++++
hypervisor/arch/arm64/include/asm/bitops.h | 141 +++++++++
hypervisor/arch/arm64/include/asm/cell.h | 37 +++
hypervisor/arch/arm64/include/asm/control.h | 43 +++
hypervisor/arch/arm64/include/asm/head.h | 16 +
.../arch/arm64/include/asm/jailhouse_hypercall.h | 93 ++++++
hypervisor/arch/arm64/include/asm/paging.h | 257 +++++++++++++++
hypervisor/arch/arm64/include/asm/percpu.h | 122 ++++++++
hypervisor/arch/arm64/include/asm/platform.h | 69 +++++
hypervisor/arch/arm64/include/asm/processor.h | 191 ++++++++++++
hypervisor/arch/arm64/include/asm/setup.h | 29 ++
hypervisor/arch/arm64/include/asm/spinlock.h | 67 ++++
hypervisor/arch/arm64/include/asm/sysregs.h | 26 ++
hypervisor/arch/arm64/include/asm/traps.h | 37 +++
hypervisor/arch/arm64/include/asm/types.h | 46 +++
hypervisor/arch/arm64/mmio.c | 148 +++++++++
hypervisor/arch/arm64/psci_low.S | 62 ++++
hypervisor/arch/arm64/setup.c | 126 ++++++++
hypervisor/arch/arm64/traps.c | 203 ++++++++++++
.../arch/x86/include/asm/jailhouse_hypercall.h | 3 +-
hypervisor/include/jailhouse/paging.h | 1 +
hypervisor/lib.c | 15 +
hypervisor/paging.c | 42 +++
hypervisor/setup.c | 2 +-
inmates/Makefile | 1 +
inmates/demos/arm64/Makefile | 20 ++
inmates/demos/arm64/gic-demo.c | 58 ++++
inmates/demos/arm64/uart-demo.c | 40 +++
inmates/lib/arm64/Makefile | 19 ++
inmates/lib/arm64/Makefile.lib | 46 +++
inmates/lib/arm64/gic-v2.c | 39 +++
inmates/lib/arm64/gic.c | 43 +++
inmates/lib/arm64/header.S | 66 ++++
inmates/lib/arm64/include/inmates/gic.h | 25 ++
inmates/lib/arm64/include/inmates/inmate.h | 54 ++++
.../arm64/include/mach-amd-seattle/mach/gic_v2.h | 14 +
.../arm64/include/mach-amd-seattle/mach/timer.h | 13 +
.../lib/arm64/include/mach-amd-seattle/mach/uart.h | 13 +
.../arm64/include/mach-foundation-v8/mach/gic_v2.h | 14 +
.../arm64/include/mach-foundation-v8/mach/timer.h | 13 +
.../arm64/include/mach-foundation-v8/mach/uart.h | 13 +
inmates/lib/arm64/inmate.lds | 40 +++
inmates/lib/arm64/printk.c | 55 ++++
inmates/lib/arm64/timer.c | 55 ++++
inmates/lib/arm64/uart-pl011.c | 23 ++
inmates/tools/arm64/Makefile | 19 ++
inmates/tools/arm64/linux-loader.c | 65 ++++
tools/jailhouse-loadlinux-amd-seattle.sh | 23 ++
tools/jailhouse-loadlinux-foundation-v8.sh | 23 ++
tools/jailhouse-parsedump | 157 ++++++++++
83 files changed, 4597 insertions(+), 42 deletions(-)
create mode 100644 ci/jailhouse-config-amd-seattle.h
create mode 100644 ci/jailhouse-config-foundation-v8.h
create mode 100644 ci/kernel-inmate-amd-seattle.dts
create mode 100644 ci/kernel-inmate-foundation-v8.dts
create mode 100644 configs/amd-seattle-gic-demo.c
create mode 100644 configs/amd-seattle-linux-demo.c
create mode 100644 configs/amd-seattle-uart-demo.c
create mode 100644 configs/amd-seattle.c
create mode 100644 configs/foundation-v8-gic-demo.c
create mode 100644 configs/foundation-v8-linux-demo.c
create mode 100644 configs/foundation-v8-uart-demo.c
create mode 100644 configs/foundation-v8.c
create mode 100644 hypervisor/arch/arm64/Makefile
create mode 100644 hypervisor/arch/arm64/asm-defines.c
create mode 100644 hypervisor/arch/arm64/control.c
create mode 100644 hypervisor/arch/arm64/entry.S
create mode 100644 hypervisor/arch/arm64/exception.S
create mode 100644 hypervisor/arch/arm64/include/asm/bitops.h
create mode 100644 hypervisor/arch/arm64/include/asm/cell.h
create mode 100644 hypervisor/arch/arm64/include/asm/control.h
create mode 100644 hypervisor/arch/arm64/include/asm/head.h
create mode 100644 hypervisor/arch/arm64/include/asm/jailhouse_hypercall.h
create mode 100644 hypervisor/arch/arm64/include/asm/paging.h
create mode 100644 hypervisor/arch/arm64/include/asm/percpu.h
create mode 100644 hypervisor/arch/arm64/include/asm/platform.h
create mode 100644 hypervisor/arch/arm64/include/asm/processor.h
create mode 100644 hypervisor/arch/arm64/include/asm/setup.h
create mode 100644 hypervisor/arch/arm64/include/asm/spinlock.h
create mode 100644 hypervisor/arch/arm64/include/asm/sysregs.h
create mode 100644 hypervisor/arch/arm64/include/asm/traps.h
create mode 100644 hypervisor/arch/arm64/include/asm/types.h
create mode 100644 hypervisor/arch/arm64/mmio.c
create mode 100644 hypervisor/arch/arm64/psci_low.S
create mode 100644 hypervisor/arch/arm64/setup.c
create mode 100644 hypervisor/arch/arm64/traps.c
create mode 100644 inmates/demos/arm64/Makefile
create mode 100644 inmates/demos/arm64/gic-demo.c
create mode 100644 inmates/demos/arm64/uart-demo.c
create mode 100644 inmates/lib/arm64/Makefile
create mode 100644 inmates/lib/arm64/Makefile.lib
create mode 100644 inmates/lib/arm64/gic-v2.c
create mode 100644 inmates/lib/arm64/gic.c
create mode 100644 inmates/lib/arm64/header.S
create mode 100644 inmates/lib/arm64/include/inmates/gic.h
create mode 100644 inmates/lib/arm64/include/inmates/inmate.h
create mode 100644 inmates/lib/arm64/include/mach-amd-seattle/mach/gic_v2.h
create mode 100644 inmates/lib/arm64/include/mach-amd-seattle/mach/timer.h
create mode 100644 inmates/lib/arm64/include/mach-amd-seattle/mach/uart.h
create mode 100644 inmates/lib/arm64/include/mach-foundation-v8/mach/gic_v2.h
create mode 100644 inmates/lib/arm64/include/mach-foundation-v8/mach/timer.h
create mode 100644 inmates/lib/arm64/include/mach-foundation-v8/mach/uart.h
create mode 100644 inmates/lib/arm64/inmate.lds
create mode 100644 inmates/lib/arm64/printk.c
create mode 100644 inmates/lib/arm64/timer.c
create mode 100644 inmates/lib/arm64/uart-pl011.c
create mode 100644 inmates/tools/arm64/Makefile
create mode 100644 inmates/tools/arm64/linux-loader.c
create mode 100755 tools/jailhouse-loadlinux-amd-seattle.sh
create mode 100755 tools/jailhouse-loadlinux-foundation-v8.sh
create mode 100755 tools/jailhouse-parsedump

--
2.4.3.368.g7974889


antonios...@huawei.com

Dec 18, 2015, 4:33:22 PM
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

The page_alloc function allows us to allocate any number of pages;
however, the allocation is only guaranteed to be aligned on a page
boundary. The page_alloc_aligned function implemented here allocates
N pages aligned on an N-page boundary.

This will be used in the AArch64 port of Jailhouse to support
physical address ranges from 40 to 44 bits: in these configurations,
the initial page table level may take up multiple pages.
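
As an illustrative sketch (not part of the patch; the pool name and the
page count of the root table are assumptions), the AArch64 MMU code
could then allocate a multi-page, size-aligned initial table like this:

    /* hypothetical: 4 pages, aligned on a 4-page (16 KB) boundary */
    void *root_table = page_alloc_aligned(&mem_pool, 4);

    if (!root_table)
            return -ENOMEM;
    /* ... program the translation table base with
       paging_hvirt2phys(root_table) ... */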

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/include/jailhouse/paging.h | 1 +
hypervisor/paging.c | 42 +++++++++++++++++++++++++++++++++++
2 files changed, 43 insertions(+)

diff --git a/hypervisor/include/jailhouse/paging.h b/hypervisor/include/jailhouse/paging.h
index 27286f0..6c2555f 100644
--- a/hypervisor/include/jailhouse/paging.h
+++ b/hypervisor/include/jailhouse/paging.h
@@ -183,6 +183,7 @@ extern struct paging_structures hv_paging_structs;
unsigned long paging_get_phys_invalid(pt_entry_t pte, unsigned long virt);

void *page_alloc(struct page_pool *pool, unsigned int num);
+void *page_alloc_aligned(struct page_pool *pool, unsigned int num);
void page_free(struct page_pool *pool, void *first_page, unsigned int num);

/**
diff --git a/hypervisor/paging.c b/hypervisor/paging.c
index 1fd7402..201bf75 100644
--- a/hypervisor/paging.c
+++ b/hypervisor/paging.c
@@ -126,6 +126,48 @@ restart:
}

/**
+ * Allocate consecutive and aligned pages from the specified pool.
+ * Pages will be aligned to num * PAGE_SIZE
+ * @param pool Page pool to allocate from.
+ * @param num Number of pages.
+ *
+ * @return Pointer to first page or NULL if allocation failed.
+ *
+ * @see page_free
+ */
+void *page_alloc_aligned(struct page_pool *pool, unsigned int num)
+{
+ unsigned int offset;
+ unsigned long start, next, i;
+
+ /* the pool itself might not be aligned to our desired size */
+ offset = (- (unsigned long) pool->base_address / PAGE_SIZE) % num;
+ next = offset;
+
+ while ((start = find_next_free_page(pool, next)) != INVALID_PAGE_NR) {
+
+ if ((start - offset) % num)
+ goto next_chunk;
+
+ for (i = start; i < start + num; i++)
+ if (test_bit(i, pool->used_bitmap))
+ goto next_chunk;
+
+ for (i = start; i < start + num; i++)
+ set_bit(i, pool->used_bitmap);
+
+ pool->used_pages += num;
+
+ return pool->base_address + start * PAGE_SIZE;
+
+next_chunk:
+ next += num - (start - offset) % num;
+ }
+
+ return NULL;
+}
+
+/**
* Release pages to the specified pool.
* @param pool Page pool to release to.
* @param page Address of first page.
--
2.4.3.368.g7974889


antonios...@huawei.com

Dec 18, 2015, 4:33:24 PM
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

The current design of the cell configuration files defines the SPIs to
be passed to a cell as a 64-bit bitmap. In order to use Jailhouse on
targets that have SPI ids larger than 64, we need to work around this
limitation.

Pass large SPIs to the root cell for now. A permanent solution to this
problem will need to tackle the cell configuration format.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm/include/asm/irqchip.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hypervisor/arch/arm/include/asm/irqchip.h b/hypervisor/arch/arm/include/asm/irqchip.h
index 17ba90a..581c10f 100644
--- a/hypervisor/arch/arm/include/asm/irqchip.h
+++ b/hypervisor/arch/arm/include/asm/irqchip.h
@@ -108,7 +108,7 @@ static inline bool spi_in_cell(struct cell *cell, unsigned int spi)
u32 spi_mask;

if (spi >= 64)
- return false;
+ return (cell == &root_cell);
else if (spi >= 32)
spi_mask = cell->arch.spis >> 32;
else
--
2.4.3.368.g7974889


antonios...@huawei.com

Dec 18, 2015, 4:33:28 PM
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

The previous version of the macro allows more false positives than
necessary.

The SVC32 and SVC64 versions of the PSCI function ids differ in only
one bit. Force that bit to one in the function id prefix and compare.
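
For example (function id values taken from the PSCI specification,
shown only to illustrate the check): PSCI_CPU_ON is 0x84000003 in the
SMC32 convention and 0xc4000003 in SMC64. The two prefixes differ only
in bit 30, which the new macro forces to one before comparing:

    /* 0x84000003 >> 24 == 0x84, | 0x40 -> 0xc4  -> accepted */
    /* 0xc4000003 >> 24 == 0xc4, | 0x40 -> 0xc4  -> accepted */
    /* 0x12345678 >> 24 == 0x12, | 0x40 -> 0x52  -> rejected */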

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm/include/asm/psci.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hypervisor/arch/arm/include/asm/psci.h b/hypervisor/arch/arm/include/asm/psci.h
index 43a9c65..ba0adac 100644
--- a/hypervisor/arch/arm/include/asm/psci.h
+++ b/hypervisor/arch/arm/include/asm/psci.h
@@ -46,7 +46,7 @@
#define PSCI_CPU_IS_ON 0
#define PSCI_CPU_IS_OFF 1

-#define IS_PSCI_FN(hvc) ((((hvc) >> 24) & 0x84) == 0x84)
+#define IS_PSCI_FN(hvc) ((((hvc) >> 24) | 0x40) == 0xc4)

#define PSCI_INVALID_ADDRESS 0xffffffff

--
2.4.3.368.g7974889


antonios...@huawei.com

Dec 18, 2015, 4:33:28 PM
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

Hide the TLB flushes issued by the MMU code behind a macro, so we can
increase our chances of reusing some of this code.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm/include/asm/processor.h | 2 ++
hypervisor/arch/arm/mmu_cell.c | 2 +-
2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/hypervisor/arch/arm/include/asm/processor.h b/hypervisor/arch/arm/include/asm/processor.h
index c6144a7..907a28e 100644
--- a/hypervisor/arch/arm/include/asm/processor.h
+++ b/hypervisor/arch/arm/include/asm/processor.h
@@ -197,6 +197,8 @@ static inline bool is_el2(void)
return (psr & PSR_MODE_MASK) == PSR_HYP_MODE;
}

+#define tlb_flush_guest() arm_write_sysreg(TLBIALL, 1)
+
#endif /* !__ASSEMBLY__ */

#endif /* !_JAILHOUSE_ASM_PROCESSOR_H */
diff --git a/hypervisor/arch/arm/mmu_cell.c b/hypervisor/arch/arm/mmu_cell.c
index 4885f8c..5e25eb6 100644
--- a/hypervisor/arch/arm/mmu_cell.c
+++ b/hypervisor/arch/arm/mmu_cell.c
@@ -110,7 +110,7 @@ void arch_cpu_tlb_flush(struct per_cpu *cpu_data)
* Invalidate all stage-1 and 2 TLB entries for the current VMID
* ERET will ensure completion of these ops
*/
- arm_write_sysreg(TLBIALL, 1);
+ tlb_flush_guest();
dsb(nsh);
cpu_data->flush_vcpu_caches = false;
}
--
2.4.3.368.g7974889


antonios...@huawei.com

Dec 18, 2015, 4:33:28 PM
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

The unsigned long type is 32 bits wide on AArch32 and 64 bits wide on
AArch64, so switching to it lets us reuse this file on AArch64.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm/lib.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hypervisor/arch/arm/lib.c b/hypervisor/arch/arm/lib.c
index c96d18b..6396a0d 100644
--- a/hypervisor/arch/arm/lib.c
+++ b/hypervisor/arch/arm/lib.c
@@ -17,7 +17,7 @@

int phys_processor_id(void)
{
- u32 mpidr;
+ unsigned long mpidr;

arm_read_sysreg(MPIDR_EL1, mpidr);
return mpidr & MPIDR_CPUID_MASK;
--
2.4.3.368.g7974889


antonios...@huawei.com

Dec 18, 2015, 4:33:28 PM
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

We can reuse the code under hypervisor/arch/arm/mmu_cell.c for the
AArch64 port, save for the value we use for the VTCR. In addition to
the flags set by the AArch32 port, AArch64 needs to set the size of
the address space to 40 bits, at least initially, until we implement
the newer ARMv8 MMU features.

We put the VTCR value behind a define in asm/paging.h to allow this
reuse.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm/include/asm/paging.h | 6 ++++++
hypervisor/arch/arm/mmu_cell.c | 7 +------
2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/hypervisor/arch/arm/include/asm/paging.h b/hypervisor/arch/arm/include/asm/paging.h
index 0372b2c..6d54b71 100644
--- a/hypervisor/arch/arm/include/asm/paging.h
+++ b/hypervisor/arch/arm/include/asm/paging.h
@@ -117,6 +117,12 @@
#define TCR_SL0_SHIFT 6
#define TCR_S_SHIFT 4

+#define VTCR_CELL (T0SZ | SL0 << TCR_SL0_SHIFT \
+ | (TCR_RGN_WB_WA << TCR_IRGN0_SHIFT) \
+ | (TCR_RGN_WB_WA << TCR_ORGN0_SHIFT) \
+ | (TCR_INNER_SHAREABLE << TCR_SH0_SHIFT)\
+ | VTCR_RES1)
+
/*
* Hypervisor memory attribute indexes:
* 0: normal WB, RA, WA, non-transient
diff --git a/hypervisor/arch/arm/mmu_cell.c b/hypervisor/arch/arm/mmu_cell.c
index 5e25eb6..e9d2044 100644
--- a/hypervisor/arch/arm/mmu_cell.c
+++ b/hypervisor/arch/arm/mmu_cell.c
@@ -75,12 +75,7 @@ int arch_mmu_cpu_cell_init(struct per_cpu *cpu_data)
struct cell *cell = cpu_data->cell;
unsigned long cell_table = paging_hvirt2phys(cell->arch.mm.root_table);
u64 vttbr = 0;
- u32 vtcr = T0SZ
- | SL0 << TCR_SL0_SHIFT
- | (TCR_RGN_WB_WA << TCR_IRGN0_SHIFT)
- | (TCR_RGN_WB_WA << TCR_ORGN0_SHIFT)
- | (TCR_INNER_SHAREABLE << TCR_SH0_SHIFT)
- | VTCR_RES1;
+ u32 vtcr = VTCR_CELL;

if (cell->id > 0xff) {
panic_printk("No cell ID available\n");
--
2.4.3.368.g7974889


antonios...@huawei.com

Dec 18, 2015, 4:33:29 PM
to jailho...@googlegroups.com, Claudio Fontana, jan.k...@siemens.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Claudio Fontana <claudio...@huawei.com>

Allow more efficient per-arch implementations of the usual
memory/string ops by making the generic implementations weak.
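
A sketch of what this enables (hypothetical override, not part of this
patch): an architecture can now provide its own strong definition, e.g.
in hypervisor/arch/arm64/lib.c, and it will take precedence over the
weak generic one at link time:

    /* hypothetical arch override; the 8-byte fast path assumes buffers
     * that tolerate doubleword accesses */
    void *memcpy(void *dest, const void *src, unsigned long n)
    {
            u64 *d = dest;
            const u64 *s = src;
            char *dc;
            const char *sc;

            while (n >= 8) {
                    *d++ = *s++;
                    n -= 8;
            }
            dc = (char *)d;
            sc = (const char *)s;
            while (n--)
                    *dc++ = *sc++;

            return dest;
    }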

Signed-off-by: Claudio Fontana <claudio...@huawei.com>
---
hypervisor/arch/arm/lib.c | 12 ------------
hypervisor/lib.c | 15 +++++++++++++++
2 files changed, 15 insertions(+), 12 deletions(-)

diff --git a/hypervisor/arch/arm/lib.c b/hypervisor/arch/arm/lib.c
index 6396a0d..c2636ec 100644
--- a/hypervisor/arch/arm/lib.c
+++ b/hypervisor/arch/arm/lib.c
@@ -22,15 +22,3 @@ int phys_processor_id(void)
arm_read_sysreg(MPIDR_EL1, mpidr);
return mpidr & MPIDR_CPUID_MASK;
}
-
-void *memcpy(void *dest, const void *src, unsigned long n)
-{
- unsigned long i;
- const char *csrc = src;
- char *cdest = dest;
-
- for (i = 0; i < n; i++)
- cdest[i] = csrc[i];
-
- return dest;
-}
diff --git a/hypervisor/lib.c b/hypervisor/lib.c
index f2a27eb..39cb873 100644
--- a/hypervisor/lib.c
+++ b/hypervisor/lib.c
@@ -13,6 +13,7 @@
#include <jailhouse/string.h>
#include <jailhouse/types.h>

+__attribute__((weak))
void *memset(void *s, int c, unsigned long n)
{
u8 *p = s;
@@ -22,6 +23,7 @@ void *memset(void *s, int c, unsigned long n)
return s;
}

+__attribute__((weak))
int strcmp(const char *s1, const char *s2)
{
while (*s1 == *s2) {
@@ -32,3 +34,16 @@ int strcmp(const char *s1, const char *s2)
}
return *(unsigned char *)s1 - *(unsigned char *)s2;
}
+
+__attribute__ ((weak))
+void *memcpy(void *dest, const void *src, unsigned long n)
+{
+ unsigned long i;
+ const char *csrc = src;
+ char *cdest = dest;
+
+ for (i = 0; i < n; i++)
+ cdest[i] = csrc[i];
+
+ return dest;
+}
--
2.4.3.368.g7974889


antonios...@huawei.com

Dec 18, 2015, 4:33:29 PM
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

In the AArch32 port of Jailhouse, the arm_write_sysreg and
arm_read_sysreg macros are used to access the system registers.
Implementing these helpers on AArch64 lets us pick some low-hanging
fruit when looking for code to reuse from the AArch32 port.

On AArch64 we always get a u64 value from a system register, while
on AArch32 it might be either u32 or u64. We need to look at the
users of these macros on a case-by-case basis and see if it makes
sense to reuse code.
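
A usage sketch (the register choices are only for illustration):

    unsigned long mpidr;

    arm_read_sysreg(MPIDR_EL1, mpidr);   /* expands to: mrs %0, MPIDR_EL1 */
    arm_write_sysreg(TPIDR_EL2, mpidr);  /* expands to: msr TPIDR_EL2, %0 */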

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm64/include/asm/sysregs.h | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)
create mode 100644 hypervisor/arch/arm64/include/asm/sysregs.h

diff --git a/hypervisor/arch/arm64/include/asm/sysregs.h b/hypervisor/arch/arm64/include/asm/sysregs.h
new file mode 100644
index 0000000..fa13ea3
--- /dev/null
+++ b/hypervisor/arch/arm64/include/asm/sysregs.h
@@ -0,0 +1,26 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef _JAILHOUSE_ASM_SYSREGS_H
+#define _JAILHOUSE_ASM_SYSREGS_H
+
+#ifndef __ASSEMBLY__
+
+#define arm_write_sysreg(sysreg, val) \
+ asm volatile ("msr "#sysreg", %0\n" : : "r"((u64)(val)))
+
+#define arm_read_sysreg(sysreg, val) \
+ asm volatile ("mrs %0, "#sysreg"\n" : "=r"((u64)(val)))
+
+#endif
+
+#endif
--
2.4.3.368.g7974889


antonios...@huawei.com

Dec 18, 2015, 4:33:29 PM
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

Add an initial asm/processor.h header for AArch64. This header is
loosely based on the AArch32 version, but we have kept only the
definitions that we use.

More correctness checking still needs to be done.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm64/include/asm/processor.h | 191 ++++++++++++++++++++++++++
1 file changed, 191 insertions(+)
create mode 100644 hypervisor/arch/arm64/include/asm/processor.h

diff --git a/hypervisor/arch/arm64/include/asm/processor.h b/hypervisor/arch/arm64/include/asm/processor.h
new file mode 100644
index 0000000..db42c4f
--- /dev/null
+++ b/hypervisor/arch/arm64/include/asm/processor.h
@@ -0,0 +1,191 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef _JAILHOUSE_ASM_PROCESSOR_H
+#define _JAILHOUSE_ASM_PROCESSOR_H
+
+#include <jailhouse/types.h>
+#include <jailhouse/utils.h>
+
+#define PSR_MODE_MASK 0xf
+#define PSR_MODE_EL0t 0x0
+#define PSR_MODE_EL1t 0x4
+#define PSR_MODE_EL1h 0x5
+#define PSR_MODE_EL2t 0x8
+#define PSR_MODE_EL2h 0x9
+
+#define PSR_F_BIT (1 << 6)
+#define PSR_I_BIT (1 << 7)
+#define PSR_A_BIT (1 << 8)
+#define PSR_D_BIT (1 << 9)
+#define PSR_IL_BIT (1 << 20)
+#define PSR_SS_BIT (1 << 21)
+#define RESET_PSR (PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT \
+ | PSR_MODE_EL1h)
+
+#define MPIDR_CPUID_MASK 0xff00ffffff
+#define MPIDR_MP_BIT (1 << 31)
+#define MPIDR_U_BIT (1 << 30)
+
+#define SCTLR_M_BIT (1 << 0)
+#define SCTLR_A_BIT (1 << 1)
+#define SCTLR_C_BIT (1 << 2)
+#define SCTLR_SA_BIT (1 << 3)
+#define SCTLR_SA0_BIT (1 << 4)
+#define SCTLR_CP15B_BIT (1 << 5)
+#define SCTLR_ITD_BIT (1 << 7)
+#define SCTLR_SED_BIT (1 << 8)
+#define SCTLR_UMA_BIT (1 << 9)
+#define SCTLR_I_BIT (1 << 12)
+#define SCTLR_DZE_BIT (1 << 14)
+#define SCTLR_UCT_BIT (1 << 15)
+#define SCTLR_nTWI (1 << 16)
+#define SCTLR_nTWE (1 << 18)
+#define SCTLR_WXN_BIT (1 << 19)
+#define SCTLR_E0E_BIT (1 << 24)
+#define SCTLR_EE_BIT (1 << 25)
+#define SCTLR_UCI_BIT (1 << 26)
+
+#define SCTLR_EL1_RES1 ((1 << 11) | (1 << 20) | (3 << 22) | (3 << 28))
+#define SCTLR_EL2_RES1 ((3 << 4) | (1 << 11) | (1 << 16) | (1 << 18) \
+ | (3 << 22) | (3 << 28))
+
+#define HCR_MIOCNCE_BIT (1u << 38)
+#define HCR_ID_BIT (1u << 33)
+#define HCR_CD_BIT (1u << 32)
+#define HCR_RW_BIT (1u << 31)
+#define HCR_TRVM_BIT (1u << 30)
+#define HCR_HDC_BIT (1u << 29)
+#define HCR_TDZ_BIT (1u << 28)
+#define HCR_TGE_BIT (1u << 27)
+#define HCR_TVM_BIT (1u << 26)
+#define HCR_TTLB_BIT (1u << 25)
+#define HCR_TPU_BIT (1u << 24)
+#define HCR_TPC_BIT (1u << 23)
+#define HCR_TSW_BIT (1u << 22)
+#define HCR_TAC_BIT (1u << 21)
+#define HCR_TIDCP_BIT (1u << 20)
+#define HCR_TSC_BIT (1u << 19)
+#define HCR_TID3_BIT (1u << 18)
+#define HCR_TID2_BIT (1u << 17)
+#define HCR_TID1_BIT (1u << 16)
+#define HCR_TID0_BIT (1u << 15)
+#define HCR_TWE_BIT (1u << 14)
+#define HCR_TWI_BIT (1u << 13)
+#define HCR_DC_BIT (1u << 12)
+#define HCR_BSU_BITS (3u << 10)
+#define HCR_BSU_INNER (1u << 10)
+#define HCR_BSU_OUTER (2u << 10)
+#define HCR_BSU_FULL HCR_BSU_BITS
+#define HCR_FB_BIT (1u << 9)
+#define HCR_VA_BIT (1u << 8)
+#define HCR_VI_BIT (1u << 7)
+#define HCR_VF_BIT (1u << 6)
+#define HCR_AMO_BIT (1u << 5)
+#define HCR_IMO_BIT (1u << 4)
+#define HCR_FMO_BIT (1u << 3)
+#define HCR_PTW_BIT (1u << 2)
+#define HCR_SWIO_BIT (1u << 1)
+#define HCR_VM_BIT (1u << 0)
+
+/* exception class */
+#define ESR_EC_SHIFT 26
+#define ESR_EC(hsr) ((hsr) >> ESR_EC_SHIFT & 0x3f)
+/* instruction length */
+#define ESR_IL_SHIFT 25
+#define ESR_IL(hsr) ((hsr) >> ESR_IL_SHIFT & 0x1)
+/* Instruction specific syndrome */
+#define ESR_ISS_MASK 0x1ffffff
+#define ESR_ISS(esr) ((esr) & ESR_ISS_MASK)
+/* Exception classes values */
+#define ESR_EC_UNKNOWN 0x00
+#define ESR_EC_WFx 0x01
+#define ESR_EC_CP15_32 0x03
+#define ESR_EC_CP15_64 0x04
+#define ESR_EC_CP14_MR 0x05
+#define ESR_EC_CP14_LS 0x06
+#define ESR_EC_FP_ASIMD 0x07
+#define ESR_EC_CP10_ID 0x08
+#define ESR_EC_CP14_64 0x0C
+#define ESR_EC_ILL 0x0E
+#define ESR_EC_SVC32 0x11
+#define ESR_EC_HVC32 0x12
+#define ESR_EC_SMC32 0x13
+#define ESR_EC_SVC64 0x15
+#define ESR_EC_HVC64 0x16
+#define ESR_EC_SMC64 0x17
+#define ESR_EC_SYS64 0x18
+#define ESR_EC_IMP_DEF 0x1f
+#define ESR_EC_IABT_LOW 0x20
+#define ESR_EC_IABT_CUR 0x21
+#define ESR_EC_PC_ALIGN 0x22
+#define ESR_EC_DABT_LOW 0x24
+#define ESR_EC_DABT_CUR 0x25
+#define ESR_EC_SP_ALIGN 0x26
+#define ESR_EC_FP_EXC32 0x28
+#define ESR_EC_FP_EXC64 0x2C
+#define ESR_EC_SERROR 0x2F
+#define ESR_EC_BREAKPT_LOW 0x30
+#define ESR_EC_BREAKPT_CUR 0x31
+#define ESR_EC_SOFTSTP_LOW 0x32
+#define ESR_EC_SOFTSTP_CUR 0x33
+#define ESR_EC_WATCHPT_LOW 0x34
+#define ESR_EC_WATCHPT_CUR 0x35
+#define ESR_EC_BKPT32 0x38
+#define ESR_EC_VECTOR32 0x3A
+#define ESR_EC_BRK64 0x3C
+
+#define EXIT_REASON_EL2_ABORT 0x0
+#define EXIT_REASON_EL1_ABORT 0x1
+#define EXIT_REASON_EL1_IRQ 0x2
+
+#define NUM_USR_REGS 31
+
+/* exception level in SPSR_ELx */
+#define SPSR_EL(spsr) (((spsr) & 0xc) >> 2)
+
+#ifndef __ASSEMBLY__
+
+struct registers {
+ unsigned long exit_reason;
+ unsigned long usr[NUM_USR_REGS];
+};
+
+#define dmb(domain) asm volatile("dmb " #domain "\n" : : : "memory")
+#define dsb(domain) asm volatile("dsb " #domain "\n" : : : "memory")
+#define isb() asm volatile("isb\n")
+
+#define wfe() asm volatile("wfe\n")
+#define wfi() asm volatile("wfi\n")
+#define sev() asm volatile("sev\n")
+
+unsigned int smc(unsigned int r0, ...);
+
+static inline void cpu_relax(void)
+{
+ asm volatile("" : : : "memory");
+}
+
+static inline void memory_barrier(void)
+{
+ dmb(ish);
+}
+
+static inline void memory_load_barrier(void)
+{
+}
+
+#define tlb_flush_guest() asm volatile("tlbi vmalls12e1\n")
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* !_JAILHOUSE_ASM_PROCESSOR_H */
--
2.4.3.368.g7974889


antonios...@huawei.com

Dec 18, 2015, 4:33:32 PM
to jailho...@googlegroups.com, Dmitry Voytik, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, Antonios Motakis
From: Dmitry Voytik <dmitry...@huawei.com>

Add the header file control.h to the AArch64 port of Jailhouse.

Signed-off-by: Dmitry Voytik <dmitry...@huawei.com>
Signed-off-by: Antonios Motakis <antonios...@huawei.com>
[antonios...@huawei.com: split off as a separate patch]
---
hypervisor/arch/arm64/include/asm/control.h | 42 +++++++++++++++++++++++++++++
1 file changed, 42 insertions(+)
create mode 100644 hypervisor/arch/arm64/include/asm/control.h

diff --git a/hypervisor/arch/arm64/include/asm/control.h b/hypervisor/arch/arm64/include/asm/control.h
new file mode 100644
index 0000000..1957d55
--- /dev/null
+++ b/hypervisor/arch/arm64/include/asm/control.h
@@ -0,0 +1,42 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Dmitry Voytik <dmitry...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef _JAILHOUSE_ASM_CONTROL_H
+#define _JAILHOUSE_ASM_CONTROL_H
+
+#define SGI_INJECT 0
+#define SGI_CPU_OFF 1
+
+#define CACHES_CLEAN 0
+#define CACHES_CLEAN_INVALIDATE 1
+
+#include <asm/percpu.h>
+
+static inline void arch_cpu_dcaches_flush(unsigned int action) { }
+static inline void arch_cpu_icache_flush(void) { }
+
+void arch_cpu_tlb_flush(struct per_cpu *cpu_data);
+void arch_cell_caches_flush(struct cell *cell);
+int arch_mmu_cell_init(struct cell *cell);
+void arch_mmu_cell_destroy(struct cell *cell);
+int arch_mmu_cpu_cell_init(struct per_cpu *cpu_data);
+void arch_handle_sgi(struct per_cpu *cpu_data, u32 irqn);
+struct registers* arch_handle_exit(struct per_cpu *cpu_data,
+ struct registers *regs);
+bool arch_handle_phys_irq(struct per_cpu *cpu_data, u32 irqn);
+void arch_reset_self(struct per_cpu *cpu_data);
+void arch_shutdown_self(struct per_cpu *cpu_data);
+
+void __attribute__((noreturn)) vmreturn(struct registers *guest_regs);
+void __attribute__((noreturn)) arch_shutdown_mmu(struct per_cpu *cpu_data);
+
+#endif /* !_JAILHOUSE_ASM_CONTROL_H */
--
2.4.3.368.g7974889


antonios...@huawei.com

Dec 18, 2015, 4:33:33 PM
to jailho...@googlegroups.com, Dmitry Voytik, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Dmitry Voytik <dmitry...@huawei.com>

Add the asm/types.h header file, which defines the size of the data
types.

Signed-off-by: Dmitry Voytik <dmitry...@huawei.com>
---
hypervisor/arch/arm64/include/asm/types.h | 46 +++++++++++++++++++++++++++++++
1 file changed, 46 insertions(+)
create mode 100644 hypervisor/arch/arm64/include/asm/types.h

diff --git a/hypervisor/arch/arm64/include/asm/types.h b/hypervisor/arch/arm64/include/asm/types.h
new file mode 100644
index 0000000..10760a0
--- /dev/null
+++ b/hypervisor/arch/arm64/include/asm/types.h
@@ -0,0 +1,46 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Dmitry Voytik <dmitry...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef _JAILHOUSE_ASM_TYPES_H
+#define _JAILHOUSE_ASM_TYPES_H
+
+#define BITS_PER_LONG 64
+
+#ifndef __ASSEMBLY__
+
+typedef signed char s8;
+typedef unsigned char u8;
+
+typedef signed short s16;
+typedef unsigned short u16;
+
+typedef signed int s32;
+typedef unsigned int u32;
+
+typedef signed long long s64;
+typedef unsigned long long u64;
+
+typedef s8 __s8;
+typedef u8 __u8;
+
+typedef s16 __s16;
+typedef u16 __u16;
+
+typedef s32 __s32;
+typedef u32 __u32;
+
+typedef s64 __s64;
+typedef u64 __u64;
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* !_JAILHOUSE_ASM_TYPES_H */
--
2.4.3.368.g7974889


antonios...@huawei.com

Dec 18, 2015, 4:33:33 PM
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

Implement spinlocks for the hypervisor firmware on AArch64.
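
A usage sketch (the lock name is made up for the example):

    static DEFINE_SPINLOCK(demo_lock);

    spin_lock(&demo_lock);
    /* ... critical section ... */
    spin_unlock(&demo_lock);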

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm64/include/asm/spinlock.h | 67 ++++++++++++++++++++++++++++
1 file changed, 67 insertions(+)
create mode 100644 hypervisor/arch/arm64/include/asm/spinlock.h

diff --git a/hypervisor/arch/arm64/include/asm/spinlock.h b/hypervisor/arch/arm64/include/asm/spinlock.h
new file mode 100644
index 0000000..5284101
--- /dev/null
+++ b/hypervisor/arch/arm64/include/asm/spinlock.h
@@ -0,0 +1,67 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ *
+ * Copied from arch/arm64/include/asm/spinlock.h in Linux
+ */
+
+#ifndef _JAILHOUSE_ASM_SPINLOCK_H
+#define _JAILHOUSE_ASM_SPINLOCK_H
+
+#define DEFINE_SPINLOCK(name) spinlock_t (name)
+#define TICKET_SHIFT 16
+
+/* TODO: fix this if we add support for BE */
+typedef struct {
+ u16 owner;
+ u16 next;
+} spinlock_t __attribute__((aligned(4)));
+
+static inline void spin_lock(spinlock_t *lock)
+{
+ unsigned int tmp;
+ spinlock_t lockval, newval;
+
+ asm volatile(
+ /* Atomically increment the next ticket. */
+" prfm pstl1strm, %3\n"
+"1: ldaxr %w0, %3\n"
+" add %w1, %w0, %w5\n"
+" stxr %w2, %w1, %3\n"
+" cbnz %w2, 1b\n"
+ /* Did we get the lock? */
+" eor %w1, %w0, %w0, ror #16\n"
+" cbz %w1, 3f\n"
+ /*
+ * No: spin on the owner. Send a local event to avoid missing an
+ * unlock before the exclusive load.
+ */
+" sevl\n"
+"2: wfe\n"
+" ldaxrh %w2, %4\n"
+" eor %w1, %w2, %w0, lsr #16\n"
+" cbnz %w1, 2b\n"
+ /* We got the lock. Critical section starts here. */
+"3:"
+ : "=&r" (lockval), "=&r" (newval), "=&r" (tmp), "+Q" (*lock)
+ : "Q" (lock->owner), "I" (1 << TICKET_SHIFT)
+ : "memory");
+}
+
+static inline void spin_unlock(spinlock_t *lock)
+{
+ asm volatile(
+" stlrh %w1, %0\n"
+ : "=Q" (lock->owner)
+ : "r" (lock->owner + 1)
+ : "memory");
+}
+
+#endif /* !_JAILHOUSE_ASM_SPINLOCK_H */
--
2.4.3.368.g7974889


antonios...@huawei.com

Dec 18, 2015, 4:33:33 PM
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

Add the percpu.h header file for the AArch64 implementation. This is
the bare-bones version of the header needed to compile a stub
hypervisor binary on AArch64. A lot of these fields could probably
be moved to an arch-independent header.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm64/include/asm/percpu.h | 68 ++++++++++++++++++++++++++++++
1 file changed, 68 insertions(+)
create mode 100644 hypervisor/arch/arm64/include/asm/percpu.h

diff --git a/hypervisor/arch/arm64/include/asm/percpu.h b/hypervisor/arch/arm64/include/asm/percpu.h
new file mode 100644
index 0000000..17dd1ad
--- /dev/null
+++ b/hypervisor/arch/arm64/include/asm/percpu.h
@@ -0,0 +1,68 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef _JAILHOUSE_ASM_PERCPU_H
+#define _JAILHOUSE_ASM_PERCPU_H
+
+#include <jailhouse/types.h>
+#include <asm/paging.h>
+
+#ifndef __ASSEMBLY__
+
+#include <asm/cell.h>
+#include <asm/spinlock.h>
+
+struct per_cpu {
+ /* common fields */
+ unsigned int cpu_id;
+ struct cell *cell;
+ u32 stats[JAILHOUSE_NUM_CPU_STATS];
+ int shutdown_state;
+ bool failed;
+
+ bool flush_vcpu_caches;
+} __attribute__((aligned(PAGE_SIZE)));
+
+static inline struct per_cpu *this_cpu_data(void)
+{
+ while (1);
+ return NULL;
+}
+
+#define DEFINE_PER_CPU_ACCESSOR(field) \
+static inline typeof(((struct per_cpu *)0)->field) this_##field(void) \
+{ \
+ return this_cpu_data()->field; \
+}
+
+DEFINE_PER_CPU_ACCESSOR(cpu_id)
+DEFINE_PER_CPU_ACCESSOR(cell)
+
+static inline struct per_cpu *per_cpu(unsigned int cpu)
+{
+ while (1);
+ return NULL;
+}
+
+unsigned int arm_cpu_phys2virt(unsigned int cpu_id);
+unsigned int arm_cpu_virt2phys(struct cell *cell, unsigned int virt_id);
+
+/* Validate defines */
+#define CHECK_ASSUMPTION(assume) ((void)sizeof(char[1 - 2*!(assume)]))
+
+static inline void __check_assumptions(void)
+{
+ CHECK_ASSUMPTION(sizeof(unsigned long) == (8));
+}
+#endif /* !__ASSEMBLY__ */
+
+#endif /* !_JAILHOUSE_ASM_PERCPU_H */
--
2.4.3.368.g7974889


antonios...@huawei.com

Dec 18, 2015, 4:33:33 PM
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

The AArch64 page table format is almost identical to the AArch32 page
table format. Add a header file for the AArch64 page table format,
based on the AArch32 implementation.

AArch64 introduces an extra level of page tables, for a total of four,
and support for different translation granule sizes. Disabling level
zero of the page tables and using a 4 KB granule size results in a
page table format identical to AArch32's. With these parameters, we
can address 39 bits of address space.
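
As a quick sanity check of the 39-bit figure (illustrative arithmetic
only, matching the masks in the header below):

    /* 4 KB granule, translation starting at level 1 (SL0 = 1):
     *   L1 index VA[38:30] + L2 index VA[29:21] + L3 index VA[20:12]
     *   + page offset VA[11:0]  =  9 + 9 + 9 + 12 = 39 bits,
     *   hence T0SZ = 64 - 39 = 25. */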

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm64/include/asm/paging.h | 191 +++++++++++++++++++++++++++++
1 file changed, 191 insertions(+)
create mode 100644 hypervisor/arch/arm64/include/asm/paging.h

diff --git a/hypervisor/arch/arm64/include/asm/paging.h b/hypervisor/arch/arm64/include/asm/paging.h
new file mode 100644
index 0000000..deb5733
--- /dev/null
+++ b/hypervisor/arch/arm64/include/asm/paging.h
@@ -0,0 +1,191 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef _JAILHOUSE_ASM_PAGING_H
+#define _JAILHOUSE_ASM_PAGING_H
+
+#include <jailhouse/types.h>
+#include <jailhouse/utils.h>
+#include <asm/processor.h>
+#include <asm/sysregs.h>
+
+/*
+ * This file is based on hypervisor/arch/arm/include/asm/paging.h for AArch32.
+ * However, there are some differences. AArch64 supports different granule
+ * sizes for pages (4Kb, 16Kb, and 64Kb), while AArch32 supports only a 4Kb
+ * native page size. AArch64 also supports 4 levels of page tables, numbered
+ * L0-3, while AArch32 supports only 3 levels numbered L1-3.
+ *
+ * Otherwise, the page table format is identical. By setting the TCR registers
+ * appropriately, for 4Kb page tables and starting address translations from
+ * level 1, we can use the same page tables and page table generation code that
+ * we use on AArch32.
+ *
+ * This gives us 39 addressable bits for the moment.
+ * AARCH64_TODO: implement 4 level page tables, different granule sizes.
+ */
+
+#define PAGE_SHIFT 12
+#define PAGE_SIZE (1 << PAGE_SHIFT)
+#define PAGE_MASK ~(PAGE_SIZE - 1)
+#define PAGE_OFFS_MASK (PAGE_SIZE - 1)
+
+#define MAX_PAGE_TABLE_LEVELS 3
+
+#define T0SZ (64 - 39)
+#define SL0 01
+#define L1_VADDR_MASK BIT_MASK(38, 30)
+#define L2_VADDR_MASK BIT_MASK(29, 21)
+
+#define L3_VADDR_MASK BIT_MASK(20, 12)
+
+/*
+ * Stage-1 and Stage-2 lower attributes.
+ * The contiguous bit is a hint that allows the PE to store blocks of 16 pages
+ * in the TLB. This may be a useful optimisation.
+ */
+#define PTE_ACCESS_FLAG (0x1 << 10)
+/*
+ * When combining shareability attributes, the stage-1 ones prevail. So we can
+ * safely leave everything non-shareable at stage 2.
+ */
+#define PTE_NON_SHAREABLE (0x0 << 8)
+#define PTE_OUTER_SHAREABLE (0x2 << 8)
+#define PTE_INNER_SHAREABLE (0x3 << 8)
+
+#define PTE_MEMATTR(val) ((val) << 2)
+#define PTE_FLAG_TERMINAL (0x1 << 1)
+#define PTE_FLAG_VALID (0x1 << 0)
+
+/* These bits differ in stage 1 and 2 translations */
+#define S1_PTE_NG (0x1 << 11)
+#define S1_PTE_ACCESS_RW (0x0 << 7)
+#define S1_PTE_ACCESS_RO (0x1 << 7)
+/* Res1 for EL2 stage-1 tables */
+#define S1_PTE_ACCESS_EL0 (0x1 << 6)
+
+#define S2_PTE_ACCESS_RO (0x1 << 6)
+#define S2_PTE_ACCESS_WO (0x2 << 6)
+#define S2_PTE_ACCESS_RW (0x3 << 6)
+
+/*
+ * Descriptor pointing to a page table
+ * (only for L1 and L2. L3 uses this encoding for terminal entries...)
+ */
+#define PTE_TABLE_FLAGS 0x3
+
+#define PTE_L1_BLOCK_ADDR_MASK BIT_MASK(39, 30)
+#define PTE_L2_BLOCK_ADDR_MASK BIT_MASK(39, 21)
+#define PTE_TABLE_ADDR_MASK BIT_MASK(39, 12)
+#define PTE_PAGE_ADDR_MASK BIT_MASK(39, 12)
+
+#define BLOCK_1G_VADDR_MASK BIT_MASK(29, 0)
+#define BLOCK_2M_VADDR_MASK BIT_MASK(20, 0)
+
+/*
+ * AARCH64_TODO: the way TTBR_MASK is handled is almost certainly wrong. The
+ * low bits of the TTBR should be zero, however this is an alignment requirement
+ * as well for the actual location of the page table root. We get around the
+ * buggy behaviour in the AArch32 code we share, by setting the mask to the
+ * de facto alignment employed by the arch independent code: one page.
+ */
+#define TTBR_MASK BIT_MASK(47, 12)
+#define VTTBR_VMID_SHIFT 48
+
+#define TCR_EL2_RES1 ((1 << 31) | (1 << 23))
+#define VTCR_RES1 ((1 << 31))
+#define TCR_PS_40B 0x2
+#define TCR_RGN_NON_CACHEABLE 0x0
+#define TCR_RGN_WB_WA 0x1
+#define TCR_RGN_WT 0x2
+#define TCR_RGN_WB 0x3
+#define TCR_NON_SHAREABLE 0x0
+#define TCR_OUTER_SHAREABLE 0x2
+#define TCR_INNER_SHAREABLE 0x3
+
+#define TCR_PS_SHIFT 16
+#define TCR_SH0_SHIFT 12
+#define TCR_ORGN0_SHIFT 10
+#define TCR_IRGN0_SHIFT 8
+#define TCR_SL0_SHIFT 6
+#define TCR_S_SHIFT 4
+
+/* AARCH64_TODO: we statically assume a 40 bit address space. Need to fix this,
+ * along with the support for the 0th level page table available in AArch64 */
+#define VTCR_CELL (T0SZ | SL0 << TCR_SL0_SHIFT \
+ | (TCR_RGN_WB_WA << TCR_IRGN0_SHIFT) \
+ | (TCR_RGN_WB_WA << TCR_ORGN0_SHIFT) \
+ | (TCR_INNER_SHAREABLE << TCR_SH0_SHIFT)\
+ | (TCR_PS_40B << TCR_PS_SHIFT) \
+ | VTCR_RES1)
+
+/*
+ * Hypervisor memory attribute indexes:
+ * 0: normal WB, RA, WA, non-transient
+ * 1: device
+ * 2: normal non-cacheable
+ * 3-7: unused
+ */
+#define DEFAULT_MAIR_EL2 0x00000000004404ff
+#define MAIR_IDX_WBRAWA 0
+#define MAIR_IDX_DEV 1
+#define MAIR_IDX_NC 2
+
+/* Stage 2 memory attributes (MemAttr[3:0]) */
+#define S2_MEMATTR_OWBIWB 0xf
+#define S2_MEMATTR_DEV 0x1
+
+#define S1_PTE_FLAG_NORMAL PTE_MEMATTR(MAIR_IDX_WBRAWA)
+#define S1_PTE_FLAG_DEVICE PTE_MEMATTR(MAIR_IDX_DEV)
+#define S1_PTE_FLAG_UNCACHED PTE_MEMATTR(MAIR_IDX_NC)
+
+#define S2_PTE_FLAG_NORMAL PTE_MEMATTR(S2_MEMATTR_OWBIWB)
+#define S2_PTE_FLAG_DEVICE PTE_MEMATTR(S2_MEMATTR_DEV)
+
+#define S1_DEFAULT_FLAGS (PTE_FLAG_VALID | PTE_ACCESS_FLAG \
+ | S1_PTE_FLAG_NORMAL | PTE_INNER_SHAREABLE\
+ | S1_PTE_ACCESS_EL0)
+
+/* Macros used by the core, only for the EL2 stage-1 mappings */
+#define PAGE_FLAG_DEVICE S1_PTE_FLAG_DEVICE
+#define PAGE_DEFAULT_FLAGS (S1_DEFAULT_FLAGS | S1_PTE_ACCESS_RW)
+#define PAGE_READONLY_FLAGS (S1_DEFAULT_FLAGS | S1_PTE_ACCESS_RO)
+#define PAGE_PRESENT_FLAGS PTE_FLAG_VALID
+#define PAGE_NONPRESENT_FLAGS 0
+
+#define INVALID_PHYS_ADDR (~0UL)
+
+#define REMAP_BASE 0x00100000UL
+#define NUM_REMAP_BITMAP_PAGES 1
+
+#define NUM_TEMPORARY_PAGES 16
+
+#ifndef __ASSEMBLY__
+
+typedef u64 *pt_entry_t;
+
+/* Only executed on hypervisor paging struct changes */
+static inline void arch_paging_flush_page_tlbs(unsigned long page_addr)
+{
+ asm volatile("tlbi vae2, %0\n"
+ : : "r" (page_addr >> PAGE_SHIFT));
+}
+
+/* Used to clean the PAGE_MAP_COHERENT page table changes */
+static inline void arch_paging_flush_cpu_caches(void *addr, long size)
+{
+ /* AARCH64_TODO */
+}
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* !_JAILHOUSE_ASM_PAGING_H */
--
2.4.3.368.g7974889


antonios...@huawei.com

Dec 18, 2015, 4:33:34 PM
to jailho...@googlegroups.com, Claudio Fontana, jan.k...@siemens.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Claudio Fontana <claudio...@huawei.com>

Implement set_bit, clear_bit, and test_and_set_bit.

test_and_set_bit is apparently used only in panic_printk, while
set_bit and clear_bit are used in the page table handling code.
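
A small usage sketch (the bitmap here is made up; the real users are
the page pool bitmaps and panic_printk):

    unsigned long bitmap[2] = { 0, 0 };

    set_bit(70, bitmap);             /* sets bit 6 of bitmap[1] */
    if (test_bit(70, bitmap))
            clear_bit(70, bitmap);

    if (!test_and_set_bit(0, bitmap))
            printk("first to claim bit 0\n");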

Signed-off-by: Claudio Fontana <claudio...@huawei.com>
---
hypervisor/arch/arm64/include/asm/bitops.h | 141 +++++++++++++++++++++++++++++
1 file changed, 141 insertions(+)
create mode 100644 hypervisor/arch/arm64/include/asm/bitops.h

diff --git a/hypervisor/arch/arm64/include/asm/bitops.h b/hypervisor/arch/arm64/include/asm/bitops.h
new file mode 100644
index 0000000..c2c8fd9
--- /dev/null
+++ b/hypervisor/arch/arm64/include/asm/bitops.h
@@ -0,0 +1,141 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ * Claudio Fontana <claudio...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef _JAILHOUSE_ASM_BITOPS_H
+#define _JAILHOUSE_ASM_BITOPS_H
+
+#include <jailhouse/types.h>
+
+#ifndef __ASSEMBLY__
+
+#define BITOPT_ALIGN(bits, addr) \
+ do { \
+ (addr) = (unsigned long *)((u64)(addr) & ~0x7) \
+ + (bits) / BITS_PER_LONG; \
+ (bits) %= BITS_PER_LONG; \
+ } while (0)
+
+static inline __attribute__((always_inline)) void
+clear_bit(int nr, volatile unsigned long *addr)
+{
+ u32 ret;
+ u64 tmp;
+
+ BITOPT_ALIGN(nr, addr);
+
+ /* AARCH64_TODO: do we need to preload? */
+ do {
+ asm volatile (
+ "ldxr %2, %1\n\t"
+ "bic %2, %2, %3\n\t"
+ "stxr %w0, %2, %1\n\t"
+ : "=r" (ret),
+ "+Q" (*(volatile unsigned long *)addr),
+ "=r" (tmp)
+ : "r" (1ul << nr));
+ } while (ret);
+}
+
+static inline __attribute__((always_inline)) void
+set_bit(unsigned int nr, volatile unsigned long *addr)
+{
+ u32 ret;
+ u64 tmp;
+
+ BITOPT_ALIGN(nr, addr);
+
+ /* AARCH64_TODO: do we need to preload? */
+ do {
+ asm volatile (
+ "ldxr %2, %1\n\t"
+ "orr %2, %2, %3\n\t"
+ "stxr %w0, %2, %1\n\t"
+ : "=r" (ret),
+ "+Q" (*(volatile unsigned long *)addr),
+ "=r" (tmp)
+ : "r" (1ul << nr));
+ } while (ret);
+}
+
+static inline __attribute__((always_inline)) int
+test_bit(unsigned int nr, const volatile unsigned long *addr)
+{
+ return ((1UL << (nr % BITS_PER_LONG)) &
+ (addr[nr / BITS_PER_LONG])) != 0;
+}
+
+static inline int test_and_set_bit(int nr, volatile unsigned long *addr)
+{
+ u32 ret;
+ u64 test, tmp;
+
+ BITOPT_ALIGN(nr, addr);
+
+ /* AARCH64_TODO: using Inner Shareable DMB at the moment,
+ * revisit when we will deal with shareability domains */
+
+ do {
+ asm volatile (
+ "ldxr %3, %2\n\t"
+ "ands %1, %3, %4\n\t"
+ "b.ne 1f\n\t"
+ "orr %3, %3, %4\n\t"
+ "1:\n\t"
+ "stxr %w0, %3, %2\n\t"
+ "dmb ish\n\t"
+ : "=r" (ret), "=&r" (test),
+ "+Q" (*(volatile unsigned long *)addr),
+ "=r" (tmp)
+ : "r" (1ul << nr));
+ } while (ret);
+ return !!(test);
+}
+
+/* Count leading zeroes */
+static inline unsigned long clz(unsigned long word)
+{
+ unsigned long val;
+
+ asm volatile ("clz %0, %1" : "=r" (val) : "r" (word));
+ return val;
+}
+
+/* Returns the position of the least significant 1, MSB=63, LSB=0 */
+static inline unsigned long ffsl(unsigned long word)
+{
+ if (!word)
+ return 0;
+ asm volatile ("rbit %0, %0" : "+r" (word));
+ return clz(word);
+}
+
+static inline unsigned long ffzl(unsigned long word)
+{
+ return ffsl(~word);
+}
+
+/* AARCH64_TODO: we can use SXTB, SXTH, SXTW */
+/* Extend the value of 'size' bits to a signed long */
+static inline unsigned long sign_extend(unsigned long val, unsigned int size)
+{
+ unsigned long mask;
+
+ if (size >= sizeof(unsigned long) * 8)
+ return val;
+
+ mask = 1ul << (size - 1);
+ return (val ^ mask) - mask;
+}
+
+#endif /* !__ASSEMBLY__ */
+#endif /* !_JAILHOUSE_ASM_BITOPS_H */
--
2.4.3.368.g7974889


antonios...@huawei.com

unread,
Dec 18, 2015, 4:33:35 PM12/18/15
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

Add the jailhouse_hypercall.h header file for AArch64. We will also
need this on the Linux side, in order to load Jailhouse into memory
and to issue hypercalls to an already loaded instance of the
hypervisor.
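
As a rough sketch (not part of this patch) of how the Linux side can use
these wrappers; the hypercall code comes from the generic
jailhouse/hypercall.h header, and the helper name and the specific code
used here are only illustrative:

        #include <asm/jailhouse_hypercall.h>

        static long start_cell(unsigned int cell_id)
        {
                /* expands to "hvc #0x4a48" with x0 = code, x1 = cell_id */
                return jailhouse_call_arg1(JAILHOUSE_HC_CELL_START, cell_id);
        }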

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
.../arch/arm64/include/asm/jailhouse_hypercall.h | 93 ++++++++++++++++++++++
1 file changed, 93 insertions(+)
create mode 100644 hypervisor/arch/arm64/include/asm/jailhouse_hypercall.h

diff --git a/hypervisor/arch/arm64/include/asm/jailhouse_hypercall.h b/hypervisor/arch/arm64/include/asm/jailhouse_hypercall.h
new file mode 100644
index 0000000..662b2b1
--- /dev/null
+++ b/hypervisor/arch/arm64/include/asm/jailhouse_hypercall.h
@@ -0,0 +1,93 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <jailhouse/config.h>
+
+#define JAILHOUSE_BORROW_ROOT_PT 0
+
+#define JAILHOUSE_CALL_INS "hvc #0x4a48"
+#define JAILHOUSE_CALL_NUM_RESULT "x0"
+#define JAILHOUSE_CALL_ARG1 "x1"
+#define JAILHOUSE_CALL_ARG2 "x2"
+
+/* CPU statistics */
+#define JAILHOUSE_CPU_STAT_VMEXITS_MAINTENANCE JAILHOUSE_GENERIC_CPU_STATS
+#define JAILHOUSE_CPU_STAT_VMEXITS_VIRQ JAILHOUSE_GENERIC_CPU_STATS + 1
+#define JAILHOUSE_CPU_STAT_VMEXITS_VSGI JAILHOUSE_GENERIC_CPU_STATS + 2
+#define JAILHOUSE_NUM_CPU_STATS JAILHOUSE_GENERIC_CPU_STATS + 3
+
+#ifndef __ASSEMBLY__
+
+struct jailhouse_comm_region {
+ COMM_REGION_GENERIC_HEADER;
+};
+
+static inline __u64 jailhouse_call(__u64 num)
+{
+ register __u64 num_result asm(JAILHOUSE_CALL_NUM_RESULT) = num;
+
+ asm volatile(
+ JAILHOUSE_CALL_INS
+ : "=r" (num_result)
+ : "r" (num_result)
+ : "memory");
+ return num_result;
+}
+
+static inline __u64 jailhouse_call_arg1(__u64 num, __u64 arg1)
+{
+ register __u64 num_result asm(JAILHOUSE_CALL_NUM_RESULT) = num;
+ register __u64 __arg1 asm(JAILHOUSE_CALL_ARG1) = arg1;
+
+ asm volatile(
+ JAILHOUSE_CALL_INS
+ : "=r" (num_result)
+ : "r" (num_result), "r" (__arg1)
+ : "memory");
+ return num_result;
+}
+
+static inline __u64 jailhouse_call_arg2(__u64 num, __u64 arg1, __u64 arg2)
+{
+ register __u64 num_result asm(JAILHOUSE_CALL_NUM_RESULT) = num;
+ register __u64 __arg1 asm(JAILHOUSE_CALL_ARG1) = arg1;
+ register __u64 __arg2 asm(JAILHOUSE_CALL_ARG2) = arg2;
+
+ asm volatile(
+ JAILHOUSE_CALL_INS
+ : "=r" (num_result)
+ : "r" (num_result), "r" (__arg1), "r" (__arg2)
+ : "memory");
+ return num_result;
+}
+
+static inline void
+jailhouse_send_msg_to_cell(struct jailhouse_comm_region *comm_region,
+ __u64 msg)
+{
+ comm_region->reply_from_cell = JAILHOUSE_MSG_NONE;
+ /* ensure reply was cleared before sending new message */
+ asm volatile("dmb ishst" : : : "memory");
+ comm_region->msg_to_cell = msg;
+}
+
+static inline void
+jailhouse_send_reply_from_cell(struct jailhouse_comm_region *comm_region,
+ __u64 reply)
+{
+ comm_region->msg_to_cell = JAILHOUSE_MSG_NONE;
+ /* ensure message was cleared before sending reply */
+ asm volatile("dmb ishst" : : : "memory");
+ comm_region->reply_from_cell = reply;
+}
+
+#endif /* !__ASSEMBLY__ */
--
2.4.3.368.g7974889


antonios...@huawei.com

unread,
Dec 18, 2015, 4:33:36 PM12/18/15
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

Add a root cell configuration for the ARMv8 Foundation model under
configs/foundation-v8.c, so we can use this target with Jailhouse.
We also add the necessary parameters in asm/platform.h for this
model.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
ci/jailhouse-config-foundation-v8.h | 5 ++
configs/foundation-v8.c | 120 +++++++++++++++++++++++++++
hypervisor/arch/arm64/include/asm/platform.h | 32 +++++++
3 files changed, 157 insertions(+)
create mode 100644 ci/jailhouse-config-foundation-v8.h
create mode 100644 configs/foundation-v8.c

diff --git a/ci/jailhouse-config-foundation-v8.h b/ci/jailhouse-config-foundation-v8.h
new file mode 100644
index 0000000..d59aa85
--- /dev/null
+++ b/ci/jailhouse-config-foundation-v8.h
@@ -0,0 +1,5 @@
+#define CONFIG_TRACE_ERROR 1
+#define CONFIG_ARM_GIC 1
+#define CONFIG_MACH_FOUNDATION_V8 1
+#define CONFIG_SERIAL_AMBA_PL011 1
+#define JAILHOUSE_BASE 0xfc000000
diff --git a/configs/foundation-v8.c b/configs/foundation-v8.c
new file mode 100644
index 0000000..b90a29d
--- /dev/null
+++ b/configs/foundation-v8.c
@@ -0,0 +1,120 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <linux/types.h>
+#include <jailhouse/cell-config.h>
+
+#define ARRAY_SIZE(a) (sizeof(a) / sizeof(a[0]))
+
+struct {
+ struct jailhouse_system header;
+ __u64 cpus[1];
+ struct jailhouse_memory mem_regions[9];
+ struct jailhouse_irqchip irqchips[1];
+} __attribute__((packed)) config = {
+ .header = {
+ .signature = JAILHOUSE_SYSTEM_SIGNATURE,
+ .hypervisor_memory = {
+ .phys_start = 0xfc000000,
+ .size = 0x4000000,
+ },
+ .debug_uart = {
+ .phys_start = 0x1c090000,
+ .size = 0x1000,
+ .flags = JAILHOUSE_MEM_IO,
+ },
+ .root_cell = {
+ .name = "Foundation ARMv8",
+
+ .cpu_set_size = sizeof(config.cpus),
+ .num_memory_regions = ARRAY_SIZE(config.mem_regions),
+ .num_irqchips = 1,
+ },
+ },
+
+ .cpus = {
+ 0xf,
+ },
+
+ .mem_regions = {
+ /* ethernet */ {
+ .phys_start = 0x1a000000,
+ .virt_start = 0x1a000000,
+ .size = 0x00010000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO,
+ },
+ /* sysreg */ {
+ .phys_start = 0x1c010000,
+ .virt_start = 0x1c010000,
+ .size = 0x00001000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO,
+ },
+ /* uart0 */ {
+ .phys_start = 0x1c090000,
+ .virt_start = 0x1c090000,
+ .size = 0x00001000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO,
+ },
+ /* uart1 */ {
+ .phys_start = 0x1c0a0000,
+ .virt_start = 0x1c0a0000,
+ .size = 0x00001000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO,
+ },
+ /* uart2 */ {
+ .phys_start = 0x1c0b0000,
+ .virt_start = 0x1c0b0000,
+ .size = 0x00001000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO,
+ },
+ /* uart3 */ {
+ .phys_start = 0x1c0c0000,
+ .virt_start = 0x1c0c0000,
+ .size = 0x00001000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO,
+ },
+ /* virtio_block */ {
+ .phys_start = 0x1c130000,
+ .virt_start = 0x1c130000,
+ .size = 0x00001000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO,
+ },
+ /* RAM */ {
+ .phys_start = 0x80000000,
+ .virt_start = 0x80000000,
+ .size = 0x7c000000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_EXECUTE,
+ },
+ /* RAM */ {
+ .phys_start = 0x880000000,
+ .virt_start = 0x880000000,
+ .size = 0x80000000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_EXECUTE,
+ },
+ },
+ .irqchips = {
+ /* GIC */ {
+ .address = 0x2c001000,
+ .pin_bitmap = 0xffffffffffffffff,
+ },
+ },
+
+};
diff --git a/hypervisor/arch/arm64/include/asm/platform.h b/hypervisor/arch/arm64/include/asm/platform.h
index afd7e72..f8d4d91 100644
--- a/hypervisor/arch/arm64/include/asm/platform.h
+++ b/hypervisor/arch/arm64/include/asm/platform.h
@@ -15,4 +15,36 @@

#include <jailhouse/config.h>

+#ifdef CONFIG_MACH_FOUNDATION_V8
+
+# ifdef CONFIG_ARM_GIC_V3
+# define GICD_BASE ((void *)0x2f000000)
+# define GICD_SIZE 0x10000
+# define GICR_BASE ((void *)0x2f100000)
+# define GICR_SIZE 0x100000
+
+# include <asm/gic_v3.h>
+# else /* GICv2 */
+# define GICD_BASE ((void *)0x2c001000)
+# define GICD_SIZE 0x1000
+# define GICC_BASE ((void *)0x2c002000)
+/*
+ * WARN: most device trees are broken and report only one page for the GICC.
+ * It will break the handle_irq code, since the GICC_DIR register is located at
+ * offset 0x1000...
+ */
+# define GICC_SIZE 0x2000
+# define GICH_BASE ((void *)0x2c004000)
+# define GICH_SIZE 0x2000
+# define GICV_BASE ((void *)0x2c006000)
+# define GICV_SIZE 0x2000
+
+# include <asm/gic_v2.h>
+# endif /* GIC */
+
+# define MAINTENANCE_IRQ 25
+# define UART_BASE 0x1c090000
+
+#endif /* CONFIG_MACH_FOUNDATION_V8 */
+
#endif /* !_JAILHOUSE_ASM_PLATFORM_H */
--
2.4.3.368.g7974889


antonios...@huawei.com

unread,
Dec 18, 2015, 4:33:37 PM12/18/15
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

Add the minimum stub functions expected by the rest of the codebase
to enable building on AArch64. We can then implement the missing
AArch64 functionality starting from these stubs.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/Makefile | 4 ++
hypervisor/arch/arm64/Makefile | 22 ++++++++
hypervisor/arch/arm64/asm-defines.c | 19 +++++++
hypervisor/arch/arm64/control.c | 83 ++++++++++++++++++++++++++++
hypervisor/arch/arm64/entry.S | 18 ++++++
hypervisor/arch/arm64/include/asm/head.h | 16 ++++++
hypervisor/arch/arm64/include/asm/platform.h | 18 ++++++
hypervisor/arch/arm64/mmio.c | 27 +++++++++
hypervisor/arch/arm64/setup.c | 41 ++++++++++++++
inmates/demos/arm64/Makefile | 0
inmates/lib/arm64/Makefile | 0
inmates/tools/arm64/Makefile | 0
12 files changed, 248 insertions(+)
create mode 100644 hypervisor/arch/arm64/Makefile
create mode 100644 hypervisor/arch/arm64/asm-defines.c
create mode 100644 hypervisor/arch/arm64/control.c
create mode 100644 hypervisor/arch/arm64/entry.S
create mode 100644 hypervisor/arch/arm64/include/asm/head.h
create mode 100644 hypervisor/arch/arm64/include/asm/platform.h
create mode 100644 hypervisor/arch/arm64/mmio.c
create mode 100644 hypervisor/arch/arm64/setup.c
create mode 100644 inmates/demos/arm64/Makefile
create mode 100644 inmates/lib/arm64/Makefile
create mode 100644 inmates/tools/arm64/Makefile

diff --git a/hypervisor/Makefile b/hypervisor/Makefile
index 0532e4e..c037ed0 100644
--- a/hypervisor/Makefile
+++ b/hypervisor/Makefile
@@ -33,6 +33,10 @@ ifeq ($(SRCARCH),arm)
KBUILD_CFLAGS += -marm
endif

+ifeq ($(SRCARCH),arm64)
+LINUXINCLUDE += -I$(src)/arch/arm/include
+endif
+
ifneq ($(wildcard $(obj)/include/jailhouse/config.h),)
KBUILD_CFLAGS += -include $(obj)/include/jailhouse/config.h
endif
diff --git a/hypervisor/arch/arm64/Makefile b/hypervisor/arch/arm64/Makefile
new file mode 100644
index 0000000..fbb36df
--- /dev/null
+++ b/hypervisor/arch/arm64/Makefile
@@ -0,0 +1,22 @@
+#
+# Jailhouse AArch64 support
+#
+# Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+#
+# Authors:
+# Antonios Motakis <antonios...@huawei.com>
+#
+# This work is licensed under the terms of the GNU GPL, version 2. See
+# the COPYING file in the top-level directory.
+#
+
+include $(CONFIG_MK)
+
+KBUILD_AFLAGS := $(filter-out -include asm/unified.h,$(KBUILD_AFLAGS))
+
+always := built-in.o
+
+obj-y := entry.o setup.o control.o mmio.o
+obj-y += ../arm/mmu_cell.o ../arm/paging.o ../arm/dbg-write.o ../arm/lib.o
+
+obj-$(CONFIG_SERIAL_AMBA_PL011) += ../arm/dbg-write-pl011.o
diff --git a/hypervisor/arch/arm64/asm-defines.c b/hypervisor/arch/arm64/asm-defines.c
new file mode 100644
index 0000000..c026a3c
--- /dev/null
+++ b/hypervisor/arch/arm64/asm-defines.c
@@ -0,0 +1,19 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <jailhouse/gen-defines.h>
+
+void common(void);
+
+void common(void)
+{
+}
diff --git a/hypervisor/arch/arm64/control.c b/hypervisor/arch/arm64/control.c
new file mode 100644
index 0000000..a1c4774
--- /dev/null
+++ b/hypervisor/arch/arm64/control.c
@@ -0,0 +1,83 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <jailhouse/control.h>
+#include <jailhouse/printk.h>
+
+int arch_cell_create(struct cell *cell)
+{
+ return trace_error(-EINVAL);
+}
+
+void arch_flush_cell_vcpu_caches(struct cell *cell)
+{
+ /* AARCH64_TODO */
+ trace_error(-EINVAL);
+}
+
+void arch_cell_destroy(struct cell *cell)
+{
+ trace_error(-EINVAL);
+ while (1);
+}
+
+void arch_config_commit(struct cell *cell_added_removed)
+{
+}
+
+void arch_shutdown(void)
+{
+ trace_error(-EINVAL);
+ while (1);
+}
+
+void arch_suspend_cpu(unsigned int cpu_id)
+{
+ trace_error(-EINVAL);
+ while (1);
+}
+
+void arch_resume_cpu(unsigned int cpu_id)
+{
+ trace_error(-EINVAL);
+ while (1);
+}
+
+void arch_reset_cpu(unsigned int cpu_id)
+{
+ trace_error(-EINVAL);
+ while (1);
+}
+
+void arch_park_cpu(unsigned int cpu_id)
+{
+ trace_error(-EINVAL);
+ while (1);
+}
+
+void arch_shutdown_cpu(unsigned int cpu_id)
+{
+ trace_error(-EINVAL);
+ while (1);
+}
+
+void __attribute__((noreturn)) arch_panic_stop(void)
+{
+ trace_error(-EINVAL);
+ while (1);
+}
+
+void arch_panic_park(void)
+{
+ trace_error(-EINVAL);
+ while (1);
+}
diff --git a/hypervisor/arch/arm64/entry.S b/hypervisor/arch/arm64/entry.S
new file mode 100644
index 0000000..9f4e6c4
--- /dev/null
+++ b/hypervisor/arch/arm64/entry.S
@@ -0,0 +1,18 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+/* Entry point for Linux loader module on JAILHOUSE_ENABLE */
+ .text
+ .globl arch_entry
+arch_entry:
+ mov x0, -22
+ ret
diff --git a/hypervisor/arch/arm64/include/asm/head.h b/hypervisor/arch/arm64/include/asm/head.h
new file mode 100644
index 0000000..53dd26a
--- /dev/null
+++ b/hypervisor/arch/arm64/include/asm/head.h
@@ -0,0 +1,16 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef _JAILHOUSE_ASM_HEAD_H
+#define _JAILHOUSE_ASM_HEAD_H
+
+#endif /* !_JAILHOUSE_ASM_HEAD_H */
diff --git a/hypervisor/arch/arm64/include/asm/platform.h b/hypervisor/arch/arm64/include/asm/platform.h
new file mode 100644
index 0000000..afd7e72
--- /dev/null
+++ b/hypervisor/arch/arm64/include/asm/platform.h
@@ -0,0 +1,18 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef _JAILHOUSE_ASM_PLATFORM_H
+#define _JAILHOUSE_ASM_PLATFORM_H
+
+#include <jailhouse/config.h>
+
+#endif /* !_JAILHOUSE_ASM_PLATFORM_H */
diff --git a/hypervisor/arch/arm64/mmio.c b/hypervisor/arch/arm64/mmio.c
new file mode 100644
index 0000000..37745d7
--- /dev/null
+++ b/hypervisor/arch/arm64/mmio.c
@@ -0,0 +1,27 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <jailhouse/entry.h>
+#include <jailhouse/mmio.h>
+#include <jailhouse/printk.h>
+
+unsigned int arch_mmio_count_regions(struct cell *cell)
+{
+ /* not entirely a lie :) */
+ return 0;
+}
+
+void arm_mmio_perform_access(unsigned long base, struct mmio_access *mmio)
+{
+ trace_error(-EINVAL);
+ while (1);
+}
diff --git a/hypervisor/arch/arm64/setup.c b/hypervisor/arch/arm64/setup.c
new file mode 100644
index 0000000..ca83940
--- /dev/null
+++ b/hypervisor/arch/arm64/setup.c
@@ -0,0 +1,41 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <jailhouse/entry.h>
+#include <jailhouse/printk.h>
+
+int arch_init_early(void)
+{
+ return trace_error(-EINVAL);
+}
+
+int arch_cpu_init(struct per_cpu *cpu_data)
+{
+ return trace_error(-EINVAL);
+}
+
+int arch_init_late(void)
+{
+ return trace_error(-EINVAL);
+}
+
+void __attribute__((noreturn)) arch_cpu_activate_vmm(struct per_cpu *cpu_data)
+{
+ trace_error(-EINVAL);
+ while (1);
+}
+
+void arch_cpu_restore(struct per_cpu *cpu_data, int return_code)
+{
+ trace_error(-EINVAL);
+ while (1);
+}
diff --git a/inmates/demos/arm64/Makefile b/inmates/demos/arm64/Makefile
new file mode 100644
index 0000000..e69de29
diff --git a/inmates/lib/arm64/Makefile b/inmates/lib/arm64/Makefile
new file mode 100644
index 0000000..e69de29
diff --git a/inmates/tools/arm64/Makefile b/inmates/tools/arm64/Makefile
new file mode 100644
index 0000000..e69de29
--
2.4.3.368.g7974889


antonios...@huawei.com

unread,
Dec 18, 2015, 4:33:37 PM12/18/15
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

Implement the entry point we will jump to after the Linux driver
loads the firmware to memory. Here we also set up a stack for the
hypervisor to use.

Unlike AArch32, we jump to EL2 as soon as we enter the hypervisor
binary.

To do this, we also set up temporary MMU mappings. We use just two
pages to statically configure the MMU for identity mapping; we need
this in order to perform unaligned accesses from the hypervisor
binary during early initialization.

To generate the early page tables at build time, we need to know the
Jailhouse physical address and the physical address of the debug
UART. We introduce this change of behaviour on AArch64 because, on
this architecture, we do not have the option of reusing the page
tables previously set up by Linux.
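
(For illustration only, not part of the patch: the .rept loops in entry.S
compute at assembly time roughly what the following C sketch describes;
l1 and l2 stand for the two bootstrap tables.)

        /* 512 x 1GB identity-mapped blocks; the 1GB region containing the
         * UART is described by a level-2 table instead, where only the 2MB
         * block covering the UART gets device attributes. */
        for (i = 0; i < 512; i++) {
                u64 addr = (u64)i << 30;

                if ((addr ^ UART_BASE) >> 30)
                        l1[i] = addr | PAGE_DEFAULT_FLAGS;
                else
                        l1[i] = (u64)l2 | PTE_TABLE_FLAGS;
        }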

Reviewed for cache coherency correctness by Dmitry Voytik.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
Signed-off-by: Dmitry Voytik <dmitry...@huawei.com>
---
hypervisor/arch/arm64/entry.S | 181 ++++++++++++++++++++++++++++-
hypervisor/arch/arm64/include/asm/percpu.h | 32 ++++-
2 files changed, 208 insertions(+), 5 deletions(-)

diff --git a/hypervisor/arch/arm64/entry.S b/hypervisor/arch/arm64/entry.S
index 9f4e6c4..9b9976f 100644
--- a/hypervisor/arch/arm64/entry.S
+++ b/hypervisor/arch/arm64/entry.S
@@ -10,9 +10,188 @@
* the COPYING file in the top-level directory.
*/

+#include <asm/head.h>
+#include <asm/percpu.h>
+#include <asm/platform.h>
+#include <asm/jailhouse_hypercall.h>
+
/* Entry point for Linux loader module on JAILHOUSE_ENABLE */
.text
.globl arch_entry
arch_entry:
- mov x0, -22
+ /*
+ * x0: cpuid
+ *
+ * We don't have access to our own address space yet, so we will
+ * abuse some caller-saved registers to preserve across calls:
+ * x16: saved hyp vectors
+ * x17: cpuid
+ * x18: caller lr
+ */
+ mov x17, x0
+ mov x18, x30
+
+ /* Note 1: After turning the MMU off, the CPU can start bypassing the
+ * caches. But data cached before that point is kept in the caches,
+ * either until the CPU turns the MMU on again or until other coherent
+ * agents move the cached data out. That's why there is no need to
+ * clean the D-cache before turning the MMU off.
+ *
+ * Note 2: We don't have to clean the D-cache to protect against
+ * malicious guests that execute 'dc isw' (data or unified Cache line
+ * Invalidate by Set/Way), because when virtualization is enabled
+ * (HCR_EL2.VM == 1) the hardware automatically upgrades 'dc isw' to
+ * 'dc cisw' (Clean + Invalidate). Executing a Clean operation before
+ * an Invalidate is safe in guests. */
+
+ /* keep the linux stub EL2 vectors for later */
+ mov x0, xzr
+ hvc #0
+ mov x16, x0
+
+ /* install bootstrap_vectors */
+ ldr x0, =bootstrap_vectors
+ hvc #0
+ hvc #0 /* bootstrap vectors enter EL2 */
+
+ /* the bootstrap vector returns us here in physical addressing */
+el2_entry:
+ mrs x1, esr_el2
+ lsr x1, x1, #26
+ cmp x1, #0x16
+ b.ne . /* not hvc */
+
+ /* enable temporary MMU mappings for early initialization */
+ ldr x0, =bootstrap_pt_l1
+ bl enable_mmu_el2
+
+ mov x0, x17 /* preserved cpuid, will be passed to entry */
+ ldr x1, =__page_pool
+ mov x2, #(1 << PERCPU_SIZE_SHIFT)
+ /*
+ * percpu data = pool + cpuid * shift
+ * AARCH64_TODO: handle affinities
+ */
+ madd x1, x2, x0, x1
+ msr tpidr_el2, x1
+
+ /* set up the stack and push the root cell's callee saved registers */
+ add sp, x1, #PERCPU_STACK_END
+ stp x29, x18, [sp, #-16]! /* note: our caller lr is in x18 */
+ stp x27, x28, [sp, #-16]!
+ stp x25, x26, [sp, #-16]!
+ stp x23, x24, [sp, #-16]!
+ stp x21, x22, [sp, #-16]!
+ stp x19, x20, [sp, #-16]!
+ /*
+ * We pad the stack, so we can consistently access the guest
+ * registers from either the initialization, or the exception
+ * handling code paths. 19 caller saved registers plus the
+ * exit_reason, which we don't use on entry.
+ */
+ sub sp, sp, 20 * 8
+
+ mov x29, xzr
+
+ /* save the Linux stub vectors we kept earlier */
+ add x2, x1, #PERCPU_LINUX_SAVED_VECTORS
+ str x16, [x2]
+
+ /* Call entry(cpuid, struct per_cpu*). Should not return. */
+ bl entry
+ b .
+
+ .globl enable_mmu_el2
+enable_mmu_el2:
+ /*
+ * x0: u64 ttbr0_el2
+ */
+
+ /* setup the MMU for EL2 hypervisor mappings */
+ ldr x1, =DEFAULT_MAIR_EL2
+ msr mair_el2, x1
+
+ /* AARCH64_TODO: ARM architecture supports CPU clusters which could be
+ * in separate inner shareable domains. At the same time: "The Inner
+ * Shareable domain is expected to be the set of PEs controlled by
+ * a single hypervisor or operating system." (see p. 93 of ARM ARM)
+ * We should think what hw configuration we support by one instance of
+ * the hypervisor and choose Inner or Outter sharable domain.
+ */
+ ldr x1, =(T0SZ | (TCR_RGN_WB_WA << TCR_IRGN0_SHIFT) \
+ | (TCR_RGN_WB_WA << TCR_ORGN0_SHIFT) \
+ | (TCR_INNER_SHAREABLE << TCR_SH0_SHIFT) \
+ | (TCR_PS_40B << TCR_PS_SHIFT) \
+ | TCR_EL2_RES1)
+ msr tcr_el2, x1
+
+ msr ttbr0_el2, x0
+
+ tlbi alle2
+ dsb nsh
+
+ /* Enable MMU, allow cacheability for instructions and data */
+ ldr x1, =(SCTLR_I_BIT | SCTLR_C_BIT | SCTLR_M_BIT | SCTLR_EL2_RES1)
+ msr sctlr_el2, x1
+ isb
+
ret
+
+/*
+ * Using two pages, we can economically identity map the whole address space
+ * of the machine (that is accessible with mappings starting from L1 with a 4KB
+ * translation granule), with the 2MB block that includes the UART marked as
+ * device memory. This allows us to start initializing the hypervisor before
+ * we set up the final EL2 page tables.
+ */
+.align 12
+bootstrap_pt_l1:
+ addr = 0
+ blk_sz = 1 << 30
+ .rept 512
+ .if (addr ^ UART_BASE) >> 30
+ .quad addr | PAGE_DEFAULT_FLAGS
+ .else
+ .quad bootstrap_pt_l2 + PTE_TABLE_FLAGS
+ .endif
+ addr = addr + blk_sz
+ .endr
+bootstrap_pt_l2:
+ addr = UART_BASE & ~((1 << 30) - 1)
+ blk_sz = 1 << 21
+ .rept 512
+ .if (addr ^ UART_BASE) >> 21
+ .quad addr | PAGE_DEFAULT_FLAGS
+ .else
+ .quad addr | PAGE_DEFAULT_FLAGS | PAGE_FLAG_DEVICE
+ .endif
+ addr = addr + blk_sz
+ .endr
+
+.macro ventry label
+ .align 7
+ b \label
+.endm
+
+ .globl bootstrap_vectors
+ .align 11
+bootstrap_vectors:
+ ventry .
+ ventry .
+ ventry .
+ ventry .
+
+ ventry .
+ ventry .
+ ventry .
+ ventry .
+
+ ventry el2_entry
+ ventry .
+ ventry .
+ ventry .
+
+ ventry .
+ ventry .
+ ventry .
+ ventry .
diff --git a/hypervisor/arch/arm64/include/asm/percpu.h b/hypervisor/arch/arm64/include/asm/percpu.h
index 17dd1ad..381d7fc 100644
--- a/hypervisor/arch/arm64/include/asm/percpu.h
+++ b/hypervisor/arch/arm64/include/asm/percpu.h
@@ -16,12 +16,20 @@
#include <jailhouse/types.h>
#include <asm/paging.h>

+/* Keep in sync with struct per_cpu! */
+#define PERCPU_SIZE_SHIFT 13
+#define PERCPU_STACK_END PAGE_SIZE
+#define PERCPU_LINUX_SAVED_VECTORS PERCPU_STACK_END
+
#ifndef __ASSEMBLY__

#include <asm/cell.h>
#include <asm/spinlock.h>

struct per_cpu {
+ u8 stack[PAGE_SIZE];
+ unsigned long saved_vectors;
+
/* common fields */
unsigned int cpu_id;
struct cell *cell;
@@ -34,8 +42,10 @@ struct per_cpu {

static inline struct per_cpu *this_cpu_data(void)
{
- while (1);
- return NULL;
+ struct per_cpu *cpu_data;
+
+ arm_read_sysreg(TPIDR_EL2, cpu_data);
+ return cpu_data;
}

#define DEFINE_PER_CPU_ACCESSOR(field) \
@@ -49,8 +59,16 @@ DEFINE_PER_CPU_ACCESSOR(cell)

static inline struct per_cpu *per_cpu(unsigned int cpu)
{
- while (1);
- return NULL;
+ extern u8 __page_pool[];
+
+ return (struct per_cpu *)(__page_pool + (cpu << PERCPU_SIZE_SHIFT));
+}
+
+static inline struct registers *guest_regs(struct per_cpu *cpu_data)
+{
+ /* assumes that the cell registers are at the beginning of the stack */
+ return (struct registers *)(cpu_data->stack + PERCPU_STACK_END
+ - sizeof(struct registers));
}

unsigned int arm_cpu_phys2virt(unsigned int cpu_id);
@@ -61,7 +79,13 @@ unsigned int arm_cpu_virt2phys(struct cell *cell, unsigned int virt_id);

static inline void __check_assumptions(void)
{
+ struct per_cpu cpu_data;
+
CHECK_ASSUMPTION(sizeof(unsigned long) == (8));
+ CHECK_ASSUMPTION(sizeof(struct per_cpu) == (1 << PERCPU_SIZE_SHIFT));
+ CHECK_ASSUMPTION(sizeof(cpu_data.stack) == PERCPU_STACK_END);
+ CHECK_ASSUMPTION(__builtin_offsetof(struct per_cpu, saved_vectors) ==
+ PERCPU_LINUX_SAVED_VECTORS);

antonios...@huawei.com

unread,
Dec 18, 2015, 4:33:37 PM12/18/15
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

Add the initial cell.h header file needed to build on AArch64.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm64/include/asm/cell.h | 34 ++++++++++++++++++++++++++++++++
1 file changed, 34 insertions(+)
create mode 100644 hypervisor/arch/arm64/include/asm/cell.h

diff --git a/hypervisor/arch/arm64/include/asm/cell.h b/hypervisor/arch/arm64/include/asm/cell.h
new file mode 100644
index 0000000..4ba8224
--- /dev/null
+++ b/hypervisor/arch/arm64/include/asm/cell.h
@@ -0,0 +1,34 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef _JAILHOUSE_ASM_CELL_H
+#define _JAILHOUSE_ASM_CELL_H
+
+#include <jailhouse/types.h>
+#include <asm/spinlock.h>
+
+#ifndef __ASSEMBLY__
+
+#include <jailhouse/cell-config.h>
+#include <jailhouse/hypercall.h>
+#include <jailhouse/paging.h>
+
+struct arch_cell {
+ struct paging_structures mm;
+ spinlock_t caches_lock;
+ bool needs_flush;
+};
+
+extern struct cell root_cell;
+
+#endif /* !__ASSEMBLY__ */
+#endif /* !_JAILHOUSE_ASM_CELL_H */
--
2.4.3.368.g7974889


antonios...@huawei.com

unread,
Dec 18, 2015, 4:33:39 PM12/18/15
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

Add the root cell configuration and necessary headers to build
and run Jailhouse on the AMD Seattle development board.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
ci/jailhouse-config-amd-seattle.h | 5 +
configs/amd-seattle.c | 163 +++++++++++++++++++++++++++
hypervisor/arch/arm/include/asm/uart_pl011.h | 2 +
hypervisor/arch/arm64/include/asm/platform.h | 19 ++++
4 files changed, 189 insertions(+)
create mode 100644 ci/jailhouse-config-amd-seattle.h
create mode 100644 configs/amd-seattle.c

diff --git a/ci/jailhouse-config-amd-seattle.h b/ci/jailhouse-config-amd-seattle.h
new file mode 100644
index 0000000..c721a46
--- /dev/null
+++ b/ci/jailhouse-config-amd-seattle.h
@@ -0,0 +1,5 @@
+#define CONFIG_TRACE_ERROR 1
+#define CONFIG_ARM_GIC 1
+#define CONFIG_MACH_AMD_SEATTLE 1
+#define CONFIG_SERIAL_AMBA_PL011 1
+#define JAILHOUSE_BASE 0x82fc000000
diff --git a/configs/amd-seattle.c b/configs/amd-seattle.c
new file mode 100644
index 0000000..1f86193
--- /dev/null
+++ b/configs/amd-seattle.c
@@ -0,0 +1,163 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <linux/types.h>
+#include <jailhouse/cell-config.h>
+
+#define ARRAY_SIZE(a) (sizeof(a) / sizeof(a[0]))
+
+struct {
+ struct jailhouse_system header;
+ __u64 cpus[1];
+ struct jailhouse_memory mem_regions[15];
+ struct jailhouse_irqchip irqchips[1];
+} __attribute__((packed)) config = {
+ .header = {
+ .signature = JAILHOUSE_SYSTEM_SIGNATURE,
+ .hypervisor_memory = {
+ .phys_start = 0x82fc000000,
+ .size = 0x4000000,
+ },
+ .debug_uart = {
+ .phys_start = 0xe1010000,
+ .size = 0x1000,
+ .flags = JAILHOUSE_MEM_IO,
+ },
+ .root_cell = {
+ .name = "AMD Seattle",
+
+ .cpu_set_size = sizeof(config.cpus),
+ .num_memory_regions = ARRAY_SIZE(config.mem_regions),
+ .num_irqchips = 1,
+ },
+ },
+
+ .cpus = {
+ 0xff,
+ },
+
+ .mem_regions = {
+ /* gpio */ {
+ .phys_start = 0xe0030000,
+ .virt_start = 0xe0030000,
+ .size = 0x1000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO,
+ },
+ /* gpio */ {
+ .phys_start = 0xe0080000,
+ .virt_start = 0xe0080000,
+ .size = 0x1000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO,
+ },
+ /* gpio */ {
+ .phys_start = 0xe1050000,
+ .virt_start = 0xe1050000,
+ .size = 0x1000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO,
+ },
+ /* sata */ {
+ .phys_start = 0xe0300000,
+ .virt_start = 0xe0300000,
+ .size = 0x10000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO,
+ },
+ /* xgmac */ {
+ .phys_start = 0xe0700000,
+ .virt_start = 0xe0700000,
+ .size = 0x100000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO,
+ },
+ /* xgmac */ {
+ .phys_start = 0xe0900000,
+ .virt_start = 0xe0900000,
+ .size = 0x100000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO,
+ },
+ /* smmu */ {
+ .phys_start = 0xe0600000,
+ .virt_start = 0xe0600000,
+ .size = 0x10000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO,
+ },
+ /* smmu */ {
+ .phys_start = 0xe0800000,
+ .virt_start = 0xe0800000,
+ .size = 0x10000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO,
+ },
+ /* serial */ {
+ .phys_start = 0xe1010000,
+ .virt_start = 0xe1010000,
+ .size = 0x1000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO,
+ },
+ /* ssp */ {
+ .phys_start = 0xe1020000,
+ .virt_start = 0xe1020000,
+ .size = 0x1000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO,
+ },
+ /* ssp */ {
+ .phys_start = 0xe1030000,
+ .virt_start = 0xe1030000,
+ .size = 0x1000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO,
+ },
+ /* phy */ {
+ .phys_start = 0xe1240000,
+ .virt_start = 0xe1240000,
+ .size = 0x1000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO,
+ },
+ /* phy */ {
+ .phys_start = 0xe1250000,
+ .virt_start = 0xe1250000,
+ .size = 0x1000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO,
+ },
+ /* ccn */ {
+ .phys_start = 0xe8000000,
+ .virt_start = 0xe8000000,
+ .size = 0x1000000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO,
+ },
+ /* RAM */ {
+ .phys_start = 0x8000000000,
+ .virt_start = 0x8000000000,
+ .size = 0x400000000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_EXECUTE,
+ },
+ },
+ .irqchips = {
+ /* GIC */ {
+ .address = 0xe1100000,
+ .pin_bitmap = 0xffffffffffffffff,
+ },
+ },
+
+};
diff --git a/hypervisor/arch/arm/include/asm/uart_pl011.h b/hypervisor/arch/arm/include/asm/uart_pl011.h
index 8548c86..9505161 100644
--- a/hypervisor/arch/arm/include/asm/uart_pl011.h
+++ b/hypervisor/arch/arm/include/asm/uart_pl011.h
@@ -70,6 +70,7 @@

static void uart_init(struct uart_chip *chip)
{
+#ifndef CONFIG_MACH_AMD_SEATTLE
/* 115200 8N1 */
/* FIXME: Can be improved with an implementation of __aeabi_uidiv */
u32 bauddiv = UART_CLK / (16 * 115200);
@@ -83,6 +84,7 @@ static void uart_init(struct uart_chip *chip)
mmio_write16(base + UARTIBRD, bauddiv);
mmio_write16(base + UARTCR, (UARTCR_EN | UARTCR_TXE | UARTCR_RXE |
UARTCR_Out1 | UARTCR_Out2));
+#endif
}

static void uart_wait(struct uart_chip *chip)
diff --git a/hypervisor/arch/arm64/include/asm/platform.h b/hypervisor/arch/arm64/include/asm/platform.h
index f8d4d91..e97b0f9 100644
--- a/hypervisor/arch/arm64/include/asm/platform.h
+++ b/hypervisor/arch/arm64/include/asm/platform.h
@@ -47,4 +47,23 @@

#endif /* CONFIG_MACH_FOUNDATION_V8 */

+#ifdef CONFIG_MACH_AMD_SEATTLE
+
+/* the device tree shipped with the kernel is wrong;
+ * these are the corrected values */
+# define GICD_BASE ((void *)0xe1110000)
+# define GICD_SIZE 0x1000
+# define GICC_BASE ((void *)0xe112f000)
+# define GICC_SIZE 0x2000
+# define GICH_BASE ((void *)0xe1140000)
+# define GICH_SIZE 0x10000
+# define GICV_BASE ((void *)0xe116f000)
+# define GICV_SIZE 0x2000
+
+# include <asm/gic_v2.h>
+# define MAINTENANCE_IRQ 25
+# define UART_BASE 0xe1010000
+
+#endif /* CONFIG_MACH_AMD_SEATTLE */

antonios...@huawei.com

unread,
Dec 18, 2015, 4:33:40 PM12/18/15
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

Catch accesses to the MMIO regions that we want to handle from the
hypervisor. These are also used by the GIC code.
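
The data abort handler assembles the faulting intermediate physical address
from HPFAR_EL2 (the faulting page) and FAR_EL2 (the offset within the page),
as in this simplified excerpt of the code added below:

        arm_read_sysreg(HPFAR_EL2, hpfar);
        arm_read_sysreg(FAR_EL2, far);
        /* IPA of the access: HPFAR holds the page frame, shifted right by 8 */
        mmio.address = (hpfar << 8) | (far & 0xfff);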

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
Signed-off-by: Dmitry Voytik <dmitry...@huawei.com>
---
hypervisor/arch/arm64/include/asm/traps.h | 2 +
hypervisor/arch/arm64/mmio.c | 123 +++++++++++++++++++++++++++++-
hypervisor/arch/arm64/traps.c | 10 ++-
3 files changed, 132 insertions(+), 3 deletions(-)

diff --git a/hypervisor/arch/arm64/include/asm/traps.h b/hypervisor/arch/arm64/include/asm/traps.h
index 3a60e30..2f2e0f6 100644
--- a/hypervisor/arch/arm64/include/asm/traps.h
+++ b/hypervisor/arch/arm64/include/asm/traps.h
@@ -31,5 +31,7 @@ struct trap_context {

void arch_skip_instruction(struct trap_context *ctx);

+int arch_handle_dabt(struct trap_context *ctx);
+
#endif /* !__ASSEMBLY__ */
#endif /* !_JAILHOUSE_ASM_TRAPS_H */
diff --git a/hypervisor/arch/arm64/mmio.c b/hypervisor/arch/arm64/mmio.c
index 37745d7..a885410 100644
--- a/hypervisor/arch/arm64/mmio.c
+++ b/hypervisor/arch/arm64/mmio.c
@@ -2,10 +2,14 @@
* Jailhouse AArch64 support
*
* Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ * Copyright (C) 2014 ARM Limited
*
* Authors:
* Antonios Motakis <antonios...@huawei.com>
*
+ * Part of the functionality is derived from the AArch32 implementation, under
+ * hypervisor/arch/arm/mmio.c by Jean-Philippe Brucker.
+ *
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*/
@@ -13,6 +17,12 @@
#include <jailhouse/entry.h>
#include <jailhouse/mmio.h>
#include <jailhouse/printk.h>
+#include <asm/bitops.h>
+#include <asm/percpu.h>
+#include <asm/sysregs.h>
+#include <asm/traps.h>
+
+/* AARCH64_TODO: consider merging this with the AArch32 version */

unsigned int arch_mmio_count_regions(struct cell *cell)
{
@@ -20,8 +30,119 @@ unsigned int arch_mmio_count_regions(struct cell *cell)
return 0;
}

-void arm_mmio_perform_access(unsigned long base, struct mmio_access *mmio)
+static void arch_inject_dabt(struct trap_context *ctx, unsigned long addr)
{
trace_error(-EINVAL);
while (1);
}
+
+void arm_mmio_perform_access(unsigned long base, struct mmio_access *mmio)
+{
+ void *addr = (void *)(base + mmio->address);
+
+ if (mmio->is_write)
+ switch (mmio->size) {
+ case 1:
+ mmio_write8(addr, mmio->value);
+ return;
+ case 2:
+ mmio_write16(addr, mmio->value);
+ return;
+ case 4:
+ mmio_write32(addr, mmio->value);
+ return;
+ case 8:
+ mmio_write64(addr, mmio->value);
+ return;
+ }
+ else
+ switch (mmio->size) {
+ case 1:
+ mmio->value = mmio_read8(addr);
+ return;
+ case 2:
+ mmio->value = mmio_read16(addr);
+ return;
+ case 4:
+ mmio->value = mmio_read32(addr);
+ return;
+ case 8:
+ mmio->value = mmio_read64(addr);
+ return;
+ }
+
+ printk("WARNING: Ignoring unsupported MMIO access size %d\n",
+ mmio->size);
+}
+
+int arch_handle_dabt(struct trap_context *ctx)
+{
+ enum mmio_result mmio_result;
+ struct mmio_access mmio;
+ unsigned long hpfar;
+ unsigned long hdfar;
+ /* Decode the syndrome fields */
+ u32 iss = ESR_ISS(ctx->esr);
+ u32 isv = iss >> 24;
+ u32 sas = iss >> 22 & 0x3;
+ u32 sse = iss >> 21 & 0x1;
+ u32 srt = iss >> 16 & 0x1f;
+ u32 ea = iss >> 9 & 0x1;
+ u32 cm = iss >> 8 & 0x1;
+ u32 s1ptw = iss >> 7 & 0x1;
+ u32 is_write = iss >> 6 & 0x1;
+ u32 size = 1 << sas;
+
+ arm_read_sysreg(HPFAR_EL2, hpfar);
+ arm_read_sysreg(FAR_EL2, hdfar);
+ mmio.address = hpfar << 8;
+ mmio.address |= hdfar & 0xfff;
+
+ this_cpu_data()->stats[JAILHOUSE_CPU_STAT_VMEXITS_MMIO]++;
+
+ /*
+ * Invalid instruction syndrome means multiple access or writeback,
+ * there is nothing we can do.
+ */
+ if (!isv)
+ goto error_unhandled;
+
+ /* Re-inject abort during page walk, cache maintenance or external */
+ if (s1ptw || ea || cm) {
+ arch_inject_dabt(ctx, hdfar);
+ return TRAP_HANDLED;
+ }
+
+ if (is_write) {
+ /* Load the value to write from the src register */
+ mmio.value = ctx->regs[srt];
+ if (sse)
+ mmio.value = sign_extend(mmio.value, 8 * size);
+ } else {
+ mmio.value = 0;
+ }
+ mmio.is_write = is_write;
+ mmio.size = size;
+
+ mmio_result = mmio_handle_access(&mmio);
+ if (mmio_result == MMIO_ERROR)
+ return TRAP_FORBIDDEN;
+ if (mmio_result == MMIO_UNHANDLED)
+ goto error_unhandled;
+
+ /* Put the read value into the dest register */
+ if (!is_write) {
+ if (sse)
+ mmio.value = sign_extend(mmio.value, 8 * size);
+ ctx->regs[srt] = mmio.value;
+ }
+
+ arch_skip_instruction(ctx);
+ return TRAP_HANDLED;
+
+error_unhandled:
+ panic_printk("Unhandled data %s at 0x%x(%d)\n",
+ (is_write ? "write" : "read"), mmio.address, size);
+
+ return TRAP_UNHANDLED;
+}
diff --git a/hypervisor/arch/arm64/traps.c b/hypervisor/arch/arm64/traps.c
index 199b497..cc6fe6c 100644
--- a/hypervisor/arch/arm64/traps.c
+++ b/hypervisor/arch/arm64/traps.c
@@ -15,6 +15,7 @@
#include <jailhouse/printk.h>
#include <asm/control.h>
#include <asm/gic_common.h>
+#include <asm/mmio.h>
#include <asm/platform.h>
#include <asm/psci.h>
#include <asm/sysregs.h>
@@ -23,8 +24,9 @@

void arch_skip_instruction(struct trap_context *ctx)
{
- trace_error(-EINVAL);
- while(1);
+ u32 instruction_length = ESR_IL(ctx->esr);
+
+ ctx->pc += (instruction_length ? 4 : 2);
}

static void dump_regs(struct trap_context *ctx)
@@ -105,6 +107,10 @@ static void arch_handle_trap(struct per_cpu *cpu_data,

/* exception class */
switch (ESR_EC(ctx.esr)) {
+ case ESR_EC_DABT_LOW:
+ ret = arch_handle_dabt(&ctx);
+ break;
+
default:
ret = TRAP_UNHANDLED;
}
--
2.4.3.368.g7974889


antonios...@huawei.com

unread,
Dec 18, 2015, 4:33:41 PM12/18/15
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

We currently support 3 levels of page tables for a 39-bit PA range
on ARM. This patch implements support for 4-level page tables on
AArch64, for PA ranges from 40 to 48 bits. This will allow Jailhouse
to be used on more targets.
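
(As a side note, a sketch of the relationship encoded by the new T0SZ_CELL,
SL0_CELL and ARM_CELL_ROOT_PT_SZ macros; this only restates the macros added
below, it is not additional code:)

        t0sz = 64 - cpu_parange;
        if (cpu_parange >= 44) {
                sl0 = SL0_L0;           /* stage 2 walk starts at level 0 */
                root_pt_pages = 1;
        } else {
                sl0 = SL0_L1;           /* start at level 1 ... */
                root_pt_pages = 1 << (cpu_parange - 39); /* ... concatenated tables */
        }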

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm/include/asm/paging.h | 17 ++++-
hypervisor/arch/arm/include/asm/paging_modes.h | 1 +
hypervisor/arch/arm/mmu_cell.c | 16 ++++-
hypervisor/arch/arm/paging.c | 81 +++++++++++++++++++++
hypervisor/arch/arm64/entry.S | 38 ++++++++--
hypervisor/arch/arm64/include/asm/paging.h | 99 ++++++++++++++++++++------
6 files changed, 216 insertions(+), 36 deletions(-)

diff --git a/hypervisor/arch/arm/include/asm/paging.h b/hypervisor/arch/arm/include/asm/paging.h
index 6d54b71..1fe4d18 100644
--- a/hypervisor/arch/arm/include/asm/paging.h
+++ b/hypervisor/arch/arm/include/asm/paging.h
@@ -30,11 +30,13 @@
* by IPA[20:12].
* This would allows to cover a 4GB memory map by using 4 concatenated level-2
* page tables and thus provide better table walk performances.
- * For the moment, the core doesn't allow to use concatenated pages, so we will
- * use three levels instead, starting at level 1.
+ * For the moment, we will implement the first level for AArch32 using only
+ * one level.
*
- * TODO: add a "u32 concatenated" field to the paging struct
+ * TODO: implement larger PARange support for AArch32
*/
+#define ARM_CELL_ROOT_PT_SZ 1
+
#if MAX_PAGE_TABLE_LEVELS < 3
#define T0SZ 0
#define SL0 0
@@ -169,6 +171,15 @@

typedef u64 *pt_entry_t;

+extern unsigned int cpu_parange;
+
+/* cpu_parange initialized in arch_paging_init */
+static inline unsigned int get_cpu_parange(void)
+{
+ /* TODO: implement proper PARange support on AArch32 */
+ return 39;
+}
+
/* Only executed on hypervisor paging struct changes */
static inline void arch_paging_flush_page_tlbs(unsigned long page_addr)
{
diff --git a/hypervisor/arch/arm/include/asm/paging_modes.h b/hypervisor/arch/arm/include/asm/paging_modes.h
index 72950eb..391845c 100644
--- a/hypervisor/arch/arm/include/asm/paging_modes.h
+++ b/hypervisor/arch/arm/include/asm/paging_modes.h
@@ -16,6 +16,7 @@

/* Long-descriptor paging */
extern const struct paging arm_paging[];
+extern const struct paging *cell_paging;

#define hv_paging arm_paging

diff --git a/hypervisor/arch/arm/mmu_cell.c b/hypervisor/arch/arm/mmu_cell.c
index e9d2044..8b543d0 100644
--- a/hypervisor/arch/arm/mmu_cell.c
+++ b/hypervisor/arch/arm/mmu_cell.c
@@ -57,8 +57,13 @@ unsigned long arch_paging_gphys2phys(struct per_cpu *cpu_data,

int arch_mmu_cell_init(struct cell *cell)
{
- cell->arch.mm.root_paging = hv_paging;
- cell->arch.mm.root_table = page_alloc(&mem_pool, 1);
+ if (get_cpu_parange() < 39)
+ return trace_error(-EINVAL);
+
+ cell->arch.mm.root_paging = cell_paging;
+ cell->arch.mm.root_table =
+ page_alloc_aligned(&mem_pool, ARM_CELL_ROOT_PT_SZ);
+
if (!cell->arch.mm.root_table)
return -ENOMEM;

@@ -67,7 +72,7 @@ int arch_mmu_cell_init(struct cell *cell)

void arch_mmu_cell_destroy(struct cell *cell)
{
- page_free(&mem_pool, cell->arch.mm.root_table, 1);
+ page_free(&mem_pool, cell->arch.mm.root_table, ARM_CELL_ROOT_PT_SZ);
}

int arch_mmu_cpu_cell_init(struct per_cpu *cpu_data)
@@ -77,6 +82,11 @@ int arch_mmu_cpu_cell_init(struct per_cpu *cpu_data)
u64 vttbr = 0;
u32 vtcr = VTCR_CELL;

+ /* We share page tables between CPUs, so we need to check
+ * that all CPUs support the same PARange. */
+ if (cpu_parange != get_cpu_parange())
+ return trace_error(-EINVAL);
+
if (cell->id > 0xff) {
panic_printk("No cell ID available\n");
return -E2BIG;
diff --git a/hypervisor/arch/arm/paging.c b/hypervisor/arch/arm/paging.c
index 8fdd034..93b3ba4 100644
--- a/hypervisor/arch/arm/paging.c
+++ b/hypervisor/arch/arm/paging.c
@@ -12,6 +12,8 @@

#include <jailhouse/paging.h>

+unsigned int cpu_parange = 0;
+
static bool arm_entry_valid(pt_entry_t entry, unsigned long flags)
{
// FIXME: validate flags!
@@ -40,6 +42,20 @@ static bool arm_page_table_empty(page_table_t page_table)
return true;
}

+#if MAX_PAGE_TABLE_LEVELS > 3
+static pt_entry_t arm_get_l0_entry(page_table_t page_table, unsigned long virt)
+{
+ return &page_table[(virt & L0_VADDR_MASK) >> 39];
+}
+
+static unsigned long arm_get_l0_phys(pt_entry_t pte, unsigned long virt)
+{
+ if ((*pte & PTE_TABLE_FLAGS) == PTE_TABLE_FLAGS)
+ return INVALID_PHYS_ADDR;
+ return (*pte & PTE_L0_BLOCK_ADDR_MASK) | (virt & BLOCK_512G_VADDR_MASK);
+}
+#endif
+
#if MAX_PAGE_TABLE_LEVELS > 2
static pt_entry_t arm_get_l1_entry(page_table_t page_table, unsigned long virt)
{
@@ -59,6 +75,18 @@ static unsigned long arm_get_l1_phys(pt_entry_t pte, unsigned long virt)
}
#endif

+static pt_entry_t arm_get_l1_alt_entry(page_table_t page_table, unsigned long virt)
+{
+ return &page_table[(virt & BIT_MASK(48,30)) >> 30];
+}
+
+static unsigned long arm_get_l1_alt_phys(pt_entry_t pte, unsigned long virt)
+{
+ if ((*pte & PTE_TABLE_FLAGS) == PTE_TABLE_FLAGS)
+ return INVALID_PHYS_ADDR;
+ return (*pte & BIT_MASK(48,30)) | (virt & BIT_MASK(29,0));
+}
+
static pt_entry_t arm_get_l2_entry(page_table_t page_table, unsigned long virt)
{
return &page_table[(virt & L2_VADDR_MASK) >> 21];
@@ -110,6 +138,18 @@ static unsigned long arm_get_l3_phys(pt_entry_t pte, unsigned long virt)
.page_table_empty = arm_page_table_empty,

const struct paging arm_paging[] = {
+#if MAX_PAGE_TABLE_LEVELS > 3
+ {
+ ARM_PAGING_COMMON
+ /* No block entries for level 0! */
+ .page_size = 0,
+ .get_entry = arm_get_l0_entry,
+ .get_phys = arm_get_l0_phys,
+
+ .set_next_pt = arm_set_l12_table,
+ .get_next_pt = arm_get_l12_table,
+ },
+#endif
#if MAX_PAGE_TABLE_LEVELS > 2
{
ARM_PAGING_COMMON
@@ -144,6 +184,47 @@ const struct paging arm_paging[] = {
}
};

+const struct paging arm_s2_paging_alt[] = {
+ {
+ ARM_PAGING_COMMON
+ .page_size = 0,
+ .get_entry = arm_get_l1_alt_entry,
+ .get_phys = arm_get_l1_alt_phys,
+
+ .set_next_pt = arm_set_l12_table,
+ .get_next_pt = arm_get_l12_table,
+ },
+ {
+ ARM_PAGING_COMMON
+ /* Block entry: 2MB */
+ .page_size = 2 * 1024 * 1024,
+ .get_entry = arm_get_l2_entry,
+ .set_terminal = arm_set_l2_block,
+ .get_phys = arm_get_l2_phys,
+
+ .set_next_pt = arm_set_l12_table,
+ .get_next_pt = arm_get_l12_table,
+ },
+ {
+ ARM_PAGING_COMMON
+ /* Page entry: 4kB */
+ .page_size = 4 * 1024,
+ .get_entry = arm_get_l3_entry,
+ .set_terminal = arm_set_l3_page,
+ .get_phys = arm_get_l3_phys,
+ }
+};
+
+const struct paging *cell_paging;
+
void arch_paging_init(void)
{
+ cpu_parange = get_cpu_parange();
+
+ if (cpu_parange < 44)
+ /* 4 level page tables not supported for stage 2.
+ * We need to use multiple consecutive pages for L1 */
+ cell_paging = arm_s2_paging_alt;
+ else
+ cell_paging = arm_paging;
}
diff --git a/hypervisor/arch/arm64/entry.S b/hypervisor/arch/arm64/entry.S
index 9f221b9..9f24063 100644
--- a/hypervisor/arch/arm64/entry.S
+++ b/hypervisor/arch/arm64/entry.S
@@ -66,7 +66,7 @@ el2_entry:
msr vbar_el2, x1

/* enable temporary MMU mappings for early initialization */
- ldr x0, =bootstrap_pt_l1
+ ldr x0, =bootstrap_pt_l0
bl enable_mmu_el2

mov x0, x17 /* preserved cpuid, will be passed to entry */
@@ -122,11 +122,11 @@ enable_mmu_el2:
* We should think what hw configuration we support by one instance of
* the hypervisor and choose an Inner or Outer Shareable domain.
*/
- ldr x1, =(T0SZ | (TCR_RGN_WB_WA << TCR_IRGN0_SHIFT) \
- | (TCR_RGN_WB_WA << TCR_ORGN0_SHIFT) \
- | (TCR_INNER_SHAREABLE << TCR_SH0_SHIFT) \
- | (TCR_PS_40B << TCR_PS_SHIFT) \
- | TCR_EL2_RES1)
+ ldr x1, =(T0SZ(48) | (TCR_RGN_WB_WA << TCR_IRGN0_SHIFT) \
+ | (TCR_RGN_WB_WA << TCR_ORGN0_SHIFT) \
+ | (TCR_INNER_SHAREABLE << TCR_SH0_SHIFT)\
+ | (TCR_PS_48B << TCR_PS_SHIFT) \
+ | TCR_EL2_RES1)
msr tcr_el2, x1

msr ttbr0_el2, x0
@@ -149,8 +149,32 @@ enable_mmu_el2:
* we set up the final EL2 page tables.
*/
.align 12
-bootstrap_pt_l1:
+bootstrap_pt_l0:
addr = 0
+ blk_sz = 1 << 39
+ .rept 512
+ .if (addr >> 39) == (UART_BASE >> 39)
+ .quad bootstrap_pt_l1_uart + PTE_TABLE_FLAGS
+ .else
+ .if (addr >> 39) == (JAILHOUSE_BASE >> 39)
+ .quad bootstrap_pt_l1 + PTE_TABLE_FLAGS
+ .else
+ .quad 0
+ .endif
+ .endif
+ addr = addr + blk_sz
+ .endr
+bootstrap_pt_l1:
+#if (JAILHOUSE_BASE >> 39) != (UART_BASE >> 39)
+ addr = JAILHOUSE_BASE & ~((1 << 39) - 1)
+ blk_sz = 1 << 30
+ .rept 512
+ .quad addr | PAGE_DEFAULT_FLAGS
+ addr = addr + blk_sz
+ .endr
+#endif
+bootstrap_pt_l1_uart:
+ addr = UART_BASE & ~((1 << 39) - 1)
blk_sz = 1 << 30
.rept 512
.if (addr ^ UART_BASE) >> 30
diff --git a/hypervisor/arch/arm64/include/asm/paging.h b/hypervisor/arch/arm64/include/asm/paging.h
index deb5733..1b5c2fd 100644
--- a/hypervisor/arch/arm64/include/asm/paging.h
+++ b/hypervisor/arch/arm64/include/asm/paging.h
@@ -25,13 +25,9 @@
* native page size. AArch64 also supports 4 levels of page tables, numbered
* L0-3, while AArch32 supports only 3 levels numbered L1-3.
*
- * Otherwise, the page table format is identical. By setting the TCR registers
- * appropriately, for 4Kb page tables and starting address translations from
- * level 1, we can use the same page tables and page table generation code that
- * we use on AArch64.
- *
- * This gives us 39 addressable bits for the moment.
- * AARCH64_TODO: implement 4 level page tables, different granule sizes.
+ * We currently only implement a 4KB granule size for the page tables.
+ * We support physical address ranges from 40 to 48 bits. We don't
+ * currently handle platforms with 32 or 36 bit physical address ranges.
*/

#define PAGE_SHIFT 12
@@ -39,10 +35,9 @@
#define PAGE_MASK ~(PAGE_SIZE - 1)
#define PAGE_OFFS_MASK (PAGE_SIZE - 1)

-#define MAX_PAGE_TABLE_LEVELS 3
+#define MAX_PAGE_TABLE_LEVELS 4

-#define T0SZ (64 - 39)
-#define SL0 01
+#define L0_VADDR_MASK BIT_MASK(47, 39)
#define L1_VADDR_MASK BIT_MASK(38, 30)
#define L2_VADDR_MASK BIT_MASK(29, 21)

@@ -83,11 +78,13 @@
*/
#define PTE_TABLE_FLAGS 0x3

-#define PTE_L1_BLOCK_ADDR_MASK BIT_MASK(39, 30)
-#define PTE_L2_BLOCK_ADDR_MASK BIT_MASK(39, 21)
-#define PTE_TABLE_ADDR_MASK BIT_MASK(39, 12)
-#define PTE_PAGE_ADDR_MASK BIT_MASK(39, 12)
+#define PTE_L0_BLOCK_ADDR_MASK BIT_MASK(47, 39)
+#define PTE_L1_BLOCK_ADDR_MASK BIT_MASK(47, 30)
+#define PTE_L2_BLOCK_ADDR_MASK BIT_MASK(47, 21)
+#define PTE_TABLE_ADDR_MASK BIT_MASK(47, 12)
+#define PTE_PAGE_ADDR_MASK BIT_MASK(47, 12)

+#define BLOCK_512G_VADDR_MASK BIT_MASK(38, 0)
#define BLOCK_1G_VADDR_MASK BIT_MASK(29, 0)
#define BLOCK_2M_VADDR_MASK BIT_MASK(20, 0)

@@ -103,7 +100,16 @@

#define TCR_EL2_RES1 ((1 << 31) | (1 << 23))
#define VTCR_RES1 ((1 << 31))
+#define T0SZ(parange) (64 - parange)
+#define SL0_L0 2
+#define SL0_L1 1
+#define SL0_L2 0
+#define TCR_PS_32B 0x0
+#define TCR_PS_36B 0x1
#define TCR_PS_40B 0x2
+#define TCR_PS_42B 0x3
+#define TCR_PS_44B 0x4
+#define TCR_PS_48B 0x5
#define TCR_RGN_NON_CACHEABLE 0x0
#define TCR_RGN_WB_WA 0x1
#define TCR_RGN_WT 0x2
@@ -119,15 +125,6 @@
#define TCR_SL0_SHIFT 6
#define TCR_S_SHIFT 4

-/* AARCH64_TODO: we statically assume a 40 bit address space. Need to fix this,
- * along with the support for the 0th level page table available in AArch64 */
-#define VTCR_CELL (T0SZ | SL0 << TCR_SL0_SHIFT \
- | (TCR_RGN_WB_WA << TCR_IRGN0_SHIFT) \
- | (TCR_RGN_WB_WA << TCR_ORGN0_SHIFT) \
- | (TCR_INNER_SHAREABLE << TCR_SH0_SHIFT)\
- | (TCR_PS_40B << TCR_PS_SHIFT) \
- | VTCR_RES1)
-
/*
* Hypervisor memory attribute indexes:
* 0: normal WB, RA, WA, non-transient
@@ -173,6 +170,62 @@

typedef u64 *pt_entry_t;

+extern unsigned int cpu_parange;
+
+/* cpu_parange initialized in arch_paging_init */
+static inline unsigned int get_cpu_parange(void)
+{
+ unsigned long id_aa64mmfr0;
+
+ arm_read_sysreg(ID_AA64MMFR0_EL1, id_aa64mmfr0);
+
+ switch (id_aa64mmfr0 & 0xf) {
+ case TCR_PS_32B:
+ return 32;
+ case TCR_PS_36B:
+ return 36;
+ case TCR_PS_40B:
+ return 40;
+ case TCR_PS_42B:
+ return 42;
+ case TCR_PS_44B:
+ return 44;
+ case TCR_PS_48B:
+ return 48;
+ default:
+ return 0;
+ }
+}
+
+/* The value of cpu_parange determines from which level the S2
+ * translations can start, and the size of the first level
+ * page table */
+#define T0SZ_CELL T0SZ(cpu_parange)
+#define SL0_CELL ((cpu_parange >= 44) ? SL0_L0 : SL0_L1)
+#define ARM_CELL_ROOT_PT_SZ ((cpu_parange >= 44) ? 1 : \
+ (1 << (cpu_parange - 39)))
+
+/* Just match the host's PARange */
+#define TCR_PS_CELL \
+ ({ unsigned int ret = 0; \
+ switch (cpu_parange) { \
+ case 32: ret = TCR_PS_32B; break; \
+ case 36: ret = TCR_PS_36B; break; \
+ case 40: ret = TCR_PS_40B; break; \
+ case 42: ret = TCR_PS_42B; break; \
+ case 44: ret = TCR_PS_44B; break; \
+ case 48: ret = TCR_PS_48B; break; \
+ } \
+ ret; })
+
+#define VTCR_CELL (T0SZ_CELL | (SL0_CELL << TCR_SL0_SHIFT)\
+ | (TCR_RGN_WB_WA << TCR_IRGN0_SHIFT) \
+ | (TCR_RGN_WB_WA << TCR_ORGN0_SHIFT) \
+ | (TCR_INNER_SHAREABLE << TCR_SH0_SHIFT)\
+ | (TCR_PS_CELL << TCR_PS_SHIFT) \
+ | VTCR_RES1)
+
+
/* Only executed on hypervisor paging struct changes */
static inline void arch_paging_flush_page_tlbs(unsigned long page_addr)
{
--
2.4.3.368.g7974889


antonios...@huawei.com

unread,
Dec 18, 2015, 4:33:41 PM12/18/15
to jailho...@googlegroups.com, Dmitry Voytik, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Dmitry Voytik <dmitry...@huawei.com>

Dump the stack in the following cases:
* exceptions in EL2, where we can determine the stack size;
* unhandled exceptions in EL1/0, where we can't determine the stack
size and therefore just print 512 bytes.

For EL2 exceptions the debug output will be like this:

FATAL: Unhandled HYP exception: synchronous abort from EL2
pc: 00000000fc00469c lr: 00000000fc004688 spsr: 200003c9 EL2
sp: 00000000fc015e30 esr: 25 1 0000044
x0: ffffffff00000000 x1: 0000000000000001 x2: 00000000fc00bd14
x3: ffffff80ffffffc8 x4: 00000000fc010000 x5: 0000000000000004
x6: ffffffc000afe000 x7: 00000000ffffe188 x8: 0000000000005d25
x9: 0000000000000001 x10: ffffffc035766a40 x11: ffffffbdc2d23f80
x12: 0000000000000862 x13: 0000007f92bd7cb0 x14: 0000007f92a67bc8
x15: 0000000000005798 x16: ffffffc0000a2794 x17: 0000000000412288
x18: 0000000000000000 x19: 0000000001930047 x20: 0000000000000004
x21: 0000000000000001 x22: 0000000000000001 x23: 00000000fc015eb8
x24: 00000000000001c0 x25: 0000000000000000 x26: ffffffc000afe6d8
x27: ffffffc035470000 x28: ffffffc034e08000 x29: 00000000fc015e30

Hypervisor stack before exception (0x00000000fc015e30 - 0x00000000fc016000):
5e20: fc015e90 00000000 fc00a298 00000000
5e40: fc015f00 00000000 fc015000 00000000 00559cb8 ffffffc0 00b69000 ffffffc0
5e60: 00b00000 ffffffc0 000001c0 00000000 0000001e 00000000 fc015cf5 00000000
5e80: fc015e90 00000000 0000001e 001e0100 fc015ee0 00000000 fc00a3b0 00000000
5ea0: fc015f00 00000000 fc00b478 00000000 fc015ee0 00000000 fc015f08 00000000
5ec0: 93930047 00000000 200001c5 00000000 00310820 ffffffc0 34e0bbc0 ffffffc0
5ee0: 00000000 00000000 fc009c54 00000000 00040000 00000000 00b00ea0 ffffffc0
5f00: 00000001 00000000 00000f00 ffffff80 00000004 00000000 00000040 00000000
5f20: 00b00ee0 ffffffc0 00559cc0 ffffffc0 00000004 00000000 00afe000 ffffffc0
5f40: ffffe188 00000000 00005d25 00000000 00000001 00000000 35766a40 ffffffc0
5f60: c2d23f80 ffffffbd 00000862 00000000 92bd7cb0 0000007f 92a67bc8 0000007f
5f80: 00005798 00000000 000a2794 ffffffc0 00412288 00000000 00000000 00000000
5fa0: 00040000 00000000 00b00ea0 ffffffc0 00559cb8 ffffffc0 00b69000 ffffffc0
5fc0: 00b00000 ffffffc0 000001c0 00000000 00000000 00000000 00afe6d8 ffffffc0
5fe0: 35470000 ffffffc0 34e08000 ffffffc0 34e0bbc0 ffffffc0 003107fc ffffffc0
6000: 8008a800

Signed-off-by: Dmitry Voytik <dmitry...@huawei.com>
---
hypervisor/arch/arm64/traps.c | 38 +++++++++++++++++++++++++++++++++++++-
1 file changed, 37 insertions(+), 1 deletion(-)

diff --git a/hypervisor/arch/arm64/traps.c b/hypervisor/arch/arm64/traps.c
index b27bb2e..199b497 100644
--- a/hypervisor/arch/arm64/traps.c
+++ b/hypervisor/arch/arm64/traps.c
@@ -42,6 +42,38 @@ static void dump_regs(struct trap_context *ctx)
panic_printk("\n");
}

+/* TODO: move this function to an arch-independent code if other architectures
+ * will need it.
+ */
+static void dump_mem(unsigned long start, unsigned long stop)
+{
+ unsigned long caddr = start & ~0x1f;
+
+ if (stop <= start)
+ return;
+ printk("(0x%016lx - 0x%016lx):", start, stop);
+ for (;;) {
+ printk("\n%04lx: ", caddr & 0xffe0);
+ do {
+ if (caddr >= start)
+ printk("%08x ", *(unsigned int *)caddr);
+ else
+ printk(" ", *(unsigned int *)caddr);
+ caddr += 4;
+ } while ((caddr & 0x1f) && caddr < stop);
+ if (caddr >= stop)
+ break;
+ }
+ printk("\n");
+}
+
+static void dump_hyp_stack(const struct trap_context *ctx)
+{
+ panic_printk("Hypervisor stack before exception ");
+ dump_mem(ctx->sp, (unsigned long)this_cpu_data()->stack +
+ PERCPU_STACK_END);
+}
+
static void fill_trap_context(struct trap_context *ctx, struct registers *regs)
{
arm_read_sysreg(ELR_EL2, ctx->pc);
@@ -52,7 +84,10 @@ static void fill_trap_context(struct trap_context *ctx, struct registers *regs)
case 1:
arm_read_sysreg(SP_EL1, ctx->sp); break;
case 2:
- arm_read_sysreg(SP_EL2, ctx->sp); break;
+ /* SP_EL2 is not accessible in EL2. To obtain the SP value before
+ * the exception we can use the address of the *regs parameter, which
+ * is located on the stack (see handle_vmexit in exception.S) */
+ ctx->sp = (u64)(regs) + 16 * 16; break;
default:
ctx->sp = 0; break; /* should never happen */
}
@@ -93,6 +128,7 @@ static void arch_dump_exit(struct registers *regs, const char *reason)
fill_trap_context(&ctx, regs);
panic_printk("\nFATAL: Unhandled HYP exception: %s\n", reason);
dump_regs(&ctx);
+ dump_hyp_stack(&ctx);
}

struct registers *arch_handle_exit(struct per_cpu *cpu_data,
--
2.4.3.368.g7974889
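
A short sketch of why the pre-exception SP can be reconstructed as
(u64)regs + 16 * 16: handle_vmexit pushes sixteen 16-byte pairs, i.e.
exactly one struct registers. The field names below follow their usage in
this series (regs->exit_reason, regs->usr[0], regs->usr[30]); treat the
definition as an illustration, not the actual header.

#include <stdint.h>
#include <stdio.h>

#define NUM_USR_REGS	31	/* x0..x30 */

struct registers {
	uint64_t exit_reason;		/* pushed last: stp x1(reason), x0 */
	uint64_t usr[NUM_USR_REGS];	/* x0..x30 */
};

_Static_assert(sizeof(struct registers) == 16 * 16,
	       "16 stp pairs of 16 bytes each");

int main(void)
{
	uint64_t frame[32] = { 0 };
	struct registers *regs = (struct registers *)frame;

	/* the stack pointer before the exception sits right above the frame */
	printf("regs at %p, pre-exception SP at %p\n",
	       (void *)regs, (void *)((uint64_t *)regs + 32));
	return 0;
}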


antonios...@huawei.com

Dec 18, 2015, 4:33:42 PM12/18/15
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

Catch exceptions on the AArch64 target of Jailhouse, including aborts
from EL2 that might be caused by the hypervisor itself.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
Signed-off-by: Dmitry Voytik <dmitry...@huawei.com>
---
hypervisor/arch/arm64/Makefile | 1 +
hypervisor/arch/arm64/entry.S | 4 +
hypervisor/arch/arm64/exception.S | 96 ++++++++++++++++++++++++
hypervisor/arch/arm64/include/asm/traps.h | 35 +++++++++
hypervisor/arch/arm64/traps.c | 119 ++++++++++++++++++++++++++++++
5 files changed, 255 insertions(+)
create mode 100644 hypervisor/arch/arm64/exception.S
create mode 100644 hypervisor/arch/arm64/include/asm/traps.h
create mode 100644 hypervisor/arch/arm64/traps.c

diff --git a/hypervisor/arch/arm64/Makefile b/hypervisor/arch/arm64/Makefile
index fbb36df..1959918 100644
--- a/hypervisor/arch/arm64/Makefile
+++ b/hypervisor/arch/arm64/Makefile
@@ -18,5 +18,6 @@ always := built-in.o

obj-y := entry.o setup.o control.o mmio.o
obj-y += ../arm/mmu_cell.o ../arm/paging.o ../arm/dbg-write.o ../arm/lib.o
+obj-y += exception.o traps.o

obj-$(CONFIG_SERIAL_AMBA_PL011) += ../arm/dbg-write-pl011.o
diff --git a/hypervisor/arch/arm64/entry.S b/hypervisor/arch/arm64/entry.S
index 9b9976f..9f221b9 100644
--- a/hypervisor/arch/arm64/entry.S
+++ b/hypervisor/arch/arm64/entry.S
@@ -61,6 +61,10 @@ el2_entry:
cmp x1, #0x16
b.ne . /* not hvc */

+ /* install jailhouse vectors */
+ ldr x1, =hyp_vectors
+ msr vbar_el2, x1
+
+ /* enable temporary mmu mappings for early initialization */
ldr x0, =bootstrap_pt_l1
bl enable_mmu_el2
diff --git a/hypervisor/arch/arm64/exception.S b/hypervisor/arch/arm64/exception.S
new file mode 100644
index 0000000..a98d134
--- /dev/null
+++ b/hypervisor/arch/arm64/exception.S
@@ -0,0 +1,96 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <asm/head.h>
+#include <asm/processor.h>
+#include <asm/sysregs.h>
+
+.macro ventry label
+ .align 7
+ b \label
+.endm
+
+.macro handle_vmexit exit_reason
+ .align 7
+ /* Fill the struct registers. Should comply with NUM_USR_REGS */
+ stp x29, x30, [sp, #-16]!
+ stp x27, x28, [sp, #-16]!
+ stp x25, x26, [sp, #-16]!
+ stp x23, x24, [sp, #-16]!
+ stp x21, x22, [sp, #-16]!
+ stp x19, x20, [sp, #-16]!
+ stp x17, x18, [sp, #-16]!
+ stp x15, x16, [sp, #-16]!
+ stp x13, x14, [sp, #-16]!
+ stp x11, x12, [sp, #-16]!
+ stp x9, x10, [sp, #-16]!
+ stp x7, x8, [sp, #-16]!
+ stp x5, x6, [sp, #-16]!
+ stp x3, x4, [sp, #-16]!
+ stp x1, x2, [sp, #-16]!
+
+ mov x1, #\exit_reason
+ stp x1, x0, [sp, #-16]!
+
+ mov x29, xzr
+ mov x30, xzr
+ mrs x0, tpidr_el2
+ mov x1, sp
+ bl arch_handle_exit
+ b .
+.endm
+
+ .text
+ .globl hyp_vectors
+ .align 11
+hyp_vectors:
+ ventry .
+ ventry .
+ ventry .
+ ventry .
+
+ handle_vmexit EXIT_REASON_EL2_ABORT
+ ventry .
+ ventry .
+ ventry .
+
+ handle_vmexit EXIT_REASON_EL1_ABORT
+ handle_vmexit EXIT_REASON_EL1_IRQ
+ ventry .
+ ventry .
+
+ ventry .
+ ventry .
+ ventry .
+ ventry .
+
+ .globl vmreturn
+vmreturn:
+ /* x0: struct registers* */
+ mov sp, x0
+ ldp x1, x0, [sp], #16 /* x1 is the exit_reason */
+ ldp x1, x2, [sp], #16
+ ldp x3, x4, [sp], #16
+ ldp x5, x6, [sp], #16
+ ldp x7, x8, [sp], #16
+ ldp x9, x10, [sp], #16
+ ldp x11, x12, [sp], #16
+ ldp x13, x14, [sp], #16
+ ldp x15, x16, [sp], #16
+ ldp x17, x18, [sp], #16
+ ldp x19, x20, [sp], #16
+ ldp x21, x22, [sp], #16
+ ldp x23, x24, [sp], #16
+ ldp x25, x26, [sp], #16
+ ldp x27, x28, [sp], #16
+ ldp x29, x30, [sp], #16
+ eret
diff --git a/hypervisor/arch/arm64/include/asm/traps.h b/hypervisor/arch/arm64/include/asm/traps.h
new file mode 100644
index 0000000..3a60e30
--- /dev/null
+++ b/hypervisor/arch/arm64/include/asm/traps.h
@@ -0,0 +1,35 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef _JAILHOUSE_ASM_TRAPS_H
+#define _JAILHOUSE_ASM_TRAPS_H
+
+#ifndef __ASSEMBLY__
+
+enum trap_return {
+ TRAP_HANDLED = 1,
+ TRAP_UNHANDLED = 0,
+ TRAP_FORBIDDEN = -1,
+};
+
+struct trap_context {
+ unsigned long *regs;
+ u64 esr;
+ u64 spsr;
+ u64 pc;
+ u64 sp;
+};
+
+void arch_skip_instruction(struct trap_context *ctx);
+
+#endif /* !__ASSEMBLY__ */
+#endif /* !_JAILHOUSE_ASM_TRAPS_H */
diff --git a/hypervisor/arch/arm64/traps.c b/hypervisor/arch/arm64/traps.c
new file mode 100644
index 0000000..b27bb2e
--- /dev/null
+++ b/hypervisor/arch/arm64/traps.c
@@ -0,0 +1,119 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ * Dmitry Voytik <dmitry...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <jailhouse/control.h>
+#include <jailhouse/printk.h>
+#include <asm/control.h>
+#include <asm/gic_common.h>
+#include <asm/platform.h>
+#include <asm/psci.h>
+#include <asm/sysregs.h>
+#include <asm/traps.h>
+#include <asm/processor.h>
+
+void arch_skip_instruction(struct trap_context *ctx)
+{
+ trace_error(-EINVAL);
+ while(1);
+}
+
+static void dump_regs(struct trap_context *ctx)
+{
+ unsigned char i;
+
+ panic_printk(" pc: %016lx lr: %016lx spsr: %08lx EL%1d\n"
+ " sp: %016lx esr: %02x %01x %07lx\n",
+ ctx->pc, ctx->regs[30], ctx->spsr, SPSR_EL(ctx->spsr),
+ ctx->sp, ESR_EC(ctx->esr), ESR_IL(ctx->esr),
+ ESR_ISS(ctx->esr));
+ for (i = 0; i < NUM_USR_REGS - 1; i++)
+ panic_printk("%sx%d: %016lx%s", i < 10 ? " " : "", i,
+ ctx->regs[i], i % 3 == 2 ? "\n" : " ");
+ panic_printk("\n");
+}
+
+static void fill_trap_context(struct trap_context *ctx, struct registers *regs)
+{
+ arm_read_sysreg(ELR_EL2, ctx->pc);
+ arm_read_sysreg(SPSR_EL2, ctx->spsr);
+ switch (SPSR_EL(ctx->spsr)) { /* exception level */
+ case 0:
+ arm_read_sysreg(SP_EL0, ctx->sp); break;
+ case 1:
+ arm_read_sysreg(SP_EL1, ctx->sp); break;
+ case 2:
+ arm_read_sysreg(SP_EL2, ctx->sp); break;
+ default:
+ ctx->sp = 0; break; /* should never happen */
+ }
+ arm_read_sysreg(ESR_EL2, ctx->esr);
+ ctx->regs = regs->usr;
+}
+
+static void arch_handle_trap(struct per_cpu *cpu_data,
+ struct registers *guest_regs)
+{
+ struct trap_context ctx;
+ int ret;
+
+ fill_trap_context(&ctx, guest_regs);
+
+ /* exception class */
+ switch (ESR_EC(ctx.esr)) {
+ default:
+ ret = TRAP_UNHANDLED;
+ }
+
+ if (ret == TRAP_UNHANDLED || ret == TRAP_FORBIDDEN) {
+ panic_printk("\nFATAL: exception %s\n", (ret == TRAP_UNHANDLED ?
+ "unhandled trap" :
+ "forbidden access"));
+ panic_printk("Cell state before exception:\n");
+ dump_regs(&ctx);
+ panic_park();
+ }
+
+ arm_write_sysreg(ELR_EL2, ctx.pc);
+}
+
+static void arch_dump_exit(struct registers *regs, const char *reason)
+{
+ struct trap_context ctx;
+
+ fill_trap_context(&ctx, regs);
+ panic_printk("\nFATAL: Unhandled HYP exception: %s\n", reason);
+ dump_regs(&ctx);
+}
+
+struct registers *arch_handle_exit(struct per_cpu *cpu_data,
+ struct registers *regs)
+{
+ cpu_data->stats[JAILHOUSE_CPU_STAT_VMEXITS_TOTAL]++;
+
+ switch (regs->exit_reason) {
+ case EXIT_REASON_EL1_ABORT:
+ arch_handle_trap(cpu_data, regs);
+ break;
+
+ case EXIT_REASON_EL2_ABORT:
+ arch_dump_exit(regs, "synchronous abort from EL2");
+ panic_stop();
+ break;
+
+ default:
+ arch_dump_exit(regs, "unexpected");
+ panic_stop();
+ }
+
+ vmreturn(regs);
+}
--
2.4.3.368.g7974889
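
The esr line printed by dump_regs() splits ESR_EL2 into its exception class
(EC), instruction length (IL) and syndrome (ISS) fields. A standalone sketch
of that decoding, using the generic ARMv8 field layout; the macro definitions
below are assumptions about what asm/processor.h provides, not copies of it:

#include <stdint.h>
#include <stdio.h>

#define ESR_EC(esr)	((uint32_t)((esr) >> 26) & 0x3f)
#define ESR_IL(esr)	((uint32_t)((esr) >> 25) & 0x1)
#define ESR_ISS(esr)	((uint64_t)(esr) & 0x1ffffff)

int main(void)
{
	/* value from the sample dump earlier in the thread: "esr: 25 1 0000044" */
	uint64_t esr = 0x96000044;

	printf("EC=%02x IL=%x ISS=%07llx\n", ESR_EC(esr), ESR_IL(esr),
	       (unsigned long long)ESR_ISS(esr));
	/* EC 0x25 is a data abort taken without a change in exception level,
	 * i.e. a synchronous abort from EL2 itself. */
	return 0;
}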


antonios...@huawei.com

Dec 18, 2015, 4:33:42 PM12/18/15
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

Enable the MMU for the hypervisor running in EL2, and add
functions to map device regions into the hypervisor address space.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm64/include/asm/setup.h | 29 +++++++++++++++++++++++++++++
hypervisor/arch/arm64/setup.c | 25 ++++++++++++++++++++++---
2 files changed, 51 insertions(+), 3 deletions(-)
create mode 100644 hypervisor/arch/arm64/include/asm/setup.h

diff --git a/hypervisor/arch/arm64/include/asm/setup.h b/hypervisor/arch/arm64/include/asm/setup.h
new file mode 100644
index 0000000..a2d1930
--- /dev/null
+++ b/hypervisor/arch/arm64/include/asm/setup.h
@@ -0,0 +1,29 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef _JAILHOUSE_ASM_SETUP_H
+#define _JAILHOUSE_ASM_SETUP_H
+
+#include <asm/head.h>
+#include <asm/percpu.h>
+
+#ifndef __ASSEMBLY__
+
+#include <jailhouse/string.h>
+
+void enable_mmu_el2(page_table_t ttbr0_el2);
+
+int arch_map_device(void *paddr, void *vaddr, unsigned long size);
+int arch_unmap_device(void *addr, unsigned long size);
+
+#endif /* !__ASSEMBLY__ */
+#endif /* !_JAILHOUSE_ASM_SETUP_H */
diff --git a/hypervisor/arch/arm64/setup.c b/hypervisor/arch/arm64/setup.c
index ca83940..13e6387 100644
--- a/hypervisor/arch/arm64/setup.c
+++ b/hypervisor/arch/arm64/setup.c
@@ -12,20 +12,25 @@

#include <jailhouse/entry.h>
#include <jailhouse/printk.h>
+#include <asm/control.h>
+#include <asm/setup.h>

int arch_init_early(void)
{
- return trace_error(-EINVAL);
+ return arch_mmu_cell_init(&root_cell);
}

int arch_cpu_init(struct per_cpu *cpu_data)
{
- return trace_error(-EINVAL);
+ /* switch to the permanent page tables */
+ enable_mmu_el2(hv_paging_structs.root_table);
+
+ return arch_mmu_cpu_cell_init(cpu_data);
}

int arch_init_late(void)
{
- return trace_error(-EINVAL);
+ return map_root_memory_regions();
}

void __attribute__((noreturn)) arch_cpu_activate_vmm(struct per_cpu *cpu_data)
@@ -34,6 +39,20 @@ void __attribute__((noreturn)) arch_cpu_activate_vmm(struct per_cpu *cpu_data)
while (1);
}

+int arch_map_device(void *paddr, void *vaddr, unsigned long size)
+{
+ return paging_create(&hv_paging_structs, (unsigned long)paddr, size,
+ (unsigned long)vaddr,
+ PAGE_DEFAULT_FLAGS | S1_PTE_FLAG_DEVICE,
+ PAGING_NON_COHERENT);
+}
+
+int arch_unmap_device(void *vaddr, unsigned long size)
+{
+ return paging_destroy(&hv_paging_structs, (unsigned long)vaddr, size,
+ PAGING_NON_COHERENT);
+}
+
void arch_cpu_restore(struct per_cpu *cpu_data, int return_code)
{
trace_error(-EINVAL);
--
2.4.3.368.g7974889
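
A hypothetical usage sketch of the two helpers added here, roughly how a
debug UART could be mapped into the EL2 address space; the physical address
and size are placeholders, not values from any real cell config:

/* prototypes as declared in asm/setup.h by this patch */
int arch_map_device(void *paddr, void *vaddr, unsigned long size);
int arch_unmap_device(void *vaddr, unsigned long size);

#define EXAMPLE_UART_PHYS	0x1c090000UL	/* placeholder address */
#define EXAMPLE_UART_SIZE	0x1000UL	/* one 4KB page */

static void *example_map_debug_uart(void)
{
	/* with the EL2 identity mapping, virt == phys is a natural choice */
	void *virt = (void *)EXAMPLE_UART_PHYS;

	if (arch_map_device((void *)EXAMPLE_UART_PHYS, virt,
			    EXAMPLE_UART_SIZE) < 0)
		return (void *)0;
	return virt;
}

static void example_unmap_debug_uart(void *virt)
{
	arch_unmap_device(virt, EXAMPLE_UART_SIZE);
}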


antonios...@huawei.com

Dec 18, 2015, 4:33:42 PM12/18/15
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

On AArch64 we pretty much rely on PSCI being present for SMP
support (turning multiple cores on and off). This patch implements
the helpers needed for SMP and plugs in the PSCI code from AArch32.

On AArch64 PSCI calls can be issued via SVC64 hypercalls as well,
contrary to AArch32 which uses SVC32 calls only. We add the changes
necessary to support the hypercalls that are used by a Linux root
cell. As a result, CPU hotplug keeps working after Jailhouse has been enabled.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm/include/asm/psci.h | 2 +-
hypervisor/arch/arm/psci.c | 6 +-
hypervisor/arch/arm64/Makefile | 1 +
hypervisor/arch/arm64/control.c | 98 +++++++++++++++++++++++++++++-
hypervisor/arch/arm64/include/asm/percpu.h | 6 ++
hypervisor/arch/arm64/psci_low.S | 62 +++++++++++++++++++
hypervisor/arch/arm64/setup.c | 4 ++
hypervisor/arch/arm64/traps.c | 34 +++++++++++
8 files changed, 208 insertions(+), 5 deletions(-)
create mode 100644 hypervisor/arch/arm64/psci_low.S

diff --git a/hypervisor/arch/arm/include/asm/psci.h b/hypervisor/arch/arm/include/asm/psci.h
index ba0adac..4e4a6c9 100644
--- a/hypervisor/arch/arm/include/asm/psci.h
+++ b/hypervisor/arch/arm/include/asm/psci.h
@@ -48,7 +48,7 @@

#define IS_PSCI_FN(hvc) ((((hvc) >> 24) | 0x40) == 0xc4)

-#define PSCI_INVALID_ADDRESS 0xffffffff
+#define PSCI_INVALID_ADDRESS (-1)

#ifndef __ASSEMBLY__

diff --git a/hypervisor/arch/arm/psci.c b/hypervisor/arch/arm/psci.c
index 3ebbd50..6da0961 100644
--- a/hypervisor/arch/arm/psci.c
+++ b/hypervisor/arch/arm/psci.c
@@ -78,8 +78,8 @@ int psci_wait_cpu_stopped(unsigned int cpu_id)
static long psci_emulate_cpu_on(struct per_cpu *cpu_data,
struct trap_context *ctx)
{
- unsigned int target = ctx->regs[1];
- unsigned int cpu;
+ unsigned long target = ctx->regs[1];
+ unsigned long cpu;
struct psci_mbox *mbox;

cpu = arm_cpu_virt2phys(cpu_data->cell, target);
@@ -154,10 +154,12 @@ long psci_dispatch(struct trap_context *ctx)
return 0;

case PSCI_CPU_ON_32:
+ case PSCI_CPU_ON_64:
case PSCI_CPU_ON_V0_1_UBOOT:
return psci_emulate_cpu_on(cpu_data, ctx);

case PSCI_AFFINITY_INFO_32:
+ case PSCI_AFFINITY_INFO_64:
return psci_emulate_affinity_info(cpu_data, ctx);

default:
diff --git a/hypervisor/arch/arm64/Makefile b/hypervisor/arch/arm64/Makefile
index 60e572f..5f13642 100644
--- a/hypervisor/arch/arm64/Makefile
+++ b/hypervisor/arch/arm64/Makefile
@@ -20,6 +20,7 @@ obj-y := entry.o setup.o control.o mmio.o
obj-y += ../arm/mmu_cell.o ../arm/paging.o ../arm/dbg-write.o ../arm/lib.o
obj-y += exception.o traps.o
obj-y += ../arm/irqchip.o ../arm/gic-common.o
+obj-y += ../arm/psci.o psci_low.o

obj-$(CONFIG_SERIAL_AMBA_PL011) += ../arm/dbg-write-pl011.o
obj-$(CONFIG_ARM_GIC) += ../arm/gic-v2.o
diff --git a/hypervisor/arch/arm64/control.c b/hypervisor/arch/arm64/control.c
index f47cbe7..a804ae4 100644
--- a/hypervisor/arch/arm64/control.c
+++ b/hypervisor/arch/arm64/control.c
@@ -12,11 +12,98 @@

#include <jailhouse/control.h>
#include <jailhouse/printk.h>
+#include <jailhouse/string.h>
#include <asm/control.h>
#include <asm/irqchip.h>
#include <asm/platform.h>
#include <asm/traps.h>

+static void arch_reset_el1(struct registers *regs)
+{
+ /* put the cpu in a reset state */
+ /* AARCH64_TODO: handle big endian support */
+ arm_write_sysreg(SPSR_EL2, RESET_PSR);
+ arm_write_sysreg(SCTLR_EL1, SCTLR_EL1_RES1);
+ arm_write_sysreg(CNTKCTL_EL1, 0);
+ arm_write_sysreg(PMCR_EL0, 0);
+
+ /* wipe any other state to avoid leaking information across cells */
+ memset(regs, 0, sizeof(struct registers));
+
+ /* AARCH64_TODO: wipe floating point registers */
+
+ /* wipe special registers */
+ arm_write_sysreg(SP_EL0, 0);
+ arm_write_sysreg(SP_EL1, 0);
+ arm_write_sysreg(SPSR_EL1, 0);
+
+ /* wipe the system registers */
+ arm_write_sysreg(AFSR0_EL1, 0);
+ arm_write_sysreg(AFSR1_EL1, 0);
+ arm_write_sysreg(AMAIR_EL1, 0);
+ arm_write_sysreg(CONTEXTIDR_EL1, 0);
+ arm_write_sysreg(CPACR_EL1, 0);
+ arm_write_sysreg(CSSELR_EL1, 0);
+ arm_write_sysreg(ESR_EL1, 0);
+ arm_write_sysreg(FAR_EL1, 0);
+ arm_write_sysreg(MAIR_EL1, 0);
+ arm_write_sysreg(PAR_EL1, 0);
+ arm_write_sysreg(TCR_EL1, 0);
+ arm_write_sysreg(TPIDRRO_EL0, 0);
+ arm_write_sysreg(TPIDR_EL0, 0);
+ arm_write_sysreg(TPIDR_EL1, 0);
+ arm_write_sysreg(TTBR0_EL1, 0);
+ arm_write_sysreg(TTBR1_EL1, 0);
+ arm_write_sysreg(VBAR_EL1, 0);
+
+ /* wipe timer registers */
+ arm_write_sysreg(CNTP_CTL_EL0, 0);
+ arm_write_sysreg(CNTP_CVAL_EL0, 0);
+ arm_write_sysreg(CNTP_TVAL_EL0, 0);
+ arm_write_sysreg(CNTV_CTL_EL0, 0);
+ arm_write_sysreg(CNTV_CVAL_EL0, 0);
+ arm_write_sysreg(CNTV_TVAL_EL0, 0);
+
+ /* AARCH64_TODO: handle PMU registers */
+ /* AARCH64_TODO: handle debug registers */
+ /* AARCH64_TODO: handle system registers for AArch32 state */
+}
+
+void arch_reset_self(struct per_cpu *cpu_data)
+{
+ int err = 0;
+ unsigned long reset_address;
+ struct cell *cell = cpu_data->cell;
+ struct registers *regs = guest_regs(cpu_data);
+
+ if (cell != &root_cell) {
+ trace_error(-EINVAL);
+ panic_stop();
+ }
+
+ /*
+ * Note: D-cache cleaning and I-cache invalidation is done on driver
+ * level after image is loaded.
+ */
+
+ err = irqchip_cpu_reset(cpu_data);
+ if (err)
+ printk("IRQ setup failed\n");
+
+ /* Wait for the driver to call cpu_up */
+ reset_address = psci_emulate_spin(cpu_data);
+
+ /* Set the new MPIDR */
+ arm_write_sysreg(VMPIDR_EL2, cpu_data->virt_id | MPIDR_MP_BIT);
+
+ /* Restore an empty context */
+ arch_reset_el1(regs);
+
+ arm_write_sysreg(ELR_EL2, reset_address);
+
+ vmreturn(regs);
+}
+
int arch_cell_create(struct cell *cell)
{
return trace_error(-EINVAL);
@@ -101,12 +188,19 @@ void arch_handle_sgi(struct per_cpu *cpu_data, u32 irqn)

unsigned int arm_cpu_virt2phys(struct cell *cell, unsigned int virt_id)
{
- return trace_error(-EINVAL);
+ unsigned int cpu;
+
+ for_each_cpu(cpu, cell->cpu_set) {
+ if (per_cpu(cpu)->virt_id == virt_id)
+ return cpu;
+ }
+
+ return -1;
}

unsigned int arm_cpu_phys2virt(unsigned int cpu_id)
{
- return trace_error(-EINVAL);
+ return per_cpu(cpu_id)->virt_id;
}

/*
diff --git a/hypervisor/arch/arm64/include/asm/percpu.h b/hypervisor/arch/arm64/include/asm/percpu.h
index 261c594..dc2c2c7 100644
--- a/hypervisor/arch/arm64/include/asm/percpu.h
+++ b/hypervisor/arch/arm64/include/asm/percpu.h
@@ -25,6 +25,7 @@

#include <jailhouse/printk.h>
#include <asm/cell.h>
+#include <asm/psci.h>
#include <asm/spinlock.h>

struct pending_irq;
@@ -48,6 +49,11 @@ struct per_cpu {
void *gicr_base;

bool flush_vcpu_caches;
+
+ __attribute__((aligned(16))) struct psci_mbox psci_mbox;
+ struct psci_mbox guest_mbox;
+
+ unsigned int virt_id;
} __attribute__((aligned(PAGE_SIZE)));

static inline struct per_cpu *this_cpu_data(void)
diff --git a/hypervisor/arch/arm64/psci_low.S b/hypervisor/arch/arm64/psci_low.S
new file mode 100644
index 0000000..c0b1ccc
--- /dev/null
+++ b/hypervisor/arch/arm64/psci_low.S
@@ -0,0 +1,62 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <asm/head.h>
+#include <asm/psci.h>
+
+ .globl smc
+ /*
+ * Since we trap all SMC instructions, it may be useful to forward them
+ * when it isn't a PSCI call. The shutdown code will also have to issue
+ * a real PSCI_OFF call on secondary CPUs.
+ */
+smc:
+ smc #0
+ ret
+
+ .global _psci_cpu_off
+ /* x0: struct psci_mbox* */
+_psci_cpu_off:
+ ldr x2, =PSCI_INVALID_ADDRESS
+ /* Clear mbox */
+ str x2, [x0]
+
+ /* Wait for a CPU_ON call that updates the mbox */
+1: wfe
+ ldr x3, [x0]
+ cmp x3, #PSCI_INVALID_ADDRESS
+ b.eq 1b
+
+ /* Jump to the requested entry, with a parameter */
+ ldr x0, [x0, #8]
+ br x3
+ ret
+
+ .global _psci_cpu_on
+ /* x0: struct psci_mbox*, x1: entry, x2: context */
+_psci_cpu_on:
+1: ldxp x4, x5, [x0]
+ cmp x4, #PSCI_INVALID_ADDRESS
+ b.ne store_failed
+ stxp w7, x1, x2, [x0]
+ cbnz w7, 1b
+
+ mov x0, #0
+ ret
+
+store_failed:
+ mov x0, #PSCI_INVALID_ADDRESS
+ ret
+
+ .global _psci_suspend_return
+_psci_suspend_return:
+ ret
diff --git a/hypervisor/arch/arm64/setup.c b/hypervisor/arch/arm64/setup.c
index 838c541..e999c02 100644
--- a/hypervisor/arch/arm64/setup.c
+++ b/hypervisor/arch/arm64/setup.c
@@ -15,6 +15,7 @@
#include <asm/control.h>
#include <asm/irqchip.h>
#include <asm/setup.h>
+#include <asm/smp.h>

int arch_init_early(void)
{
@@ -34,6 +35,9 @@ int arch_cpu_init(struct per_cpu *cpu_data)
/* switch to the permanent page tables */
enable_mmu_el2(hv_paging_structs.root_table);

+ cpu_data->psci_mbox.entry = 0;
+ cpu_data->virt_id = cpu_data->cpu_id;
+
err = arch_mmu_cpu_cell_init(cpu_data);
if (err)
return err;
diff --git a/hypervisor/arch/arm64/traps.c b/hypervisor/arch/arm64/traps.c
index fec057d..7a0b24c 100644
--- a/hypervisor/arch/arm64/traps.c
+++ b/hypervisor/arch/arm64/traps.c
@@ -30,6 +30,32 @@ void arch_skip_instruction(struct trap_context *ctx)
ctx->pc += (instruction_length ? 4 : 2);
}

+static int arch_handle_smc(struct trap_context *ctx)
+{
+ unsigned long *regs = ctx->regs;
+
+ if (!IS_PSCI_FN(regs[0]))
+ return TRAP_UNHANDLED;
+
+ regs[0] = psci_dispatch(ctx);
+ arch_skip_instruction(ctx);
+
+ return TRAP_HANDLED;
+}
+
+static int arch_handle_hvc(struct trap_context *ctx)
+{
+ unsigned long *regs = ctx->regs;
+
+ if (!IS_PSCI_FN(regs[0]))
+ return TRAP_UNHANDLED;
+
+ regs[0] = psci_dispatch(ctx);
+ arch_skip_instruction(ctx);
+
+ return TRAP_HANDLED;
+}
+
static void dump_regs(struct trap_context *ctx)
{
unsigned char i;
@@ -112,6 +138,14 @@ static void arch_handle_trap(struct per_cpu *cpu_data,
ret = arch_handle_dabt(&ctx);
break;

+ case ESR_EC_SMC64:
+ ret = arch_handle_smc(&ctx);
+ break;
+
+ case ESR_EC_HVC64:
+ ret = arch_handle_hvc(&ctx);
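
The CPU_ON emulation relies on a small per-CPU mailbox: the parked CPU waits
in _psci_cpu_off until another CPU stores an entry address and context into
the mailbox via _psci_cpu_on. A single-threaded, standalone sketch of that
protocol follows; the struct layout is inferred from psci_low.S (entry at
offset 0, context at offset 8), and the real code updates the pair atomically
with ldxp/stxp and sleeps in wfe instead of polling.

#include <stdint.h>
#include <stdio.h>

#define PSCI_INVALID_ADDRESS	((uint64_t)-1)

struct psci_mbox {
	uint64_t entry;		/* address the woken CPU will branch to */
	uint64_t context;	/* argument handed to that entry point */
};

/* caller side: what psci_emulate_cpu_on() asks _psci_cpu_on to do */
static long mbox_cpu_on(struct psci_mbox *mbox, uint64_t entry,
			uint64_t context)
{
	if (mbox->entry != PSCI_INVALID_ADDRESS)
		return -1;	/* target not parked; the real code bails out too */
	mbox->context = context;
	mbox->entry = entry;	/* psci_low.S stores both words with one stxp */
	return 0;
}

/* target side: what _psci_cpu_off does while the CPU is parked */
static void mbox_wait_and_report(struct psci_mbox *mbox)
{
	while (mbox->entry == PSCI_INVALID_ADDRESS)
		;	/* the real code sleeps in wfe here */
	printf("would branch to %#llx with x0=%#llx\n",
	       (unsigned long long)mbox->entry,
	       (unsigned long long)mbox->context);
}

int main(void)
{
	struct psci_mbox mbox = { .entry = PSCI_INVALID_ADDRESS };

	mbox_cpu_on(&mbox, 0x80080000ull, 0);	/* placeholder entry point */
	mbox_wait_and_report(&mbox);
	return 0;
}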

antonios...@huawei.com

Dec 18, 2015, 4:33:44 PM12/18/15
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

We plug in the irqchip handling code and the GICv2 implementation from
AArch32.

GICv3 is slightly trickier; it makes heavier use of system registers,
so we will need to review more carefully whether the sysreg macros do
the right thing.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm64/Makefile | 2 ++
hypervisor/arch/arm64/control.c | 47 ++++++++++++++++++++++++++++++
hypervisor/arch/arm64/include/asm/cell.h | 2 ++
hypervisor/arch/arm64/include/asm/percpu.h | 10 +++++++
hypervisor/arch/arm64/mmio.c | 4 +--
hypervisor/arch/arm64/setup.c | 24 +++++++++++++--
hypervisor/arch/arm64/traps.c | 5 ++++
7 files changed, 90 insertions(+), 4 deletions(-)

diff --git a/hypervisor/arch/arm64/Makefile b/hypervisor/arch/arm64/Makefile
index 1959918..60e572f 100644
--- a/hypervisor/arch/arm64/Makefile
+++ b/hypervisor/arch/arm64/Makefile
@@ -19,5 +19,7 @@ always := built-in.o
obj-y := entry.o setup.o control.o mmio.o
obj-y += ../arm/mmu_cell.o ../arm/paging.o ../arm/dbg-write.o ../arm/lib.o
obj-y += exception.o traps.o
+obj-y += ../arm/irqchip.o ../arm/gic-common.o

obj-$(CONFIG_SERIAL_AMBA_PL011) += ../arm/dbg-write-pl011.o
+obj-$(CONFIG_ARM_GIC) += ../arm/gic-v2.o
diff --git a/hypervisor/arch/arm64/control.c b/hypervisor/arch/arm64/control.c
index a1c4774..f47cbe7 100644
--- a/hypervisor/arch/arm64/control.c
+++ b/hypervisor/arch/arm64/control.c
@@ -12,6 +12,10 @@

#include <jailhouse/control.h>
#include <jailhouse/printk.h>
+#include <asm/control.h>
+#include <asm/irqchip.h>
+#include <asm/platform.h>
+#include <asm/traps.h>

int arch_cell_create(struct cell *cell)
{
@@ -81,3 +85,46 @@ void arch_panic_park(void)
trace_error(-EINVAL);
while (1);
}
+
+void arch_handle_sgi(struct per_cpu *cpu_data, u32 irqn)
+{
+ cpu_data->stats[JAILHOUSE_CPU_STAT_VMEXITS_MANAGEMENT]++;
+
+ switch (irqn) {
+ case SGI_INJECT:
+ irqchip_inject_pending(cpu_data);
+ break;
+ default:
+ printk("WARN: unknown SGI received %d\n", irqn);
+ }
+}
+
+unsigned int arm_cpu_virt2phys(struct cell *cell, unsigned int virt_id)
+{
+ return trace_error(-EINVAL);
+}
+
+unsigned int arm_cpu_phys2virt(unsigned int cpu_id)
+{
+ return trace_error(-EINVAL);
+}
+
+/*
+ * Handle the maintenance interrupt, the rest is injected into the cell.
+ * Return true when the IRQ has been handled by the hyp.
+ */
+bool arch_handle_phys_irq(struct per_cpu *cpu_data, u32 irqn)
+{
+ if (irqn == MAINTENANCE_IRQ) {
+ cpu_data->stats[JAILHOUSE_CPU_STAT_VMEXITS_MAINTENANCE]++;
+
+ irqchip_inject_pending(cpu_data);
+ return true;
+ }
+
+ cpu_data->stats[JAILHOUSE_CPU_STAT_VMEXITS_VIRQ]++;
+
+ irqchip_set_pending(cpu_data, irqn, true);
+
+ return false;
+}
diff --git a/hypervisor/arch/arm64/include/asm/cell.h b/hypervisor/arch/arm64/include/asm/cell.h
index 4ba8224..9a9689e 100644
--- a/hypervisor/arch/arm64/include/asm/cell.h
+++ b/hypervisor/arch/arm64/include/asm/cell.h
@@ -26,6 +26,8 @@ struct arch_cell {
struct paging_structures mm;
spinlock_t caches_lock;
bool needs_flush;
+
+ u64 spis;
};

extern struct cell root_cell;
diff --git a/hypervisor/arch/arm64/include/asm/percpu.h b/hypervisor/arch/arm64/include/asm/percpu.h
index 381d7fc..261c594 100644
--- a/hypervisor/arch/arm64/include/asm/percpu.h
+++ b/hypervisor/arch/arm64/include/asm/percpu.h
@@ -23,9 +23,12 @@

#ifndef __ASSEMBLY__

+#include <jailhouse/printk.h>
#include <asm/cell.h>
#include <asm/spinlock.h>

+struct pending_irq;
+
struct per_cpu {
u8 stack[PAGE_SIZE];
unsigned long saved_vectors;
@@ -37,6 +40,13 @@ struct per_cpu {
int shutdown_state;
bool failed;

+ /* Other CPUs can insert sgis into the pending array */
+ spinlock_t gic_lock;
+ struct pending_irq *pending_irqs;
+ struct pending_irq *first_pending;
+ /* Only GICv3: redistributor base */
+ void *gicr_base;
+
bool flush_vcpu_caches;
} __attribute__((aligned(PAGE_SIZE)));

diff --git a/hypervisor/arch/arm64/mmio.c b/hypervisor/arch/arm64/mmio.c
index a885410..9eeb86e 100644
--- a/hypervisor/arch/arm64/mmio.c
+++ b/hypervisor/arch/arm64/mmio.c
@@ -18,6 +18,7 @@
#include <jailhouse/mmio.h>
#include <jailhouse/printk.h>
#include <asm/bitops.h>
+#include <asm/irqchip.h>
#include <asm/percpu.h>
#include <asm/sysregs.h>
#include <asm/traps.h>
@@ -26,8 +27,7 @@

unsigned int arch_mmio_count_regions(struct cell *cell)
{
- /* not entirely a lie :) */
- return 0;
+ return irqchip_mmio_count_regions(cell);
}

static void arch_inject_dabt(struct trap_context *ctx, unsigned long addr)
diff --git a/hypervisor/arch/arm64/setup.c b/hypervisor/arch/arm64/setup.c
index 13e6387..838c541 100644
--- a/hypervisor/arch/arm64/setup.c
+++ b/hypervisor/arch/arm64/setup.c
@@ -13,23 +13,43 @@
#include <jailhouse/entry.h>
#include <jailhouse/printk.h>
#include <asm/control.h>
+#include <asm/irqchip.h>
#include <asm/setup.h>

int arch_init_early(void)
{
- return arch_mmu_cell_init(&root_cell);
+ int err = 0;
+
+ err = arch_mmu_cell_init(&root_cell);
+ if (err)
+ return err;
+
+ return irqchip_init();
}

int arch_cpu_init(struct per_cpu *cpu_data)
{
+ int err = 0;
+
/* switch to the permanent page tables */
enable_mmu_el2(hv_paging_structs.root_table);

- return arch_mmu_cpu_cell_init(cpu_data);
+ err = arch_mmu_cpu_cell_init(cpu_data);
+ if (err)
+ return err;
+
+ return irqchip_cpu_init(cpu_data);
}

int arch_init_late(void)
{
+ int err;
+
+ /* Setup the SPI bitmap */
+ err = irqchip_cell_init(&root_cell);
+ if (err)
+ return err;
+
return map_root_memory_regions();
}

diff --git a/hypervisor/arch/arm64/traps.c b/hypervisor/arch/arm64/traps.c
index cc6fe6c..fec057d 100644
--- a/hypervisor/arch/arm64/traps.c
+++ b/hypervisor/arch/arm64/traps.c
@@ -21,6 +21,7 @@
#include <asm/sysregs.h>
#include <asm/traps.h>
#include <asm/processor.h>
+#include <asm/irqchip.h>

void arch_skip_instruction(struct trap_context *ctx)
{
@@ -143,6 +144,10 @@ struct registers *arch_handle_exit(struct per_cpu *cpu_data,
cpu_data->stats[JAILHOUSE_CPU_STAT_VMEXITS_TOTAL]++;

switch (regs->exit_reason) {
+ case EXIT_REASON_EL1_IRQ:
+ irqchip_handle_irq(cpu_data);
+ break;
+
case EXIT_REASON_EL1_ABORT:
arch_handle_trap(cpu_data, regs);
break;
--
2.4.3.368.g7974889


antonios...@huawei.com

Dec 18, 2015, 4:33:45 PM12/18/15
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

We now have enough functionality implemented to return to the root
cell. We just need to enable the guest traps, which will be handled
by the MMU, MMIO, and GIC code we already plugged into the port.
Finally, we restore the state of the root cell that we previously
stored on the stack.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm64/setup.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/hypervisor/arch/arm64/setup.c b/hypervisor/arch/arm64/setup.c
index 8dd1095..a6b35ab 100644
--- a/hypervisor/arch/arm64/setup.c
+++ b/hypervisor/arch/arm64/setup.c
@@ -31,6 +31,8 @@ int arch_init_early(void)
int arch_cpu_init(struct per_cpu *cpu_data)
{
int err = 0;
+ unsigned long hcr = HCR_VM_BIT | HCR_IMO_BIT | HCR_FMO_BIT
+ | HCR_TSC_BIT | HCR_TAC_BIT | HCR_RW_BIT;

/* switch to the permanent page tables */
enable_mmu_el2(hv_paging_structs.root_table);
@@ -39,6 +41,9 @@ int arch_cpu_init(struct per_cpu *cpu_data)
cpu_data->virt_id = cpu_data->cpu_id;
arm_read_sysreg(MPIDR_EL1, cpu_data->mpidr.val);

+ /* Setup guest traps */
+ arm_write_sysreg(HCR_EL2, hcr);
+
err = arch_mmu_cpu_cell_init(cpu_data);
if (err)
return err;
@@ -60,8 +65,12 @@ int arch_init_late(void)

void __attribute__((noreturn)) arch_cpu_activate_vmm(struct per_cpu *cpu_data)
{
- trace_error(-EINVAL);
- while (1);
+ struct registers *regs = guest_regs(cpu_data);
+
+ /* return to the caller in Linux */
+ arm_write_sysreg(ELR_EL2, regs->usr[30]);
+
+ vmreturn(regs);
}

int arch_map_device(void *paddr, void *vaddr, unsigned long size)
--
2.4.3.368.g7974889
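
For reference, the HCR_EL2 bits combined above have the following
architectural meaning (bit positions as in the ARMv8 ARM; the actual
HCR_*_BIT definitions live in asm/processor.h and are only assumed here):

/* what the hcr value assembled in arch_cpu_init() switches on */
#define HCR_VM_BIT	(1ul << 0)	/* stage 2 translation for EL1/EL0 */
#define HCR_FMO_BIT	(1ul << 3)	/* route physical FIQs to EL2 */
#define HCR_IMO_BIT	(1ul << 4)	/* route physical IRQs to EL2 */
#define HCR_TSC_BIT	(1ul << 19)	/* trap SMC instructions to EL2 */
#define HCR_TAC_BIT	(1ul << 21)	/* trap ACTLR accesses to EL2 */
#define HCR_RW_BIT	(1ul << 31)	/* EL1 runs in AArch64 state */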


antonios...@huawei.com

Dec 18, 2015, 4:33:45 PM12/18/15
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

Add hypervisor disable support to the Jailhouse firmware. Handle
Jailhouse disable calls from the root cell, and also disable the
hypervisor in case of an error during initialization.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm64/control.c | 7 ++++--
hypervisor/arch/arm64/entry.S | 33 +++++++++++++++++++++++++++
hypervisor/arch/arm64/include/asm/paging.h | 15 ++++++++++++-
hypervisor/arch/arm64/include/asm/percpu.h | 1 +
hypervisor/arch/arm64/setup.c | 36 ++++++++++++++++++++++++++++--
hypervisor/arch/arm64/traps.c | 4 ++++
6 files changed, 91 insertions(+), 5 deletions(-)

diff --git a/hypervisor/arch/arm64/control.c b/hypervisor/arch/arm64/control.c
index 0a15296..2f71eaa 100644
--- a/hypervisor/arch/arm64/control.c
+++ b/hypervisor/arch/arm64/control.c
@@ -124,8 +124,11 @@ void arch_config_commit(struct cell *cell_added_removed)

void arch_shutdown(void)
{
- trace_error(-EINVAL);
- while (1);
+ unsigned int cpu;
+
+ /* turn off the hypervisor when we return from the exit handler */
+ for_each_cpu(cpu, root_cell.cpu_set)
+ per_cpu(cpu)->shutdown = true;
}

void arch_suspend_cpu(unsigned int cpu_id)
diff --git a/hypervisor/arch/arm64/entry.S b/hypervisor/arch/arm64/entry.S
index 9f24063..971db8f 100644
--- a/hypervisor/arch/arm64/entry.S
+++ b/hypervisor/arch/arm64/entry.S
@@ -105,6 +105,39 @@ el2_entry:
bl entry
b .

+ .globl arch_shutdown_mmu
+arch_shutdown_mmu:
+ /* x0: struct percpu* */
+ mov x19, x0
+
+ /* Note: no memory accesses may be performed after the MMU is turned
+ * off. There is a non-zero probability that cached data is not yet
+ * synchronized with system memory, and the CPU bypasses the D-cache
+ * when the MMU is off. */
+
+ /* hand over control of EL2 back to Linux */
+ add x1, x19, #PERCPU_LINUX_SAVED_VECTORS
+ ldr x2, [x1]
+ msr vbar_el2, x2
+
+ /* disable the hypervisor MMU */
+ mrs x1, sctlr_el2
+ ldr x2, =(SCTLR_M_BIT | SCTLR_C_BIT | SCTLR_I_BIT)
+ bic x1, x1, x2
+ msr sctlr_el2, x1
+ isb
+
+ msr mair_el2, xzr
+ msr ttbr0_el2, xzr
+ msr tcr_el2, xzr
+ isb
+
+ msr tpidr_el2, xzr
+
+ /* Call vmreturn(guest_registers) */
+ add x0, x19, #(PERCPU_STACK_END - 32 * 8)
+ b vmreturn
+
.globl enable_mmu_el2
enable_mmu_el2:
/*
diff --git a/hypervisor/arch/arm64/include/asm/paging.h b/hypervisor/arch/arm64/include/asm/paging.h
index 1b5c2fd..cb172cb 100644
--- a/hypervisor/arch/arm64/include/asm/paging.h
+++ b/hypervisor/arch/arm64/include/asm/paging.h
@@ -236,7 +236,20 @@ static inline void arch_paging_flush_page_tlbs(unsigned long page_addr)
/* Used to clean the PAGE_MAP_COHERENT page table changes */
static inline void arch_paging_flush_cpu_caches(void *addr, long size)
{
- /* AARCH64_TODO */
+ unsigned int cache_line_size;
+ u64 ctr;
+
+ arm_read_sysreg(CTR_EL0, ctr);
+ /* Extract the minimal cache line size */
+ cache_line_size = 4 << (ctr >> 16 & 0xf);
+
+ do {
+ /* Clean & invalidate by MVA to PoC */
+ asm volatile ("dc civac, %0" : : "r" (addr));
+ size -= cache_line_size;
+ addr += cache_line_size;
+ } while (size > 0);
+
}

#endif /* !__ASSEMBLY__ */
diff --git a/hypervisor/arch/arm64/include/asm/percpu.h b/hypervisor/arch/arm64/include/asm/percpu.h
index d9e0a84..8d0623c 100644
--- a/hypervisor/arch/arm64/include/asm/percpu.h
+++ b/hypervisor/arch/arm64/include/asm/percpu.h
@@ -67,6 +67,7 @@ struct per_cpu {

unsigned int virt_id;
union mpidr mpidr;
+ bool shutdown;
} __attribute__((aligned(PAGE_SIZE)));

static inline struct per_cpu *this_cpu_data(void)
diff --git a/hypervisor/arch/arm64/setup.c b/hypervisor/arch/arm64/setup.c
index a6b35ab..95fe23f 100644
--- a/hypervisor/arch/arm64/setup.c
+++ b/hypervisor/arch/arm64/setup.c
@@ -87,8 +87,40 @@ int arch_unmap_device(void *vaddr, unsigned long size)
PAGING_NON_COHERENT);
}

+/* disable the hypervisor on the current CPU */
+void arch_shutdown_self(struct per_cpu *cpu_data)
+{
+ irqchip_cpu_shutdown(cpu_data);
+
+ /* Free the guest */
+ arm_write_sysreg(HCR_EL2, HCR_RW_BIT);
+ arm_write_sysreg(VTCR_EL2, VTCR_RES1);
+
+ /* Remove stage-2 mappings */
+ arch_cpu_tlb_flush(cpu_data);
+
+ /* TLB flush needs the cell's VMID */
+ isb();
+ arm_write_sysreg(VTTBR_EL2, 0);
+
+ /* we will restore the root cell state with the MMU turned off,
+ * so we need to make sure it has been committed to memory */
+ arch_paging_flush_cpu_caches(guest_regs(cpu_data),
+ sizeof(struct registers));
+ dsb(ish);
+
+ /* Return to EL1 */
+ arch_shutdown_mmu(cpu_data);
+}
+
void arch_cpu_restore(struct per_cpu *cpu_data, int return_code)
{
- trace_error(-EINVAL);
- while (1);
+ struct registers *regs = guest_regs(cpu_data);
+
+ /* Jailhouse initialization failed; return to the caller in EL1 */
+ arm_write_sysreg(ELR_EL2, regs->usr[30]);
+
+ regs->usr[0] = return_code;
+
+ arch_shutdown_self(cpu_data);
}
diff --git a/hypervisor/arch/arm64/traps.c b/hypervisor/arch/arm64/traps.c
index 8797165..afedab8 100644
--- a/hypervisor/arch/arm64/traps.c
+++ b/hypervisor/arch/arm64/traps.c
@@ -195,5 +195,9 @@ struct registers *arch_handle_exit(struct per_cpu *cpu_data,
panic_stop();
}

+ if (cpu_data->shutdown)
+ /* Won't return here. */
+ arch_shutdown_self(cpu_data);
+
vmreturn(regs);
}
--
2.4.3.368.g7974889
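
The cache maintenance loop above derives the smallest D-cache line size from
CTR_EL0.DminLine (bits [19:16], log2 of the line size in words), hence the
4 << (ctr >> 16 & 0xf) expression. A standalone sketch of that computation:

#include <stdint.h>
#include <stdio.h>

static unsigned int ctr_to_line_size(uint64_t ctr)
{
	return 4u << ((ctr >> 16) & 0xf);
}

int main(void)
{
	/* DminLine = 4 (16 words) is a common value: 64-byte lines */
	uint64_t example_ctr = 0x4ull << 16;

	printf("minimal D-cache line: %u bytes\n",
	       ctr_to_line_size(example_ctr));
	return 0;
}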


antonios...@huawei.com

Dec 18, 2015, 4:33:46 PM12/18/15
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

PSCI actually identifies CPUs by their MPIDR value, which may
differ from the logical id of the CPU. This patch is a first step
towards properly handling the CPU affinity levels in the MPIDR.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm/control.c | 11 +++++++++++
hypervisor/arch/arm/include/asm/control.h | 1 +
hypervisor/arch/arm/include/asm/percpu.h | 13 +++++++++++++
hypervisor/arch/arm/psci.c | 5 ++---
hypervisor/arch/arm64/control.c | 14 +++++++++++---
hypervisor/arch/arm64/include/asm/control.h | 1 +
hypervisor/arch/arm64/include/asm/percpu.h | 13 +++++++++++++
hypervisor/arch/arm64/setup.c | 1 +
8 files changed, 53 insertions(+), 6 deletions(-)

diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index 1c17c31..804bf61 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -302,6 +302,17 @@ void arch_handle_sgi(struct per_cpu *cpu_data, u32 irqn)
}
}

+unsigned int arm_cpu_by_mpid(struct cell *cell, unsigned long mpid)
+{
+ unsigned int cpu;
+
+ for_each_cpu(cpu, cell->cpu_set)
+ if (mpid == (per_cpu(cpu)->mpidr.val & 0xff00fffffful))
+ return cpu;
+
+ return -1;
+}
+
unsigned int arm_cpu_virt2phys(struct cell *cell, unsigned int virt_id)
{
unsigned int cpu;
diff --git a/hypervisor/arch/arm/include/asm/control.h b/hypervisor/arch/arm/include/asm/control.h
index f050e76..f81f879 100644
--- a/hypervisor/arch/arm/include/asm/control.h
+++ b/hypervisor/arch/arm/include/asm/control.h
@@ -38,6 +38,7 @@ struct registers* arch_handle_exit(struct per_cpu *cpu_data,
bool arch_handle_phys_irq(struct per_cpu *cpu_data, u32 irqn);
void arch_reset_self(struct per_cpu *cpu_data);
void arch_shutdown_self(struct per_cpu *cpu_data);
+unsigned int arm_cpu_by_mpid(struct cell *cell, unsigned long mpid);

void __attribute__((noreturn)) vmreturn(struct registers *guest_regs);
void __attribute__((noreturn)) arch_shutdown_mmu(struct per_cpu *cpu_data);
diff --git a/hypervisor/arch/arm/include/asm/percpu.h b/hypervisor/arch/arm/include/asm/percpu.h
index 3ab3a68..d5013c0 100644
--- a/hypervisor/arch/arm/include/asm/percpu.h
+++ b/hypervisor/arch/arm/include/asm/percpu.h
@@ -32,6 +32,18 @@

struct pending_irq;

+union mpidr {
+ u64 val;
+ struct {
+ u8 aff0;
+ u8 aff1;
+ u8 aff2;
+ u8 pad1;
+ u8 aff3;
+ u8 pad2[3];
+ } f;
+};
+
struct per_cpu {
/* Keep these two in sync with defines above! */
u8 stack[PAGE_SIZE];
@@ -63,6 +75,7 @@ struct per_cpu {
bool flush_vcpu_caches;
int shutdown_state;
bool shutdown;
+ union mpidr mpidr;
bool failed;
} __attribute__((aligned(PAGE_SIZE)));

diff --git a/hypervisor/arch/arm/psci.c b/hypervisor/arch/arm/psci.c
index 6da0961..8d9cca3 100644
--- a/hypervisor/arch/arm/psci.c
+++ b/hypervisor/arch/arm/psci.c
@@ -78,11 +78,10 @@ int psci_wait_cpu_stopped(unsigned int cpu_id)
static long psci_emulate_cpu_on(struct per_cpu *cpu_data,
struct trap_context *ctx)
{
- unsigned long target = ctx->regs[1];
unsigned long cpu;
struct psci_mbox *mbox;

- cpu = arm_cpu_virt2phys(cpu_data->cell, target);
+ cpu = arm_cpu_by_mpid(cpu_data->cell, ctx->regs[1]);
if (cpu == -1)
/* Virtual id not in set */
return PSCI_DENIED;
@@ -97,7 +96,7 @@ static long psci_emulate_cpu_on(struct per_cpu *cpu_data,
static long psci_emulate_affinity_info(struct per_cpu *cpu_data,
struct trap_context *ctx)
{
- unsigned int cpu = arm_cpu_virt2phys(cpu_data->cell, ctx->regs[1]);
+ unsigned int cpu = arm_cpu_by_mpid(cpu_data->cell, ctx->regs[1]);

if (cpu == -1)
/* Virtual id not in set */
diff --git a/hypervisor/arch/arm64/control.c b/hypervisor/arch/arm64/control.c
index a804ae4..0a15296 100644
--- a/hypervisor/arch/arm64/control.c
+++ b/hypervisor/arch/arm64/control.c
@@ -93,9 +93,6 @@ void arch_reset_self(struct per_cpu *cpu_data)
/* Wait for the driver to call cpu_up */
reset_address = psci_emulate_spin(cpu_data);

- /* Set the new MPIDR */
- arm_write_sysreg(VMPIDR_EL2, cpu_data->virt_id | MPIDR_MP_BIT);
-
/* Restore an empty context */
arch_reset_el1(regs);

@@ -186,6 +183,17 @@ void arch_handle_sgi(struct per_cpu *cpu_data, u32 irqn)
}
}

+unsigned int arm_cpu_by_mpid(struct cell *cell, unsigned long mpid)
+{
+ unsigned int cpu;
+
+ for_each_cpu(cpu, cell->cpu_set)
+ if (mpid == (per_cpu(cpu)->mpidr.val & 0xff00fffffful))
+ return cpu;
+
+ return -1;
+}
+
unsigned int arm_cpu_virt2phys(struct cell *cell, unsigned int virt_id)
{
unsigned int cpu;
diff --git a/hypervisor/arch/arm64/include/asm/control.h b/hypervisor/arch/arm64/include/asm/control.h
index 1957d55..6db6bd0 100644
--- a/hypervisor/arch/arm64/include/asm/control.h
+++ b/hypervisor/arch/arm64/include/asm/control.h
@@ -35,6 +35,7 @@ struct registers* arch_handle_exit(struct per_cpu *cpu_data,
bool arch_handle_phys_irq(struct per_cpu *cpu_data, u32 irqn);
void arch_reset_self(struct per_cpu *cpu_data);
void arch_shutdown_self(struct per_cpu *cpu_data);
+unsigned int arm_cpu_by_mpid(struct cell *cell, unsigned long mpid);

void __attribute__((noreturn)) vmreturn(struct registers *guest_regs);
void __attribute__((noreturn)) arch_shutdown_mmu(struct per_cpu *cpu_data);
diff --git a/hypervisor/arch/arm64/include/asm/percpu.h b/hypervisor/arch/arm64/include/asm/percpu.h
index dc2c2c7..d9e0a84 100644
--- a/hypervisor/arch/arm64/include/asm/percpu.h
+++ b/hypervisor/arch/arm64/include/asm/percpu.h
@@ -30,6 +30,18 @@

struct pending_irq;

+union mpidr {
+ u64 val;
+ struct {
+ u8 aff0;
+ u8 aff1;
+ u8 aff2;
+ u8 pad1;
+ u8 aff3;
+ u8 pad2[3];
+ } f;
+};
+
struct per_cpu {
u8 stack[PAGE_SIZE];
unsigned long saved_vectors;
@@ -54,6 +66,7 @@ struct per_cpu {
struct psci_mbox guest_mbox;

unsigned int virt_id;
+ union mpidr mpidr;
} __attribute__((aligned(PAGE_SIZE)));

static inline struct per_cpu *this_cpu_data(void)
diff --git a/hypervisor/arch/arm64/setup.c b/hypervisor/arch/arm64/setup.c
index e999c02..8dd1095 100644
--- a/hypervisor/arch/arm64/setup.c
+++ b/hypervisor/arch/arm64/setup.c
@@ -37,6 +37,7 @@ int arch_cpu_init(struct per_cpu *cpu_data)

cpu_data->psci_mbox.entry = 0;
cpu_data->virt_id = cpu_data->cpu_id;
+ arm_read_sysreg(MPIDR_EL1, cpu_data->mpidr.val);

err = arch_mmu_cpu_cell_init(cpu_data);
if (err)
--
2.4.3.368.g7974889
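
The 0xff00fffffful mask used by arm_cpu_by_mpid() keeps exactly the four
affinity fields of MPIDR_EL1 (Aff0-Aff2 in bits [23:0], Aff3 in bits [39:32])
and drops the non-affinity bits in between (MT, U, bit 31). A small
standalone sketch, reusing the union added to percpu.h (little-endian layout
assumed, as in the patch):

#include <stdint.h>
#include <stdio.h>

union mpidr {
	uint64_t val;
	struct {
		uint8_t aff0;
		uint8_t aff1;
		uint8_t aff2;
		uint8_t pad1;
		uint8_t aff3;
		uint8_t pad2[3];
	} f;
};

int main(void)
{
	/* hypothetical MPIDR: Aff1 = 1, Aff0 = 2, MT and bit 31 set */
	union mpidr m = { .val = (1ull << 31) | (1ull << 24) | 0x0102 };
	uint64_t affinity = m.val & 0xff00fffffful;

	printf("raw %#llx, affinity-only %#llx (Aff1=%u Aff0=%u)\n",
	       (unsigned long long)m.val, (unsigned long long)affinity,
	       (unsigned)m.f.aff1, (unsigned)m.f.aff0);
	return 0;
}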


antonios...@huawei.com

Dec 18, 2015, 4:33:47 PM12/18/15
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

Plug in the core handler for hypercalls, so we can start implementing
the more interesting stuff.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm64/traps.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/hypervisor/arch/arm64/traps.c b/hypervisor/arch/arm64/traps.c
index 7a0b24c..8797165 100644
--- a/hypervisor/arch/arm64/traps.c
+++ b/hypervisor/arch/arm64/traps.c
@@ -47,11 +47,10 @@ static int arch_handle_hvc(struct trap_context *ctx)
{
unsigned long *regs = ctx->regs;

- if (!IS_PSCI_FN(regs[0]))
- return TRAP_UNHANDLED;
-
- regs[0] = psci_dispatch(ctx);
- arch_skip_instruction(ctx);
+ if (IS_PSCI_FN(regs[0]))
+ regs[0] = psci_dispatch(ctx);
+ else
+ regs[0] = hypercall(regs[0], regs[1], regs[2]);

return TRAP_HANDLED;
}
--
2.4.3.368.g7974889
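
With this dispatch in place, a root cell can issue a Jailhouse hypercall by
passing the code in x0 and up to two arguments in x1/x2, getting the result
back in x0, which is exactly what hypercall(regs[0], regs[1], regs[2]) above
consumes. A minimal inline-assembly sketch of such a call, to be executed at
EL1; the hypercall numbers themselves come from jailhouse_hypercall.h and are
not repeated here:

static inline long example_jailhouse_hvc(unsigned long code,
					 unsigned long arg1,
					 unsigned long arg2)
{
	register unsigned long x0 asm("x0") = code;
	register unsigned long x1 asm("x1") = arg1;
	register unsigned long x2 asm("x2") = arg2;

	/* traps to EL2 as ESR_EC_HVC64 and lands in arch_handle_hvc() */
	asm volatile("hvc #0\n"
		     : "+r" (x0)
		     : "r" (x1), "r" (x2)
		     : "memory");
	return x0;
}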


antonios...@huawei.com

Dec 18, 2015, 4:33:47 PM12/18/15
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

This is a straight port of the inmate demos from AArch32 to AArch64.
These can now be loaded as cells onto a Foundation ARMv8 model.
Code reuse can possibly be increased here as well.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
inmates/Makefile | 1 +
inmates/demos/arm64/Makefile | 20 +++++++
inmates/demos/arm64/gic-demo.c | 58 +++++++++++++++++++
inmates/demos/arm64/uart-demo.c | 40 +++++++++++++
inmates/lib/arm64/Makefile | 19 +++++++
inmates/lib/arm64/Makefile.lib | 46 +++++++++++++++
inmates/lib/arm64/gic-v2.c | 39 +++++++++++++
inmates/lib/arm64/gic.c | 43 ++++++++++++++
inmates/lib/arm64/header.S | 66 ++++++++++++++++++++++
inmates/lib/arm64/include/inmates/gic.h | 25 ++++++++
inmates/lib/arm64/include/inmates/inmate.h | 54 ++++++++++++++++++
.../arm64/include/mach-amd-seattle/mach/gic_v2.h | 14 +++++
.../arm64/include/mach-amd-seattle/mach/timer.h | 13 +++++
.../lib/arm64/include/mach-amd-seattle/mach/uart.h | 13 +++++
.../arm64/include/mach-foundation-v8/mach/gic_v2.h | 14 +++++
.../arm64/include/mach-foundation-v8/mach/timer.h | 13 +++++
.../arm64/include/mach-foundation-v8/mach/uart.h | 13 +++++
inmates/lib/arm64/inmate.lds | 40 +++++++++++++
inmates/lib/arm64/printk.c | 55 ++++++++++++++++++
inmates/lib/arm64/timer.c | 55 ++++++++++++++++++
inmates/lib/arm64/uart-pl011.c | 23 ++++++++
21 files changed, 664 insertions(+)
create mode 100644 inmates/demos/arm64/gic-demo.c
create mode 100644 inmates/demos/arm64/uart-demo.c
create mode 100644 inmates/lib/arm64/Makefile.lib
create mode 100644 inmates/lib/arm64/gic-v2.c
create mode 100644 inmates/lib/arm64/gic.c
create mode 100644 inmates/lib/arm64/header.S
create mode 100644 inmates/lib/arm64/include/inmates/gic.h
create mode 100644 inmates/lib/arm64/include/inmates/inmate.h
create mode 100644 inmates/lib/arm64/include/mach-amd-seattle/mach/gic_v2.h
create mode 100644 inmates/lib/arm64/include/mach-amd-seattle/mach/timer.h
create mode 100644 inmates/lib/arm64/include/mach-amd-seattle/mach/uart.h
create mode 100644 inmates/lib/arm64/include/mach-foundation-v8/mach/gic_v2.h
create mode 100644 inmates/lib/arm64/include/mach-foundation-v8/mach/timer.h
create mode 100644 inmates/lib/arm64/include/mach-foundation-v8/mach/uart.h
create mode 100644 inmates/lib/arm64/inmate.lds
create mode 100644 inmates/lib/arm64/printk.c
create mode 100644 inmates/lib/arm64/timer.c
create mode 100644 inmates/lib/arm64/uart-pl011.c

diff --git a/inmates/Makefile b/inmates/Makefile
index 0d4ea5d..904be0a 100644
--- a/inmates/Makefile
+++ b/inmates/Makefile
@@ -15,6 +15,7 @@ export INMATES_LIB

INCLUDES := -I$(INMATES_LIB) \
-I$(src)/../hypervisor/arch/$(SRCARCH)/include \
+ -I$(src)/../hypervisor/arch/arm/include \
-I$(src)/../hypervisor/include

LINUXINCLUDE :=
diff --git a/inmates/demos/arm64/Makefile b/inmates/demos/arm64/Makefile
index e69de29..f61667a 100644
--- a/inmates/demos/arm64/Makefile
+++ b/inmates/demos/arm64/Makefile
@@ -0,0 +1,20 @@
+#
+# Jailhouse AArch64 support
+#
+# Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+#
+# Authors:
+# Antonios Motakis <antonios...@huawei.com>
+#
+# This work is licensed under the terms of the GNU GPL, version 2. See
+# the COPYING file in the top-level directory.
+#
+
+include $(INMATES_LIB)/Makefile.lib
+
+INMATES := gic-demo.bin uart-demo.bin
+
+gic-demo-y := gic-demo.o
+uart-demo-y := uart-demo.o
+
+$(eval $(call DECLARE_TARGETS,$(INMATES)))
diff --git a/inmates/demos/arm64/gic-demo.c b/inmates/demos/arm64/gic-demo.c
new file mode 100644
index 0000000..61b432e
--- /dev/null
+++ b/inmates/demos/arm64/gic-demo.c
@@ -0,0 +1,58 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-phili...@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <asm/gic_common.h>
+#include <inmates/inmate.h>
+#include <mach/timer.h>
+
+#define BEATS_PER_SEC 10
+
+static u64 ticks_per_beat;
+static volatile u64 expected_ticks;
+
+static void handle_IRQ(unsigned int irqn)
+{
+ static u64 min_delta = ~0ULL, max_delta = 0;
+ u64 delta;
+
+ if (irqn != TIMER_IRQ)
+ return;
+
+ delta = timer_get_ticks() - expected_ticks;
+ if (delta < min_delta)
+ min_delta = delta;
+ if (delta > max_delta)
+ max_delta = delta;
+
+ printk("Timer fired, jitter: %6ld ns, min: %6ld ns, max: %6ld ns\n",
+ (long)timer_ticks_to_ns(delta),
+ (long)timer_ticks_to_ns(min_delta),
+ (long)timer_ticks_to_ns(max_delta));
+
+ expected_ticks = timer_get_ticks() + ticks_per_beat;
+ timer_start(ticks_per_beat);
+}
+
+void inmate_main(void)
+{
+ printk("Initializing the GIC...\n");
+ gic_setup(handle_IRQ);
+ gic_enable_irq(TIMER_IRQ);
+
+ printk("Initializing the timer...\n");
+ ticks_per_beat = timer_get_frequency() / BEATS_PER_SEC;
+ expected_ticks = timer_get_ticks() + ticks_per_beat;
+ timer_start(ticks_per_beat);
+
+ while (1)
+ asm volatile("wfi" : : : "memory");
+}
diff --git a/inmates/demos/arm64/uart-demo.c b/inmates/demos/arm64/uart-demo.c
new file mode 100644
index 0000000..3e030d4
--- /dev/null
+++ b/inmates/demos/arm64/uart-demo.c
@@ -0,0 +1,40 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-phili...@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <inmates/inmate.h>
+
+/*
+ * To ease the debugging, we can send a spurious hypercall, which should return
+ * -ENOSYS, but appear in the hypervisor stats for this cell.
+ */
+static void heartbeat(void)
+{
+ asm volatile (
+ "mov x0, %0\n"
+ "hvc #0\n"
+ : : "r" (0xbea7) : "x0");
+}
+
+void inmate_main(void)
+{
+ unsigned int i = 0, j;
+ /*
+ * The cell config can set up a mapping to access UARTx instead of UART0
+ */
+ while(++i) {
+ for (j = 0; j < 100000000; j++);
+ printk("Hello %d from cell!\n", i);
+ heartbeat();
+ }
+
+ /* lr should be 0, so a return will go back to the reset vector */
+}
diff --git a/inmates/lib/arm64/Makefile b/inmates/lib/arm64/Makefile
index e69de29..2859134 100644
--- a/inmates/lib/arm64/Makefile
+++ b/inmates/lib/arm64/Makefile
@@ -0,0 +1,19 @@
+#
+# Jailhouse, a Linux-based partitioning hypervisor
+#
+# Copyright (c) Siemens AG, 2015
+#
+# Authors:
+# Jan Kiszka <jan.k...@siemens.com>
+#
+# This work is licensed under the terms of the GNU GPL, version 2. See
+# the COPYING file in the top-level directory.
+#
+
+include $(INMATES_LIB)/Makefile.lib
+
+always := lib.a
+
+lib-y := header.o gic.o printk.o timer.o
+lib-$(CONFIG_ARM_GIC) += gic-v2.o
+lib-$(CONFIG_SERIAL_AMBA_PL011) += uart-pl011.o
diff --git a/inmates/lib/arm64/Makefile.lib b/inmates/lib/arm64/Makefile.lib
new file mode 100644
index 0000000..0196c51
--- /dev/null
+++ b/inmates/lib/arm64/Makefile.lib
@@ -0,0 +1,46 @@
+#
+# Jailhouse, a Linux-based partitioning hypervisor
+#
+# Copyright (c) ARM Limited, 2014
+# Copyright (c) Siemens AG, 2014
+#
+# Authors:
+# Jean-Philippe Brucker <jean-phili...@arm.com>
+# Jan Kiszka <jan.k...@siemens.com>
+#
+# This work is licensed under the terms of the GNU GPL, version 2. See
+# the COPYING file in the top-level directory.
+#
+
+-include $(obj)/../../../hypervisor/include/generated/config.mk
+
+KBUILD_AFLAGS := $(filter-out -include asm/unified.h,$(KBUILD_AFLAGS))
+
+KBUILD_CFLAGS += -I$(INMATES_LIB)/include
+KBUILD_AFLAGS += -I$(INMATES_LIB)/include
+
+define DECLARE_TARGETS =
+ _TARGETS = $(1)
+ always := $$(_TARGETS)
+ # $(NAME-y) NAME-linked.o NAME.bin
+ targets += $$(foreach t,$$(_TARGETS:.bin=-y),$$($$t)) \
+ $$(_TARGETS:.bin=-linked.o) $$(_TARGETS)
+endef
+
+mach-$(CONFIG_MACH_FOUNDATION_V8) := foundation-v8
+mach-$(CONFIG_MACH_AMD_SEATTLE) := amd-seattle
+
+MACHINE := mach-$(mach-y)
+KBUILD_CFLAGS += -I$(INMATES_LIB)/include/$(MACHINE)
+KBUILD_AFLAGS += -I$(INMATES_LIB)/include/$(MACHINE)
+
+# prevent deleting intermediate files which would cause rebuilds
+.SECONDARY: $(addprefix $(obj)/,$(targets))
+
+.SECONDEXPANSION:
+$(obj)/%-linked.o: $(INMATES_LIB)/inmate.lds $$(addprefix $$(obj)/,$$($$*-y)) \
+ $(INMATES_LIB)/lib.a
+ $(call if_changed,ld)
+
+$(obj)/%.bin: $(obj)/%-linked.o
+ $(call if_changed,objcopy)
diff --git a/inmates/lib/arm64/gic-v2.c b/inmates/lib/arm64/gic-v2.c
new file mode 100644
index 0000000..f14a742
--- /dev/null
+++ b/inmates/lib/arm64/gic-v2.c
@@ -0,0 +1,39 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-phili...@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+#include <asm/gic_common.h>
+#include <asm/gic_v2.h>
+#include <inmates/gic.h>
+#include <inmates/inmate.h>
+#include <mach/gic_v2.h>
+
+void gic_enable(unsigned int irqn)
+{
+ mmio_write32(GICD_BASE + GICD_ISENABLER, 1 << irqn);
+}
+
+int gic_init(void)
+{
+ mmio_write32(GICC_BASE + GICC_CTLR, GICC_CTLR_GRPEN1);
+ mmio_write32(GICC_BASE + GICC_PMR, GICC_PMR_DEFAULT);
+
+ return 0;
+}
+
+void gic_write_eoi(u32 irqn)
+{
+ mmio_write32(GICC_BASE + GICC_EOIR, irqn);
+}
+
+u32 gic_read_ack(void)
+{
+ return mmio_read32(GICC_BASE + GICC_IAR);
+}
diff --git a/inmates/lib/arm64/gic.c b/inmates/lib/arm64/gic.c
new file mode 100644
index 0000000..dd7a790
--- /dev/null
+++ b/inmates/lib/arm64/gic.c
@@ -0,0 +1,43 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-phili...@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <inmates/inmate.h>
+#include <inmates/gic.h>
+
+static irq_handler_t irq_handler = (irq_handler_t)NULL;
+
+/* Replaces the weak reference in header.S */
+void vector_irq(void)
+{
+ u32 irqn;
+
+ do {
+ irqn = gic_read_ack();
+
+ if (irq_handler)
+ irq_handler(irqn);
+
+ gic_write_eoi(irqn);
+
+ } while (irqn != 0x3ff);
+}
+
+void gic_setup(irq_handler_t handler)
+{
+ gic_init();
+ irq_handler = handler;
+}
+
+void gic_enable_irq(unsigned int irq)
+{
+ gic_enable(irq);
+}
diff --git a/inmates/lib/arm64/header.S b/inmates/lib/arm64/header.S
new file mode 100644
index 0000000..fe7bae7
--- /dev/null
+++ b/inmates/lib/arm64/header.S
@@ -0,0 +1,66 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+.macro ventry label
+ .align 7
+ b \label
+.endm
+
+ .section ".boot", "ax"
+ .globl __reset_entry
+__reset_entry:
+ ldr x0, =vectors
+ msr vbar_el1, x0
+
+ ldr x0, =stack_top
+ mov sp, x0
+
+ mov x0, #(3 << 20)
+ msr cpacr_el1, x0
+
+ msr daif, xzr
+
+ isb
+
+ b inmate_main
+
+handle_irq:
+ bl vector_irq
+ eret
+
+.weak vector_irq
+ b .
+
+ .globl vectors
+ .align 11
+vectors:
+ ventry .
+ ventry .
+ ventry .
+ ventry .
+
+ ventry .
+ ventry handle_irq
+ ventry .
+ ventry .
+
+ ventry .
+ ventry handle_irq
+ ventry .
+ ventry .
+
+ ventry .
+ ventry .
+ ventry .
+ ventry .
+
+ .ltorg
diff --git a/inmates/lib/arm64/include/inmates/gic.h b/inmates/lib/arm64/include/inmates/gic.h
new file mode 100644
index 0000000..4e5eabd
--- /dev/null
+++ b/inmates/lib/arm64/include/inmates/gic.h
@@ -0,0 +1,25 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-phili...@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+#ifndef _JAILHOUSE_INMATES_GIC_H
+#define _JAILHOUSE_INMATES_GIC_H
+
+#include <jailhouse/types.h>
+
+#ifndef __ASSEMBLY__
+
+int gic_init(void);
+void gic_enable(unsigned int irqn);
+void gic_write_eoi(u32 irqn);
+u32 gic_read_ack(void);
+
+#endif /* !__ASSEMBLY__ */
+#endif
diff --git a/inmates/lib/arm64/include/inmates/inmate.h b/inmates/lib/arm64/include/inmates/inmate.h
new file mode 100644
index 0000000..9ad962c
--- /dev/null
+++ b/inmates/lib/arm64/include/inmates/inmate.h
@@ -0,0 +1,54 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+#ifndef _JAILHOUSE_INMATES_INMATE_H
+#define _JAILHOUSE_INMATES_INMATE_H
+
+#ifndef __ASSEMBLY__
+typedef signed char s8;
+typedef unsigned char u8;
+
+typedef signed short s16;
+typedef unsigned short u16;
+
+typedef signed int s32;
+typedef unsigned int u32;
+
+typedef signed long long s64;
+typedef unsigned long long u64;
+
+static inline void *memset(void *addr, int val, unsigned long size)
+{
+ char *s = addr;
+ unsigned int i;
+ for (i = 0; i < size; i++)
+ *s++ = val;
+
+ return addr;
+}
+
+extern unsigned long printk_uart_base;
+void printk(const char *fmt, ...);
+void inmate_main(void);
+
+void __attribute__((used)) vector_irq(void);
+
+typedef void (*irq_handler_t)(unsigned int);
+void gic_setup(irq_handler_t handler);
+void gic_enable_irq(unsigned int irq);
+
+unsigned long timer_get_frequency(void);
+u64 timer_get_ticks(void);
+u64 timer_ticks_to_ns(u64 ticks);
+void timer_start(u64 timeout);
+
+#endif /* !__ASSEMBLY__ */
+#endif
diff --git a/inmates/lib/arm64/include/mach-amd-seattle/mach/gic_v2.h b/inmates/lib/arm64/include/mach-amd-seattle/mach/gic_v2.h
new file mode 100644
index 0000000..b357a21
--- /dev/null
+++ b/inmates/lib/arm64/include/mach-amd-seattle/mach/gic_v2.h
@@ -0,0 +1,14 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#define GICD_BASE ((void *)0xe1110000)
+#define GICC_BASE ((void *)0xe112f000)
diff --git a/inmates/lib/arm64/include/mach-amd-seattle/mach/timer.h b/inmates/lib/arm64/include/mach-amd-seattle/mach/timer.h
new file mode 100644
index 0000000..696b5cb
--- /dev/null
+++ b/inmates/lib/arm64/include/mach-amd-seattle/mach/timer.h
@@ -0,0 +1,13 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#define TIMER_IRQ 27
diff --git a/inmates/lib/arm64/include/mach-amd-seattle/mach/uart.h b/inmates/lib/arm64/include/mach-amd-seattle/mach/uart.h
new file mode 100644
index 0000000..512b6cb
--- /dev/null
+++ b/inmates/lib/arm64/include/mach-amd-seattle/mach/uart.h
@@ -0,0 +1,13 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#define UART_BASE ((void *)0xe1010000)
diff --git a/inmates/lib/arm64/include/mach-foundation-v8/mach/gic_v2.h b/inmates/lib/arm64/include/mach-foundation-v8/mach/gic_v2.h
new file mode 100644
index 0000000..bd3ec88
--- /dev/null
+++ b/inmates/lib/arm64/include/mach-foundation-v8/mach/gic_v2.h
@@ -0,0 +1,14 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#define GICD_BASE ((void *)0x2c001000)
+#define GICC_BASE ((void *)0x2c002000)
diff --git a/inmates/lib/arm64/include/mach-foundation-v8/mach/timer.h b/inmates/lib/arm64/include/mach-foundation-v8/mach/timer.h
new file mode 100644
index 0000000..696b5cb
--- /dev/null
+++ b/inmates/lib/arm64/include/mach-foundation-v8/mach/timer.h
@@ -0,0 +1,13 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#define TIMER_IRQ 27
diff --git a/inmates/lib/arm64/include/mach-foundation-v8/mach/uart.h b/inmates/lib/arm64/include/mach-foundation-v8/mach/uart.h
new file mode 100644
index 0000000..5ac3f87
--- /dev/null
+++ b/inmates/lib/arm64/include/mach-foundation-v8/mach/uart.h
@@ -0,0 +1,13 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#define UART_BASE ((void *)0x1c090000)
diff --git a/inmates/lib/arm64/inmate.lds b/inmates/lib/arm64/inmate.lds
new file mode 100644
index 0000000..e484bdc
--- /dev/null
+++ b/inmates/lib/arm64/inmate.lds
@@ -0,0 +1,40 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+SECTIONS {
+ . = 0;
+ .boot : { *(.boot) }
+
+ . = ALIGN(4096);
+ . = . + 0x1000;
+ stack_top = .;
+ bss_start = .;
+ .bss : {
+ *(.bss)
+ *(COMMON)
+ }
+
+ . = ALIGN(4);
+ .text : {
+ *(.text)
+ }
+
+ .rodata : {
+ *(.rodata)
+ }
+
+ .data : {
+ *(.data)
+ }
+}
+
+ENTRY(__reset_entry)
diff --git a/inmates/lib/arm64/printk.c b/inmates/lib/arm64/printk.c
new file mode 100644
index 0000000..0bc6e0a
--- /dev/null
+++ b/inmates/lib/arm64/printk.c
@@ -0,0 +1,55 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-phili...@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <jailhouse/types.h>
+#include <asm/debug.h>
+#include <stdarg.h>
+#include <inmates/inmate.h>
+
+static struct uart_chip chip;
+
+static void console_write(const char *msg)
+{
+ char c = 0;
+
+ while (1) {
+ if (c == '\n')
+ c = '\r';
+ else
+ c = *msg++;
+ if (!c)
+ break;
+
+ chip.wait(&chip);
+ chip.write(&chip, c);
+ chip.busy(&chip);
+ }
+}
+
+#include "../../../hypervisor/printk-core.c"
+
+void printk(const char *fmt, ...)
+{
+ static bool inited = false;
+ va_list ap;
+
+ if (!inited) {
+ uart_chip_init(&chip);
+ inited = true;
+ }
+
+ va_start(ap, fmt);
+
+ __vprintk(fmt, ap);
+
+ va_end(ap);
+}
diff --git a/inmates/lib/arm64/timer.c b/inmates/lib/arm64/timer.c
new file mode 100644
index 0000000..79f50ef
--- /dev/null
+++ b/inmates/lib/arm64/timer.c
@@ -0,0 +1,55 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ * Copyright (c) Siemens AG, 2015
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-phili...@arm.com>
+ * Jan Kiszka <jan.k...@siemens.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <asm/sysregs.h>
+#include <inmates/inmate.h>
+
+unsigned long timer_get_frequency(void)
+{
+ unsigned long freq;
+
+ arm_read_sysreg(CNTFRQ_EL0, freq);
+ return freq;
+}
+
+u64 timer_get_ticks(void)
+{
+ u64 pct64;
+
+ arm_read_sysreg(CNTPCT_EL0, pct64);
+ return pct64;
+}
+
+static unsigned long emul_division(u64 val, u64 div)
+{
+ unsigned long cnt = 0;
+
+ while (val > div) {
+ val -= div;
+ cnt++;
+ }
+ return cnt;
+}
+
+u64 timer_ticks_to_ns(u64 ticks)
+{
+ return emul_division(ticks * 1000,
+ timer_get_frequency() / 1000 / 1000);
+}
+
+void timer_start(u64 timeout)
+{
+ arm_write_sysreg(CNTV_TVAL_EL0, timeout);
+ arm_write_sysreg(CNTV_CTL_EL0, 1);
+}
diff --git a/inmates/lib/arm64/uart-pl011.c b/inmates/lib/arm64/uart-pl011.c
new file mode 100644
index 0000000..8f07d78
--- /dev/null
+++ b/inmates/lib/arm64/uart-pl011.c
@@ -0,0 +1,23 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-phili...@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+#include <asm/uart_pl011.h>
+#include <mach/uart.h>
+
+void uart_chip_init(struct uart_chip *chip)
+{
+ chip->virt_base = UART_BASE;
+ chip->fifo_enabled = true;
+ chip->wait = uart_wait;
+ chip->write = uart_write;
+ chip->busy = uart_busy;
+ uart_init(chip);
+}
--
2.4.3.368.g7974889


antonios...@huawei.com

unread,
Dec 18, 2015, 4:33:47 PM12/18/15
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

This patch implements most of the functionality needed to create
and control new cells. The functionality is very similar to that
of AArch32, and there is potential to unify some of the code
between the two architectures in the future.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm64/control.c | 161 ++++++++++++++++++++++++++-----
hypervisor/arch/arm64/include/asm/cell.h | 1 +
2 files changed, 136 insertions(+), 26 deletions(-)

diff --git a/hypervisor/arch/arm64/control.c b/hypervisor/arch/arm64/control.c
index 2f71eaa..9d41c1e 100644
--- a/hypervisor/arch/arm64/control.c
+++ b/hypervisor/arch/arm64/control.c
@@ -72,50 +72,114 @@ static void arch_reset_el1(struct registers *regs)
void arch_reset_self(struct per_cpu *cpu_data)
{
int err = 0;
- unsigned long reset_address;
+ unsigned long reset_address = 0;
struct cell *cell = cpu_data->cell;
struct registers *regs = guest_regs(cpu_data);
+ bool is_shutdown = cpu_data->shutdown;

- if (cell != &root_cell) {
- trace_error(-EINVAL);
- panic_stop();
- }
+ if (!is_shutdown)
+ err = arch_mmu_cpu_cell_init(cpu_data);
+ if (err)
+ printk("MMU setup failed\n");

/*
* Note: D-cache cleaning and I-cache invalidation is done on driver
* level after image is loaded.
*/

- err = irqchip_cpu_reset(cpu_data);
- if (err)
- printk("IRQ setup failed\n");
+ /*
+ * We come from the IRQ handler, but we won't return there, so the IPI
+ * is deactivated here.
+ */
+ irqchip_eoi_irq(SGI_CPU_OFF, true);
+
+ if (is_shutdown) {
+ if (cell != &root_cell) {
+ irqchip_cpu_shutdown(cpu_data);
+
+ smc(PSCI_CPU_OFF, 0, 0, 0);
+ panic_printk("FATAL: PSCI_CPU_OFF failed\n");
+ panic_stop();
+ }
+ /* arch_shutdown_self resets the GIC on all remaining CPUs. */
+ } else {
+ err = irqchip_cpu_reset(cpu_data);
+ if (err)
+ printk("IRQ setup failed\n");
+ }

/* Wait for the driver to call cpu_up */
- reset_address = psci_emulate_spin(cpu_data);
+ if (cpu_data->virt_id != 0)
+ reset_address = psci_emulate_spin(cpu_data);

/* Restore an empty context */
arch_reset_el1(regs);

arm_write_sysreg(ELR_EL2, reset_address);

+ if (is_shutdown)
+ /* Won't return here. */
+ arch_shutdown_self(cpu_data);
+
vmreturn(regs);
}

int arch_cell_create(struct cell *cell)
{
- return trace_error(-EINVAL);
+ int err;
+ unsigned int cpu;
+ unsigned int virt_id = 0;
+
+ err = arch_mmu_cell_init(cell);
+ if (err)
+ return err;
+
+ /*
+ * Generate a virtual CPU id according to the position of each CPU in
+ * the cell set
+ */
+ for_each_cpu(cpu, cell->cpu_set) {
+ per_cpu(cpu)->virt_id = virt_id;
+ virt_id++;
+ }
+ cell->arch.last_virt_id = virt_id - 1;
+
+ err = irqchip_cell_init(cell);
+ if (err) {
+ arch_mmu_cell_destroy(cell);
+ return err;
+ }
+ irqchip_root_cell_shrink(cell);
+
+ return 0;
}

void arch_flush_cell_vcpu_caches(struct cell *cell)
{
- /* AARCH64_TODO */
- trace_error(-EINVAL);
+ unsigned int cpu;
+
+ for_each_cpu(cpu, cell->cpu_set)
+ if (cpu == this_cpu_id())
+ arch_cpu_tlb_flush(per_cpu(cpu));
+ else
+ per_cpu(cpu)->flush_vcpu_caches = true;
}

void arch_cell_destroy(struct cell *cell)
{
- trace_error(-EINVAL);
- while (1);
+ unsigned int cpu;
+ struct per_cpu *percpu;
+
+ for_each_cpu(cpu, cell->cpu_set) {
+ percpu = per_cpu(cpu);
+ /* Re-assign the physical IDs for the root cell */
+ percpu->virt_id = percpu->cpu_id;
+ arch_reset_cpu(cpu);
+ }
+
+ irqchip_cell_exit(cell);
+
+ arch_mmu_cell_destroy(cell);
}

void arch_config_commit(struct cell *cell_added_removed)
@@ -133,38 +197,72 @@ void arch_shutdown(void)

void arch_suspend_cpu(unsigned int cpu_id)
{
- trace_error(-EINVAL);
- while (1);
+ struct sgi sgi;
+
+ if (psci_cpu_stopped(cpu_id))
+ return;
+
+ sgi.routing_mode = 0;
+ sgi.aff1 = 0;
+ sgi.aff2 = 0;
+ sgi.aff3 = 0;
+ sgi.targets = 1 << cpu_id;
+ sgi.id = SGI_CPU_OFF;
+
+ irqchip_send_sgi(&sgi);
+
+ psci_wait_cpu_stopped(cpu_id);
}

void arch_resume_cpu(unsigned int cpu_id)
{
- trace_error(-EINVAL);
- while (1);
+ /*
+ * Simply get out of the spin loop by returning to handle_sgi
+ * If the CPU is being reset, it already has left the PSCI idle loop.
+ */
+ if (psci_cpu_stopped(cpu_id))
+ psci_resume(cpu_id);
}

void arch_reset_cpu(unsigned int cpu_id)
{
- trace_error(-EINVAL);
- while (1);
+ unsigned long cpu_data = (unsigned long)per_cpu(cpu_id);
+
+ if (psci_cpu_on(cpu_id, (unsigned long)arch_reset_self, cpu_data))
+ printk("ERROR: unable to reset CPU%d (was running)\n", cpu_id);
}

void arch_park_cpu(unsigned int cpu_id)
{
- trace_error(-EINVAL);
- while (1);
+ struct per_cpu *cpu_data = per_cpu(cpu_id);
+
+ /*
+ * Reset always follows park_cpu, so we just need to make sure that the
+ * CPU is suspended
+ */
+ if (psci_wait_cpu_stopped(cpu_id) != 0)
+ printk("ERROR: CPU%d is supposed to be stopped\n", cpu_id);
+ else
+ cpu_data->cell->arch.needs_flush = true;
}

void arch_shutdown_cpu(unsigned int cpu_id)
{
- trace_error(-EINVAL);
- while (1);
+ struct per_cpu *cpu_data = per_cpu(cpu_id);
+
+ cpu_data->virt_id = cpu_id;
+ cpu_data->shutdown = true;
+
+ if (psci_wait_cpu_stopped(cpu_id))
+ printk("FATAL: unable to stop CPU%d\n", cpu_id);
+
+ arch_reset_cpu(cpu_id);
}

void __attribute__((noreturn)) arch_panic_stop(void)
{
- trace_error(-EINVAL);
- while (1);
+ psci_cpu_off(this_cpu_data());
+ __builtin_unreachable();
}

void arch_panic_park(void)
@@ -173,6 +271,14 @@ void arch_panic_park(void)
while (1);
}

+static void arch_suspend_self(struct per_cpu *cpu_data)
+{
+ psci_suspend(cpu_data);
+
+ if (cpu_data->flush_vcpu_caches)
+ arch_cpu_tlb_flush(cpu_data);
+}
+
void arch_handle_sgi(struct per_cpu *cpu_data, u32 irqn)
{
cpu_data->stats[JAILHOUSE_CPU_STAT_VMEXITS_MANAGEMENT]++;
@@ -181,6 +287,9 @@ void arch_handle_sgi(struct per_cpu *cpu_data, u32 irqn)
case SGI_INJECT:
irqchip_inject_pending(cpu_data);
break;
+ case SGI_CPU_OFF:
+ arch_suspend_self(cpu_data);
+ break;
default:
printk("WARN: unknown SGI received %d\n", irqn);
}
diff --git a/hypervisor/arch/arm64/include/asm/cell.h b/hypervisor/arch/arm64/include/asm/cell.h
index 9a9689e..dbdc7b8 100644
--- a/hypervisor/arch/arm64/include/asm/cell.h
+++ b/hypervisor/arch/arm64/include/asm/cell.h
@@ -28,6 +28,7 @@ struct arch_cell {
bool needs_flush;

u64 spis;
+ unsigned int last_virt_id;
};

extern struct cell root_cell;
--
2.4.3.368.g7974889


antonios...@huawei.com

unread,
Dec 18, 2015, 4:33:49 PM12/18/15
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

The AMD Seattle board features SPI IDs larger than 64, which we do
not support properly yet. This workaround allows us to demonstrate
working cells on this target until we have a proper fix.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
hypervisor/arch/arm/include/asm/irqchip.h | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/hypervisor/arch/arm/include/asm/irqchip.h b/hypervisor/arch/arm/include/asm/irqchip.h
index 581c10f..41b715c 100644
--- a/hypervisor/arch/arm/include/asm/irqchip.h
+++ b/hypervisor/arch/arm/include/asm/irqchip.h
@@ -107,8 +107,21 @@ static inline bool spi_in_cell(struct cell *cell, unsigned int spi)
/* FIXME: Change the configuration to a bitmask range */
u32 spi_mask;

- if (spi >= 64)
+ if (spi >= 64) {
+#ifdef CONFIG_MACH_AMD_SEATTLE
+ /* uart irq workaround */
+ if (spi == 328)
+ return (cell != &root_cell);
+
+ /* xgmac1 irq workaround */
+ if ((spi == 322) || (spi == 324) ||
+ ((spi >= 341) && (spi <= 345))) {
+
+ return (cell != &root_cell);
+ }
+#endif
return (cell == &root_cell);
+ }
else if (spi >= 32)
spi_mask = cell->arch.spis >> 32;
else
--
2.4.3.368.g7974889


antonios...@huawei.com

unread,
Dec 18, 2015, 4:33:50 PM12/18/15
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

Add a cell configuration file for the foundation-v8 model, to
be used with the PL011 UART inmate demo.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
configs/foundation-v8-uart-demo.c | 55 +++++++++++++++++++++++++++++++++++++++
1 file changed, 55 insertions(+)
create mode 100644 configs/foundation-v8-uart-demo.c

diff --git a/configs/foundation-v8-uart-demo.c b/configs/foundation-v8-uart-demo.c
new file mode 100644
index 0000000..7fde1aa
--- /dev/null
+++ b/configs/foundation-v8-uart-demo.c
@@ -0,0 +1,55 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <linux/types.h>
+#include <jailhouse/cell-config.h>
+
+#define ARRAY_SIZE(a) sizeof(a) / sizeof(a[0])
+
+struct {
+ struct jailhouse_cell_desc cell;
+ __u64 cpus[1];
+ struct jailhouse_memory mem_regions[2];
+} __attribute__((packed)) config = {
+ .cell = {
+ .signature = JAILHOUSE_CELL_DESC_SIGNATURE,
+ .name = "pl011-demo",
+ .flags = JAILHOUSE_CELL_PASSIVE_COMMREG,
+
+ .cpu_set_size = sizeof(config.cpus),
+ .num_memory_regions = ARRAY_SIZE(config.mem_regions),
+ .num_irqchips = 0,
+ .pio_bitmap_size = 0,
+ .num_pci_devices = 0,
+ },
+
+ .cpus = {
+ 0x4,
+ },
+
+ .mem_regions = {
+ /* UART 2 */ {
+ .phys_start = 0x1c0b0000,
+ .virt_start = 0x1c090000,
+ .size = 0x10000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO,
+ },
+ /* RAM */ {
+ .phys_start = 0xfbff0000,
+ .virt_start = 0,
+ .size = 0x00010000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_EXECUTE | JAILHOUSE_MEM_LOADABLE,
+ },
+ }
+};
--
2.4.3.368.g7974889


antonios...@huawei.com

unread,
Dec 18, 2015, 4:33:50 PM12/18/15
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

Add a cell configuration file for the GIC inmate demo on the
foundation-v8 model.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
configs/foundation-v8-gic-demo.c | 55 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 55 insertions(+)
create mode 100644 configs/foundation-v8-gic-demo.c

diff --git a/configs/foundation-v8-gic-demo.c b/configs/foundation-v8-gic-demo.c
new file mode 100644
index 0000000..cac296a
--- /dev/null
+++ b/configs/foundation-v8-gic-demo.c
@@ -0,0 +1,55 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-phili...@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <linux/types.h>
+#include <jailhouse/cell-config.h>
+
+#define ARRAY_SIZE(a) sizeof(a) / sizeof(a[0])
+
+struct {
+ struct jailhouse_cell_desc cell;
+ __u64 cpus[1];
+ struct jailhouse_memory mem_regions[2];
+} __attribute__((packed)) config = {
+ .cell = {
+ .signature = JAILHOUSE_CELL_DESC_SIGNATURE,
+ .name = "gic-demo",
+ .flags = JAILHOUSE_CELL_PASSIVE_COMMREG,
+
+ .cpu_set_size = sizeof(config.cpus),
+ .num_memory_regions = ARRAY_SIZE(config.mem_regions),
+ .num_irqchips = 0,
+ .pio_bitmap_size = 0,
+ .num_pci_devices = 0,
+ },
+
+ .cpus = {
+ 0x2,
+ },
+
+ .mem_regions = {
+ /* UART 1 */ {
+ .phys_start = 0x1c0a0000,
+ .virt_start = 0x1c090000,
+ .size = 0x10000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO,
+ },
+ /* RAM */ {
+ .phys_start = 0xfbfe0000,
+ .virt_start = 0,
+ .size = 0x00010000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_EXECUTE | JAILHOUSE_MEM_LOADABLE,
+ },
+ },
+};
--
2.4.3.368.g7974889


antonios...@huawei.com

unread,
Dec 18, 2015, 4:33:50 PM12/18/15
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

Add a cell configuration file for the AMD Seattle board, to be
used with the GIC demo inmate.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
configs/amd-seattle-gic-demo.c | 55 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 55 insertions(+)
create mode 100644 configs/amd-seattle-gic-demo.c

diff --git a/configs/amd-seattle-gic-demo.c b/configs/amd-seattle-gic-demo.c
new file mode 100644
index 0000000..c418e94
--- /dev/null
+++ b/configs/amd-seattle-gic-demo.c
@@ -0,0 +1,55 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ 0x10,
+ },
+
+ .mem_regions = {
+ /* UART */ {
+ .phys_start = 0xe1010000,
+ .virt_start = 0xe1010000,
+ .size = 0x10000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO | JAILHOUSE_MEM_ROOTSHARED,
+ },
+ /* RAM */ {
+ .phys_start = 0x82fbfe0000,
+ .virt_start = 0,
+ .size = 0x00010000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_EXECUTE | JAILHOUSE_MEM_LOADABLE,
+ },
+ }
+};
--
2.4.3.368.g7974889


antonios...@huawei.com

unread,
Dec 18, 2015, 4:33:51 PM12/18/15
to jailho...@googlegroups.com, Dmitry Voytik, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, Antonios Motakis
From: Dmitry Voytik <dmitry...@huawei.com>

Add the cell configuration file, a helper script and a device
tree for the foundation-v8 model. These can be used to load a Linux
inmate into a cell on that target.

Signed-off-by: Dmitry Voytik <dmitry...@huawei.com>
Signed-off-by: Antonios Motakis <antonios...@huawei.com>
[antonios...@huawei.com: split off as a separate patch,
some minor renaming for consistency]
---
ci/kernel-inmate-foundation-v8.dts | 101 +++++++++++++++++++++++++++++
configs/foundation-v8-linux-demo.c | 72 ++++++++++++++++++++
tools/jailhouse-loadlinux-foundation-v8.sh | 23 +++++++
3 files changed, 196 insertions(+)
create mode 100644 ci/kernel-inmate-foundation-v8.dts
create mode 100644 configs/foundation-v8-linux-demo.c
create mode 100755 tools/jailhouse-loadlinux-foundation-v8.sh

diff --git a/ci/kernel-inmate-foundation-v8.dts b/ci/kernel-inmate-foundation-v8.dts
new file mode 100644
index 0000000..e2c2293
--- /dev/null
+++ b/ci/kernel-inmate-foundation-v8.dts
@@ -0,0 +1,101 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Dmitry Voytik <dmitry...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ *
+ * ARMv8 Foundation model DTS
+ *
+ */
+
+/dts-v1/;
+
+/* 64 KiB */
+/memreserve/ 0x0 0x00010000;
+
+/ {
+ model = "Jailhouse-Foundation-v8A";
+ compatible = "arm,foundation-aarch64", "arm,vexpress";
+ interrupt-parent = <&gic>;
+ #address-cells = <2>;
+ #size-cells = <2>;
+
+ chosen {
+ bootargs = "earlyprintk console=ttyAMA0";
+ };
+
+ aliases {
+ serial0 = &serial0;
+ };
+
+ cpus {
+ #address-cells = <2>;
+ #size-cells = <0>;
+
+ cpu@0 {
+ device_type = "cpu";
+ compatible = "arm,armv8";
+ reg = <0x0 0x2>;
+ enable-method = "psci";
+ next-level-cache = <&L2_0>;
+ };
+
+ cpu@1 {
+ device_type = "cpu";
+ compatible = "arm,armv8";
+ reg = <0x0 0x3>;
+ enable-method = "psci";
+ next-level-cache = <&L2_0>;
+ };
+
+ L2_0: l2-cache0 {
+ compatible = "cache";
+ };
+ };
+
+ memory@0 {
+ device_type = "memory";
+ reg = <0x0 0x0 0x0 0x10000000>; /* 256 MiB starts at 0x0 */
+ };
+
+ gic: interrupt-controller@2c001000 {
+ compatible = "arm,cortex-a15-gic", "arm,cortex-a9-gic";
+ #interrupt-cells = <3>;
+ #address-cells = <0>;
+ interrupt-controller;
+ reg = <0x0 0x2c001000 0 0x1000>,
+ <0x0 0x2c002000 0 0x1000>;
+ };
+
+ timer {
+ compatible = "arm,armv8-timer";
+ interrupts = <1 13 0xf08>,
+ <1 14 0xf08>;
+ clock-frequency = <100000000>;
+ };
+
+ v2m_clk24mhz: clk24mhz {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <24000000>;
+ clock-output-names = "v2m:clk24mhz";
+ };
+
+ serial0: uart@1c090000 {
+ compatible = "arm,pl011", "arm,primecell";
+ reg = <0x0 0x1c090000 0x0 0x1000>;
+ interrupts = <0 8 1>;
+ clocks = <&v2m_clk24mhz>, <&v2m_clk24mhz>;
+ clock-names = "uartclk", "apb_pclk";
+ };
+
+ psci {
+ compatible = "arm,psci-0.2";
+ method = "smc";
+ };
+};
diff --git a/configs/foundation-v8-linux-demo.c b/configs/foundation-v8-linux-demo.c
new file mode 100644
index 0000000..33c44ab
--- /dev/null
+++ b/configs/foundation-v8-linux-demo.c
@@ -0,0 +1,72 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Dmitry Voytik <dmitry...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <linux/types.h>
+#include <jailhouse/cell-config.h>
+
+#define ARRAY_SIZE(a) sizeof(a) / sizeof(a[0])
+
+struct {
+ struct jailhouse_cell_desc cell;
+ __u64 cpus[1];
+ struct jailhouse_memory mem_regions[2];
+ struct jailhouse_irqchip irqchips[1];
+} __attribute__((packed)) config = {
+ .cell = {
+ .signature = JAILHOUSE_CELL_DESC_SIGNATURE,
+ .name = "linux-inmate-demo",
+ .flags = JAILHOUSE_CELL_PASSIVE_COMMREG,
+
+ .cpu_set_size = sizeof(config.cpus),
+ .num_memory_regions = ARRAY_SIZE(config.mem_regions),
+ .num_irqchips = 1,
+ .pio_bitmap_size = 0,
+ .num_pci_devices = 0,
+ },
+
+ .cpus = {
+ 0xc, /* 2nd and 3rd CPUs */
+ },
+
+ /* Physical memory map:
+ * 0x0_0000_0000 - 0x0_7fff_ffff (2048 MiB) Devices
+ * 0x0_8000_0000 - 0x0_bbdf_ffff ( 958 MiB) Ram, root cell Kernel
+ * 0x0_bbe0_0000 - 0x0_fbff_ffff (1026 MiB) Ram, nothing
+ * 0x0_fc00_0000 - 0x1_0000_0000 ( 64 MiB) Ram, hypervisor
+ * ... ( 30 GiB)
+ * 0x8_8000_0000 - 0x9_0000_0000 (2048 MiB) Ram, nonroot cells
+ */
+ .mem_regions = {
+ /* uart3 */ {
+ .phys_start = 0x1c0c0000,
+ .virt_start = 0x1c090000, /* the address the inmate lib expects */
+ .size = 0x1000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO,
+ },
+ /* RAM load */ {
+ .phys_start = 0x880000000,
+ .virt_start = 0x0,
+ .size = 0x10000000, /* 256 MiB */
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_EXECUTE | JAILHOUSE_MEM_LOADABLE,
+ },
+ },
+
+ .irqchips = {
+ /* GIC */ {
+ /* address should be the same as in root cell */
+ .address = 0x2c001000,
+ .pin_bitmap = (1 << 8), /* uart3 */
+ },
+ }
+};
diff --git a/tools/jailhouse-loadlinux-foundation-v8.sh b/tools/jailhouse-loadlinux-foundation-v8.sh
new file mode 100755
index 0000000..aca18cb
--- /dev/null
+++ b/tools/jailhouse-loadlinux-foundation-v8.sh
@@ -0,0 +1,23 @@
+#!/bin/sh
+
+# Note: this hacky script is a temporary solution and the functionality will
+# be moved to ./tools/jailhouse-cell-linux
+#
+# Note: put linux-loader.bin, kernel-inmate-foundation-v8.dtb, nonroot_Image
+# in the /root directory.
+
+DTB_ADDR=" 0x00 0x00 0xe0 0x0f 0x00 0x00 0x00 0x00"
+KERNEL_ADDR="0x00 0x00 0x28 0x00 0x00 0x00 0x00 0x00"
+
+R=/root
+echo -ne "$(printf '\\x%x' $DTB_ADDR)" > $R/linux_cfg
+echo -ne "$(printf '\\x%x' $KERNEL_ADDR)" >> $R/linux_cfg
+hexdump -C $R/linux_cfg
+
+jailhouse cell load --name linux-inmate-demo \
+ $R/linux-loader.bin -a 0x0 \
+ $R/linux_cfg -a 0x4000 \
+ $R/kernel-inmate-foundation-v8.dtb -a 0xfe00000 \
+ $R/nonroot_Image -a 0x280000 \
+
+jailhouse cell start --name linux-inmate-demo
--
2.4.3.368.g7974889


antonios...@huawei.com

unread,
Dec 18, 2015, 4:33:51 PM12/18/15
to jailho...@googlegroups.com, Dmitry Voytik, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com, Antonios Motakis
From: Dmitry Voytik <dmitry...@huawei.com>

This is a preliminary patch. We still need to get rid of the ugly
shell script and move the loader helper functionality into the Python
script used for the x86 arch.

Signed-off-by: Dmitry Voytik <dmitry...@huawei.com>
Signed-off-by: Antonios Motakis <antonios...@huawei.com>
[antonios...@huawei.com: split foundation-v8 configuration to
a separate patch, and small fixes in the linux loader output]
---
inmates/tools/arm64/Makefile | 19 +++++++++++
inmates/tools/arm64/linux-loader.c | 65 ++++++++++++++++++++++++++++++++++++++
2 files changed, 84 insertions(+)
create mode 100644 inmates/tools/arm64/linux-loader.c

diff --git a/inmates/tools/arm64/Makefile b/inmates/tools/arm64/Makefile
index e69de29..4a72277 100644
--- a/inmates/tools/arm64/Makefile
+++ b/inmates/tools/arm64/Makefile
@@ -0,0 +1,19 @@
+#
+# Jailhouse, a Linux-based partitioning hypervisor
+#
+# Copyright (c) Siemens AG, 2013-2015
+#
+# Authors:
+# Jan Kiszka <jan.k...@siemens.com>
+#
+# This work is licensed under the terms of the GNU GPL, version 2. See
+# the COPYING file in the top-level directory.
+#
+
+include $(INMATES_LIB)/Makefile.lib
+
+INMATES := linux-loader.bin
+
+linux-loader-y := linux-loader.o
+
+$(eval $(call DECLARE_TARGETS,$(INMATES)))
diff --git a/inmates/tools/arm64/linux-loader.c b/inmates/tools/arm64/linux-loader.c
new file mode 100644
index 0000000..2c6fc73
--- /dev/null
+++ b/inmates/tools/arm64/linux-loader.c
@@ -0,0 +1,65 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Dmitry Voytik <dmitry...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <inmates/inmate.h>
+
+/* Example memory map:
+ * 0x00000000 - 0x00003fff (16K) this binary
+ * 0x00004000 - 0x0000400f (16) linux_cfg
+ * 0x00280000 Image
+ * 0x0fe00000 dtb
+ */
+
+#define LINUX_CFG_PADDR 0x4000UL
+
+struct arm64_linux_header {
+ u32 code0; /* Executable code */
+ u32 code1; /* Executable code */
+ u64 text_offset; /* Image load offset, little endian */
+ u64 image_size; /* Effective Image size, little endian */
+ u64 flags; /* kernel flags, little endian */
+ u64 res2; /* = 0, reserved */
+ u64 res3; /* = 0, reserved */
+ u64 res4; /* = 0, reserved */
+ u32 magic; /* 0x644d5241 Magic number, little endian,
+ "ARM\x64" */
+ u32 res5; /* reserved (used for PE COFF offset) */
+};
+
+struct linux_cfg {
+ unsigned long dtb_addr;
+ struct arm64_linux_header *kernel_header;
+};
+
+void inmate_main(void)
+{
+ struct linux_cfg *lin_cfg;
+ void (*entry)(unsigned long);
+ u64 kaddr;
+
+ lin_cfg = (struct linux_cfg*)LINUX_CFG_PADDR;
+
+ kaddr = (u64)lin_cfg->kernel_header;
+
+ entry = (void*)(unsigned long)kaddr;
+
+ printk("\nJailhouse ARM64 Linux bootloader\n");
+ printk("DTB: 0x%016lx\n", lin_cfg->dtb_addr);
+ printk("Image: 0x%016lx\n", lin_cfg->kernel_header);
+ printk("Image size: %lu Bytes\n", lin_cfg->kernel_header->image_size);
+ printk("entry: 0x%016lx\n", entry);
+ if (lin_cfg->kernel_header->magic != 0x644d5241)
+ printk("WARNING: wrong Linux Image header magic: 0x%08x\n",
+ lin_cfg->kernel_header->magic);
+
+ entry(lin_cfg->dtb_addr);
+}
--
2.4.3.368.g7974889


antonios...@huawei.com

unread,
Dec 18, 2015, 4:33:51 PM12/18/15
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

Add a cell configuration file for the AMD Seattle development
board, to be used with the PL011 UART demo inmate.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
---
configs/amd-seattle-uart-demo.c | 55 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 55 insertions(+)
create mode 100644 configs/amd-seattle-uart-demo.c

diff --git a/configs/amd-seattle-uart-demo.c b/configs/amd-seattle-uart-demo.c
new file mode 100644
index 0000000..6ef7644
--- /dev/null
+++ b/configs/amd-seattle-uart-demo.c
@@ -0,0 +1,55 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <linux/types.h>
+#include <jailhouse/cell-config.h>
+
+#define ARRAY_SIZE(a) sizeof(a) / sizeof(a[0])
+
+struct {
+ struct jailhouse_cell_desc cell;
+ __u64 cpus[1];
+ struct jailhouse_memory mem_regions[2];
+} __attribute__((packed)) config = {
+ .cell = {
+ .signature = JAILHOUSE_CELL_DESC_SIGNATURE,
+ .name = "pl011-demo",
+ .flags = JAILHOUSE_CELL_PASSIVE_COMMREG,
+
+ .cpu_set_size = sizeof(config.cpus),
+ .num_memory_regions = ARRAY_SIZE(config.mem_regions),
+ .num_irqchips = 0,
+ .pio_bitmap_size = 0,
+ .num_pci_devices = 0,
+ },
+
+ .cpus = {
+ 0x4,
+ },
+
+ .mem_regions = {
+ /* UART 2 */ {
+ .phys_start = 0xe1010000,
+ .virt_start = 0xe1010000,
+ .size = 0x10000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO | JAILHOUSE_MEM_ROOTSHARED,
+ },
+ /* RAM */ {
+ .phys_start = 0x82fbff0000,
+ .virt_start = 0,
+ .size = 0x00010000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_EXECUTE | JAILHOUSE_MEM_LOADABLE,
+ },
+ }
+};
--
2.4.3.368.g7974889


antonios...@huawei.com

unread,
Dec 18, 2015, 4:33:54 PM12/18/15
to jailho...@googlegroups.com, Antonios Motakis, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Antonios Motakis <antonios...@huawei.com>

Add the cell configuration file, a helper script and a device
tree for the AMD Seattle development board. These can be used to
load a Linux inmate into a cell on that target.

Signed-off-by: Antonios Motakis <antonios...@huawei.com>
Signed-off-by: Dmitry Voytik <dmitry...@huawei.com>
---
ci/kernel-inmate-amd-seattle.dts | 150 +++++++++++++++++++++++++++++++
configs/amd-seattle-linux-demo.c | 91 +++++++++++++++++++
tools/jailhouse-loadlinux-amd-seattle.sh | 23 +++++
3 files changed, 264 insertions(+)
create mode 100644 ci/kernel-inmate-amd-seattle.dts
create mode 100644 configs/amd-seattle-linux-demo.c
create mode 100755 tools/jailhouse-loadlinux-amd-seattle.sh

diff --git a/ci/kernel-inmate-amd-seattle.dts b/ci/kernel-inmate-amd-seattle.dts
new file mode 100644
index 0000000..7d29f35
--- /dev/null
+++ b/ci/kernel-inmate-amd-seattle.dts
@@ -0,0 +1,150 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ *
+ */
+
+/dts-v1/;
+
+/* 64 KiB */
+/memreserve/ 0x0 0x00010000;
+
+/ {
+ model = "Jailhouse cell on AMD Seattle";
+ compatible = "amd,seattle-overdrive", "amd,seattle";
+ interrupt-parent = <&gic>;
+ #address-cells = <2>;
+ #size-cells = <2>;
+
+ chosen {
+ bootargs = "earlyprintk console=ttyAMA0";
+ };
+
+ cpus {
+ #address-cells = <2>;
+ #size-cells = <0>;
+
+ cpu@0 {
+ device_type = "cpu";
+ compatible = "arm,armv8";
+ reg = <0x0 0x300>;
+ enable-method = "psci";
+ next-level-cache = <&L2_0>;
+ };
+
+ cpu@1 {
+ device_type = "cpu";
+ compatible = "arm,armv8";
+ reg = <0x0 0x301>;
+ enable-method = "psci";
+ next-level-cache = <&L2_0>;
+ };
+
+ L2_0: l2-cache0 {
+ compatible = "cache";
+ };
+ };
+
+ aliases {
+ serial0 = &serial0;
+ };
+
+ memory@0 {
+ device_type = "memory";
+ reg = <0x82 0xe0000000 0x0 0x10000000>; /* 256 MiB at 0x82_e0000000 */
+ };
+
+ gic: interrupt-controller@e1110000 {
+ compatible = "arm,gic-400", "arm,cortex-a15-gic";
+ #interrupt-cells = <3>;
+ #address-cells = <0>;
+ interrupt-controller;
+ reg = <0x0 0xe1110000 0 0x1000>,
+ <0x0 0xe112f000 0 0x2000>;
+ };
+
+ timer {
+ compatible = "arm,armv8-timer";
+ interrupts = <1 13 0xff04>,
+ <1 14 0xff04>;
+ };
+
+ uartspiclk_100mhz: clk100mhz_1 {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <100000000>;
+ clock-output-names = "uartspiclk_100mhz";
+ };
+
+ serial0: uart@e1010000 {
+ compatible = "arm,pl011", "arm,primecell";
+ reg = <0x0 0xe1010000 0x0 0x1000>;
+ interrupts = <0 328 4>;
+ clocks = <&uartspiclk_100mhz>, <&uartspiclk_100mhz>;
+ clock-names = "uartclk", "apb_pclk";
+ };
+
+ psci {
+ compatible = "arm,psci-0.2";
+ method = "smc";
+ };
+
+ smb0: smb {
+ compatible = "simple-bus";
+ #address-cells = <2>;
+ #size-cells = <2>;
+ ranges;
+
+ xgmacclk1_dma_250mhz: clk250mhz_2 {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <250000000>;
+ clock-output-names = "xgmacclk1_dma_250mhz";
+ };
+
+ xgmacclk1_ptp_250mhz: clk250mhz_3 {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <250000000>;
+ clock-output-names = "xgmacclk1_ptp_250mhz";
+ };
+
+ xgmac1_phy: phy@e1240c00 {
+ compatible = "amd,xgbe-phy-seattle-v1a";
+ reg = <0 0xe1240c00 0 0x00400>, /* SERDES RX/TX1 */
+ <0 0xe1250080 0 0x00060>, /* SERDES IR 1/2 */
+ <0 0xe12500fc 0 0x00004>; /* SERDES IR 2/2 */
+ interrupts = <0 322 4>;
+ amd,speed-set = <0>;
+ amd,serdes-blwc = <1>, <1>, <0>;
+ amd,serdes-cdr-rate = <2>, <2>, <7>;
+ amd,serdes-pq-skew = <10>, <10>, <18>;
+ amd,serdes-tx-amp = <15>, <15>, <10>;
+ amd,serdes-dfe-tap-config = <3>, <3>, <1>;
+ amd,serdes-dfe-tap-enable = <0>, <0>, <127>;
+ };
+
+ xgmac1: xgmac@e0900000 {
+ compatible = "amd,xgbe-seattle-v1a";
+ reg = <0 0xe0900000 0 0x80000>,
+ <0 0xe0980000 0 0x80000>;
+ interrupts = <0 324 4>,
+ <0 341 1>, <0 342 1>, <0 343 1>, <0 344 1>;
+ amd,per-channel-interrupt;
+ mac-address = [ 02 B1 B2 B3 B4 B5 ];
+ clocks = <&xgmacclk1_dma_250mhz>, <&xgmacclk1_ptp_250mhz>;
+ clock-names = "dma_clk", "ptp_clk";
+ phy-handle = <&xgmac1_phy>;
+ phy-mode = "xgmii";
+ #stream-id-cells = <24>;
+ dma-coherent;
+ };
+ };
+};
diff --git a/configs/amd-seattle-linux-demo.c b/configs/amd-seattle-linux-demo.c
new file mode 100644
index 0000000..ba618b5
--- /dev/null
+++ b/configs/amd-seattle-linux-demo.c
@@ -0,0 +1,91 @@
+/*
+ * Jailhouse AArch64 support
+ *
+ * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+ *
+ * Authors:
+ * Antonios Motakis <antonios...@huawei.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <linux/types.h>
+#include <jailhouse/cell-config.h>
+
+#define ARRAY_SIZE(a) sizeof(a) / sizeof(a[0])
+
+struct {
+ struct jailhouse_cell_desc cell;
+ __u64 cpus[1];
+ struct jailhouse_memory mem_regions[6];
+ struct jailhouse_irqchip irqchips[1];
+} __attribute__((packed)) config = {
+ .cell = {
+ .signature = JAILHOUSE_CELL_DESC_SIGNATURE,
+ .name = "linux-inmate-demo",
+ .flags = JAILHOUSE_CELL_PASSIVE_COMMREG,
+
+ .cpu_set_size = sizeof(config.cpus),
+ .num_memory_regions = ARRAY_SIZE(config.mem_regions),
+ .num_irqchips = 1,
+ .pio_bitmap_size = 0,
+ .num_pci_devices = 0,
+ },
+
+ .cpus = {
+ 0xc0,
+ },
+
+ .mem_regions = {
+ /* UART */ {
+ .phys_start = 0xe1010000,
+ .virt_start = 0xe1010000,
+ .size = 0x10000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO | JAILHOUSE_MEM_ROOTSHARED,
+ },
+ /* xgmac */ {
+ .phys_start = 0xe0900000,
+ .virt_start = 0xe0900000,
+ .size = 0x100000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO,
+ },
+ /* phy */ {
+ .phys_start = 0xe1240000,
+ .virt_start = 0xe1240000,
+ .size = 0x1000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO,
+ },
+ /* phy */ {
+ .phys_start = 0xe1250000,
+ .virt_start = 0xe1250000,
+ .size = 0x1000,
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_IO,
+ },
+ /* RAM */ {
+ .phys_start = 0x82d0000000,
+ .virt_start = 0x0,
+ .size = 0x10000000, /* 256 MiB */
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_EXECUTE | JAILHOUSE_MEM_LOADABLE,
+ },
+ /* RAM */ {
+ .phys_start = 0x82e0000000,
+ .virt_start = 0x82e0000000,
+ .size = 0x10000000, /* 256 MiB */
+ .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
+ JAILHOUSE_MEM_EXECUTE | JAILHOUSE_MEM_LOADABLE,
+ },
+ },
+
+ .irqchips = {
+ /* GIC */ {
+ .address = 0x2c001000,
+ .pin_bitmap = 0, /* forward uart irq with a hack for now */
+ },
+ }
+};
diff --git a/tools/jailhouse-loadlinux-amd-seattle.sh b/tools/jailhouse-loadlinux-amd-seattle.sh
new file mode 100755
index 0000000..e82e17f
--- /dev/null
+++ b/tools/jailhouse-loadlinux-amd-seattle.sh
@@ -0,0 +1,23 @@
+#!/bin/sh
+
+# Note: this hacky script is a temporary solution and the functionality will
+# be moved to ./tools/jailhouse-cell-linux
+#
+# Note: put linux-loader.bin, kernel-inmate-amd-seattle.dtb, nonroot_Image
+# in the /root directory.
+
+DTB_ADDR=" 0x00 0x00 0xe0 0x0f 0x00 0x00 0x00 0x00"
+KERNEL_ADDR="0x00 0x00 0x08 0xe0 0x82 0x00 0x00 0x00"
+
+R=/root
+echo -ne "$(printf '\\x%x' $DTB_ADDR)" > $R/linux_cfg
+echo -ne "$(printf '\\x%x' $KERNEL_ADDR)" >> $R/linux_cfg
+hexdump -C $R/linux_cfg
+
+jailhouse cell load --name linux-inmate-demo \
+ $R/linux-loader.bin -a 0x0 \
+ $R/linux_cfg -a 0x4000 \
+ $R/kernel-inmate-amd-seattle.dtb -a 0xfe00000 \
+ $R/nonroot_Image -a 0x82e0080000 \

antonios...@huawei.com

unread,
Dec 18, 2015, 4:33:55 PM12/18/15
to jailho...@googlegroups.com, Dmitry Voytik, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
From: Dmitry Voytik <dmitry...@huawei.com>

Add the ./tools/jailhouse-parsedump tool. It decodes an ARM64
exception dump and prints a human-readable stack trace like this:

[0x00000000fc008688] arch_handle_dabt mmio.c:97
[0x00000000fc009acc] arch_handle_trap traps.c:143

The tool can read dumps from a file (passed via the -f parameter)
or from the stdin stream (which can also be piped in).
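
For example, an illustrative invocation (assuming the hypervisor has been
built so that hypervisor.o is available for addr2line) would be:

  ./tools/jailhouse-parsedump -o ./hypervisor/hypervisor.o -f dump.txt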

Signed-off-by: Dmitry Voytik <dmitry...@huawei.com>
---
tools/jailhouse-parsedump | 157 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 157 insertions(+)
create mode 100755 tools/jailhouse-parsedump

diff --git a/tools/jailhouse-parsedump b/tools/jailhouse-parsedump
new file mode 100755
index 0000000..6209f49
--- /dev/null
+++ b/tools/jailhouse-parsedump
@@ -0,0 +1,157 @@
+#!/usr/bin/env python
+
+# Jailhouse, a Linux-based partitioning hypervisor
+#
+# Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
+#
+# Authors:
+# Dmitry Voytik <dmitry...@huawei.com>
+#
+# ARM64 dump parser.
+# Usage ./tools/jailhouse-parsedump [dump.txt]
+#
+# This work is licensed under the terms of the GNU GPL, version 2. See
+# the COPYING file in the top-level directory.
+
+
+from __future__ import print_function
+import subprocess
+import sys
+import fileinput
+import os
+import argparse
+
+split1 = "Cell's stack before exception "
+split2 = "Hypervisor stack before exception "
+
+# yep, this is most important feature
+class Col:
+ ENDC = '\033[0m'
+ BOLD = '\033[1m'
+ FAIL = '\033[91m'
+ @staticmethod
+ def init():
+ t = os.environ.get('TERM', '')
+ if t == '' or t == 'dumb' or t == 'vt220' or t == 'vt100':
+ # The terminal doesn't support colors
+ Col.ENDC = ''
+ Col.BOLD = ''
+ Col.FAIL = ''
+
+ @staticmethod
+ def bold(string):
+ return Col.BOLD + str(string) + Col.ENDC
+ @staticmethod
+ def pr_err(string):
+ print(Col.FAIL + "ERROR: " + Col.ENDC + str(string))
+ @staticmethod
+ def pr_note(string):
+ print(Col.BOLD + "NOTE: " + Col.ENDC + str(string))
+
+def addr2line(addr):
+ return subprocess.check_output(["addr2line", "-a", "-f", "-p", "-e",
+ objpath, hex(addr)])
+
+def print_faddr(addr):
+ s = addr2line(addr)
+ if s.find("?") != -1:
+ print("[{:#016x}] {}".format(addr, Col.bold("unknown")))
+ return
+ s = s.strip().split(" ")
+ print("[{}] {} {}".format(s[0][:-1], Col.bold(s[1]),
+ s[3].split('/')[-1]))
+
+class Dump:
+ def __init__(self, dump_str):
+ if len(dump_str) < 50:
+ raise ValueError('Dump is too small')
+ # parse CPU state
+ pc_i = dump_str.find("pc:") + 4
+ self.pc = int(dump_str[pc_i: pc_i+16], 16)
+ pc_i = dump_str.find("sp:") + 4
+ self.sp = int(dump_str[pc_i: pc_i+16], 16)
+ el_i = dump_str.rfind("EL") + 2
+ self.el = int(dump_str[el_i:el_i+1])
+ if (self.el != 2):
+ Col.pr_err("This version supports only EL2 exception dump")
+
+ # TODO: parse other registers: ESR, etc
+
+ # parse stack dump
+ stack_start = str.rfind(dump_str, split1)
+ if (stack_start == -1):
+ stack_start = str.rfind(dump_str, split2)
+ if (stack_start == -1):
+ raise ValueError('Dump is damaged')
+
+ stack_str = dump_str[stack_start:].strip().split('\n')
+ stack_addr_start = int(stack_str[0][35:53], 16)
+ stack_addr_end = int(stack_str[0][56:74], 16)
+
+ # parse stack memory dump
+ stack = []
+ for line in stack_str[1:]:
+ if (len(line) < 5): continue
+ if (line[4] != ':'): continue
+ line = line[5:].strip().split(" ")
+ for value in line:
+ stack.append(int(value, 16))
+
+ self.stack_mem = stack
+ self.stack_start = stack_addr_start
+ self.stack_end = stack_addr_end
+
+ def stack_get64(self, addr):
+ assert addr >= self.sp
+ i = (addr - self.sp) // 4  # integer index into the 32-bit dump words
+ hi32 = self.stack_mem[i]
+ lo32 = self.stack_mem[i + 1]
+ return lo32 + (hi32 << 32)
+
+ def print_unwinded_stack(self):
+ print_faddr(self.pc)
+ addr = self.sp
+ while True:
+ prev_sp = self.stack_get64(addr)
+ print_faddr(self.stack_get64(addr+4))
+ addr = prev_sp
+ if (addr > self.stack_end - 256):
+ break
+
+def main():
+ Col.init()
+
+ parser = argparse.ArgumentParser(description='ARM64 exception dump parser')
+ parser.add_argument('--objpath', '-o', default="./hypervisor/hypervisor.o",
+ type=str, help="Path to hypervisor.o file")
+ parser.add_argument('-f', '--filedump', default="", type=str,
+ help="Exception dump text file")
+ args = parser.parse_args()
+
+ global objpath
+ objpath = args.objpath
+
+ stdin_used = False
+ infile = [args.filedump]
+ if args.filedump == "":
+ infile = []
+ Col.pr_note("Input dumped text then press Enter, Control+D, Control+D")
+ stdin_used = True
+
+ ilines = []
+ for line in fileinput.input(infile):
+ ilines.append(line)
+ dump_str = "".join(ilines)
+ if (not stdin_used):
+ print(dump_str)
+ else:
+ print("\n")
+ try:
+ dump = Dump(dump_str)
+ except ValueError as err:
+ Col.pr_err(err)
+ return
+ dump.print_unwinded_stack()
+
+if __name__ == "__main__":
+ main()
--
2.4.3.368.g7974889


Jan Kiszka

unread,
Dec 19, 2015, 7:13:35 AM12/19/15
to antonios...@huawei.com, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
On 2015-12-18 22:31, antonios...@huawei.com wrote:
> From: Antonios Motakis <antonios...@huawei.com>
>
> Ho ho ho, just in time for the holidays!

Ho, ho! Indeed a lot to read over the holidays. I'm planning to do this, but
the overdue AMD IOMMU review needs to happen first.

>
> This patch series is an RFC towards AArch64 support in the Jailhouse
> hypervisor. It applies on the latest next branch from upstream, and
> can also be pulled from https://github.com/tvelocity/jailhouse.git
> (branch arm64_v7)
>
> The patch series includes contributions by Claudio Fontana, and
> Dmitry Voytik.
>
> This version of the patch series features significant progress from
> the last one. Not only do we have working inmates now, not only
> we can demonstrate Linux inmates, we can showcase this on two
> targets: besides the ARM Foundation ARMv8 model, we now include
> cell configuration files for a real hardware target, the AMD Seattle
> development board!

Great progress! Hope I can try it out early next year on real hw as well.

>
> However, this series is still an RFC; these patches DO break ARMv7
> temporarily, and might cause problems in other archs as well. This
> breakage is minor, and we are not very far from dropping the RFC tag :)

Temporarily means between patches or also after everything is applied?
And what is broken?

What else is required from your POV to start discussing a merge seriously?

>
> The patch series has a few distinct parts:
>
> Changes from RFCv6:
> - Probably too many to list here!
> - Initial support for MPID affinity levels (as needed by PSCI)
> - Working inmates
> - Linux inmate support, by Dmitry Voytik!
> - Improved /fixed cache coherency handling by Dmitry Voytik
> - Support for the 4th level of page tables, allowing for a PARange of 40-48
> - Many fixes that were discovered by running Jailhouse on the AMD Seattle
>
> Changes from RFCv5:
> - PSCI support
> - Hypercalls to the hypervisor
> - Hypervisor disable, and also return to Linux properly when
> initialization fails
> - More clean ups, clean ups, fixes
> Contributions by Dmitry Voytik:
> - Implement cache flushes, maintenance of the memory system
> - Refactored a lot of trap handling code and other mmio bits
> - Dump cell registers support for AArch64.
>
> Changes from RFCv4:
> - Stubs now use trace_error, or block, to make it more obvious when
> we run into a missing stub during development.
> - Working root cell! Thanks to working MMU mappings, and working
> GICv2 handling.
> - MMU mappings are being set up for the hypervisor (EL2), and for
> the root cell (Stage 2 EL1).
> - Reworked the JAILHOUSE_IOMAP_ADDR decoupling from JAILHOUSE_BASE
> - Clean ups, clean ups, fixes
>
> Still to be improved:
> - GICv3 support
> - SMMU support
> - Fix AArch32 again. Minor breakage due to me recklessly using
> division in paging.c
> - Clean things up; there's a lot of room for refactoring to
> share more code between AArch32 and AArch64
>
> Epilogue:
> I aimed to publish this version before the holiday period, since
> it has been a long while since the last version was posted. Hopefully
> now everyone can be up to date on the work around this port. Problems
> with this RFC are bound to crop up nonetheless, and I'm looking
> forward to get feedback.
>
> Happy holidays!
>

Same to you!

Jan

Antonios Motakis

unread,
Dec 21, 2015, 11:03:20 AM12/21/15
to Jan Kiszka, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
Unfortunately, it breaks ARMv7 after everything has been applied. I fixed most of the issues I introduced with the ARMv8 support; however, the page_alloc_aligned function I introduced is the remaining culprit and still prevents linking the hypervisor binary on ARM.

The fix should be trivial enough; a reimplementation of page_alloc_aligned that avoids division and modulo operations would suffice. Alternatively, an implementation of the GCC integer division built-ins could be added (__aeabi_uidivmod etc.).
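
To illustrate the first option: since the alignment is always a power of
two here, the adjustment can be done with a mask instead of division or
modulo. A minimal sketch (illustrative helper only, not the actual
page_alloc_aligned interface):

	/* round 'pages' up to the next multiple of 'align' (power of two) */
	static inline unsigned long align_up(unsigned long pages,
					     unsigned long align)
	{
		return (pages + align - 1) & ~(align - 1);
	}

so the usual "(align - start % align) % align" style adjustment never has
to be emitted as a real division on ARMv7.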

There's also the hack of hardcoding the SPIs linked to the UART and second NIC on the Seattle. A fix in the cell config format would be much preferable to this.
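
For reference, the direction the existing FIXME in spi_in_cell() already
hints at would look roughly like this (purely a sketch; the spis array and
MAX_SPIS are made up, not the actual config format):

	/* sketch: cell->arch.spis as an array of u64 words instead of one u64 */
	static inline bool spi_in_cell(struct cell *cell, unsigned int spi)
	{
		if (spi >= MAX_SPIS)
			return false;
		/* constant divisor, so this compiles down to shift/mask */
		return !!(cell->arch.spis[spi / 64] & (1ULL << (spi % 64)));
	}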

Otherwise the code can be cleaned up more, some common functions between ARMv7 and ARMv8 can be merged, some patches can be split further, etc.

Cheers,
Tony
--
Antonios Motakis
Virtualization Engineer
Huawei Technologies Duesseldorf GmbH
European Research Center
Riesstrasse 25, 80992 München

Marc Zyngier

unread,
Dec 21, 2015, 11:32:15 AM12/21/15
to antonios...@huawei.com, jailho...@googlegroups.com, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
This code seems to suffer from the same bug found on KVM and Xen, not
validating the use of the zero register.

> + if (sse)
> + mmio.value = sign_extend(mmio.value, 8 * size);
> + } else {
> + mmio.value = 0;
> + }
> + mmio.is_write = is_write;
> + mmio.size = size;
> +
> + mmio_result = mmio_handle_access(&mmio);
> + if (mmio_result == MMIO_ERROR)
> + return TRAP_FORBIDDEN;
> + if (mmio_result == MMIO_UNHANDLED)
> + goto error_unhandled;
> +
> + /* Put the read value into the dest register */
> + if (!is_write) {
> + if (sse)
> + mmio.value = sign_extend(mmio.value, 8 * size);
> + ctx->regs[srt] = mmio.value;

And here, you seem to be able to write to the zero register, clobbering
it for further uses.
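
The handling that is usually needed looks roughly like this (only a sketch,
reusing the names from the quoted hunk; srt == 31 encodes XZR/WZR in the
ISS and is not a real register here):

	/* store: the zero register must always be read as 0 */
	if (is_write && srt == 31)
		mmio.value = 0;

	/* load: never write back into the slot for the zero register */
	if (!is_write && srt != 31)
		ctx->regs[srt] = mmio.value;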

You may want to also check the way you deal with sysreg access, as it
is likely that you have the same issue there.

Thanks,

M.
--
Without deviation from the norm, progress is not possible.

Jan Kiszka

unread,
Jan 8, 2016, 4:41:50 AM1/8/16
to antonios...@huawei.com, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
On 2015-12-18 22:31, antonios...@huawei.com wrote:
> From: Antonios Motakis <antonios...@huawei.com>
>
> At the moment the Linux driver maps the Jailhouse binary to
> JAILHOUSE_BASE. The underlying assumption is that Linux may map the
> firmware (in the Linux kernel space), to the same virtual address it
> has been built to run from.
>
> This assumption is unworkable on ARMv8 processors running in AArch64
> mode. Kernel memory is allocated in a high address region, that is
> not addressable from EL2, where the hypervisor will run from.
>
> This patch removes the assumption, by introducing the
> JAILHOUSE_BORROW_ROOT_PT define, which describes the behavior of the
> current architectures.
>
> We also turn the entry point in the header, into an offset from the
> Jailhouse load address, so we can enter the image regardless of
> where it will be mapped.
>
> On AArch64, JAILHOUSE_BASE will be the physical address the
> hypervisor will be loaded to. This way, Jailhouse will run with
> identity mapping in EL2. The Linux driver sets the address to the
> debug UART accordingly.

Would it be possible to make AArch64 position-independent, like we had
it on x86 before 30e7e8ef45? Wondering this while considering going back
there [1] - but only if it can be applied consistently.

Jan

[1] http://thread.gmane.org/gmane.linux.jailhouse/4155

--
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux

Antonios Motakis

unread,
Jan 11, 2016, 7:02:19 AM1/11/16
to Jan Kiszka, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com


On 1/8/2016 10:41 AM, Jan Kiszka wrote:
> On 2015-12-18 22:31, antonios...@huawei.com wrote:
>> From: Antonios Motakis <antonios...@huawei.com>
>>
>> At the moment the Linux driver maps the Jailhouse binary to
>> JAILHOUSE_BASE. The underlying assumption is that Linux may map the
>> firmware (in the Linux kernel space), to the same virtual address it
>> has been built to run from.
>>
>> This assumption is unworkable on ARMv8 processors running in AArch64
>> mode. Kernel memory is allocated in a high address region, that is
>> not addressable from EL2, where the hypervisor will run from.
>>
>> This patch removes the assumption, by introducing the
>> JAILHOUSE_BORROW_ROOT_PT define, which describes the behavior of the
>> current architectures.
>>
>> We also turn the entry point in the header, into an offset from the
>> Jailhouse load address, so we can enter the image regardless of
>> where it will be mapped.
>>
>> On AArch64, JAILHOUSE_BASE will be the physical address the
>> hypervisor will be loaded to. This way, Jailhouse will run with
>> identity mapping in EL2. The Linux driver sets the address to the
>> debug UART accordingly.
>
> Would it be possible to make AArch64 position-independent, like we had
> it on x86 before 30e7e8ef45? Wondering this while considering going back
> there [1] - but only if it can be applied consistently.
>

I admit I don't know how PIC would affect the code generated by gcc, but people tell me it would be fine.

One thing to consider is the early (static) MMU mappings we load during entry, to use during early init. Without them, we can't do unaligned accesses in the hypervisor, and if JAILHOUSE_BASE is not known at compile time, then we need to generate the early init page tables on entry instead of at compile time. That is possible, but maybe too complex for the early entry code.

Configuring gcc to not generate unaligned accesses, and packing all structures as needed, should also work for AArch64. Then we could get by with the MMU turned off for longer. However, then we will need to do a lot of cache maintenance during entry, any memory location that we touch during early init would need maintenance.

Antonios Motakis

unread,
Jan 11, 2016, 7:10:14 AM1/11/16
to Marc Zyngier, jailho...@googlegroups.com, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
Thanks, we shall look into this!

Cheers,
Antonios

Jan Kiszka

unread,
Jan 11, 2016, 8:23:19 AM1/11/16
to Antonios Motakis, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
It basically means PC-relative addressing, sometimes combined with
offset table-based addressing. Doesn't make the output code more
compact, obviously.

>
> One thing to consider is the early (static) MMU mappings we load during entry, to use during early init. Without them, we can't do unaligned accesses in the hypervisor, and if JAILHOUSE_BASE is not known at compile time, then we need to generate the early init page tables on entry instead of at compile time. That is possible, but maybe too complex for the early entry code.

Maybe we can outsource this page table building to the loader driver,
possibly using generic Linux services for it. Just a vague idea...

>
> Configuring gcc to not generate unaligned accesses, and packing all structures as needed, should also work for AArch64. Then we could get by with the MMU turned off for longer. However, then we will need to do a lot of cache maintenance during entry, any memory location that we touch during early init would need maintenance.
>

Doesn't sound like we should go that particular way.

Jan

Antonios Motakis

unread,
Jan 11, 2016, 9:38:53 AM1/11/16
to Jan Kiszka, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
Maybe just keeping JAILHOUSE_BASE known at compile time is not that big of an issue; we could still have a 1:1 mapping. It will just be a 1:1 mapping that needs to be known at compile time.

But it will have to fit a specific "hole" from the root cell config anyway, no? It's not like we will be changing this a lot on a given system.

On the other hand, if we really want to have Jailhouse position independent, then maybe we can live with a small asm loop(s) in the entry code that fills in the early page tables with the right values. At least personally I find it more tasteful to keep the dependencies between the driver loader and the hypervisor as small as possible.

my 2c

>
>>
>> Configuring gcc to not generate unaligned accesses, and packing all structures as needed, should also work for AArch64. Then we could get by with the MMU turned off for longer. However, then we will need to do a lot of cache maintenance during entry, any memory location that we touch during early init would need maintenance.
>>
>
> Doesn't sound like we should go that particular way.
>
> Jan
>

--

Jan Kiszka

unread,
Jan 11, 2016, 1:04:41 PM1/11/16
to Antonios Motakis, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
That might be ok for certain ARM systems but it is a bit too restrictive
for more standardized, regular systems like x86. Here we could already
package most components and reuse them unmodified, only adjusting the
configurations on a per-system basis.

>
> But it will have to fit a specific "hole" from the root cell config anyway, no? It's not like we will be changing this a lot on a given system.

The question is if moving that hole requires just a config adjustment or
a complete rebuild. I'm not a big fan of the latter.

>
> On the other hand, if we really want to have Jailhouse position independent, then maybe we can live with a small asm loop(s) in the entry code that fills in the early page tables with the right values. At least personally I find it more tasteful to keep the dependencies between the driver loader and the hypervisor as small as possible.

The alternative to a complete 1:1 mapping is a partial one: hypervisor
memory at fixed virtual address that never conflicts with 1:1-mapped IO
resources.

What prevents such a model on AArch64 again? You can't use the virtual
memory the kernel chooses, ok, but can't you use a different one that
would fulfil the requirements above?

However, I'm not sure if there is a way to achieve this with the
restricted 32-bit address space of ARMv7.

Antonios Motakis

unread,
Jan 12, 2016, 3:18:26 AM1/12/16
to Jan Kiszka, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
Fair enough!

>>
>> On the other hand, if we really want to have Jailhouse position independent, then maybe we can live with a small asm loop(s) in the entry code that fills in the early page tables with the right values. At least personally I find it more tasteful to keep the dependencies between the driver loader and the hypervisor as small as possible.
>
> The alternative to a complete 1:1 mapping is a partial one: hypervisor
> memory at fixed virtual address that never conflicts with 1:1-mapped IO
> resources.
>
> What prevents such a model on AArch64 again? You can't use the virtual
> memory the kernel chooses, ok, but can't you use a different one that
> would fulfil the requirements above?
>

Nothing prevents us really, we could pick a standard virtual address to run from. It's just that identity mapping is the most straightforward to implement (especially when including the early page tables statically in the hypervisor binary).

> However, I'm not sure if there is a way to achieve this with the
> restricted 32-bit address space of ARMv7.
>
> Jan
>

--

Jan Kiszka

unread,
Jan 20, 2016, 10:48:26 AM1/20/16
to antonios...@huawei.com, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
On 2015-12-18 22:31, antonios...@huawei.com wrote:
> From: Antonios Motakis <antonios...@huawei.com>
>
> Ho ho ho, just in time for the holidays!
>
> This patch series is an RFC towards AArch64 support in the Jailhouse
> hypervisor. It applies on the latest next branch from upstream, and
> can also be pulled from https://github.com/tvelocity/jailhouse.git
> (branch arm64_v7)
>
> The patch series includes contributions by Claudio Fontana, and
> Dmitry Voytik.
>
> This version of the patch series features significant progress from
> the last one. Not only do we have working inmates now, not only
> we can demonstrate Linux inmates, we can showcase this on two
> targets: besides the ARM Foundation ARMv8 model, we now include
> cell configuration files for a real hardware target, the AMD Seattle
> development board!

For the next round, could you also consider extending the travis-ci
environment to cover that arch? Maybe pick the Seattle as reference target.
Not sure, though, if a suitable toolchain is easily available for that.

Thanks,
Jan

--
Siemens AG, Corporate Technology, CT RDA ITP SES-DE

Jan Kiszka

unread,
Jan 20, 2016, 11:28:04 AM1/20/16
to antonios...@huawei.com, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
On 2015-12-18 22:31, antonios...@huawei.com wrote:
> From: Antonios Motakis <antonios...@huawei.com>
>
> The function page_alloc allows us to allocate any number of pages,
> however they will be aligned on page boundaries.
> The page_alloc_aligned implemented here allows us to allocate N
> pages that will be aligned on N-page boundaries.
>
> This will be used on the AArch64 port of Jailhouse to support
> physical address ranges from 40 to 44 bits: in these configurations,
> the initial page table level may take up multiple pages.
>
> Signed-off-by: Antonios Motakis <antonios...@huawei.com>
> ---
> hypervisor/include/jailhouse/paging.h | 1 +
> hypervisor/paging.c | 42 +++++++++++++++++++++++++++++++++++
> 2 files changed, 43 insertions(+)
>
> diff --git a/hypervisor/include/jailhouse/paging.h b/hypervisor/include/jailhouse/paging.h
> index 27286f0..6c2555f 100644
> --- a/hypervisor/include/jailhouse/paging.h
> +++ b/hypervisor/include/jailhouse/paging.h
> @@ -183,6 +183,7 @@ extern struct paging_structures hv_paging_structs;
> unsigned long paging_get_phys_invalid(pt_entry_t pte, unsigned long virt);
>
> void *page_alloc(struct page_pool *pool, unsigned int num);
> +void *page_alloc_aligned(struct page_pool *pool, unsigned int num);
> void page_free(struct page_pool *pool, void *first_page, unsigned int num);
>
> /**
> diff --git a/hypervisor/paging.c b/hypervisor/paging.c
> index 1fd7402..201bf75 100644
> --- a/hypervisor/paging.c
> +++ b/hypervisor/paging.c
> @@ -126,6 +126,48 @@ restart:
> }
>
> /**
> + * Allocate consecutive and aligned pages from the specified pool.
> + * Pages will be aligned to num * PAGE_SIZE
> + * @param pool Page pool to allocate from.
> + * @param num Number of pages.
> + *
> + * @return Pointer to first page or NULL if allocation failed.
> + *
> + * @see page_free
> + */
> +void *page_alloc_aligned(struct page_pool *pool, unsigned int num)
> +{
> + unsigned int offset;
> + unsigned long start, next, i;
> +
> + /* the pool itself might not be aligned to our desired size */
> + offset = (- (unsigned long) pool->base_address / PAGE_SIZE) % num;
> + next = offset;
> +
> + while ((start = find_next_free_page(pool, next)) != INVALID_PAGE_NR) {
> +
> + if ((start - offset) % num)
> + goto next_chunk;
> +
> + for (i = start; i < start + num; i++)
> + if (test_bit(i, pool->used_bitmap))
> + goto next_chunk;
> +
> + for (i = start; i < start + num; i++)
> + set_bit(i, pool->used_bitmap);
> +
> + pool->used_pages += num;
> +
> + return pool->base_address + start * PAGE_SIZE;
> +
> +next_chunk:
> + next += num - (start - offset) % num;
> + }
> +
> + return NULL;
> +}

Could we create a common core for both aligned and unaligned
allocations? That core could take an alignment requirement as a
parameter, which would be 1 for the unaligned mode.
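
A rough sketch of what such a merged core could look like, reusing the find_next_free_page()/test_bit()/set_bit() helpers from the quoted file and assuming the alignment is always 1 or a power of two, so the modulo from the original patch becomes a mask and AArch32 needs no division helpers. page_alloc() would pass align = 1, page_alloc_aligned() would pass align = num:

static void *page_alloc_internal(struct page_pool *pool, unsigned int num,
				 unsigned int align)
{
	/* align is 1 or a power of two, so modulo turns into a mask */
	unsigned long align_mask = align - 1;
	unsigned long offset, start, next, i;

	/* the pool itself might not be aligned to our desired size */
	offset = (-(unsigned long)pool->base_address / PAGE_SIZE) & align_mask;
	next = offset;

	while ((start = find_next_free_page(pool, next)) != INVALID_PAGE_NR) {
		if ((start - offset) & align_mask)
			goto next_chunk;

		for (i = start; i < start + num; i++)
			if (test_bit(i, pool->used_bitmap))
				goto next_chunk;

		for (i = start; i < start + num; i++)
			set_bit(i, pool->used_bitmap);
		pool->used_pages += num;

		return pool->base_address + start * PAGE_SIZE;

next_chunk:
		/* retry from the next aligned position after start */
		next = start + align - ((start - offset) & align_mask);
	}

	return NULL;
}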

Antonios Motakis

unread,
Jan 20, 2016, 11:51:35 AM1/20/16
to Jan Kiszka, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
I didn't think of that, I wonder if the implementation will be cleaner. But we can certainly do it; page_alloc_aligned users should be rare, so maybe they benefit from going through a common, more tested, code path?

Locally I've already fixed the function to not use modulo operations (which is the main issue that broke AArch32).

Antonios Motakis

unread,
Jan 20, 2016, 11:59:51 AM1/20/16
to Jan Kiszka, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
I've never set up travis-ci I have to admit! Is this a trivial thing to do, or will it take some time? I am working on posting v8 soon, so you don't have to review code that I have already changed locally.

I am at a whopping 50 patches now locally! So for v8 I'm going to split the patches into 3 series (prep series, main series, and a third separate series for the inmate demos).

>
> Thanks,

Jan Kiszka

unread,
Jan 20, 2016, 12:11:48 PM1/20/16
to antonios...@huawei.com, jailho...@googlegroups.com, Claudio Fontana, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
On 2015-12-18 22:31, antonios...@huawei.com wrote:
> From: Claudio Fontana <claudio...@huawei.com>
>
> allow more efficient per-arch implementations
> of the usual memory / string ops by making the
> generic implementations weak.
>
> Signed-off-by: Claudio Fontana <claudio...@huawei.com>
> ---
> hypervisor/arch/arm/lib.c | 12 ------------
> hypervisor/lib.c | 15 +++++++++++++++
> 2 files changed, 15 insertions(+), 12 deletions(-)
>
> diff --git a/hypervisor/arch/arm/lib.c b/hypervisor/arch/arm/lib.c
> index 6396a0d..c2636ec 100644
> --- a/hypervisor/arch/arm/lib.c
> +++ b/hypervisor/arch/arm/lib.c
> @@ -22,15 +22,3 @@ int phys_processor_id(void)
> arm_read_sysreg(MPIDR_EL1, mpidr);
> return mpidr & MPIDR_CPUID_MASK;
> }
> -
> -void *memcpy(void *dest, const void *src, unsigned long n)
> -{
> - unsigned long i;
> - const char *csrc = src;
> - char *cdest = dest;
> -
> - for (i = 0; i < n; i++)
> - cdest[i] = csrc[i];
> -
> - return dest;
> -}
> diff --git a/hypervisor/lib.c b/hypervisor/lib.c
> index f2a27eb..39cb873 100644
> --- a/hypervisor/lib.c
> +++ b/hypervisor/lib.c
> @@ -13,6 +13,7 @@
> #include <jailhouse/string.h>
> #include <jailhouse/types.h>
>
> +__attribute__((weak))
> void *memset(void *s, int c, unsigned long n)
> {
> u8 *p = s;
> @@ -22,6 +23,7 @@ void *memset(void *s, int c, unsigned long n)
> return s;
> }
>
> +__attribute__((weak))
> int strcmp(const char *s1, const char *s2)
> {
> while (*s1 == *s2) {
> @@ -32,3 +34,16 @@ int strcmp(const char *s1, const char *s2)
> }
> return *(unsigned char *)s1 - *(unsigned char *)s2;
> }
> +
> +__attribute__ ((weak))
> +void *memcpy(void *dest, const void *src, unsigned long n)
> +{
> + unsigned long i;
> + const char *csrc = src;
> + char *cdest = dest;
> +
> + for (i = 0; i < n; i++)
> + cdest[i] = csrc[i];
> +
> + return dest;
> +}
>

Moving memcpy here is fine (and a patch of its own), marking all those
symbols weak is overkill until the contrary has been proven (read:
benchmark numbers! :) ).

Jan Kiszka

unread,
Jan 20, 2016, 12:13:41 PM1/20/16
to Antonios Motakis, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
On 2016-01-20 17:59, Antonios Motakis wrote:
>
>
> On 1/20/2016 4:48 PM, Jan Kiszka wrote:
>> On 2015-12-18 22:31, antonios...@huawei.com wrote:
>>> From: Antonios Motakis <antonios...@huawei.com>
>>>
>>> Ho ho ho, just in time for the holidays!
>>>
>>> This patch series is an RFC towards AArch64 support in the Jailhouse
>>> hypervisor. It applies on the latest next branch from upstream, and
>>> can also be pulled from https://github.com/tvelocity/jailhouse.git
>>> (branch arm64_v7)
>>>
>>> The patch series includes contributions by Claudio Fontana, and
>>> Dmitry Voytik.
>>>
>>> This version of the patch series features significant progress from
>>> the last one. Not only do we have working inmates now, not only
>>> we can demonstrate Linux inmates, we can showcase this on two
>>> targets: besides the ARM Foundation ARMv8 model, we now include
>>> cell configuration files for a real hardware target, the AMD Seattle
>>> development board!
>>
>> For the next round, could you also consider extending the travis-ci
>> environment to cover that arch? Maybe pick the Seattle as reference target.
>> Not sure, though, if a suitable toolchain is easily available for that.
>
> I've never set up travis-ci I have to admit! Is this a trivial thing to do, or will it take some time? I am working on posting v8 soon, so you don't have to review code that I have already changed locally.

It might be easy if all bits are quickly found. We basically need:

- a suitable kernel config (just has to build, not necessarily run),
see ci/kernel-config-*

- extension of ci/gen-kernel-build.sh, if unlucky, a kernel version
update there

- a hypervisor config, see ci/jailhouse-config-*

- check/update of ci/build-all-configs.sh

- selection of a suitable toolchain in the travis-ci environment
(Ubuntu Vivid based); there is probably nothing packaged, so you need
to pull one from an external source by extending .travis.yml rules

As the updated kernel-build-tar.xz is hosted on my server, we need to
cooperate on this anyway. So I would suggest that you collect the
required pieces and prepare an untested patch that I can try out and
massage afterwards.

>
> I am at a whopping 50 patches now locally! So for v8 I'm going to split the patches into 3 series (prep series, main series, and a third separate series for the inmate demos).

We can also do this after the v8 round, specifically if it requires more
work.

Jan Kiszka

unread,
Jan 20, 2016, 12:14:54 PM1/20/16
to antonios...@huawei.com, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
On 2015-12-18 22:31, antonios...@huawei.com wrote:
> From: Antonios Motakis <antonios...@huawei.com>
>
> Add under config/foundation-v8.c a root cell configuration for the
> ARMv8 Foundation model, so we can use this target with Jailhouse.
> We also add the necessary parameters in asm/platform.h for this
> model.
>
> Signed-off-by: Antonios Motakis <antonios...@huawei.com>
> ---
> ci/jailhouse-config-foundation-v8.h | 5 ++
> configs/foundation-v8.c | 120 +++++++++++++++++++++++++++
> hypervisor/arch/arm64/include/asm/platform.h | 32 +++++++
> 3 files changed, 157 insertions(+)
> create mode 100644 ci/jailhouse-config-foundation-v8.h
> create mode 100644 configs/foundation-v8.c
>
> diff --git a/ci/jailhouse-config-foundation-v8.h b/ci/jailhouse-config-foundation-v8.h
> new file mode 100644
> index 0000000..d59aa85
> --- /dev/null
> +++ b/ci/jailhouse-config-foundation-v8.h
> @@ -0,0 +1,5 @@
> +#define CONFIG_TRACE_ERROR 1
> +#define CONFIG_ARM_GIC 1
> +#define CONFIG_MACH_FOUNDATION_V8 1
> +#define CONFIG_SERIAL_AMBA_PL011 1
> +#define JAILHOUSE_BASE 0xfc000000

Unless we go for the Foundation Model as CI target, this file isn't
needed, see my other reply.

Jan Kiszka

unread,
Jan 20, 2016, 12:19:06 PM1/20/16
to antonios...@huawei.com, jailho...@googlegroups.com, Dmitry Voytik, claudio...@huawei.com, jani.k...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
On 2015-12-18 22:32, antonios...@huawei.com wrote:
> From: Dmitry Voytik <dmitry...@huawei.com>
>
> Add ./tools/jailhouse-parsedump tool. This tool decodes an ARM64
> exception dump and prints human-readable stack trace like this:
>
> [0x00000000fc008688] arch_handle_dabt mmio.c:97
> [0x00000000fc009acc] arch_handle_trap traps.c:143
>
> The tool can read dumps from files (passed via -f parameter)
> or from stdin stream (which could be also piped-in).
>
> Signed-off-by: Dmitry Voytik <dmitry...@huawei.com>
> ---
> tools/jailhouse-parsedump | 157 ++++++++++++++++++++++++++++++++++++++++++++++

As this is not for user entertainment, we should move it under
tools/debug/ or scripts/. Maybe also make clear that it is arm64-only.

Antonios Motakis

unread,
Jan 20, 2016, 12:20:52 PM1/20/16
to Jan Kiszka, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com


On 1/20/2016 6:14 PM, Jan Kiszka wrote:
What about having both? :) There are definitely problems that could trigger on the one, and not on the other! For example, they implement different PARanges, which result in completely different page table structures (can be either 3 or 4 levels of page table, etc).

Also the foundation model is what most people have access to, since it can be downloaded for free from ARM.

Jan Kiszka

unread,
Jan 20, 2016, 12:21:10 PM1/20/16
to antonios...@huawei.com, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
On 2015-12-18 22:32, antonios...@huawei.com wrote:
> From: Antonios Motakis <antonios...@huawei.com>
>
> Add the cell configuration files, and some helper scripts and device
> tree for the AMD Seattle development board. These can be used to
> load a linux inmate on a cell on that target.
>
> Signed-off-by: Antonios Motakis <antonios...@huawei.com>
> Signed-off-by: Dmitry Voytik <dmitry...@huawei.com>
> ---
> ci/kernel-inmate-amd-seattle.dts | 150 +++++++++++++++++++++++++++++++

Will we actually need the dts for build tests?

Antonios Motakis

unread,
Jan 20, 2016, 12:24:33 PM1/20/16
to Jan Kiszka, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com


On 1/20/2016 6:21 PM, Jan Kiszka wrote:
> On 2015-12-18 22:32, antonios...@huawei.com wrote:
>> From: Antonios Motakis <antonios...@huawei.com>
>>
>> Add the cell configuration files, and some helper scripts and device
>> tree for the AMD Seattle development board. These can be used to
>> load a linux inmate on a cell on that target.
>>
>> Signed-off-by: Antonios Motakis <antonios...@huawei.com>
>> Signed-off-by: Dmitry Voytik <dmitry...@huawei.com>
>> ---
>> ci/kernel-inmate-amd-seattle.dts | 150 +++++++++++++++++++++++++++++++
>
> Will we actually need the dts for build tests?

Not really. Let's drop them, people who need them can fetch them from the mailing list.

Or we can test that the dts compiles :)

Jan Kiszka

unread,
Jan 20, 2016, 12:24:37 PM1/20/16
to Antonios Motakis, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
If those targets trigger significantly different builds, that's fine
(similar to banana-pi vs. vexpress on arm32). But we don't want to
include all supported targets in the CI builds, simply because they
will take too long, and there are time limits.

>
> Also the foundation model is what most people have access to, since it can be downloaded for free from ARM.

For CI, not necessarily an important argument. We need code coverage
here, we don't execute the results anywhere.

Jan Kiszka

unread,
Jan 20, 2016, 12:27:56 PM1/20/16
to Antonios Motakis, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
On 2016-01-20 18:24, Antonios Motakis wrote:
>
>
> On 1/20/2016 6:21 PM, Jan Kiszka wrote:
>> On 2015-12-18 22:32, antonios...@huawei.com wrote:
>>> From: Antonios Motakis <antonios...@huawei.com>
>>>
>>> Add the cell configuration files, and some helper scripts and device
>>> tree for the AMD Seattle development board. These can be used to
>>> load a linux inmate on a cell on that target.
>>>
>>> Signed-off-by: Antonios Motakis <antonios...@huawei.com>
>>> Signed-off-by: Dmitry Voytik <dmitry...@huawei.com>
>>> ---
>>> ci/kernel-inmate-amd-seattle.dts | 150 +++++++++++++++++++++++++++++++
>>
>> Will we actually need the dts for build tests?
>
> Not really. Let's drop them, people who need them can fetch them from the mailing list.
>
> Or we can test that the dts compiles :)

If it helps users to start with non-root Linux on the Seattle, we could
create a samples/ folder for now. Eventually, this should probably be
merged with the cell configuration, if we choose DTS as input format.

Antonios Motakis

unread,
Jan 20, 2016, 12:28:34 PM1/20/16
to Jan Kiszka, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
Fair enough, the builds are not that different. The PARange is read and handled at runtime.

Ok, let's stick with the files for the Seattle board only then.

>
>>
>> Also the foundation model is what most people have access to, since it can be downloaded for free from ARM.
>
> For CI, not necessarily an important argument. We need code coverage
> here, we don't execute the results anywhere.
>
> Jan
>

--

Mark Rutland

unread,
Jan 20, 2016, 12:42:04 PM1/20/16
to antonios...@huawei.com, jailho...@googlegroups.com, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
On Fri, Dec 18, 2015 at 10:32:05PM +0100, antonios...@huawei.com wrote:
> From: Antonios Motakis <antonios...@huawei.com>
>
> Add the cell configuration files, and some helper scripts and device
> tree for the AMD Seattle development board. These can be used to
> load a linux inmate on a cell on that target.
>
> Signed-off-by: Antonios Motakis <antonios...@huawei.com>
> Signed-off-by: Dmitry Voytik <dmitry...@huawei.com>
> ---
> ci/kernel-inmate-amd-seattle.dts | 150 +++++++++++++++++++++++++++++++
> configs/amd-seattle-linux-demo.c | 91 +++++++++++++++++++
> tools/jailhouse-loadlinux-amd-seattle.sh | 23 +++++
> 3 files changed, 264 insertions(+)
> create mode 100644 ci/kernel-inmate-amd-seattle.dts
> create mode 100644 configs/amd-seattle-linux-demo.c
> create mode 100755 tools/jailhouse-loadlinux-amd-seattle.sh
>
> diff --git a/ci/kernel-inmate-amd-seattle.dts b/ci/kernel-inmate-amd-seattle.dts
> new file mode 100644
> index 0000000..7d29f35
> --- /dev/null
> +++ b/ci/kernel-inmate-amd-seattle.dts
> @@ -0,0 +1,150 @@
> +/*
> + * Jailhouse AArch64 support
> + *
> + * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
> + *
> + * Authors:
> + * Antonios Motakis <antonios...@huawei.com>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2. See
> + * the COPYING file in the top-level directory.
> + *
> + */
> +
> +/dts-v1/;
> +
> +/* 64 KiB */
> +/memreserve/ 0x0 0x00010000;

What for?

> +
> +/ {
> + model = "Jailhouse cell on AMD Seattle";
> + compatible = "amd,seattle-overdrive", "amd,seattle";
> + interrupt-parent = <&gic>;
> + #address-cells = <2>;
> + #size-cells = <2>;
> +
> + chosen {
> + bootargs = "earlyprintk console=ttyAMA0";
> + };

The kernel hasn't supported earlyprintk for a while now, and it's better
to use stdout-path, which will drive both console and earlycon:

chosen {
stdout-path = "serial0:115200n8";
bootargs = "earlycon";
};

> +
> + cpus {
> + #address-cells = <2>;
> + #size-cells = <0>;
> +
> + cpu@0 {
> + device_type = "cpu";
> + compatible = "arm,armv8";
> + reg = <0x0 0x300>;
> + enable-method = "psci";
> + next-level-cache = <&L2_0>;
> + };
> +
> + cpu@1 {
> + device_type = "cpu";
> + compatible = "arm,armv8";
> + reg = <0x0 0x301>;
> + enable-method = "psci";
> + next-level-cache = <&L2_0>;
> + };

These should be cpu@300 and cpu@301 (the unit-address should match the
reg).

> +
> + L2_0: l2-cache0 {
> + compatible = "cache";
> + };
> + };
> +
> + aliases {
> + serial0 = &serial0;
> + };
> +
> + memory@0 {
> + device_type = "memory";
> + reg = <0x82 0xe0000000 0x0 0x10000000>; /* 256 MiB starts at 0x0 */
> + };

The reg property doesn't match the unit-address, and the comment doesn't
match the value.

> +
> + gic: interrupt-controller@e1110000 {
> + compatible = "arm,gic-400", "arm,cortex-a15-gic";
> + #interrupt-cells = <3>;
> + #address-cells = <0>;
> + interrupt-controller;
> + reg = <0x0 0xe1110000 0 0x1000>,
> + <0x0 0xe112f000 0 0x2000>;
> + };
> +
> + timer {
> + compatible = "arm,armv8-timer";
> + interrupts = <1 13 0xff04>,
> + <1 14 0xff04>;
> + };

Where's the virtual timer interrupt? I would expect Linux to fail to
boot without it.

Thanks,
Mark.

Antonios Motakis

unread,
Jan 21, 2016, 11:46:15 AM1/21/16
to Jan Kiszka, jailho...@googlegroups.com, Claudio Fontana, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
Ack!

Antonios Motakis

unread,
Jan 21, 2016, 12:07:13 PM1/21/16
to Mark Rutland, jailho...@googlegroups.com, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
During development, we used to map the cell memory to virtual address 0, and this was the location where the Linux loader would be found; we shall remove it now.
Thanks for these, ack.

>> +
>> + L2_0: l2-cache0 {
>> + compatible = "cache";
>> + };
>> + };
>> +
>> + aliases {
>> + serial0 = &serial0;
>> + };
>> +
>> + memory@0 {
>> + device_type = "memory";
>> + reg = <0x82 0xe0000000 0x0 0x10000000>; /* 256 MiB starts at 0x0 */
>> + };
>
> The reg property doesn't match the unit-address, and the comment doesn't
> match the value.

Also a leftover from different times... will fix this, thanks.

>
>> +
>> + gic: interrupt-controller@e1110000 {
>> + compatible = "arm,gic-400", "arm,cortex-a15-gic";
>> + #interrupt-cells = <3>;
>> + #address-cells = <0>;
>> + interrupt-controller;
>> + reg = <0x0 0xe1110000 0 0x1000>,
>> + <0x0 0xe112f000 0 0x2000>;
>> + };
>> +
>> + timer {
>> + compatible = "arm,armv8-timer";
>> + interrupts = <1 13 0xff04>,
>> + <1 14 0xff04>;
>> + };
>
> Where's the virtual timer interrupt? I would expect Linux to fail to
> boot without it.

Linux does boot though, I think it is quite happy to use the physical timer instead...
The core is not shared with the root cell, or any other cell, and we wipe any timer state before handing over a CPU to a new inmate.

We can pass the virtual timer as well, but for Jailhouse I think there's no difference.

Thanks for your feedback!
Tony

>
> Thanks,
> Mark.

Mark Rutland

unread,
Jan 21, 2016, 12:39:11 PM1/21/16
to Antonios Motakis, jailho...@googlegroups.com, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
On Thu, Jan 21, 2016 at 06:07:01PM +0100, Antonios Motakis wrote:
>
> On 1/20/2016 6:41 PM, Mark Rutland wrote:
> > On Fri, Dec 18, 2015 at 10:32:05PM +0100, antonios...@huawei.com wrote:
> >> From: Antonios Motakis <antonios...@huawei.com>
> >> + timer {
> >> + compatible = "arm,armv8-timer";
> >> + interrupts = <1 13 0xff04>,
> >> + <1 14 0xff04>;
> >> + };
> >
> > Where's the virtual timer interrupt? I would expect Linux to fail to
> > boot without it.
>
> Linux does boot though, I think it is quite happy to use the physical timer instead...

Ahh, I see what's going on there.

I guess you ensure CNTVOFF_EL2 is initialised to zero, and that
CNTHCTL_EL2.{EL1PCEN,EL1PCTEN} are initialised to one?
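
For reference, that init could look roughly like the sketch below, written in the arm_write_sysreg() style used elsewhere in the tree; the constants assume the non-VHE CNTHCTL_EL2 layout (bit 0 = EL1PCTEN, bit 1 = EL1PCEN), and the function name is made up.

#define CNTHCTL_EL1PCTEN	(1u << 0)	/* EL1/EL0 physical counter access */
#define CNTHCTL_EL1PCEN		(1u << 1)	/* EL1/EL0 physical timer access */

static void arch_timer_init_el2(void)
{
	/* virtual counter reads the same value as the physical one */
	arm_write_sysreg(CNTVOFF_EL2, 0);
	/* don't trap the cell's physical counter/timer accesses to EL2 */
	arm_write_sysreg(CNTHCTL_EL2, CNTHCTL_EL1PCTEN | CNTHCTL_EL1PCEN);
}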

> The core is not shared with the root cell, or any other cell, and we
> wipe any timer state before handing over a CPU to a new inmate.
>
> We can pass the virtual timer as well, but for Jailhouse I think
> there's no difference.

It works for now, but I would strongly advise passing the virtual timer
in regardless.

We generally want to steer clear of the physical registers, as these are
only guaranteed to be accessible at EL2 or above. While it works for
jailhouse now, it would be preferable to consistently provide everything
required for the use of the virtual timers.

Thanks,
Mark.

Antonios Motakis

unread,
Jan 21, 2016, 12:49:10 PM1/21/16
to Jan Kiszka, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
I thought about this a bit more. I think we can implement a partial 1:1 mapping relatively straightforwardly:

Assuming that we know the virtual address at build time, and a 2MB size limit for the hypervisor binary, we only need to patch one page table entry in the early entry code. Plus one more for the UART. We could try 0x0, which has a low likelihood of colliding with other devices :)

We can fetch the physical addresses from the root cell configuration for the hypervisor and the UART.

However... switching on the MMU needs to be done from an identity mapped location. There's no getting around that, this is an architectural requirement.

But we don't need the identity map forever: we'll just write two entries pointing to the hypervisor physical address; one for the desired virtual address, one for identity mapping. The early entry code can run with identity mapping, but jump to entry() using its virtual address. Soon thereafter, the temporary mappings will be replaced by the permanent ones.

The bootstrap vector address will need adjustment as well.

This won't get rid of all build time dependencies of course: the resulting binary will still depend on the addresses and IRQ numbers for the GIC, etc.
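
Roughly, as an illustration only: assuming 2 MB block mappings at the level that covers the hypervisor region and a single statically allocated early table, the two temporary entries described above would amount to something like the snippet below. In reality every translation level has to be patched for both windows, and the names and the attribute value are placeholders.

#define BLOCK_SHIFT	21			/* 2 MB blocks */
#define BLOCK_SIZE	(1UL << BLOCK_SHIFT)
#define BLOCK_FLAGS	0x701UL			/* placeholder: valid block, AF, normal memory */

static unsigned long early_pt[512] __attribute__((aligned(4096)));

static void early_map_block(unsigned long virt, unsigned long phys)
{
	early_pt[(virt >> BLOCK_SHIFT) & 511] =
		(phys & ~(BLOCK_SIZE - 1)) | BLOCK_FLAGS;
}

static void early_map_hypervisor(unsigned long phys_base, unsigned long link_base)
{
	/* 1:1 window, only needed while the MMU is being turned on */
	early_map_block(phys_base, phys_base);
	/* link-time window (JAILHOUSE_BASE); the entry code jumps here right
	 * after enabling the MMU, and the 1:1 entry can be dropped once the
	 * permanent tables are in place */
	early_map_block(link_base, phys_base);
}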

What do you think of this approach?

Cheers,
Tony

Antonios Motakis

unread,
Jan 21, 2016, 12:56:25 PM1/21/16
to Mark Rutland, jailho...@googlegroups.com, jan.k...@siemens.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
Thanks for the information. I will add it to the device trees, and I'll check that Linux actually receives the virtual timer as well.

Jan Kiszka

unread,
Jan 21, 2016, 1:03:52 PM1/21/16
to Antonios Motakis, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
On 2016-01-21 18:49, Antonios Motakis wrote:
> I thought about this a bit more. I think we can implement a partial 1:1 mapping relatively straightforwardly:
>
> Assuming that we know the virtual address at build time, and a 2MB size limit for the hypervisor binary, we only need to patch one page table entry in the early entry code. Plus one more for the UART. We could try 0x0, which has a low likelihood of colliding with other devices :)

Really? Wouldn't something with a very high address be safer, like it's
done on x86?

>
> We can fetch the physical addresses from the root cell configuration for the hypervisor and the UART.
>
> However... switching on the MMU needs to be done from an identity mapped location. There's no getting around that, this is an architectural requirement.
>
> But we don't need the identity map forever: we'll just write two entries pointing to the hypervisor physical address; one for the desired virtual address, one for identity mapping. The early entry code can run with identity mapping, but jump to entry() using its virtual address. Soon thereafter, the temporary mappings will be replaced by the permanent ones.
>
> The bootstrap vector address will need adjustment as well.
>
> This won't get rid of all build time dependencies of course: the resulting binary will still depend on the addresses and IRQ numbers for the GIC, etc.

Yes, I know, but those parameters are actually config candidates as
well. We should try to reduce build dependencies where possible, rather
than increase them.

>
> What do you think of this approach?

For the long run, I'm still looking for the holy grail, the one approach
for all archs. And 32-bit ARM is still worrying me here, because I do
not yet see that we can apply the same pattern there due to the tight
address space.

Therefore, full 1:1 mapping may become relevant in the end. But if we
can avoid touching the core now for arm64 and even more that arch in a
direction that would possibly also help in a full 1:1 case, I would be
much happier. :)

Jan

--
Siemens AG, Corporate Technology, CT RDA ITP SES-DE

Jan Kiszka

unread,
Jan 22, 2016, 4:19:43 AM1/22/16
to Antonios Motakis, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
All done now in the wip/arm64 branch: I added two of the just posted CI
patches on top of your last round, and then there is the actual
CI-enabling commit for the Seattle board. That one should eventually
also include the jailhouse-config-amd-seattle.h, which is so far part
of a different patch.

The current build breaks because of regressions on ARM, but I suppose
that's already fixed on your side.

Feel free to include my patch in your series. If you want to test it in
advance, you need to register with Travis CI and enable it for your own
github repository.

Antonios Motakis

unread,
Jan 22, 2016, 4:33:20 AM1/22/16
to Jan Kiszka, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com


On 1/21/2016 7:03 PM, Jan Kiszka wrote:
> On 2016-01-21 18:49, Antonios Motakis wrote:
>> I thought about this a bit more. I think we can implement a partial 1:1 mapping relatively straightforwardly:
>>
>> Assuming that we know the virtual address at build time, and a 2MB size limit for the hypervisor binary, we only need to patch one page table entry in the early entry code. Plus one more for the UART. We could try 0x0, which has a low likelihood of colliding with other devices :)
>
> Really? Wouldn't something with a very high address be safer, like it's
> done on x86?

To be honest... who knows, it's up to each SoC...

Anyway we can implement the concept, and agree on an appropriate JAILHOUSE_BASE value as we go.

>
>>
>> We can fetch the physical addresses from the root cell configuration for the hypervisor and the UART.
>>
>> However... switching on the MMU needs to be done from an identity mapped location. There's no getting around that, this is an architectural requirement.
>>
>> But we don't need the identity map forever: we'll just write two entries pointing to the hypervisor physical address; one for the desired virtual address, one for identity mapping. The early entry code can run with identity mapping, but jump to entry() using its virtual address. Soon thereafter, the temporary mappings will be replaced by the permanent ones.
>>
>> The bootstrap vector address will need adjustment as well.
>>
>> This won't get rid of all build time dependencies of course: the resulting binary will still depend on the addresses and IRQ numbers for the GIC, etc.
>
> Yes, I know, but those parameters are actually config candidates as
> well. We should try to reduce build dependencies where possible, rather
> than increase them.
>
>>
>> What do you think of this approach?
>
> For the long run, I'm still looking for the holy grail, the one approach
> for all archs. And 32-bit ARM is still worrying me here, because I do
> not yet see that we can apply the same pattern there due to the tight
> address space.
>
> Therefore, full 1:1 mapping may become relevant in the end. But if we
> can avoid touching the core now for arm64 and even more that arch in a
> direction that would possibly also help in a full 1:1 case, I would be
> much happier. :)

Ok, let's try this then...

Antonios Motakis

unread,
Jan 22, 2016, 4:36:09 AM1/22/16
to Jan Kiszka, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
Yep, ARM is working again on my side. I'll pick up your patches, thank you!

>
> Feel free to include my patch in your series. If you want to test it in
> advance, you need to register with Travis CI and enable it for your own
> github repository.
>
> Jan
>

--

Jan Kiszka

unread,
Jan 22, 2016, 1:47:38 PM1/22/16
to antonios...@huawei.com, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
On 2015-12-18 22:31, antonios...@huawei.com wrote:
> From: Antonios Motakis <antonios...@huawei.com>
>
> Add the jailhouse_hypercall.h header file for AArch64. We will need
> this also from the Linux side, in order to load Jailhouse in memory
> and to issue hypercalls to an already loaded instance of the
> hypervisor.
>
> Signed-off-by: Antonios Motakis <antonios...@huawei.com>
> ---
> .../arch/arm64/include/asm/jailhouse_hypercall.h | 93 ++++++++++++++++++++++
> 1 file changed, 93 insertions(+)
> create mode 100644 hypervisor/arch/arm64/include/asm/jailhouse_hypercall.h
>
> diff --git a/hypervisor/arch/arm64/include/asm/jailhouse_hypercall.h b/hypervisor/arch/arm64/include/asm/jailhouse_hypercall.h
> new file mode 100644
> index 0000000..662b2b1
> --- /dev/null
> +++ b/hypervisor/arch/arm64/include/asm/jailhouse_hypercall.h
> @@ -0,0 +1,93 @@
> +/*
> + * Jailhouse AArch64 support
> + *
> + * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
> + *
> + * Authors:
> + * Antonios Motakis <antonios...@huawei.com>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2. See
> + * the COPYING file in the top-level directory.
> + */
> +
> +#include <jailhouse/config.h>

Not needed, the build system includes this for you.

Jan Kiszka

unread,
Jan 22, 2016, 1:49:29 PM1/22/16
to antonios...@huawei.com, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
On 2015-12-18 22:31, antonios...@huawei.com wrote:
> From: Antonios Motakis <antonios...@huawei.com>
>
> Add the minimum stub functions expected by the rest of the codebase
> to enable building on AArch64. We may implement the missing AArch64
> functionality from here.
>
> Signed-off-by: Antonios Motakis <antonios...@huawei.com>

...

> diff --git a/hypervisor/arch/arm64/include/asm/platform.h b/hypervisor/arch/arm64/include/asm/platform.h
> new file mode 100644
> index 0000000..afd7e72
> --- /dev/null
> +++ b/hypervisor/arch/arm64/include/asm/platform.h
> @@ -0,0 +1,18 @@
> +/*
> + * Jailhouse AArch64 support
> + *
> + * Copyright (C) 2015 Huawei Technologies Duesseldorf GmbH
> + *
> + * Authors:
> + * Antonios Motakis <antonios...@huawei.com>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2. See
> + * the COPYING file in the top-level directory.
> + */
> +
> +#ifndef _JAILHOUSE_ASM_PLATFORM_H
> +#define _JAILHOUSE_ASM_PLATFORM_H
> +
> +#include <jailhouse/config.h>

And here again: implicitly included already.

Jan Kiszka

unread,
Jan 24, 2016, 1:33:00 PM1/24/16
to antonios...@huawei.com, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
On 2015-12-18 22:31, antonios...@huawei.com wrote:
> From: Antonios Motakis <antonios...@huawei.com>
>
> Ho ho ho, just in time for the holidays!
>
> This patch series is an RFC towards AArch64 support in the Jailhouse
> hypervisor. It applies on the latest next branch from upstream, and
> can also be pulled from https://github.com/tvelocity/jailhouse.git
> (branch arm64_v7)
>
> The patch series includes contributions by Claudio Fontana, and
> Dmitry Voytik.
>
> This version of the patch series features significant progress from
> the last one. Not only do we have working inmates now, not only
> we can demonstrate Linux inmates, we can showcase this on two
> targets: besides the ARM Foundation ARMv8 model, we now include
> cell configuration files for a real hardware target, the AMD Seattle
> development board!

I'm trying to reproduce this result on the board we now have in hand.
Unfortunately, I'm not yet getting beyond

Initializing Jailhouse hypervisor v0.5 (196-g06ab410) on CPU 1
Code location: 0x00000082fc000030

I've tried the kernel config you sent me offlist, but as my distro was
missing some not yet identified features there, I switched to the distro
config minus KVM. The kernel is vanilla 4.4.0 release. I added mem=11G
to the command line to reserve "some" memory for hypervisor and cells.
Anything else I should look at?

Thanks,
Jan

Antonios Motakis

unread,
Jan 25, 2016, 6:43:03 AM1/25/16
to Jan Kiszka, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
KVM is the usual culprit; however, you already disabled that...
I use mem=3g, but that doesn't seem likely to be the culprit. Maybe check the root cell config just in case there's overlapping memory.

I could also test on 4.4.0 to see if it still works on my side with that kernel.

Any chance the kernel boots in EL1 instead of EL2, for some reason?

>
> Thanks,

Antonios Motakis

unread,
Jan 25, 2016, 6:48:36 AM1/25/16
to Jan Kiszka, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
Actually, it looks like they are not included implicitly for asm files. So if either include is missing, building the entry.S code fails spectacularly; we reference JAILHOUSE_BASE and UART_BASE for the temporary early page tables.

Taking into account our previous discussion about those page tables, JAILHOUSE_BASE should move back to jailhouse_hypercall.h, so we won't need the include there anymore. However, the dependency in platform.h will still stand, so we can refer to the right UART_BASE.

Do you think I could take a look into the Makefiles and see if config.h can be implicitly included for asm files as well? Or will this break something elsewhere?

Cheers,
Tony

Jan Kiszka

unread,
Jan 25, 2016, 8:26:59 AM1/25/16
to Antonios Motakis, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
Will do.

>
> I could also test on 4.4.0 to see if it still works on my side with that kernel.
>
> Any chance the kernel boots in EL1 instead of EL2, for some reason?

"CPU: All CPU(s) started at EL2" - nope.


In case that matters, here is my BIOS version:

NOTICE: BL3-1:
NOTICE: BL3-1: Built : 06:15:09, Nov 16 2015
INFO: BL3-1: Initializing runtime services
INFO: BL3-1: Preparing for EL3 exit to normal world
INFO: BL3-1: Next image address = 0x8000000000
INFO: BL3-1: Next image spsr = 0x3c9
Boot firmware (version built at 06:19:27 on Nov 16 2015)
[...]

Version 2.17.1249. Copyright (C) 2015 American Megatrends, Inc.
BIOS Date: 11/16/2015 06:15:38 Ver: TOD0000X00
[...]

Antonios Motakis

unread,
Jan 25, 2016, 8:50:19 AM1/25/16
to Jan Kiszka, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
What if you try to un-apply [RFCv7 37/45] hypervisor: arm/arm64: add work around for large SPIs on AMD Seattle ?

This patch is a hack and implies you want to test a very specific configuration... It means the root cell won't receive IRQs for the second xgmac anymore.

It doesn't sound like the problem is there, but let's check.

Jan Kiszka

unread,
Jan 25, 2016, 9:08:22 AM1/25/16
to Antonios Motakis, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
Nope, no change after reverting this.

Antonios Motakis

unread,
Jan 25, 2016, 10:43:10 AM1/25/16
to Jan Kiszka, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
Hello,

I just tried on our board, using kernel 4.4.0 and the kernel configuration from ci/

I'm afraid, it still works here. However, there are now several changes since v7, so I need to publish v8 ASAP so we can test the same code.

Btw with mem=11G, Linux does overlap with the memory we use for some cell demos.

Also, you are running a more recent firmware than we do; we're running a firmware built on May 7 2015. I think upgrading the firmware is a bit tricky...

A current snapshot of my code is on https://github.com/tvelocity/jailhouse/tree/devel

Jan Kiszka

unread,
Jan 25, 2016, 11:32:17 AM1/25/16
to Antonios Motakis, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
On 2016-01-25 16:43, Antonios Motakis wrote:
> Hello,
>
> I just tried on our board, using kernel 4.4.0 and the kernel configuration from ci/
>
> I'm afraid, it still works here. However, there are now several changes since v7, so I need to publish v8 ASAP so we can test the same code.
>
> Btw with mem=11G, Linux does overlap with the memory we use for some cell demos.
>
> Also, you are running a more recent firmware than we do; we're running a firmware built on May 7 2015. I think upgrading the firmware is a bit tricky...
>
> A current snapshot of my code is on https://github.com/tvelocity/jailhouse/tree/devel
>

Thanks, no change. But I made progress: The system config paging_init
sees is broken. Debugging further, maybe a toolchain[-triggered] issue.

Along this, I learned that failing the hypervisor initialization is not
yet working. I tried to return with an error from paging_init, and the
system exploded in infinite exceptions over EL2. Please have a look at
this eventually.

Thanks,
Jan

Antonios Motakis

unread,
Jan 25, 2016, 11:43:34 AM1/25/16
to Jan Kiszka, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
Or maybe it turns out there's still some cache coherency issue! The Seattle has been very sensitive to those things...

Well, thanks, that gives me some stuff to think about...

Hm, any chance your revision of the board might have some changes in the memory map?

>
> Thanks,
> Jan

Jan Kiszka

unread,
Jan 25, 2016, 11:45:14 AM1/25/16
to Antonios Motakis, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
On 2016-01-25 17:43, Antonios Motakis wrote:
>
>
> On 1/25/2016 5:32 PM, Jan Kiszka wrote:
>> On 2016-01-25 16:43, Antonios Motakis wrote:
>>> Hello,
>>>
>>> I just tried on our board, using kernel 4.4.0 and the kernel configuration from ci/
>>>
>>> I'm afraid, it still works here. However, there are now several changes since v7, so I need to publish v8 ASAP so we can test the same code.
>>>
>>> Btw with mem=11G, Linux does overlap with the memory we use for some cell demos.
>>>
>>> Also, you are running a more recent firmware than we do; we're running a firmware built on May 7 2015. I think upgrading the firmware is a bit tricky...
>>>
>>> A current snapshot of my code is on https://github.com/tvelocity/jailhouse/tree/devel
>>>
>>
>> Thanks, no change. But I made progress: The system config paging_init
>> sees is broken. Debugging further, maybe a toolchain[-triggered] issue.
>>
>> Along this, I learned that failing the hypervisor initialization is not
>> yet working. I tried to return with an error from paging_init, and the
>> system exploded in infinite exceptions over EL2. Please have a look at
>> this eventually.
>
> Or maybe it turns out there's still some cache coherency issue! The Seattle has been very sensitive to those things...
>
> Well, thanks, that gives me some stuff to think about...
>
> Hm, any chance your revision of the board might have some changes in the memory map?

Here is my /proc/iomem:


40000000-bfffffff : /smb/pcie@f0000000
e0030000-e0030fff : /smb/gpio@e0030000
e0030000-e0030fff : /smb/gpio@e0030000
e0050000-e0050fff : /smb/i2c@e0050000
e0080000-e0080fff : /smb/gpio@e0080000
e0080000-e0080fff : /smb/gpio@e0080000
e0100000-e010ffff : /smb/ccp@e0100000
e0300000-e03effff : /smb/sata@e0300000
e0600000-e060ffff : /smb/smmu@e0600000
e0700000-e077ffff : /smb/xgmac@e0700000
e0780000-e07fffff : /smb/xgmac@e0700000
e0800000-e080ffff : /smb/smmu@e0800000
e0900000-e097ffff : /smb/xgmac@e0900000
e0980000-e09fffff : /smb/xgmac@e0900000
e1000000-e1000fff : /smb/i2c@e1000000
e1010000-e1010fff : /smb/serial@e1010000
e1010000-e1010fff : /smb/serial@e1010000
e1020000-e1020fff : /smb/ssp@e1020000
e1030000-e1030fff : /smb/ssp@e1030000
e1030000-e1030fff : ssp-pl022
e1050000-e1050fff : /smb/gpio@e1050000
e1050000-e1050fff : /smb/gpio@e1050000
e1240800-e1240bff : /smb/phy@e1240800
e1240c00-e1240fff : /smb/phy@e1240c00
e1250000-e125005f : /smb/phy@e1240800
e1250080-e12500df : /smb/phy@e1240c00
e12500f8-e12500fb : /smb/phy@e1240800
e12500fc-e12500ff : /smb/phy@e1240c00
e8000000-e8ffffff : e8000000.ccn
f0000000-ffffffff : Configuration Space
100000000-7fffffffff : /smb/pcie@f0000000
8001000000-80c0ffffff : System RAM
8001080000-80019effff : Kernel code
8001b00000-8001dcffff : Kernel data


I'm currently checking if the driver puts the config elsewhere than the
hypervisor expects.

Jan

Antonios Motakis

unread,
Jan 25, 2016, 11:52:17 AM1/25/16
to Jan Kiszka, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
I think that indeed you have more stuff there than I do :)
Presumably my kernel does not try to initialize all of the board's devices (e.g. PCI), so it doesn't map their regions. Maybe it's just a matter of adding the missing areas to the cell configuration.
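
Purely as an illustration - the region is picked from your iomem dump
above and the flags are an assumption, not a tested change - one more
entry in the root cell's .mem_regions[] would look roughly like this:

	/* illustrative only: expose one more device region to the root cell */
	{
		.phys_start = 0xe0100000,	/* /smb/ccp@e0100000 */
		.virt_start = 0xe0100000,
		.size = 0x10000,
		.flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
			JAILHOUSE_MEM_IO,
	},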

Jan Kiszka

unread,
Jan 25, 2016, 12:05:06 PM1/25/16
to Antonios Motakis, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
Looks like the right trace: there is a mismatch between __page_pool and
hypervisor_header.core_size.

>
> I think that indeed you have more stuff there than I do :)
> Presumably my system does not try to initialize all of the board (e.g. PCI), so it doesn't map them. Maybe it's just a matter of adding the missing areas to the cell configuration.

PCI seems dead here as well: the plugged multiport NIC is not detected.
But I'll recheck the config once I get past the basic issues.

Jan

Jan Kiszka

unread,
Jan 25, 2016, 12:12:08 PM1/25/16
to Antonios Motakis, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
PAGE_ALIGN... My kernel runs on 64K pages while Jailhouse uses 4K - a
latent bug in the driver <-> hypervisor interaction. Suggestions on how
to resolve it cleanly are welcome.
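
Roughly, what it comes down to - the macro and field names below are
only assumptions for illustration, not the actual driver code - is
rounding with Jailhouse's fixed 4K page size instead of the kernel's
PAGE_SIZE:

/*
 * Sketch: sizes shared between driver and hypervisor must be aligned
 * to Jailhouse's internal 4K page size, not to the kernel's PAGE_SIZE
 * (which is 64K on this setup).
 */
#define JAILHOUSE_PAGE_SIZE	0x1000UL
#define JAILHOUSE_PAGE_ALIGN(s) \
	(((s) + JAILHOUSE_PAGE_SIZE - 1) & ~(JAILHOUSE_PAGE_SIZE - 1))

hv_core_and_percpu_size = JAILHOUSE_PAGE_ALIGN(header->core_size) +
			  header->max_cpus * header->percpu_size;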

Jan

Jan Kiszka

unread,
Jan 25, 2016, 12:26:04 PM1/25/16
to Antonios Motakis, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
Too late, patch sent ;). With that applied:


Initializing Jailhouse hypervisor v0.5 (247-g3486e6f-dirty) on CPU 5
Code location: 0x00000082fc000030
Page pool usage after early setup: mem 32/16362, remap 128/32768
Initializing processors:
CPU 5... OK
CPU 0... OK
CPU 2... OK
CPU 1... OK
CPU 6... OK
CPU 3... OK
CPU 7... OK
CPU 4... OK
Page pool usage after late setup: mem 46/16362, remap 128/32768
Activating hypervisor

8)

Jan

Jan Kiszka

unread,
Jan 25, 2016, 12:41:33 PM1/25/16
to Antonios Motakis, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
On 2016-01-25 18:25, Jan Kiszka wrote:
> Initializing Jailhouse hypervisor v0.5 (247-g3486e6f-dirty) on CPU 5
> Code location: 0x00000082fc000030
> Page pool usage after early setup: mem 32/16362, remap 128/32768
> Initializing processors:
> CPU 5... OK
> CPU 0... OK
> CPU 2... OK
> CPU 1... OK
> CPU 6... OK
> CPU 3... OK
> CPU 7... OK
> CPU 4... OK
> Page pool usage after late setup: mem 46/16362, remap 128/32768
> Activating hypervisor
>
> 8)

Also the gic-demo works - well, mostly: In one case the latency first
increased, then the test simply stopped.

Timer fired, jitter: 2431 ns, min: 2343 ns, max: 2975 ns
Timer fired, jitter: 2223 ns, min: 2223 ns, max: 2975 ns
Timer fired, jitter: 3959 ns, min: 2223 ns, max: 3959 ns
Timer fired, jitter: 3607 ns, min: 2223 ns, max: 3959 ns
Timer fired, jitter: 3887 ns, min: 2223 ns, max: 3959 ns
Timer fired, jitter: 4175 ns, min: 2223 ns, max: 4175 ns
Timer fired, jitter: 3751 ns, min: 2223 ns, max: 4175 ns
[no more output]

Root was fine all the time, and I was able to recover that cell, but
this remains strange. Could it be that I need to disable some kind of
power management for the root kernel?


And one more: jailhouse disable is also a todo? Just tried it while
running the gic-demo and it crashed the root cell. I didn't try yet
right after enabling, though.

Jan

Jan Kiszka

unread,
Jan 25, 2016, 1:20:16 PM1/25/16
to Antonios Motakis, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
On 2016-01-25 18:41, Jan Kiszka wrote:
> Also the gic-demo works - well, mostly: In one case the latency first
> increased, then the test simply stopped.
>
> Timer fired, jitter: 2431 ns, min: 2343 ns, max: 2975 ns
> Timer fired, jitter: 2223 ns, min: 2223 ns, max: 2975 ns
> Timer fired, jitter: 3959 ns, min: 2223 ns, max: 3959 ns
> Timer fired, jitter: 3607 ns, min: 2223 ns, max: 3959 ns
> Timer fired, jitter: 3887 ns, min: 2223 ns, max: 3959 ns
> Timer fired, jitter: 4175 ns, min: 2223 ns, max: 4175 ns
> Timer fired, jitter: 3751 ns, min: 2223 ns, max: 4175 ns
> [no more output]
>
> Root was fine all the time, and I was able to recover that cell, but
> this remains strange. Could it be that I need to disable some kind of
> power management for the root kernel?

This bug is apparently related to both the root cell and the gic-demo
using the same UART: When I enter something on the root console, the
demo locks up. Non-issue.

>
>
> And one more: jailhouse disable is also a todo? Just tried it while
> running the gic-demo and it crashed the root cell. I didn't try yet
> right after enabling, though.

Retested, seems to be a fundamentally missing feature :). Would be
nice-to-fix, of course, but it need not be a show-stopper for an
upstream merge.

Jan

Jan Kiszka

unread,
Jan 26, 2016, 2:00:17 AM1/26/16
to Antonios Motakis, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
I was able to eliminate config.h from platform.h this way:

diff --git a/hypervisor/Makefile b/hypervisor/Makefile
index c037ed0..7ecd239 100644
--- a/hypervisor/Makefile
+++ b/hypervisor/Makefile
@@ -39,6 +39,7 @@ endif

ifneq ($(wildcard $(obj)/include/jailhouse/config.h),)
KBUILD_CFLAGS += -include $(obj)/include/jailhouse/config.h
+KBUILD_AFLAGS += -include $(obj)/include/jailhouse/config.h
endif

CORE_OBJECTS = setup.o printk.o paging.o control.o lib.o mmio.o
diff --git a/hypervisor/arch/arm/Makefile b/hypervisor/arch/arm/Makefile
index efa56c8..6f93dc5 100644
--- a/hypervisor/arch/arm/Makefile
+++ b/hypervisor/arch/arm/Makefile
@@ -12,7 +12,7 @@

include $(CONFIG_MK)

-KBUILD_AFLAGS := $(filter-out -include asm/unified.h,$(KBUILD_AFLAGS))
+KBUILD_AFLAGS := $(filter-out "-include asm/unified.h",$(KBUILD_AFLAGS))

always := built-in.o

diff --git a/hypervisor/arch/arm64/Makefile b/hypervisor/arch/arm64/Makefile
index 5f13642..a138bd2 100644
--- a/hypervisor/arch/arm64/Makefile
+++ b/hypervisor/arch/arm64/Makefile
@@ -12,7 +12,7 @@

include $(CONFIG_MK)

-KBUILD_AFLAGS := $(filter-out -include asm/unified.h,$(KBUILD_AFLAGS))
+KBUILD_AFLAGS := $(filter-out "-include asm/unified.h",$(KBUILD_AFLAGS))

always := built-in.o


I'll push the first two hunks as patches soon, so you'll just need to
include the last one in your series.

Jan Kiszka

unread,
Jan 26, 2016, 2:38:26 AM1/26/16
to Antonios Motakis, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com
On 2016-01-26 08:00, Jan Kiszka wrote:
>> Taking into account our previous discussion about those page tables, JAILHOUSE_BASE should move back to jailhouse_hypercall.h, so we won't need the include there anymore. However, the dependency in platform.h will still stand, so we can refer to the right UART_BASE.

Wait - why do you need UART_BASE here? We are on the way to remove that
constant from the hypervisor. For ARM, it is already delivered via the
system config. x86 will follow soon. Can't you read it from there as well?
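
Just to sketch what I mean - this assumes the hypervisor's global
system_config pointer, and the actual ARM code may look different:

/* derive the debug UART base from the system config at runtime
 * instead of relying on a compile-time UART_BASE constant */
void *uart_base = (void *)(unsigned long)
	system_config->debug_uart.phys_start;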

Antonios Motakis

unread,
Jan 26, 2016, 3:33:40 AM1/26/16
to Jan Kiszka, jailho...@googlegroups.com, claudio...@huawei.com, jani.k...@huawei.com, dmitry...@huawei.com, veacesla...@huawei.com, jean-phili...@arm.com, marc.z...@arm.com, edgar.i...@xilinx.com, wuqi...@huawei.com


On 1/25/2016 7:20 PM, Jan Kiszka wrote:
> On 2016-01-25 18:41, Jan Kiszka wrote:
>> Also the gic-demo works - well, mostly: In one case the latency first
>> increased, then the test simply stopped.
>>
>> Timer fired, jitter: 2431 ns, min: 2343 ns, max: 2975 ns
>> Timer fired, jitter: 2223 ns, min: 2223 ns, max: 2975 ns
>> Timer fired, jitter: 3959 ns, min: 2223 ns, max: 3959 ns
>> Timer fired, jitter: 3607 ns, min: 2223 ns, max: 3959 ns
>> Timer fired, jitter: 3887 ns, min: 2223 ns, max: 3959 ns
>> Timer fired, jitter: 4175 ns, min: 2223 ns, max: 4175 ns
>> Timer fired, jitter: 3751 ns, min: 2223 ns, max: 4175 ns
>> [no more output]
>>
>> Root was find all the time, and I was able to recover that cell, but
>> this remains strange. Could it be that I need to disable some kind of
>> power management for the root kernel?
>
> This bug is apparently related to both the root cell and the gic-demo
> using the same UART: When I enter something on the root console, the
> demo locks up. Non-issue.

Yeah, sharing the UART is a necessary compromise in order to demo something on the Seattle... It is a bit more reliable when using SSH to control the root cell.

>
>>
>>
>> And one more: jailhouse disable is also a todo? Just tried it while
>> running the gic-demo and it crashed the root cell. I didn't try yet
>> right after enabling, though.
>
> Retested, seems to be a fundamentally missing feature :). Would be
> nice-to-fix, of course, but it need not be a show-stopper for an
> upstream merge.

It used to work, then it broke, then I fixed it. Apparently it broke again :)
Will look at it!