[PATCH 00/21] x86: Trenchboot Secure Launch DRTM (Xen)


Sergii Dmytruk

Apr 22, 2025, 11:07:05 AM
to xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, Lukasz Hawrylko, Daniel P. Smith, Mateusz Mówka, Anthony PERARD, Michal Orzel, Julien Grall, Stefano Stabellini, Marek Marczykowski-Górecki, trenchbo...@googlegroups.com
The aim of the [TrenchBoot] project is to provide an implementation of
DRTM that is generic enough to cover various use cases:
- Intel TXT and AMD SKINIT on x86 CPUs
- legacy and UEFI boot
- TPM1.2 and TPM2.0
- (in the future) DRTM on Arm CPUs

DRTM is a form of measured launch that starts on request rather than
at the start of a boot cycle. One of its advantages is that it does
not include the firmware in the chain of trust.

Xen already supports DRTM via [tboot], which targets Intel TXT only.
tboot encapsulates some of the DRTM details within itself, while with
TrenchBoot, Xen (or Linux) is meant to be a self-contained payload for
a TrenchBoot-enabled bootloader (think GRUB). The one exception is the
UEFI case, which requires calling back into the bootloader to initiate
DRTM; this is necessary to give Xen a chance to query all the
information it needs from the firmware before performing the DRTM start.

From the above, tboot might seem like the more abstracted approach, but
in reality the payload needs DRTM-specific knowledge either way.
TrenchBoot in principle allows independent implementations of
bootloaders and payloads that are compatible with each other.

The "x86/boot: choose AP stack based on APIC ID" patch is shared with
the [Parallelize AP bring-up] series, which is required here because
Intel TXT always releases all APs simultaneously. The rest of the
patches are unique to this series.

-----

[TrenchBoot]: https://trenchboot.org/
[tboot]: https://sourceforge.net/p/tboot/wiki/Home/
[Parallelize AP bring-up]: https://lore.kernel.org/xen-devel/cover.1699982111....@3mdeb.com/

-----

Kacper Stojek (2):
x86/boot: add MLE header and new entry point
xen/arch/x86: reserve TXT memory

Krystian Hebel (7):
x86/include/asm/intel_txt.h: constants and accessors for TXT registers
and heap
x86/boot/slaunch_early: early TXT checks and boot data retrieval
x86/intel_txt.c: restore boot MTRRs
lib/sha1.c: add file
x86/tpm.c: code for early hashing and extending PCRs (for TPM1.2)
x86/boot: choose AP stack based on APIC ID
x86/smpboot.c: TXT AP bringup

Michał Żygowski (2):
x86/hvm: Check for VMX in SMX when slaunch active
x86/cpu: report SMX, TXT and SKINIT capabilities

Sergii Dmytruk (10):
include/xen/slr_table.h: Secure Launch Resource Table definitions
x86/boot/slaunch_early: implement early initialization
x86/mtrr: expose functions for pausing caching
lib/sha256.c: add file
x86/tpm.c: support extending PCRs of TPM2.0
x86/tpm.c: implement event log for TPM2.0
arch/x86: process DRTM policy
x86/boot: find MBI and SLRT on AMD
arch/x86: support slaunch with AMD SKINIT
x86/slaunch: support EFI boot

.gitignore | 1 +
docs/hypervisor-guide/x86/how-xen-boots.rst | 7 +
xen/arch/x86/Makefile | 12 +-
xen/arch/x86/boot/Makefile | 10 +-
xen/arch/x86/boot/head.S | 250 +++++
xen/arch/x86/boot/slaunch_early.c | 105 ++
xen/arch/x86/boot/trampoline.S | 40 +-
xen/arch/x86/boot/x86_64.S | 42 +-
xen/arch/x86/cpu/amd.c | 14 +
xen/arch/x86/cpu/cpu.h | 1 +
xen/arch/x86/cpu/hygon.c | 1 +
xen/arch/x86/cpu/intel.c | 44 +
xen/arch/x86/cpu/mtrr/generic.c | 51 +-
xen/arch/x86/e820.c | 5 +
xen/arch/x86/efi/efi-boot.h | 90 +-
xen/arch/x86/efi/fixmlehdr.c | 122 +++
xen/arch/x86/hvm/vmx/vmcs.c | 3 +-
xen/arch/x86/include/asm/apicdef.h | 4 +
xen/arch/x86/include/asm/intel_txt.h | 452 ++++++++
xen/arch/x86/include/asm/mm.h | 3 +
xen/arch/x86/include/asm/msr-index.h | 3 +
xen/arch/x86/include/asm/mtrr.h | 8 +
xen/arch/x86/include/asm/processor.h | 1 +
xen/arch/x86/include/asm/slaunch.h | 91 ++
xen/arch/x86/include/asm/tpm.h | 19 +
xen/arch/x86/intel_txt.c | 177 ++++
xen/arch/x86/setup.c | 32 +-
xen/arch/x86/slaunch.c | 464 ++++++++
xen/arch/x86/smpboot.c | 57 +
xen/arch/x86/tboot.c | 20 +-
xen/arch/x86/tpm.c | 1057 +++++++++++++++++++
xen/common/efi/boot.c | 4 +
xen/common/efi/runtime.c | 1 +
xen/include/xen/efi.h | 1 +
xen/include/xen/sha1.h | 12 +
xen/include/xen/sha256.h | 12 +
xen/include/xen/slr_table.h | 274 +++++
xen/lib/Makefile | 2 +
xen/lib/sha1.c | 240 +++++
xen/lib/sha256.c | 238 +++++
40 files changed, 3914 insertions(+), 56 deletions(-)
create mode 100644 xen/arch/x86/boot/slaunch_early.c
create mode 100644 xen/arch/x86/efi/fixmlehdr.c
create mode 100644 xen/arch/x86/include/asm/intel_txt.h
create mode 100644 xen/arch/x86/include/asm/slaunch.h
create mode 100644 xen/arch/x86/include/asm/tpm.h
create mode 100644 xen/arch/x86/intel_txt.c
create mode 100644 xen/arch/x86/slaunch.c
create mode 100644 xen/arch/x86/tpm.c
create mode 100644 xen/include/xen/sha1.h
create mode 100644 xen/include/xen/sha256.h
create mode 100644 xen/include/xen/slr_table.h
create mode 100644 xen/lib/sha1.c
create mode 100644 xen/lib/sha256.c


base-commit: df68a4cb7ed9418f0c5af56a717714b5280737e4
prerequisite-patch-id: 1c3014908bc6e1a5cab8de609270efdb1c412335
prerequisite-patch-id: 850544a1f9639283f2269ea75b630400dd1976aa
prerequisite-patch-id: 69e042a46f8ac0e3f85853e77082caf250719a8d
prerequisite-patch-id: d6c6d27bbe8ff2f5d96852a6eed72a4c99b61356
--
2.49.0

Sergii Dmytruk

Apr 22, 2025, 11:07:09 AM
to xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, Lukasz Hawrylko, Daniel P. Smith, Mateusz Mówka, trenchbo...@googlegroups.com
From: Krystian Hebel <krystia...@3mdeb.com>

The file contains the base addresses of the TXT register spaces,
register offsets, error codes and inline functions for accessing
structures stored on the TXT heap.

Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/include/asm/intel_txt.h | 272 +++++++++++++++++++++++++++
xen/arch/x86/tboot.c | 20 +-
2 files changed, 274 insertions(+), 18 deletions(-)
create mode 100644 xen/arch/x86/include/asm/intel_txt.h

diff --git a/xen/arch/x86/include/asm/intel_txt.h b/xen/arch/x86/include/asm/intel_txt.h
new file mode 100644
index 0000000000..2cc6eb5be9
--- /dev/null
+++ b/xen/arch/x86/include/asm/intel_txt.h
@@ -0,0 +1,272 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+/*
+ * TXT configuration registers (offsets from TXT_{PUB, PRIV}_CONFIG_REGS_BASE)
+ */
+#define TXT_PUB_CONFIG_REGS_BASE 0xfed30000
+#define TXT_PRIV_CONFIG_REGS_BASE 0xfed20000
+
+/*
+ * The same set of registers is exposed twice (with different permissions), and
+ * the two spaces are allocated contiguously with page alignment.
+ */
+#define NR_TXT_CONFIG_SIZE \
+ (TXT_PUB_CONFIG_REGS_BASE - TXT_PRIV_CONFIG_REGS_BASE)
+
+/* Offsets from pub/priv config space. */
+#define TXTCR_STS 0x0000
+#define TXTCR_ESTS 0x0008
+#define TXTCR_ERRORCODE 0x0030
+#define TXTCR_CMD_RESET 0x0038
+#define TXTCR_CMD_CLOSE_PRIVATE 0x0048
+#define TXTCR_DIDVID 0x0110
+#define TXTCR_VER_EMIF 0x0200
+#define TXTCR_CMD_UNLOCK_MEM_CONFIG 0x0218
+#define TXTCR_SINIT_BASE 0x0270
+#define TXTCR_SINIT_SIZE 0x0278
+#define TXTCR_MLE_JOIN 0x0290
+#define TXTCR_HEAP_BASE 0x0300
+#define TXTCR_HEAP_SIZE 0x0308
+#define TXTCR_SCRATCHPAD 0x0378
+#define TXTCR_CMD_OPEN_LOCALITY1 0x0380
+#define TXTCR_CMD_CLOSE_LOCALITY1 0x0388
+#define TXTCR_CMD_OPEN_LOCALITY2 0x0390
+#define TXTCR_CMD_CLOSE_LOCALITY2 0x0398
+#define TXTCR_CMD_SECRETS 0x08e0
+#define TXTCR_CMD_NO_SECRETS 0x08e8
+#define TXTCR_E2STS 0x08f0
+
+/*
+ * Secure Launch Defined Error Codes used in MLE-initiated TXT resets.
+ *
+ * TXT Specification
+ * Appendix I ACM Error Codes
+ */
+#define SLAUNCH_ERROR_GENERIC 0xc0008001
+#define SLAUNCH_ERROR_TPM_INIT 0xc0008002
+#define SLAUNCH_ERROR_TPM_INVALID_LOG20 0xc0008003
+#define SLAUNCH_ERROR_TPM_LOGGING_FAILED 0xc0008004
+#define SLAUNCH_ERROR_REGION_STRADDLE_4GB 0xc0008005
+#define SLAUNCH_ERROR_TPM_EXTEND 0xc0008006
+#define SLAUNCH_ERROR_MTRR_INV_VCNT 0xc0008007
+#define SLAUNCH_ERROR_MTRR_INV_DEF_TYPE 0xc0008008
+#define SLAUNCH_ERROR_MTRR_INV_BASE 0xc0008009
+#define SLAUNCH_ERROR_MTRR_INV_MASK 0xc000800a
+#define SLAUNCH_ERROR_MSR_INV_MISC_EN 0xc000800b
+#define SLAUNCH_ERROR_INV_AP_INTERRUPT 0xc000800c
+#define SLAUNCH_ERROR_INTEGER_OVERFLOW 0xc000800d
+#define SLAUNCH_ERROR_HEAP_WALK 0xc000800e
+#define SLAUNCH_ERROR_HEAP_MAP 0xc000800f
+#define SLAUNCH_ERROR_REGION_ABOVE_4GB 0xc0008010
+#define SLAUNCH_ERROR_HEAP_INVALID_DMAR 0xc0008011
+#define SLAUNCH_ERROR_HEAP_DMAR_SIZE 0xc0008012
+#define SLAUNCH_ERROR_HEAP_DMAR_MAP 0xc0008013
+#define SLAUNCH_ERROR_HI_PMR_BASE 0xc0008014
+#define SLAUNCH_ERROR_HI_PMR_SIZE 0xc0008015
+#define SLAUNCH_ERROR_LO_PMR_BASE 0xc0008016
+#define SLAUNCH_ERROR_LO_PMR_SIZE 0xc0008017
+#define SLAUNCH_ERROR_LO_PMR_MLE 0xc0008018
+#define SLAUNCH_ERROR_INITRD_TOO_BIG 0xc0008019
+#define SLAUNCH_ERROR_HEAP_ZERO_OFFSET 0xc000801a
+#define SLAUNCH_ERROR_WAKE_BLOCK_TOO_SMALL 0xc000801b
+#define SLAUNCH_ERROR_MLE_BUFFER_OVERLAP 0xc000801c
+#define SLAUNCH_ERROR_BUFFER_BEYOND_PMR 0xc000801d
+#define SLAUNCH_ERROR_OS_SINIT_BAD_VERSION 0xc000801e
+#define SLAUNCH_ERROR_EVENTLOG_MAP 0xc000801f
+#define SLAUNCH_ERROR_TPM_NUMBER_ALGS 0xc0008020
+#define SLAUNCH_ERROR_TPM_UNKNOWN_DIGEST 0xc0008021
+#define SLAUNCH_ERROR_TPM_INVALID_EVENT 0xc0008022
+
+#define SLAUNCH_BOOTLOADER_MAGIC 0x4c534254
+
+#ifndef __ASSEMBLY__
+
+/* Need to differentiate between pre- and post paging enabled. */
+#ifdef __EARLY_SLAUNCH__
+#include <xen/macros.h>
+#define _txt(x) _p(x)
+#else
+#include <xen/types.h>
+#include <asm/page.h> // __va()
+#define _txt(x) __va(x)
+#endif
+
+/*
+ * Always use the private space, as some of the registers are either read-only
+ * or not present in the public space.
+ */
+static inline uint64_t read_txt_reg(int reg_no)
+{
+ volatile uint64_t *reg = _txt(TXT_PRIV_CONFIG_REGS_BASE + reg_no);
+ return *reg;
+}
+
+static inline void write_txt_reg(int reg_no, uint64_t val)
+{
+ volatile uint64_t *reg = _txt(TXT_PRIV_CONFIG_REGS_BASE + reg_no);
+ *reg = val;
+ /* This serves as TXT register barrier */
+ (void)read_txt_reg(TXTCR_ESTS);
+}
+
+static inline void txt_reset(uint32_t error)
+{
+ write_txt_reg(TXTCR_ERRORCODE, error);
+ write_txt_reg(TXTCR_CMD_NO_SECRETS, 1);
+ write_txt_reg(TXTCR_CMD_UNLOCK_MEM_CONFIG, 1);
+ write_txt_reg(TXTCR_CMD_RESET, 1);
+ while (1);
+}
+
+/*
+ * Secure Launch defined OS/MLE TXT Heap table
+ */
+struct txt_os_mle_data {
+ uint32_t version;
+ uint32_t reserved;
+ uint64_t slrt;
+ uint64_t txt_info;
+ uint32_t ap_wake_block;
+ uint32_t ap_wake_block_size;
+ uint8_t mle_scratch[64];
+} __packed;
+
+/*
+ * TXT specification defined BIOS data TXT Heap table
+ */
+struct txt_bios_data {
+ uint32_t version; /* Currently 5 for TPM 1.2 and 6 for TPM 2.0 */
+ uint32_t bios_sinit_size;
+ uint64_t reserved1;
+ uint64_t reserved2;
+ uint32_t num_logical_procs;
+ /* Versions >= 3 && < 5 */
+ uint32_t sinit_flags;
+ /* Versions >= 5 with updates in version 6 */
+ uint32_t mle_flags;
+ /* Versions >= 4 */
+ /* Ext Data Elements */
+} __packed;
+
+/*
+ * TXT specification defined OS/SINIT TXT Heap table
+ */
+struct txt_os_sinit_data {
+ uint32_t version; /* Currently 6 for TPM 1.2 and 7 for TPM 2.0 */
+ uint32_t flags; /* Reserved in version 6 */
+ uint64_t mle_ptab;
+ uint64_t mle_size;
+ uint64_t mle_hdr_base;
+ uint64_t vtd_pmr_lo_base;
+ uint64_t vtd_pmr_lo_size;
+ uint64_t vtd_pmr_hi_base;
+ uint64_t vtd_pmr_hi_size;
+ uint64_t lcp_po_base;
+ uint64_t lcp_po_size;
+ uint32_t capabilities;
+ /* Version = 5 */
+ uint64_t efi_rsdt_ptr; /* RSD*P* in versions >= 6 */
+ /* Versions >= 6 */
+ /* Ext Data Elements */
+} __packed;
+
+/*
+ * TXT specification defined SINIT/MLE TXT Heap table
+ */
+struct txt_sinit_mle_data {
+ uint32_t version; /* Current values are 6 through 9 */
+ /* Versions <= 8, fields until lcp_policy_control must be 0 for >= 9 */
+ uint8_t bios_acm_id[20];
+ uint32_t edx_senter_flags;
+ uint64_t mseg_valid;
+ uint8_t sinit_hash[20];
+ uint8_t mle_hash[20];
+ uint8_t stm_hash[20];
+ uint8_t lcp_policy_hash[20];
+ uint32_t lcp_policy_control;
+ /* Versions >= 7 */
+ uint32_t rlp_wakeup_addr;
+ uint32_t reserved;
+ uint32_t num_of_sinit_mdrs;
+ uint32_t sinit_mdrs_table_offset;
+ uint32_t sinit_vtd_dmar_table_size;
+ uint32_t sinit_vtd_dmar_table_offset;
+ /* Versions >= 8 */
+ uint32_t processor_scrtm_status;
+ /* Versions >= 9 */
+ /* Ext Data Elements */
+} __packed;
+
+/*
+ * Functions to extract data from the Intel TXT Heap Memory. The layout
+ * of the heap is as follows:
+ * +------------------------------------+
+ * | Size of Bios Data table (uint64_t) |
+ * +------------------------------------+
+ * | Bios Data table |
+ * +------------------------------------+
+ * | Size of OS MLE table (uint64_t) |
+ * +------------------------------------+
+ * | OS MLE table |
+ * +------------------------------------+
+ * | Size of OS SINIT table (uint64_t) |
+ * +------------------------------------+
+ * | OS SINIT table |
+ * +------------------------------------+
+ * | Size of SINIT MLE table (uint64_t) |
+ * +------------------------------------+
+ * | SINIT MLE table |
+ * +------------------------------------+
+ *
+ * NOTE: the table size fields include the 8-byte size field itself.
+ */
+static inline uint64_t txt_bios_data_size(void *heap)
+{
+ return *((uint64_t *)heap) - sizeof(uint64_t);
+}
+
+static inline void *txt_bios_data_start(void *heap)
+{
+ return heap + sizeof(uint64_t);
+}
+
+static inline uint64_t txt_os_mle_data_size(void *heap)
+{
+ return *((uint64_t *)(txt_bios_data_start(heap) +
+ txt_bios_data_size(heap))) -
+ sizeof(uint64_t);
+}
+
+static inline void *txt_os_mle_data_start(void *heap)
+{
+ return txt_bios_data_start(heap) + txt_bios_data_size(heap) +
+ sizeof(uint64_t);
+}
+
+static inline uint64_t txt_os_sinit_data_size(void *heap)
+{
+ return *((uint64_t *)(txt_os_mle_data_start(heap) +
+ txt_os_mle_data_size(heap))) -
+ sizeof(uint64_t);
+}
+
+static inline void *txt_os_sinit_data_start(void *heap)
+{
+ return txt_os_mle_data_start(heap) + txt_os_mle_data_size(heap) +
+ sizeof(uint64_t);
+}
+
+static inline uint64_t txt_sinit_mle_data_size(void *heap)
+{
+ return *((uint64_t *)(txt_os_sinit_data_start(heap) +
+ txt_os_sinit_data_size(heap))) -
+ sizeof(uint64_t);
+}
+
+static inline void *txt_sinit_mle_data_start(void *heap)
+{
+ return txt_os_sinit_data_start(heap) + txt_os_sinit_data_size(heap) +
+ sizeof(uint64_t);
+}
+
+#endif /* __ASSEMBLY__ */
diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
index d5db60d335..f68354c374 100644
--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -15,6 +15,7 @@
#include <asm/tboot.h>
#include <asm/setup.h>
#include <asm/trampoline.h>
+#include <asm/intel_txt.h>

#include <crypto/vmac.h>

@@ -35,23 +36,6 @@ static uint64_t __initdata sinit_base, __initdata sinit_size;

static bool __ro_after_init is_vtd;

-/*
- * TXT configuration registers (offsets from TXT_{PUB, PRIV}_CONFIG_REGS_BASE)
- */
-
-#define TXT_PUB_CONFIG_REGS_BASE 0xfed30000
-#define TXT_PRIV_CONFIG_REGS_BASE 0xfed20000
-
-/* # pages for each config regs space - used by fixmap */
-#define NR_TXT_CONFIG_PAGES ((TXT_PUB_CONFIG_REGS_BASE - \
- TXT_PRIV_CONFIG_REGS_BASE) >> PAGE_SHIFT)
-
-/* offsets from pub/priv config space */
-#define TXTCR_SINIT_BASE 0x0270
-#define TXTCR_SINIT_SIZE 0x0278
-#define TXTCR_HEAP_BASE 0x0300
-#define TXTCR_HEAP_SIZE 0x0308
-
#define SHA1_SIZE 20
typedef uint8_t sha1_hash_t[SHA1_SIZE];

@@ -409,7 +393,7 @@ int __init tboot_protect_mem_regions(void)

/* TXT Private Space */
rc = e820_change_range_type(&e820, TXT_PRIV_CONFIG_REGS_BASE,
- TXT_PRIV_CONFIG_REGS_BASE + NR_TXT_CONFIG_PAGES * PAGE_SIZE,
+ TXT_PRIV_CONFIG_REGS_BASE + NR_TXT_CONFIG_SIZE,
E820_RESERVED, E820_UNUSABLE);
if ( !rc )
return 0;
--
2.49.0

Sergii Dmytruk

Apr 22, 2025, 11:07:12 AM
to xen-...@lists.xenproject.org, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini, trenchbo...@googlegroups.com
The file provides constants, structures and several helper functions
for parsing the SLRT.

Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/include/xen/slr_table.h | 274 ++++++++++++++++++++++++++++++++++++
1 file changed, 274 insertions(+)
create mode 100644 xen/include/xen/slr_table.h

diff --git a/xen/include/xen/slr_table.h b/xen/include/xen/slr_table.h
new file mode 100644
index 0000000000..e9dbac5d0a
--- /dev/null
+++ b/xen/include/xen/slr_table.h
@@ -0,0 +1,274 @@
+/* SPDX-License-Identifier: GPL-3.0-or-later */
+
+/*
+ * Copyright (C) 2023 Oracle and/or its affiliates.
+ *
+ * Secure Launch Resource Table definitions
+ */
+
+#ifndef _SLR_TABLE_H
+#define _SLR_TABLE_H
+
+#include <xen/types.h>
+
+#define UEFI_SLR_TABLE_GUID \
+ { 0x877a9b2a, 0x0385, 0x45d1, { 0xa0, 0x34, 0x9d, 0xac, 0x9c, 0x9e, 0x56, 0x5f } }
+
+/* SLR table header values */
+#define SLR_TABLE_MAGIC 0x4452544d
+#define SLR_TABLE_REVISION 1
+
+/* Current revisions for the policy and UEFI config */
+#define SLR_POLICY_REVISION 1
+#define SLR_UEFI_CONFIG_REVISION 1
+
+/* SLR defined architectures */
+#define SLR_INTEL_TXT 1
+#define SLR_AMD_SKINIT 2
+
+/* SLR defined bootloaders */
+#define SLR_BOOTLOADER_INVALID 0
+#define SLR_BOOTLOADER_GRUB 1
+
+/* Log formats */
+#define SLR_DRTM_TPM12_LOG 1
+#define SLR_DRTM_TPM20_LOG 2
+
+/* DRTM Policy Entry Flags */
+#define SLR_POLICY_FLAG_MEASURED 0x1
+#define SLR_POLICY_IMPLICIT_SIZE 0x2
+
+/* Array Lengths */
+#define TPM_EVENT_INFO_LENGTH 32
+#define TXT_VARIABLE_MTRRS_LENGTH 32
+
+/* Tags */
+#define SLR_ENTRY_INVALID 0x0000
+#define SLR_ENTRY_DL_INFO 0x0001
+#define SLR_ENTRY_LOG_INFO 0x0002
+#define SLR_ENTRY_DRTM_POLICY 0x0003
+#define SLR_ENTRY_INTEL_INFO 0x0004
+#define SLR_ENTRY_AMD_INFO 0x0005
+#define SLR_ENTRY_ARM_INFO 0x0006
+#define SLR_ENTRY_UEFI_INFO 0x0007
+#define SLR_ENTRY_UEFI_CONFIG 0x0008
+#define SLR_ENTRY_END 0xffff
+
+/* Entity Types */
+#define SLR_ET_UNSPECIFIED 0x0000
+#define SLR_ET_SLRT 0x0001
+#define SLR_ET_BOOT_PARAMS 0x0002
+#define SLR_ET_SETUP_DATA 0x0003
+#define SLR_ET_CMDLINE 0x0004
+#define SLR_ET_UEFI_MEMMAP 0x0005
+#define SLR_ET_RAMDISK 0x0006
+#define SLR_ET_MULTIBOOT2_INFO 0x0007
+#define SLR_ET_MULTIBOOT2_MODULE 0x0008
+#define SLR_ET_TXT_OS2MLE 0x0010
+#define SLR_ET_UNUSED 0xffff
+
+/*
+ * Primary SLR Table Header
+ */
+struct slr_table
+{
+ uint32_t magic;
+ uint16_t revision;
+ uint16_t architecture;
+ uint32_t size;
+ uint32_t max_size;
+ /* entries[] */
+} __packed;
+
+/*
+ * Common SLRT Table Header
+ */
+struct slr_entry_hdr
+{
+ uint32_t tag;
+ uint32_t size;
+} __packed;
+
+/*
+ * Boot loader context
+ */
+struct slr_bl_context
+{
+ uint16_t bootloader;
+ uint16_t reserved[3];
+ uint64_t context;
+} __packed;
+
+/*
+ * Prototype of a function pointed to by slr_entry_dl_info::dl_handler.
+ */
+typedef void (*dl_handler_func)(struct slr_bl_context *bl_context);
+
+/*
+ * DRTM Dynamic Launch Configuration
+ */
+struct slr_entry_dl_info
+{
+ struct slr_entry_hdr hdr;
+ uint64_t dce_size;
+ uint64_t dce_base;
+ uint64_t dlme_size;
+ uint64_t dlme_base;
+ uint64_t dlme_entry;
+ struct slr_bl_context bl_context;
+ uint64_t dl_handler;
+} __packed;
+
+/*
+ * TPM Log Information
+ */
+struct slr_entry_log_info
+{
+ struct slr_entry_hdr hdr;
+ uint16_t format;
+ uint16_t reserved;
+ uint32_t size;
+ uint64_t addr;
+} __packed;
+
+/*
+ * DRTM Measurement Entry
+ */
+struct slr_policy_entry
+{
+ uint16_t pcr;
+ uint16_t entity_type;
+ uint16_t flags;
+ uint16_t reserved;
+ uint64_t size;
+ uint64_t entity;
+ char evt_info[TPM_EVENT_INFO_LENGTH];
+} __packed;
+
+/*
+ * DRTM Measurement Policy
+ */
+struct slr_entry_policy
+{
+ struct slr_entry_hdr hdr;
+ uint16_t reserved[2];
+ uint16_t revision;
+ uint16_t nr_entries;
+ struct slr_policy_entry policy_entries[];
+} __packed;
+
+/*
+ * Secure Launch defined MTRR saving structures
+ */
+struct slr_txt_mtrr_pair
+{
+ uint64_t mtrr_physbase;
+ uint64_t mtrr_physmask;
+} __packed;
+
+struct slr_txt_mtrr_state
+{
+ uint64_t default_mem_type;
+ uint64_t mtrr_vcnt;
+ struct slr_txt_mtrr_pair mtrr_pair[TXT_VARIABLE_MTRRS_LENGTH];
+} __packed;
+
+/*
+ * Intel TXT Info table
+ */
+struct slr_entry_intel_info
+{
+ struct slr_entry_hdr hdr;
+ uint64_t boot_params_base;
+ uint64_t txt_heap;
+ uint64_t saved_misc_enable_msr;
+ struct slr_txt_mtrr_state saved_bsp_mtrrs;
+} __packed;
+
+/*
+ * AMD SKINIT Info table
+ */
+struct slr_entry_amd_info
+{
+ struct slr_entry_hdr hdr;
+ uint64_t next;
+ uint32_t type;
+ uint32_t len;
+ uint64_t slrt_size;
+ uint64_t slrt_base;
+ uint64_t boot_params_base;
+ uint16_t psp_version;
+ uint16_t reserved[3];
+} __packed;
+
+/*
+ * ARM DRTM Info table
+ */
+struct slr_entry_arm_info
+{
+ struct slr_entry_hdr hdr;
+} __packed;
+
+/*
+ * UEFI config measurement entry
+ */
+struct slr_uefi_cfg_entry
+{
+ uint16_t pcr;
+ uint16_t reserved;
+ uint32_t size;
+ uint64_t cfg; /* address or value */
+ char evt_info[TPM_EVENT_INFO_LENGTH];
+} __packed;
+
+struct slr_entry_uefi_config
+{
+ struct slr_entry_hdr hdr;
+ uint16_t reserved[2];
+ uint16_t revision;
+ uint16_t nr_entries;
+ struct slr_uefi_cfg_entry uefi_cfg_entries[];
+} __packed;
+
+static inline void *
+slr_end_of_entries(struct slr_table *table)
+{
+ return (uint8_t *)table + table->size;
+}
+
+static inline struct slr_entry_hdr *
+slr_next_entry(struct slr_table *table, struct slr_entry_hdr *curr)
+{
+ struct slr_entry_hdr *next = (struct slr_entry_hdr *)
+ ((uint8_t *)curr + curr->size);
+
+ if ( (void *)next >= slr_end_of_entries(table) )
+ return NULL;
+ if ( next->tag == SLR_ENTRY_END )
+ return NULL;
+
+ return next;
+}
+
+static inline struct slr_entry_hdr *
+slr_next_entry_by_tag (struct slr_table *table,
+ struct slr_entry_hdr *entry,
+ uint16_t tag)
+{
+ if ( !entry ) /* Start from the beginning */
+ entry = (struct slr_entry_hdr *)((uint8_t *)table + sizeof(*table));
+
+ for ( ; ; )
+ {
+ if ( entry->tag == tag )
+ return entry;
+
+ entry = slr_next_entry(table, entry);
+ if ( !entry )
+ return NULL;
+ }
+
+ return NULL;
+}
+
+#endif /* _SLR_TABLE_H */
--
2.49.0

Sergii Dmytruk

Apr 22, 2025, 11:07:14 AM
to xen-...@lists.xenproject.org, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini, trenchbo...@googlegroups.com
From: Kacper Stojek <kacper...@3mdeb.com>

Signed-off-by: Kacper Stojek <kacper...@3mdeb.com>
Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
docs/hypervisor-guide/x86/how-xen-boots.rst | 5 ++
xen/arch/x86/boot/head.S | 53 +++++++++++++++++++++
2 files changed, 58 insertions(+)

diff --git a/docs/hypervisor-guide/x86/how-xen-boots.rst b/docs/hypervisor-guide/x86/how-xen-boots.rst
index 8b3229005c..050fe9c61f 100644
--- a/docs/hypervisor-guide/x86/how-xen-boots.rst
+++ b/docs/hypervisor-guide/x86/how-xen-boots.rst
@@ -55,6 +55,11 @@ If ``CONFIG_PVH_GUEST`` was selected at build time, an Elf note is included
which indicates the ability to use the PVH boot protocol, and registers
``__pvh_start`` as the entrypoint, entered in 32bit mode.

+A combination of Multiboot 2 and MLE headers is used to implement DRTM for
+legacy (BIOS) boot. A separate entry point is used mainly to differentiate
+this kind of boot from the others; it loads a magic number into EAX before
+jumping into the common startup code.
+

xen.gz
~~~~~~
diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
index 77bb7a9e21..cd951ad2dc 100644
--- a/xen/arch/x86/boot/head.S
+++ b/xen/arch/x86/boot/head.S
@@ -4,6 +4,7 @@
#include <public/xen.h>
#include <asm/asm_defns.h>
#include <asm/fixmap.h>
+#include <asm/intel_txt.h>
#include <asm/page.h>
#include <asm/processor.h>
#include <asm/msr-index.h>
@@ -126,6 +127,25 @@ multiboot2_header:
.size multiboot2_header, . - multiboot2_header
.type multiboot2_header, @object

+ .balign 16
+mle_header:
+ .long 0x9082ac5a /* UUID0 */
+ .long 0x74a7476f /* UUID1 */
+ .long 0xa2555c0f /* UUID2 */
+ .long 0x42b651cb /* UUID3 */
+ .long 0x00000034 /* MLE header size */
+ .long 0x00020002 /* MLE version 2.2 */
+ .long (slaunch_stub_entry - start) /* Linear entry point of MLE (SINIT virt. address) */
+ .long 0x00000000 /* First valid page of MLE */
+ .long 0x00000000 /* Offset within binary of first byte of MLE */
+ .long (_end - start) /* Offset within binary of last byte + 1 of MLE */
+ .long 0x00000723 /* Bit vector of MLE-supported capabilities */
+ .long 0x00000000 /* Starting linear address of command line (unused) */
+ .long 0x00000000 /* Ending linear address of command line (unused) */
+
+ .size mle_header, .-mle_header
+ .type mle_header, @object
+
.section .init.rodata, "a", @progbits

.Lbad_cpu_msg: .asciz "ERR: Not a 64-bit CPU!"
@@ -332,6 +352,38 @@ cs32_switch:
/* Jump to earlier loaded address. */
jmp *%edi

+ /*
+ * Entry point for TrenchBoot Secure Launch on Intel TXT platforms.
+ *
+ * CPU is in 32b protected mode with paging disabled. On entry:
+ * - %ebx = %eip = MLE entry point,
+ * - stack pointer is undefined,
+ * - CS is flat 4GB code segment,
+ * - DS, ES, SS, FS and GS are undefined according to TXT SDG, but this
+ * would make it impossible to initialize GDTR, because GDT base must
+ * be relocated in the descriptor, which requires write access that
+ * CS doesn't provide. Instead we have to assume that DS is set by
+ * SINIT ACM as flat 4GB data segment.
+ *
+ * Additional restrictions:
+ * - some MSRs are partially cleared, among them IA32_MISC_ENABLE, so
+ * some capabilities might be reported as disabled even if they are
+ * supported by CPU
+ * - interrupts (including NMIs and SMIs) are disabled and must be
+ * enabled later
+ * - trying to enter real mode results in reset
+ * - APs must be brought up by MONITOR or GETSEC[WAKEUP], depending on
+ * which is supported by a given SINIT ACM
+ */
+slaunch_stub_entry:
+ /* Calculate the load base address. */
+ mov %ebx, %esi
+ sub $sym_offs(slaunch_stub_entry), %esi
+
+ /* Mark Secure Launch boot protocol and jump to common entry. */
+ mov $SLAUNCH_BOOTLOADER_MAGIC, %eax
+ jmp .Lset_stack
+
#ifdef CONFIG_PVH_GUEST
ELFNOTE(Xen, XEN_ELFNOTE_PHYS32_ENTRY, .long sym_offs(__pvh_start))

@@ -371,6 +423,7 @@ __start:
/* Restore the clobbered field. */
mov %edx, (%ebx)

+.Lset_stack:
/* Set up stack. */
lea STACK_SIZE - CPUINFO_sizeof + sym_esi(cpu0_stack), %esp

--
2.49.0

Sergii Dmytruk

Apr 22, 2025, 11:07:20 AM
to xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, trenchbo...@googlegroups.com
From: Krystian Hebel <krystia...@3mdeb.com>

The tests validate that important parts of memory are protected against
DMA attacks, including Xen and MBI. Modules can be tested later, when it
is possible to report issues to a user before invoking TXT reset.

TPM event log validation is temporarily disabled due to an issue with
its allocation by bootloader (GRUB) which will need to be modified to
address this. Ultimately event log will also have to be validated early
as it is used immediately after these tests to hold MBI measurements.
See larger comment in txt_verify_pmr_ranges().

Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/boot/slaunch_early.c | 6 ++
xen/arch/x86/include/asm/intel_txt.h | 111 +++++++++++++++++++++++++++
2 files changed, 117 insertions(+)

diff --git a/xen/arch/x86/boot/slaunch_early.c b/xen/arch/x86/boot/slaunch_early.c
index 177267248f..af8aa29ae0 100644
--- a/xen/arch/x86/boot/slaunch_early.c
+++ b/xen/arch/x86/boot/slaunch_early.c
@@ -22,10 +22,13 @@ void slaunch_early_init(uint32_t load_base_addr,
void *txt_heap;
struct txt_os_mle_data *os_mle;
struct slr_table *slrt;
+ struct txt_os_sinit_data *os_sinit;
struct slr_entry_intel_info *intel_info;
+ uint32_t size = tgt_end_addr - tgt_base_addr;

txt_heap = txt_init();
os_mle = txt_os_mle_data_start(txt_heap);
+ os_sinit = txt_os_sinit_data_start(txt_heap);

result->slrt_pa = os_mle->slrt;
result->mbi_pa = 0;
@@ -38,4 +41,7 @@ void slaunch_early_init(uint32_t load_base_addr,
return;

result->mbi_pa = intel_info->boot_params_base;
+
+ txt_verify_pmr_ranges(os_mle, os_sinit, intel_info,
+ load_base_addr, tgt_base_addr, size);
}
diff --git a/xen/arch/x86/include/asm/intel_txt.h b/xen/arch/x86/include/asm/intel_txt.h
index b973640c56..7170baf6fb 100644
--- a/xen/arch/x86/include/asm/intel_txt.h
+++ b/xen/arch/x86/include/asm/intel_txt.h
@@ -81,6 +81,8 @@

#ifndef __ASSEMBLY__

+#include <xen/slr_table.h>
+
/* Need to differentiate between pre- and post paging enabled. */
#ifdef __EARLY_SLAUNCH__
#include <xen/macros.h>
@@ -285,4 +287,113 @@ static inline void *txt_init(void)
return txt_heap;
}

+static inline int is_in_pmr(struct txt_os_sinit_data *os_sinit, uint64_t base,
+ uint32_t size, int check_high)
+{
+ /* Check for size overflow. */
+ if ( base + size < base )
+ txt_reset(SLAUNCH_ERROR_INTEGER_OVERFLOW);
+
+ /* Low range always starts at 0, so its size is also end address. */
+ if ( base >= os_sinit->vtd_pmr_lo_base &&
+ base + size <= os_sinit->vtd_pmr_lo_size )
+ return 1;
+
+ if ( check_high && os_sinit->vtd_pmr_hi_size != 0 )
+ {
+ if ( os_sinit->vtd_pmr_hi_base + os_sinit->vtd_pmr_hi_size <
+ os_sinit->vtd_pmr_hi_size )
+ txt_reset(SLAUNCH_ERROR_INTEGER_OVERFLOW);
+ if ( base >= os_sinit->vtd_pmr_hi_base &&
+ base + size <= os_sinit->vtd_pmr_hi_base +
+ os_sinit->vtd_pmr_hi_size )
+ return 1;
+ }
+
+ return 0;
+}
+
+static inline void txt_verify_pmr_ranges(struct txt_os_mle_data *os_mle,
+ struct txt_os_sinit_data *os_sinit,
+ struct slr_entry_intel_info *info,
+ uint32_t load_base_addr,
+ uint32_t tgt_base_addr,
+ uint32_t xen_size)
+{
+ int check_high_pmr = 0;
+
+ /* Verify the value of the low PMR base. It should always be 0. */
+ if ( os_sinit->vtd_pmr_lo_base != 0 )
+ txt_reset(SLAUNCH_ERROR_LO_PMR_BASE);
+
+ /*
+ * Low PMR size should not be 0 on current platforms. There is an ongoing
+ * transition to TPR-based DMA protection instead of PMR-based; this is not
+ * yet supported by the code.
+ */
+ if ( os_sinit->vtd_pmr_lo_size == 0 )
+ txt_reset(SLAUNCH_ERROR_LO_PMR_SIZE);
+
+ /* Check if regions overlap. Treat regions with no hole between as error. */
+ if ( os_sinit->vtd_pmr_hi_size != 0 &&
+ os_sinit->vtd_pmr_hi_base <= os_sinit->vtd_pmr_lo_size )
+ txt_reset(SLAUNCH_ERROR_HI_PMR_BASE);
+
+ /* All regions accessed by 32b code must be below 4G. */
+ if ( os_sinit->vtd_pmr_hi_base + os_sinit->vtd_pmr_hi_size <=
+ 0x100000000ull )
+ check_high_pmr = 1;
+
+ /*
+ * ACM checks that TXT heap and MLE memory is protected against DMA. We have
+ * to check if MBI and whole Xen memory is protected. The latter is done in
+ * case bootloader failed to set whole image as MLE and to make sure that
+ * both pre- and post-relocation code is protected.
+ */
+
+ /* Check if all of Xen before relocation is covered by PMR. */
+ if ( !is_in_pmr(os_sinit, load_base_addr, xen_size, check_high_pmr) )
+ txt_reset(SLAUNCH_ERROR_LO_PMR_MLE);
+
+ /* Check if all of Xen after relocation is covered by PMR. */
+ if ( load_base_addr != tgt_base_addr &&
+ !is_in_pmr(os_sinit, tgt_base_addr, xen_size, check_high_pmr) )
+ txt_reset(SLAUNCH_ERROR_LO_PMR_MLE);
+
+ /*
+ * If present, check that MBI is covered by PMR. MBI starts with 'uint32_t
+ * total_size'.
+ */
+ if ( info->boot_params_base != 0 &&
+ !is_in_pmr(os_sinit, info->boot_params_base,
+ *(uint32_t *)(uintptr_t)info->boot_params_base,
+ check_high_pmr) )
+ txt_reset(SLAUNCH_ERROR_BUFFER_BEYOND_PMR);
+
+ /* Check if TPM event log (if present) is covered by PMR. */
+ /*
+ * FIXME: currently commented out as GRUB allocates the log in a hole between
+ * PMR and reserved RAM, due to the 2MB resolution of PMRs. There are no other
+ * easy-to-use DMA protection mechanisms that would allow protecting that
+ * part of memory. TPR (TXT DMA Protection Range) gives 1MB resolution, but
+ * it still wouldn't be enough.
+ *
+ * One possible solution would be for GRUB to allocate the log at a lower
+ * address, but this would further increase memory space fragmentation.
+ * Another option is to align the PMR up instead of down, making the PMR
+ * cover part of the reserved region, but it is unclear what the consequences
+ * may be.
+ *
+ * In tboot this issue was resolved by reserving leftover chunks of memory
+ * in the e820 and/or UEFI memory map. This is also a valid solution, but it
+ * would require more changes to GRUB than the ones listed above, as the
+ * event log is allocated much earlier than the PMRs.
+ */
+ /*
+ if ( os_mle->evtlog_addr != 0 && os_mle->evtlog_size != 0 &&
+ !is_in_pmr(os_sinit, os_mle->evtlog_addr, os_mle->evtlog_size,
+ check_high_pmr) )
+ txt_reset(SLAUNCH_ERROR_BUFFER_BEYOND_PMR);
+ */
+}
+
#endif /* __ASSEMBLY__ */
--
2.49.0

Sergii Dmytruk

Apr 22, 2025, 11:07:20 AM
to xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, trenchbo...@googlegroups.com
Make head.S invoke a C function to retrieve the MBI and SLRT addresses in a
platform-specific way. This is also the place to perform sanity checks of
the DRTM state.

Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/Makefile | 1 +
xen/arch/x86/boot/Makefile | 5 +++-
xen/arch/x86/boot/head.S | 43 ++++++++++++++++++++++++++++
xen/arch/x86/boot/slaunch_early.c | 41 ++++++++++++++++++++++++++
xen/arch/x86/include/asm/intel_txt.h | 16 +++++++++++
xen/arch/x86/include/asm/slaunch.h | 19 ++++++++++++
xen/arch/x86/slaunch.c | 26 +++++++++++++++++
7 files changed, 150 insertions(+), 1 deletion(-)
create mode 100644 xen/arch/x86/boot/slaunch_early.c
create mode 100644 xen/arch/x86/include/asm/slaunch.h
create mode 100644 xen/arch/x86/slaunch.c

diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index f59c9665fd..571cad160d 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -59,6 +59,7 @@ obj-$(CONFIG_COMPAT) += x86_64/physdev.o
obj-$(CONFIG_X86_PSR) += psr.o
obj-y += setup.o
obj-y += shutdown.o
+obj-y += slaunch.o
obj-y += smp.o
obj-y += smpboot.o
obj-y += spec_ctrl.o
diff --git a/xen/arch/x86/boot/Makefile b/xen/arch/x86/boot/Makefile
index be91a5757a..d0015f7d19 100644
--- a/xen/arch/x86/boot/Makefile
+++ b/xen/arch/x86/boot/Makefile
@@ -5,6 +5,7 @@ obj-bin-y += $(obj64)
obj32 := cmdline.32.o
obj32 += reloc.32.o
obj32 += reloc-trampoline.32.o
+obj32 += slaunch_early.32.o

obj64 := reloc-trampoline.o

@@ -28,6 +29,8 @@ $(obj32): XEN_CFLAGS := $(CFLAGS_x86_32) -fpic
$(obj)/%.32.o: $(src)/%.c FORCE
$(call if_changed_rule,cc_o_c)

+$(obj)/slaunch_early.32.o: XEN_CFLAGS += -D__EARLY_SLAUNCH__
+
orphan-handling-$(call ld-option,--orphan-handling=error) := --orphan-handling=error
LDFLAGS_DIRECT-$(call ld-option,--warn-rwx-segments) := --no-warn-rwx-segments
LDFLAGS_DIRECT += $(LDFLAGS_DIRECT-y)
@@ -81,7 +84,7 @@ cmd_combine = \
--bin1 $(obj)/built-in-32.base.bin \
--bin2 $(obj)/built-in-32.offset.bin \
--map $(obj)/built-in-32.base.map \
- --exports cmdline_parse_early,reloc,reloc_trampoline32 \
+ --exports cmdline_parse_early,reloc,reloc_trampoline32,slaunch_early_init \
--output $@

targets += built-in-32.S
diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
index cd951ad2dc..e522a36305 100644
--- a/xen/arch/x86/boot/head.S
+++ b/xen/arch/x86/boot/head.S
@@ -472,6 +472,10 @@ __start:
/* Bootloaders may set multiboot{1,2}.mem_lower to a nonzero value. */
xor %edx,%edx

+ /* Check for TrenchBoot slaunch bootloader. */
+ cmp $SLAUNCH_BOOTLOADER_MAGIC, %eax
+ je .Lslaunch_proto
+
/* Check for Multiboot2 bootloader. */
cmp $MULTIBOOT2_BOOTLOADER_MAGIC,%eax
je .Lmultiboot2_proto
@@ -487,6 +491,45 @@ __start:
cmovnz MB_mem_lower(%ebx),%edx
jmp trampoline_bios_setup

+.Lslaunch_proto:
+ /*
+ * Upon reaching here, the CPU state mostly matches the one set up by the
+ * bootloader, with ESP, ESI and EDX having been clobbered above.
+ */
+
+ /* Save information that TrenchBoot slaunch was used. */
+ movb $1, sym_esi(slaunch_active)
+
+ /*
+ * Prepare space for output parameter of slaunch_early_init(), which is
+ * a structure of two uint32_t fields.
+ */
+ sub $8, %esp
+
+ push %esp /* pointer to output structure */
+ lea sym_offs(__2M_rwdata_end), %ecx /* end of target image */
+ lea sym_offs(_start), %edx /* target base address */
+ mov %esi, %eax /* load base address */
+ /*
+ * slaunch_early_init(load/eax, tgt/edx, tgt_end/ecx, ret/stk) using
+ * fastcall calling convention.
+ */
+ call slaunch_early_init
+ add $4, %esp /* pop the fourth parameter */
+
+ /* Move outputs of slaunch_early_init() from stack into registers. */
+ pop %eax /* physical MBI address */
+ pop %edx /* physical SLRT address */
+
+ /* Save physical address of SLRT for C code. */
+ mov %edx, sym_esi(slaunch_slrt)
+
+ /* Store MBI address in EBX where MB2 code expects it. */
+ mov %eax, %ebx
+
+ /* Move magic number expected by Multiboot 2 to EAX and fall through. */
+ movl $MULTIBOOT2_BOOTLOADER_MAGIC, %eax
+
.Lmultiboot2_proto:
/* Skip Multiboot2 information fixed part. */
lea (MB2_fixed_sizeof+MULTIBOOT2_TAG_ALIGN-1)(%ebx),%ecx
diff --git a/xen/arch/x86/boot/slaunch_early.c b/xen/arch/x86/boot/slaunch_early.c
new file mode 100644
index 0000000000..177267248f
--- /dev/null
+++ b/xen/arch/x86/boot/slaunch_early.c
@@ -0,0 +1,41 @@
+/*
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ *
+ * Copyright (c) 2022-2025 3mdeb Sp. z o.o. All rights reserved.
+ */
+
+#include <xen/slr_table.h>
+#include <xen/types.h>
+#include <asm/intel_txt.h>
+
+struct early_init_results
+{
+ uint32_t mbi_pa;
+ uint32_t slrt_pa;
+} __packed;
+
+void slaunch_early_init(uint32_t load_base_addr,
+ uint32_t tgt_base_addr,
+ uint32_t tgt_end_addr,
+ struct early_init_results *result)
+{
+ void *txt_heap;
+ struct txt_os_mle_data *os_mle;
+ struct slr_table *slrt;
+ struct slr_entry_intel_info *intel_info;
+
+ txt_heap = txt_init();
+ os_mle = txt_os_mle_data_start(txt_heap);
+
+ result->slrt_pa = os_mle->slrt;
+ result->mbi_pa = 0;
+
+ slrt = (struct slr_table *)(uintptr_t)os_mle->slrt;
+
+ intel_info = (struct slr_entry_intel_info *)
+ slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_INTEL_INFO);
+ if ( intel_info == NULL || intel_info->hdr.size != sizeof(*intel_info) )
+ return;
+
+ result->mbi_pa = intel_info->boot_params_base;
+}
diff --git a/xen/arch/x86/include/asm/intel_txt.h b/xen/arch/x86/include/asm/intel_txt.h
index 2cc6eb5be9..b973640c56 100644
--- a/xen/arch/x86/include/asm/intel_txt.h
+++ b/xen/arch/x86/include/asm/intel_txt.h
@@ -269,4 +269,20 @@ static inline void *txt_sinit_mle_data_start(void *heap)
sizeof(uint64_t);
}

+static inline void *txt_init(void)
+{
+ void *txt_heap;
+
+ /* Clear the TXT error register for a clean start of the day. */
+ write_txt_reg(TXTCR_ERRORCODE, 0);
+
+ txt_heap = _p(read_txt_reg(TXTCR_HEAP_BASE));
+
+ if ( txt_os_mle_data_size(txt_heap) < sizeof(struct txt_os_mle_data) ||
+ txt_os_sinit_data_size(txt_heap) < sizeof(struct txt_os_sinit_data) )
+ txt_reset(SLAUNCH_ERROR_GENERIC);
+
+ return txt_heap;
+}
+
#endif /* __ASSEMBLY__ */
diff --git a/xen/arch/x86/include/asm/slaunch.h b/xen/arch/x86/include/asm/slaunch.h
new file mode 100644
index 0000000000..08cc2657f0
--- /dev/null
+++ b/xen/arch/x86/include/asm/slaunch.h
@@ -0,0 +1,19 @@
+/*
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ *
+ * Copyright (c) 2022-2025 3mdeb Sp. z o.o. All rights reserved.
+ */
+
+#ifndef _ASM_X86_SLAUNCH_H_
+#define _ASM_X86_SLAUNCH_H_
+
+#include <xen/types.h>
+
+extern bool slaunch_active;
+
+/*
+ * Retrieves pointer to SLRT. Checks table's validity and maps it as necessary.
+ */
+struct slr_table *slaunch_get_slrt(void);
+
+#endif /* _ASM_X86_SLAUNCH_H_ */
diff --git a/xen/arch/x86/slaunch.c b/xen/arch/x86/slaunch.c
new file mode 100644
index 0000000000..0404084b02
--- /dev/null
+++ b/xen/arch/x86/slaunch.c
@@ -0,0 +1,26 @@
+/*
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ *
+ * Copyright (c) 2022-2025 3mdeb Sp. z o.o. All rights reserved.
+ */
+
+#include <xen/compiler.h>
+#include <xen/init.h>
+#include <xen/macros.h>
+#include <xen/types.h>
+#include <asm/slaunch.h>
+
+/*
+ * These variables are assigned to by the code near Xen's entry point.
+ * slaunch_slrt is not declared in slaunch.h to facilitate accessing the
+ * variable through slaunch_get_slrt().
+ */
+bool __initdata slaunch_active;
+uint32_t __initdata slaunch_slrt; /* physical address */
+
+/*
+ * Using slaunch_active in head.S assumes it's a single byte in size, so
+ * enforce this assumption.
+ */
+static void __maybe_unused compile_time_checks(void)
+{
+ BUILD_BUG_ON(sizeof(slaunch_active) != 1);
+}
--
2.49.0

Sergii Dmytruk

Apr 22, 2025, 11:07:23 AM
to xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, trenchbo...@googlegroups.com
From: Kacper Stojek <kacper...@3mdeb.com>

The TXT heap, SINIT and TXT private space are marked as reserved or unusable
in the e820 map to protect them from unintended use.

Signed-off-by: Kacper Stojek <kacper...@3mdeb.com>
Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
Signed-off-by: Michał Żygowski <michal....@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/Makefile | 1 +
xen/arch/x86/include/asm/intel_txt.h | 6 ++
xen/arch/x86/include/asm/mm.h | 3 +
xen/arch/x86/include/asm/slaunch.h | 44 ++++++++++++
xen/arch/x86/intel_txt.c | 102 +++++++++++++++++++++++++++
xen/arch/x86/setup.c | 10 ++-
xen/arch/x86/slaunch.c | 96 +++++++++++++++++++++++++
7 files changed, 259 insertions(+), 3 deletions(-)
create mode 100644 xen/arch/x86/intel_txt.c

diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index 571cad160d..cae548f7e9 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -40,6 +40,7 @@ obj-$(CONFIG_GDBSX) += gdbsx.o
obj-y += hypercall.o
obj-y += i387.o
obj-y += i8259.o
+obj-y += intel_txt.o
obj-y += io_apic.o
obj-$(CONFIG_LIVEPATCH) += alternative.o livepatch.o
obj-y += msi.o
diff --git a/xen/arch/x86/include/asm/intel_txt.h b/xen/arch/x86/include/asm/intel_txt.h
index 7170baf6fb..85ef9f6245 100644
--- a/xen/arch/x86/include/asm/intel_txt.h
+++ b/xen/arch/x86/include/asm/intel_txt.h
@@ -396,4 +396,10 @@ static inline void txt_verify_pmr_ranges(struct txt_os_mle_data *os_mle,
*/
}

+/* Prepares for accesses to TXT-specific memory. */
+void txt_map_mem_regions(void);
+
+/* Marks TXT-specific memory as used to avoid its corruption. */
+void txt_reserve_mem_regions(void);
+
#endif /* __ASSEMBLY__ */
diff --git a/xen/arch/x86/include/asm/mm.h b/xen/arch/x86/include/asm/mm.h
index a1bc8cc274..061cb12a5b 100644
--- a/xen/arch/x86/include/asm/mm.h
+++ b/xen/arch/x86/include/asm/mm.h
@@ -106,6 +106,9 @@
#define _PGC_need_scrub _PGC_allocated
#define PGC_need_scrub PGC_allocated

+/* How much of the directmap is prebuilt at compile time. */
+#define PREBUILT_MAP_LIMIT (1 << L2_PAGETABLE_SHIFT)
+
#ifndef CONFIG_BIGMEM
/*
* This definition is solely for the use in struct page_info (and
diff --git a/xen/arch/x86/include/asm/slaunch.h b/xen/arch/x86/include/asm/slaunch.h
index 08cc2657f0..78d3c8bf37 100644
--- a/xen/arch/x86/include/asm/slaunch.h
+++ b/xen/arch/x86/include/asm/slaunch.h
@@ -7,13 +7,57 @@
#ifndef _ASM_X86_SLAUNCH_H_
#define _ASM_X86_SLAUNCH_H_

+#include <xen/slr_table.h>
#include <xen/types.h>

extern bool slaunch_active;

+/*
+ * evt_log is assigned a physical address and the caller must map it to
+ * virtual, if needed.
+ */
+static inline void find_evt_log(struct slr_table *slrt, void **evt_log,
+ uint32_t *evt_log_size)
+{
+ struct slr_entry_log_info *log_info;
+
+ log_info = (struct slr_entry_log_info *)
+ slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_LOG_INFO);
+ if ( log_info != NULL )
+ {
+ *evt_log = _p(log_info->addr);
+ *evt_log_size = log_info->size;
+ }
+ else
+ {
+ *evt_log = NULL;
+ *evt_log_size = 0;
+ }
+}
+
/*
* Retrieves pointer to SLRT. Checks table's validity and maps it as necessary.
*/
struct slr_table *slaunch_get_slrt(void);

+/*
+ * Prepares for accesses to essential data structures setup by boot environment.
+ */
+void slaunch_map_mem_regions(void);
+
+/* Marks regions of memory as used to avoid their corruption. */
+void slaunch_reserve_mem_regions(void);
+
+/*
+ * This helper function is used to map memory using L2 page tables by aligning
+ * mapped regions to 2MB. This way the page allocator (which at this point
+ * isn't yet initialized) isn't needed for creating new L1 mappings. The
+ * function also checks for and skips memory already mapped by the prebuilt
+ * tables.
+ *
+ * There is no unmap_l2() because the function is meant to be used by code
+ * that accesses DRTM-related memory, soon after which Xen rebuilds its
+ * memory maps, effectively dropping all existing mappings.
+ */
+int slaunch_map_l2(unsigned long paddr, unsigned long size);
+
#endif /* _ASM_X86_SLAUNCH_H_ */
diff --git a/xen/arch/x86/intel_txt.c b/xen/arch/x86/intel_txt.c
new file mode 100644
index 0000000000..4a4e404007
--- /dev/null
+++ b/xen/arch/x86/intel_txt.c
@@ -0,0 +1,102 @@
+/*
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ *
+ * Copyright (c) 2022-2025 3mdeb Sp. z o.o. All rights reserved.
+ */
+
+#include <xen/bug.h>
+#include <xen/init.h>
+#include <xen/lib.h>
+#include <xen/types.h>
+#include <asm/e820.h>
+#include <asm/intel_txt.h>
+#include <asm/slaunch.h>
+
+static uint64_t __initdata txt_heap_base, txt_heap_size;
+
+void __init txt_map_mem_regions(void)
+{
+ int rc;
+
+ rc = slaunch_map_l2(TXT_PRIV_CONFIG_REGS_BASE, NR_TXT_CONFIG_SIZE);
+ BUG_ON(rc != 0);
+
+ txt_heap_base = read_txt_reg(TXTCR_HEAP_BASE);
+ BUG_ON(txt_heap_base == 0);
+
+ txt_heap_size = read_txt_reg(TXTCR_HEAP_SIZE);
+ BUG_ON(txt_heap_size == 0);
+
+ rc = slaunch_map_l2(txt_heap_base, txt_heap_size);
+ BUG_ON(rc != 0);
+}
+
+/* Mark a RAM region with the given type if it isn't marked that way already. */
+static int __init mark_ram_as(struct e820map *e820, uint64_t start,
+ uint64_t end, uint32_t type)
+{
+ unsigned int i;
+ uint32_t from_type = E820_RAM;
+
+ for ( i = 0; i < e820->nr_map; i++ )
+ {
+ uint64_t rs = e820->map[i].addr;
+ uint64_t re = rs + e820->map[i].size;
+ if ( start >= rs && end <= re )
+ break;
+ }
+
+ /*
+ * Allow the range to be unlisted since we're only preventing RAM from
+ * use.
+ */
+ if ( i == e820->nr_map )
+ return 1;
+
+ /*
+ * e820_change_range_type() fails if the range is already marked with the
+ * desired type. Don't consider it an error if firmware has done it for us.
+ */
+ if ( e820->map[i].type == type )
+ return 1;
+
+ /* E820_ACPI or E820_NVS are really unexpected, but others are fine. */
+ if ( e820->map[i].type == E820_RESERVED ||
+ e820->map[i].type == E820_UNUSABLE )
+ from_type = e820->map[i].type;
+
+ return e820_change_range_type(e820, start, end, from_type, type);
+}
+
+void __init txt_reserve_mem_regions(void)
+{
+ int rc;
+ uint64_t sinit_base, sinit_size;
+
+ /* TXT Heap */
+ BUG_ON(txt_heap_base == 0);
+ printk("SLAUNCH: reserving TXT heap (%#lx - %#lx)\n", txt_heap_base,
+ txt_heap_base + txt_heap_size);
+ rc = mark_ram_as(&e820_raw, txt_heap_base, txt_heap_base + txt_heap_size,
+ E820_RESERVED);
+ BUG_ON(rc == 0);
+
+ sinit_base = read_txt_reg(TXTCR_SINIT_BASE);
+ BUG_ON(sinit_base == 0);
+
+ sinit_size = read_txt_reg(TXTCR_SINIT_SIZE);
+ BUG_ON(sinit_size == 0);
+
+ /* SINIT */
+ printk("SLAUNCH: reserving SINIT memory (%#lx - %#lx)\n", sinit_base,
+ sinit_base + sinit_size);
+ rc = mark_ram_as(&e820_raw, sinit_base, sinit_base + sinit_size,
+ E820_RESERVED);
+ BUG_ON(rc == 0);
+
+ /* TXT Private Space */
+ rc = mark_ram_as(&e820_raw, TXT_PRIV_CONFIG_REGS_BASE,
+ TXT_PRIV_CONFIG_REGS_BASE + NR_TXT_CONFIG_SIZE,
+ E820_UNUSABLE);
+ BUG_ON(rc == 0);
+}
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 24b36c1a59..403d976449 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -52,6 +52,7 @@
#include <asm/prot-key.h>
#include <asm/pv/domain.h>
#include <asm/setup.h>
+#include <asm/slaunch.h>
#include <asm/smp.h>
#include <asm/spec_ctrl.h>
#include <asm/tboot.h>
@@ -1058,9 +1059,6 @@ static struct domain *__init create_dom0(struct boot_info *bi)
return d;
}

-/* How much of the directmap is prebuilt at compile time. */
-#define PREBUILT_MAP_LIMIT (1 << L2_PAGETABLE_SHIFT)
-
void asmlinkage __init noreturn __start_xen(void)
{
const char *memmap_type = NULL;
@@ -1396,6 +1394,12 @@ void asmlinkage __init noreturn __start_xen(void)
#endif
}

+ if ( slaunch_active )
+ {
+ slaunch_map_mem_regions();
+ slaunch_reserve_mem_regions();
+ }
+
/* Sanitise the raw E820 map to produce a final clean version. */
max_page = raw_max_page = init_e820(memmap_type, &e820_raw);

diff --git a/xen/arch/x86/slaunch.c b/xen/arch/x86/slaunch.c
index 0404084b02..20e277cc5c 100644
--- a/xen/arch/x86/slaunch.c
+++ b/xen/arch/x86/slaunch.c
@@ -7,7 +7,11 @@
#include <xen/compiler.h>
#include <xen/init.h>
#include <xen/macros.h>
+#include <xen/mm.h>
#include <xen/types.h>
+#include <asm/e820.h>
+#include <asm/intel_txt.h>
+#include <asm/page.h>
#include <asm/slaunch.h>

/*
@@ -24,3 +28,95 @@ static void __maybe_unused compile_time_checks(void)
{
BUILD_BUG_ON(sizeof(slaunch_active) != 1);
}
+
+struct slr_table *__init slaunch_get_slrt(void)
+{
+ static struct slr_table *slrt;
+
+ if ( slrt == NULL )
+ {
+ int rc;
+
+ slrt = __va(slaunch_slrt);
+
+ rc = slaunch_map_l2(slaunch_slrt, PAGE_SIZE);
+ BUG_ON(rc != 0);
+
+ if ( slrt->magic != SLR_TABLE_MAGIC )
+ panic("SLRT has invalid magic value: %#08x!\n", slrt->magic);
+ /* XXX: are newer revisions allowed? */
+ if ( slrt->revision != SLR_TABLE_REVISION )
+ panic("SLRT is of unsupported revision: %#04x!\n", slrt->revision);
+ if ( slrt->architecture != SLR_INTEL_TXT )
+ panic("SLRT is for unexpected architecture: %#04x!\n",
+ slrt->architecture);
+ if ( slrt->size > slrt->max_size )
+ panic("SLRT is larger than its max size: %#08x > %#08x!\n",
+ slrt->size, slrt->max_size);
+
+ if ( slrt->size > PAGE_SIZE )
+ {
+ rc = slaunch_map_l2(slaunch_slrt, slrt->size);
+ BUG_ON(rc != 0);
+ }
+ }
+
+ return slrt;
+}
+
+void __init slaunch_map_mem_regions(void)
+{
+ void *evt_log_addr;
+ uint32_t evt_log_size;
+
+ /* Vendor-specific part. */
+ txt_map_mem_regions();
+
+ find_evt_log(slaunch_get_slrt(), &evt_log_addr, &evt_log_size);
+ if ( evt_log_addr != NULL )
+ {
+ int rc = slaunch_map_l2((uintptr_t)evt_log_addr, evt_log_size);
+ BUG_ON(rc != 0);
+ }
+}
+
+void __init slaunch_reserve_mem_regions(void)
+{
+ int rc;
+
+ void *evt_log_addr;
+ uint32_t evt_log_size;
+
+ /* Vendor-specific part. */
+ txt_reserve_mem_regions();
+
+ find_evt_log(slaunch_get_slrt(), &evt_log_addr, &evt_log_size);
+ if ( evt_log_addr != NULL )
+ {
+ printk("SLAUNCH: reserving event log (%#lx - %#lx)\n",
+ (uint64_t)evt_log_addr,
+ (uint64_t)evt_log_addr + evt_log_size);
+ rc = reserve_e820_ram(&e820_raw, (uint64_t)evt_log_addr,
+ (uint64_t)evt_log_addr + evt_log_size);
+ BUG_ON(rc == 0);
+ }
+}
+
+int __init slaunch_map_l2(unsigned long paddr, unsigned long size)
+{
+ unsigned long aligned_paddr = paddr & ~((1ULL << L2_PAGETABLE_SHIFT) - 1);
+ unsigned long pages = ((paddr + size) - aligned_paddr);
+ pages = ROUNDUP(pages, 1ULL << L2_PAGETABLE_SHIFT) >> PAGE_SHIFT;
+
+ if ( aligned_paddr + pages * PAGE_SIZE <= PREBUILT_MAP_LIMIT )
+ return 0;
+
+ if ( aligned_paddr < PREBUILT_MAP_LIMIT )
+ {
+ pages -= (PREBUILT_MAP_LIMIT - aligned_paddr) >> PAGE_SHIFT;
+ aligned_paddr = PREBUILT_MAP_LIMIT;
+ }
+
+ return map_pages_to_xen((uintptr_t)__va(aligned_paddr),
+ maddr_to_mfn(aligned_paddr),
+ pages, PAGE_HYPERVISOR);
+}
--
2.49.0

Sergii Dmytruk

Apr 22, 2025, 11:07:27 AM
to xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, trenchbo...@googlegroups.com
This allows the functionality to be reused by other units that need to
update MTRRs.

This also gets rid of a static variable.

Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/cpu/mtrr/generic.c | 51 ++++++++++++++++-----------------
xen/arch/x86/include/asm/mtrr.h | 8 ++++++
2 files changed, 33 insertions(+), 26 deletions(-)

diff --git a/xen/arch/x86/cpu/mtrr/generic.c b/xen/arch/x86/cpu/mtrr/generic.c
index c587e9140e..2a8dd1d8ff 100644
--- a/xen/arch/x86/cpu/mtrr/generic.c
+++ b/xen/arch/x86/cpu/mtrr/generic.c
@@ -396,9 +396,7 @@ static bool set_mtrr_var_ranges(unsigned int index, struct mtrr_var_range *vr)
return changed;
}

-static uint64_t deftype;
-
-static unsigned long set_mtrr_state(void)
+static unsigned long set_mtrr_state(uint64_t *deftype)
/* [SUMMARY] Set the MTRR state for this CPU.
<state> The MTRR state information to read.
<ctxt> Some relevant CPU context.
@@ -416,14 +414,12 @@ static unsigned long set_mtrr_state(void)
if (mtrr_state.have_fixed && set_fixed_ranges(mtrr_state.fixed_ranges))
change_mask |= MTRR_CHANGE_MASK_FIXED;

- /* Set_mtrr_restore restores the old value of MTRRdefType,
- so to set it we fiddle with the saved value */
- if ((deftype & 0xff) != mtrr_state.def_type
- || MASK_EXTR(deftype, MTRRdefType_E) != mtrr_state.enabled
- || MASK_EXTR(deftype, MTRRdefType_FE) != mtrr_state.fixed_enabled) {
- deftype = (deftype & ~0xcff) | mtrr_state.def_type |
- MASK_INSR(mtrr_state.enabled, MTRRdefType_E) |
- MASK_INSR(mtrr_state.fixed_enabled, MTRRdefType_FE);
+ if ((*deftype & 0xff) != mtrr_state.def_type
+ || MASK_EXTR(*deftype, MTRRdefType_E) != mtrr_state.enabled
+ || MASK_EXTR(*deftype, MTRRdefType_FE) != mtrr_state.fixed_enabled) {
+ *deftype = (*deftype & ~0xcff) | mtrr_state.def_type |
+ MASK_INSR(mtrr_state.enabled, MTRRdefType_E) |
+ MASK_INSR(mtrr_state.fixed_enabled, MTRRdefType_FE);
change_mask |= MTRR_CHANGE_MASK_DEFTYPE;
}

@@ -440,9 +436,10 @@ static DEFINE_SPINLOCK(set_atomicity_lock);
* has been called.
*/

-static bool prepare_set(void)
+struct mtrr_pausing_state mtrr_pause_caching(void)
{
unsigned long cr4;
+ struct mtrr_pausing_state state;

/* Note that this is not ideal, since the cache is only flushed/disabled
for this CPU while the MTRRs are changed, but changing this requires
@@ -462,7 +459,9 @@ static bool prepare_set(void)
alternative("wbinvd", "", X86_FEATURE_XEN_SELFSNOOP);

cr4 = read_cr4();
- if (cr4 & X86_CR4_PGE)
+ state.pge = cr4 & X86_CR4_PGE;
+
+ if (state.pge)
write_cr4(cr4 & ~X86_CR4_PGE);
else if (use_invpcid)
invpcid_flush_all();
@@ -470,27 +469,27 @@ static bool prepare_set(void)
write_cr3(read_cr3());

/* Save MTRR state */
- rdmsrl(MSR_MTRRdefType, deftype);
+ rdmsrl(MSR_MTRRdefType, state.def_type);

/* Disable MTRRs, and set the default type to uncached */
- mtrr_wrmsr(MSR_MTRRdefType, deftype & ~0xcff);
+ mtrr_wrmsr(MSR_MTRRdefType, state.def_type & ~0xcff);

/* Again, only flush caches if we have to. */
alternative("wbinvd", "", X86_FEATURE_XEN_SELFSNOOP);

- return cr4 & X86_CR4_PGE;
+ return state;
}

-static void post_set(bool pge)
+void mtrr_resume_caching(struct mtrr_pausing_state state)
{
/* Intel (P6) standard MTRRs */
- mtrr_wrmsr(MSR_MTRRdefType, deftype);
+ mtrr_wrmsr(MSR_MTRRdefType, state.def_type);

/* Enable caches */
write_cr0(read_cr0() & ~X86_CR0_CD);

/* Reenable CR4.PGE (also flushes the TLB) */
- if (pge)
+ if (state.pge)
write_cr4(read_cr4() | X86_CR4_PGE);
else if (use_invpcid)
invpcid_flush_all();
@@ -504,15 +503,15 @@ void mtrr_set_all(void)
{
unsigned long mask, count;
unsigned long flags;
- bool pge;
+ struct mtrr_pausing_state pausing_state;

local_irq_save(flags);
- pge = prepare_set();
+ pausing_state = mtrr_pause_caching();

/* Actually set the state */
- mask = set_mtrr_state();
+ mask = set_mtrr_state(&pausing_state.def_type);

- post_set(pge);
+ mtrr_resume_caching(pausing_state);
local_irq_restore(flags);

/* Use the atomic bitops to update the global mask */
@@ -537,12 +536,12 @@ void mtrr_set(
{
unsigned long flags;
struct mtrr_var_range *vr;
- bool pge;
+ struct mtrr_pausing_state pausing_state;

vr = &mtrr_state.var_ranges[reg];

local_irq_save(flags);
- pge = prepare_set();
+ pausing_state = mtrr_pause_caching();

if (size == 0) {
/* The invalid bit is kept in the mask, so we simply clear the
@@ -563,7 +562,7 @@ void mtrr_set(
mtrr_wrmsr(MSR_IA32_MTRR_PHYSMASK(reg), vr->mask);
}

- post_set(pge);
+ mtrr_resume_caching(pausing_state);
local_irq_restore(flags);
}

diff --git a/xen/arch/x86/include/asm/mtrr.h b/xen/arch/x86/include/asm/mtrr.h
index 25d442659d..82ea427ba0 100644
--- a/xen/arch/x86/include/asm/mtrr.h
+++ b/xen/arch/x86/include/asm/mtrr.h
@@ -66,6 +66,14 @@ extern uint8_t pat_type_2_pte_flags(uint8_t pat_type);
extern void mtrr_aps_sync_begin(void);
extern void mtrr_aps_sync_end(void);

+struct mtrr_pausing_state {
+ bool pge;
+ uint64_t def_type;
+};
+
+extern struct mtrr_pausing_state mtrr_pause_caching(void);
+extern void mtrr_resume_caching(struct mtrr_pausing_state state);
+
extern bool mtrr_var_range_msr_set(struct domain *d, struct mtrr_state *m,
uint32_t msr, uint64_t msr_content);
extern bool mtrr_fix_range_msr_set(struct domain *d, struct mtrr_state *m,
--
2.49.0

Sergii Dmytruk

Apr 22, 2025, 11:07:30 AM
to xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, trenchbo...@googlegroups.com
From: Krystian Hebel <krystia...@3mdeb.com>

In preparation for the TXT SENTER call, GRUB had to modify MTRR settings
to make everything UC except the SINIT ACM. The old values are restored
from the SLRT, where they were saved by the bootloader.

Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
Signed-off-by: Michał Żygowski <michal....@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/e820.c | 5 ++
xen/arch/x86/include/asm/intel_txt.h | 3 ++
xen/arch/x86/intel_txt.c | 75 ++++++++++++++++++++++++++++
3 files changed, 83 insertions(+)

diff --git a/xen/arch/x86/e820.c b/xen/arch/x86/e820.c
index ca577c0bde..d105d1918a 100644
--- a/xen/arch/x86/e820.c
+++ b/xen/arch/x86/e820.c
@@ -11,6 +11,8 @@
#include <asm/mtrr.h>
#include <asm/msr.h>
#include <asm/guest.h>
+#include <asm/intel_txt.h>
+#include <asm/slaunch.h>

/*
* opt_mem: Limit maximum address of physical RAM.
@@ -442,6 +444,9 @@ static uint64_t __init mtrr_top_of_ram(void)
ASSERT(paddr_bits);
addr_mask = ((1ULL << paddr_bits) - 1) & PAGE_MASK;

+ if ( slaunch_active )
+ txt_restore_mtrrs(e820_verbose);
+
rdmsrl(MSR_MTRRcap, mtrr_cap);
rdmsrl(MSR_MTRRdefType, mtrr_def);

diff --git a/xen/arch/x86/include/asm/intel_txt.h b/xen/arch/x86/include/asm/intel_txt.h
index 85ef9f6245..9083260cf9 100644
--- a/xen/arch/x86/include/asm/intel_txt.h
+++ b/xen/arch/x86/include/asm/intel_txt.h
@@ -402,4 +402,7 @@ void txt_map_mem_regions(void);
/* Marks TXT-specific memory as used to avoid its corruption. */
void txt_reserve_mem_regions(void);

+/* Restores original MTRR values saved by a bootloader before starting DRTM. */
+void txt_restore_mtrrs(bool e820_verbose);
+
#endif /* __ASSEMBLY__ */
diff --git a/xen/arch/x86/intel_txt.c b/xen/arch/x86/intel_txt.c
index 4a4e404007..8ffcab0e61 100644
--- a/xen/arch/x86/intel_txt.c
+++ b/xen/arch/x86/intel_txt.c
@@ -10,6 +10,8 @@
#include <xen/types.h>
#include <asm/e820.h>
#include <asm/intel_txt.h>
+#include <asm/msr.h>
+#include <asm/mtrr.h>
#include <asm/slaunch.h>

static uint64_t __initdata txt_heap_base, txt_heap_size;
@@ -100,3 +102,76 @@ void __init txt_reserve_mem_regions(void)
E820_UNUSABLE);
BUG_ON(rc == 0);
}
+
+void __init txt_restore_mtrrs(bool e820_verbose)
+{
+ struct slr_entry_intel_info *intel_info;
+ uint64_t mtrr_cap, mtrr_def, base, mask;
+ unsigned int i;
+ uint64_t def_type;
+ struct mtrr_pausing_state pausing_state;
+
+ rdmsrl(MSR_MTRRcap, mtrr_cap);
+ rdmsrl(MSR_MTRRdefType, mtrr_def);
+
+ if ( e820_verbose )
+ {
+ printk("MTRRs set previously for SINIT ACM:\n");
+ printk(" MTRR cap: %"PRIx64" type: %"PRIx64"\n", mtrr_cap, mtrr_def);
+
+ for ( i = 0; i < (uint8_t)mtrr_cap; i++ )
+ {
+ rdmsrl(MSR_IA32_MTRR_PHYSBASE(i), base);
+ rdmsrl(MSR_IA32_MTRR_PHYSMASK(i), mask);
+
+ printk(" MTRR[%d]: base %"PRIx64" mask %"PRIx64"\n",
+ i, base, mask);
+ }
+ }
+
+ intel_info = (struct slr_entry_intel_info *)
+ slr_next_entry_by_tag(slaunch_get_slrt(), NULL, SLR_ENTRY_INTEL_INFO);
+
+ if ( (mtrr_cap & 0xFF) != intel_info->saved_bsp_mtrrs.mtrr_vcnt )
+ {
+ printk("Bootloader saved %"PRIu64" MTRR values, but there should be %"PRIu64"\n",
+ intel_info->saved_bsp_mtrrs.mtrr_vcnt, mtrr_cap & 0xFF);
+ /* Choose the smaller one to be on the safe side. */
+ mtrr_cap = (mtrr_cap & 0xFF) > intel_info->saved_bsp_mtrrs.mtrr_vcnt ?
+ intel_info->saved_bsp_mtrrs.mtrr_vcnt : mtrr_cap;
+ }
+
+ def_type = intel_info->saved_bsp_mtrrs.default_mem_type;
+ pausing_state = mtrr_pause_caching();
+
+ for ( i = 0; i < (uint8_t)mtrr_cap; i++ )
+ {
+ base = intel_info->saved_bsp_mtrrs.mtrr_pair[i].mtrr_physbase;
+ mask = intel_info->saved_bsp_mtrrs.mtrr_pair[i].mtrr_physmask;
+ wrmsrl(MSR_IA32_MTRR_PHYSBASE(i), base);
+ wrmsrl(MSR_IA32_MTRR_PHYSMASK(i), mask);
+ }
+
+ pausing_state.def_type = def_type;
+ mtrr_resume_caching(pausing_state);
+
+ if ( e820_verbose )
+ {
+ printk("Restored MTRRs:\n"); /* Printed by caller, mtrr_top_of_ram(). */
+
+ /*
+ * If MTRRs are not enabled or the default type is WB, the caller won't
+ * print the MTRRs, so do it here.
+ */
+ if ( !test_bit(11, &def_type) || ((uint8_t)def_type == X86_MT_WB) )
+ {
+ for ( i = 0; i < (uint8_t)mtrr_cap; i++ )
+ {
+ rdmsrl(MSR_IA32_MTRR_PHYSBASE(i), base);
+ rdmsrl(MSR_IA32_MTRR_PHYSMASK(i), mask);
+ printk(" MTRR[%d]: base %"PRIx64" mask %"PRIx64"\n",
+ i, base, mask);
+ }
+ }
+ }
+
+ /* Restore IA32_MISC_ENABLE. */
+ wrmsrl(MSR_IA32_MISC_ENABLE, intel_info->saved_misc_enable_msr);
+}
--
2.49.0

Sergii Dmytruk

Apr 22, 2025, 11:07:31 AM
to xen-...@lists.xenproject.org, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini, trenchbo...@googlegroups.com
From: Krystian Hebel <krystia...@3mdeb.com>

The code comes from [1] and is licensed under the GPL-2.0 license.
It's a combination of:
- include/crypto/sha1.h
- include/crypto/sha1_base.h
- lib/crypto/sha1.c
- crypto/sha1_generic.c

Changes:
- includes
- formatting
- renames and splicing of some trivial functions that are called once
- dropping of `int` return values (only zero was ever returned)
- getting rid of references to `struct shash_desc`

[1]: https://github.com/torvalds/linux/tree/afdab700f65e14070d8ab92175544b1c62b8bf03

Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
---
xen/include/xen/sha1.h | 12 +++
xen/lib/Makefile | 1 +
xen/lib/sha1.c | 240 +++++++++++++++++++++++++++++++++++++++++
3 files changed, 253 insertions(+)
create mode 100644 xen/include/xen/sha1.h
create mode 100644 xen/lib/sha1.c

diff --git a/xen/include/xen/sha1.h b/xen/include/xen/sha1.h
new file mode 100644
index 0000000000..752dfdf827
--- /dev/null
+++ b/xen/include/xen/sha1.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __XEN_SHA1_H
+#define __XEN_SHA1_H
+
+#include <xen/types.h>
+
+#define SHA1_DIGEST_SIZE 20
+
+void sha1_hash(const uint8_t *data, unsigned int len, uint8_t *out);
+
+#endif /* !__XEN_SHA1_H */
diff --git a/xen/lib/Makefile b/xen/lib/Makefile
index 76dc86fab0..0d5774b8d7 100644
--- a/xen/lib/Makefile
+++ b/xen/lib/Makefile
@@ -38,6 +38,7 @@ lib-y += strtoll.o
lib-y += strtoul.o
lib-y += strtoull.o
lib-$(CONFIG_X86) += x86-generic-hweightl.o
+lib-$(CONFIG_X86) += sha1.o
lib-$(CONFIG_X86) += xxhash32.o
lib-$(CONFIG_X86) += xxhash64.o

diff --git a/xen/lib/sha1.c b/xen/lib/sha1.c
new file mode 100644
index 0000000000..a11822519d
--- /dev/null
+++ b/xen/lib/sha1.c
@@ -0,0 +1,240 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * SHA1 routine optimized to do word accesses rather than byte accesses,
+ * and to avoid unnecessary copies into the context array.
+ *
+ * This was based on the git SHA1 implementation.
+ */
+
+#include <xen/bitops.h>
+#include <xen/types.h>
+#include <xen/sha1.h>
+#include <xen/unaligned.h>
+
+/*
+ * If you have 32 registers or more, the compiler can (and should)
+ * try to change the array[] accesses into registers. However, on
+ * machines with less than ~25 registers, that won't really work,
+ * and at least gcc will make an unholy mess of it.
+ *
+ * So to avoid that mess which just slows things down, we force
+ * the stores to memory to actually happen (we might be better off
+ * with a 'W(t)=(val);asm("":"+m" (W(t))' there instead, as
+ * suggested by Artur Skawina - that will also make gcc unable to
+ * try to do the silly "optimize away loads" part because it won't
+ * see what the value will be).
+ *
+ * Ben Herrenschmidt reports that on PPC, the C version comes close
+ * to the optimized asm with this (ie on PPC you don't want that
+ * 'volatile', since there are lots of registers).
+ *
+ * On ARM we get the best code generation by forcing a full memory barrier
+ * between each SHA_ROUND, otherwise gcc happily gets wild with spilling and
+ * the stack frame size simply explodes and performance goes down the drain.
+ */
+
+#ifdef CONFIG_X86
+ #define setW(x, val) (*(volatile uint32_t *)&W(x) = (val))
+#elif defined(CONFIG_ARM)
+ #define setW(x, val) do { W(x) = (val); __asm__("":::"memory"); } while ( 0 )
+#else
+ #define setW(x, val) (W(x) = (val))
+#endif
+
+/* This "rolls" over the 512-bit array */
+#define W(x) (array[(x) & 15])
+
+/*
+ * Where do we get the source from? The first 16 iterations get it from
+ * the input data, the next mix it from the 512-bit array.
+ */
+#define SHA_SRC(t) get_unaligned_be32((uint32_t *)data + t)
+#define SHA_MIX(t) rol32(W(t + 13) ^ W(t + 8) ^ W(t + 2) ^ W(t), 1)
+
+#define SHA_ROUND(t, input, fn, constant, A, B, C, D, E) do { \
+ uint32_t TEMP = input(t); setW(t, TEMP); \
+ E += TEMP + rol32(A, 5) + (fn) + (constant); \
+ B = ror32(B, 2); \
+ TEMP = E; E = D; D = C; C = B; B = A; A = TEMP; \
+ } while ( 0 )
+
+#define T_0_15(t, A, B, C, D, E) \
+ SHA_ROUND(t, SHA_SRC, (((C ^ D) & B) ^ D), 0x5a827999, A, B, C, D, E)
+#define T_16_19(t, A, B, C, D, E) \
+ SHA_ROUND(t, SHA_MIX, (((C ^ D) & B) ^ D), 0x5a827999, A, B, C, D, E)
+#define T_20_39(t, A, B, C, D, E) \
+ SHA_ROUND(t, SHA_MIX, (B ^ C ^ D), 0x6ed9eba1, A, B, C, D, E)
+#define T_40_59(t, A, B, C, D, E) \
+ SHA_ROUND(t, SHA_MIX, ((B & C) + (D & (B ^ C))), 0x8f1bbcdc, A, B, C, \
+ D, E)
+#define T_60_79(t, A, B, C, D, E) \
+ SHA_ROUND(t, SHA_MIX, (B ^ C ^ D), 0xca62c1d6, A, B, C, D, E)
+
+#define SHA1_BLOCK_SIZE 64
+#define SHA1_WORKSPACE_WORDS 16
+
+struct sha1_state {
+ uint32_t state[SHA1_DIGEST_SIZE / 4];
+ uint64_t count;
+ uint8_t buffer[SHA1_BLOCK_SIZE];
+};
+
+typedef void sha1_block_fn(struct sha1_state *sst, const uint8_t *src, int blocks);
+
+/**
+ * sha1_transform - single block SHA1 transform (deprecated)
+ *
+ * @digest: 160 bit digest to update
+ * @data: 512 bits of data to hash
+ * @array: 16 words of workspace (see note)
+ *
+ * This function executes SHA-1's internal compression function. It updates the
+ * 160-bit internal state (@digest) with a single 512-bit data block (@data).
+ *
+ * Don't use this function. SHA-1 is no longer considered secure. And even if
+ * you do have to use SHA-1, this isn't the correct way to hash something with
+ * SHA-1 as this doesn't handle padding and finalization.
+ *
+ * Note: If the hash is security sensitive, the caller should be sure
+ * to clear the workspace. This is left to the caller to avoid
+ * unnecessary clears between chained hashing operations.
+ */
+void sha1_transform(uint32_t *digest, const uint8_t *data, uint32_t *array)
+{
+ uint32_t A, B, C, D, E;
+ unsigned int i = 0;
+
+ A = digest[0];
+ B = digest[1];
+ C = digest[2];
+ D = digest[3];
+ E = digest[4];
+
+ /* Round 1 - iterations 0-16 take their input from 'data' */
+ for ( ; i < 16; ++i )
+ T_0_15(i, A, B, C, D, E);
+
+ /* Round 1 - tail. Input from 512-bit mixing array */
+ for ( ; i < 20; ++i )
+ T_16_19(i, A, B, C, D, E);
+
+ /* Round 2 */
+ for ( ; i < 40; ++i )
+ T_20_39(i, A, B, C, D, E);
+
+ /* Round 3 */
+ for ( ; i < 60; ++i )
+ T_40_59(i, A, B, C, D, E);
+
+ /* Round 4 */
+ for ( ; i < 80; ++i )
+ T_60_79(i, A, B, C, D, E);
+
+ digest[0] += A;
+ digest[1] += B;
+ digest[2] += C;
+ digest[3] += D;
+ digest[4] += E;
+}
+
+static void sha1_init(struct sha1_state *sctx)
+{
+ sctx->state[0] = 0x67452301UL;
+ sctx->state[1] = 0xefcdab89UL;
+ sctx->state[2] = 0x98badcfeUL;
+ sctx->state[3] = 0x10325476UL;
+ sctx->state[4] = 0xc3d2e1f0UL;
+ sctx->count = 0;
+}
+
+static void sha1_do_update(struct sha1_state *sctx,
+ const uint8_t *data,
+ unsigned int len,
+ sha1_block_fn *block_fn)
+{
+ unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+
+ sctx->count += len;
+
+ if ( unlikely((partial + len) >= SHA1_BLOCK_SIZE) )
+ {
+ int blocks;
+
+ if ( partial )
+ {
+ int p = SHA1_BLOCK_SIZE - partial;
+
+ memcpy(sctx->buffer + partial, data, p);
+ data += p;
+ len -= p;
+
+ block_fn(sctx, sctx->buffer, 1);
+ }
+
+ blocks = len / SHA1_BLOCK_SIZE;
+ len %= SHA1_BLOCK_SIZE;
+
+ if ( blocks )
+ {
+ block_fn(sctx, data, blocks);
+ data += blocks * SHA1_BLOCK_SIZE;
+ }
+ partial = 0;
+ }
+ if ( len )
+ memcpy(sctx->buffer + partial, data, len);
+}
+
+static void sha1_do_finalize(struct sha1_state *sctx, sha1_block_fn *block_fn)
+{
+ const int bit_offset = SHA1_BLOCK_SIZE - sizeof(__be64);
+ __be64 *bits = (__be64 *)(sctx->buffer + bit_offset);
+ unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+
+ sctx->buffer[partial++] = 0x80;
+ if ( partial > bit_offset )
+ {
+ memset(sctx->buffer + partial, 0x0, SHA1_BLOCK_SIZE - partial);
+ partial = 0;
+
+ block_fn(sctx, sctx->buffer, 1);
+ }
+
+ memset(sctx->buffer + partial, 0x0, bit_offset - partial);
+ *bits = cpu_to_be64(sctx->count << 3);
+ block_fn(sctx, sctx->buffer, 1);
+}
+
+static void sha1_finish(struct sha1_state *sctx, uint8_t *out)
+{
+ __be32 *digest = (__be32 *)out;
+ int i;
+
+ for ( i = 0; i < SHA1_DIGEST_SIZE / sizeof(__be32); i++ )
+ put_unaligned_be32(sctx->state[i], digest++);
+
+ memset(sctx, 0, sizeof(*sctx));
+}
+
+static void sha1_generic_block_fn(struct sha1_state *sctx, const uint8_t *src,
+ int blocks)
+{
+ uint32_t temp[SHA1_WORKSPACE_WORDS];
+
+ while ( blocks-- )
+ {
+ sha1_transform(sctx->state, src, temp);
+ src += SHA1_BLOCK_SIZE;
+ }
+ memset(temp, 0, sizeof(temp));
+}
+
+void sha1_hash(const uint8_t *data, unsigned int len, uint8_t *out)
+{
+ struct sha1_state sctx;
+
+ sha1_init(&sctx);
+ sha1_do_update(&sctx, data, len, sha1_generic_block_fn);
+ sha1_do_finalize(&sctx, sha1_generic_block_fn);
+ sha1_finish(&sctx, out);
+}
--
2.49.0

Sergii Dmytruk

Apr 22, 2025, 11:07:34 AM
to xen-...@lists.xenproject.org, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini, trenchbo...@googlegroups.com
The code comes from [1] and is licensed under GPL-2.0 or a later version
of the license. It's a combination of:
- include/crypto/sha2.h
- include/crypto/sha256_base.h
- lib/crypto/sha256.c
- crypto/sha256_generic.c

Changes:
- includes
- formatting
- renames and splicing of some trivial functions that are called once
- dropping of `int` return values (only zero was ever returned)
- getting rid of references to `struct shash_desc`

[1]: https://github.com/torvalds/linux/tree/afdab700f65e14070d8ab92175544b1c62b8bf03

Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
---
xen/include/xen/sha256.h | 12 ++
xen/lib/Makefile | 1 +
xen/lib/sha256.c | 238 +++++++++++++++++++++++++++++++++++++++
3 files changed, 251 insertions(+)
create mode 100644 xen/include/xen/sha256.h
create mode 100644 xen/lib/sha256.c

diff --git a/xen/include/xen/sha256.h b/xen/include/xen/sha256.h
new file mode 100644
index 0000000000..703eddc198
--- /dev/null
+++ b/xen/include/xen/sha256.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __XEN_SHA256_H
+#define __XEN_SHA256_H
+
+#include <xen/inttypes.h>
+
+#define SHA256_DIGEST_SIZE 32
+
+void sha256_hash(const u8 *data, unsigned int len, u8 *out);
+
+#endif /* !__XEN_SHA256_H */
diff --git a/xen/lib/Makefile b/xen/lib/Makefile
index 0d5774b8d7..c7a8d1bb02 100644
--- a/xen/lib/Makefile
+++ b/xen/lib/Makefile
@@ -39,6 +39,7 @@ lib-y += strtoul.o
lib-y += strtoull.o
lib-$(CONFIG_X86) += x86-generic-hweightl.o
lib-$(CONFIG_X86) += sha1.o
+lib-$(CONFIG_X86) += sha256.o
lib-$(CONFIG_X86) += xxhash32.o
lib-$(CONFIG_X86) += xxhash64.o

diff --git a/xen/lib/sha256.c b/xen/lib/sha256.c
new file mode 100644
index 0000000000..369a52af80
--- /dev/null
+++ b/xen/lib/sha256.c
@@ -0,0 +1,238 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * SHA-256, as specified in
+ * http://csrc.nist.gov/groups/STM/cavp/documents/shs/sha256-384-512.pdf
+ *
+ * SHA-256 code by Jean-Luc Cooke <jlc...@certainkey.com>.
+ *
+ * Copyright (c) Jean-Luc Cooke <jlc...@certainkey.com>
+ * Copyright (c) Andrew McDonald <and...@mcdonald.org.uk>
+ * Copyright (c) 2002 James Morris <jmo...@intercode.com.au>
+ * Copyright (c) 2014 Red Hat Inc.
+ */
+
+#include <xen/bitops.h>
+#include <xen/sha256.h>
+#include <xen/unaligned.h>
+
+#define SHA256_BLOCK_SIZE 64
+
+struct sha256_state {
+ uint32_t state[SHA256_DIGEST_SIZE / 4];
+ uint64_t count;
+ uint8_t buf[SHA256_BLOCK_SIZE];
+};
+
+typedef void sha256_block_fn(struct sha256_state *sst, uint8_t const *src,
+ int blocks);
+
+static const uint32_t SHA256_K[] = {
+ 0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5,
+ 0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5,
+ 0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3,
+ 0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174,
+ 0xe49b69c1, 0xefbe4786, 0x0fc19dc6, 0x240ca1cc,
+ 0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da,
+ 0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7,
+ 0xc6e00bf3, 0xd5a79147, 0x06ca6351, 0x14292967,
+ 0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13,
+ 0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85,
+ 0xa2bfe8a1, 0xa81a664b, 0xc24b8b70, 0xc76c51a3,
+ 0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070,
+ 0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5,
+ 0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3,
+ 0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208,
+ 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2,
+};
+
+static uint32_t Ch(uint32_t x, uint32_t y, uint32_t z)
+{
+ return z ^ (x & (y ^ z));
+}
+
+static uint32_t Maj(uint32_t x, uint32_t y, uint32_t z)
+{
+ return (x & y) | (z & (x | y));
+}
+
+#define e0(x) (ror32(x, 2) ^ ror32(x, 13) ^ ror32(x, 22))
+#define e1(x) (ror32(x, 6) ^ ror32(x, 11) ^ ror32(x, 25))
+#define s0(x) (ror32(x, 7) ^ ror32(x, 18) ^ (x >> 3))
+#define s1(x) (ror32(x, 17) ^ ror32(x, 19) ^ (x >> 10))
+
+static void LOAD_OP(int I, uint32_t *W, const uint8_t *input)
+{
+ W[I] = get_unaligned_be32((uint32_t *)input + I);
+}
+
+static void BLEND_OP(int I, uint32_t *W)
+{
+ W[I] = s1(W[I - 2]) + W[I - 7] + s0(W[I - 15]) + W[I - 16];
+}
+
+#define SHA256_ROUND(i, a, b, c, d, e, f, g, h) do { \
+ uint32_t t1, t2; \
+ t1 = h + e1(e) + Ch(e, f, g) + SHA256_K[i] + W[i]; \
+ t2 = e0(a) + Maj(a, b, c); \
+ d += t1; \
+ h = t1 + t2; \
+ } while ( 0 )
+
+static void sha256_init(struct sha256_state *sctx)
+{
+ sctx->state[0] = 0x6a09e667UL;
+ sctx->state[1] = 0xbb67ae85UL;
+ sctx->state[2] = 0x3c6ef372UL;
+ sctx->state[3] = 0xa54ff53aUL;
+ sctx->state[4] = 0x510e527fUL;
+ sctx->state[5] = 0x9b05688cUL;
+ sctx->state[6] = 0x1f83d9abUL;
+ sctx->state[7] = 0x5be0cd19UL;
+ sctx->count = 0;
+}
+
+static void sha256_do_update(struct sha256_state *sctx,
+ const uint8_t *data,
+ unsigned int len,
+ sha256_block_fn *block_fn)
+{
+ unsigned int partial = sctx->count % SHA256_BLOCK_SIZE;
+
+ sctx->count += len;
+
+ if ( unlikely((partial + len) >= SHA256_BLOCK_SIZE) )
+ {
+ int blocks;
+
+ if ( partial )
+ {
+ int p = SHA256_BLOCK_SIZE - partial;
+
+ memcpy(sctx->buf + partial, data, p);
+ data += p;
+ len -= p;
+
+ block_fn(sctx, sctx->buf, 1);
+ }
+
+ blocks = len / SHA256_BLOCK_SIZE;
+ len %= SHA256_BLOCK_SIZE;
+
+ if ( blocks )
+ {
+ block_fn(sctx, data, blocks);
+ data += blocks * SHA256_BLOCK_SIZE;
+ }
+ partial = 0;
+ }
+ if ( len )
+ memcpy(sctx->buf + partial, data, len);
+}
+
+static void sha256_do_finalize(struct sha256_state *sctx,
+ sha256_block_fn *block_fn)
+{
+ const int bit_offset = SHA256_BLOCK_SIZE - sizeof(__be64);
+ __be64 *bits = (__be64 *)(sctx->buf + bit_offset);
+ unsigned int partial = sctx->count % SHA256_BLOCK_SIZE;
+
+ sctx->buf[partial++] = 0x80;
+ if ( partial > bit_offset )
+ {
+ memset(sctx->buf + partial, 0x0, SHA256_BLOCK_SIZE - partial);
+ partial = 0;
+
+ block_fn(sctx, sctx->buf, 1);
+ }
+
+ memset(sctx->buf + partial, 0x0, bit_offset - partial);
+ *bits = cpu_to_be64(sctx->count << 3);
+ block_fn(sctx, sctx->buf, 1);
+}
+
+static void sha256_finish(struct sha256_state *sctx, uint8_t *out,
+ unsigned int digest_size)
+{
+ __be32 *digest = (__be32 *)out;
+ int i;
+
+ for ( i = 0; digest_size > 0; i++, digest_size -= sizeof(__be32) )
+ put_unaligned_be32(sctx->state[i], digest++);
+
+ memset(sctx, 0, sizeof(*sctx));
+}
+
+static void sha256_transform(uint32_t *state, const uint8_t *input, uint32_t *W)
+{
+ uint32_t a, b, c, d, e, f, g, h;
+ int i;
+
+ /* load the input */
+ for ( i = 0; i < 16; i += 8 )
+ {
+ LOAD_OP(i + 0, W, input);
+ LOAD_OP(i + 1, W, input);
+ LOAD_OP(i + 2, W, input);
+ LOAD_OP(i + 3, W, input);
+ LOAD_OP(i + 4, W, input);
+ LOAD_OP(i + 5, W, input);
+ LOAD_OP(i + 6, W, input);
+ LOAD_OP(i + 7, W, input);
+ }
+
+ /* now blend */
+ for ( i = 16; i < 64; i += 8 )
+ {
+ BLEND_OP(i + 0, W);
+ BLEND_OP(i + 1, W);
+ BLEND_OP(i + 2, W);
+ BLEND_OP(i + 3, W);
+ BLEND_OP(i + 4, W);
+ BLEND_OP(i + 5, W);
+ BLEND_OP(i + 6, W);
+ BLEND_OP(i + 7, W);
+ }
+
+ /* load the state into our registers */
+ a = state[0]; b = state[1]; c = state[2]; d = state[3];
+ e = state[4]; f = state[5]; g = state[6]; h = state[7];
+
+ /* now iterate */
+ for ( i = 0; i < 64; i += 8 )
+ {
+ SHA256_ROUND(i + 0, a, b, c, d, e, f, g, h);
+ SHA256_ROUND(i + 1, h, a, b, c, d, e, f, g);
+ SHA256_ROUND(i + 2, g, h, a, b, c, d, e, f);
+ SHA256_ROUND(i + 3, f, g, h, a, b, c, d, e);
+ SHA256_ROUND(i + 4, e, f, g, h, a, b, c, d);
+ SHA256_ROUND(i + 5, d, e, f, g, h, a, b, c);
+ SHA256_ROUND(i + 6, c, d, e, f, g, h, a, b);
+ SHA256_ROUND(i + 7, b, c, d, e, f, g, h, a);
+ }
+
+ state[0] += a; state[1] += b; state[2] += c; state[3] += d;
+ state[4] += e; state[5] += f; state[6] += g; state[7] += h;
+}
+
+static void sha256_transform_blocks(struct sha256_state *sctx,
+ const uint8_t *input, int blocks)
+{
+ uint32_t W[64];
+
+ do {
+ sha256_transform(sctx->state, input, W);
+ input += SHA256_BLOCK_SIZE;
+ } while ( --blocks );
+
+ memset(W, 0, sizeof(W));
+}
+
+void sha256_hash(const uint8_t *data, unsigned int len, uint8_t *out)
+{
+ struct sha256_state sctx;
+
+ sha256_init(&sctx);
+ sha256_do_update(&sctx, data, len, sha256_transform_blocks);
+ sha256_do_finalize(&sctx, sha256_transform_blocks);
+ sha256_finish(&sctx, out, SHA256_DIGEST_SIZE);
+}
--
2.49.0

Sergii Dmytruk

Apr 22, 2025, 11:07:37 AM
to xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, trenchbo...@googlegroups.com
From: Krystian Hebel <krystia...@3mdeb.com>

This file is built twice: once for early 32b mode without paging to measure
the MBI, and once as 64b code to measure the dom0 kernel and initramfs.
Since the MBI is small, the first case uses the TPM to do the hashing. The
kernel and initramfs, on the other hand, are too big: sending them to the
TPM would take multiple minutes.

Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/Makefile | 1 +
xen/arch/x86/boot/Makefile | 7 +-
xen/arch/x86/boot/head.S | 3 +
xen/arch/x86/include/asm/slaunch.h | 14 +
xen/arch/x86/include/asm/tpm.h | 19 ++
xen/arch/x86/slaunch.c | 7 +-
xen/arch/x86/tpm.c | 437 +++++++++++++++++++++++++++++
7 files changed, 486 insertions(+), 2 deletions(-)
create mode 100644 xen/arch/x86/include/asm/tpm.h
create mode 100644 xen/arch/x86/tpm.c

diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index cae548f7e9..7d1027a50f 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -67,6 +67,7 @@ obj-y += spec_ctrl.o
obj-y += srat.o
obj-y += string.o
obj-y += time.o
+obj-y += tpm.o
obj-y += traps-setup.o
obj-y += traps.o
obj-$(CONFIG_INTEL) += tsx.o
diff --git a/xen/arch/x86/boot/Makefile b/xen/arch/x86/boot/Makefile
index d0015f7d19..ab37ab1fb7 100644
--- a/xen/arch/x86/boot/Makefile
+++ b/xen/arch/x86/boot/Makefile
@@ -6,6 +6,7 @@ obj32 := cmdline.32.o
obj32 += reloc.32.o
obj32 += reloc-trampoline.32.o
obj32 += slaunch_early.32.o
+obj32 += tpm_early.32.o

obj64 := reloc-trampoline.o

@@ -31,6 +32,10 @@ $(obj)/%.32.o: $(src)/%.c FORCE

$(obj)/slaunch_early.32.o: XEN_CFLAGS += -D__EARLY_SLAUNCH__

+$(obj)/tpm_early.32.o: XEN_CFLAGS += -D__EARLY_SLAUNCH__
+$(obj)/tpm_early.32.o: $(src)/../tpm.c FORCE
+ $(call if_changed_rule,cc_o_c)
+
orphan-handling-$(call ld-option,--orphan-handling=error) := --orphan-handling=error
LDFLAGS_DIRECT-$(call ld-option,--warn-rwx-segments) := --no-warn-rwx-segments
LDFLAGS_DIRECT += $(LDFLAGS_DIRECT-y)
@@ -84,7 +89,7 @@ cmd_combine = \
--bin1 $(obj)/built-in-32.base.bin \
--bin2 $(obj)/built-in-32.offset.bin \
--map $(obj)/built-in-32.base.map \
- --exports cmdline_parse_early,reloc,reloc_trampoline32,slaunch_early_init \
+ --exports cmdline_parse_early,reloc,reloc_trampoline32,slaunch_early_init,tpm_extend_mbi \
--output $@

targets += built-in-32.S
diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
index e522a36305..0b7903070a 100644
--- a/xen/arch/x86/boot/head.S
+++ b/xen/arch/x86/boot/head.S
@@ -527,6 +527,9 @@ __start:
/* Store MBI address in EBX where MB2 code expects it. */
mov %eax, %ebx

+ /* tpm_extend_mbi(mbi/eax, slrt/edx) using fastcall. */
+ call tpm_extend_mbi
+
/* Move magic number expected by Multiboot 2 to EAX and fall through. */
movl $MULTIBOOT2_BOOTLOADER_MAGIC, %eax

diff --git a/xen/arch/x86/include/asm/slaunch.h b/xen/arch/x86/include/asm/slaunch.h
index 78d3c8bf37..b9b50f20c6 100644
--- a/xen/arch/x86/include/asm/slaunch.h
+++ b/xen/arch/x86/include/asm/slaunch.h
@@ -10,6 +10,20 @@
#include <xen/slr_table.h>
#include <xen/types.h>

+#define DRTM_LOC 2
+#define DRTM_CODE_PCR 17
+#define DRTM_DATA_PCR 18
+
+/*
+ * Secure Launch event log entry types. The TXT specification defines the base
+ * event value as 0x400 for DRTM events; use it regardless of the DRTM
+ * implementation for consistency.
+ */
+#define DLE_EVTYPE_BASE 0x400
+#define DLE_EVTYPE_SLAUNCH (DLE_EVTYPE_BASE + 0x102)
+#define DLE_EVTYPE_SLAUNCH_START (DLE_EVTYPE_BASE + 0x103)
+#define DLE_EVTYPE_SLAUNCH_END (DLE_EVTYPE_BASE + 0x104)
+
extern bool slaunch_active;

/*
diff --git a/xen/arch/x86/include/asm/tpm.h b/xen/arch/x86/include/asm/tpm.h
new file mode 100644
index 0000000000..d46eba673c
--- /dev/null
+++ b/xen/arch/x86/include/asm/tpm.h
@@ -0,0 +1,19 @@
+/*
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ *
+ * Copyright (c) 2022-2025 3mdeb Sp. z o.o. All rights reserved.
+ */
+
+#ifndef _ASM_X86_TPM_H_
+#define _ASM_X86_TPM_H_
+
+#include <xen/types.h>
+
+#define TPM_TIS_BASE 0xFED40000
+#define TPM_TIS_SIZE 0x00010000
+
+void tpm_hash_extend(unsigned loc, unsigned pcr, const uint8_t *buf,
+ unsigned size, uint32_t type, const uint8_t *log_data,
+ unsigned log_data_size);
+
+#endif /* _ASM_X86_TPM_H_ */
diff --git a/xen/arch/x86/slaunch.c b/xen/arch/x86/slaunch.c
index 20e277cc5c..7b13b0a852 100644
--- a/xen/arch/x86/slaunch.c
+++ b/xen/arch/x86/slaunch.c
@@ -13,6 +13,7 @@
#include <asm/intel_txt.h>
#include <asm/page.h>
#include <asm/slaunch.h>
+#include <asm/tpm.h>

/*
* These variables are assigned to by the code near Xen's entry point.
@@ -65,16 +66,20 @@ struct slr_table *__init slaunch_get_slrt(void)

void __init slaunch_map_mem_regions(void)
{
+ int rc;
void *evt_log_addr;
uint32_t evt_log_size;

+ rc = slaunch_map_l2(TPM_TIS_BASE, TPM_TIS_SIZE);
+ BUG_ON(rc != 0);
+
/* Vendor-specific part. */
txt_map_mem_regions();

find_evt_log(slaunch_get_slrt(), &evt_log_addr, &evt_log_size);
if ( evt_log_addr != NULL )
{
- int rc = slaunch_map_l2((uintptr_t)evt_log_addr, evt_log_size);
+ rc = slaunch_map_l2((uintptr_t)evt_log_addr, evt_log_size);
BUG_ON(rc != 0);
}
}
diff --git a/xen/arch/x86/tpm.c b/xen/arch/x86/tpm.c
new file mode 100644
index 0000000000..8cf836d0df
--- /dev/null
+++ b/xen/arch/x86/tpm.c
@@ -0,0 +1,437 @@
+/*
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ *
+ * Copyright (c) 2022-2025 3mdeb Sp. z o.o. All rights reserved.
+ */
+
+#include <xen/sha1.h>
+#include <xen/types.h>
+#include <asm/intel_txt.h>
+#include <asm/slaunch.h>
+#include <asm/tpm.h>
+
+#ifdef __EARLY_SLAUNCH__
+
+#ifdef __va
+#error "__va defined in non-paged mode!"
+#endif
+
+#define __va(x) _p(x)
+
+/* Implementation of slaunch_get_slrt() for early TPM code. */
+static uint32_t slrt_location;
+struct slr_table *slaunch_get_slrt(void)
+{
+ return __va(slrt_location);
+}
+
+/*
+ * The code is being compiled as a standalone binary without linking to any
+ * other part of Xen. Providing implementation of builtin functions in this
+ * case is necessary if compiler chooses to not use an inline builtin.
+ */
+void *memcpy(void *dest, const void *src, size_t n)
+{
+ const uint8_t *s = src;
+ uint8_t *d = dest;
+
+ while ( n-- )
+ *d++ = *s++;
+
+ return dest;
+}
+
+#else /* __EARLY_SLAUNCH__ */
+
+#include <xen/mm.h>
+#include <xen/pfn.h>
+
+#endif /* __EARLY_SLAUNCH__ */
+
+#define TPM_LOC_REG(loc, reg) (0x1000 * (loc) + (reg))
+
+#define TPM_ACCESS_(x) TPM_LOC_REG(x, 0x00)
+#define ACCESS_REQUEST_USE (1 << 1)
+#define ACCESS_ACTIVE_LOCALITY (1 << 5)
+#define TPM_INTF_CAPABILITY_(x) TPM_LOC_REG(x, 0x14)
+#define INTF_VERSION_MASK 0x70000000
+#define TPM_STS_(x) TPM_LOC_REG(x, 0x18)
+#define TPM_FAMILY_MASK 0x0C000000
+#define STS_DATA_AVAIL (1 << 4)
+#define STS_TPM_GO (1 << 5)
+#define STS_COMMAND_READY (1 << 6)
+#define STS_VALID (1 << 7)
+#define TPM_DATA_FIFO_(x) TPM_LOC_REG(x, 0x24)
+
+#define swap16(x) __builtin_bswap16(x)
+#define swap32(x) __builtin_bswap32(x)
+#define memcpy(d, s, n) __builtin_memcpy(d, s, n)
+
+static inline volatile uint32_t tis_read32(unsigned reg)
+{
+ return *(volatile uint32_t *)__va(TPM_TIS_BASE + reg);
+}
+
+static inline volatile uint8_t tis_read8(unsigned reg)
+{
+ return *(volatile uint8_t *)__va(TPM_TIS_BASE + reg);
+}
+
+static inline void tis_write8(unsigned reg, uint8_t val)
+{
+ *(volatile uint8_t *)__va(TPM_TIS_BASE + reg) = val;
+}
+
+static inline void request_locality(unsigned loc)
+{
+ tis_write8(TPM_ACCESS_(loc), ACCESS_REQUEST_USE);
+ /* Check that locality was actually activated. */
+ while ( (tis_read8(TPM_ACCESS_(loc)) & ACCESS_ACTIVE_LOCALITY) == 0 );
+}
+
+static inline void relinquish_locality(unsigned loc)
+{
+ tis_write8(TPM_ACCESS_(loc), ACCESS_ACTIVE_LOCALITY);
+}
+
+static void send_cmd(unsigned loc, uint8_t *buf, unsigned i_size,
+ unsigned *o_size)
+{
+ /*
+ * The "data available" bit is only meaningful when the "valid" bit is set
+ * as well.
+ */
+ const unsigned data_avail = STS_VALID | STS_DATA_AVAIL;
+
+ unsigned i;
+
+ /* Make sure TPM can accept a command. */
+ if ( (tis_read8(TPM_STS_(loc)) & STS_COMMAND_READY) == 0 )
+ {
+ /* Abort current command. */
+ tis_write8(TPM_STS_(loc), STS_COMMAND_READY);
+ /* Wait until TPM is ready for a new one. */
+ while ( (tis_read8(TPM_STS_(loc)) & STS_COMMAND_READY) == 0 );
+ }
+
+ for ( i = 0; i < i_size; i++ )
+ tis_write8(TPM_DATA_FIFO_(loc), buf[i]);
+
+ tis_write8(TPM_STS_(loc), STS_TPM_GO);
+
+ /* Wait for the first byte of response. */
+ while ( (tis_read8(TPM_STS_(loc)) & data_avail) != data_avail);
+
+ for ( i = 0; i < *o_size && tis_read8(TPM_STS_(loc)) & data_avail; i++ )
+ buf[i] = tis_read8(TPM_DATA_FIFO_(loc));
+
+ if ( i < *o_size )
+ *o_size = i;
+
+ tis_write8(TPM_STS_(loc), STS_COMMAND_READY);
+}
+
+static inline bool is_tpm12(void)
+{
+ /*
+ * If one of these conditions is true:
+ * - INTF_CAPABILITY_x.interfaceVersion is 0 (TIS <= 1.21)
+ * - INTF_CAPABILITY_x.interfaceVersion is 2 (TIS == 1.3)
+ * - STS_x.tpmFamily is 0
+ * we're dealing with TPM1.2.
+ */
+ uint32_t intf_version = tis_read32(TPM_INTF_CAPABILITY_(0))
+ & INTF_VERSION_MASK;
+ return (intf_version == 0x00000000 || intf_version == 0x20000000 ||
+ (tis_read32(TPM_STS_(0)) & TPM_FAMILY_MASK) == 0);
+}
+
+/****************************** TPM1.2 specific *******************************/
+#define TPM_ORD_Extend 0x00000014
+#define TPM_ORD_SHA1Start 0x000000A0
+#define TPM_ORD_SHA1Update 0x000000A1
+#define TPM_ORD_SHA1CompleteExtend 0x000000A3
+
+#define TPM_TAG_RQU_COMMAND 0x00C1
+#define TPM_TAG_RSP_COMMAND 0x00C4
+
+/* All fields of following structs are big endian. */
+struct tpm_cmd_hdr {
+ uint16_t tag;
+ uint32_t paramSize;
+ uint32_t ordinal;
+} __packed;
+
+struct tpm_rsp_hdr {
+ uint16_t tag;
+ uint32_t paramSize;
+ uint32_t returnCode;
+} __packed;
+
+struct extend_cmd {
+ struct tpm_cmd_hdr h;
+ uint32_t pcrNum;
+ uint8_t inDigest[SHA1_DIGEST_SIZE];
+} __packed;
+
+struct extend_rsp {
+ struct tpm_rsp_hdr h;
+ uint8_t outDigest[SHA1_DIGEST_SIZE];
+} __packed;
+
+struct sha1_start_cmd {
+ struct tpm_cmd_hdr h;
+} __packed;
+
+struct sha1_start_rsp {
+ struct tpm_rsp_hdr h;
+ uint32_t maxNumBytes;
+} __packed;
+
+struct sha1_update_cmd {
+ struct tpm_cmd_hdr h;
+ uint32_t numBytes; /* Must be a multiple of 64 */
+ uint8_t hashData[];
+} __packed;
+
+struct sha1_update_rsp {
+ struct tpm_rsp_hdr h;
+} __packed;
+
+struct sha1_complete_extend_cmd {
+ struct tpm_cmd_hdr h;
+ uint32_t pcrNum;
+ uint32_t hashDataSize; /* 0-64, inclusive */
+ uint8_t hashData[];
+} __packed;
+
+struct sha1_complete_extend_rsp {
+ struct tpm_rsp_hdr h;
+ uint8_t hashValue[SHA1_DIGEST_SIZE];
+ uint8_t outDigest[SHA1_DIGEST_SIZE];
+} __packed;
+
+struct TPM12_PCREvent {
+ uint32_t PCRIndex;
+ uint32_t Type;
+ uint8_t Digest[SHA1_DIGEST_SIZE];
+ uint32_t Size;
+ uint8_t Data[];
+};
+
+struct txt_ev_log_container_12 {
+ char Signature[20]; /* "TXT Event Container", null-terminated */
+ uint8_t Reserved[12];
+ uint8_t ContainerVerMajor;
+ uint8_t ContainerVerMinor;
+ uint8_t PCREventVerMajor;
+ uint8_t PCREventVerMinor;
+ uint32_t ContainerSize; /* Allocated size */
+ uint32_t PCREventsOffset;
+ uint32_t NextEventOffset;
+ struct TPM12_PCREvent PCREvents[];
+};
+
+#ifdef __EARLY_SLAUNCH__
+/*
+ * TPM1.2 is required to support commands of up to 1101 bytes, vendors rarely
+ * go above that. Limit maximum size of block of data to be hashed to 1024.
+ */
+#define MAX_HASH_BLOCK 1024
+#define CMD_RSP_BUF_SIZE (sizeof(struct sha1_update_cmd) + MAX_HASH_BLOCK)
+
+union cmd_rsp {
+ struct sha1_start_cmd start_c;
+ struct sha1_start_rsp start_r;
+ struct sha1_update_cmd update_c;
+ struct sha1_update_rsp update_r;
+ struct sha1_complete_extend_cmd finish_c;
+ struct sha1_complete_extend_rsp finish_r;
+ uint8_t buf[CMD_RSP_BUF_SIZE];
+};
+
+/* Returns true on success. */
+static bool tpm12_hash_extend(unsigned loc, const uint8_t *buf, unsigned size,
+ unsigned pcr, uint8_t *out_digest)
+{
+ union cmd_rsp cmd_rsp;
+ unsigned max_bytes = MAX_HASH_BLOCK;
+ unsigned o_size = sizeof(cmd_rsp);
+ bool success = false;
+
+ request_locality(loc);
+
+ cmd_rsp.start_c = (struct sha1_start_cmd) {
+ .h.tag = swap16(TPM_TAG_RQU_COMMAND),
+ .h.paramSize = swap32(sizeof(struct sha1_start_cmd)),
+ .h.ordinal = swap32(TPM_ORD_SHA1Start),
+ };
+
+ send_cmd(loc, cmd_rsp.buf, sizeof(struct sha1_start_cmd), &o_size);
+ if ( o_size < sizeof(struct sha1_start_rsp) )
+ goto error;
+
+ if ( max_bytes > swap32(cmd_rsp.start_r.maxNumBytes) )
+ max_bytes = swap32(cmd_rsp.start_r.maxNumBytes);
+
+ while ( size > 64 )
+ {
+ if ( size < max_bytes )
+ max_bytes = size & ~(64 - 1);
+
+ o_size = sizeof(cmd_rsp);
+
+ cmd_rsp.update_c = (struct sha1_update_cmd){
+ .h.tag = swap16(TPM_TAG_RQU_COMMAND),
+ .h.paramSize = swap32(sizeof(struct sha1_update_cmd) + max_bytes),
+ .h.ordinal = swap32(TPM_ORD_SHA1Update),
+ .numBytes = swap32(max_bytes),
+ };
+ memcpy(cmd_rsp.update_c.hashData, buf, max_bytes);
+
+ send_cmd(loc, cmd_rsp.buf, sizeof(struct sha1_update_cmd) + max_bytes,
+ &o_size);
+ if ( o_size < sizeof(struct sha1_update_rsp) )
+ goto error;
+
+ size -= max_bytes;
+ buf += max_bytes;
+ }
+
+ o_size = sizeof(cmd_rsp);
+
+ cmd_rsp.finish_c = (struct sha1_complete_extend_cmd) {
+ .h.tag = swap16(TPM_TAG_RQU_COMMAND),
+ .h.paramSize = swap32(sizeof(struct sha1_complete_extend_cmd) + size),
+ .h.ordinal = swap32(TPM_ORD_SHA1CompleteExtend),
+ .pcrNum = swap32(pcr),
+ .hashDataSize = swap32(size),
+ };
+ memcpy(cmd_rsp.finish_c.hashData, buf, size);
+
+ send_cmd(loc, cmd_rsp.buf, sizeof(struct sha1_complete_extend_cmd) + size,
+ &o_size);
+ if ( o_size < sizeof(struct sha1_complete_extend_rsp) )
+ goto error;
+
+ if ( out_digest != NULL )
+ memcpy(out_digest, cmd_rsp.finish_r.hashValue, SHA1_DIGEST_SIZE);
+
+ success = true;
+
+error:
+ relinquish_locality(loc);
+ return success;
+}
+
+#else
+
+union cmd_rsp {
+ struct extend_cmd extend_c;
+ struct extend_rsp extend_r;
+};
+
+/* Returns true on success. */
+static bool tpm12_hash_extend(unsigned loc, const uint8_t *buf, unsigned size,
+ unsigned pcr, uint8_t *out_digest)
+{
+ union cmd_rsp cmd_rsp;
+ unsigned o_size = sizeof(cmd_rsp);
+
+ sha1_hash(buf, size, out_digest);
+
+ request_locality(loc);
+
+ cmd_rsp.extend_c = (struct extend_cmd) {
+ .h.tag = swap16(TPM_TAG_RQU_COMMAND),
+ .h.paramSize = swap32(sizeof(struct extend_cmd)),
+ .h.ordinal = swap32(TPM_ORD_Extend),
+ .pcrNum = swap32(pcr),
+ };
+
+ memcpy(cmd_rsp.extend_c.inDigest, out_digest, SHA1_DIGEST_SIZE);
+
+ send_cmd(loc, (uint8_t *)&cmd_rsp, sizeof(struct extend_cmd), &o_size);
+
+ relinquish_locality(loc);
+
+ return (o_size >= sizeof(struct extend_rsp));
+}
+
+#endif /* __EARLY_SLAUNCH__ */
+
+static void *create_log_event12(struct txt_ev_log_container_12 *evt_log,
+ uint32_t evt_log_size, uint32_t pcr,
+ uint32_t type, const uint8_t *data,
+ unsigned data_size)
+{
+ struct TPM12_PCREvent *new_entry;
+
+ new_entry = (void *)(((uint8_t *)evt_log) + evt_log->NextEventOffset);
+
+ /*
+ * Check if there is enough space left for the new entry.
+ * Note: it is possible to introduce a gap in the event log if an entry with
+ * big data_size is followed by another entry with smaller data. Maybe we
+ * should cap the event log size in such a case?
+ */
+ if ( evt_log->NextEventOffset + sizeof(struct TPM12_PCREvent) + data_size
+ > evt_log_size )
+ return NULL;
+
+ evt_log->NextEventOffset += sizeof(struct TPM12_PCREvent) + data_size;
+
+ new_entry->PCRIndex = pcr;
+ new_entry->Type = type;
+ new_entry->Size = data_size;
+
+ if ( data && data_size > 0 )
+ memcpy(new_entry->Data, data, data_size);
+
+ return new_entry->Digest;
+}
+
+/************************** end of TPM1.2 specific ****************************/
+
+void tpm_hash_extend(unsigned loc, unsigned pcr, const uint8_t *buf,
+ unsigned size, uint32_t type, const uint8_t *log_data,
+ unsigned log_data_size)
+{
+ void *evt_log_addr;
+ uint32_t evt_log_size;
+
+ find_evt_log(slaunch_get_slrt(), &evt_log_addr, &evt_log_size);
+ evt_log_addr = __va((uintptr_t)evt_log_addr);
+
+ if ( is_tpm12() )
+ {
+ uint8_t sha1_digest[SHA1_DIGEST_SIZE];
+
+ struct txt_ev_log_container_12 *evt_log = evt_log_addr;
+ void *entry_digest = create_log_event12(evt_log, evt_log_size, pcr,
+ type, log_data, log_data_size);
+
+ /* We still need to write computed hash somewhere. */
+ if ( entry_digest == NULL )
+ entry_digest = sha1_digest;
+
+ if ( !tpm12_hash_extend(loc, buf, size, pcr, entry_digest) )
+ {
+#ifndef __EARLY_SLAUNCH__
+ printk(XENLOG_ERR "Extending PCR%u failed\n", pcr);
+#endif
+ }
+ }
+}
+
+#ifdef __EARLY_SLAUNCH__
+void tpm_extend_mbi(uint32_t *mbi, uint32_t slrt_pa)
+{
+ /* Need this to implement slaunch_get_slrt() for early TPM code. */
+ slrt_location = slrt_pa;
+
+ /* MBI starts with uint32_t total_size. */
+ tpm_hash_extend(DRTM_LOC, DRTM_DATA_PCR, (uint8_t *)mbi, *mbi,
+ DLE_EVTYPE_SLAUNCH, NULL, 0);
+}
+#endif
--
2.49.0

Sergii Dmytruk

Apr 22, 2025, 11:07:41 AM
to xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, trenchbo...@googlegroups.com
SHA1 and SHA256 are hard-coded here, but the TPM is queried to confirm that
it supports them. Addition of a TPM2.0 event log will generalize the code
further.

Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/tpm.c | 465 +++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 453 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/tpm.c b/xen/arch/x86/tpm.c
index 8cf836d0df..9d20cff94e 100644
--- a/xen/arch/x86/tpm.c
+++ b/xen/arch/x86/tpm.c
@@ -5,6 +5,7 @@
*/

#include <xen/sha1.h>
+#include <xen/sha256.h>
#include <xen/types.h>
#include <asm/intel_txt.h>
#include <asm/slaunch.h>
@@ -30,6 +31,15 @@ struct slr_table *slaunch_get_slrt(void)
* other part of Xen. Providing implementation of builtin functions in this
* case is necessary if compiler chooses to not use an inline builtin.
*/
+void *memset(void *dest, int c, size_t n)
+{
+ uint8_t *d = dest;
+
+ while ( n-- )
+ *d++ = c;
+
+ return dest;
+}
void *memcpy(void *dest, const void *src, size_t n)
{
const uint8_t *s = src;
@@ -65,6 +75,7 @@ void *memcpy(void *dest, const void *src, size_t n)

#define swap16(x) __builtin_bswap16(x)
#define swap32(x) __builtin_bswap32(x)
+#define memset(s, c, n) __builtin_memset(s, c, n)
#define memcpy(d, s, n) __builtin_memcpy(d, s, n)

static inline volatile uint32_t tis_read32(unsigned reg)
@@ -146,14 +157,15 @@ static inline bool is_tpm12(void)
(tis_read32(TPM_STS_(0)) & TPM_FAMILY_MASK) == 0);
}

-/****************************** TPM1.2 specific *******************************/
-#define TPM_ORD_Extend 0x00000014
-#define TPM_ORD_SHA1Start 0x000000A0
-#define TPM_ORD_SHA1Update 0x000000A1
-#define TPM_ORD_SHA1CompleteExtend 0x000000A3
+/****************************** TPM1.2 & TPM2.0 *******************************/

-#define TPM_TAG_RQU_COMMAND 0x00C1
-#define TPM_TAG_RSP_COMMAND 0x00C4
+/*
+ * TPM1.2 is required to support commands of up to 1101 bytes, vendors rarely
+ * go above that. Limit maximum size of block of data to be hashed to 1024.
+ *
+ * TPM2.0 should support hashing of at least 1024 bytes.
+ */
+#define MAX_HASH_BLOCK 1024

/* All fields of following structs are big endian. */
struct tpm_cmd_hdr {
@@ -168,6 +180,17 @@ struct tpm_rsp_hdr {
uint32_t returnCode;
} __packed;

+/****************************** TPM1.2 specific *******************************/
+
+#define TPM_ORD_Extend 0x00000014
+#define TPM_ORD_SHA1Start 0x000000A0
+#define TPM_ORD_SHA1Update 0x000000A1
+#define TPM_ORD_SHA1CompleteExtend 0x000000A3
+
+#define TPM_TAG_RQU_COMMAND 0x00C1
+#define TPM_TAG_RSP_COMMAND 0x00C4
+
+/* All fields of following structs are big endian. */
struct extend_cmd {
struct tpm_cmd_hdr h;
uint32_t pcrNum;
@@ -233,11 +256,6 @@ struct txt_ev_log_container_12 {
};

#ifdef __EARLY_SLAUNCH__
-/*
- * TPM1.2 is required to support commands of up to 1101 bytes, vendors rarely
- * go above that. Limit maximum size of block of data to be hashed to 1024.
- */
-#define MAX_HASH_BLOCK 1024
#define CMD_RSP_BUF_SIZE (sizeof(struct sha1_update_cmd) + MAX_HASH_BLOCK)

union cmd_rsp {
@@ -393,6 +411,400 @@ static void *create_log_event12(struct txt_ev_log_container_12 *evt_log,

/************************** end of TPM1.2 specific ****************************/

+/****************************** TPM2.0 specific *******************************/
+
+/*
+ * These constants are for TPM2.0 but don't have a distinct prefix to match
+ * names in the specification.
+ */
+
+#define TPM_HT_PCR 0x00
+
+#define TPM_RH_NULL 0x40000007
+#define TPM_RS_PW 0x40000009
+
+#define HR_SHIFT 24
+#define HR_PCR (TPM_HT_PCR << HR_SHIFT)
+
+#define TPM_ST_NO_SESSIONS 0x8001
+#define TPM_ST_SESSIONS 0x8002
+
+#define TPM_ALG_SHA1 0x0004
+#define TPM_ALG_SHA256 0x000b
+#define TPM_ALG_NULL 0x0010
+
+#define TPM2_PCR_Extend 0x00000182
+#define TPM2_PCR_HashSequenceStart 0x00000186
+#define TPM2_PCR_SequenceUpdate 0x0000015C
+#define TPM2_PCR_EventSequenceComplete 0x00000185
+
+#define PUT_BYTES(p, bytes, size) do { \
+ memcpy((p), (bytes), (size)); \
+ (p) += (size); \
+ } while ( 0 )
+
+#define PUT_16BIT(p, data) do { \
+ *(uint16_t *)(p) = swap16(data); \
+ (p) += 2; \
+ } while ( 0 )
+
+/* All fields of following structs are big endian. */
+struct tpm2_session_header {
+ uint32_t handle;
+ uint16_t nonceSize;
+ uint8_t nonce[0];
+ uint8_t attrs;
+ uint16_t hmacSize;
+ uint8_t hmac[0];
+} __packed;
+
+struct tpm2_extend_cmd {
+ struct tpm_cmd_hdr h;
+ uint32_t pcrHandle;
+ uint32_t sessionHdrSize;
+ struct tpm2_session_header pcrSession;
+ uint32_t hashCount;
+ uint8_t hashes[0];
+} __packed;
+
+struct tpm2_extend_rsp {
+ struct tpm_rsp_hdr h;
+} __packed;
+
+struct tpm2_sequence_start_cmd {
+ struct tpm_cmd_hdr h;
+ uint16_t hmacSize;
+ uint8_t hmac[0];
+ uint16_t hashAlg;
+} __packed;
+
+struct tpm2_sequence_start_rsp {
+ struct tpm_rsp_hdr h;
+ uint32_t sequenceHandle;
+} __packed;
+
+struct tpm2_sequence_update_cmd {
+ struct tpm_cmd_hdr h;
+ uint32_t sequenceHandle;
+ uint32_t sessionHdrSize;
+ struct tpm2_session_header session;
+ uint16_t dataSize;
+ uint8_t data[0];
+} __packed;
+
+struct tpm2_sequence_update_rsp {
+ struct tpm_rsp_hdr h;
+} __packed;
+
+struct tpm2_sequence_complete_cmd {
+ struct tpm_cmd_hdr h;
+ uint32_t pcrHandle;
+ uint32_t sequenceHandle;
+ uint32_t sessionHdrSize;
+ struct tpm2_session_header pcrSession;
+ struct tpm2_session_header sequenceSession;
+ uint16_t dataSize;
+ uint8_t data[0];
+} __packed;
+
+struct tpm2_sequence_complete_rsp {
+ struct tpm_rsp_hdr h;
+ uint32_t paramSize;
+ uint32_t hashCount;
+ uint8_t hashes[0];
+ /*
+ * Each hash is represented as:
+ * struct {
+ * uint16_t hashAlg;
+ * uint8_t hash[size of hashAlg];
+ * };
+ */
+} __packed;
+
+/*
+ * These two structures are for convenience; they don't correspond to anything
+ * in any spec.
+ */
+struct tpm2_log_hash {
+ uint16_t alg; /* TPM_ALG_* */
+ uint16_t size;
+ uint8_t *data; /* Non-owning reference to a buffer inside log entry. */
+};
+/* Should be more than enough for now and for a while in the future. */
+#define MAX_HASH_COUNT 8
+struct tpm2_log_hashes {
+ uint32_t count;
+ struct tpm2_log_hash hashes[MAX_HASH_COUNT];
+};
+
+#ifdef __EARLY_SLAUNCH__
+
+union tpm2_cmd_rsp {
+ uint8_t b[sizeof(struct tpm2_sequence_update_cmd) + MAX_HASH_BLOCK];
+ struct tpm_cmd_hdr c;
+ struct tpm_rsp_hdr r;
+ struct tpm2_sequence_start_cmd start_c;
+ struct tpm2_sequence_start_rsp start_r;
+ struct tpm2_sequence_update_cmd update_c;
+ struct tpm2_sequence_update_rsp update_r;
+ struct tpm2_sequence_complete_cmd finish_c;
+ struct tpm2_sequence_complete_rsp finish_r;
+};
+
+static uint32_t tpm2_hash_extend(unsigned loc, const uint8_t *buf,
+ unsigned size, unsigned pcr,
+ struct tpm2_log_hashes *log_hashes)
+{
+ uint32_t seq_handle;
+ unsigned max_bytes = MAX_HASH_BLOCK;
+
+ union tpm2_cmd_rsp cmd_rsp;
+ unsigned o_size;
+ unsigned i;
+ uint8_t *p;
+ uint32_t rc;
+
+ cmd_rsp.start_c = (struct tpm2_sequence_start_cmd) {
+ .h.tag = swap16(TPM_ST_NO_SESSIONS),
+ .h.paramSize = swap32(sizeof(cmd_rsp.start_c)),
+ .h.ordinal = swap32(TPM2_PCR_HashSequenceStart),
+ .hashAlg = swap16(TPM_ALG_NULL), /* Compute all supported hashes. */
+ };
+
+ request_locality(loc);
+
+ o_size = sizeof(cmd_rsp);
+ send_cmd(loc, cmd_rsp.b, swap32(cmd_rsp.c.paramSize), &o_size);
+
+ if ( cmd_rsp.r.tag == swap16(TPM_ST_NO_SESSIONS) &&
+ cmd_rsp.r.paramSize == swap32(10) )
+ {
+ rc = swap32(cmd_rsp.r.returnCode);
+ if ( rc != 0 )
+ goto error;
+ }
+
+ seq_handle = swap32(cmd_rsp.start_r.sequenceHandle);
+
+ while ( size > 64 )
+ {
+ if ( size < max_bytes )
+ max_bytes = size & ~(64 - 1);
+
+ cmd_rsp.update_c = (struct tpm2_sequence_update_cmd) {
+ .h.tag = swap16(TPM_ST_SESSIONS),
+ .h.paramSize = swap32(sizeof(cmd_rsp.update_c) + max_bytes),
+ .h.ordinal = swap32(TPM2_PCR_SequenceUpdate),
+ .sequenceHandle = swap32(seq_handle),
+ .sessionHdrSize = swap32(sizeof(struct tpm2_session_header)),
+ .session.handle = swap32(TPM_RS_PW),
+ .dataSize = swap16(max_bytes),
+ };
+
+ memcpy(cmd_rsp.update_c.data, buf, max_bytes);
+
+ o_size = sizeof(cmd_rsp);
+ send_cmd(loc, cmd_rsp.b, swap32(cmd_rsp.c.paramSize), &o_size);
+
+ if ( cmd_rsp.r.tag == swap16(TPM_ST_NO_SESSIONS) &&
+ cmd_rsp.r.paramSize == swap32(10) )
+ {
+ rc = swap32(cmd_rsp.r.returnCode);
+ if ( rc != 0 )
+ goto error;
+ }
+
+ size -= max_bytes;
+ buf += max_bytes;
+ }
+
+ cmd_rsp.finish_c = (struct tpm2_sequence_complete_cmd) {
+ .h.tag = swap16(TPM_ST_SESSIONS),
+ .h.paramSize = swap32(sizeof(cmd_rsp.finish_c) + size),
+ .h.ordinal = swap32(TPM2_PCR_EventSequenceComplete),
+ .pcrHandle = swap32(HR_PCR + pcr),
+ .sequenceHandle = swap32(seq_handle),
+ .sessionHdrSize = swap32(sizeof(struct tpm2_session_header)*2),
+ .pcrSession.handle = swap32(TPM_RS_PW),
+ .sequenceSession.handle = swap32(TPM_RS_PW),
+ .dataSize = swap16(size),
+ };
+
+ memcpy(cmd_rsp.finish_c.data, buf, size);
+
+ o_size = sizeof(cmd_rsp);
+ send_cmd(loc, cmd_rsp.b, swap32(cmd_rsp.c.paramSize), &o_size);
+
+ if ( cmd_rsp.r.tag == swap16(TPM_ST_NO_SESSIONS) &&
+ cmd_rsp.r.paramSize == swap32(10) )
+ {
+ rc = swap32(cmd_rsp.r.returnCode);
+ if ( rc != 0 )
+ goto error;
+ }
+
+ p = cmd_rsp.finish_r.hashes;
+ for ( i = 0; i < swap32(cmd_rsp.finish_r.hashCount); ++i )
+ {
+ unsigned j;
+ uint16_t hash_type;
+
+ hash_type = swap16(*(uint16_t *)p);
+ p += sizeof(uint16_t);
+
+ for ( j = 0; j < log_hashes->count; ++j )
+ {
+ struct tpm2_log_hash *hash = &log_hashes->hashes[j];
+ if ( hash->alg == hash_type )
+ {
+ memcpy(hash->data, p, hash->size);
+ p += hash->size;
+ break;
+ }
+ }
+
+ if ( j == log_hashes->count )
+ /* Can't continue parsing without knowing hash size. */
+ break;
+ }
+
+ rc = 0;
+
+error:
+ relinquish_locality(loc);
+ return rc;
+}
+
+#else
+
+union tpm2_cmd_rsp {
+ /* Enough space for multiple hashes. */
+ uint8_t b[sizeof(struct tpm2_extend_cmd) + 1024];
+ struct tpm_cmd_hdr c;
+ struct tpm_rsp_hdr r;
+ struct tpm2_extend_cmd extend_c;
+ struct tpm2_extend_rsp extend_r;
+};
+
+static uint32_t tpm20_pcr_extend(unsigned loc, uint32_t pcr_handle,
+ const struct tpm2_log_hashes *log_hashes)
+{
+ union tpm2_cmd_rsp cmd_rsp;
+ unsigned o_size;
+ unsigned i;
+ uint8_t *p;
+
+ cmd_rsp.extend_c = (struct tpm2_extend_cmd) {
+ .h.tag = swap16(TPM_ST_SESSIONS),
+ .h.ordinal = swap32(TPM2_PCR_Extend),
+ .pcrHandle = swap32(pcr_handle),
+ .sessionHdrSize = swap32(sizeof(struct tpm2_session_header)),
+ .pcrSession.handle = swap32(TPM_RS_PW),
+ .hashCount = swap32(log_hashes->count),
+ };
+
+ p = cmd_rsp.extend_c.hashes;
+ for ( i = 0; i < log_hashes->count; ++i )
+ {
+ const struct tpm2_log_hash *hash = &log_hashes->hashes[i];
+
+ if ( p + sizeof(uint16_t) + hash->size > &cmd_rsp.b[sizeof(cmd_rsp)] )
+ {
+ printk(XENLOG_ERR "Hit TPM message size implementation limit: %ld\n",
+ sizeof(cmd_rsp));
+ return -1;
+ }
+
+ *(uint16_t *)p = swap16(hash->alg);
+ p += sizeof(uint16_t);
+
+ memcpy(p, hash->data, hash->size);
+ p += hash->size;
+ }
+
+ /* Fill in command size (size of the whole buffer). */
+ cmd_rsp.extend_c.h.paramSize = swap32(sizeof(cmd_rsp.extend_c) +
+ (p - cmd_rsp.extend_c.hashes)),
+
+ o_size = sizeof(cmd_rsp);
+ send_cmd(loc, cmd_rsp.b, swap32(cmd_rsp.c.paramSize), &o_size);
+
+ return swap32(cmd_rsp.r.returnCode);
+}
+
+static bool tpm_supports_hash(unsigned loc, const struct tpm2_log_hash *hash)
+{
+ uint32_t rc;
+ struct tpm2_log_hashes hashes = {
+ .count = 1,
+ .hashes[0] = *hash,
+ };
+
+ /*
+ * This is a valid way of checking hash support, using it to not implement
+ * TPM2_GetCapability().
+ */
+ rc = tpm20_pcr_extend(loc, /*pcr_handle=*/TPM_RH_NULL, &hashes);
+
+ return rc == 0;
+}
+
+static uint32_t tpm2_hash_extend(unsigned loc, const uint8_t *buf,
+ unsigned size, unsigned pcr,
+ const struct tpm2_log_hashes *log_hashes)
+{
+ uint32_t rc;
+ unsigned i;
+ struct tpm2_log_hashes supported_hashes = {0};
+
+ request_locality(loc);
+
+ for ( i = 0; i < log_hashes->count; ++i )
+ {
+ const struct tpm2_log_hash *hash = &log_hashes->hashes[i];
+ if ( !tpm_supports_hash(loc, hash) )
+ {
+ printk(XENLOG_WARNING "Skipped hash unsupported by TPM: %d\n",
+ hash->alg);
+ continue;
+ }
+
+ if ( hash->alg == TPM_ALG_SHA1 )
+ {
+ sha1_hash(buf, size, hash->data);
+ }
+ else if ( hash->alg == TPM_ALG_SHA256 )
+ {
+ sha256_hash(buf, size, hash->data);
+ }
+ else
+ {
+ /* This is called "OneDigest" in TXT Software Development Guide. */
+ memset(hash->data, 0, size);
+ hash->data[0] = 1;
+ }
+
+ if ( supported_hashes.count == MAX_HASH_COUNT )
+ {
+ printk(XENLOG_ERR "Hit hash count implementation limit: %d\n",
+ MAX_HASH_COUNT);
+ return -1;
+ }
+
+ supported_hashes.hashes[supported_hashes.count] = *hash;
+ ++supported_hashes.count;
+ }
+
+ rc = tpm20_pcr_extend(loc, HR_PCR + pcr, &supported_hashes);
+ relinquish_locality(loc);
+
+ return rc;
+}
+
+#endif /* __EARLY_SLAUNCH__ */
+
+/************************** end of TPM2.0 specific ****************************/
+
void tpm_hash_extend(unsigned loc, unsigned pcr, const uint8_t *buf,
unsigned size, uint32_t type, const uint8_t *log_data,
unsigned log_data_size)
@@ -419,6 +831,35 @@ void tpm_hash_extend(unsigned loc, unsigned pcr, const uint8_t *buf,
{
#ifndef __EARLY_SLAUNCH__
printk(XENLOG_ERR "Extending PCR%u failed\n", pcr);
+#endif
+ }
+ } else {
+ uint8_t sha1_digest[SHA1_DIGEST_SIZE];
+ uint8_t sha256_digest[SHA256_DIGEST_SIZE];
+ uint32_t rc;
+
+ struct tpm2_log_hashes log_hashes = {
+ .count = 2,
+ .hashes = {
+ {
+ .alg = TPM_ALG_SHA1,
+ .size = SHA1_DIGEST_SIZE,
+ .data = sha1_digest,
+ },
+ {
+ .alg = TPM_ALG_SHA256,
+ .size = SHA256_DIGEST_SIZE,
+ .data = sha256_digest,
+ },
+ },
+ };
+
+ rc = tpm2_hash_extend(loc, buf, size, pcr, &log_hashes);
+ if ( rc != 0 )
+ {
+#ifndef __EARLY_SLAUNCH__
+ printk(XENLOG_ERR "Extending PCR%u failed with TPM error: 0x%08x\n",
+ pcr, rc);
#endif
}
}
--
2.49.0

Sergii Dmytruk

Apr 22, 2025, 11:07:42 AM
to xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, trenchbo...@googlegroups.com
From: Michał Żygowski <michal....@3mdeb.com>

Check whether IA32_FEATURE_CONTROL has the proper bits enabled to run
VMX inside SMX when Secure Launch (slaunch) is active.

Signed-off-by: Michał Żygowski <michal....@3mdeb.com>
---
xen/arch/x86/hvm/vmx/vmcs.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index a44475ae15..ef38903775 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -30,6 +30,7 @@
#include <asm/msr.h>
#include <asm/processor.h>
#include <asm/shadow.h>
+#include <asm/slaunch.h>
#include <asm/spec_ctrl.h>
#include <asm/tboot.h>
#include <asm/xstate.h>
@@ -724,7 +725,7 @@ static int _vmx_cpu_up(bool bsp)
bios_locked = !!(eax & IA32_FEATURE_CONTROL_LOCK);
if ( bios_locked )
{
- if ( !(eax & (tboot_in_measured_env()
+ if ( !(eax & (tboot_in_measured_env() || slaunch_active
? IA32_FEATURE_CONTROL_ENABLE_VMXON_INSIDE_SMX
: IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX)) )
{
--
2.49.0

Sergii Dmytruk

Apr 22, 2025, 11:07:46 AM
to xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, trenchbo...@googlegroups.com
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/include/asm/intel_txt.h | 33 ++++++
xen/arch/x86/tpm.c | 169 ++++++++++++++++++++++-----
2 files changed, 175 insertions(+), 27 deletions(-)

diff --git a/xen/arch/x86/include/asm/intel_txt.h b/xen/arch/x86/include/asm/intel_txt.h
index 9083260cf9..0a36ef66d1 100644
--- a/xen/arch/x86/include/asm/intel_txt.h
+++ b/xen/arch/x86/include/asm/intel_txt.h
@@ -199,6 +199,39 @@ struct txt_sinit_mle_data {
/* Ext Data Elements */
} __packed;

+/* Types of extended data. */
+#define TXT_HEAP_EXTDATA_TYPE_END 0
+#define TXT_HEAP_EXTDATA_TYPE_BIOS_SPEC_VER 1
+#define TXT_HEAP_EXTDATA_TYPE_ACM 2
+#define TXT_HEAP_EXTDATA_TYPE_STM 3
+#define TXT_HEAP_EXTDATA_TYPE_CUSTOM 4
+#define TXT_HEAP_EXTDATA_TYPE_MADT 6
+#define TXT_HEAP_EXTDATA_TYPE_EVENT_LOG_POINTER2_1 8
+#define TXT_HEAP_EXTDATA_TYPE_MCFG 9
+#define TXT_HEAP_EXTDATA_TYPE_TPR_REQ 13
+#define TXT_HEAP_EXTDATA_TYPE_DTPR 14
+#define TXT_HEAP_EXTDATA_TYPE_CEDT 15
+
+/*
+ * Self-describing data structure that is used for extensions to TXT heap
+ * tables.
+ */
+struct txt_ext_data_element {
+ uint32_t type; /* One of TXT_HEAP_EXTDATA_TYPE_*. */
+ uint32_t size;
+ uint8_t data[0]; /* size bytes. */
+} __packed;
+
+/*
+ * Extended data describing TPM 2.0 log.
+ */
+struct heap_event_log_pointer_element2_1 {
+ uint64_t physical_address;
+ uint32_t allocated_event_container_size;
+ uint32_t first_record_offset;
+ uint32_t next_record_offset;
+} __packed;
+
/*
* Functions to extract data from the Intel TXT Heap Memory. The layout
* of the heap is as follows:
diff --git a/xen/arch/x86/tpm.c b/xen/arch/x86/tpm.c
index 9d20cff94e..c51bd9b496 100644
--- a/xen/arch/x86/tpm.c
+++ b/xen/arch/x86/tpm.c
@@ -537,6 +537,44 @@ struct tpm2_log_hashes {
struct tpm2_log_hash hashes[MAX_HASH_COUNT];
};

+struct tpm2_pcr_event_header {
+ uint32_t pcrIndex;
+ uint32_t eventType;
+ uint32_t digestCount;
+ uint8_t digests[0];
+ /*
+ * Each hash is represented as:
+ * struct {
+ * uint16_t hashAlg;
+ * uint8_t hash[size of hashAlg];
+ * };
+ */
+ /* uint32_t eventSize; */
+ /* uint8_t event[0]; */
+} __packed;
+
+struct tpm2_digest_sizes {
+ uint16_t algId;
+ uint16_t digestSize;
+} __packed;
+
+struct tpm2_spec_id_event {
+ uint32_t pcrIndex;
+ uint32_t eventType;
+ uint8_t digest[20];
+ uint32_t eventSize;
+ uint8_t signature[16];
+ uint32_t platformClass;
+ uint8_t specVersionMinor;
+ uint8_t specVersionMajor;
+ uint8_t specErrata;
+ uint8_t uintnSize;
+ uint32_t digestCount;
+ struct tpm2_digest_sizes digestSizes[0]; /* variable number of members */
+ /* uint8_t vendorInfoSize; */
+ /* uint8_t vendorInfo[vendorInfoSize]; */
+} __packed;
+
#ifdef __EARLY_SLAUNCH__

union tpm2_cmd_rsp {
@@ -770,19 +808,11 @@ static uint32_t tpm2_hash_extend(unsigned loc, const uint8_t *buf,
}

if ( hash->alg == TPM_ALG_SHA1 )
- {
sha1_hash(buf, size, hash->data);
- }
else if ( hash->alg == TPM_ALG_SHA256 )
- {
sha256_hash(buf, size, hash->data);
- }
else
- {
- /* This is called "OneDigest" in TXT Software Development Guide. */
- memset(hash->data, 0, size);
- hash->data[0] = 1;
- }
+ /* create_log_event20() took care of initializing the digest. */;

if ( supported_hashes.count == MAX_HASH_COUNT )
{
@@ -803,6 +833,102 @@ static uint32_t tpm2_hash_extend(unsigned loc, const uint8_t *buf,

#endif /* __EARLY_SLAUNCH__ */

+static struct heap_event_log_pointer_element2_1 *find_evt_log_ext_data(void)
+{
+ struct txt_os_sinit_data *os_sinit;
+ struct txt_ext_data_element *ext_data;
+
+ os_sinit = txt_os_sinit_data_start(__va(read_txt_reg(TXTCR_HEAP_BASE)));
+ ext_data = (void *)((uint8_t *)os_sinit + sizeof(*os_sinit));
+
+ /*
+ * Find TXT_HEAP_EXTDATA_TYPE_EVENT_LOG_POINTER2_1 which is necessary to
+ * know where to put the next entry.
+ */
+ while ( ext_data->type != TXT_HEAP_EXTDATA_TYPE_END )
+ {
+ if ( ext_data->type == TXT_HEAP_EXTDATA_TYPE_EVENT_LOG_POINTER2_1 )
+ break;
+ ext_data = (void *)&ext_data->data[ext_data->size];
+ }
+
+ if ( ext_data->type == TXT_HEAP_EXTDATA_TYPE_END )
+ return NULL;
+
+ return (void *)&ext_data->data[0];
+}
+
+static struct tpm2_log_hashes
+create_log_event20(struct tpm2_spec_id_event *evt_log, uint32_t evt_log_size,
+ uint32_t pcr, uint32_t type, const uint8_t *data,
+ unsigned data_size)
+{
+ struct tpm2_log_hashes log_hashes = {0};
+
+ struct heap_event_log_pointer_element2_1 *log_ext_data;
+ struct tpm2_pcr_event_header *new_entry;
+ uint32_t entry_size;
+ unsigned i;
+ uint8_t *p;
+
+ log_ext_data = find_evt_log_ext_data();
+ if ( log_ext_data == NULL )
+ return log_hashes;
+
+ entry_size = sizeof(*new_entry);
+ for ( i = 0; i < evt_log->digestCount; ++i )
+ {
+ entry_size += sizeof(uint16_t); /* hash type */
+ entry_size += evt_log->digestSizes[i].digestSize;
+ }
+ entry_size += sizeof(uint32_t); /* data size field */
+ entry_size += data_size;
+
+ /*
+ * Check if there is enough space left for new entry.
+ * Note: it is possible to introduce a gap in event log if entry with big
+ * data_size is followed by another entry with smaller data. Maybe we should
+ * cap the event log size in such case?
+ */
+ if ( log_ext_data->next_record_offset + entry_size > evt_log_size )
+ return log_hashes;
+
+ new_entry = (void *)((uint8_t *)evt_log + log_ext_data->next_record_offset);
+ log_ext_data->next_record_offset += entry_size;
+
+ new_entry->pcrIndex = pcr;
+ new_entry->eventType = type;
+ new_entry->digestCount = evt_log->digestCount;
+
+ p = &new_entry->digests[0];
+ for ( i = 0; i < evt_log->digestCount; ++i )
+ {
+ uint16_t alg = evt_log->digestSizes[i].algId;
+ uint16_t size = evt_log->digestSizes[i].digestSize;
+
+ *(uint16_t *)p = alg;
+ p += sizeof(uint16_t);
+
+ log_hashes.hashes[i].alg = alg;
+ log_hashes.hashes[i].size = size;
+ log_hashes.hashes[i].data = p;
+ p += size;
+
+ /* This is called "OneDigest" in TXT Software Development Guide. */
+ memset(log_hashes.hashes[i].data, 0, size);
+ log_hashes.hashes[i].data[0] = 1;
+ }
+ log_hashes.count = evt_log->digestCount;
+
+ *(uint32_t *)p = data_size;
+ p += sizeof(uint32_t);
+
+ if ( data && data_size > 0 )
+ memcpy(p, data, data_size);
+
+ return log_hashes;
+}
+
/************************** end of TPM2.0 specific ****************************/

void tpm_hash_extend(unsigned loc, unsigned pcr, const uint8_t *buf,
@@ -833,26 +959,15 @@ void tpm_hash_extend(unsigned loc, unsigned pcr, const uint8_t *buf,
printk(XENLOG_ERR "Extending PCR%u failed\n", pcr);
#endif
}
- } else {
- uint8_t sha1_digest[SHA1_DIGEST_SIZE];
- uint8_t sha256_digest[SHA256_DIGEST_SIZE];
+ }
+ else
+ {
uint32_t rc;

- struct tpm2_log_hashes log_hashes = {
- .count = 2,
- .hashes = {
- {
- .alg = TPM_ALG_SHA1,
- .size = SHA1_DIGEST_SIZE,
- .data = sha1_digest,
- },
- {
- .alg = TPM_ALG_SHA256,
- .size = SHA256_DIGEST_SIZE,
- .data = sha256_digest,
- },
- },
- };
+ struct tpm2_spec_id_event *evt_log = evt_log_addr;
+ struct tpm2_log_hashes log_hashes =
+ create_log_event20(evt_log, evt_log_size, pcr, type, log_data,
+ log_data_size);

rc = tpm2_hash_extend(loc, buf, size, pcr, &log_hashes);
if ( rc != 0 )
--
2.49.0

Sergii Dmytruk

Apr 22, 2025, 11:07:48 AM
to xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, trenchbo...@googlegroups.com
From: Krystian Hebel <krystia...@3mdeb.com>

This is the first step of making parallel AP bring-up possible. It
should be enough for pre-C code.

Parallel AP bring-up is necessary because TXT by design releases all APs
at once. In addition to that, it reduces the number of IPIs (and, more
importantly, the delays between them) required to start all logical
processors. This results in a significant reduction of boot time, even
when DRTM is not used, with the performance gain growing with the number
of logical CPUs.

Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/boot/head.S | 1 +
xen/arch/x86/boot/trampoline.S | 21 +++++++++++++++++++++
xen/arch/x86/boot/x86_64.S | 28 +++++++++++++++++++++++++++-
xen/arch/x86/include/asm/apicdef.h | 4 ++++
xen/arch/x86/include/asm/msr-index.h | 3 +++
xen/arch/x86/setup.c | 7 +++++++
6 files changed, 63 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
index 0b7903070a..419bf58d5c 100644
--- a/xen/arch/x86/boot/head.S
+++ b/xen/arch/x86/boot/head.S
@@ -8,6 +8,7 @@
#include <asm/page.h>
#include <asm/processor.h>
#include <asm/msr-index.h>
+#include <asm/apicdef.h>
#include <asm/cpufeature.h>
#include <asm/trampoline.h>

diff --git a/xen/arch/x86/boot/trampoline.S b/xen/arch/x86/boot/trampoline.S
index a92e399fbe..ed593acc46 100644
--- a/xen/arch/x86/boot/trampoline.S
+++ b/xen/arch/x86/boot/trampoline.S
@@ -71,6 +71,27 @@ trampoline_protmode_entry:
mov $X86_CR4_PAE,%ecx
mov %ecx,%cr4

+ /*
+ * Get APIC ID while we're in non-paged mode. Start by checking if
+ * x2APIC is enabled.
+ */
+ mov $MSR_APIC_BASE, %ecx
+ rdmsr
+ test $APIC_BASE_EXTD, %eax
+ jnz .Lx2apic
+
+ /* Not x2APIC, read from MMIO */
+ and $APIC_BASE_ADDR_MASK, %eax
+ mov APIC_ID(%eax), %esp
+ shr $24, %esp
+ jmp 1f
+
+.Lx2apic:
+ mov $(MSR_X2APIC_FIRST + (APIC_ID >> MSR_X2APIC_SHIFT)), %ecx
+ rdmsr
+ mov %eax, %esp
+1:
+
/* Load pagetable base register. */
mov $sym_offs(idle_pg_table),%eax
add bootsym_rel(trampoline_xen_phys_start,4,%eax)
diff --git a/xen/arch/x86/boot/x86_64.S b/xen/arch/x86/boot/x86_64.S
index 08ae97e261..ac33576d8f 100644
--- a/xen/arch/x86/boot/x86_64.S
+++ b/xen/arch/x86/boot/x86_64.S
@@ -15,7 +15,33 @@ ENTRY(__high_start)
mov $XEN_MINIMAL_CR4,%rcx
mov %rcx,%cr4

- mov stack_start(%rip),%rsp
+ test %ebx,%ebx
+ cmovz stack_start(%rip), %rsp
+ jz .L_stack_set
+
+ /* APs only: get stack base from APIC ID saved in %esp. */
+ mov $-1, %rax
+ lea x86_cpu_to_apicid(%rip), %rcx
+1:
+ add $1, %rax
+ cmp $NR_CPUS, %eax
+ jb 2f
+ hlt
+2:
+ cmp %esp, (%rcx, %rax, 4)
+ jne 1b
+
+ /* %eax is now Xen CPU index. */
+ lea stack_base(%rip), %rcx
+ mov (%rcx, %rax, 8), %rsp
+
+ test %rsp,%rsp
+ jnz 1f
+ hlt
+1:
+ add $(STACK_SIZE - CPUINFO_sizeof), %rsp
+
+.L_stack_set:

/* Reset EFLAGS (subsumes CLI and CLD). */
pushq $0
diff --git a/xen/arch/x86/include/asm/apicdef.h b/xen/arch/x86/include/asm/apicdef.h
index 63dab01dde..e093a2aa3c 100644
--- a/xen/arch/x86/include/asm/apicdef.h
+++ b/xen/arch/x86/include/asm/apicdef.h
@@ -121,6 +121,10 @@

#define MAX_IO_APICS 128

+#ifndef __ASSEMBLY__
+
extern bool x2apic_enabled;

+#endif /* !__ASSEMBLY__ */
+
#endif
diff --git a/xen/arch/x86/include/asm/msr-index.h b/xen/arch/x86/include/asm/msr-index.h
index 22d9e76e55..794cf44abe 100644
--- a/xen/arch/x86/include/asm/msr-index.h
+++ b/xen/arch/x86/include/asm/msr-index.h
@@ -169,6 +169,9 @@
#define MSR_X2APIC_FIRST 0x00000800
#define MSR_X2APIC_LAST 0x000008ff

+/* MSR offset is obtained by shifting the MMIO offset right by this many bits. */
+#define MSR_X2APIC_SHIFT 4
+
#define MSR_X2APIC_TPR 0x00000808
#define MSR_X2APIC_PPR 0x0000080a
#define MSR_X2APIC_EOI 0x0000080b
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 403d976449..c6ebdc3c6b 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -2068,6 +2068,7 @@ void asmlinkage __init noreturn __start_xen(void)
*/
if ( !pv_shim )
{
+ /* Separate loop to make parallel AP bringup possible. */
for_each_present_cpu ( i )
{
/* Set up cpu_to_node[]. */
@@ -2075,6 +2076,12 @@ void asmlinkage __init noreturn __start_xen(void)
/* Set up node_to_cpumask based on cpu_to_node[]. */
numa_add_cpu(i);

+ if ( stack_base[i] == NULL )
+ stack_base[i] = cpu_alloc_stack(i);
+ }
+
+ for_each_present_cpu ( i )
+ {
if ( (park_offline_cpus || num_online_cpus() < max_cpus) &&
!cpu_online(i) )
{
--
2.49.0

Sergii Dmytruk

Apr 22, 2025, 11:07:54 AM
to xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, trenchbo...@googlegroups.com
Go through entries in the DRTM policy of the SLRT, hashing the data they
describe and extending the results into the corresponding PCRs.

Addresses are zeroed when measuring platform-specific data to prevent
measurements from changing when the only thing that has changed is an
address. Addresses can vary because the bootloader, the firmware or the
user did something differently, or simply because GRUB grew in size
after more modules were included and ended up offsetting newly allocated
memory.

Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/include/asm/slaunch.h | 14 ++
xen/arch/x86/setup.c | 15 ++
xen/arch/x86/slaunch.c | 213 +++++++++++++++++++++++++++++
3 files changed, 242 insertions(+)

diff --git a/xen/arch/x86/include/asm/slaunch.h b/xen/arch/x86/include/asm/slaunch.h
index b9b50f20c6..5cfd9e95af 100644
--- a/xen/arch/x86/include/asm/slaunch.h
+++ b/xen/arch/x86/include/asm/slaunch.h
@@ -24,6 +24,8 @@
#define DLE_EVTYPE_SLAUNCH_START (DLE_EVTYPE_BASE + 0x103)
#define DLE_EVTYPE_SLAUNCH_END (DLE_EVTYPE_BASE + 0x104)

+struct boot_info;
+
extern bool slaunch_active;

/*
@@ -62,6 +64,18 @@ void slaunch_map_mem_regions(void);
/* Marks regions of memory as used to avoid their corruption. */
void slaunch_reserve_mem_regions(void);

+/* Measures essential parts of SLR table before making use of them. */
+void slaunch_measure_slrt(void);
+
+/*
+ * Takes measurements of DRTM policy entries except for MBI and SLRT which
+ * should have been measured by the time this is called. Also performs sanity
+ * checks of the policy and panics on failure. In particular, the function
+ * verifies that the DRTM policy is consistent with the modules obtained from
+ * MultibootInfo (MBI) and written to struct boot_info in setup.c.
+ */
+void slaunch_process_drtm_policy(const struct boot_info *bi);
+
/*
* This helper function is used to map memory using L2 page tables by aligning
* mapped regions to 2MB. This way page allocator (which at this point isn't
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index c6ebdc3c6b..b62e23b29e 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1397,6 +1397,13 @@ void asmlinkage __init noreturn __start_xen(void)
if ( slaunch_active )
{
slaunch_map_mem_regions();
+
+ /*
+ * SLRT needs to be measured here because it is used by init_e820(); the
+ * rest is measured slightly below by slaunch_process_drtm_policy().
+ */
+ slaunch_measure_slrt();
+
slaunch_reserve_mem_regions();
}

@@ -1418,6 +1425,14 @@ void asmlinkage __init noreturn __start_xen(void)
/* Create a temporary copy of the E820 map. */
memcpy(&boot_e820, &e820, sizeof(e820));

+ /*
+ * Process all still unmeasured DRTM entries after E820 initialization, so
+ * as not to do this while memory is uncached (too slow). This must also
+ * happen before modules are relocated or used.
+ */
+ if ( slaunch_active )
+ slaunch_process_drtm_policy(bi);
+
/* Early kexec reservation (explicit static start address). */
nr_pages = 0;
for ( i = 0; i < e820.nr_map; i++ )
diff --git a/xen/arch/x86/slaunch.c b/xen/arch/x86/slaunch.c
index 7b13b0a852..772971119a 100644
--- a/xen/arch/x86/slaunch.c
+++ b/xen/arch/x86/slaunch.c
@@ -9,9 +9,11 @@
#include <xen/macros.h>
#include <xen/mm.h>
#include <xen/types.h>
+#include <asm/bootinfo.h>
#include <asm/e820.h>
#include <asm/intel_txt.h>
#include <asm/page.h>
+#include <asm/processor.h>
#include <asm/slaunch.h>
#include <asm/tpm.h>

@@ -106,6 +108,217 @@ void __init slaunch_reserve_mem_regions(void)
}
}

+void __init slaunch_measure_slrt(void)
+{
+ struct slr_table *slrt = slaunch_get_slrt();
+
+ if ( slrt->revision == 1 )
+ {
+ /*
+ * In revision one of the SLRT, only the platform-specific info table
+ * is measured.
+ */
+ struct slr_entry_intel_info tmp;
+ struct slr_entry_intel_info *entry;
+
+ entry = (struct slr_entry_intel_info *)
+ slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_INTEL_INFO);
+ if ( entry == NULL )
+ panic("SLRT is missing Intel-specific information!\n");
+
+ tmp = *entry;
+ tmp.boot_params_base = 0;
+ tmp.txt_heap = 0;
+
+ tpm_hash_extend(DRTM_LOC, DRTM_DATA_PCR, (uint8_t *)&tmp,
+ sizeof(tmp), DLE_EVTYPE_SLAUNCH, NULL, 0);
+ }
+ else
+ {
+ /*
+ * slaunch_get_slrt() checks that the revision is valid, so we must not get
+ * here unless the code is wrong.
+ */
+ panic("Unhandled SLRT revision: %d!\n", slrt->revision);
+ }
+}
+
+static struct slr_entry_policy *__init slr_get_policy(struct slr_table *slrt)
+{
+ struct slr_entry_policy *policy;
+
+ policy = (struct slr_entry_policy *)
+ slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_DRTM_POLICY);
+ if ( policy == NULL )
+ panic("SLRT is missing DRTM policy!\n");
+
+ /* XXX: are newer revisions allowed? */
+ if ( policy->revision != SLR_POLICY_REVISION )
+ panic("DRTM policy in SLRT is of unsupported revision: %#04x!\n",
+ policy->revision);
+
+ return policy;
+}
+
+static void __init
+check_slrt_policy_entry(struct slr_policy_entry *policy_entry,
+ int idx,
+ struct slr_table *slrt)
+{
+ if ( policy_entry->entity_type != SLR_ET_SLRT )
+ panic("Expected DRTM policy entry #%d to describe SLRT, got %#04x!\n",
+ idx, policy_entry->entity_type);
+ if ( policy_entry->pcr != DRTM_DATA_PCR )
+ panic("SLRT was measured to PCR-%d instead of PCR-%d!\n", DRTM_DATA_PCR,
+ policy_entry->pcr);
+ if ( policy_entry->entity != (uint64_t)__pa(slrt) )
+ panic("SLRT address (%#08lx) differs from its DRTM entry (%#08lx)\n",
+ __pa(slrt), policy_entry->entity);
+}
+
+/* Returns number of policy entries that were already measured. */
+static unsigned int __init
+check_drtm_policy(struct slr_table *slrt,
+ struct slr_entry_policy *policy,
+ struct slr_policy_entry *policy_entry,
+ const struct boot_info *bi)
+{
+ uint32_t i;
+ uint32_t num_mod_entries;
+
+ if ( policy->nr_entries < 2 )
+ panic("DRTM policy in SLRT contains less than 2 entries (%d)!\n",
+ policy->nr_entries);
+
+ /*
+ * MBI policy entry must be the first one, so that measuring order matches
+ * policy order.
+ */
+ if ( policy_entry[0].entity_type != SLR_ET_MULTIBOOT2_INFO )
+ panic("First entry of DRTM policy in SLRT is not MBI: %#04x!\n",
+ policy_entry[0].entity_type);
+ if ( policy_entry[0].pcr != DRTM_DATA_PCR )
+ panic("MBI was measured to PCR-%d instead of PCR-%d!\n",
+ policy_entry[0].pcr, DRTM_DATA_PCR);
+
+ /* SLRT policy entry must be the second one. */
+ check_slrt_policy_entry(&policy_entry[1], 1, slrt);
+
+ for ( i = 0; i < bi->nr_modules; i++ )
+ {
+ uint16_t j;
+ const struct boot_module *mod = &bi->mods[i];
+
+ if ( mod->relocated || mod->released )
+ {
+ panic("Multiboot module \"%s\" (at %d) was consumed before measurement\n",
+ (const char *)__va(mod->cmdline_pa), i);
+ }
+
+ for ( j = 2; j < policy->nr_entries; j++ )
+ {
+ if ( policy_entry[j].entity_type != SLR_ET_MULTIBOOT2_MODULE )
+ continue;
+
+ if ( policy_entry[j].entity == mod->start &&
+ policy_entry[j].size == mod->size )
+ break;
+ }
+
+ if ( j >= policy->nr_entries )
+ {
+ panic("Couldn't find Multiboot module \"%s\" (at %d) in DRTM of Secure Launch\n",
+ (const char *)__va(mod->cmdline_pa), i);
+ }
+ }
+
+ num_mod_entries = 0;
+ for ( i = 0; i < policy->nr_entries; i++ )
+ {
+ if ( policy_entry[i].entity_type == SLR_ET_MULTIBOOT2_MODULE )
+ num_mod_entries++;
+ }
+
+ if ( bi->nr_modules != num_mod_entries )
+ {
+ panic("Unexpected number of Multiboot modules: %d instead of %d\n",
+ (int)bi->nr_modules, (int)num_mod_entries);
+ }
+
+ /*
+ * MBI was measured in tpm_extend_mbi().
+ * SLRT was measured in tpm_measure_slrt().
+ */
+ return 2;
+}
+
+void __init slaunch_process_drtm_policy(const struct boot_info *bi)
+{
+ struct slr_table *slrt;
+ struct slr_entry_policy *policy;
+ struct slr_policy_entry *policy_entry;
+ uint16_t i;
+ unsigned int measured;
+
+ slrt = slaunch_get_slrt();
+
+ policy = slr_get_policy(slrt);
+ policy_entry = (struct slr_policy_entry *)
+ ((uint8_t *)policy + sizeof(*policy));
+
+ measured = check_drtm_policy(slrt, policy, policy_entry, bi);
+ for ( i = 0; i < measured; i++ )
+ policy_entry[i].flags |= SLR_POLICY_FLAG_MEASURED;
+
+ for ( i = measured; i < policy->nr_entries; i++ )
+ {
+ int rc;
+ uint64_t start = policy_entry[i].entity;
+ uint64_t size = policy_entry[i].size;
+
+ /* No already measured entries are expected here. */
+ if ( policy_entry[i].flags & SLR_POLICY_FLAG_MEASURED )
+ panic("DRTM entry at %d was measured out of order!\n", i);
+
+ switch ( policy_entry[i].entity_type )
+ {
+ case SLR_ET_MULTIBOOT2_INFO:
+ panic("Duplicated MBI entry in DRTM of Secure Launch at %d\n", i);
+ case SLR_ET_SLRT:
+ panic("Duplicated SLRT entry in DRTM of Secure Launch at %d\n", i);
+
+ case SLR_ET_UNSPECIFIED:
+ case SLR_ET_BOOT_PARAMS:
+ case SLR_ET_SETUP_DATA:
+ case SLR_ET_CMDLINE:
+ case SLR_ET_UEFI_MEMMAP:
+ case SLR_ET_RAMDISK:
+ case SLR_ET_MULTIBOOT2_MODULE:
+ case SLR_ET_TXT_OS2MLE:
+ /* Measure this entry below. */
+ break;
+
+ case SLR_ET_UNUSED:
+ /* Skip this entry. */
+ continue;
+ }
+
+ if ( policy_entry[i].flags & SLR_POLICY_IMPLICIT_SIZE )
+ panic("Unexpected implicitly-sized DRTM entry of Secure Launch at %d (type %d, info: %s)\n",
+ i, policy_entry[i].entity_type, policy_entry[i].evt_info);
+
+ rc = slaunch_map_l2(start, size);
+ BUG_ON(rc != 0);
+
+ tpm_hash_extend(DRTM_LOC, policy_entry[i].pcr, __va(start), size,
+ DLE_EVTYPE_SLAUNCH, (uint8_t *)policy_entry[i].evt_info,
+ strnlen(policy_entry[i].evt_info,
+ TPM_EVENT_INFO_LENGTH));
+
+ policy_entry[i].flags |= SLR_POLICY_FLAG_MEASURED;
+ }
+}
+
int __init slaunch_map_l2(unsigned long paddr, unsigned long size)
{
unsigned long aligned_paddr = paddr & ~((1ULL << L2_PAGETABLE_SHIFT) - 1);
--
2.49.0

Sergii Dmytruk

Apr 22, 2025, 11:07:59 AM
to xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, trenchbo...@googlegroups.com
This mostly involves not running Intel-specific code when on AMD.

There are only a few new AMD-specific implementation details:
- finding the SLB start and size, then mapping it and reserving it in the
e820 map
- managing the offset for adding the next TPM log entry (the
TXT-compatible data prepared by SKL is stored inside the vendor data
field of the TCG log header)

Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/e820.c | 2 +-
xen/arch/x86/slaunch.c | 90 ++++++++++++++++++++++++++++++++++--------
xen/arch/x86/tpm.c | 68 ++++++++++++++++++++++++++++++-
3 files changed, 141 insertions(+), 19 deletions(-)

diff --git a/xen/arch/x86/e820.c b/xen/arch/x86/e820.c
index d105d1918a..177c428883 100644
--- a/xen/arch/x86/e820.c
+++ b/xen/arch/x86/e820.c
@@ -444,7 +444,7 @@ static uint64_t __init mtrr_top_of_ram(void)
ASSERT(paddr_bits);
addr_mask = ((1ULL << paddr_bits) - 1) & PAGE_MASK;

- if ( slaunch_active )
+ if ( slaunch_active && boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
txt_restore_mtrrs(e820_verbose);

rdmsrl(MSR_MTRRcap, mtrr_cap);
diff --git a/xen/arch/x86/slaunch.c b/xen/arch/x86/slaunch.c
index 772971119a..51a488a8e0 100644
--- a/xen/arch/x86/slaunch.c
+++ b/xen/arch/x86/slaunch.c
@@ -17,6 +17,10 @@
#include <asm/slaunch.h>
#include <asm/tpm.h>

+/* SLB is 64k, 64k-aligned */
+#define SKINIT_SLB_SIZE 0x10000
+#define SKINIT_SLB_ALIGN 0x10000
+
/*
* These variables are assigned to by the code near Xen's entry point.
* slaunch_slrt is not declared in slaunch.h to facilitate accessing the
@@ -38,6 +42,8 @@ struct slr_table *__init slaunch_get_slrt(void)

if (slrt == NULL) {
int rc;
+ bool intel_cpu = (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL);
+ uint16_t slrt_architecture = intel_cpu ? SLR_INTEL_TXT : SLR_AMD_SKINIT;

slrt = __va(slaunch_slrt);

@@ -49,9 +55,9 @@ struct slr_table *__init slaunch_get_slrt(void)
/* XXX: are newer revisions allowed? */
if ( slrt->revision != SLR_TABLE_REVISION )
panic("SLRT is of unsupported revision: %#04x!\n", slrt->revision);
- if ( slrt->architecture != SLR_INTEL_TXT )
- panic("SLRT is for unexpected architecture: %#04x!\n",
- slrt->architecture);
+ if ( slrt->architecture != slrt_architecture )
+ panic("SLRT is for unexpected architecture: %#04x != %#04x!\n",
+ slrt->architecture, slrt_architecture);
if ( slrt->size > slrt->max_size )
panic("SLRT is larger than its max size: %#08x > %#08x!\n",
slrt->size, slrt->max_size);
@@ -66,6 +72,23 @@ struct slr_table *__init slaunch_get_slrt(void)
return slrt;
}

+static uint32_t __init get_slb_start(void)
+{
+ /*
+ * The runtime computation relies on size being a power of 2 and equal to
+ * alignment. Make sure these assumptions hold.
+ */
+ BUILD_BUG_ON(SKINIT_SLB_SIZE != SKINIT_SLB_ALIGN);
+ BUILD_BUG_ON(SKINIT_SLB_SIZE == 0);
+ BUILD_BUG_ON((SKINIT_SLB_SIZE & (SKINIT_SLB_SIZE - 1)) != 0);
+
+ /*
+ * Rounding any address within the SLB down to the alignment gives the SLB
+ * base; on AMD, the SLRT is inside the SLB.
+ */
+ return slaunch_slrt & ~(SKINIT_SLB_SIZE - 1);
+}
+
void __init slaunch_map_mem_regions(void)
{
int rc;
@@ -76,7 +99,10 @@ void __init slaunch_map_mem_regions(void)
BUG_ON(rc != 0);

/* Vendor-specific part. */
- txt_map_mem_regions();
+ if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
+ txt_map_mem_regions();
+ else if ( boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
+ slaunch_map_l2(get_slb_start(), SKINIT_SLB_SIZE);

find_evt_log(slaunch_get_slrt(), &evt_log_addr, &evt_log_size);
if ( evt_log_addr != NULL )
@@ -94,7 +120,18 @@ void __init slaunch_reserve_mem_regions(void)
uint32_t evt_log_size;

/* Vendor-specific part. */
- txt_reserve_mem_regions();
+ if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
+ {
+ txt_reserve_mem_regions();
+ }
+ else if ( boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
+ {
+ uint64_t slb_start = get_slb_start();
+ uint64_t slb_end = slb_start + SKINIT_SLB_SIZE;
+ printk("SLAUNCH: reserving SLB (%#lx - %#lx)\n", slb_start, slb_end);
+ rc = reserve_e820_ram(&e820_raw, slb_start, slb_end);
+ BUG_ON(rc == 0);
+ }

find_evt_log(slaunch_get_slrt(), &evt_log_addr, &evt_log_size);
if ( evt_log_addr != NULL )
@@ -118,20 +155,41 @@ void __init slaunch_measure_slrt(void)
* In revision one of the SLRT, only platform-specific info table is
* measured.
*/
- struct slr_entry_intel_info tmp;
- struct slr_entry_intel_info *entry;
+ if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
+ {
+ struct slr_entry_intel_info tmp;
+ struct slr_entry_intel_info *entry;
+
+ entry = (struct slr_entry_intel_info *)
+ slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_INTEL_INFO);
+ if ( entry == NULL )
+ panic("SLRT is missing Intel-specific information!\n");

- entry = (struct slr_entry_intel_info *)
- slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_INTEL_INFO);
- if ( entry == NULL )
- panic("SLRT is missing Intel-specific information!\n");
+ tmp = *entry;
+ tmp.boot_params_base = 0;
+ tmp.txt_heap = 0;

- tmp = *entry;
- tmp.boot_params_base = 0;
- tmp.txt_heap = 0;
+ tpm_hash_extend(DRTM_LOC, DRTM_DATA_PCR, (uint8_t *)&tmp,
+ sizeof(tmp), DLE_EVTYPE_SLAUNCH, NULL, 0);
+ }
+ else if ( boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
+ {
+ struct slr_entry_amd_info tmp;
+ struct slr_entry_amd_info *entry;
+
+ entry = (struct slr_entry_amd_info *)
+ slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_AMD_INFO);
+ if ( entry == NULL )
+ panic("SLRT is missing AMD-specific information!\n");

- tpm_hash_extend(DRTM_LOC, DRTM_DATA_PCR, (uint8_t *)&tmp,
- sizeof(tmp), DLE_EVTYPE_SLAUNCH, NULL, 0);
+ tmp = *entry;
+ tmp.next = 0;
+ tmp.slrt_base = 0;
+ tmp.boot_params_base = 0;
+
+ tpm_hash_extend(DRTM_LOC, DRTM_DATA_PCR, (uint8_t *)&tmp,
+ sizeof(tmp), DLE_EVTYPE_SLAUNCH, NULL, 0);
+ }
}
else
{
diff --git a/xen/arch/x86/tpm.c b/xen/arch/x86/tpm.c
index c51bd9b496..8562296681 100644
--- a/xen/arch/x86/tpm.c
+++ b/xen/arch/x86/tpm.c
@@ -10,6 +10,7 @@
#include <asm/intel_txt.h>
#include <asm/slaunch.h>
#include <asm/tpm.h>
+#include <asm/x86-vendors.h>

#ifdef __EARLY_SLAUNCH__

@@ -51,11 +52,31 @@ void *memcpy(void *dest, const void *src, size_t n)
return dest;
}

+static bool is_amd_cpu(void)
+{
+ /*
+ * asm/processor.h can't be included in early code, which means neither
+ * cpuid() function nor boot_cpu_data can be used here.
+ */
+ uint32_t eax, ebx, ecx, edx;
+ asm volatile ( "cpuid"
+ : "=a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx)
+ : "0" (0), "c" (0) );
+ return ebx == X86_VENDOR_AMD_EBX
+ && ecx == X86_VENDOR_AMD_ECX
+ && edx == X86_VENDOR_AMD_EDX;
+}
+
#else /* __EARLY_SLAUNCH__ */

#include <xen/mm.h>
#include <xen/pfn.h>

+static bool is_amd_cpu(void)
+{
+ return boot_cpu_data.x86_vendor == X86_VENDOR_AMD;
+}
+
#endif /* __EARLY_SLAUNCH__ */

#define TPM_LOC_REG(loc, reg) (0x1000 * (loc) + (reg))
@@ -242,6 +263,21 @@ struct TPM12_PCREvent {
uint8_t Data[];
};

+struct tpm1_spec_id_event {
+ uint32_t pcrIndex;
+ uint32_t eventType;
+ uint8_t digest[20];
+ uint32_t eventSize;
+ uint8_t signature[16];
+ uint32_t platformClass;
+ uint8_t specVersionMinor;
+ uint8_t specVersionMajor;
+ uint8_t specErrata;
+ uint8_t uintnSize;
+ uint8_t vendorInfoSize;
+ uint8_t vendorInfo[0]; /* variable number of members */
+} __packed;
+
struct txt_ev_log_container_12 {
char Signature[20]; /* "TXT Event Container", null-terminated */
uint8_t Reserved[12];
@@ -385,6 +421,16 @@ static void *create_log_event12(struct txt_ev_log_container_12 *evt_log,
{
struct TPM12_PCREvent *new_entry;

+ if ( is_amd_cpu() )
+ {
+ /*
+ * On AMD, the TXT-compatible structure is stored as vendor data of the
+ * TCG-defined event log header.
+ */
+ struct tpm1_spec_id_event *spec_id = (void *)evt_log;
+ evt_log = (struct txt_ev_log_container_12 *)&spec_id->vendorInfo[0];
+ }
+
new_entry = (void *)(((uint8_t *)evt_log) + evt_log->NextEventOffset);

/*
@@ -833,11 +879,29 @@ static uint32_t tpm2_hash_extend(unsigned loc, const uint8_t *buf,

#endif /* __EARLY_SLAUNCH__ */

-static struct heap_event_log_pointer_element2_1 *find_evt_log_ext_data(void)
+static struct heap_event_log_pointer_element2_1 *
+find_evt_log_ext_data(struct tpm2_spec_id_event *evt_log)
{
struct txt_os_sinit_data *os_sinit;
struct txt_ext_data_element *ext_data;

+ if ( is_amd_cpu() )
+ {
+ /*
+ * Event log pointer is defined by TXT specification, but
+ * secure-kernel-loader provides a compatible structure in vendor data
+ * of the log.
+ */
+ const uint8_t *data_size =
+ (void *)&evt_log->digestSizes[evt_log->digestCount];
+
+ if ( *data_size != sizeof(struct heap_event_log_pointer_element2_1) )
+ return NULL;
+
+ /* Vendor data directly follows one-byte size. */
+ return (void *)(data_size + 1);
+ }
+
os_sinit = txt_os_sinit_data_start(__va(read_txt_reg(TXTCR_HEAP_BASE)));
ext_data = (void *)((uint8_t *)os_sinit + sizeof(*os_sinit));

@@ -871,7 +935,7 @@ create_log_event20(struct tpm2_spec_id_event *evt_log, uint32_t evt_log_size,
unsigned i;
uint8_t *p;

- log_ext_data = find_evt_log_ext_data();
+ log_ext_data = find_evt_log_ext_data(evt_log);
if ( log_ext_data == NULL )
return log_hashes;

--
2.49.0

Sergii Dmytruk

Apr 22, 2025, 11:08:02 AM
to xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, trenchbo...@googlegroups.com
From: Krystian Hebel <krystia...@3mdeb.com>

On Intel TXT, APs are started in one of two ways, depending on the ACM,
which reports the supported method in its information table. In either
case, all APs are started simultaneously after the BSP requests them to.
The two possible methods are:
- the GETSEC[WAKEUP] instruction,
- a MONITOR address.

GETSEC[WAKEUP] requires version >= 7 of the SINIT to MLE Data table, but
there is no clear mapping of that version to processor families and it's
not known which CPUs actually use it. It may have been designed for TXT
support on CPUs that lack MONITOR/MWAIT, as GETSEC[WAKEUP] appears to be
more complicated, in software and hardware alike.

This patch implements only the MONITOR approach; GETSEC[WAKEUP] support
will be added later once more details and means of testing are available,
and if there is a practical need for it.

With this patch, every AP goes through the assembly part, and only in
start_secondary() in C does it re-enter MONITOR/MWAIT if it is not the AP
that was asked to boot. The same address is reused for simplicity, so on
the next wakeup call APs don't have to go through the assembly part (GDT,
paging, stack setup) again.

Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/boot/trampoline.S | 19 +++++++++-
xen/arch/x86/include/asm/intel_txt.h | 6 +++
xen/arch/x86/include/asm/processor.h | 1 +
xen/arch/x86/smpboot.c | 57 ++++++++++++++++++++++++++++
4 files changed, 82 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/boot/trampoline.S b/xen/arch/x86/boot/trampoline.S
index ed593acc46..5989d3a69a 100644
--- a/xen/arch/x86/boot/trampoline.S
+++ b/xen/arch/x86/boot/trampoline.S
@@ -58,6 +58,16 @@ GLOBAL(entry_SIPI16)
ljmpl $BOOT_CS32,$bootsym_rel(trampoline_protmode_entry,6)

.code32
+GLOBAL(txt_ap_entry)
+ /*
+ * APs enter here in protected mode without paging. GDT is set in JOIN
+ * structure, it points to trampoline_gdt. Interrupts are disabled by
+ * TXT (including NMI and SMI), so IDT doesn't matter at this point.
+ * The only missing piece is indicating that we are an AP by storing a
+ * non-zero value in EBX.
+ */
+ mov $1, %ebx
+
trampoline_protmode_entry:
/* Set up a few descriptors: on entry only CS is guaranteed good. */
mov $BOOT_DS,%eax
@@ -143,7 +153,7 @@ start64:
.word 0
idt_48: .word 0, 0, 0 # base = limit = 0

-trampoline_gdt:
+GLOBAL(trampoline_gdt)
.word 0 /* 0x0000: unused (reused for GDTR) */
gdt_48:
.word .Ltrampoline_gdt_end - trampoline_gdt - 1
@@ -154,6 +164,13 @@ gdt_48:
.quad 0x00cf93000000ffff /* 0x0018: ring 0 data */
.quad 0x00009b000000ffff /* 0x0020: real-mode code @ BOOT_TRAMPOLINE */
.quad 0x000093000000ffff /* 0x0028: real-mode data @ BOOT_TRAMPOLINE */
+ /*
+ * Intel TXT requires these two in this exact order, which isn't compatible
+ * with the order required by SYSCALL, so we have duplicated entries.
+ * If order ever changes, update selector numbers in asm/intel_txt.h.
+ */
+ .quad 0x00cf9b000000ffff /* 0x0030: ring 0 code, 32-bit mode */
+ .quad 0x00cf93000000ffff /* 0x0038: ring 0 data */
.Ltrampoline_gdt_end:

/* Relocations for trampoline Real Mode segments. */
diff --git a/xen/arch/x86/include/asm/intel_txt.h b/xen/arch/x86/include/asm/intel_txt.h
index 0a36ef66d1..af997c9da6 100644
--- a/xen/arch/x86/include/asm/intel_txt.h
+++ b/xen/arch/x86/include/asm/intel_txt.h
@@ -79,6 +79,9 @@

#define SLAUNCH_BOOTLOADER_MAGIC 0x4c534254

+#define TXT_AP_BOOT_CS 0x0030
+#define TXT_AP_BOOT_DS 0x0038
+
#ifndef __ASSEMBLY__

#include <xen/slr_table.h>
@@ -93,6 +96,9 @@
#define _txt(x) __va(x)
#endif

+extern char txt_ap_entry[];
+extern uint32_t trampoline_gdt[];
+
/*
* Always use private space as some of registers are either read-only or not
* present in public space.
diff --git a/xen/arch/x86/include/asm/processor.h b/xen/arch/x86/include/asm/processor.h
index 75af7ea3c4..9957e3cb9e 100644
--- a/xen/arch/x86/include/asm/processor.h
+++ b/xen/arch/x86/include/asm/processor.h
@@ -473,6 +473,7 @@ void set_in_mcu_opt_ctrl(uint32_t mask, uint32_t val);
enum ap_boot_method {
AP_BOOT_NORMAL,
AP_BOOT_SKINIT,
+ AP_BOOT_TXT,
};
extern enum ap_boot_method ap_boot_method;

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 54207e6d88..1ff26761ab 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -29,6 +29,7 @@
#include <asm/flushtlb.h>
#include <asm/guest.h>
#include <asm/idt.h>
+#include <asm/intel_txt.h>
#include <asm/io_apic.h>
#include <asm/irq-vectors.h>
#include <asm/mc146818rtc.h>
@@ -37,6 +38,7 @@
#include <asm/mtrr.h>
#include <asm/prot-key.h>
#include <asm/setup.h>
+#include <asm/slaunch.h>
#include <asm/spec_ctrl.h>
#include <asm/tboot.h>
#include <asm/time.h>
@@ -325,6 +327,29 @@ void asmlinkage start_secondary(void *unused)
*/
unsigned int cpu = booting_cpu;

+ if ( ap_boot_method == AP_BOOT_TXT ) {
+ uint64_t misc_enable;
+ uint32_t my_apicid;
+ struct txt_sinit_mle_data *sinit_mle =
+ txt_sinit_mle_data_start(__va(read_txt_reg(TXTCR_HEAP_BASE)));
+
+ /* TXT released us with MONITOR disabled in IA32_MISC_ENABLE. */
+ rdmsrl(MSR_IA32_MISC_ENABLE, misc_enable);
+ wrmsrl(MSR_IA32_MISC_ENABLE,
+ misc_enable | MSR_IA32_MISC_ENABLE_MONITOR_ENABLE);
+
+ /* get_apic_id() reads from x2APIC if it thinks it is enabled. */
+ x2apic_ap_setup();
+ my_apicid = get_apic_id();
+
+ while ( my_apicid != x86_cpu_to_apicid[cpu] ) {
+ asm volatile ("monitor; xor %0,%0; mwait"
+ :: "a"(__va(sinit_mle->rlp_wakeup_addr)), "c"(0),
+ "d"(0) : "memory");
+ cpu = booting_cpu;
+ }
+ }
+
/* Critical region without IDT or TSS. Any fault is deadly! */

set_current(idle_vcpu[cpu]);
@@ -421,6 +446,28 @@ void asmlinkage start_secondary(void *unused)
startup_cpu_idle_loop();
}

+static int wake_aps_in_txt(void)
+{
+ struct txt_sinit_mle_data *sinit_mle =
+ txt_sinit_mle_data_start(__va(read_txt_reg(TXTCR_HEAP_BASE)));
+ uint32_t *wakeup_addr = __va(sinit_mle->rlp_wakeup_addr);
+
+ uint32_t join[4] = {
+ trampoline_gdt[1], /* GDT limit */
+ bootsym_phys(trampoline_gdt), /* GDT base */
+ TXT_AP_BOOT_CS, /* CS selector, DS = CS+8 */
+ bootsym_phys(txt_ap_entry) /* EIP */
+ };
+
+ write_txt_reg(TXTCR_MLE_JOIN, __pa(join));
+
+ smp_mb();
+
+ *wakeup_addr = 1;
+
+ return 0;
+}
+
static int wakeup_secondary_cpu(int phys_apicid, unsigned long start_eip)
{
unsigned long send_status = 0, accept_status = 0;
@@ -443,6 +490,9 @@ static int wakeup_secondary_cpu(int phys_apicid, unsigned long start_eip)
if ( tboot_in_measured_env() && !tboot_wake_ap(phys_apicid, start_eip) )
return 0;

+ if ( ap_boot_method == AP_BOOT_TXT )
+ return wake_aps_in_txt();
+
/*
* Be paranoid about clearing APIC errors.
*/
@@ -1150,6 +1200,13 @@ static struct notifier_block cpu_smpboot_nfb = {

void __init smp_prepare_cpus(void)
{
+ /*
+ * If the platform is performing a Secure Launch via TXT, secondary
+ * CPUs (APs) will need to be woken up in a TXT-specific way.
+ */
+ if ( slaunch_active && boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
+ ap_boot_method = AP_BOOT_TXT;
+
register_cpu_notifier(&cpu_smpboot_nfb);

mtrr_aps_sync_begin();
--
2.49.0

Sergii Dmytruk

unread,
Apr 22, 2025, 11:08:03 AMApr 22
to xen-...@lists.xenproject.org, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini, Daniel P. Smith, Marek Marczykowski-Górecki, trenchbo...@googlegroups.com
When running on an EFI-enabled system, Xen needs to have access to Boot
Services in order to initialize itself properly and reach a state in
which a dom0 kernel can operate without issues.

This means that DRTM must be started in the middle of Xen's
initialization process. This is achieved via a callback into the
bootloader (GRUB), which is responsible for initiating DRTM and then
resuming Xen's initialization. The latter works by branching on a flag in
the Slaunch entry point to switch back into long mode before calling the
same function Xen would have executed as the next step without DRTM.

Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
.gitignore | 1 +
docs/hypervisor-guide/x86/how-xen-boots.rst | 10 +-
xen/arch/x86/Makefile | 9 +-
xen/arch/x86/boot/head.S | 124 ++++++++++++++++++++
xen/arch/x86/boot/x86_64.S | 14 ++-
xen/arch/x86/efi/efi-boot.h | 90 +++++++++++++-
xen/arch/x86/efi/fixmlehdr.c | 122 +++++++++++++++++++
xen/arch/x86/slaunch.c | 74 +++++++++++-
xen/common/efi/boot.c | 4 +
xen/common/efi/runtime.c | 1 +
xen/include/xen/efi.h | 1 +
11 files changed, 437 insertions(+), 13 deletions(-)
create mode 100644 xen/arch/x86/efi/fixmlehdr.c

diff --git a/.gitignore b/.gitignore
index 53f5df0003..dab829d7e1 100644
--- a/.gitignore
+++ b/.gitignore
@@ -201,6 +201,7 @@ xen/.xen.elf32
xen/System.map
xen/arch/x86/efi.lds
xen/arch/x86/efi/check.efi
+xen/arch/x86/efi/fixmlehdr
xen/arch/x86/efi/mkreloc
xen/arch/x86/include/asm/asm-macros.h
xen/arch/*/xen.lds
diff --git a/docs/hypervisor-guide/x86/how-xen-boots.rst b/docs/hypervisor-guide/x86/how-xen-boots.rst
index 050fe9c61f..63f81a8198 100644
--- a/docs/hypervisor-guide/x86/how-xen-boots.rst
+++ b/docs/hypervisor-guide/x86/how-xen-boots.rst
@@ -55,10 +55,12 @@ If ``CONFIG_PVH_GUEST`` was selected at build time, an Elf note is included
which indicates the ability to use the PVH boot protocol, and registers
``__pvh_start`` as the entrypoint, entered in 32bit mode.

-A combination of Multiboot 2 and MLE headers is used to implement DRTM for
-legacy (BIOS) boot. The separate entry point is used mainly to differentiate
-from other kinds of boots. It moves a magic number to EAX before jumping into
-common startup code.
+A combination of Multiboot 2 and MLE headers is used to implement DRTM. The
+separate entry point is used mainly to differentiate from other kinds of boots.
+For a legacy (BIOS) boot, it moves a magic number to EAX before jumping into
+common startup code. For a EFI boot, it resumes execution of Xen.efi which was
+paused by handing control to a part of a bootloader responsible for initiating
+DRTM sequence.


xen.gz
diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index 7d1027a50f..af4dd16f8a 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -90,6 +90,7 @@ extra-y += xen.lds

hostprogs-y += boot/mkelf32
hostprogs-y += efi/mkreloc
+hostprogs-y += efi/fixmlehdr

$(obj)/efi/mkreloc: HOSTCFLAGS += -I$(srctree)/include

@@ -141,6 +142,10 @@ $(TARGET): $(TARGET)-syms $(efi-y) $(obj)/boot/mkelf32

CFLAGS-$(XEN_BUILD_EFI) += -DXEN_BUILD_EFI

+ifeq ($(XEN_BUILD_EFI),y)
+XEN_AFLAGS += -DXEN_BUILD_EFI
+endif
+
$(TARGET)-syms: $(objtree)/prelink.o $(obj)/xen.lds
$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) \
$(objtree)/common/symbols-dummy.o -o $(dot-target).0
@@ -210,7 +215,7 @@ note_file_option ?= $(note_file)

extra-$(XEN_BUILD_PE) += efi.lds
ifeq ($(XEN_BUILD_PE),y)
-$(TARGET).efi: $(objtree)/prelink.o $(note_file) $(obj)/efi.lds $(obj)/efi/relocs-dummy.o $(obj)/efi/mkreloc
+$(TARGET).efi: $(objtree)/prelink.o $(note_file) $(obj)/efi.lds $(obj)/efi/relocs-dummy.o $(obj)/efi/mkreloc $(obj)/efi/fixmlehdr
ifeq ($(CONFIG_DEBUG_INFO),y)
$(if $(filter --strip-debug,$(EFI_LDFLAGS)),echo,:) "Will strip debug info from $(@F)"
endif
@@ -237,6 +242,8 @@ endif
$(LD) $(call EFI_LDFLAGS,$(VIRT_BASE)) -T $(obj)/efi.lds -N $< \
$(dot-target).1r.o $(dot-target).1s.o $(orphan-handling-y) \
$(note_file_option) -o $@
+ # take image offset into account
+ $(obj)/efi/fixmlehdr $@ $(XEN_IMG_OFFSET)
$(NM) -pa --format=sysv $@ \
| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
> $@.map
diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
index 3184b6883a..27b63fae32 100644
--- a/xen/arch/x86/boot/head.S
+++ b/xen/arch/x86/boot/head.S
@@ -397,6 +397,12 @@ slaunch_stub_entry:
mov %ebx, %esi
sub $sym_offs(slaunch_stub_entry), %esi

+#ifdef XEN_BUILD_EFI
+ /* If the flag is already set, then Xen should continue execution. */
+ cmpb $0, sym_esi(slaunch_active)
+ jne slaunch_efi_jumpback
+#endif
+
/* On AMD, %ebp holds the base address of SLB, save it for later. */
mov %ebp, %ebx

@@ -836,6 +842,124 @@ trampoline_setup:
/* Jump into the relocated trampoline. */
lret

+#ifdef XEN_BUILD_EFI
+
+ /*
+ * The state matches that of slaunch_stub_entry above, but with %esi
+ * already initialized.
+ */
+slaunch_efi_jumpback:
+ lea STACK_SIZE - CPUINFO_sizeof + sym_esi(cpu0_stack), %esp
+
+ /* Prepare gdt and segments. */
+ add %esi, sym_esi(gdt_boot_base)
+ lgdt sym_esi(gdt_boot_descr)
+
+ mov $BOOT_DS, %ecx
+ mov %ecx, %ds
+ mov %ecx, %es
+ mov %ecx, %ss
+
+ push $BOOT_CS32
+ lea sym_esi(.Lgdt_is_set),%edx
+ push %edx
+ lret
+.Lgdt_is_set:
+
+ /*
+ * Stash TSC as above because it was zeroed on jumping into bootloader
+ * to not interfere with measurements.
+ */
+ rdtsc
+ mov %eax, sym_esi(boot_tsc_stamp)
+ mov %edx, 4 + sym_esi(boot_tsc_stamp)
+
+ /*
+ * Clear the pagetables before the use. We are loaded below 4GiB and
+ * this avoids the need for writing to higher dword of each entry.
+ * Additionally, this ensures those dwords are actually zero and the
+ * mappings aren't manipulated from outside.
+ */
+ lea sym_esi(bootmap_start), %edi
+ lea sym_esi(bootmap_end), %ecx
+ sub %edi, %ecx
+ xor %eax, %eax
+ shr $2, %ecx
+ rep stosl
+
+ /* 1x L1 page, 512 entries mapping total of 2M. */
+ lea sym_esi(l1_bootmap), %edi
+ mov $512, %ecx
+ mov $(__PAGE_HYPERVISOR + 512 * PAGE_SIZE), %edx
+.Lfill_l1_identmap:
+ sub $PAGE_SIZE, %edx
+ /* Loop runs for ecx=[512..1] for entries [511..0], hence -8. */
+ mov %edx, -8(%edi,%ecx,8)
+ loop .Lfill_l1_identmap
+
+ /* 4x L2 pages, each page mapping 1G of RAM. */
+ lea sym_esi(l2_bootmap), %edi
+ /* 1st entry points to L1. */
+ lea (sym_offs(l1_bootmap) + __PAGE_HYPERVISOR)(%esi), %edx
+ mov %edx, (%edi)
+ /* Other entries are 2MB pages. */
+ mov $(4 * 512 - 1), %ecx
+ /*
+ * Value below should be 4GB + flags, which wouldn't fit in 32b
+ * register. To avoid warning from the assembler, 4GB is skipped here.
+ * Substitution in first iteration makes the value roll over and point
+ * to 4GB - 2MB + flags.
+ */
+ mov $(_PAGE_PSE + __PAGE_HYPERVISOR), %edx
+.Lfill_l2_identmap:
+ sub $(1 << L2_PAGETABLE_SHIFT), %edx
+ /* Loop runs for ecx=[2047..1] for entries [2047..1]. */
+ mov %edx, (%edi,%ecx,8)
+ loop .Lfill_l2_identmap
+
+ /* 1x L3 page, mapping the 4x L2 pages. */
+ lea sym_esi(l3_bootmap), %edi
+ mov $4, %ecx
+ lea (sym_offs(l2_bootmap) + 4 * PAGE_SIZE + __PAGE_HYPERVISOR)(%esi), %edx
+.Lfill_l3_identmap:
+ sub $PAGE_SIZE, %edx
+ /* Loop runs for ecx=[4..1] for entries [3..0], hence -8. */
+ mov %edx, -8(%edi,%ecx,8)
+ loop .Lfill_l3_identmap
+
+ /* 1x L4 page, mapping the L3 page. */
+ lea (sym_offs(l3_bootmap) + __PAGE_HYPERVISOR)(%esi), %edx
+ mov %edx, sym_esi(l4_bootmap)
+
+ /* Restore CR4, PAE must be enabled before IA-32e mode */
+ mov %cr4, %ecx
+ or $X86_CR4_PAE, %ecx
+ mov %ecx, %cr4
+
+ /* Load PML4 table location into PT base register */
+ lea sym_esi(l4_bootmap), %eax
+ mov %eax, %cr3
+
+ /* Enable IA-32e mode and paging */
+ mov $MSR_EFER, %ecx
+ rdmsr
+ or $EFER_LME >> 8, %ah
+ wrmsr
+
+ mov %cr0, %eax
+ or $X86_CR0_PG | X86_CR0_NE | X86_CR0_TS | X86_CR0_MP, %eax
+ mov %eax, %cr0
+
+ /* Now in IA-32e compatibility mode, use lret to jump to 64b mode */
+ lea sym_esi(start_xen_from_efi), %ecx
+ push $BOOT_CS64
+ push %ecx
+ lret
+
+.global start_xen_from_efi
+
+#endif /* XEN_BUILD_EFI */
+
ENTRY(trampoline_start)
#include "trampoline.S"
ENTRY(trampoline_end)
diff --git a/xen/arch/x86/boot/x86_64.S b/xen/arch/x86/boot/x86_64.S
index ac33576d8f..67896f5fe5 100644
--- a/xen/arch/x86/boot/x86_64.S
+++ b/xen/arch/x86/boot/x86_64.S
@@ -221,14 +221,22 @@ GLOBAL(__page_tables_end)
/* Init pagetables. Enough page directories to map into 4GB. */
.section .init.data, "aw", @progbits

-DATA_LOCAL(l1_bootmap, PAGE_SIZE)
+bootmap_start:
+
+DATA_LOCAL(l1_bootmap, PAGE_SIZE) /* 1x L1 page, mapping 2M of RAM. */
.fill L1_PAGETABLE_ENTRIES, 8, 0
END(l1_bootmap)

-DATA(l2_bootmap, PAGE_SIZE)
+DATA(l2_bootmap, PAGE_SIZE) /* 4x L2 pages, each mapping 1G of RAM. */
.fill 4 * L2_PAGETABLE_ENTRIES, 8, 0
END(l2_bootmap)

-DATA(l3_bootmap, PAGE_SIZE)
+DATA(l3_bootmap, PAGE_SIZE) /* 1x L3 page, mapping the 4x L2 pages. */
.fill L3_PAGETABLE_ENTRIES, 8, 0
END(l3_bootmap)
+
+DATA_LOCAL(l4_bootmap, PAGE_SIZE) /* 1x L4 page, mapping the L3 page. */
+ .fill L4_PAGETABLE_ENTRIES, 8, 0
+END(l4_bootmap)
+
+bootmap_end:
diff --git a/xen/arch/x86/efi/efi-boot.h b/xen/arch/x86/efi/efi-boot.h
index 1d8902a9a7..1cfb4582d4 100644
--- a/xen/arch/x86/efi/efi-boot.h
+++ b/xen/arch/x86/efi/efi-boot.h
@@ -5,6 +5,12 @@
*/
#include <xen/vga.h>

+/*
+ * Tell <asm/intel_txt.h> to access TXT registers without address translation
+ * which has not yet been set up.
+ */
+#define __EARLY_SLAUNCH__
+
#include <asm/boot-helpers.h>
#include <asm/e820.h>
#include <asm/edd.h>
@@ -13,8 +19,11 @@
#include <asm/setup.h>
#include <asm/trampoline.h>
#include <asm/efi.h>
+#include <asm/intel_txt.h>
+#include <asm/slaunch.h>

static struct file __initdata ucode;
+static uint64_t __initdata xen_image_size;
static multiboot_info_t __initdata mbi = {
.flags = MBI_MODULES | MBI_LOADERNAME
};
@@ -230,10 +239,29 @@ static void __init efi_arch_pre_exit_boot(void)
}
}

-static void __init noreturn efi_arch_post_exit_boot(void)
+void __init noreturn start_xen_from_efi(void)
{
u64 cr4 = XEN_MINIMAL_CR4 & ~X86_CR4_PGE, efer;

+ if ( slaunch_active )
+ {
+ struct slr_table *slrt = (struct slr_table *)efi.slr;
+ struct slr_entry_intel_info *intel_info;
+
+ intel_info = (struct slr_entry_intel_info *)
+ slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_INTEL_INFO);
+ if ( intel_info != NULL )
+ {
+ void *txt_heap = txt_init();
+ struct txt_os_mle_data *os_mle = txt_os_mle_data_start(txt_heap);
+ struct txt_os_sinit_data *os_sinit =
+ txt_os_sinit_data_start(txt_heap);
+
+ txt_verify_pmr_ranges(os_mle, os_sinit, intel_info, xen_phys_start,
+ xen_phys_start, xen_image_size);
+ }
+ }
+
efi_arch_relocate_image(__XEN_VIRT_START - xen_phys_start);
memcpy(_p(trampoline_phys), trampoline_start, cfg.size);

@@ -279,6 +307,65 @@ static void __init noreturn efi_arch_post_exit_boot(void)
unreachable();
}

+extern uint32_t slaunch_slrt;
+
+static void __init attempt_secure_launch(void)
+{
+ struct slr_table *slrt;
+ struct slr_entry_dl_info *dlinfo;
+ dl_handler_func handler_callback;
+
+ /* The presence of this table indicates a Secure Launch boot. */
+ slrt = (struct slr_table *)efi.slr;
+ if ( efi.slr == EFI_INVALID_TABLE_ADDR || slrt->magic != SLR_TABLE_MAGIC ||
+ slrt->revision != SLR_TABLE_REVISION )
+ return;
+
+ /* Avoid calls into firmware after DRTM. */
+ __clear_bit(EFI_RS, &efi_flags);
+
+ /*
+ * Make measurements less sensitive to hardware-specific details.
+ *
+ * Intentionally leaving efi_ct and efi_num_ct intact.
+ */
+ efi_ih = 0;
+ efi_bs = NULL;
+ efi_bs_revision = 0;
+ efi_rs = NULL;
+ efi_version = 0;
+ efi_fw_vendor = NULL;
+ efi_fw_revision = 0;
+ StdOut = NULL;
+ StdErr = NULL;
+ boot_tsc_stamp = 0;
+
+ slaunch_active = true;
+ slaunch_slrt = efi.slr;
+
+ /* Jump through DL stub to initiate Secure Launch. */
+ dlinfo = (struct slr_entry_dl_info *)
+ slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_DL_INFO);
+
+ handler_callback = (dl_handler_func)dlinfo->dl_handler;
+ handler_callback(&dlinfo->bl_context);
+
+ unreachable();
+}
+
+static void __init noreturn efi_arch_post_exit_boot(void)
+{
+ /*
+ * If Secure Launch happens, attempt_secure_launch() doesn't return and
+ * start_xen_from_efi() is invoked after DRTM has been initiated.
+ * Otherwise, attempt_secure_launch() returns and execution continues as
+ * usual.
+ */
+ attempt_secure_launch();
+
+ start_xen_from_efi();
+}
+
static void __init efi_arch_cfg_file_early(const EFI_LOADED_IMAGE *image,
EFI_FILE_HANDLE dir_handle,
const char *section)
@@ -775,6 +862,7 @@ static void __init efi_arch_halt(void)
static void __init efi_arch_load_addr_check(const EFI_LOADED_IMAGE *loaded_image)
{
xen_phys_start = (UINTN)loaded_image->ImageBase;
+ xen_image_size = loaded_image->ImageSize;
if ( (xen_phys_start + loaded_image->ImageSize - 1) >> 32 )
blexit(L"Xen must be loaded below 4Gb.");
if ( xen_phys_start & ((1 << L2_PAGETABLE_SHIFT) - 1) )
diff --git a/xen/arch/x86/efi/fixmlehdr.c b/xen/arch/x86/efi/fixmlehdr.c
new file mode 100644
index 0000000000..d443f3d75d
--- /dev/null
+++ b/xen/arch/x86/efi/fixmlehdr.c
@@ -0,0 +1,122 @@
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+#define PREFIX_SIZE (4*1024)
+
+struct mle_header
+{
+ uint8_t uuid[16];
+ uint32_t header_len;
+ uint32_t version;
+ uint32_t entry_point;
+ uint32_t first_valid_page;
+ uint32_t mle_start;
+ uint32_t mle_end;
+ uint32_t capabilities;
+ uint32_t cmdline_start;
+ uint32_t cmdline_end;
+} __attribute__ ((packed));
+
+static const uint8_t MLE_HEADER_UUID[] = {
+ 0x5a, 0xac, 0x82, 0x90, 0x6f, 0x47, 0xa7, 0x74,
+ 0x0f, 0x5c, 0x55, 0xa2, 0xcb, 0x51, 0xb6, 0x42
+};
+
+int main(int argc, char *argv[])
+{
+ FILE *fp;
+ struct mle_header header;
+ int i;
+ char *end_ptr;
+ long long correction;
+ const char *file_path;
+
+ if ( argc != 3 )
+ {
+ fprintf(stderr, "Usage: %s <xen.efi> <entry-correction>\n", argv[0]);
+ return 1;
+ }
+
+ correction = strtoll(argv[2], &end_ptr, 0);
+ if ( *end_ptr != '\0' )
+ {
+ fprintf(stderr, "Failed to parse '%s' as a number\n", argv[2]);
+ return 1;
+ }
+ if ( correction < INT32_MIN )
+ {
+ fprintf(stderr, "Correction '%s' is too small\n", argv[2]);
+ return 1;
+ }
+ if ( correction > INT32_MAX )
+ {
+ fprintf(stderr, "Correction '%s' is too large\n", argv[2]);
+ return 1;
+ }
+
+ file_path = argv[1];
+
+ fp = fopen(file_path, "r+");
+ if ( fp == NULL )
+ {
+ fprintf(stderr, "Failed to open %s\n", file_path);
+ return 1;
+ }
+
+ for ( i = 0; i < PREFIX_SIZE; i += 16 )
+ {
+ uint8_t bytes[16];
+
+ if ( fread(bytes, sizeof(bytes), 1, fp) != 1 )
+ {
+ fprintf(stderr, "Failed to find MLE header in %s\n", file_path);
+ goto fail;
+ }
+
+ if ( memcmp(bytes, MLE_HEADER_UUID, 16) == 0 )
+ {
+ break;
+ }
+ }
+
+ if ( i >= PREFIX_SIZE )
+ {
+ fprintf(stderr, "Failed to find MLE header in %s\n", file_path);
+ goto fail;
+ }
+
+ if ( fseek(fp, -16, SEEK_CUR) )
+ {
+ fprintf(stderr, "Failed to seek back to MLE header in %s\n", file_path);
+ goto fail;
+ }
+
+ if ( fread(&header, sizeof(header), 1, fp) != 1 )
+ {
+ fprintf(stderr, "Failed to read MLE header from %s\n", file_path);
+ goto fail;
+ }
+
+ if ( fseek(fp, -(int)sizeof(header), SEEK_CUR) )
+ {
+ fprintf(stderr, "Failed to seek back again to MLE header in %s\n",
+ file_path);
+ goto fail;
+ }
+
+ header.entry_point += correction;
+
+ if ( fwrite(&header, sizeof(header), 1, fp) != 1 )
+ {
+ fprintf(stderr, "Failed to write MLE header in %s\n", file_path);
+ goto fail;
+ }
+
+ return 0;
+
+fail:
+ fclose(fp);
+ return 1;
+}
diff --git a/xen/arch/x86/slaunch.c b/xen/arch/x86/slaunch.c
index 51a488a8e0..a4b7d00da5 100644
--- a/xen/arch/x86/slaunch.c
+++ b/xen/arch/x86/slaunch.c
@@ -5,6 +5,7 @@
*/

#include <xen/compiler.h>
+#include <xen/efi.h>
#include <xen/init.h>
#include <xen/macros.h>
#include <xen/mm.h>
@@ -243,10 +244,23 @@ check_drtm_policy(struct slr_table *slrt,
{
uint32_t i;
uint32_t num_mod_entries;
+ int min_entries;

- if ( policy->nr_entries < 2 )
- panic("DRTM policy in SLRT contains less than 2 entries (%d)!\n",
- policy->nr_entries);
+ min_entries = efi_enabled(EFI_BOOT) ? 1 : 2;
+ if ( policy->nr_entries < min_entries )
+ {
+ panic("DRTM policy in SLRT contains less than %d entries (%d)!\n",
+ min_entries, policy->nr_entries);
+ }
+
+ if ( efi_enabled(EFI_BOOT) )
+ {
+ check_slrt_policy_entry(&policy_entry[0], 0, slrt);
+ /* SLRT was measured in tpm_measure_slrt(). */
+ return 1;
+ }
+
+ /* This must be a legacy Multiboot2 boot. */

/*
* MBI policy entry must be the first one, so that measuring order matches
@@ -315,6 +329,7 @@ void __init slaunch_process_drtm_policy(const struct boot_info *bi)
struct slr_table *slrt;
struct slr_entry_policy *policy;
struct slr_policy_entry *policy_entry;
+ int rc;
uint16_t i;
unsigned int measured;

@@ -330,7 +345,6 @@ void __init slaunch_process_drtm_policy(const struct boot_info *bi)

for ( i = measured; i < policy->nr_entries; i++ )
{
- int rc;
uint64_t start = policy_entry[i].entity;
uint64_t size = policy_entry[i].size;

@@ -375,6 +389,58 @@ void __init slaunch_process_drtm_policy(const struct boot_info *bi)

policy_entry[i].flags |= SLR_POLICY_FLAG_MEASURED;
}
+
+ /*
+ * On x86 EFI platforms Xen reads its command-line options and kernel/initrd
+ * from configuration files (several can be chained). The bootloader can't
+ * know the contents of the configuration beforehand without parsing them,
+ * so there are no corresponding policy entries. Instead, measure the
+ * command line and all modules here.
+ */
+ if ( efi_enabled(EFI_BOOT) )
+ {
+#define LOG_DATA(str) (uint8_t *)(str), (sizeof(str) - 1)
+
+ tpm_hash_extend(DRTM_LOC, DRTM_DATA_PCR,
+ (const uint8_t *)bi->cmdline, strlen(bi->cmdline),
+ DLE_EVTYPE_SLAUNCH, LOG_DATA("Xen's command line"));
+
+ for ( i = 0; i < bi->nr_modules; i++ )
+ {
+ const struct boot_module *mod = &bi->mods[i];
+
+ paddr_t string = mod->cmdline_pa;
+ paddr_t start = mod->start;
+ size_t size = mod->size;
+
+ if ( mod->relocated || mod->released )
+ {
+ panic("A module \"%s\" (#%d) was consumed before measurement\n",
+ (const char *)__va(string), i);
+ }
+
+ /*
+ * Measure the module's name separately because the module's command-line
+ * parameters are appended to its name when present.
+ *
+ * 2 MiB is the minimal mapped size and should more than suffice.
+ */
+ rc = slaunch_map_l2(string, 2 * 1024 * 1024);
+ BUG_ON(rc != 0);
+
+ tpm_hash_extend(DRTM_LOC, DRTM_DATA_PCR,
+ __va(string), strlen(__va(string)),
+ DLE_EVTYPE_SLAUNCH, LOG_DATA("MB module string"));
+
+ rc = slaunch_map_l2(start, size);
+ BUG_ON(rc != 0);
+
+ tpm_hash_extend(DRTM_LOC, DRTM_CODE_PCR, __va(start), size,
+ DLE_EVTYPE_SLAUNCH, LOG_DATA("MB module"));
+ }
+
+#undef LOG_DATA
+ }
}

int __init slaunch_map_l2(unsigned long paddr, unsigned long size)
diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
index 143b5681ba..eb4ce6991a 100644
--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -19,6 +19,7 @@
#if EFI_PAGE_SIZE != PAGE_SIZE
# error Cannot use xen/pfn.h here!
#endif
+#include <xen/slr_table.h>
#include <xen/string.h>
#include <xen/stringify.h>
#ifdef CONFIG_X86
@@ -1004,6 +1005,7 @@ static void __init efi_tables(void)
static EFI_GUID __initdata mps_guid = MPS_TABLE_GUID;
static EFI_GUID __initdata smbios_guid = SMBIOS_TABLE_GUID;
static EFI_GUID __initdata smbios3_guid = SMBIOS3_TABLE_GUID;
+ static EFI_GUID __initdata slr_guid = UEFI_SLR_TABLE_GUID;

if ( match_guid(&acpi2_guid, &efi_ct[i].VendorGuid) )
efi.acpi20 = (unsigned long)efi_ct[i].VendorTable;
@@ -1015,6 +1017,8 @@ static void __init efi_tables(void)
efi.smbios = (unsigned long)efi_ct[i].VendorTable;
if ( match_guid(&smbios3_guid, &efi_ct[i].VendorGuid) )
efi.smbios3 = (unsigned long)efi_ct[i].VendorTable;
+ if ( match_guid(&slr_guid, &efi_ct[i].VendorGuid) )
+ efi.slr = (unsigned long)efi_ct[i].VendorTable;
if ( match_guid(&esrt_guid, &efi_ct[i].VendorGuid) )
esrt = (UINTN)efi_ct[i].VendorTable;
}
diff --git a/xen/common/efi/runtime.c b/xen/common/efi/runtime.c
index 7e1fce291d..e1b339f162 100644
--- a/xen/common/efi/runtime.c
+++ b/xen/common/efi/runtime.c
@@ -70,6 +70,7 @@ struct efi __read_mostly efi = {
.mps = EFI_INVALID_TABLE_ADDR,
.smbios = EFI_INVALID_TABLE_ADDR,
.smbios3 = EFI_INVALID_TABLE_ADDR,
+ .slr = EFI_INVALID_TABLE_ADDR,
};

const struct efi_pci_rom *__read_mostly efi_pci_roms;
diff --git a/xen/include/xen/efi.h b/xen/include/xen/efi.h
index 160804e294..614dfce66a 100644
--- a/xen/include/xen/efi.h
+++ b/xen/include/xen/efi.h
@@ -19,6 +19,7 @@ struct efi {
unsigned long acpi20; /* ACPI table (ACPI 2.0) */
unsigned long smbios; /* SM BIOS table */
unsigned long smbios3; /* SMBIOS v3 table */
+ unsigned long slr; /* SLR table */
};

extern struct efi efi;
--
2.49.0

Sergii Dmytruk

Apr 22, 2025, 11:08:06 AM
to xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, trenchbo...@googlegroups.com
From: Michał Żygowski <michal....@3mdeb.com>

Report TXT capabilities so that dom0 can query the Intel TXT or AMD
SKINIT support information using xl dmesg.
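For reference, the chipset check at the end of intel_log_smx_txt() boils down to testing bit 0 of the GETSEC[CAPABILITIES] result word. A minimal sketch of that decode in plain C (the constant matches the patch below; the helper name is made up, and the real leaf must of course be executed with CR4.SMXE set on SMX-capable hardware):

```c
#include <stdbool.h>

/* Intel SDM: GETSEC Capability Result Encoding, bit 0. */
#define GETSEC_CAP_TXT_CHIPSET 1

/* Illustrative helper: decode the EAX value returned by GETSEC[CAPABILITIES]. */
static bool txt_chipset_present(unsigned long getsec_caps)
{
    return getsec_caps & GETSEC_CAP_TXT_CHIPSET;
}
```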

Signed-off-by: Michał Żygowski <michal....@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/cpu/amd.c | 14 +++++++++
xen/arch/x86/cpu/cpu.h | 1 +
xen/arch/x86/cpu/hygon.c | 1 +
xen/arch/x86/cpu/intel.c | 44 ++++++++++++++++++++++++++++
xen/arch/x86/include/asm/intel_txt.h | 5 ++++
5 files changed, 65 insertions(+)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index ce4e1df710..8be135dbc1 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -671,6 +671,19 @@ void amd_log_freq(const struct cpuinfo_x86 *c)
#undef FREQ
}

+void amd_log_skinit(const struct cpuinfo_x86 *c)
+{
+ /* Run only on the BSP so the capability is reported just once. */
+ if ( smp_processor_id() )
+ return;
+
+ printk("CPU: SKINIT capability ");
+ if ( !test_bit(X86_FEATURE_SKINIT, &boot_cpu_data.x86_capability) )
+ printk("not supported\n");
+ else
+ printk("supported\n");
+}
+
void cf_check early_init_amd(struct cpuinfo_x86 *c)
{
if (c == &boot_cpu_data)
@@ -1320,6 +1333,7 @@ static void cf_check init_amd(struct cpuinfo_x86 *c)
check_syscfg_dram_mod_en();

amd_log_freq(c);
+ amd_log_skinit(c);
}

const struct cpu_dev __initconst_cf_clobber amd_cpu_dev = {
diff --git a/xen/arch/x86/cpu/cpu.h b/xen/arch/x86/cpu/cpu.h
index 8be65e975a..5bcf118a93 100644
--- a/xen/arch/x86/cpu/cpu.h
+++ b/xen/arch/x86/cpu/cpu.h
@@ -20,6 +20,7 @@ extern bool detect_extended_topology(struct cpuinfo_x86 *c);

void cf_check early_init_amd(struct cpuinfo_x86 *c);
void amd_log_freq(const struct cpuinfo_x86 *c);
+void amd_log_skinit(const struct cpuinfo_x86 *c);
void amd_init_lfence(struct cpuinfo_x86 *c);
void amd_init_ssbd(const struct cpuinfo_x86 *c);
void amd_init_spectral_chicken(void);
diff --git a/xen/arch/x86/cpu/hygon.c b/xen/arch/x86/cpu/hygon.c
index f7508cc8fc..6ebb8b5fab 100644
--- a/xen/arch/x86/cpu/hygon.c
+++ b/xen/arch/x86/cpu/hygon.c
@@ -85,6 +85,7 @@ static void cf_check init_hygon(struct cpuinfo_x86 *c)
}

amd_log_freq(c);
+ amd_log_skinit(c);
}

const struct cpu_dev __initconst_cf_clobber hygon_cpu_dev = {
diff --git a/xen/arch/x86/cpu/intel.c b/xen/arch/x86/cpu/intel.c
index 6a680ba38d..618bd5540e 100644
--- a/xen/arch/x86/cpu/intel.c
+++ b/xen/arch/x86/cpu/intel.c
@@ -13,6 +13,7 @@
#include <asm/apic.h>
#include <asm/i387.h>
#include <asm/trampoline.h>
+#include <asm/intel_txt.h>

#include "cpu.h"

@@ -571,6 +572,47 @@ static void init_intel_perf(struct cpuinfo_x86 *c)
}
}

+/*
+ * Print out the SMX and TXT capabilities, so that dom0 can determine if the
+ * system is DRTM-capable.
+ */
+static void intel_log_smx_txt(struct cpuinfo_x86 *c)
+{
+ unsigned long cr4_val, getsec_caps;
+
+ /* Run only on the BSP so the SMX/TXT caps are reported just once. */
+ if ( smp_processor_id() )
+ return;
+
+ printk("CPU: SMX capability ");
+ if ( !test_bit(X86_FEATURE_SMX, &boot_cpu_data.x86_capability) )
+ {
+ printk("not supported\n");
+ return;
+ }
+ printk("supported\n");
+
+ /* Can't run GETSEC without VMX and SMX */
+ if ( !test_bit(X86_FEATURE_VMX, &boot_cpu_data.x86_capability) )
+ return;
+
+ cr4_val = read_cr4();
+ if ( !(cr4_val & X86_CR4_SMXE) )
+ write_cr4(cr4_val | X86_CR4_SMXE);
+
+ asm volatile ("getsec\n"
+ : "=a" (getsec_caps)
+ : "a" (GETSEC_CAPABILITIES), "b" (0) :);
+
+ if ( getsec_caps & GETSEC_CAP_TXT_CHIPSET )
+ printk("Chipset supports TXT\n");
+ else
+ printk("Chipset does not support TXT\n");
+
+ if ( !(cr4_val & X86_CR4_SMXE) )
+ write_cr4(cr4_val & ~X86_CR4_SMXE);
+}
+
static void cf_check init_intel(struct cpuinfo_x86 *c)
{
/* Detect the extended topology information if available */
@@ -585,6 +627,8 @@ static void cf_check init_intel(struct cpuinfo_x86 *c)
detect_ht(c);
}

+ intel_log_smx_txt(c);
+
/* Work around errata */
Intel_errata_workarounds(c);

diff --git a/xen/arch/x86/include/asm/intel_txt.h b/xen/arch/x86/include/asm/intel_txt.h
index af997c9da6..76ec651b11 100644
--- a/xen/arch/x86/include/asm/intel_txt.h
+++ b/xen/arch/x86/include/asm/intel_txt.h
@@ -82,6 +82,11 @@
#define TXT_AP_BOOT_CS 0x0030
#define TXT_AP_BOOT_DS 0x0038

+/* EAX value for GETSEC leaf functions. Intel SDM: GETSEC[CAPABILITIES] */
+#define GETSEC_CAPABILITIES 0
+/* Intel SDM: GETSEC Capability Result Encoding */
+#define GETSEC_CAP_TXT_CHIPSET 1
+
#ifndef __ASSEMBLY__

#include <xen/slr_table.h>
--
2.49.0

Sergii Dmytruk

Apr 22, 2025, 11:08:07 AM
to xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, trenchbo...@googlegroups.com
Use slr_entry_amd_info::boot_params_base on AMD with SKINIT to get the
MBI location.

Another thing of interest is the location of the SLRT, which is the
bootloader's data placed right after the SKL.
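The address derivation can be sketched like this (struct layout as in the patch below; function and variable names are made up for illustration):

```c
#include <stdint.h>

/*
 * AMD-defined Secure Loader Block header; the bootloader's data (the
 * SLRT) is placed at bootloader_data_offset from the SLB base.
 */
struct skinit_sl_header
{
    uint16_t skl_entry_point;
    uint16_t length;
    uint8_t  reserved[62];
    uint16_t skl_info_offset;
    uint16_t bootloader_data_offset;
} __attribute__((packed));

/* Illustrative helper: physical address of the SLRT given the SLB base. */
static uint32_t slrt_pa_from_slb(uint32_t slb_base,
                                 const struct skinit_sl_header *hdr)
{
    return slb_base + hdr->bootloader_data_offset;
}
```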

Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/boot/head.S | 38 ++++++++++++++++----
xen/arch/x86/boot/slaunch_early.c | 58 +++++++++++++++++++++++++++++++
2 files changed, 90 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
index 419bf58d5c..3184b6883a 100644
--- a/xen/arch/x86/boot/head.S
+++ b/xen/arch/x86/boot/head.S
@@ -354,10 +354,12 @@ cs32_switch:
jmp *%edi

/*
- * Entry point for TrenchBoot Secure Launch on Intel TXT platforms.
+ * Entry point for TrenchBoot Secure Launch, common to Intel TXT and
+ * AMD Secure Startup, although the initial state differs slightly.
*
+ * On Intel:
* CPU is in 32b protected mode with paging disabled. On entry:
- * - %ebx = %eip = MLE entry point,
+ * - %ebx = %eip = this entry point,
* - stack pointer is undefined,
* - CS is flat 4GB code segment,
* - DS, ES, SS, FS and GS are undefined according to TXT SDG, but this
@@ -375,13 +377,34 @@ cs32_switch:
* - trying to enter real mode results in reset
* - APs must be brought up by MONITOR or GETSEC[WAKEUP], depending on
* which is supported by a given SINIT ACM
+ *
+ * On AMD (as implemented by TrenchBoot's SKL):
+ * CPU is in 32b protected mode with paging disabled. On entry:
+ * - %ebx = %eip = this entry point,
+ * - %ebp holds base address of SKL
+ * - stack pointer is treated as undefined for parity with TXT,
+ * - CS is flat 4GB code segment,
+ * - DS, ES, SS are flat 4GB data segments, but treated as undefined for
+ * parity with TXT.
+ *
+ * Additional restrictions:
+ * - interrupts (including NMIs and SMIs) are disabled and must be
+ * enabled later
+ * - APs must be brought up by SIPI without an INIT
*/
slaunch_stub_entry:
/* Calculate the load base address. */
mov %ebx, %esi
sub $sym_offs(slaunch_stub_entry), %esi

- /* Mark Secure Launch boot protocol and jump to common entry. */
+ /* On AMD, %ebp holds the base address of SLB, save it for later. */
+ mov %ebp, %ebx
+
+ /*
+ * Mark Secure Launch boot protocol and jump to common entry. Note that
+ * all general purpose registers except %ebx and %esi are clobbered
+ * between here and .Lslaunch_proto.
+ */
mov $SLAUNCH_BOOTLOADER_MAGIC, %eax
jmp .Lset_stack

@@ -508,15 +531,18 @@ __start:
sub $8, %esp

push %esp /* pointer to output structure */
+ push %ebx /* Slaunch parameter on AMD */
lea sym_offs(__2M_rwdata_end), %ecx /* end of target image */
lea sym_offs(_start), %edx /* target base address */
mov %esi, %eax /* load base address */
/*
- * slaunch_early_init(load/eax, tgt/edx, tgt_end/ecx, ret/stk) using
- * fastcall calling convention.
+ * slaunch_early_init(load/eax, tgt/edx, tgt_end/ecx,
+ * slaunch/stk, ret/stk)
+ *
+ * Uses fastcall calling convention.
*/
call slaunch_early_init
- add $4, %esp /* pop the fourth parameter */
+ add $8, %esp /* pop last two parameters */

/* Move outputs of slaunch_early_init() from stack into registers. */
pop %eax /* physical MBI address */
diff --git a/xen/arch/x86/boot/slaunch_early.c b/xen/arch/x86/boot/slaunch_early.c
index af8aa29ae0..d53faf8ab0 100644
--- a/xen/arch/x86/boot/slaunch_early.c
+++ b/xen/arch/x86/boot/slaunch_early.c
@@ -7,6 +7,20 @@
#include <xen/slr_table.h>
#include <xen/types.h>
#include <asm/intel_txt.h>
+#include <asm/x86-vendors.h>
+
+/*
+ * The AMD-defined structure layout for the SLB. The last two fields are
+ * SL-specific.
+ */
+struct skinit_sl_header
+{
+ uint16_t skl_entry_point;
+ uint16_t length;
+ uint8_t reserved[62];
+ uint16_t skl_info_offset;
+ uint16_t bootloader_data_offset;
+} __packed;

struct early_init_results
{
@@ -14,9 +28,25 @@ struct early_init_results
uint32_t slrt_pa;
} __packed;

+static bool is_intel_cpu(void)
+{
+ /*
+ * asm/processor.h can't be included in early code, which means neither
+ * cpuid() function nor boot_cpu_data can be used here.
+ */
+ uint32_t eax, ebx, ecx, edx;
+ asm volatile ( "cpuid"
+ : "=a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx)
+ : "0" (0), "c" (0) );
+ return ebx == X86_VENDOR_INTEL_EBX
+ && ecx == X86_VENDOR_INTEL_ECX
+ && edx == X86_VENDOR_INTEL_EDX;
+}
+
void slaunch_early_init(uint32_t load_base_addr,
uint32_t tgt_base_addr,
uint32_t tgt_end_addr,
+ uint32_t slaunch_param,
struct early_init_results *result)
{
void *txt_heap;
@@ -26,6 +56,34 @@ void slaunch_early_init(uint32_t load_base_addr,
struct slr_entry_intel_info *intel_info;
uint32_t size = tgt_end_addr - tgt_base_addr;

+ if ( !is_intel_cpu() )
+ {
+ /*
+ * Not an Intel CPU. Currently the only other option is AMD with SKINIT
+ * and secure-kernel-loader (SKL).
+ */
+ struct slr_entry_amd_info *amd_info;
+ const struct skinit_sl_header *sl_header = (void *)slaunch_param;
+
+ /*
+ * slaunch_param holds a physical address of SLB.
+ * Bootloader's data is SLRT.
+ */
+ result->slrt_pa = slaunch_param + sl_header->bootloader_data_offset;
+ result->mbi_pa = 0;
+
+ slrt = (struct slr_table *)(uintptr_t)result->slrt_pa;
+
+ amd_info = (struct slr_entry_amd_info *)
+ slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_AMD_INFO);
+ /* Basic checks only, SKL checked and consumed the rest. */
+ if ( amd_info == NULL || amd_info->hdr.size != sizeof(*amd_info) )
+ return;
+
+ result->mbi_pa = amd_info->boot_params_base;
+ return;
+ }
+
txt_heap = txt_init();
os_mle = txt_os_mle_data_start(txt_heap);
os_sinit = txt_os_sinit_data_start(txt_heap);
--
2.49.0

Jan Beulich

Apr 22, 2025, 11:23:35 AM
to Sergii Dmytruk, Andrew Cooper, Roger Pau Monné, Lukasz Hawrylko, Daniel P. Smith, Mateusz Mówka, Anthony PERARD, Michal Orzel, Julien Grall, Stefano Stabellini, Marek Marczykowski-Górecki, trenchbo...@googlegroups.com, xen-...@lists.xenproject.org
Just one basic nit right here: In the names of new files you add, please
prefer dashes over underscores.

Jan

Jan Beulich

Apr 22, 2025, 11:36:27 AM
to Sergii Dmytruk, Krystian Hebel, Andrew Cooper, Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné, Stefano Stabellini, trenchbo...@googlegroups.com, xen-...@lists.xenproject.org
On 22.04.2025 17:06, Sergii Dmytruk wrote:
> From: Krystian Hebel <krystia...@3mdeb.com>
>
> The code comes from [1] and is licensed under GPL-2.0 license.
> It's a combination of:
> - include/crypto/sha1.h
> - include/crypto/sha1_base.h
> - lib/crypto/sha1.c
> - crypto/sha1_generic.c
>
> Changes:
> - includes
> - formatting
> - renames and splicing of some trivial functions that are called once
> - dropping of `int` return values (only zero was ever returned)
> - getting rid of references to `struct shash_desc`

Since you did move the code to (largely) Xen style, a few further requests
in that direction:

> --- /dev/null
> +++ b/xen/include/xen/sha1.h
> @@ -0,0 +1,12 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#ifndef __XEN_SHA1_H
> +#define __XEN_SHA1_H
> +
> +#include <xen/inttypes.h>
> +
> +#define SHA1_DIGEST_SIZE 20
> +
> +void sha1_hash(const u8 *data, unsigned int len, u8 *out);

uint8_t please in both instances here, and more generally {,u}int<N>_t
in place of {s,u}<N>.

> --- a/xen/lib/Makefile
> +++ b/xen/lib/Makefile
> @@ -38,6 +38,7 @@ lib-y += strtoll.o
> lib-y += strtoul.o
> lib-y += strtoull.o
> lib-$(CONFIG_X86) += x86-generic-hweightl.o
> +lib-$(CONFIG_X86) += sha1.o

Please observe alphabetic sorting.
The # of pre-processor directives generally wants to be in the first column.

> +#elif defined(CONFIG_ARM)
> + #define setW(x, val) do { W(x) = (val); __asm__("":::"memory"); } while ( 0 )

__asm__ ( "" ::: "memory" );

as far as style goes. But then I see no need to open-code barrier().

> +#else
> + #define setW(x, val) (W(x) = (val))
> +#endif
> +
> +/* This "rolls" over the 512-bit array */
> +#define W(x) (array[(x) & 15])
> +
> +/*
> + * Where do we get the source from? The first 16 iterations get it from
> + * the input data, the next mix it from the 512-bit array.
> + */
> +#define SHA_SRC(t) get_unaligned_be32((uint32_t *)data + t)
> +#define SHA_MIX(t) rol32(W(t + 13) ^ W(t + 8) ^ W(t + 2) ^ W(t), 1)

I fear Misra isn't going to like the lack of parenthesization of macro
arguments used in expressions. This looks to be an issue with most
macros here.
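The classic failure mode, for illustration (macro names made up):

```c
/*
 * Without parentheses around the macro argument, operator precedence
 * inside the expansion silently changes the result.
 */
#define DOUBLE_BAD(x)  (x * 2)      /* argument used unparenthesized */
#define DOUBLE_GOOD(x) ((x) * 2)    /* what MISRA (Rule 20.7) expects */

/* DOUBLE_BAD(1 + 2) expands to (1 + 2 * 2) == 5, not the expected 6. */
```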
Please respect line length restrictions. The use of plain int here also looks
questionable, as just from the name that parameter looks like it can't have a
negative argument passed for it. This will want adjusting elsewhere as well.

> +/**
> + * sha1_transform - single block SHA1 transform (deprecated)
> + *
> + * @digest: 160 bit digest to update
> + * @data: 512 bits of data to hash
> + * @array: 16 words of workspace (see note)
> + *
> + * This function executes SHA-1's internal compression function. It updates the
> + * 160-bit internal state (@digest) with a single 512-bit data block (@data).
> + *
> + * Don't use this function. SHA-1 is no longer considered secure. And even if
> + * you do have to use SHA-1, this isn't the correct way to hash something with
> + * SHA-1 as this doesn't handle padding and finalization.
> + *
> + * Note: If the hash is security sensitive, the caller should be sure
> + * to clear the workspace. This is left to the caller to avoid
> + * unnecessary clears between chained hashing operations.
> + */
> +void sha1_transform(uint32_t *digest, const uint8_t *data, uint32_t *array)

You add no declaration of this function in the header. Should it be static?
This would also help with the "Don't use ..." part of the comment.

Jan

Andrew Cooper

Apr 22, 2025, 11:37:15 AM
to Sergii Dmytruk, xen-...@lists.xenproject.org, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini, trenchbo...@googlegroups.com
On 22/04/2025 4:06 pm, Sergii Dmytruk wrote:
> xen/include/xen/sha256.h | 12 ++
> xen/lib/Makefile | 1 +
> xen/lib/sha256.c | 238 +++++++++++++++++++++++++++++++++++++++
> 3 files changed, 251 insertions(+)
> create mode 100644 xen/include/xen/sha256.h
> create mode 100644 xen/lib/sha256.c

I added SHA2 a little while back, derived from the Trenchboot tree.

See 372af524411f5a013bcb0b117073d8d07c026563 (and a few follow-up fixes).

It should have everything needed, but we can adjust if necessary.

We need to integrate SHA1 in a similar way.  Xen now has various MISRA
requirements to adhere to, which requires some adjustments, but I can
advise if it isn't clear from the sha2 work I already did.

~Andrew

Jan Beulich

Apr 22, 2025, 11:39:05 AM
to Sergii Dmytruk, Andrew Cooper, Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné, Stefano Stabellini, trenchbo...@googlegroups.com, xen-...@lists.xenproject.org
On 22.04.2025 17:06, Sergii Dmytruk wrote:
> The code comes from [1] and is licensed under GPL-2.0 or later version
> of the license. It's a combination of:
> - include/crypto/sha2.h
> - include/crypto/sha256_base.h
> - lib/crypto/sha256.c
> - crypto/sha256_generic.c
>
> Changes:
> - includes
> - formatting
> - renames and splicing of some trivial functions that are called once
> - dropping of `int` return values (only zero was ever returned)
> - getting rid of references to `struct shash_desc`
>
> [1]: https://github.com/torvalds/linux/tree/afdab700f65e14070d8ab92175544b1c62b8bf03
>
> Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
> Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>

Most comments just give on patch 09 apply here as well.

Jan

Andrew Cooper

Apr 22, 2025, 1:14:35 PM
to Sergii Dmytruk, xen-...@lists.xenproject.org, Jan Beulich, Roger Pau Monné, Lukasz Hawrylko, Daniel P. Smith, Mateusz Mówka, Anthony PERARD, Michal Orzel, Julien Grall, Stefano Stabellini, Marek Marczykowski-Górecki, trenchbo...@googlegroups.com
On 22/04/2025 4:06 pm, Sergii Dmytruk wrote:
> The aim of the [TrenchBoot] project is to provide an implementation of
> DRTM that is generic enough to cover various use cases:
> - Intel TXT and AMD SKINIT on x86 CPUs
> - legacy and UEFI boot
> - TPM1.2 and TPM2.0
> - (in the future) DRTM on Arm CPUs
>
> DRTM is a version of a measured launch that starts on request rather
> than at the start of a boot cycle. One of its advantages is in not
> including the firmware in the chain of trust.
>
> Xen already supports DRTM via [tboot] which targets Intel TXT only.
> tboot employs encapsulates some of the DRTM details within itself while
> with TrenchBoot Xen (or Linux) is meant to be a self-contained payload
> for a TrenchBoot-enabled bootloader (think GRUB). The one exception is
> that UEFI case requires calling back into bootloader to initiate DRTM,
> which is necessary to give Xen a chance of querying all the information
> it needs from the firmware before performing DRTM start.
>
> From reading the above tboot might seem like a more abstracted, but the
> reality is that the payload needs to have DRTM-specific knowledge either
> way. TrenchBoot in principle allows coming up with independent
> implementations of bootloaders and payloads that are compatible with
> each other.
>
> The "x86/boot: choose AP stack based on APIC ID" patch is shared with
> [Parallelize AP bring-up] series which is required here because Intel
> TXT always releases all APs simultaneously. The rest of the patches are
> unique.

I've stripped out the sha2 patch and fixed up to use the existing sha2,
then kicked off some CI testing:

https://gitlab.com/xen-project/hardware/xen-staging/-/pipelines/1780285393
https://cirrus-ci.com/build/5452335868018688

When the dust has settled, I'll talk you through the failures.

~Andrew

Sergii Dmytruk

Apr 22, 2025, 2:33:29 PM
to Jan Beulich, Andrew Cooper, Roger Pau Monné, Lukasz Hawrylko, Daniel P. Smith, Mateusz Mówka, Anthony PERARD, Michal Orzel, Julien Grall, Stefano Stabellini, Marek Marczykowski-Górecki, trenchbo...@googlegroups.com, xen-...@lists.xenproject.org
On Tue, Apr 22, 2025 at 05:23:30PM +0200, Jan Beulich wrote:
> Just one basic nit right here: In the names of new files you add, please
> prefer dashes over underscores.

I wasn't aware of this preference; it will be updated in the next version.

> Jan

Nicola Vetrini

Apr 22, 2025, 3:01:09 PM
to Andrew Cooper, Sergii Dmytruk, xen-...@lists.xenproject.org, Jan Beulich, Roger Pau Monné, Lukasz Hawrylko, Daniel P. Smith, Mateusz Mówka, Anthony PERARD, Michal Orzel, Julien Grall, Stefano Stabellini, Marek Marczykowski-Górecki, trenchbo...@googlegroups.com

Andrew Cooper

Apr 22, 2025, 4:23:07 PM
to Sergii Dmytruk, xen-...@lists.xenproject.org, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini, trenchbo...@googlegroups.com
On 22/04/2025 4:06 pm, Sergii Dmytruk wrote:
> diff --git a/xen/include/xen/slr_table.h b/xen/include/xen/slr_table.h
> new file mode 100644
> index 0000000000..e9dbac5d0a
> --- /dev/null
> +++ b/xen/include/xen/slr_table.h
> @@ -0,0 +1,274 @@
> +/* SPDX-License-Identifier: GPL-3.0-or-later */

I'm sorry, but we cannot accept this submission.

Xen is GPL-2-only, and can only accept source code compatible with this
license.  Everything else in this series appears to be compatible (and
therefore is fine), but this patch is not.

~Andrew

ross.ph...@oracle.com

Apr 22, 2025, 4:46:44 PM
to Sergii Dmytruk, xen-...@lists.xenproject.org, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini, trenchbo...@googlegroups.com
On 4/22/25 8:06 AM, Sergii Dmytruk wrote:
> The file provides constants, structures and several helper functions for
> parsing SLRT.
>
> Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
> ---
> xen/include/xen/slr_table.h | 274 ++++++++++++++++++++++++++++++++++++
> 1 file changed, 274 insertions(+)
> create mode 100644 xen/include/xen/slr_table.h
>
> diff --git a/xen/include/xen/slr_table.h b/xen/include/xen/slr_table.h
> new file mode 100644
> index 0000000000..e9dbac5d0a
> --- /dev/null
> +++ b/xen/include/xen/slr_table.h
> @@ -0,0 +1,274 @@
> +/* SPDX-License-Identifier: GPL-3.0-or-later */
> +
> +/*
> + * Copyright (C) 2023 Oracle and/or its affiliates.
> + *
> + * Secure Launch Resource Table definitions
> + */
> +
> +#ifndef _SLR_TABLE_H
> +#define _SLR_TABLE_H
> +
> +#include <xen/types.h>
> +
> +#define UEFI_SLR_TABLE_GUID \
> + { 0x877a9b2a, 0x0385, 0x45d1, { 0xa0, 0x34, 0x9d, 0xac, 0x9c, 0x9e, 0x56, 0x5f } }
> +
> +/* SLR table header values */
> +#define SLR_TABLE_MAGIC          0x4452544d
> +#define SLR_TABLE_REVISION       1
> +
> +/* Current revisions for the policy and UEFI config */
> +#define SLR_POLICY_REVISION      1
> +#define SLR_UEFI_CONFIG_REVISION 1
> +
> +/* SLR defined architectures */
> +#define SLR_INTEL_TXT            1
> +#define SLR_AMD_SKINIT           2
> +
> +/* SLR defined bootloaders */
> +#define SLR_BOOTLOADER_INVALID   0
> +#define SLR_BOOTLOADER_GRUB      1
> +
> +/* Log formats */
> +#define SLR_DRTM_TPM12_LOG       1
> +#define SLR_DRTM_TPM20_LOG       2
> +
> +/* DRTM Policy Entry Flags */
> +#define SLR_POLICY_FLAG_MEASURED 0x1
> +#define SLR_POLICY_IMPLICIT_SIZE 0x2
> +
> +/* Array Lengths */
> +#define TPM_EVENT_INFO_LENGTH     32
> +#define TXT_VARIABLE_MTRRS_LENGTH 32
> +
> +/* Tags */
> +#define SLR_ENTRY_INVALID        0x0000
> +#define SLR_ENTRY_DL_INFO        0x0001
> +#define SLR_ENTRY_LOG_INFO       0x0002
> +#define SLR_ENTRY_DRTM_POLICY    0x0003
> +#define SLR_ENTRY_INTEL_INFO     0x0004
> +#define SLR_ENTRY_AMD_INFO       0x0005
> +#define SLR_ENTRY_ARM_INFO       0x0006
> +#define SLR_ENTRY_UEFI_INFO      0x0007
> +#define SLR_ENTRY_UEFI_CONFIG    0x0008
> +#define SLR_ENTRY_END            0xffff
> +
> +/* Entity Types */
> +#define SLR_ET_UNSPECIFIED       0x0000
> +#define SLR_ET_SLRT              0x0001
> +#define SLR_ET_BOOT_PARAMS       0x0002
> +#define SLR_ET_SETUP_DATA        0x0003
> +#define SLR_ET_CMDLINE           0x0004
> +#define SLR_ET_UEFI_MEMMAP       0x0005
> +#define SLR_ET_RAMDISK           0x0006
> +#define SLR_ET_MULTIBOOT2_INFO   0x0007
> +#define SLR_ET_MULTIBOOT2_MODULE 0x0008
> +#define SLR_ET_TXT_OS2MLE        0x0010
> +#define SLR_ET_UNUSED            0xffff
> +
> +/*
> + * Primary SLR Table Header
> + */
> +struct slr_table
> +{
> +    uint32_t magic;
> +    uint16_t revision;
> +    uint16_t architecture;
> +    uint32_t size;
> +    uint32_t max_size;
> +    /* entries[] */
> +} __packed;
> +
> +/*
> + * Common SLRT Table Header
> + */
> +struct slr_entry_hdr
> +{
> +    uint32_t tag;
> +    uint32_t size;
> +} __packed;
> +
> +/*
> + * Boot loader context
> + */
> +struct slr_bl_context
> +{
> +    uint16_t bootloader;
> +    uint16_t reserved[3];
> +    uint64_t context;
> +} __packed;
> +
> +/*
> + * Prototype of a function pointed to by slr_entry_dl_info::dl_handler.
> + */
> +typedef void (*dl_handler_func)(struct slr_bl_context *bl_context);
> +
> +/*
> + * DRTM Dynamic Launch Configuration
> + */
> +struct slr_entry_dl_info
> +{
> +    struct slr_entry_hdr hdr;
> +    uint64_t dce_size;
> +    uint64_t dce_base;
> +    uint64_t dlme_size;
> +    uint64_t dlme_base;
> +    uint64_t dlme_entry;
> +    struct slr_bl_context bl_context;
> +    uint64_t dl_handler;
> +} __packed;
> +
> +/*
> + * TPM Log Information
> + */
> +struct slr_entry_log_info
> +{
> +    struct slr_entry_hdr hdr;
> +    uint16_t format;
> +    uint16_t reserved;
> +    uint32_t size;
> +    uint64_t addr;
> +} __packed;
> +
> +/*
> + * DRTM Measurement Entry
> + */
> +struct slr_policy_entry
> +{
> +    uint16_t pcr;
> +    uint16_t entity_type;
> +    uint16_t flags;
> +    uint16_t reserved;
> +    uint64_t size;
> +    uint64_t entity;
> +    char evt_info[TPM_EVENT_INFO_LENGTH];
> +} __packed;
> +
> +/*
> + * DRTM Measurement Policy
> + */
> +struct slr_entry_policy
> +{
> +    struct slr_entry_hdr hdr;
> +    uint16_t reserved[2];
> +    uint16_t revision;
> +    uint16_t nr_entries;
> +    struct slr_policy_entry policy_entries[];
> +} __packed;
> +
> +/*
> + * Secure Launch defined MTRR saving structures
> + */
> +struct slr_txt_mtrr_pair
> +{
> +    uint64_t mtrr_physbase;
> +    uint64_t mtrr_physmask;
> +} __packed;
> +
> +struct slr_txt_mtrr_state
> +{
> +    uint64_t default_mem_type;
> +    uint64_t mtrr_vcnt;
> +    struct slr_txt_mtrr_pair mtrr_pair[TXT_VARIABLE_MTRRS_LENGTH];
> +} __packed;
> +
> +/*
> + * Intel TXT Info table
> + */
> +struct slr_entry_intel_info
> +{
> +    struct slr_entry_hdr hdr;
> +    uint64_t boot_params_base;
> +    uint64_t txt_heap;
> +    uint64_t saved_misc_enable_msr;
> +    struct slr_txt_mtrr_state saved_bsp_mtrrs;
> +} __packed;
> +
> +/*
> + * AMD SKINIT Info table
> + */
> +struct slr_entry_amd_info
> +{
> +    struct slr_entry_hdr hdr;
> +    uint64_t next;
> +    uint32_t type;
> +    uint32_t len;
> +    uint64_t slrt_size;
> +    uint64_t slrt_base;
> +    uint64_t boot_params_base;
> +    uint16_t psp_version;
> +    uint16_t reserved[3];
> +} __packed;
> +
> +/*
> + * ARM DRTM Info table
> + */
> +struct slr_entry_arm_info
> +{
> +    struct slr_entry_hdr hdr;
> +} __packed;

You can probably ditch this for now.

> +
> +/*
> + * UEFI config measurement entry
> + */
> +struct slr_uefi_cfg_entry
> +{
> +    uint16_t pcr;
> +    uint16_t reserved;
> +    uint32_t size;
> +    uint64_t cfg; /* address or value */
> +    char evt_info[TPM_EVENT_INFO_LENGTH];
> +} __packed;
> +
> +struct slr_entry_uefi_config
> +{
> +    struct slr_entry_hdr hdr;
> +    uint16_t reserved[2];
> +    uint16_t revision;
> +    uint16_t nr_entries;
> +    struct slr_uefi_cfg_entry uefi_cfg_entries[];
> +} __packed;
> +
> +static inline void *
> +slr_end_of_entries(struct slr_table *table)
> +{
> +    return (uint8_t *)table + table->size;
> +}
> +
> +static inline struct slr_entry_hdr *
> +slr_next_entry(struct slr_table *table, struct slr_entry_hdr *curr)
> +{
> +    struct slr_entry_hdr *next = (struct slr_entry_hdr *)
> +        ((uint8_t *)curr + curr->size);
> +
> +    if ( (void *)next >= slr_end_of_entries(table) )
> +        return NULL;
> +    if ( next->tag == SLR_ENTRY_END )
> +        return NULL;
> +
> +    return next;
> +}
> +
> +static inline struct slr_entry_hdr *
> +slr_next_entry_by_tag(struct slr_table *table,
> +                      struct slr_entry_hdr *entry,
> +                      uint16_t tag)
> +{
> +    if ( !entry ) /* Start from the beginning */
> +        entry = (struct slr_entry_hdr *)((uint8_t *)table + sizeof(*table));
> +
> +    for ( ; ; )
> +    {
> +        if ( entry->tag == tag )
> +            return entry;
> +
> +        entry = slr_next_entry(table, entry);
> +        if ( !entry )
> +            return NULL;
> +    }
> +
> +    return NULL;
> +}

I am surprised you did not need the slr_add_entry() function. How do you
add entries to the SLRT?

Thanks
Ross

> +
> +#endif /* _SLR_TABLE_H */

Sergii Dmytruk

Apr 22, 2025, 6:15:22 PM
to Jan Beulich, Krystian Hebel, Andrew Cooper, Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné, Stefano Stabellini, trenchbo...@googlegroups.com, xen-...@lists.xenproject.org
On Tue, Apr 22, 2025 at 05:36:22PM +0200, Jan Beulich wrote:
> On 22.04.2025 17:06, Sergii Dmytruk wrote:
> > From: Krystian Hebel <krystia...@3mdeb.com>
> >
> > The code comes from [1] and is licensed under GPL-2.0 license.
> > It's a combination of:
> > - include/crypto/sha1.h
> > - include/crypto/sha1_base.h
> > - lib/crypto/sha1.c
> > - crypto/sha1_generic.c
> >
> > Changes:
> > - includes
> > - formatting
> > - renames and splicing of some trivial functions that are called once
> > - dropping of `int` return values (only zero was ever returned)
> > - getting rid of references to `struct shash_desc`
>
> Since you did move the code to (largely) Xen style, a few further requests
> in that direction:

Rewriting the patch due to a comment by Andrew Cooper obsoletes most of
your comments, but thanks for them anyway.

>
> Jan

Sergii Dmytruk

Apr 22, 2025, 6:20:37 PM
to Andrew Cooper, xen-...@lists.xenproject.org, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini, trenchbo...@googlegroups.com
Oh, I actually checked for existing hash implementations before sending
the patches... Need to remove untracked files which made it hard to see
the new file.

Thanks, I think I figured out the modifications you've made for SHA256
and am almost done getting rid of macros for SHA1.

Andrew Cooper

Apr 23, 2025, 9:38:42 AM
to Sergii Dmytruk, xen-...@lists.xenproject.org, Jan Beulich, Roger Pau Monné, Lukasz Hawrylko, Daniel P. Smith, Mateusz Mówka, Anthony PERARD, Michal Orzel, Julien Grall, Stefano Stabellini, Marek Marczykowski-Górecki, trenchbo...@googlegroups.com
On 22/04/2025 6:14 pm, Andrew Cooper wrote:
> I've stripped out the sha2 patch and fixed up to use the existing sha2,
> then kicked off some CI testing:
>
> https://gitlab.com/xen-project/hardware/xen-staging/-/pipelines/1780285393
> https://cirrus-ci.com/build/5452335868018688
>
> When the dust has settled, I'll talk you through the failures.

And here we go.  Interestingly, the FreeBSD testing was entirely happy,
and that is the rare way around.

For Gitlab, there are several areas.

First, for MISRA.  In the job logs, you want the "Browse current
reports:" link which will give you full details, but it's all pretty
simple stuff.

kbl-suspend-x86-64-gcc-debug is a real S3 test on KabyLake hardware,
which appears to have gone to sleep and never woken up.  (More likely,
crashed on wakeup before we got the console up).  The AlderLake
equivalent test seems to be happy, as well as the AMD ones.

For the build issues, there are quite a few.

debian-12-x86_64-gcc-ibt is special, using an out-of-tree patch for
CET-IBT safety.  tl;dr function pointer callees need a cf_check
annotation.  But, all the failures here are from sha1, and from bits
which I don't think want to survive into the final form.

Other common failures seem to be:

    # take image offset into account
    arch/x86/efi/fixmlehdr xen.efi 0x200000
    Failed to find MLE header in xen.efi
    arch/x86/Makefile:220: recipe for target 'xen.efi' failed
    make[3]: *** [xen.efi] Error 1

~Andrew

Sergii Dmytruk

Apr 23, 2025, 10:40:57 AM
to Andrew Cooper, xen-...@lists.xenproject.org, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini, ross.ph...@oracle.com, trenchbo...@googlegroups.com
I think the license comes from GRUB's version, which is GPL-3-or-later,
while the original Linux header file is GPL-2. The Linux patches are
really the source here. I don't think anything prevents use of the
header under GPL-2, so I'll change the license in v2. Adding Ross
Philipson to CC as the original author of both the Linux and GRUB
versions, just in case.

Sergii Dmytruk

Apr 23, 2025, 10:48:02 AM
to ross.ph...@oracle.com, xen-...@lists.xenproject.org, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini, trenchbo...@googlegroups.com
On Tue, Apr 22, 2025 at 01:46:14PM -0700, ross.ph...@oracle.com wrote:
> > +
> > +/*
> > + * ARM DRTM Info table
> > + */
> > +struct slr_entry_arm_info
> > +{
> > +    struct slr_entry_hdr hdr;
> > +} __packed;
>
> You can probably ditch this for now.

Right, it has no value at this point.

> I am surprised you did not need the slr_add_entry() function. How do you add
> entries to the SLRT?

Xen doesn't add any SLRT entries. It's also the final consumer of the
SLRT, at least at the moment, so there's no need to update something
that won't be used again.

> Thanks
> Ross

ross.ph...@oracle.com

Apr 23, 2025, 1:33:29 PM
to Sergii Dmytruk, xen-...@lists.xenproject.org, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini, trenchbo...@googlegroups.com
Ahh right. The Linux version allows the policy to be updated by the EFI
stub but you are not doing that.

Thanks
Ross

>
>> Thanks
>> Ross

Sergii Dmytruk

Apr 23, 2025, 2:46:11 PM
to Andrew Cooper, xen-...@lists.xenproject.org, Jan Beulich, Roger Pau Monné, Lukasz Hawrylko, Daniel P. Smith, Mateusz Mówka, Anthony PERARD, Michal Orzel, Julien Grall, Stefano Stabellini, Marek Marczykowski-Górecki, trenchbo...@googlegroups.com
On Wed, Apr 23, 2025 at 02:38:37PM +0100, Andrew Cooper wrote:
> On 22/04/2025 6:14 pm, Andrew Cooper wrote:
> > I've stripped out the sha2 patch and fixed up to use the existing sha2,
> > then kicked off some CI testing:
> >
> > https://gitlab.com/xen-project/hardware/xen-staging/-/pipelines/1780285393
> > https://cirrus-ci.com/build/5452335868018688
> >
> > When the dust has settled, I'll talk you through the failures.
>
> And here we go.  Interestingly, the FreeBSD testing was entirely happy,
> and that is the rare way around.
>
> For Gitlab, there are several areas.
>
> First, for MISRA.  In the job logs, you want the "Browse current
> reports:" link which will give you full details, but it's all pretty
> simple stuff.

Thanks, but that link gives me a list of 5096 failures all over the code
base. Is there any way to see a diff against master?

> kbl-suspend-x86-64-gcc-debug is a real S3 test on KabyLake hardware,
> which appears to have gone to sleep and never woken up.  (More likely,
> crashed on wakeup before we got the console up).  The AlderLake
> equivalent test seems to be happy, as well as the AMD ones.

Hm, not sure what that could be, but will try to reproduce/guess.

> For the build issues, there are quite a few.
>
> debian-12-x86_64-gcc-ibt is special, using an out-of-tree patch for
> CET-IBT safety.  tl;dr function pointer callees need a cf_check
> annotation.  But, all the failures here are from sha1, and from bits
> which I don't think want to survive into the final form.

That stuff is gone and the build should succeed the next time.

> Other common failures seem to be:
>
>     # take image offset into account
>     arch/x86/efi/fixmlehdr xen.efi 0x200000
>     Failed to find MLE header in xen.efi
>     arch/x86/Makefile:220: recipe for target 'xen.efi' failed
>     make[3]: *** [xen.efi] Error 1
>
> ~Andrew

That seems to be the only reason behind the rest of the build failures.
I was able to reproduce the failure in a Fedora 37 Docker container.
Searching for the header in 8 KiB instead of 4 KiB fixes it. It looks
like the large default alignment of some toolchains pushes `head.S` to
a 4 KiB offset.

Daniel P. Smith

Apr 23, 2025, 2:59:43 PM
to Sergii Dmytruk, xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, Lukasz Hawrylko, Mateusz Mówka, Anthony PERARD, Michal Orzel, Julien Grall, Stefano Stabellini, Marek Marczykowski-Górecki, trenchbo...@googlegroups.com
Sergii,

Thanks so much to you and the team over at 3mdeb for taking the lead on
getting Secure Launch written for Xen.

One quick comment: Secure Launch will eventually support other
architectures, and we really should not let the maintenance fall on the
x86 maintainers, or eventually on "the rest". I would like to suggest
adding an entry to the MAINTAINERS file for "TrenchBoot Secure Launch"
that lists any new files being introduced for Secure Launch. When
adding the section to MAINTAINERS, I would kindly request that I be
included as a maintainer and Ross Philipson as a reviewer.

V/r,
Daniel P. Smith

Sergii Dmytruk

Apr 23, 2025, 5:53:16 PM
to Nicola Vetrini, Andrew Cooper, xen-...@lists.xenproject.org, Jan Beulich, Roger Pau Monné, Lukasz Hawrylko, Daniel P. Smith, Mateusz Mówka, Anthony PERARD, Michal Orzel, Julien Grall, Stefano Stabellini, Marek Marczykowski-Górecki, trenchbo...@googlegroups.com
On Wed, Apr 23, 2025 at 10:11:35PM +0200, Nicola Vetrini wrote:
> On 2025-04-23 20:45, Sergii Dmytruk wrote:
> > On Wed, Apr 23, 2025 at 02:38:37PM +0100, Andrew Cooper wrote:
> > > On 22/04/2025 6:14 pm, Andrew Cooper wrote:
> > > > I've stripped out the sha2 patch and fixed up to use the existing sha2,
> > > > then kicked off some CI testing:
> > > >
> > > > https://gitlab.com/xen-project/hardware/xen-staging/-/pipelines/1780285393
> > > > https://cirrus-ci.com/build/5452335868018688
> > > >
> > > > When the dust has settled, I'll talk you through the failures.
> > >
> > > And here we go.  Interestingly, the FreeBSD testing was entirely
> > > happy,
> > > and that is the rare way around.
> > >
> > > For Gitlab, there are several areas.
> > >
> > > First, for MISRA.  In the job logs, you want the "Browse current
> > > reports:" link which will give you full details, but it's all pretty
> > > simple stuff.
> >
> > Thanks, but that link gives me a list of 5096 failures all over the code
> > base. Is there any way to see a diff against master?
> >
>
> Hi,
>
> yes, you can define selections of violations introduced on previously clean
> guidelines by clicking on the "ECLAIR" button on the upper right. See [1]
> which is the result of defining the "clean_added" selection shown in the
> attached screenshot. If you have other questions please let me know.

Hi,

Not sure why, but using "added" left 4861 violations. Picking `_NO_TAG`
instead seemingly left only new violations. Maybe that's something
specific to this particular run. Either way, I can go through the list
now and know how to adjust it. Thank you for the instructions.

> Thanks,
> Nicola
>
> [1] https://saas.eclairit.com:3787/fs/var/local/eclair/xen-project.ecdf/xen-project/hardware/xen-staging/ECLAIR_normal/andrew/tb-v1.1/ARM64/9791028027/PROJECT.ecd;/by_service.html#service&kind{"select":true,"selection":{"hiddenAreaKinds":[],"hiddenSubareaKinds":[],"show":true,"selector":{"enabled":true,"negated":false,"kind":1,"children":[{"enabled":true,"negated":false,"kind":0,"domain":"clean","inputs":[{"enabled":true,"text":"added"}]}]}}}

Andrew Cooper

Apr 23, 2025, 6:43:22 PM
to Sergii Dmytruk, xen-...@lists.xenproject.org, Jan Beulich, Roger Pau Monné, Lukasz Hawrylko, Daniel P. Smith, Mateusz Mówka, Anthony PERARD, Michal Orzel, Julien Grall, Stefano Stabellini, Marek Marczykowski-Górecki, trenchbo...@googlegroups.com
On 23/04/2025 7:45 pm, Sergii Dmytruk wrote:
> On Wed, Apr 23, 2025 at 02:38:37PM +0100, Andrew Cooper wrote:
>> On 22/04/2025 6:14 pm, Andrew Cooper wrote:
>>> I've stripped out the sha2 patch and fixed up to use the existing sha2,
>>> then kicked off some CI testing:
>>>
>>> https://gitlab.com/xen-project/hardware/xen-staging/-/pipelines/1780285393
>>> https://cirrus-ci.com/build/5452335868018688
>>>
>>> When the dust has settled, I'll talk you through the failures.
>> And here we go.  Interestingly, the FreeBSD testing was entirely happy,
>> and that is the rare way around.
>>
>> For Gitlab, there are several areas.
>>
>> First, for MISRA.  In the job logs, you want the "Browse current
>> reports:" link which will give you full details, but it's all pretty
>> simple stuff.
> Thanks, but that link gives me a list of 5096 failures all over the code
> base. Is there any way to see a diff against master?

No, sadly not.  What you see is a mix of the blocking issues and the "we
want to see these so we can work on them".

Immediately under the link is the one-line tl;dr.  For ARM, it's just a
single:

Failure: 1 regressions found for clean guidelines
  service MC3A2.R7.2: (required) A `u' or `U' suffix shall be applied to
all integer constants that are represented in an unsigned type:
   violation: 1

Clicking through into the R7.2 analysis shows
https://saas.eclairit.com:3787/fs/var/local/eclair/xen-project.ecdf/xen-project/hardware/xen-staging/ECLAIR_normal/andrew/tb-v1.1/ARM64/9791028027/PROJECT.ecd;/by_service/MC3A2.R7.2.html

This violation is shared with x86 because it's a header pulled into a
common file.

For x86, the list is rather longer.  You've got:

6x D1.1
2x D4.14
1x R5.3
116x R7.2
1x R7.3
12x R8.3
7x R8.4
1x R11.9
87x R20.7

These are the blocking directives/rules.  Others which you see in the
overall report are non-blocking.

>
>> kbl-suspend-x86-64-gcc-debug is a real S3 test on KabyLake hardware,
>> which appears to have gone to sleep and never woken up.  (More likely,
>> crashed on wakeup before we got the console up).  The AlderLake
>> equivalent test seems to be happy, as well as the AMD ones.
> Hm, not sure what that could be, but will try to reproduce/guess.

KBL is unreliable in one specific way, but not with these symptoms.

I reran the suspend test, and it failed in the same way.  I think it's a
deterministic bug.

I can probably dig out my emergency serial debugging patches for S3 if
you want?

>> Other common failures seem to be:
>>
>>     # take image offset into account
>>     arch/x86/efi/fixmlehdr xen.efi 0x200000
>>     Failed to find MLE header in xen.efi
>>     arch/x86/Makefile:220: recipe for target 'xen.efi' failed
>>     make[3]: *** [xen.efi] Error 1
>>
>> ~Andrew
> That seems to be the only reason behind the rest of build failures.
> I was able to reproduce the failure in Fedora 37 docker. Searching for
> the header in 8KiB instead of 4KiB fixes it. Looks like large default
> alignment of some toolchains pushes `head.S` to 4 KiB offset.

FYI, you can access all the Xen containers with:

CONTAINER=foo ./automation/scripts/containerize

in the xen.git tree.

Alignment that large is unexpected, and I suspect we want to fix it.  Is
it pre-existing, or something introduced by your series?

~Andrew

Sergii Dmytruk

unread,
Apr 24, 2025, 2:14:27 PMApr 24
to Nicola Vetrini, Andrew Cooper, xen-...@lists.xenproject.org, Jan Beulich, Roger Pau Monné, Lukasz Hawrylko, Daniel P. Smith, Mateusz Mówka, Anthony PERARD, Michal Orzel, Julien Grall, Stefano Stabellini, Marek Marczykowski-Górecki, trenchbo...@googlegroups.com
On Thu, Apr 24, 2025 at 12:54:41PM +0200, Nicola Vetrini wrote:
> I'm not sure I fully understand this. This is what I see on x86: the ones
> still shown are those rules where the CI is blocking and new issues have
> been introduced by that pipeline run (of course a different pipeline may
> yield different results). Only new violations are blocking, so that is why I
> filtered out the rest in this case.

My bad, I still had "Hide" instead of "Show" in the selection. The other
comboboxes are also hard to see, but I wasn't even looking for one in
the title. Thanks again.

Sergii Dmytruk

unread,
Apr 24, 2025, 2:47:45 PMApr 24
to Andrew Cooper, xen-...@lists.xenproject.org, Jan Beulich, Roger Pau Monné, Lukasz Hawrylko, Daniel P. Smith, Mateusz Mówka, Anthony PERARD, Michal Orzel, Julien Grall, Stefano Stabellini, Marek Marczykowski-Górecki, trenchbo...@googlegroups.com
On Wed, Apr 23, 2025 at 11:43:15PM +0100, Andrew Cooper wrote:
> On 23/04/2025 7:45 pm, Sergii Dmytruk wrote:
> > On Wed, Apr 23, 2025 at 02:38:37PM +0100, Andrew Cooper wrote:
> >> On 22/04/2025 6:14 pm, Andrew Cooper wrote:
> >>> I've stripped out the sha2 patch and fixed up to use the existing sha2,
> >>> then kicked off some CI testing:
> >>>
> >>> https://gitlab.com/xen-project/hardware/xen-staging/-/pipelines/1780285393
> >>> https://cirrus-ci.com/build/5452335868018688
> >>>
> >>> When the dust has settled, I'll talk you through the failures.
> >> And here we go.  Interestingly, the FreeBSD testing was entirely happy,
> >> and that is the rare way around.
> >>
> >> For Gitlab, there are several areas.
> >>
> >> First, for MISRA.  In the job logs, you want the "Browse current
> >> reports:" link which will give you full details, but it's all pretty
> >> simple stuff.
> > Thanks, but that link gives me a list of 5096 failures all over the code
> > base. Is there any way to see a diff against master?
>
> No sadly not.  What you see is a mix of the blocking issues, and the "we
> want to see these so we can work on them".

Nicola Vetrini explained how some errors can be filtered in
https://lore.kernel.org/xen-devel/c2940798-11d0-4aaa...@apertussolutions.com/T/#m153e1cf8a6ef37d3d301253624c07fa3c25814c2
At least in this case it works when done correctly.

> >> kbl-suspend-x86-64-gcc-debug is a real S3 test on KabyLake hardware,
> >> which appears to have gone to sleep and never woken up.  (More likely,
> >> crashed on wakeup before we got the console up).  The AlderLake
> >> equivalent test seems to be happy, as well as the AMD ones.
> > Hm, not sure what that could be, but will try to reproduce/guess.
>
> KBL is unreliable in one specific way, but not with these symptoms.
>
> I reran the suspend test, and it failed in the same way.  I think it's a
> deterministic bug.
>
> I can probably dig out my emergency serial debugging patches for S3 if
> you want?

Thanks, I'll try to come up with something first. So far I've thought
about the change in how the stack is picked for APs, but I would expect
all hardware to have issues with S3 if that were the problem.

> >> Other common failures seem to be:
> >>
> >>     # take image offset into account
> >>     arch/x86/efi/fixmlehdr xen.efi 0x200000
> >>     Failed to find MLE header in xen.efi
> >>     arch/x86/Makefile:220: recipe for target 'xen.efi' failed
> >>     make[3]: *** [xen.efi] Error 1
> >>
> >> ~Andrew
> > That seems to be the only reason behind the rest of build failures.
> > I was able to reproduce the failure in Fedora 37 docker. Searching for
> > the header in 8KiB instead of 4KiB fixes it. Looks like large default
> > alignment of some toolchains pushes `head.S` to 4 KiB offset.
>
> FYI, you can access all the Xen containers with:
>
> CONTAINER=foo ./automation/scripts/containerize
>
> in the xen.git tree.

Thanks, that looks more convenient.

> Alignment that large is unexpected, and I suspect we want to fix it.  Is
> it pre-existing, or something introduced by your series?
>
> ~Andrew

Pre-existing one. I can see `-N` is already passed to `ld`:

-N, --omagic Do not page align data, do not make text readonly

Specifying `--section-alignment 512 --file-alignment 512 --nmagic`
didn't reduce the alignment. Statistics so far:

Give 0x1000 offset:
GNU ld (GNU Binutils for Debian) 2.31.1
GNU ld version 2.38-27.fc37

Give 0x440 offset:
GNU ld (GNU Binutils for Debian) 2.40
GNU ld (GNU Binutils for Debian) 2.41

Maybe that's not something configurable and just needs a newer version.

Andrew Cooper

Apr 24, 2025, 2:51:27 PM
to Sergii Dmytruk, xen-...@lists.xenproject.org, Jan Beulich, Roger Pau Monné, Lukasz Hawrylko, Daniel P. Smith, Mateusz Mówka, Anthony PERARD, Michal Orzel, Julien Grall, Stefano Stabellini, Marek Marczykowski-Górecki, trenchbo...@googlegroups.com
On 24/04/2025 7:47 pm, Sergii Dmytruk wrote:
>> Alignment that large is unexpected, and I suspect we want to fix it.  Is
>> it pre-existing, or something introduced by your series?
>>
>> ~Andrew
> Pre-existing one. I can see `-N` is already passed to `ld`:
>
> -N, --omagic Do not page align data, do not make text readonly
>
> Specifying `--section-alignment 512 --file-alignment 512 --nmagic`
> didn't reduce the alignment. Statistics so far:
>
> Give 0x1000 offset:
> GNU ld (GNU Binutils for Debian) 2.31.1
> GNU ld version 2.38-27.fc37
>
> Give 0x440 offset:
> GNU ld (GNU Binutils for Debian) 2.40
> GNU ld (GNU Binutils for Debian) 2.41
>
> Maybe that's not something configurable and just needs a newer version.

Ah - that's something that was changed literally yesterday:

https://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=d444763f8ca556d0a67a4b933be303d346baef02

in order to fix some problems we've had trying to get xen.efi happy to
be NX_COMPAT.

We couldn't identify any good reason why -N was in use.

~Andrew

Sergii Dmytruk

Apr 25, 2025, 9:33:42 AM
to Andrew Cooper, xen-...@lists.xenproject.org, Jan Beulich, Roger Pau Monné, Lukasz Hawrylko, Daniel P. Smith, Mateusz Mówka, Anthony PERARD, Michal Orzel, Julien Grall, Stefano Stabellini, Marek Marczykowski-Górecki, trenchbo...@googlegroups.com
The fewer cryptic flags the better, but adding either of those flags or
removing -N doesn't affect the file offset. EFI_LDFLAGS even includes
--file-alignment=0x20; it just gets ignored, which could be
target-specific behaviour in older versions of ld. This commit by Jan
Beulich might be the one fixing it:

https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=bc5baa9f13ffb3fd4c39f1a779062bfb3a980cea