[PATCH v2 00/22] x86: Trenchboot Secure Launch DRTM (Xen)

Sergii Dmytruk

May 13, 2025, 1:06:25 PM
to xen-...@lists.xenproject.org, Daniel P. Smith, Ross Philipson, Jan Beulich, Andrew Cooper, Roger Pau Monné, Lukasz Hawrylko, Mateusz Mówka, Anthony PERARD, Michal Orzel, Julien Grall, Stefano Stabellini, Nicola Vetrini, Doug Goldstein, Marek Marczykowski-Górecki, trenchbo...@googlegroups.com
The aim of the [TrenchBoot] project is to provide an implementation of
DRTM that is generic enough to cover various use cases:
- Intel TXT and AMD SKINIT on x86 CPUs
- legacy and UEFI boot
- TPM1.2 and TPM2.0
- (in the future) DRTM on Arm CPUs

DRTM is a form of measured launch that starts on request rather than at
the beginning of a boot cycle. One of its advantages is that the
firmware is not included in the chain of trust.

Xen already supports DRTM via [tboot], which targets Intel TXT only.
tboot encapsulates some of the DRTM details within itself, while with
TrenchBoot, Xen (or Linux) is meant to be a self-contained payload for a
TrenchBoot-enabled bootloader (think GRUB). The one exception is the
UEFI case, which requires calling back into the bootloader to initiate
DRTM; this is necessary to give Xen a chance to query all the
information it needs from the firmware before performing the DRTM start.
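
For the UEFI callback, the hook is exposed via the SLRT's dynamic-launch
info entry (see the slr-table.h patch in this series). A minimal sketch of
how a payload might invoke it, given a mapped struct slr_table *slrt
(illustrative only, not necessarily the exact code in the series):

    /* Sketch: call back into the bootloader to initiate the DRTM start. */
    struct slr_entry_dl_info *dl_info = (struct slr_entry_dl_info *)
        slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_DL_INFO);
    dl_handler_func handler = (dl_handler_func)(uintptr_t)dl_info->dl_handler;

    handler(&dl_info->bl_context); /* typically does not return */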

From the above, tboot might seem like the more abstracted approach, but
the reality is that the payload needs to have DRTM-specific knowledge
either way. TrenchBoot in principle allows coming up with independent
implementations of bootloaders and payloads that are compatible with
each other.

The "x86/boot: choose AP stack based on APIC ID" patch is shared with
[Parallelize AP bring-up] series which is required here because Intel
TXT always releases all APs simultaneously. The rest of the patches are
unique.

This version of the patches corresponds to this branch:
https://github.com/TrenchBoot/xen/pull/new/aem-staging-2025-05-12-v2

With help from Andrew Cooper, v2 passes all CI tests:
https://gitlab.com/xen-project/hardware/xen-staging/-/pipelines/1815190030

-----

[TrenchBoot]: https://trenchboot.org/
[tboot]: https://sourceforge.net/p/tboot/wiki/Home/
[Parallelize AP bring-up]: https://lore.kernel.org/xen-devel/cover.1699982111....@3mdeb.com/
[v1]: https://lore.kernel.org/xen-devel/cover.1745172094....@3mdeb.com/

-----

Changes in v2:
- use dashes instead of underscores in the names of new files
- drop an extra sha256 implementation
- rewrite the sha1 implementation to be in line with the already present
sha256 implementation (simplifying it and getting rid of macros)
- correct the placement of new lines in the Makefile
- add header guards to all new files
- use correct names for header guards in new files
- update the license of xen/include/xen/slr-table.h
- change fixmlehdr to search for the header within an 8 KiB instead of
4 KiB file prefix
- don't print DRTM-related capabilities when resuming from S3
- forbid S3 in case of Secure Launch
- fix an issue with resuming from S3 caused by inappropriate use of
__initdata
- add a new section to MAINTAINERS
- improve commit messages
- fix MISRA C violations:
* shadowing of the e820 global
* missing U literal suffixes
* use of the ull literal suffix
* exclude fixmlehdr from analysis (like other build tools)
* use of 0 instead of NULL in one place
* provide declarations for some definitions
* mark asm-invoked functions with `asmlinkage`

-----

Kacper Stojek (2):
x86/boot: add MLE header and Secure Launch entry point
xen/arch/x86: reserve TXT memory during Slaunch

Krystian Hebel (7):
x86/include/asm/intel-txt.h: constants and accessors for TXT registers
and heap
x86/boot/slaunch-early: early TXT checks and boot data retrieval
x86/slaunch: restore boot MTRRs after Intel TXT DRTM
xen/lib: add implementation of SHA-1
x86/tpm.c: code for early hashing and extending PCRs (for TPM1.2)
x86/boot: choose AP stack based on APIC ID
x86/smpboot.c: TXT AP bringup

Michał Żygowski (2):
x86/hvm: check for VMX in SMX if Slaunch is active
x86/cpu: report SMX, TXT and SKINIT capabilities

Sergii Dmytruk (11):
include/xen/slr-table.h: Secure Launch Resource Table definitions
x86/boot/slaunch-early: implement early initialization
x86/mtrr: expose functions for pausing caching
x86/tpm.c: support extending PCRs of TPM2.0
x86/tpm.c: implement event log for TPM2.0
x86/slaunch: process DRTM policy
x86/acpi: disallow S3 on Secure Launch boot
x86/boot/slaunch-early: find MBI and SLRT on AMD
x86/slaunch: support AMD SKINIT
x86/slaunch: support EFI boot
MAINTAINERS: add a section for TrenchBoot Slaunch

.gitignore | 1 +
MAINTAINERS | 15 +
.../eclair_analysis/ECLAIR/out_of_scope.ecl | 1 +
docs/hypervisor-guide/x86/how-xen-boots.rst | 7 +
xen/arch/x86/Makefile | 12 +-
xen/arch/x86/acpi/power.c | 8 +
xen/arch/x86/boot/Makefile | 10 +-
xen/arch/x86/boot/head.S | 250 ++++
xen/arch/x86/boot/slaunch-early.c | 105 ++
xen/arch/x86/boot/trampoline.S | 40 +-
xen/arch/x86/boot/x86_64.S | 42 +-
xen/arch/x86/cpu/amd.c | 16 +
xen/arch/x86/cpu/cpu.h | 1 +
xen/arch/x86/cpu/hygon.c | 1 +
xen/arch/x86/cpu/intel.c | 46 +
xen/arch/x86/cpu/mtrr/generic.c | 51 +-
xen/arch/x86/e820.c | 5 +
xen/arch/x86/efi/efi-boot.h | 88 +-
xen/arch/x86/efi/fixmlehdr.c | 127 ++
xen/arch/x86/hvm/vmx/vmcs.c | 3 +-
xen/arch/x86/include/asm/apicdef.h | 4 +
xen/arch/x86/include/asm/intel-txt.h | 457 +++++++
xen/arch/x86/include/asm/mm.h | 3 +
xen/arch/x86/include/asm/msr-index.h | 3 +
xen/arch/x86/include/asm/mtrr.h | 8 +
xen/arch/x86/include/asm/processor.h | 1 +
xen/arch/x86/include/asm/slaunch.h | 98 ++
xen/arch/x86/include/asm/tpm.h | 19 +
xen/arch/x86/intel-txt.c | 188 +++
xen/arch/x86/setup.c | 32 +-
xen/arch/x86/slaunch.c | 465 ++++++++
xen/arch/x86/smpboot.c | 63 +
xen/arch/x86/tboot.c | 20 +-
xen/arch/x86/tpm.c | 1056 +++++++++++++++++
xen/common/efi/boot.c | 4 +
xen/common/efi/runtime.c | 1 +
xen/include/xen/efi.h | 1 +
xen/include/xen/sha1.h | 12 +
xen/include/xen/slr-table.h | 268 +++++
xen/lib/Makefile | 1 +
xen/lib/sha1.c | 218 ++++
41 files changed, 3695 insertions(+), 56 deletions(-)
create mode 100644 xen/arch/x86/boot/slaunch-early.c
create mode 100644 xen/arch/x86/efi/fixmlehdr.c
create mode 100644 xen/arch/x86/include/asm/intel-txt.h
create mode 100644 xen/arch/x86/include/asm/slaunch.h
create mode 100644 xen/arch/x86/include/asm/tpm.h
create mode 100644 xen/arch/x86/intel-txt.c
create mode 100644 xen/arch/x86/slaunch.c
create mode 100644 xen/arch/x86/tpm.c
create mode 100644 xen/include/xen/sha1.h
create mode 100644 xen/include/xen/slr-table.h
create mode 100644 xen/lib/sha1.c


base-commit: f6042f38e621525feff86bb101dc751d2d87cff8
--
2.49.0

Sergii Dmytruk

May 13, 2025, 1:06:27 PM
to xen-...@lists.xenproject.org, Daniel P. Smith, Ross Philipson, Jan Beulich, Andrew Cooper, Roger Pau Monné, Lukasz Hawrylko, Mateusz Mówka, trenchbo...@googlegroups.com
From: Krystian Hebel <krystia...@3mdeb.com>

The file contains the TXT register space base addresses, register
offsets, error codes, and inline functions for accessing structures
stored on the TXT heap.
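
As an illustration (not part of the patch itself), walking the heap with
these helpers might look like:

    /* Sketch: locate the OS/MLE table on the TXT heap. */
    void *heap = _txt(read_txt_reg(TXTCR_HEAP_BASE));
    struct txt_os_mle_data *os_mle = txt_os_mle_data_start(heap);

    if ( txt_os_mle_data_size(heap) < sizeof(*os_mle) )
        txt_reset(SLAUNCH_ERROR_GENERIC);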

Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/include/asm/intel-txt.h | 277 +++++++++++++++++++++++++++
xen/arch/x86/tboot.c | 20 +-
2 files changed, 279 insertions(+), 18 deletions(-)
create mode 100644 xen/arch/x86/include/asm/intel-txt.h

diff --git a/xen/arch/x86/include/asm/intel-txt.h b/xen/arch/x86/include/asm/intel-txt.h
new file mode 100644
index 0000000000..07be43f557
--- /dev/null
+++ b/xen/arch/x86/include/asm/intel-txt.h
@@ -0,0 +1,277 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+#ifndef ASM__X86__INTEL_TXT_H
+#define ASM__X86__INTEL_TXT_H
+
+/*
+ * TXT configuration registers (offsets from TXT_{PUB, PRIV}_CONFIG_REGS_BASE)
+ */
+#define TXT_PUB_CONFIG_REGS_BASE 0xfed30000U
+#define TXT_PRIV_CONFIG_REGS_BASE 0xfed20000U
+
+/*
+ * The same set of registers is exposed twice (with different permissions) and
+ * they are allocated contiguously with page alignment.
+ */
+#define NR_TXT_CONFIG_SIZE \
+ (TXT_PUB_CONFIG_REGS_BASE - TXT_PRIV_CONFIG_REGS_BASE)
+
+/* Offsets from pub/priv config space. */
+#define TXTCR_STS 0x0000
+#define TXTCR_ESTS 0x0008
+#define TXTCR_ERRORCODE 0x0030
+#define TXTCR_CMD_RESET 0x0038
+#define TXTCR_CMD_CLOSE_PRIVATE 0x0048
+#define TXTCR_DIDVID 0x0110
+#define TXTCR_VER_EMIF 0x0200
+#define TXTCR_CMD_UNLOCK_MEM_CONFIG 0x0218
+#define TXTCR_SINIT_BASE 0x0270
+#define TXTCR_SINIT_SIZE 0x0278
+#define TXTCR_MLE_JOIN 0x0290
+#define TXTCR_HEAP_BASE 0x0300
+#define TXTCR_HEAP_SIZE 0x0308
+#define TXTCR_SCRATCHPAD 0x0378
+#define TXTCR_CMD_OPEN_LOCALITY1 0x0380
+#define TXTCR_CMD_CLOSE_LOCALITY1 0x0388
+#define TXTCR_CMD_OPEN_LOCALITY2 0x0390
+#define TXTCR_CMD_CLOSE_LOCALITY2 0x0398
+#define TXTCR_CMD_SECRETS 0x08e0
+#define TXTCR_CMD_NO_SECRETS 0x08e8
+#define TXTCR_E2STS 0x08f0
+
+/*
+ * Secure Launch Defined Error Codes used in MLE-initiated TXT resets.
+ *
+ * TXT Specification
+ * Appendix I ACM Error Codes
+ */
+#define SLAUNCH_ERROR_GENERIC 0xc0008001U
+#define SLAUNCH_ERROR_TPM_INIT 0xc0008002U
+#define SLAUNCH_ERROR_TPM_INVALID_LOG20 0xc0008003U
+#define SLAUNCH_ERROR_TPM_LOGGING_FAILED 0xc0008004U
+#define SLAUNCH_ERROR_REGION_STRADDLE_4GB 0xc0008005U
+#define SLAUNCH_ERROR_TPM_EXTEND 0xc0008006U
+#define SLAUNCH_ERROR_MTRR_INV_VCNT 0xc0008007U
+#define SLAUNCH_ERROR_MTRR_INV_DEF_TYPE 0xc0008008U
+#define SLAUNCH_ERROR_MTRR_INV_BASE 0xc0008009U
+#define SLAUNCH_ERROR_MTRR_INV_MASK 0xc000800aU
+#define SLAUNCH_ERROR_MSR_INV_MISC_EN 0xc000800bU
+#define SLAUNCH_ERROR_INV_AP_INTERRUPT 0xc000800cU
+#define SLAUNCH_ERROR_INTEGER_OVERFLOW 0xc000800dU
+#define SLAUNCH_ERROR_HEAP_WALK 0xc000800eU
+#define SLAUNCH_ERROR_HEAP_MAP 0xc000800fU
+#define SLAUNCH_ERROR_REGION_ABOVE_4GB 0xc0008010U
+#define SLAUNCH_ERROR_HEAP_INVALID_DMAR 0xc0008011U
+#define SLAUNCH_ERROR_HEAP_DMAR_SIZE 0xc0008012U
+#define SLAUNCH_ERROR_HEAP_DMAR_MAP 0xc0008013U
+#define SLAUNCH_ERROR_HI_PMR_BASE 0xc0008014U
+#define SLAUNCH_ERROR_HI_PMR_SIZE 0xc0008015U
+#define SLAUNCH_ERROR_LO_PMR_BASE 0xc0008016U
+#define SLAUNCH_ERROR_LO_PMR_SIZE 0xc0008017U
+#define SLAUNCH_ERROR_LO_PMR_MLE 0xc0008018U
+#define SLAUNCH_ERROR_INITRD_TOO_BIG 0xc0008019U
+#define SLAUNCH_ERROR_HEAP_ZERO_OFFSET 0xc000801aU
+#define SLAUNCH_ERROR_WAKE_BLOCK_TOO_SMALL 0xc000801bU
+#define SLAUNCH_ERROR_MLE_BUFFER_OVERLAP 0xc000801cU
+#define SLAUNCH_ERROR_BUFFER_BEYOND_PMR 0xc000801dU
+#define SLAUNCH_ERROR_OS_SINIT_BAD_VERSION 0xc000801eU
+#define SLAUNCH_ERROR_EVENTLOG_MAP 0xc000801fU
+#define SLAUNCH_ERROR_TPM_NUMBER_ALGS 0xc0008020U
+#define SLAUNCH_ERROR_TPM_UNKNOWN_DIGEST 0xc0008021U
+#define SLAUNCH_ERROR_TPM_INVALID_EVENT 0xc0008022U
+
+#define SLAUNCH_BOOTLOADER_MAGIC 0x4c534254
+
+#ifndef __ASSEMBLY__
+
+/* Need to differentiate between pre- and post-paging-enabled environments. */
+#ifdef __EARLY_SLAUNCH__
+#include <xen/macros.h>
+#define _txt(x) _p(x)
+#else
+#include <xen/types.h>
+#include <asm/page.h> /* __va() */
+#define _txt(x) __va(x)
+#endif
+
+/*
+ * Always use private space as some of registers are either read-only or not
+ * present in public space.
+ */
+static inline uint64_t read_txt_reg(int reg_no)
+{
+ volatile uint64_t *reg = _txt(TXT_PRIV_CONFIG_REGS_BASE + reg_no);
+ return *reg;
+}
+
+static inline void write_txt_reg(int reg_no, uint64_t val)
+{
+ volatile uint64_t *reg = _txt(TXT_PRIV_CONFIG_REGS_BASE + reg_no);
+ *reg = val;
+ /* This serves as a TXT register barrier. */
+ (void)read_txt_reg(TXTCR_ESTS);
+}
+
+static inline void txt_reset(uint32_t error)
+{
+ write_txt_reg(TXTCR_ERRORCODE, error);
+ write_txt_reg(TXTCR_CMD_NO_SECRETS, 1);
+ write_txt_reg(TXTCR_CMD_UNLOCK_MEM_CONFIG, 1);
+ write_txt_reg(TXTCR_CMD_RESET, 1);
+ while (1);
+}
+
+/*
+ * Secure Launch defined OS/MLE TXT Heap table
+ */
+struct txt_os_mle_data {
+ uint32_t version;
+ uint32_t reserved;
+ uint64_t slrt;
+ uint64_t txt_info;
+ uint32_t ap_wake_block;
+ uint32_t ap_wake_block_size;
+ uint8_t mle_scratch[64];
+} __packed;
+
+/*
+ * TXT specification defined BIOS data TXT Heap table
+ */
+struct txt_bios_data {
+ uint32_t version; /* Currently 5 for TPM 1.2 and 6 for TPM 2.0 */
+ uint32_t bios_sinit_size;
+ uint64_t reserved1;
+ uint64_t reserved2;
+ uint32_t num_logical_procs;
+ /* Versions >= 3 && < 5 */
+ uint32_t sinit_flags;
+ /* Versions >= 5 with updates in version 6 */
+ uint32_t mle_flags;
+ /* Versions >= 4 */
+ /* Ext Data Elements */
+} __packed;
+
+/*
+ * TXT specification defined OS/SINIT TXT Heap table
+ */
+struct txt_os_sinit_data {
+ uint32_t version; /* Currently 6 for TPM 1.2 and 7 for TPM 2.0 */
+ uint32_t flags; /* Reserved in version 6 */
+ uint64_t mle_ptab;
+ uint64_t mle_size;
+ uint64_t mle_hdr_base;
+ uint64_t vtd_pmr_lo_base;
+ uint64_t vtd_pmr_lo_size;
+ uint64_t vtd_pmr_hi_base;
+ uint64_t vtd_pmr_hi_size;
+ uint64_t lcp_po_base;
+ uint64_t lcp_po_size;
+ uint32_t capabilities;
+ /* Version = 5 */
+ uint64_t efi_rsdt_ptr; /* RSD*P* in versions >= 6 */
+ /* Versions >= 6 */
+ /* Ext Data Elements */
+} __packed;
+
+/*
+ * TXT specification defined SINIT/MLE TXT Heap table
+ */
+struct txt_sinit_mle_data {
+ uint32_t version; /* Current values are 6 through 9 */
+ /* Versions <= 8, fields until lcp_policy_control must be 0 for >= 9 */
+ uint8_t bios_acm_id[20];
+ uint32_t edx_senter_flags;
+ uint64_t mseg_valid;
+ uint8_t sinit_hash[20];
+ uint8_t mle_hash[20];
+ uint8_t stm_hash[20];
+ uint8_t lcp_policy_hash[20];
+ uint32_t lcp_policy_control;
+ /* Versions >= 7 */
+ uint32_t rlp_wakeup_addr;
+ uint32_t reserved;
+ uint32_t num_of_sinit_mdrs;
+ uint32_t sinit_mdrs_table_offset;
+ uint32_t sinit_vtd_dmar_table_size;
+ uint32_t sinit_vtd_dmar_table_offset;
+ /* Versions >= 8 */
+ uint32_t processor_scrtm_status;
+ /* Versions >= 9 */
+ /* Ext Data Elements */
+} __packed;
+
+/*
+ * Functions to extract data from the Intel TXT Heap Memory. The layout
+ * of the heap is as follows:
+ * +------------------------------------+
+ * | Size of Bios Data table (uint64_t) |
+ * +------------------------------------+
+ * | Bios Data table |
+ * +------------------------------------+
+ * | Size of OS MLE table (uint64_t) |
+ * +------------------------------------+
+ * | OS MLE table |
+ * +------------------------------------+
+ * | Size of OS SINIT table (uint64_t) |
+ * +------------------------------------+
+ * | OS SINIT table |
+ * +------------------------------------+
+ * | Size of SINIT MLE table (uint64_t) |
+ * +------------------------------------+
+ * | SINIT MLE table |
+ * +------------------------------------+
+ *
+ * NOTE: the table size fields include the 8-byte size field itself.
+ */
+static inline uint64_t txt_bios_data_size(void *heap)
+{
+ return *((uint64_t *)heap) - sizeof(uint64_t);
+}
+
+static inline void *txt_bios_data_start(void *heap)
+{
+ return heap + sizeof(uint64_t);
+}
+
+static inline uint64_t txt_os_mle_data_size(void *heap)
+{
+ return *((uint64_t *)(txt_bios_data_start(heap) +
+ txt_bios_data_size(heap))) -
+ sizeof(uint64_t);
+}
+
+static inline void *txt_os_mle_data_start(void *heap)
+{
+ return txt_bios_data_start(heap) + txt_bios_data_size(heap) +
+ sizeof(uint64_t);
+}
+
+static inline uint64_t txt_os_sinit_data_size(void *heap)
+{
+ return *((uint64_t *)(txt_os_mle_data_start(heap) +
+ txt_os_mle_data_size(heap))) -
+ sizeof(uint64_t);
+}
+
+static inline void *txt_os_sinit_data_start(void *heap)
+{
+ return txt_os_mle_data_start(heap) + txt_os_mle_data_size(heap) +
+ sizeof(uint64_t);
+}
+
+static inline uint64_t txt_sinit_mle_data_size(void *heap)
+{
+ return *((uint64_t *)(txt_os_sinit_data_start(heap) +
+ txt_os_sinit_data_size(heap))) -
+ sizeof(uint64_t);
+}
+
+static inline void *txt_sinit_mle_data_start(void *heap)
+{
+ return txt_os_sinit_data_start(heap) + txt_os_sinit_data_size(heap) +
+ sizeof(uint64_t);
+}
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* ASM__X86__INTEL_TXT_H */
diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
index d5db60d335..8a573d8c79 100644
--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -15,6 +15,7 @@
#include <asm/tboot.h>
#include <asm/setup.h>
#include <asm/trampoline.h>
+#include <asm/intel-txt.h>

#include <crypto/vmac.h>

@@ -35,23 +36,6 @@ static uint64_t __initdata sinit_base, __initdata sinit_size;

static bool __ro_after_init is_vtd;

-/*
- * TXT configuration registers (offsets from TXT_{PUB, PRIV}_CONFIG_REGS_BASE)
- */
-
-#define TXT_PUB_CONFIG_REGS_BASE 0xfed30000
-#define TXT_PRIV_CONFIG_REGS_BASE 0xfed20000
-
-/* # pages for each config regs space - used by fixmap */
-#define NR_TXT_CONFIG_PAGES ((TXT_PUB_CONFIG_REGS_BASE - \
- TXT_PRIV_CONFIG_REGS_BASE) >> PAGE_SHIFT)
-
-/* offsets from pub/priv config space */
-#define TXTCR_SINIT_BASE 0x0270
-#define TXTCR_SINIT_SIZE 0x0278
-#define TXTCR_HEAP_BASE 0x0300
-#define TXTCR_HEAP_SIZE 0x0308
-
#define SHA1_SIZE 20
typedef uint8_t sha1_hash_t[SHA1_SIZE];

@@ -409,7 +393,7 @@ int __init tboot_protect_mem_regions(void)

/* TXT Private Space */
rc = e820_change_range_type(&e820, TXT_PRIV_CONFIG_REGS_BASE,
- TXT_PRIV_CONFIG_REGS_BASE + NR_TXT_CONFIG_PAGES * PAGE_SIZE,
+ TXT_PRIV_CONFIG_REGS_BASE + NR_TXT_CONFIG_SIZE,
E820_RESERVED, E820_UNUSABLE);
if ( !rc )
return 0;
--
2.49.0

Sergii Dmytruk

May 13, 2025, 1:06:29 PM
to xen-...@lists.xenproject.org, Daniel P. Smith, Ross Philipson, trenchbo...@googlegroups.com
The file provides constants, structures and several helper functions
for parsing the SLRT.
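
As an illustration (not part of the patch itself), a consumer might locate
an entry by tag like this, given a mapped struct slr_table *slrt:

    /* Sketch: find the Intel info entry in an already mapped SLRT. */
    struct slr_entry_intel_info *info = (struct slr_entry_intel_info *)
        slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_INTEL_INFO);

    if ( info == NULL || info->hdr.size != sizeof(*info) )
        /* handle a missing or malformed entry */;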

Signed-off-by: Ross Philipson <ross.ph...@oracle.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/include/xen/slr-table.h | 268 ++++++++++++++++++++++++++++++++++++
1 file changed, 268 insertions(+)
create mode 100644 xen/include/xen/slr-table.h

diff --git a/xen/include/xen/slr-table.h b/xen/include/xen/slr-table.h
new file mode 100644
index 0000000000..6f0018bceb
--- /dev/null
+++ b/xen/include/xen/slr-table.h
@@ -0,0 +1,268 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * Copyright (c) 2025 Apertus Solutions, LLC
+ * Copyright (c) 2025 Oracle and/or its affiliates.
+ * Copyright (c) 2025 3mdeb Sp. z o.o
+ *
+ * Secure Launch Resource Table definitions
+ */
+
+#ifndef XEN__SLR_TABLE_H
+#define XEN__SLR_TABLE_H
+
+#include <xen/types.h>
+
+#define UEFI_SLR_TABLE_GUID \
+ { 0x877a9b2aU, 0x0385, 0x45d1, { 0xa0, 0x34, 0x9d, 0xac, 0x9c, 0x9e, 0x56, 0x5f } }
+
+/* SLR table header values */
+#define SLR_TABLE_MAGIC 0x4452544d
+#define SLR_TABLE_REVISION 1
+
+/* Current revisions for the policy and UEFI config */
+#define SLR_POLICY_REVISION 1
+#define SLR_UEFI_CONFIG_REVISION 1
+
+/* SLR defined architectures */
+#define SLR_INTEL_TXT 1
+#define SLR_AMD_SKINIT 2
+
+/* SLR defined bootloaders */
+#define SLR_BOOTLOADER_INVALID 0
+#define SLR_BOOTLOADER_GRUB 1
+
+/* Log formats */
+#define SLR_DRTM_TPM12_LOG 1
+#define SLR_DRTM_TPM20_LOG 2
+
+/* DRTM Policy Entry Flags */
+#define SLR_POLICY_FLAG_MEASURED 0x1
+#define SLR_POLICY_IMPLICIT_SIZE 0x2
+
+/* Array Lengths */
+#define TPM_EVENT_INFO_LENGTH 32
+#define TXT_VARIABLE_MTRRS_LENGTH 32
+
+/* Tags */
+#define SLR_ENTRY_INVALID 0x0000
+#define SLR_ENTRY_DL_INFO 0x0001
+#define SLR_ENTRY_LOG_INFO 0x0002
+#define SLR_ENTRY_DRTM_POLICY 0x0003
+#define SLR_ENTRY_INTEL_INFO 0x0004
+#define SLR_ENTRY_AMD_INFO 0x0005
+#define SLR_ENTRY_ARM_INFO 0x0006
+#define SLR_ENTRY_UEFI_INFO 0x0007
+#define SLR_ENTRY_UEFI_CONFIG 0x0008
+#define SLR_ENTRY_END 0xffff
+
+/* Entity Types */
+#define SLR_ET_UNSPECIFIED 0x0000
+#define SLR_ET_SLRT 0x0001
+#define SLR_ET_BOOT_PARAMS 0x0002
+#define SLR_ET_SETUP_DATA 0x0003
+#define SLR_ET_CMDLINE 0x0004
+#define SLR_ET_UEFI_MEMMAP 0x0005
+#define SLR_ET_RAMDISK 0x0006
+#define SLR_ET_MULTIBOOT2_INFO 0x0007
+#define SLR_ET_MULTIBOOT2_MODULE 0x0008
+#define SLR_ET_TXT_OS2MLE 0x0010
+#define SLR_ET_UNUSED 0xffff
+
+/*
+ * Primary SLR Table Header
+ */
+struct slr_table
+{
+ uint32_t magic;
+ uint16_t revision;
+ uint16_t architecture;
+ uint32_t size;
+ uint32_t max_size;
+ /* entries[] */
+} __packed;
+
+/*
+ * Common SLRT Table Header
+ */
+struct slr_entry_hdr
+{
+ uint32_t tag;
+ uint32_t size;
+} __packed;
+
+/*
+ * Boot loader context
+ */
+struct slr_bl_context
+{
+ uint16_t bootloader;
+ uint16_t reserved[3];
+ uint64_t context;
+} __packed;
+
+/*
+ * Prototype of a function pointed to by slr_entry_dl_info::dl_handler.
+ */
+typedef void (*dl_handler_func)(struct slr_bl_context *bl_context);
+
+/*
+ * DRTM Dynamic Launch Configuration
+ */
+struct slr_entry_dl_info
+{
+ struct slr_entry_hdr hdr;
+ uint64_t dce_size;
+ uint64_t dce_base;
+ uint64_t dlme_size;
+ uint64_t dlme_base;
+ uint64_t dlme_entry;
+ struct slr_bl_context bl_context;
+ uint64_t dl_handler;
+} __packed;
+
+/*
+ * TPM Log Information
+ */
+struct slr_entry_log_info
+{
+ struct slr_entry_hdr hdr;
+ uint16_t format;
+ uint16_t reserved;
+ uint32_t size;
+ uint64_t addr;
+} __packed;
+
+/*
+ * DRTM Measurement Entry
+ */
+struct slr_policy_entry
+{
+ uint16_t pcr;
+ uint16_t entity_type;
+ uint16_t flags;
+ uint16_t reserved;
+ uint64_t size;
+ uint64_t entity;
+ char evt_info[TPM_EVENT_INFO_LENGTH];
+} __packed;
+
+/*
+ * DRTM Measurement Policy
+ */
+struct slr_entry_policy
+{
+ struct slr_entry_hdr hdr;
+ uint16_t reserved[2];
+ uint16_t revision;
+ uint16_t nr_entries;
+ struct slr_policy_entry policy_entries[];
+} __packed;
+
+/*
+ * Secure Launch defined MTRR saving structures
+ */
+struct slr_txt_mtrr_pair
+{
+ uint64_t mtrr_physbase;
+ uint64_t mtrr_physmask;
+} __packed;
+
+struct slr_txt_mtrr_state
+{
+ uint64_t default_mem_type;
+ uint64_t mtrr_vcnt;
+ struct slr_txt_mtrr_pair mtrr_pair[TXT_VARIABLE_MTRRS_LENGTH];
+} __packed;
+
+/*
+ * Intel TXT Info table
+ */
+struct slr_entry_intel_info
+{
+ struct slr_entry_hdr hdr;
+ uint64_t boot_params_base;
+ uint64_t txt_heap;
+ uint64_t saved_misc_enable_msr;
+ struct slr_txt_mtrr_state saved_bsp_mtrrs;
+} __packed;
+
+/*
+ * AMD SKINIT Info table
+ */
+struct slr_entry_amd_info
+{
+ struct slr_entry_hdr hdr;
+ uint64_t next;
+ uint32_t type;
+ uint32_t len;
+ uint64_t slrt_size;
+ uint64_t slrt_base;
+ uint64_t boot_params_base;
+ uint16_t psp_version;
+ uint16_t reserved[3];
+} __packed;
+
+/*
+ * UEFI config measurement entry
+ */
+struct slr_uefi_cfg_entry
+{
+ uint16_t pcr;
+ uint16_t reserved;
+ uint32_t size;
+ uint64_t cfg; /* address or value */
+ char evt_info[TPM_EVENT_INFO_LENGTH];
+} __packed;
+
+struct slr_entry_uefi_config
+{
+ struct slr_entry_hdr hdr;
+ uint16_t reserved[2];
+ uint16_t revision;
+ uint16_t nr_entries;
+ struct slr_uefi_cfg_entry uefi_cfg_entries[];
+} __packed;
+
+static inline void *
+slr_end_of_entries(struct slr_table *table)
+{
+ return (uint8_t *)table + table->size;
+}
+
+static inline struct slr_entry_hdr *
+slr_next_entry(struct slr_table *table, struct slr_entry_hdr *curr)
+{
+ struct slr_entry_hdr *next = (struct slr_entry_hdr *)
+ ((uint8_t *)curr + curr->size);
+
+ if ( (void *)next >= slr_end_of_entries(table) )
+ return NULL;
+ if ( next->tag == SLR_ENTRY_END )
+ return NULL;
+
+ return next;
+}
+
+static inline struct slr_entry_hdr *
+slr_next_entry_by_tag(struct slr_table *table,
+ struct slr_entry_hdr *entry,
+ uint16_t tag)
+{
+ if ( !entry ) /* Start from the beginning */
+ entry = (struct slr_entry_hdr *)((uint8_t *)table + sizeof(*table));
+
+ for ( ; ; )
+ {
+ if ( entry->tag == tag )
+ return entry;
+
+ entry = slr_next_entry(table, entry);
+ if ( !entry )
+ return NULL;
+ }
+
+ return NULL;
+}
+
+#endif /* XEN__SLR_TABLE_H */
--
2.49.0

Sergii Dmytruk

May 13, 2025, 1:06:32 PM
to xen-...@lists.xenproject.org, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini, trenchbo...@googlegroups.com
From: Kacper Stojek <kacper...@3mdeb.com>

Signed-off-by: Kacper Stojek <kacper...@3mdeb.com>
Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
docs/hypervisor-guide/x86/how-xen-boots.rst | 5 ++
xen/arch/x86/boot/head.S | 53 +++++++++++++++++++++
2 files changed, 58 insertions(+)

diff --git a/docs/hypervisor-guide/x86/how-xen-boots.rst b/docs/hypervisor-guide/x86/how-xen-boots.rst
index 8b3229005c..050fe9c61f 100644
--- a/docs/hypervisor-guide/x86/how-xen-boots.rst
+++ b/docs/hypervisor-guide/x86/how-xen-boots.rst
@@ -55,6 +55,11 @@ If ``CONFIG_PVH_GUEST`` was selected at build time, an Elf note is included
which indicates the ability to use the PVH boot protocol, and registers
``__pvh_start`` as the entrypoint, entered in 32bit mode.

+A combination of Multiboot 2 and MLE headers is used to implement DRTM for
+legacy (BIOS) boot. A separate entry point is used mainly to differentiate
+this kind of boot from the others; it loads a magic number into EAX before
+jumping into the common startup code.
+

xen.gz
~~~~~~
diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
index 77bb7a9e21..a69107bd81 100644
--- a/xen/arch/x86/boot/head.S
+++ b/xen/arch/x86/boot/head.S
@@ -4,6 +4,7 @@
#include <public/xen.h>
#include <asm/asm_defns.h>
#include <asm/fixmap.h>
+#include <asm/intel-txt.h>
#include <asm/page.h>
#include <asm/processor.h>
#include <asm/msr-index.h>
@@ -126,6 +127,25 @@ multiboot2_header:
.size multiboot2_header, . - multiboot2_header
.type multiboot2_header, @object

+ .balign 16
+mle_header:
+ .long 0x9082ac5a /* UUID0 */
+ .long 0x74a7476f /* UUID1 */
+ .long 0xa2555c0f /* UUID2 */
+ .long 0x42b651cb /* UUID3 */
+ .long 0x00000034 /* MLE header size */
+ .long 0x00020002 /* MLE version 2.2 */
+ .long (slaunch_stub_entry - start) /* Linear entry point of MLE (SINIT virt. address) */
+ .long 0x00000000 /* First valid page of MLE */
+ .long 0x00000000 /* Offset within binary of first byte of MLE */
+ .long (_end - start) /* Offset within binary of last byte + 1 of MLE */
+ .long 0x00000723 /* Bit vector of MLE-supported capabilities */
+ .long 0x00000000 /* Starting linear address of command line (unused) */
+ .long 0x00000000 /* Ending linear address of command line (unused) */
+
+ .size mle_header, .-mle_header
+ .type mle_header, @object
+
.section .init.rodata, "a", @progbits

.Lbad_cpu_msg: .asciz "ERR: Not a 64-bit CPU!"
@@ -332,6 +352,38 @@ cs32_switch:
/* Jump to earlier loaded address. */
jmp *%edi

+ /*
+ * Entry point for TrenchBoot Secure Launch on Intel TXT platforms.
+ *
+ * CPU is in 32b protected mode with paging disabled. On entry:
+ * - %ebx = %eip = MLE entry point,
+ * - stack pointer is undefined,
+ * - CS is flat 4GB code segment,
+ * - DS, ES, SS, FS and GS are undefined according to TXT SDG, but this
+ * would make it impossible to initialize GDTR, because GDT base must
+ * be relocated in the descriptor, which requires write access that
+ * CS doesn't provide. Instead we have to assume that DS is set by
+ * SINIT ACM as flat 4GB data segment.
+ *
+ * Additional restrictions:
+ * - some MSRs are partially cleared, among them IA32_MISC_ENABLE, so
+ * some capabilities might be reported as disabled even if they are
+ * supported by CPU
+ * - interrupts (including NMIs and SMIs) are disabled and must be
+ * enabled later
+ * - trying to enter real mode results in reset
+ * - APs must be brought up by MONITOR or GETSEC[WAKEUP], depending on
+ * which is supported by a given SINIT ACM
+ */
+slaunch_stub_entry:
+ /* Calculate the load base address. */
+ mov %ebx, %esi
+ sub $sym_offs(slaunch_stub_entry), %esi
+
+ /* Mark Secure Launch boot protocol and jump to common entry. */
+ mov $SLAUNCH_BOOTLOADER_MAGIC, %eax
+ jmp .Lset_stack
+
#ifdef CONFIG_PVH_GUEST
ELFNOTE(Xen, XEN_ELFNOTE_PHYS32_ENTRY, .long sym_offs(__pvh_start))

@@ -371,6 +423,7 @@ __start:
/* Restore the clobbered field. */
mov %edx, (%ebx)

+.Lset_stack:
/* Set up stack. */
lea STACK_SIZE - CPUINFO_sizeof + sym_esi(cpu0_stack), %esp

--
2.49.0

Sergii Dmytruk

unread,
May 13, 2025, 1:06:35 PMMay 13
to xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, Daniel P. Smith, Ross Philipson, trenchbo...@googlegroups.com
Make head.S invoke a C function to retrieve MBI and SLRT addresses in a
platform-specific way. This is also the place to perform sanity checks
of the DRTM state.

Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/Makefile | 1 +
xen/arch/x86/boot/Makefile | 5 +++-
xen/arch/x86/boot/head.S | 43 ++++++++++++++++++++++++++++
xen/arch/x86/boot/slaunch-early.c | 41 ++++++++++++++++++++++++++
xen/arch/x86/include/asm/intel-txt.h | 16 +++++++++++
xen/arch/x86/include/asm/slaunch.h | 26 +++++++++++++++++
xen/arch/x86/slaunch.c | 27 +++++++++++++++++
7 files changed, 158 insertions(+), 1 deletion(-)
create mode 100644 xen/arch/x86/boot/slaunch-early.c
create mode 100644 xen/arch/x86/include/asm/slaunch.h
create mode 100644 xen/arch/x86/slaunch.c

diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index bedb97cbee..43a80be458 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -58,6 +58,7 @@ obj-$(CONFIG_COMPAT) += x86_64/physdev.o
obj-$(CONFIG_X86_PSR) += psr.o
obj-y += setup.o
obj-y += shutdown.o
+obj-y += slaunch.o
obj-y += smp.o
obj-y += smpboot.o
obj-y += spec_ctrl.o
diff --git a/xen/arch/x86/boot/Makefile b/xen/arch/x86/boot/Makefile
index ff0d61d7ac..5471b966dd 100644
--- a/xen/arch/x86/boot/Makefile
+++ b/xen/arch/x86/boot/Makefile
@@ -5,6 +5,7 @@ obj-bin-y += $(obj64)
obj32 := cmdline.32.o
obj32 += reloc.32.o
obj32 += reloc-trampoline.32.o
+obj32 += slaunch-early.32.o

obj64 := reloc-trampoline.o

@@ -28,6 +29,8 @@ $(obj32): XEN_CFLAGS := $(CFLAGS_x86_32) -fpic
$(obj)/%.32.o: $(src)/%.c FORCE
$(call if_changed_rule,cc_o_c)

+$(obj)/slaunch-early.32.o: XEN_CFLAGS += -D__EARLY_SLAUNCH__
+
orphan-handling-$(call ld-option,--orphan-handling=error) := --orphan-handling=error
LDFLAGS_DIRECT-$(call ld-option,--warn-rwx-segments) := --no-warn-rwx-segments
LDFLAGS_DIRECT += $(LDFLAGS_DIRECT-y)
@@ -81,7 +84,7 @@ cmd_combine = \
--bin1 $(obj)/built-in-32.base.bin \
--bin2 $(obj)/built-in-32.offset.bin \
--map $(obj)/built-in-32.base.map \
- --exports cmdline_parse_early,reloc,reloc_trampoline32 \
+ --exports cmdline_parse_early,reloc,reloc_trampoline32,slaunch_early_init \
--output $@

targets += built-in-32.S
diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
index a69107bd81..b4cf423c80 100644
--- a/xen/arch/x86/boot/head.S
+++ b/xen/arch/x86/boot/head.S
@@ -472,6 +472,10 @@ __start:
/* Bootloaders may set multiboot{1,2}.mem_lower to a nonzero value. */
xor %edx,%edx

+ /* Check for TrenchBoot slaunch bootloader. */
+ cmp $SLAUNCH_BOOTLOADER_MAGIC, %eax
+ je .Lslaunch_proto
+
/* Check for Multiboot2 bootloader. */
cmp $MULTIBOOT2_BOOTLOADER_MAGIC,%eax
je .Lmultiboot2_proto
@@ -487,6 +491,45 @@ __start:
cmovnz MB_mem_lower(%ebx),%edx
jmp trampoline_bios_setup

+.Lslaunch_proto:
+ /*
+ * Upon reaching here, the CPU state mostly matches the one set up by the
+ * bootloader, with ESP, ESI and EDX having been clobbered above.
+ */
+
+ /* Save information that TrenchBoot slaunch was used. */
+ movb $1, sym_esi(slaunch_active)
+
+ /*
+ * Prepare space for output parameter of slaunch_early_init(), which is
+ * a structure of two uint32_t fields.
+ */
+ sub $8, %esp
+
+ push %esp /* pointer to output structure */
+ lea sym_offs(__2M_rwdata_end), %ecx /* end of target image */
+ lea sym_offs(_start), %edx /* target base address */
+ mov %esi, %eax /* load base address */
+ /*
+ * slaunch_early_init(load/eax, tgt/edx, tgt_end/ecx, ret/stk) using
+ * fastcall calling convention.
+ */
+ call slaunch_early_init
+ add $4, %esp /* pop the fourth parameter */
+
+ /* Move outputs of slaunch_early_init() from stack into registers. */
+ pop %eax /* physical MBI address */
+ pop %edx /* physical SLRT address */
+
+ /* Save physical address of SLRT for C code. */
+ mov %edx, sym_esi(slaunch_slrt)
+
+ /* Store MBI address in EBX where MB2 code expects it. */
+ mov %eax, %ebx
+
+ /* Move magic number expected by Multiboot 2 to EAX and fall through. */
+ movl $MULTIBOOT2_BOOTLOADER_MAGIC, %eax
+
.Lmultiboot2_proto:
/* Skip Multiboot2 information fixed part. */
lea (MB2_fixed_sizeof+MULTIBOOT2_TAG_ALIGN-1)(%ebx),%ecx
diff --git a/xen/arch/x86/boot/slaunch-early.c b/xen/arch/x86/boot/slaunch-early.c
new file mode 100644
index 0000000000..48776ef559
--- /dev/null
+++ b/xen/arch/x86/boot/slaunch-early.c
@@ -0,0 +1,41 @@
+/*
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ *
+ * Copyright (c) 2022-2025 3mdeb Sp. z o.o. All rights reserved.
+ */
+
+#include <xen/slr-table.h>
+#include <xen/types.h>
+#include <asm/intel-txt.h>
+
+struct early_init_results
+{
+ uint32_t mbi_pa;
+ uint32_t slrt_pa;
+} __packed;
+
+void asmlinkage slaunch_early_init(uint32_t load_base_addr,
+ uint32_t tgt_base_addr,
+ uint32_t tgt_end_addr,
+ struct early_init_results *result)
+{
+ void *txt_heap;
+ struct txt_os_mle_data *os_mle;
+ struct slr_table *slrt;
+ struct slr_entry_intel_info *intel_info;
+
+ txt_heap = txt_init();
+ os_mle = txt_os_mle_data_start(txt_heap);
+
+ result->slrt_pa = os_mle->slrt;
+ result->mbi_pa = 0;
+
+ slrt = (struct slr_table *)(uintptr_t)os_mle->slrt;
+
+ intel_info = (struct slr_entry_intel_info *)
+ slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_INTEL_INFO);
+ if ( intel_info == NULL || intel_info->hdr.size != sizeof(*intel_info) )
+ return;
+
+ result->mbi_pa = intel_info->boot_params_base;
+}
diff --git a/xen/arch/x86/include/asm/intel-txt.h b/xen/arch/x86/include/asm/intel-txt.h
index 07be43f557..7a665b7cc3 100644
--- a/xen/arch/x86/include/asm/intel-txt.h
+++ b/xen/arch/x86/include/asm/intel-txt.h
@@ -272,6 +272,22 @@ static inline void *txt_sinit_mle_data_start(void *heap)
sizeof(uint64_t);
}

+static inline void *txt_init(void)
+{
+ void *txt_heap;
+
+ /* Clear the TXT error register for a clean start of the day. */
+ write_txt_reg(TXTCR_ERRORCODE, 0);
+
+ txt_heap = _p(read_txt_reg(TXTCR_HEAP_BASE));
+
+ if ( txt_os_mle_data_size(txt_heap) < sizeof(struct txt_os_mle_data) ||
+ txt_os_sinit_data_size(txt_heap) < sizeof(struct txt_os_sinit_data) )
+ txt_reset(SLAUNCH_ERROR_GENERIC);
+
+ return txt_heap;
+}
+
#endif /* __ASSEMBLY__ */

#endif /* ASM__X86__INTEL_TXT_H */
diff --git a/xen/arch/x86/include/asm/slaunch.h b/xen/arch/x86/include/asm/slaunch.h
new file mode 100644
index 0000000000..3fc5f00073
--- /dev/null
+++ b/xen/arch/x86/include/asm/slaunch.h
@@ -0,0 +1,26 @@
+/*
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ *
+ * Copyright (c) 2022-2025 3mdeb Sp. z o.o. All rights reserved.
+ */
+
+#ifndef ASM__X86__SLAUNCH_H
+#define ASM__X86__SLAUNCH_H
+
+#include <xen/types.h>
+
+/* Indicates an active Secure Launch boot. */
+extern bool slaunch_active;
+
+/*
+ * Holds physical address of SLRT. Use slaunch_get_slrt() to access SLRT
+ * instead of mapping where this points to.
+ */
+extern uint32_t slaunch_slrt;
+
+/*
+ * Retrieves a pointer to the SLRT. Checks the table's validity and maps it
+ * as necessary.
+ */
+struct slr_table *slaunch_get_slrt(void);
+
+#endif /* ASM__X86__SLAUNCH_H */
diff --git a/xen/arch/x86/slaunch.c b/xen/arch/x86/slaunch.c
new file mode 100644
index 0000000000..a3e6ab8d71
--- /dev/null
+++ b/xen/arch/x86/slaunch.c
@@ -0,0 +1,27 @@
+/*
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ *
+ * Copyright (c) 2022-2025 3mdeb Sp. z o.o. All rights reserved.
+ */
+
+#include <xen/compiler.h>
+#include <xen/init.h>
+#include <xen/macros.h>
+#include <xen/types.h>
+#include <asm/slaunch.h>
+
+/*
+ * These variables are assigned to by the code near Xen's entry point.
+ *
+ * slaunch_active is not __initdata to allow checking for an active Secure
+ * Launch boot.
+ */
+bool slaunch_active;
+uint32_t __initdata slaunch_slrt; /* physical address */
+
+/*
+ * Using slaunch_active in head.S assumes it's a single byte in size, so
+ * enforce this assumption.
+ */
+static void __maybe_unused compile_time_checks(void)
+{
+ BUILD_BUG_ON(sizeof(slaunch_active) != 1);
+}
--
2.49.0

Sergii Dmytruk

May 13, 2025, 1:06:39 PM
to xen-...@lists.xenproject.org, Daniel P. Smith, Ross Philipson, Jan Beulich, Andrew Cooper, Roger Pau Monné, trenchbo...@googlegroups.com
From: Krystian Hebel <krystia...@3mdeb.com>

The tests validate that important parts of memory are protected against
DMA attacks, including Xen and the MBI. Modules can be tested later,
when it is possible to report issues to the user before invoking a TXT
reset.

TPM event log validation is temporarily disabled due to an issue with
its allocation by the bootloader (GRUB), which will need to be modified
to address this. Ultimately the event log will also have to be
validated early, as it is used immediately after these tests to hold
MBI measurements. See the larger comment in txt_verify_pmr_ranges().

Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/boot/slaunch-early.c | 6 ++
xen/arch/x86/include/asm/intel-txt.h | 111 +++++++++++++++++++++++++++
2 files changed, 117 insertions(+)

diff --git a/xen/arch/x86/boot/slaunch-early.c b/xen/arch/x86/boot/slaunch-early.c
index 48776ef559..b6f6deedc9 100644
--- a/xen/arch/x86/boot/slaunch-early.c
+++ b/xen/arch/x86/boot/slaunch-early.c
@@ -22,10 +22,13 @@ void asmlinkage slaunch_early_init(uint32_t load_base_addr,
void *txt_heap;
struct txt_os_mle_data *os_mle;
struct slr_table *slrt;
+ struct txt_os_sinit_data *os_sinit;
struct slr_entry_intel_info *intel_info;
+ uint32_t size = tgt_end_addr - tgt_base_addr;

txt_heap = txt_init();
os_mle = txt_os_mle_data_start(txt_heap);
+ os_sinit = txt_os_sinit_data_start(txt_heap);

result->slrt_pa = os_mle->slrt;
result->mbi_pa = 0;
@@ -38,4 +41,7 @@ void asmlinkage slaunch_early_init(uint32_t load_base_addr,
return;

result->mbi_pa = intel_info->boot_params_base;
+
+ txt_verify_pmr_ranges(os_mle, os_sinit, intel_info,
+ load_base_addr, tgt_base_addr, size);
}
diff --git a/xen/arch/x86/include/asm/intel-txt.h b/xen/arch/x86/include/asm/intel-txt.h
index 7a665b7cc3..339c29ebef 100644
--- a/xen/arch/x86/include/asm/intel-txt.h
+++ b/xen/arch/x86/include/asm/intel-txt.h
@@ -84,6 +84,8 @@

#ifndef __ASSEMBLY__

+#include <xen/slr-table.h>
+
/* Need to differentiate between pre- and post paging enabled. */
#ifdef __EARLY_SLAUNCH__
#include <xen/macros.h>
@@ -288,6 +290,115 @@ static inline void *txt_init(void)
return txt_heap;
}

+static inline int is_in_pmr(struct txt_os_sinit_data *os_sinit, uint64_t base,
+ uint32_t size, int check_high)
+{
+ /* Check for size overflow. */
+ if ( base + size < base )
+ txt_reset(SLAUNCH_ERROR_INTEGER_OVERFLOW);
+
+ /* Low range always starts at 0, so its size is also end address. */
+ if ( base >= os_sinit->vtd_pmr_lo_base &&
+ base + size <= os_sinit->vtd_pmr_lo_size )
+ return 1;
+
+ if ( check_high && os_sinit->vtd_pmr_hi_size != 0 )
+ {
+ if ( os_sinit->vtd_pmr_hi_base + os_sinit->vtd_pmr_hi_size <
+ os_sinit->vtd_pmr_hi_size )
+ txt_reset(SLAUNCH_ERROR_INTEGER_OVERFLOW);
+ if ( base >= os_sinit->vtd_pmr_hi_base &&
+ base + size <= os_sinit->vtd_pmr_hi_base +
+ os_sinit->vtd_pmr_hi_size )
+ return 1;
+ }
+
+ return 0;
+}
+
+static inline void txt_verify_pmr_ranges(struct txt_os_mle_data *os_mle,
+ struct txt_os_sinit_data *os_sinit,
+ struct slr_entry_intel_info *info,
+ uint32_t load_base_addr,
+ uint32_t tgt_base_addr,
+ uint32_t xen_size)
+{
+ int check_high_pmr = 0;
+
+ /* Verify the value of the low PMR base. It should always be 0. */
+ if ( os_sinit->vtd_pmr_lo_base != 0 )
+ txt_reset(SLAUNCH_ERROR_LO_PMR_BASE);
+
+ /*
+ * Low PMR size should not be 0 on current platforms. There is an ongoing
+ * transition to TPR-based DMA protection instead of PMR-based; this is not
+ * yet supported by the code.
+ */
+ if ( os_sinit->vtd_pmr_lo_size == 0 )
+ txt_reset(SLAUNCH_ERROR_LO_PMR_SIZE);
+
+ /* Check if regions overlap. Treat regions with no hole between as error. */
+ if ( os_sinit->vtd_pmr_hi_size != 0 &&
+ os_sinit->vtd_pmr_hi_base <= os_sinit->vtd_pmr_lo_size )
+ txt_reset(SLAUNCH_ERROR_HI_PMR_BASE);
+
+ /* All regions accessed by 32b code must be below 4G. */
+ if ( os_sinit->vtd_pmr_hi_base + os_sinit->vtd_pmr_hi_size <=
+ 0x100000000ULL )
+ check_high_pmr = 1;
+
+ /*
+ * The ACM checks that TXT heap and MLE memory is protected against DMA. We
+ * have to check whether the MBI and the whole of Xen memory are protected.
+ * The latter is done in case the bootloader failed to set the whole image
+ * as the MLE, and to make sure that both pre- and post-relocation code is
+ * protected.
+ */
+
+ /* Check if all of Xen before relocation is covered by PMR. */
+ if ( !is_in_pmr(os_sinit, load_base_addr, xen_size, check_high_pmr) )
+ txt_reset(SLAUNCH_ERROR_LO_PMR_MLE);
+
+ /* Check if all of Xen after relocation is covered by PMR. */
+ if ( load_base_addr != tgt_base_addr &&
+ !is_in_pmr(os_sinit, tgt_base_addr, xen_size, check_high_pmr) )
+ txt_reset(SLAUNCH_ERROR_LO_PMR_MLE);
+
+ /*
+ * If present, check that MBI is covered by PMR. MBI starts with 'uint32_t
+ * total_size'.
+ */
+ if ( info->boot_params_base != 0 &&
+ !is_in_pmr(os_sinit, info->boot_params_base,
+ *(uint32_t *)(uintptr_t)info->boot_params_base,
+ check_high_pmr) )
+ txt_reset(SLAUNCH_ERROR_BUFFER_BEYOND_PMR);
+
+ /* Check if TPM event log (if present) is covered by PMR. */
+ /*
+ * FIXME: currently commented out as GRUB allocates it in a hole between
+ * PMR and reserved RAM, due to 2MB resolution of PMR. There are no other
+ * easy-to-use DMA protection mechanisms that would allow protecting that
+ * part of memory. TPR (TXT DMA Protection Range) gives 1MB resolution, but
+ * it still wouldn't be enough.
+ *
+ * One possible solution would be for GRUB to allocate log at lower address,
+ * but this would further increase memory space fragmentation. Another
+ * option is to align PMR up instead of down, making PMR cover part of
+ * reserved region, but it is unclear what the consequences may be.
+ *
+ * In tboot this issue was resolved by reserving leftover chunks of memory
+ * in e820 and/or UEFI memory map. This is also a valid solution, but would
+ * require more changes to GRUB than the ones listed above, as event log is
+ * allocated much earlier than PMRs.
+ */
+ /*
+ if ( os_mle->evtlog_addr != 0 && os_mle->evtlog_size != 0 &&
+ !is_in_pmr(os_sinit, os_mle->evtlog_addr, os_mle->evtlog_size,
+ check_high_pmr) )
+ txt_reset(SLAUNCH_ERROR_BUFFER_BEYOND_PMR);
+ */
+}
+
#endif /* __ASSEMBLY__ */

#endif /* ASM__X86__INTEL_TXT_H */
--
2.49.0

Sergii Dmytruk

May 13, 2025, 1:06:42 PM
to xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, Daniel P. Smith, Ross Philipson, trenchbo...@googlegroups.com
From: Kacper Stojek <kacper...@3mdeb.com>

The TXT heap, SINIT region and TXT private space are marked as reserved
or unusable in the e820 map to protect them from unintended use.

Signed-off-by: Kacper Stojek <kacper...@3mdeb.com>
Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
Signed-off-by: Michał Żygowski <michal....@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/Makefile | 1 +
xen/arch/x86/include/asm/intel-txt.h | 6 ++
xen/arch/x86/include/asm/mm.h | 3 +
xen/arch/x86/include/asm/slaunch.h | 44 +++++++++++
xen/arch/x86/intel-txt.c | 113 +++++++++++++++++++++++++++
xen/arch/x86/setup.c | 10 ++-
xen/arch/x86/slaunch.c | 98 ++++++++++++++++++++++-
7 files changed, 271 insertions(+), 4 deletions(-)
create mode 100644 xen/arch/x86/intel-txt.c

diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index 43a80be458..6f15c5a87a 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -38,6 +38,7 @@ obj-$(CONFIG_GDBSX) += gdbsx.o
obj-y += hypercall.o
obj-y += i387.o
obj-y += i8259.o
+obj-y += intel-txt.o
obj-y += io_apic.o
obj-$(CONFIG_LIVEPATCH) += livepatch.o
obj-y += msi.o
diff --git a/xen/arch/x86/include/asm/intel-txt.h b/xen/arch/x86/include/asm/intel-txt.h
index 339c29ebef..0126f56a6c 100644
--- a/xen/arch/x86/include/asm/intel-txt.h
+++ b/xen/arch/x86/include/asm/intel-txt.h
@@ -399,6 +399,12 @@ static inline void txt_verify_pmr_ranges(struct txt_os_mle_data *os_mle,
*/
}

+/* Prepares for accesses to TXT-specific memory. */
+void txt_map_mem_regions(void);
+
+/* Marks TXT-specific memory as used to avoid its corruption. */
+void txt_reserve_mem_regions(void);
+
#endif /* __ASSEMBLY__ */

#endif /* ASM__X86__INTEL_TXT_H */
diff --git a/xen/arch/x86/include/asm/mm.h b/xen/arch/x86/include/asm/mm.h
index d6e80db71c..91fa95cd90 100644
--- a/xen/arch/x86/include/asm/mm.h
+++ b/xen/arch/x86/include/asm/mm.h
@@ -106,6 +106,9 @@
#define _PGC_need_scrub _PGC_allocated
#define PGC_need_scrub PGC_allocated

+/* How much of the directmap is prebuilt at compile time. */
+#define PREBUILT_MAP_LIMIT (1 << L2_PAGETABLE_SHIFT)
+
#ifndef CONFIG_BIGMEM
/*
* This definition is solely for the use in struct page_info (and
diff --git a/xen/arch/x86/include/asm/slaunch.h b/xen/arch/x86/include/asm/slaunch.h
index 3fc5f00073..ddc1bd2361 100644
--- a/xen/arch/x86/include/asm/slaunch.h
+++ b/xen/arch/x86/include/asm/slaunch.h
@@ -7,6 +7,7 @@
#ifndef ASM__X86__SLAUNCH_H
#define ASM__X86__SLAUNCH_H

+#include <xen/slr-table.h>
#include <xen/types.h>

/* Indicates an active Secure Launch boot. */
@@ -18,9 +19,52 @@ extern bool slaunch_active;
*/
extern uint32_t slaunch_slrt;

+/*
+ * evt_log is assigned a physical address and the caller must map it to
+ * virtual, if needed.
+ */
+static inline void find_evt_log(struct slr_table *slrt, void **evt_log,
+ uint32_t *evt_log_size)
+{
+ struct slr_entry_log_info *log_info;
+
+ log_info = (struct slr_entry_log_info *)
+ slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_LOG_INFO);
+ if ( log_info != NULL )
+ {
+ *evt_log = _p(log_info->addr);
+ *evt_log_size = log_info->size;
+ }
+ else
+ {
+ *evt_log = NULL;
+ *evt_log_size = 0;
+ }
+}
+
/*
* Retrieves pointer to SLRT. Checks table's validity and maps it as necessary.
*/
struct slr_table *slaunch_get_slrt(void);

+/*
+ * Prepares for accesses to essential data structures set up by the boot
+ * environment.
+ */
+void slaunch_map_mem_regions(void);
+
+/* Marks regions of memory as used to avoid their corruption. */
+void slaunch_reserve_mem_regions(void);
+
+/*
+ * This helper function is used to map memory using L2 page tables by aligning
+ * mapped regions to 2MB. This way the page allocator (which at this point
+ * isn't yet initialized) isn't needed for creating new L1 mappings. The
+ * function also checks and skips memory already mapped by the prebuilt
+ * tables.
+ *
+ * There is no unmap_l2() because the function is meant to be used by the code
+ * that accesses DRTM-related memory soon after which Xen rebuilds memory maps,
+ * effectively dropping all existing mappings.
+ */
+int slaunch_map_l2(unsigned long paddr, unsigned long size);
+
#endif /* ASM__X86__SLAUNCH_H */
diff --git a/xen/arch/x86/intel-txt.c b/xen/arch/x86/intel-txt.c
new file mode 100644
index 0000000000..67051b0917
--- /dev/null
+++ b/xen/arch/x86/intel-txt.c
@@ -0,0 +1,113 @@
+/*
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ *
+ * Copyright (c) 2022-2025 3mdeb Sp. z o.o. All rights reserved.
+ */
+
+#include <xen/bug.h>
+#include <xen/init.h>
+#include <xen/lib.h>
+#include <xen/types.h>
+#include <asm/e820.h>
+#include <asm/intel-txt.h>
+#include <asm/slaunch.h>
+
+static uint64_t __initdata txt_heap_base, txt_heap_size;
+
+void __init txt_map_mem_regions(void)
+{
+ int rc;
+
+ rc = slaunch_map_l2(TXT_PRIV_CONFIG_REGS_BASE, NR_TXT_CONFIG_SIZE);
+ BUG_ON(rc != 0);
+
+ txt_heap_base = read_txt_reg(TXTCR_HEAP_BASE);
+ BUG_ON(txt_heap_base == 0);
+
+ txt_heap_size = read_txt_reg(TXTCR_HEAP_SIZE);
+ BUG_ON(txt_heap_size == 0);
+
+ rc = slaunch_map_l2(txt_heap_base, txt_heap_size);
+ BUG_ON(rc != 0);
+}
+
+/* Mark a RAM region as the given type if it isn't marked that way already. */
+static int __init mark_ram_as(struct e820map *map, uint64_t start,
+ uint64_t end, uint32_t type)
+{
+ unsigned int i;
+ uint32_t from_type = E820_RAM;
+
+ for ( i = 0; i < map->nr_map; i++ )
+ {
+ uint64_t rs = map->map[i].addr;
+ uint64_t re = rs + map->map[i].size;
+
+ /* The entry includes the range. */
+ if ( start >= rs && end <= re )
+ break;
+
+ /* The entry intersects the range. */
+ if ( end > rs && start < re )
+ {
+ /* Fatal failure. */
+ return 0;
+ }
+ }
+
+ /*
+ * If the range is not included by any entry and no entry intersects it,
+ * then it's not listed in the memory map. Consider this case as a success
+ * since we're only preventing RAM from being used and unlisted range should
+ * not be used.
+ */
+ if ( i == map->nr_map )
+ return 1;
+
+ /*
+ * e820_change_range_type() fails if the range is already marked with the
+ * desired type. Don't consider it an error if firmware has done it for us.
+ */
+ if ( map->map[i].type == type )
+ return 1;
+
+ /* E820_ACPI or E820_NVS are really unexpected, but others are fine. */
+ if ( map->map[i].type == E820_RESERVED ||
+ map->map[i].type == E820_UNUSABLE )
+ from_type = map->map[i].type;
+
+ return e820_change_range_type(map, start, end, from_type, type);
+}
+
+void __init txt_reserve_mem_regions(void)
+{
+ int rc;
+ uint64_t sinit_base, sinit_size;
+
+ /* TXT Heap */
+ BUG_ON(txt_heap_base == 0);
+ printk("SLAUNCH: reserving TXT heap (%#lx - %#lx)\n", txt_heap_base,
+ txt_heap_base + txt_heap_size);
+ rc = mark_ram_as(&e820_raw, txt_heap_base, txt_heap_base + txt_heap_size,
+ E820_RESERVED);
+ BUG_ON(rc == 0);
+
+ sinit_base = read_txt_reg(TXTCR_SINIT_BASE);
+ BUG_ON(sinit_base == 0);
+
+ sinit_size = read_txt_reg(TXTCR_SINIT_SIZE);
+ BUG_ON(sinit_size == 0);
+
+ /* SINIT */
+ printk("SLAUNCH: reserving SINIT memory (%#lx - %#lx)\n", sinit_base,
+ sinit_base + sinit_size);
+ rc = mark_ram_as(&e820_raw, sinit_base, sinit_base + sinit_size,
+ E820_RESERVED);
+ BUG_ON(rc == 0);
+
+ /* TXT Private Space */
+ rc = mark_ram_as(&e820_raw, TXT_PRIV_CONFIG_REGS_BASE,
+ TXT_PRIV_CONFIG_REGS_BASE + NR_TXT_CONFIG_SIZE,
+ E820_UNUSABLE);
+ BUG_ON(rc == 0);
+}
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 2518954124..479d2d744e 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -53,6 +53,7 @@
#include <asm/prot-key.h>
#include <asm/pv/domain.h>
#include <asm/setup.h>
+#include <asm/slaunch.h>
#include <asm/smp.h>
#include <asm/spec_ctrl.h>
#include <asm/tboot.h>
@@ -1086,9 +1087,6 @@ static struct domain *__init create_dom0(struct boot_info *bi)
return d;
}

-/* How much of the directmap is prebuilt at compile time. */
-#define PREBUILT_MAP_LIMIT (1 << L2_PAGETABLE_SHIFT)
-
void asmlinkage __init noreturn __start_xen(void)
{
const char *memmap_type = NULL;
@@ -1424,6 +1422,12 @@ void asmlinkage __init noreturn __start_xen(void)
#endif
}

+ if ( slaunch_active )
+ {
+ slaunch_map_mem_regions();
+ slaunch_reserve_mem_regions();
+ }
+
/* Sanitise the raw E820 map to produce a final clean version. */
max_page = raw_max_page = init_e820(memmap_type, &e820_raw);

diff --git a/xen/arch/x86/slaunch.c b/xen/arch/x86/slaunch.c
index a3e6ab8d71..ac3b43942b 100644
--- a/xen/arch/x86/slaunch.c
+++ b/xen/arch/x86/slaunch.c
@@ -7,14 +7,18 @@
#include <xen/compiler.h>
#include <xen/init.h>
#include <xen/macros.h>
+#include <xen/mm.h>
#include <xen/types.h>
+#include <asm/e820.h>
+#include <asm/intel-txt.h>
+#include <asm/page.h>
#include <asm/slaunch.h>

/*
* These variables are assigned to by the code near Xen's entry point.
*
* slaunch_active is not __initdata to allow checking for an active Secure
- * Launch boot.
+ * Launch boot at any point.
*/
bool slaunch_active;
uint32_t __initdata slaunch_slrt; /* physical address */
@@ -25,3 +29,95 @@ static void __maybe_unused compile_time_checks(void)
{
BUILD_BUG_ON(sizeof(slaunch_active) != 1);
}
+
+struct slr_table *__init slaunch_get_slrt(void)
+{
+ static struct slr_table *slrt;
+
+ if ( slrt == NULL )
+ {
+ int rc;
+
+ slrt = __va(slaunch_slrt);
+
+ rc = slaunch_map_l2(slaunch_slrt, PAGE_SIZE);
+ BUG_ON(rc != 0);
+
+ if ( slrt->magic != SLR_TABLE_MAGIC )
+ panic("SLRT has invalid magic value: %#08x!\n", slrt->magic);
+ /* XXX: are newer revisions allowed? */
+ if ( slrt->revision != SLR_TABLE_REVISION )
+ panic("SLRT is of unsupported revision: %#04x!\n", slrt->revision);
+ if ( slrt->architecture != SLR_INTEL_TXT )
+ panic("SLRT is for unexpected architecture: %#04x!\n",
+ slrt->architecture);
+ if ( slrt->size > slrt->max_size )
+ panic("SLRT is larger than its max size: %#08x > %#08x!\n",
+ slrt->size, slrt->max_size);
+
+ if ( slrt->size > PAGE_SIZE )
+ {
+ rc = slaunch_map_l2(slaunch_slrt, slrt->size);
+ BUG_ON(rc != 0);
+ }
+ }
+
+ return slrt;
+}
+
+void __init slaunch_map_mem_regions(void)
+{
+ void *evt_log_addr;
+ uint32_t evt_log_size;
+
+ /* Vendor-specific part. */
+ txt_map_mem_regions();
+
+ find_evt_log(slaunch_get_slrt(), &evt_log_addr, &evt_log_size);
+ if ( evt_log_addr != NULL )
+ {
+ int rc = slaunch_map_l2((uintptr_t)evt_log_addr, evt_log_size);
+ BUG_ON(rc != 0);
+ }
+}
+
+void __init slaunch_reserve_mem_regions(void)
+{
+ int rc;
+
+ void *evt_log_addr;
+ uint32_t evt_log_size;
+
+ /* Vendor-specific part. */
+ txt_reserve_mem_regions();
+
+ find_evt_log(slaunch_get_slrt(), &evt_log_addr, &evt_log_size);
+ if ( evt_log_addr != NULL )
+ {
+ printk("SLAUNCH: reserving event log (%#lx - %#lx)\n",
+ (uint64_t)evt_log_addr,
+ (uint64_t)evt_log_addr + evt_log_size);
+ rc = reserve_e820_ram(&e820_raw, (uint64_t)evt_log_addr,
+ (uint64_t)evt_log_addr + evt_log_size);
+ BUG_ON(rc == 0);
+ }
+}
+
+int __init slaunch_map_l2(unsigned long paddr, unsigned long size)
+{
+ unsigned long aligned_paddr = paddr & ~((1ULL << L2_PAGETABLE_SHIFT) - 1);
+ unsigned long pages = ((paddr + size) - aligned_paddr);
+ pages = ROUNDUP(pages, 1ULL << L2_PAGETABLE_SHIFT) >> PAGE_SHIFT;
+
+ if ( aligned_paddr + pages * PAGE_SIZE <= PREBUILT_MAP_LIMIT )
+ return 0;
+
+ if ( aligned_paddr < PREBUILT_MAP_LIMIT )
+ {
+ pages -= (PREBUILT_MAP_LIMIT - aligned_paddr) >> PAGE_SHIFT;
+ aligned_paddr = PREBUILT_MAP_LIMIT;
+ }
+
+ return map_pages_to_xen((uintptr_t)__va(aligned_paddr),
+ maddr_to_mfn(aligned_paddr),
+ pages, PAGE_HYPERVISOR);
+}
--
2.49.0

Sergii Dmytruk

May 13, 2025, 1:06:44 PM
to xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, trenchbo...@googlegroups.com
This allows the functionality to be reused by other units that need to
update MTRRs.

This also gets rid of a static variable.
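
As an illustration, a caller outside of this file might use the new pair
like this (a sketch, not code from this series):

    /* Sketch: temporarily disable caching while MTRR MSRs are rewritten. */
    struct mtrr_pausing_state state = mtrr_pause_caching();

    /* ... update MTRR MSRs here ... */

    mtrr_resume_caching(state);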

Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/cpu/mtrr/generic.c | 51 ++++++++++++++++-----------------
xen/arch/x86/include/asm/mtrr.h | 8 ++++++
2 files changed, 33 insertions(+), 26 deletions(-)

diff --git a/xen/arch/x86/cpu/mtrr/generic.c b/xen/arch/x86/cpu/mtrr/generic.c
index c587e9140e..2a8dd1d8ff 100644
--- a/xen/arch/x86/cpu/mtrr/generic.c
+++ b/xen/arch/x86/cpu/mtrr/generic.c
@@ -396,9 +396,7 @@ static bool set_mtrr_var_ranges(unsigned int index, struct mtrr_var_range *vr)
return changed;
}

-static uint64_t deftype;
-
-static unsigned long set_mtrr_state(void)
+static unsigned long set_mtrr_state(uint64_t *deftype)
/* [SUMMARY] Set the MTRR state for this CPU.
<state> The MTRR state information to read.
<ctxt> Some relevant CPU context.
@@ -416,14 +414,12 @@ static unsigned long set_mtrr_state(void)
if (mtrr_state.have_fixed && set_fixed_ranges(mtrr_state.fixed_ranges))
change_mask |= MTRR_CHANGE_MASK_FIXED;

- /* Set_mtrr_restore restores the old value of MTRRdefType,
- so to set it we fiddle with the saved value */
- if ((deftype & 0xff) != mtrr_state.def_type
- || MASK_EXTR(deftype, MTRRdefType_E) != mtrr_state.enabled
- || MASK_EXTR(deftype, MTRRdefType_FE) != mtrr_state.fixed_enabled) {
- deftype = (deftype & ~0xcff) | mtrr_state.def_type |
- MASK_INSR(mtrr_state.enabled, MTRRdefType_E) |
- MASK_INSR(mtrr_state.fixed_enabled, MTRRdefType_FE);
+ if ((*deftype & 0xff) != mtrr_state.def_type
+ || MASK_EXTR(*deftype, MTRRdefType_E) != mtrr_state.enabled
+ || MASK_EXTR(*deftype, MTRRdefType_FE) != mtrr_state.fixed_enabled) {
+ *deftype = (*deftype & ~0xcff) | mtrr_state.def_type |
+ MASK_INSR(mtrr_state.enabled, MTRRdefType_E) |
+ MASK_INSR(mtrr_state.fixed_enabled, MTRRdefType_FE);
change_mask |= MTRR_CHANGE_MASK_DEFTYPE;
}

@@ -440,9 +436,10 @@ static DEFINE_SPINLOCK(set_atomicity_lock);
* has been called.
*/

-static bool prepare_set(void)
+struct mtrr_pausing_state mtrr_pause_caching(void)
{
unsigned long cr4;
+ struct mtrr_pausing_state state;

/* Note that this is not ideal, since the cache is only flushed/disabled
for this CPU while the MTRRs are changed, but changing this requires
@@ -462,7 +459,9 @@ static bool prepare_set(void)
alternative("wbinvd", "", X86_FEATURE_XEN_SELFSNOOP);

cr4 = read_cr4();
- if (cr4 & X86_CR4_PGE)
+ state.pge = cr4 & X86_CR4_PGE;
+
+ if (state.pge)
write_cr4(cr4 & ~X86_CR4_PGE);
else if (use_invpcid)
invpcid_flush_all();
@@ -470,27 +469,27 @@ static bool prepare_set(void)
write_cr3(read_cr3());

/* Save MTRR state */
- rdmsrl(MSR_MTRRdefType, deftype);
+ rdmsrl(MSR_MTRRdefType, state.def_type);

/* Disable MTRRs, and set the default type to uncached */
- mtrr_wrmsr(MSR_MTRRdefType, deftype & ~0xcff);
+ mtrr_wrmsr(MSR_MTRRdefType, state.def_type & ~0xcff);

/* Again, only flush caches if we have to. */
alternative("wbinvd", "", X86_FEATURE_XEN_SELFSNOOP);

- return cr4 & X86_CR4_PGE;
+ return state;
}

-static void post_set(bool pge)
+void mtrr_resume_caching(struct mtrr_pausing_state state)
{
/* Intel (P6) standard MTRRs */
- mtrr_wrmsr(MSR_MTRRdefType, deftype);
+ mtrr_wrmsr(MSR_MTRRdefType, state.def_type);

/* Enable caches */
write_cr0(read_cr0() & ~X86_CR0_CD);

/* Reenable CR4.PGE (also flushes the TLB) */
- if (pge)
+ if (state.pge)
write_cr4(read_cr4() | X86_CR4_PGE);
else if (use_invpcid)
invpcid_flush_all();
@@ -504,15 +503,15 @@ void mtrr_set_all(void)
{
unsigned long mask, count;
unsigned long flags;
- bool pge;
+ struct mtrr_pausing_state pausing_state;

local_irq_save(flags);
- pge = prepare_set();
+ pausing_state = mtrr_pause_caching();

/* Actually set the state */
- mask = set_mtrr_state();
+ mask = set_mtrr_state(&pausing_state.def_type);

- post_set(pge);
+ mtrr_resume_caching(pausing_state);
local_irq_restore(flags);

/* Use the atomic bitops to update the global mask */
@@ -537,12 +536,12 @@ void mtrr_set(
{
unsigned long flags;
struct mtrr_var_range *vr;
- bool pge;
+ struct mtrr_pausing_state pausing_state;

vr = &mtrr_state.var_ranges[reg];

local_irq_save(flags);
- pge = prepare_set();
+ pausing_state = mtrr_pause_caching();

if (size == 0) {
/* The invalid bit is kept in the mask, so we simply clear the
@@ -563,7 +562,7 @@ void mtrr_set(
mtrr_wrmsr(MSR_IA32_MTRR_PHYSMASK(reg), vr->mask);
}

- post_set(pge);
+ mtrr_resume_caching(pausing_state);
local_irq_restore(flags);
}

diff --git a/xen/arch/x86/include/asm/mtrr.h b/xen/arch/x86/include/asm/mtrr.h
index 25d442659d..82ea427ba0 100644
--- a/xen/arch/x86/include/asm/mtrr.h
+++ b/xen/arch/x86/include/asm/mtrr.h
@@ -66,6 +66,14 @@ extern uint8_t pat_type_2_pte_flags(uint8_t pat_type);
extern void mtrr_aps_sync_begin(void);
extern void mtrr_aps_sync_end(void);

+struct mtrr_pausing_state {
+ bool pge;
+ uint64_t def_type;
+};
+
+extern struct mtrr_pausing_state mtrr_pause_caching(void);
+extern void mtrr_resume_caching(struct mtrr_pausing_state state);
+
extern bool mtrr_var_range_msr_set(struct domain *d, struct mtrr_state *m,
uint32_t msr, uint64_t msr_content);
extern bool mtrr_fix_range_msr_set(struct domain *d, struct mtrr_state *m,
--
2.49.0

Sergii Dmytruk

May 13, 2025, 1:06:48 PM
to xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, Daniel P. Smith, Ross Philipson, trenchbo...@googlegroups.com
From: Krystian Hebel <krystia...@3mdeb.com>

In preparation for the TXT SENTER call, GRUB had to modify the MTRR
settings to map everything as UC except the SINIT ACM. The old values
are restored from the SLRT, where they were saved by the bootloader.
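
For reference, a paraphrased sketch of the SLRT fields consumed below
(the authoritative definitions live in xen/include/xen/slr-table.h; the
array bound is omitted here):

    struct txt_mtrr_pair {
        uint64_t mtrr_physbase;
        uint64_t mtrr_physmask;
    };

    struct txt_mtrr_state {
        uint64_t default_mem_type;
        uint64_t mtrr_vcnt;
        struct txt_mtrr_pair mtrr_pair[]; /* mtrr_vcnt entries */
    };

saved_misc_enable_msr and saved_bsp_mtrrs (of the type sketched above)
are members of struct slr_entry_intel_info.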

Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
Signed-off-by: Michał Żygowski <michal....@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/e820.c | 5 ++
xen/arch/x86/include/asm/intel-txt.h | 3 ++
xen/arch/x86/intel-txt.c | 75 ++++++++++++++++++++++++++++
3 files changed, 83 insertions(+)

diff --git a/xen/arch/x86/e820.c b/xen/arch/x86/e820.c
index ca577c0bde..60f00e5259 100644
--- a/xen/arch/x86/e820.c
+++ b/xen/arch/x86/e820.c
@@ -11,6 +11,8 @@
#include <asm/mtrr.h>
#include <asm/msr.h>
#include <asm/guest.h>
+#include <asm/intel-txt.h>
+#include <asm/slaunch.h>

/*
* opt_mem: Limit maximum address of physical RAM.
@@ -442,6 +444,9 @@ static uint64_t __init mtrr_top_of_ram(void)
ASSERT(paddr_bits);
addr_mask = ((1ULL << paddr_bits) - 1) & PAGE_MASK;

+ if ( slaunch_active )
+ txt_restore_mtrrs(e820_verbose);
+
rdmsrl(MSR_MTRRcap, mtrr_cap);
rdmsrl(MSR_MTRRdefType, mtrr_def);

diff --git a/xen/arch/x86/include/asm/intel-txt.h b/xen/arch/x86/include/asm/intel-txt.h
index 0126f56a6c..f0ec580558 100644
--- a/xen/arch/x86/include/asm/intel-txt.h
+++ b/xen/arch/x86/include/asm/intel-txt.h
@@ -405,6 +405,9 @@ void txt_map_mem_regions(void);
/* Marks TXT-specific memory as used to avoid its corruption. */
void txt_reserve_mem_regions(void);

+/* Restores original MTRR values saved by a bootloader before starting DRTM. */
+void txt_restore_mtrrs(bool e820_verbose);
+
#endif /* __ASSEMBLY__ */

#endif /* ASM__X86__INTEL_TXT_H */
diff --git a/xen/arch/x86/intel-txt.c b/xen/arch/x86/intel-txt.c
index 67051b0917..92f3e48b49 100644
--- a/xen/arch/x86/intel-txt.c
+++ b/xen/arch/x86/intel-txt.c
@@ -10,6 +10,8 @@
#include <xen/types.h>
#include <asm/e820.h>
#include <asm/intel-txt.h>
+#include <asm/msr.h>
+#include <asm/mtrr.h>
#include <asm/slaunch.h>

static uint64_t __initdata txt_heap_base, txt_heap_size;
@@ -111,3 +113,76 @@ void __init txt_reserve_mem_regions(void)
E820_UNUSABLE);
BUG_ON(rc == 0);
}
+
+void __init txt_restore_mtrrs(bool e820_verbose)
+{
+ struct slr_entry_intel_info *intel_info;
+ uint64_t mtrr_cap, mtrr_def, base, mask;
+ unsigned int i;
+ uint64_t def_type;
+ struct mtrr_pausing_state pausing_state;
+
+ rdmsrl(MSR_MTRRcap, mtrr_cap);
+ rdmsrl(MSR_MTRRdefType, mtrr_def);
+
+ if ( e820_verbose )
+ {
+ printk("MTRRs set previously for SINIT ACM:\n");
+ printk(" MTRR cap: %"PRIx64" type: %"PRIx64"\n", mtrr_cap, mtrr_def);
+
+ for ( i = 0; i < (uint8_t)mtrr_cap; i++ )
+ {
+ rdmsrl(MSR_IA32_MTRR_PHYSBASE(i), base);
+ rdmsrl(MSR_IA32_MTRR_PHYSMASK(i), mask);
+
+ printk(" MTRR[%d]: base %"PRIx64" mask %"PRIx64"\n",
+ i, base, mask);
+ }
+ }
+
+ intel_info = (struct slr_entry_intel_info *)
+ slr_next_entry_by_tag(slaunch_get_slrt(), NULL, SLR_ENTRY_INTEL_INFO);
+
+ if ( (mtrr_cap & 0xFF) != intel_info->saved_bsp_mtrrs.mtrr_vcnt )
+ {
+ printk("Bootloader saved %ld MTRR values, but there should be %ld\n",
+ intel_info->saved_bsp_mtrrs.mtrr_vcnt, mtrr_cap & 0xFF);
+ /* Choose the smaller one to be on the safe side. */
+ mtrr_cap = (mtrr_cap & 0xFF) > intel_info->saved_bsp_mtrrs.mtrr_vcnt ?
+ intel_info->saved_bsp_mtrrs.mtrr_vcnt : mtrr_cap;
+ }
+
+ def_type = intel_info->saved_bsp_mtrrs.default_mem_type;
+ pausing_state = mtrr_pause_caching();
+
+ for ( i = 0; i < (uint8_t)mtrr_cap; i++ )
+ {
+ base = intel_info->saved_bsp_mtrrs.mtrr_pair[i].mtrr_physbase;
+ mask = intel_info->saved_bsp_mtrrs.mtrr_pair[i].mtrr_physmask;
+ wrmsrl(MSR_IA32_MTRR_PHYSBASE(i), base);
+ wrmsrl(MSR_IA32_MTRR_PHYSMASK(i), mask);
+ }
+
+ pausing_state.def_type = def_type;
+ mtrr_resume_caching(pausing_state);
+
+ if ( e820_verbose )
+ {
+ printk("Restored MTRRs:\n"); /* Printed by caller, mtrr_top_of_ram(). */
+
+ /*
+ * The caller, mtrr_top_of_ram(), skips printing MTRRs when they are
+ * disabled or the default type is WB, so print them here in those cases.
+ */
+ if ( !test_bit(11, &def_type) || ((uint8_t)def_type == X86_MT_WB) )
+ {
+ for ( i = 0; i < (uint8_t)mtrr_cap; i++ )
+ {
+ rdmsrl(MSR_IA32_MTRR_PHYSBASE(i), base);
+ rdmsrl(MSR_IA32_MTRR_PHYSMASK(i), mask);
+ printk(" MTRR[%d]: base %"PRIx64" mask %"PRIx64"\n",
+ i, base, mask);
+ }
+ }
+ }
+
+ /* Restore IA32_MISC_ENABLES */
+ wrmsrl(MSR_IA32_MISC_ENABLE, intel_info->saved_misc_enable_msr);
+}
--
2.49.0

Sergii Dmytruk

May 13, 2025, 1:06:51 PM
to xen-...@lists.xenproject.org, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini, trenchbo...@googlegroups.com
From: Krystian Hebel <krystia...@3mdeb.com>

The code comes from [1] and is licensed under GPL-2.0 license.
The initial version was a combination of:
- include/crypto/sha1.h
- include/crypto/sha1_base.h
- lib/crypto/sha1.c
- crypto/sha1_generic.c

Changes:
- includes, formatting, naming
- renames and splicing of some trivial functions that are called once
- dropping of `int` return values (only zero was ever returned)
- getting rid of references to `struct shash_desc`
- getting rid of macros
- getting rid of unnecessary function pointers

[1]: https://github.com/torvalds/linux/tree/afdab700f65e14070d8ab92175544b1c62b8bf03
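
The resulting interface is a single one-shot helper; a minimal usage
sketch (buf/len stand for an arbitrary message):

    uint8_t digest[SHA1_DIGEST_SIZE];

    sha1_hash(digest, buf, len);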

Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/include/xen/sha1.h | 12 +++
xen/lib/Makefile | 1 +
xen/lib/sha1.c | 218 +++++++++++++++++++++++++++++++++++++++++
3 files changed, 231 insertions(+)
create mode 100644 xen/include/xen/sha1.h
create mode 100644 xen/lib/sha1.c

diff --git a/xen/include/xen/sha1.h b/xen/include/xen/sha1.h
new file mode 100644
index 0000000000..085f750a6a
--- /dev/null
+++ b/xen/include/xen/sha1.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef XEN__SHA1_H
+#define XEN__SHA1_H
+
+#include <xen/inttypes.h>
+
+#define SHA1_DIGEST_SIZE 20
+
+void sha1_hash(uint8_t digest[SHA1_DIGEST_SIZE], const void *msg, size_t len);
+
+#endif /* XEN__SHA1_H */
diff --git a/xen/lib/Makefile b/xen/lib/Makefile
index 5ccb1e5241..fd4b9ece63 100644
--- a/xen/lib/Makefile
+++ b/xen/lib/Makefile
@@ -17,6 +17,7 @@ lib-y += memset.o
lib-y += muldiv64.o
lib-y += parse-size.o
lib-y += rbtree.o
+lib-$(CONFIG_X86) += sha1.o
lib-$(CONFIG_X86) += sha2-256.o
lib-y += sort.o
lib-y += strcasecmp.o
diff --git a/xen/lib/sha1.c b/xen/lib/sha1.c
new file mode 100644
index 0000000000..c7a464e2cf
--- /dev/null
+++ b/xen/lib/sha1.c
@@ -0,0 +1,218 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * SHA1 routine optimized to do word accesses rather than byte accesses,
+ * and to avoid unnecessary copies into the context array.
+ *
+ * This was based on the git SHA1 implementation.
+ */
+
+#include <xen/bitops.h>
+#include <xen/sha1.h>
+#include <xen/string.h>
+#include <xen/types.h>
+#include <xen/unaligned.h>
+
+/*
+ * If you have 32 registers or more, the compiler can (and should)
+ * try to change the array[] accesses into registers. However, on
+ * machines with less than ~25 registers, that won't really work,
+ * and at least GCC will make an unholy mess of it.
+ *
+ * So to avoid that mess which just slows things down, we force
+ * the stores to memory to actually happen (we might be better off
+ * with a 'W(t)=(val);asm("":"+m" (W(t))' there instead, as
+ * suggested by Artur Skawina - that will also make GCC unable to
+ * try to do the silly "optimize away loads" part because it won't
+ * see what the value will be).
+ *
+ * Ben Herrenschmidt reports that on PPC, the C version comes close
+ * to the optimized asm with this (ie on PPC you don't want that
+ * 'volatile', since there are lots of registers).
+ *
+ * On ARM we get the best code generation by forcing a full memory barrier
+ * between each SHA round, otherwise GCC happily gets wild with spilling and
+ * the stack frame size simply explode and performance goes down the drain.
+ */
+
+#define SHA1_BLOCK_SIZE 64
+#define SHA1_WORKSPACE_WORDS 16
+#define SHA1_WORKSPACE_MASK (SHA1_WORKSPACE_WORDS - 1)
+
+struct sha1_state {
+ uint32_t state[SHA1_DIGEST_SIZE / 4];
+ uint64_t count;
+ uint8_t buffer[SHA1_BLOCK_SIZE];
+};
+
+/* This "rolls" over the 512-bit array */
+static void set_w(uint32_t w[SHA1_WORKSPACE_WORDS], size_t i, uint32_t val)
+{
+#ifdef CONFIG_X86
+ *(volatile uint32_t *)&w[i & SHA1_WORKSPACE_MASK] = val;
+#else
+ w[i & SHA1_WORKSPACE_MASK] = val;
+# ifdef CONFIG_ARM
+ barrier();
+# endif
+#endif
+}
+
+static uint32_t blend(const uint32_t w[SHA1_WORKSPACE_WORDS], size_t i)
+{
+/* This "rolls" over the 512-bit array */
+#define w(i) w[(i) & SHA1_WORKSPACE_MASK]
+
+ return rol32(w(i + 13) ^ w(i + 8) ^ w(i + 2) ^ w(i), 1);
+
+#undef w
+}
+
+/**
+ * sha1_transform - single block SHA1 transform
+ *
+ * @digest: 160 bit digest to update
+ * @data: 512 bits of data to hash
+ *
+ * This function executes SHA-1's internal compression function. It updates the
+ * 160-bit internal state (@digest) with a single 512-bit data block (@data).
+ */
+static void sha1_transform(uint32_t *digest, const uint8_t *data)
+{
+ uint32_t a, b, c, d, e, t;
+ uint32_t workspace[SHA1_WORKSPACE_WORDS];
+ unsigned int i = 0;
+
+ a = digest[0];
+ b = digest[1];
+ c = digest[2];
+ d = digest[3];
+ e = digest[4];
+
+ /* Round 1 - iterations 0-16 take their input from 'data' */
+ for ( ; i < 16; ++i ) {
+ t = get_unaligned_be32((uint32_t *)data + i);
+ set_w(workspace, i, t);
+ e += t + rol32(a, 5) + (((c ^ d) & b) ^ d) + 0x5a827999U;
+ b = ror32(b, 2);
+ t = e; e = d; d = c; c = b; b = a; a = t;
+ }
+
+ /* Round 1 - tail. Input from 512-bit mixing array */
+ for ( ; i < 20; ++i ) {
+ t = blend(workspace, i);
+ set_w(workspace, i, t);
+ e += t + rol32(a, 5) + (((c ^ d) & b) ^ d) + 0x5a827999U;
+ b = ror32(b, 2);
+ t = e; e = d; d = c; c = b; b = a; a = t;
+ }
+
+ /* Round 2 */
+ for ( ; i < 40; ++i ) {
+ t = blend(workspace, i);
+ set_w(workspace, i, t);
+ e += t + rol32(a, 5) + (b ^ c ^ d) + 0x6ed9eba1U;
+ b = ror32(b, 2);
+ t = e; e = d; d = c; c = b; b = a; a = t;
+ }
+
+ /* Round 3 */
+ for ( ; i < 60; ++i ) {
+ t = blend(workspace, i);
+ set_w(workspace, i, t);
+ e += t + rol32(a, 5) + ((b & c) + (d & (b ^ c))) + 0x8f1bbcdcU;
+ b = ror32(b, 2);
+ t = e; e = d; d = c; c = b; b = a; a = t;
+ }
+
+ /* Round 4 */
+ for ( ; i < 80; ++i ) {
+ t = blend(workspace, i);
+ set_w(workspace, i, t);
+ e += t + rol32(a, 5) + (b ^ c ^ d) + 0xca62c1d6U;
+ b = ror32(b, 2);
+ t = e; e = d; d = c; c = b; b = a; a = t;
+ }
+
+ digest[0] += a;
+ digest[1] += b;
+ digest[2] += c;
+ digest[3] += d;
+ digest[4] += e;
+}
+
+static void sha1_init(struct sha1_state *sctx)
+{
+ sctx->state[0] = 0x67452301UL;
+ sctx->state[1] = 0xefcdab89UL;
+ sctx->state[2] = 0x98badcfeUL;
+ sctx->state[3] = 0x10325476UL;
+ sctx->state[4] = 0xc3d2e1f0UL;
+ sctx->count = 0;
+}
+
+static void sha1_update(struct sha1_state *sctx, const uint8_t *msg, size_t len)
+{
+ unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+
+ sctx->count += len;
+
+ if ( (partial + len) >= SHA1_BLOCK_SIZE )
+ {
+ if ( partial )
+ {
+ int rem = SHA1_BLOCK_SIZE - partial;
+
+ memcpy(sctx->buffer + partial, msg, rem);
+ msg += rem;
+ len -= rem;
+
+ sha1_transform(sctx->state, sctx->buffer);
+ }
+
+ for ( ; len >= SHA1_BLOCK_SIZE; len -= SHA1_BLOCK_SIZE )
+ {
+ sha1_transform(sctx->state, msg);
+ msg += SHA1_BLOCK_SIZE;
+ }
+ partial = 0;
+ }
+
+ /* Remaining data becomes partial. */
+ memcpy(sctx->buffer + partial, msg, len);
+}
+
+static void sha1_final(struct sha1_state *sctx, void *out)
+{
+ const int bit_offset = SHA1_BLOCK_SIZE - sizeof(__be64);
+ __be64 *bits = (__be64 *)(sctx->buffer + bit_offset);
+ unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+
+ __be32 *digest = out;
+ unsigned int i;
+
+ sctx->buffer[partial++] = 0x80;
+ if ( partial > bit_offset )
+ {
+ memset(sctx->buffer + partial, 0x0, SHA1_BLOCK_SIZE - partial);
+ partial = 0;
+
+ sha1_transform(sctx->state, sctx->buffer);
+ }
+
+ memset(sctx->buffer + partial, 0x0, bit_offset - partial);
+ *bits = cpu_to_be64(sctx->count << 3);
+ sha1_transform(sctx->state, sctx->buffer);
+
+ /* Store state in digest */
+ for ( i = 0; i < SHA1_DIGEST_SIZE / sizeof(__be32); i++ )
+ put_unaligned_be32(sctx->state[i], &digest[i]);
+}
+
+void sha1_hash(uint8_t digest[SHA1_DIGEST_SIZE], const void *msg, size_t len)
+{
+ struct sha1_state sctx;
+
+ sha1_init(&sctx);
+ sha1_update(&sctx, msg, len);
+ sha1_final(&sctx, digest);
+}
--
2.49.0

Sergii Dmytruk

May 13, 2025, 1:06:53 PM
to xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, Daniel P. Smith, Ross Philipson, trenchbo...@googlegroups.com
From: Krystian Hebel <krystia...@3mdeb.com>

This file is built twice: once for early 32-bit mode without paging to
measure the MBI, and once for 64-bit code to measure the dom0 kernel and
initramfs. Since the MBI is small, the first case uses the TPM to do the
hashing. The kernel and initramfs, on the other hand, are too big:
sending them to the TPM would take multiple minutes.
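
As a usage sketch (a hypothetical call site; kernel_va/kernel_size are
illustrative), measuring a dom0 module from the 64-bit build would look
like:

    tpm_hash_extend(DRTM_LOC, DRTM_CODE_PCR, kernel_va, kernel_size,
                    DLE_EVTYPE_SLAUNCH, NULL, 0);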

Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/Makefile | 1 +
xen/arch/x86/boot/Makefile | 7 +-
xen/arch/x86/boot/head.S | 3 +
xen/arch/x86/include/asm/slaunch.h | 14 +
xen/arch/x86/include/asm/tpm.h | 19 ++
xen/arch/x86/slaunch.c | 7 +-
xen/arch/x86/tpm.c | 437 +++++++++++++++++++++++++++++
7 files changed, 486 insertions(+), 2 deletions(-)
create mode 100644 xen/arch/x86/include/asm/tpm.h
create mode 100644 xen/arch/x86/tpm.c

diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index 6f15c5a87a..2527a1909c 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -66,6 +66,7 @@ obj-y += spec_ctrl.o
obj-y += srat.o
obj-y += string.o
obj-y += time.o
+obj-y += tpm.o
obj-y += traps-setup.o
obj-y += traps.o
obj-$(CONFIG_INTEL) += tsx.o
diff --git a/xen/arch/x86/boot/Makefile b/xen/arch/x86/boot/Makefile
index 5471b966dd..55fbe155b6 100644
--- a/xen/arch/x86/boot/Makefile
+++ b/xen/arch/x86/boot/Makefile
@@ -6,6 +6,7 @@ obj32 := cmdline.32.o
obj32 += reloc.32.o
obj32 += reloc-trampoline.32.o
obj32 += slaunch-early.32.o
+obj32 += tpm-early.32.o

obj64 := reloc-trampoline.o

@@ -31,6 +32,10 @@ $(obj)/%.32.o: $(src)/%.c FORCE

$(obj)/slaunch-early.32.o: XEN_CFLAGS += -D__EARLY_SLAUNCH__

+$(obj)/tpm-early.32.o: XEN_CFLAGS += -D__EARLY_SLAUNCH__
+$(obj)/tpm-early.32.o: $(src)/../tpm.c FORCE
+ $(call if_changed_rule,cc_o_c)
+
orphan-handling-$(call ld-option,--orphan-handling=error) := --orphan-handling=error
LDFLAGS_DIRECT-$(call ld-option,--warn-rwx-segments) := --no-warn-rwx-segments
LDFLAGS_DIRECT += $(LDFLAGS_DIRECT-y)
@@ -84,7 +89,7 @@ cmd_combine = \
--bin1 $(obj)/built-in-32.base.bin \
--bin2 $(obj)/built-in-32.offset.bin \
--map $(obj)/built-in-32.base.map \
- --exports cmdline_parse_early,reloc,reloc_trampoline32,slaunch_early_init \
+ --exports cmdline_parse_early,reloc,reloc_trampoline32,slaunch_early_init,tpm_extend_mbi \
--output $@

targets += built-in-32.S
diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
index b4cf423c80..9a272155e9 100644
--- a/xen/arch/x86/boot/head.S
+++ b/xen/arch/x86/boot/head.S
@@ -527,6 +527,9 @@ __start:
/* Store MBI address in EBX where MB2 code expects it. */
mov %eax, %ebx

+ /* tpm_extend_mbi(mbi/eax, slrt/edx) using fastcall. */
+ call tpm_extend_mbi
+
/* Move magic number expected by Multiboot 2 to EAX and fall through. */
movl $MULTIBOOT2_BOOTLOADER_MAGIC, %eax

diff --git a/xen/arch/x86/include/asm/slaunch.h b/xen/arch/x86/include/asm/slaunch.h
index ddc1bd2361..4ff1f9c7d5 100644
--- a/xen/arch/x86/include/asm/slaunch.h
+++ b/xen/arch/x86/include/asm/slaunch.h
@@ -10,6 +10,20 @@
#include <xen/slr-table.h>
#include <xen/types.h>

+#define DRTM_LOC 2
+#define DRTM_CODE_PCR 17
+#define DRTM_DATA_PCR 18
+
+/*
+ * Secure Launch event log entry types. The TXT specification defines the
+ * base event value as 0x400 for DRTM events; use it regardless of the DRTM
+ * implementation for consistency.
+ */
+#define DLE_EVTYPE_BASE 0x400
+#define DLE_EVTYPE_SLAUNCH (DLE_EVTYPE_BASE + 0x102)
+#define DLE_EVTYPE_SLAUNCH_START (DLE_EVTYPE_BASE + 0x103)
+#define DLE_EVTYPE_SLAUNCH_END (DLE_EVTYPE_BASE + 0x104)
+
/* Indicates an active Secure Launch boot. */
extern bool slaunch_active;

diff --git a/xen/arch/x86/include/asm/tpm.h b/xen/arch/x86/include/asm/tpm.h
new file mode 100644
index 0000000000..a3c9e24b8f
--- /dev/null
+++ b/xen/arch/x86/include/asm/tpm.h
@@ -0,0 +1,19 @@
+/*
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ *
+ * Copyright (c) 2022-2025 3mdeb Sp. z o.o. All rights reserved.
+ */
+
+#ifndef ASM__X86__TPM_H
+#define ASM__X86__TPM_H
+
+#include <xen/types.h>
+
+#define TPM_TIS_BASE 0xfed40000U
+#define TPM_TIS_SIZE 0x00010000U
+
+void tpm_hash_extend(unsigned loc, unsigned pcr, const uint8_t *buf,
+ unsigned size, uint32_t type, const uint8_t *log_data,
+ unsigned log_data_size);
+
+#endif /* ASM__X86__TPM_H */
diff --git a/xen/arch/x86/slaunch.c b/xen/arch/x86/slaunch.c
index ac3b43942b..5f91fe5ad0 100644
--- a/xen/arch/x86/slaunch.c
+++ b/xen/arch/x86/slaunch.c
@@ -13,6 +13,7 @@
#include <asm/intel-txt.h>
#include <asm/page.h>
#include <asm/slaunch.h>
+#include <asm/tpm.h>

/*
* These variables are assigned to by the code near Xen's entry point.
@@ -66,16 +67,20 @@ struct slr_table *__init slaunch_get_slrt(void)

void __init slaunch_map_mem_regions(void)
{
+ int rc;
void *evt_log_addr;
uint32_t evt_log_size;

+ rc = slaunch_map_l2(TPM_TIS_BASE, TPM_TIS_SIZE);
+ BUG_ON(rc != 0);
+
/* Vendor-specific part. */
txt_map_mem_regions();

find_evt_log(slaunch_get_slrt(), &evt_log_addr, &evt_log_size);
if ( evt_log_addr != NULL )
{
- int rc = slaunch_map_l2((uintptr_t)evt_log_addr, evt_log_size);
+ rc = slaunch_map_l2((uintptr_t)evt_log_addr, evt_log_size);
BUG_ON(rc != 0);
}
}
diff --git a/xen/arch/x86/tpm.c b/xen/arch/x86/tpm.c
new file mode 100644
index 0000000000..7fb19ce4fa
--- /dev/null
+++ b/xen/arch/x86/tpm.c
@@ -0,0 +1,437 @@
+/*
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ *
+ * Copyright (c) 2022-2025 3mdeb Sp. z o.o. All rights reserved.
+ */
+
+#include <xen/sha1.h>
+#include <xen/string.h>
+#include <xen/types.h>
+#include <asm/intel-txt.h>
+#include <asm/slaunch.h>
+#include <asm/tpm.h>
+
+#ifdef __EARLY_SLAUNCH__
+
+#ifdef __va
+#error "__va defined in non-paged mode!"
+#endif
+
+#define __va(x) _p(x)
+
+/* Implementation of slaunch_get_slrt() for early TPM code. */
+static uint32_t slrt_location;
+struct slr_table *slaunch_get_slrt(void)
+{
+ return __va(slrt_location);
+}
+
+/*
+ * The code is being compiled as a standalone binary without linking to any
+ * other part of Xen. Providing implementation of builtin functions in this
+ * case is necessary if compiler chooses to not use an inline builtin.
+ */
+void *(memcpy)(void *dest, const void *src, size_t n)
+{
+ const uint8_t *s = src;
+ uint8_t *d = dest;
+
+ while ( n-- )
+ *d++ = *s++;
+
+ return dest;
+}
+
+#else /* __EARLY_SLAUNCH__ */
+
+#include <xen/mm.h>
+#include <xen/pfn.h>
+
+#endif /* __EARLY_SLAUNCH__ */
+
+#define TPM_LOC_REG(loc, reg) (0x1000 * (loc) + (reg))
+
+#define TPM_ACCESS_(x) TPM_LOC_REG(x, 0x00)
+#define ACCESS_REQUEST_USE (1 << 1)
+#define ACCESS_ACTIVE_LOCALITY (1 << 5)
+#define TPM_INTF_CAPABILITY_(x) TPM_LOC_REG(x, 0x14)
+#define INTF_VERSION_MASK 0x70000000
+#define TPM_STS_(x) TPM_LOC_REG(x, 0x18)
+#define TPM_FAMILY_MASK 0x0C000000
+#define STS_DATA_AVAIL (1 << 4)
+#define STS_TPM_GO (1 << 5)
+#define STS_COMMAND_READY (1 << 6)
+#define STS_VALID (1 << 7)
+#define TPM_DATA_FIFO_(x) TPM_LOC_REG(x, 0x24)
+
+#define swap16(x) __builtin_bswap16(x)
+#define swap32(x) __builtin_bswap32(x)
+
+static inline volatile uint32_t tis_read32(unsigned reg)
+{
+ return *(volatile uint32_t *)__va(TPM_TIS_BASE + reg);
+}
+
+static inline volatile uint8_t tis_read8(unsigned reg)
+{
+ return *(volatile uint8_t *)__va(TPM_TIS_BASE + reg);
+}
+
+static inline void tis_write8(unsigned reg, uint8_t val)
+{
+ *(volatile uint8_t *)__va(TPM_TIS_BASE + reg) = val;
+}
+
+static inline void request_locality(unsigned loc)
+{
+ tis_write8(TPM_ACCESS_(loc), ACCESS_REQUEST_USE);
+ /* Check that locality was actually activated. */
+ while ( (tis_read8(TPM_ACCESS_(loc)) & ACCESS_ACTIVE_LOCALITY) == 0 );
+}
+
+static inline void relinquish_locality(unsigned loc)
+{
+ tis_write8(TPM_ACCESS_(loc), ACCESS_ACTIVE_LOCALITY);
+}
+
+static void send_cmd(unsigned loc, uint8_t *buf, unsigned i_size,
+ unsigned *o_size)
+{
+ /*
+ * Value of "data available" bit counts only when "valid" field is set as
+ * well.
+ */
+ const unsigned data_avail = STS_VALID | STS_DATA_AVAIL;
+
+ unsigned i;
+
+ /* Make sure TPM can accept a command. */
+ if ( (tis_read8(TPM_STS_(loc)) & STS_COMMAND_READY) == 0 )
+ {
+ /* Abort current command. */
+ tis_write8(TPM_STS_(loc), STS_COMMAND_READY);
+ /* Wait until TPM is ready for a new one. */
+ while ( (tis_read8(TPM_STS_(loc)) & STS_COMMAND_READY) == 0 );
+ }
+
+ for ( i = 0; i < i_size; i++ )
+ tis_write8(TPM_DATA_FIFO_(loc), buf[i]);
+
+ tis_write8(TPM_STS_(loc), STS_TPM_GO);
+
+ /* Wait for the first byte of response. */
+ while ( (tis_read8(TPM_STS_(loc)) & data_avail) != data_avail);
+
+ for ( i = 0; i < *o_size && tis_read8(TPM_STS_(loc)) & data_avail; i++ )
+ buf[i] = tis_read8(TPM_DATA_FIFO_(loc));
+
+ if ( i < *o_size )
+ *o_size = i;
+
+ tis_write8(TPM_STS_(loc), STS_COMMAND_READY);
+}
+
+static inline bool is_tpm12(void)
+{
+ /*
+ * If one of these conditions is true:
+ * - INTF_CAPABILITY_x.interfaceVersion is 0 (TIS <= 1.21)
+ * - INTF_CAPABILITY_x.interfaceVersion is 2 (TIS == 1.3)
+ * - STS_x.tpmFamily is 0
+ * we're dealing with TPM1.2.
+ */
+ uint32_t intf_version = tis_read32(TPM_INTF_CAPABILITY_(0))
+ & INTF_VERSION_MASK;
+ return (intf_version == 0x00000000 || intf_version == 0x20000000 ||
+ (tis_read32(TPM_STS_(0)) & TPM_FAMILY_MASK) == 0);
+}
+
+/****************************** TPM1.2 specific *******************************/
+#define TPM_ORD_Extend 0x00000014
+#define TPM_ORD_SHA1Start 0x000000A0
+#define TPM_ORD_SHA1Update 0x000000A1
+#define TPM_ORD_SHA1CompleteExtend 0x000000A3
+
+#define TPM_TAG_RQU_COMMAND 0x00C1
+#define TPM_TAG_RSP_COMMAND 0x00C4
+
+/* All fields of following structs are big endian. */
+struct tpm_cmd_hdr {
+ uint16_t tag;
+ uint32_t paramSize;
+ uint32_t ordinal;
+} __packed;
+
+struct tpm_rsp_hdr {
+ uint16_t tag;
+ uint32_t paramSize;
+ uint32_t returnCode;
+} __packed;
+
+struct extend_cmd {
+ struct tpm_cmd_hdr h;
+ uint32_t pcrNum;
+ uint8_t inDigest[SHA1_DIGEST_SIZE];
+} __packed;
+
+struct extend_rsp {
+ struct tpm_rsp_hdr h;
+ uint8_t outDigest[SHA1_DIGEST_SIZE];
+} __packed;
+
+struct sha1_start_cmd {
+ struct tpm_cmd_hdr h;
+} __packed;
+
+struct sha1_start_rsp {
+ struct tpm_rsp_hdr h;
+ uint32_t maxNumBytes;
+} __packed;
+
+struct sha1_update_cmd {
+ struct tpm_cmd_hdr h;
+ uint32_t numBytes; /* Must be a multiple of 64 */
+ uint8_t hashData[];
+} __packed;
+
+struct sha1_update_rsp {
+ struct tpm_rsp_hdr h;
+} __packed;
+
+struct sha1_complete_extend_cmd {
+ struct tpm_cmd_hdr h;
+ uint32_t pcrNum;
+ uint32_t hashDataSize; /* 0-64, inclusive */
+ uint8_t hashData[];
+} __packed;
+
+struct sha1_complete_extend_rsp {
+ struct tpm_rsp_hdr h;
+ uint8_t hashValue[SHA1_DIGEST_SIZE];
+ uint8_t outDigest[SHA1_DIGEST_SIZE];
+} __packed;
+
+struct TPM12_PCREvent {
+ uint32_t PCRIndex;
+ uint32_t Type;
+ uint8_t Digest[SHA1_DIGEST_SIZE];
+ uint32_t Size;
+ uint8_t Data[];
+};
+
+struct txt_ev_log_container_12 {
+ char Signature[20]; /* "TXT Event Container", null-terminated */
+ uint8_t Reserved[12];
+ uint8_t ContainerVerMajor;
+ uint8_t ContainerVerMinor;
+ uint8_t PCREventVerMajor;
+ uint8_t PCREventVerMinor;
+ uint32_t ContainerSize; /* Allocated size */
+ uint32_t PCREventsOffset;
+ uint32_t NextEventOffset;
+ struct TPM12_PCREvent PCREvents[];
+};
+
+#ifdef __EARLY_SLAUNCH__
+/*
+ * TPM1.2 is required to support commands of up to 1101 bytes, vendors rarely
+ * go above that. Limit maximum size of block of data to be hashed to 1024.
+ */
+#define MAX_HASH_BLOCK 1024
+#define CMD_RSP_BUF_SIZE (sizeof(struct sha1_update_cmd) + MAX_HASH_BLOCK)
+
+union cmd_rsp {
+ struct sha1_start_cmd start_c;
+ struct sha1_start_rsp start_r;
+ struct sha1_update_cmd update_c;
+ struct sha1_update_rsp update_r;
+ struct sha1_complete_extend_cmd finish_c;
+ struct sha1_complete_extend_rsp finish_r;
+ uint8_t buf[CMD_RSP_BUF_SIZE];
+};
+
+/* Returns true on success. */
+static bool tpm12_hash_extend(unsigned loc, const uint8_t *buf, unsigned size,
+ unsigned pcr, uint8_t *out_digest)
+{
+ union cmd_rsp cmd_rsp;
+ unsigned max_bytes = MAX_HASH_BLOCK;
+ unsigned o_size = sizeof(cmd_rsp);
+ bool success = false;
+
+ request_locality(loc);
+
+ cmd_rsp.start_c = (struct sha1_start_cmd) {
+ .h.tag = swap16(TPM_TAG_RQU_COMMAND),
+ .h.paramSize = swap32(sizeof(struct sha1_start_cmd)),
+ .h.ordinal = swap32(TPM_ORD_SHA1Start),
+ };
+
+ send_cmd(loc, cmd_rsp.buf, sizeof(struct sha1_start_cmd), &o_size);
+ if ( o_size < sizeof(struct sha1_start_rsp) )
+ goto error;
+
+ if ( max_bytes > swap32(cmd_rsp.start_r.maxNumBytes) )
+ max_bytes = swap32(cmd_rsp.start_r.maxNumBytes);
+
+ while ( size > 64 )
+ {
+ if ( size < max_bytes )
+ max_bytes = size & ~(64 - 1);
+
+ o_size = sizeof(cmd_rsp);
+
+ cmd_rsp.update_c = (struct sha1_update_cmd){
+ .h.tag = swap16(TPM_TAG_RQU_COMMAND),
+ .h.paramSize = swap32(sizeof(struct sha1_update_cmd) + max_bytes),
+ .h.ordinal = swap32(TPM_ORD_SHA1Update),
+ .numBytes = swap32(max_bytes),
+ };
+ memcpy(cmd_rsp.update_c.hashData, buf, max_bytes);
+
+ send_cmd(loc, cmd_rsp.buf, sizeof(struct sha1_update_cmd) + max_bytes,
+ &o_size);
+ if ( o_size < sizeof(struct sha1_update_rsp) )
+ goto error;
+
+ size -= max_bytes;
+ buf += max_bytes;
+ }
+
+ o_size = sizeof(cmd_rsp);
+
+ cmd_rsp.finish_c = (struct sha1_complete_extend_cmd) {
+ .h.tag = swap16(TPM_TAG_RQU_COMMAND),
+ .h.paramSize = swap32(sizeof(struct sha1_complete_extend_cmd) + size),
+ .h.ordinal = swap32(TPM_ORD_SHA1CompleteExtend),
+ .pcrNum = swap32(pcr),
+ .hashDataSize = swap32(size),
+ };
+ memcpy(cmd_rsp.finish_c.hashData, buf, size);
+
+ send_cmd(loc, cmd_rsp.buf, sizeof(struct sha1_complete_extend_cmd) + size,
+ &o_size);
+ if ( o_size < sizeof(struct sha1_complete_extend_rsp) )
+ goto error;
+
+ if ( out_digest != NULL )
+ memcpy(out_digest, cmd_rsp.finish_r.hashValue, SHA1_DIGEST_SIZE);
+
+ success = true;
+
+error:
+ relinquish_locality(loc);
+ return success;
+}
+
+#else
+
+union cmd_rsp {
+ struct extend_cmd extend_c;
+ struct extend_rsp extend_r;
+};
+
+/* Returns true on success. */
+static bool tpm12_hash_extend(unsigned loc, const uint8_t *buf, unsigned size,
+ unsigned pcr, uint8_t *out_digest)
+{
+ union cmd_rsp cmd_rsp;
+ unsigned o_size = sizeof(cmd_rsp);
+
+ sha1_hash(out_digest, buf, size);
+
+ request_locality(loc);
+
+ cmd_rsp.extend_c = (struct extend_cmd) {
+ .h.tag = swap16(TPM_TAG_RQU_COMMAND),
+ .h.paramSize = swap32(sizeof(struct extend_cmd)),
+ .h.ordinal = swap32(TPM_ORD_Extend),
+ .pcrNum = swap32(pcr),
+ };
+
+ memcpy(cmd_rsp.extend_c.inDigest, out_digest, SHA1_DIGEST_SIZE);
+
+ send_cmd(loc, (uint8_t *)&cmd_rsp, sizeof(struct extend_cmd), &o_size);
+
+ relinquish_locality(loc);
+
+ return (o_size >= sizeof(struct extend_rsp));
+}
+
+#endif /* __EARLY_SLAUNCH__ */
+
+static void *create_log_event12(struct txt_ev_log_container_12 *evt_log,
+ uint32_t evt_log_size, uint32_t pcr,
+ uint32_t type, const uint8_t *data,
+ unsigned data_size)
+{
+ struct TPM12_PCREvent *new_entry;
+
+ new_entry = (void *)(((uint8_t *)evt_log) + evt_log->NextEventOffset);
+
+ /*
+ * Check if there is enough space left for the new entry.
+ * Note: it is possible to introduce a gap in the event log if an entry with
+ * a big data_size is followed by another entry with smaller data. Maybe we
+ * should cap the event log size in such a case?
+ */
+ if ( evt_log->NextEventOffset + sizeof(struct TPM12_PCREvent) + data_size
+ > evt_log_size )
+ return NULL;
+
+ evt_log->NextEventOffset += sizeof(struct TPM12_PCREvent) + data_size;
+
+ new_entry->PCRIndex = pcr;
+ new_entry->Type = type;
+ new_entry->Size = data_size;
+
+ if ( data && data_size > 0 )
+ memcpy(new_entry->Data, data, data_size);
+
+ return new_entry->Digest;
+}
+
+/************************** end of TPM1.2 specific ****************************/
+
+void tpm_hash_extend(unsigned loc, unsigned pcr, const uint8_t *buf,
+ unsigned size, uint32_t type, const uint8_t *log_data,
+ unsigned log_data_size)
+{
+ void *evt_log_addr;
+ uint32_t evt_log_size;
+
+ find_evt_log(slaunch_get_slrt(), &evt_log_addr, &evt_log_size);
+ evt_log_addr = __va((uintptr_t)evt_log_addr);
+
+ if ( is_tpm12() )
+ {
+ uint8_t sha1_digest[SHA1_DIGEST_SIZE];
+
+ struct txt_ev_log_container_12 *evt_log = evt_log_addr;
+ void *entry_digest = create_log_event12(evt_log, evt_log_size, pcr,
+ type, log_data, log_data_size);
+
+ /* We still need to write computed hash somewhere. */
+ if ( entry_digest == NULL )
+ entry_digest = sha1_digest;
+
+ if ( !tpm12_hash_extend(loc, buf, size, pcr, entry_digest) )
+ {
+#ifndef __EARLY_SLAUNCH__
+ printk(XENLOG_ERR "Extending PCR%u failed\n", pcr);
+#endif
+ }
+ }
+}
+
+#ifdef __EARLY_SLAUNCH__
+void asmlinkage tpm_extend_mbi(uint32_t *mbi, uint32_t slrt_pa)
+{
+ /* Need this to implement slaunch_get_slrt() for early TPM code. */
+ slrt_location = slrt_pa;
+
+ /* MBI starts with uint32_t total_size. */
+ tpm_hash_extend(DRTM_LOC, DRTM_DATA_PCR, (uint8_t *)mbi, *mbi,
+ DLE_EVTYPE_SLAUNCH, NULL, 0);
+}
+#endif
--
2.49.0

Sergii Dmytruk

May 13, 2025, 1:06:56 PM
to xen-...@lists.xenproject.org, Daniel P. Smith, Ross Philipson, Jan Beulich, Andrew Cooper, Roger Pau Monné, trenchbo...@googlegroups.com
SHA1 and SHA256 are hard-coded here, but the TPM is queried to confirm
that it supports them. The addition of a TPM2.0 event log will
generalize the code further.
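
Support for each algorithm is probed by issuing TPM2_PCR_Extend against
TPM_RH_NULL: the TPM validates the digest list without touching any PCR.
Condensed from tpm_supports_hash() in this patch:

    struct tpm2_log_hashes one = {
        .count = 1,
        .hashes[0] = *hash,
    };

    /* rc == 0 means the TPM supports this algorithm. */
    rc = tpm20_pcr_extend(loc, /*pcr_handle=*/TPM_RH_NULL, &one);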

Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/tpm.c | 464 +++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 452 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/tpm.c b/xen/arch/x86/tpm.c
index 7fb19ce4fa..ed49fe3ccf 100644
--- a/xen/arch/x86/tpm.c
+++ b/xen/arch/x86/tpm.c
@@ -5,6 +5,7 @@
*/

#include <xen/sha1.h>
+#include <xen/sha2.h>
#include <xen/string.h>
#include <xen/types.h>
#include <asm/intel-txt.h>
@@ -31,6 +32,15 @@ struct slr_table *slaunch_get_slrt(void)
* other part of Xen. Providing implementations of builtin functions in this
* case is necessary if the compiler chooses not to use an inline builtin.
*/
+void *(memset)(void *s, int c, size_t n)
+{
+ uint8_t *d = s;
+
+ while ( n-- )
+ *d++ = c;
+
+ return s;
+}
void *(memcpy)(void *dest, const void *src, size_t n)
{
const uint8_t *s = src;
@@ -146,14 +156,15 @@ static inline bool is_tpm12(void)
(tis_read32(TPM_STS_(0)) & TPM_FAMILY_MASK) == 0);
}

-/****************************** TPM1.2 specific *******************************/
-#define TPM_ORD_Extend 0x00000014
-#define TPM_ORD_SHA1Start 0x000000A0
-#define TPM_ORD_SHA1Update 0x000000A1
-#define TPM_ORD_SHA1CompleteExtend 0x000000A3
+/****************************** TPM1.2 & TPM2.0 *******************************/

-#define TPM_TAG_RQU_COMMAND 0x00C1
-#define TPM_TAG_RSP_COMMAND 0x00C4
+/*
+ * TPM1.2 is required to support commands of up to 1101 bytes, vendors rarely
+ * go above that. Limit maximum size of block of data to be hashed to 1024.
+ *
+ * TPM2.0 should support hashing of at least 1024 bytes.
+ */
+#define MAX_HASH_BLOCK 1024

/* All fields of following structs are big endian. */
struct tpm_cmd_hdr {
@@ -168,6 +179,17 @@ struct tpm_rsp_hdr {
uint32_t returnCode;
} __packed;

+/****************************** TPM1.2 specific *******************************/
+
+#define TPM_ORD_Extend 0x00000014
+#define TPM_ORD_SHA1Start 0x000000A0
+#define TPM_ORD_SHA1Update 0x000000A1
+#define TPM_ORD_SHA1CompleteExtend 0x000000A3
+
+#define TPM_TAG_RQU_COMMAND 0x00C1
+#define TPM_TAG_RSP_COMMAND 0x00C4
+
+/* All fields of following structs are big endian. */
struct extend_cmd {
struct tpm_cmd_hdr h;
uint32_t pcrNum;
@@ -233,11 +255,6 @@ struct txt_ev_log_container_12 {
};

#ifdef __EARLY_SLAUNCH__
-/*
- * TPM1.2 is required to support commands of up to 1101 bytes, vendors rarely
- * go above that. Limit maximum size of block of data to be hashed to 1024.
- */
-#define MAX_HASH_BLOCK 1024
#define CMD_RSP_BUF_SIZE (sizeof(struct sha1_update_cmd) + MAX_HASH_BLOCK)

union cmd_rsp {
@@ -393,6 +410,400 @@ static void *create_log_event12(struct txt_ev_log_container_12 *evt_log,

/************************** end of TPM1.2 specific ****************************/

+/****************************** TPM2.0 specific *******************************/
+
+/*
+ * These constants are for TPM2.0 but don't have a distinct prefix to match
+ * names in the specification.
+ */
+
+#define TPM_HT_PCR 0x00
+
+#define TPM_RH_NULL 0x40000007
+#define TPM_RS_PW 0x40000009
+
+#define HR_SHIFT 24
+#define HR_PCR (TPM_HT_PCR << HR_SHIFT)
+
+#define TPM_ST_NO_SESSIONS 0x8001
+#define TPM_ST_SESSIONS 0x8002
+
+#define TPM_ALG_SHA1 0x0004
+#define TPM_ALG_SHA256 0x000b
+#define TPM_ALG_NULL 0x0010
+
+#define TPM2_PCR_Extend 0x00000182
+#define TPM2_PCR_HashSequenceStart 0x00000186
+#define TPM2_PCR_SequenceUpdate 0x0000015C
+#define TPM2_PCR_EventSequenceComplete 0x00000185
+
+#define PUT_BYTES(p, bytes, size) do { \
+ memcpy((p), (bytes), (size)); \
+ (p) += (size); \
+ } while ( 0 )
+
+#define PUT_16BIT(p, data) do { \
+ *(uint16_t *)(p) = swap16(data); \
+ (p) += 2; \
+ } while ( 0 )
+
+/* All fields of following structs are big endian. */
+struct tpm2_session_header {
+ uint32_t handle;
+ uint16_t nonceSize;
+ uint8_t nonce[0];
+ uint8_t attrs;
+ uint16_t hmacSize;
+ uint8_t hmac[0];
+} __packed;
+
+struct tpm2_extend_cmd {
+ struct tpm_cmd_hdr h;
+ uint32_t pcrHandle;
+ uint32_t sessionHdrSize;
+ struct tpm2_session_header pcrSession;
+ uint32_t hashCount;
+ uint8_t hashes[0];
+} __packed;
+
+struct tpm2_extend_rsp {
+ struct tpm_rsp_hdr h;
+} __packed;
+
+struct tpm2_sequence_start_cmd {
+ struct tpm_cmd_hdr h;
+ uint16_t hmacSize;
+ uint8_t hmac[0];
+ uint16_t hashAlg;
+} __packed;
+
+struct tpm2_sequence_start_rsp {
+ struct tpm_rsp_hdr h;
+ uint32_t sequenceHandle;
+} __packed;
+
+struct tpm2_sequence_update_cmd {
+ struct tpm_cmd_hdr h;
+ uint32_t sequenceHandle;
+ uint32_t sessionHdrSize;
+ struct tpm2_session_header session;
+ uint16_t dataSize;
+ uint8_t data[0];
+} __packed;
+
+struct tpm2_sequence_update_rsp {
+ struct tpm_rsp_hdr h;
+} __packed;
+
+struct tpm2_sequence_complete_cmd {
+ struct tpm_cmd_hdr h;
+ uint32_t pcrHandle;
+ uint32_t sequenceHandle;
+ uint32_t sessionHdrSize;
+ struct tpm2_session_header pcrSession;
+ struct tpm2_session_header sequenceSession;
+ uint16_t dataSize;
+ uint8_t data[0];
+} __packed;
+
+struct tpm2_sequence_complete_rsp {
+ struct tpm_rsp_hdr h;
+ uint32_t paramSize;
+ uint32_t hashCount;
+ uint8_t hashes[0];
+ /*
+ * Each hash is represented as:
+ * struct {
+ * uint16_t hashAlg;
+ * uint8_t hash[size of hashAlg];
+ * };
+ */
+} __packed;
+
+/*
+ * These two structures are for convenience; they don't correspond to anything
+ * in any spec.
+ */
+struct tpm2_log_hash {
+ uint16_t alg; /* TPM_ALG_* */
+ uint16_t size;
+ uint8_t *data; /* Non-owning reference to a buffer inside log entry. */
+};
+/* Should be more than enough for now and for a while to come. */
+#define MAX_HASH_COUNT 8
+struct tpm2_log_hashes {
+ uint32_t count;
+ struct tpm2_log_hash hashes[MAX_HASH_COUNT];
+};
+
+#ifdef __EARLY_SLAUNCH__
+
+union tpm2_cmd_rsp {
+ uint8_t b[sizeof(struct tpm2_sequence_update_cmd) + MAX_HASH_BLOCK];
+ struct tpm_cmd_hdr c;
+ struct tpm_rsp_hdr r;
+ struct tpm2_sequence_start_cmd start_c;
+ struct tpm2_sequence_start_rsp start_r;
+ struct tpm2_sequence_update_cmd update_c;
+ struct tpm2_sequence_update_rsp update_r;
+ struct tpm2_sequence_complete_cmd finish_c;
+ struct tpm2_sequence_complete_rsp finish_r;
+};
+
+static uint32_t tpm2_hash_extend(unsigned loc, const uint8_t *buf,
+ unsigned size, unsigned pcr,
+ struct tpm2_log_hashes *log_hashes)
+{
+ uint32_t seq_handle;
+ unsigned max_bytes = MAX_HASH_BLOCK;
+
+ union tpm2_cmd_rsp cmd_rsp;
+ unsigned o_size;
+ unsigned i;
+ uint8_t *p;
+ uint32_t rc;
+
+ cmd_rsp.start_c = (struct tpm2_sequence_start_cmd) {
+ .h.tag = swap16(TPM_ST_NO_SESSIONS),
+ .h.paramSize = swap32(sizeof(cmd_rsp.start_c)),
+ .h.ordinal = swap32(TPM2_PCR_HashSequenceStart),
+ .hashAlg = swap16(TPM_ALG_NULL), /* Compute all supported hashes. */
+ };
+
+ request_locality(loc);
+
+ o_size = sizeof(cmd_rsp);
+ send_cmd(loc, cmd_rsp.b, swap32(cmd_rsp.c.paramSize), &o_size);
+
+ if ( cmd_rsp.r.tag == swap16(TPM_ST_NO_SESSIONS) &&
+ cmd_rsp.r.paramSize == swap32(10) )
+ {
+ rc = swap32(cmd_rsp.r.returnCode);
+ if ( rc != 0 )
+ goto error;
+ }
+
+ seq_handle = swap32(cmd_rsp.start_r.sequenceHandle);
+
+ while ( size > 64 )
+ {
+ if ( size < max_bytes )
+ max_bytes = size & ~(64 - 1);
+
+ cmd_rsp.update_c = (struct tpm2_sequence_update_cmd) {
+ .h.tag = swap16(TPM_ST_SESSIONS),
+ .h.paramSize = swap32(sizeof(cmd_rsp.update_c) + max_bytes),
+ .h.ordinal = swap32(TPM2_PCR_SequenceUpdate),
+ .sequenceHandle = swap32(seq_handle),
+ .sessionHdrSize = swap32(sizeof(struct tpm2_session_header)),
+ .session.handle = swap32(TPM_RS_PW),
+ .dataSize = swap16(max_bytes),
+ };
+
+ memcpy(cmd_rsp.update_c.data, buf, max_bytes);
+
+ o_size = sizeof(cmd_rsp);
+ send_cmd(loc, cmd_rsp.b, swap32(cmd_rsp.c.paramSize), &o_size);
+
+ if ( cmd_rsp.r.tag == swap16(TPM_ST_NO_SESSIONS) &&
+ cmd_rsp.r.paramSize == swap32(10) )
+ {
+ rc = swap32(cmd_rsp.r.returnCode);
+ if ( rc != 0 )
+ goto error;
+ }
+
+ size -= max_bytes;
+ buf += max_bytes;
+ }
+
+ cmd_rsp.finish_c = (struct tpm2_sequence_complete_cmd) {
+ .h.tag = swap16(TPM_ST_SESSIONS),
+ .h.paramSize = swap32(sizeof(cmd_rsp.finish_c) + size),
+ .h.ordinal = swap32(TPM2_PCR_EventSequenceComplete),
+ .pcrHandle = swap32(HR_PCR + pcr),
+ .sequenceHandle = swap32(seq_handle),
+ .sessionHdrSize = swap32(sizeof(struct tpm2_session_header)*2),
+ .pcrSession.handle = swap32(TPM_RS_PW),
+ .sequenceSession.handle = swap32(TPM_RS_PW),
+ .dataSize = swap16(size),
+ };
+
+ memcpy(cmd_rsp.finish_c.data, buf, size);
+
+ o_size = sizeof(cmd_rsp);
+ send_cmd(loc, cmd_rsp.b, swap32(cmd_rsp.c.paramSize), &o_size);
+
+ if ( cmd_rsp.r.tag == swap16(TPM_ST_NO_SESSIONS) &&
+ cmd_rsp.r.paramSize == swap32(10) )
+ {
+ rc = swap32(cmd_rsp.r.returnCode);
+ if ( rc != 0 )
+ goto error;
+ }
+
+ p = cmd_rsp.finish_r.hashes;
+ for ( i = 0; i < swap32(cmd_rsp.finish_r.hashCount); ++i )
+ {
+ unsigned j;
+ uint16_t hash_type;
+
+ hash_type = swap16(*(uint16_t *)p);
+ p += sizeof(uint16_t);
+
+ for ( j = 0; j < log_hashes->count; ++j )
+ {
+ struct tpm2_log_hash *hash = &log_hashes->hashes[j];
+ if ( hash->alg == hash_type )
+ {
+ memcpy(hash->data, p, hash->size);
+ p += hash->size;
+ break;
+ }
+ }
+
+ if ( j == log_hashes->count )
+ /* Can't continue parsing without knowing hash size. */
+ break;
+ }
+
+ rc = 0;
+
+error:
+ relinquish_locality(loc);
+ return rc;
+}
+
+#else
+
+union tpm2_cmd_rsp {
+ /* Enough space for multiple hashes. */
+ uint8_t b[sizeof(struct tpm2_extend_cmd) + 1024];
+ struct tpm_cmd_hdr c;
+ struct tpm_rsp_hdr r;
+ struct tpm2_extend_cmd extend_c;
+ struct tpm2_extend_rsp extend_r;
+};
+
+static uint32_t tpm20_pcr_extend(unsigned loc, uint32_t pcr_handle,
+ const struct tpm2_log_hashes *log_hashes)
+{
+ union tpm2_cmd_rsp cmd_rsp;
+ unsigned o_size;
+ unsigned i;
+ uint8_t *p;
+
+ cmd_rsp.extend_c = (struct tpm2_extend_cmd) {
+ .h.tag = swap16(TPM_ST_SESSIONS),
+ .h.ordinal = swap32(TPM2_PCR_Extend),
+ .pcrHandle = swap32(pcr_handle),
+ .sessionHdrSize = swap32(sizeof(struct tpm2_session_header)),
+ .pcrSession.handle = swap32(TPM_RS_PW),
+ .hashCount = swap32(log_hashes->count),
+ };
+
+ p = cmd_rsp.extend_c.hashes;
+ for ( i = 0; i < log_hashes->count; ++i )
+ {
+ const struct tpm2_log_hash *hash = &log_hashes->hashes[i];
+
+ if ( p + sizeof(uint16_t) + hash->size > &cmd_rsp.b[sizeof(cmd_rsp)] )
+ {
+ printk(XENLOG_ERR "Hit TPM message size implementation limit: %ld\n",
+ sizeof(cmd_rsp));
+ return -1;
+ }
+
+ *(uint16_t *)p = swap16(hash->alg);
+ p += sizeof(uint16_t);
+
+ memcpy(p, hash->data, hash->size);
+ p += hash->size;
+ }
+
+ /* Fill in the command size (header plus marshaled hashes). */
+ cmd_rsp.extend_c.h.paramSize = swap32(sizeof(cmd_rsp.extend_c) +
+ (p - cmd_rsp.extend_c.hashes));
+
+ o_size = sizeof(cmd_rsp);
+ send_cmd(loc, cmd_rsp.b, swap32(cmd_rsp.c.paramSize), &o_size);
+
+ return swap32(cmd_rsp.r.returnCode);
+}
+
+static bool tpm_supports_hash(unsigned loc, const struct tpm2_log_hash *hash)
+{
+ uint32_t rc;
+ struct tpm2_log_hashes hashes = {
+ .count = 1,
+ .hashes[0] = *hash,
+ };
+
+ /*
+ * This is a valid way of checking hash support, using it to not implement
+ * TPM2_GetCapability().
+ */
+ rc = tpm20_pcr_extend(loc, /*pcr_handle=*/TPM_RH_NULL, &hashes);
+
+ return rc == 0;
+}
+
+static uint32_t tpm2_hash_extend(unsigned loc, const uint8_t *buf,
+ unsigned size, unsigned pcr,
+ const struct tpm2_log_hashes *log_hashes)
+{
+ uint32_t rc;
+ unsigned i;
+ struct tpm2_log_hashes supported_hashes = {0};
+
+ request_locality(loc);
+
+ for ( i = 0; i < log_hashes->count; ++i )
+ {
+ const struct tpm2_log_hash *hash = &log_hashes->hashes[i];
+ if ( !tpm_supports_hash(loc, hash) )
+ {
+ printk(XENLOG_WARNING "Skipped hash unsupported by TPM: %d\n",
+ hash->alg);
+ continue;
+ }
+
+ if ( hash->alg == TPM_ALG_SHA1 )
+ {
+ sha1_hash(hash->data, buf, size);
+ }
+ else if ( hash->alg == TPM_ALG_SHA256 )
+ {
+ sha2_256_digest(hash->data, buf, size);
+ }
+ else
+ {
+ /* This is called "OneDigest" in TXT Software Development Guide. */
+ memset(hash->data, 0, hash->size);
+ hash->data[0] = 1;
+ }
+
+ if ( supported_hashes.count == MAX_HASH_COUNT )
+ {
+ printk(XENLOG_ERR "Hit hash count implementation limit: %d\n",
+ MAX_HASH_COUNT);
+ return -1;
+ }
+
+ supported_hashes.hashes[supported_hashes.count] = *hash;
+ ++supported_hashes.count;
+ }
+
+ rc = tpm20_pcr_extend(loc, HR_PCR + pcr, &supported_hashes);
+ relinquish_locality(loc);
+
+ return rc;
+}
+
+#endif /* __EARLY_SLAUNCH__ */
+
+/************************** end of TPM2.0 specific ****************************/
+
void tpm_hash_extend(unsigned loc, unsigned pcr, const uint8_t *buf,
unsigned size, uint32_t type, const uint8_t *log_data,
unsigned log_data_size)
@@ -419,6 +830,35 @@ void tpm_hash_extend(unsigned loc, unsigned pcr, const uint8_t *buf,
{
#ifndef __EARLY_SLAUNCH__
printk(XENLOG_ERR "Extending PCR%u failed\n", pcr);
+#endif
+ }
+ } else {
+ uint8_t sha1_digest[SHA1_DIGEST_SIZE];
+ uint8_t sha256_digest[SHA2_256_DIGEST_SIZE];
+ uint32_t rc;
+
+ struct tpm2_log_hashes log_hashes = {
+ .count = 2,
+ .hashes = {
+ {
+ .alg = TPM_ALG_SHA1,
+ .size = SHA1_DIGEST_SIZE,
+ .data = sha1_digest,
+ },
+ {
+ .alg = TPM_ALG_SHA256,
+ .size = SHA2_256_DIGEST_SIZE,
+ .data = sha256_digest,
+ },
+ },
+ };
+
+ rc = tpm2_hash_extend(loc, buf, size, pcr, &log_hashes);
+ if ( rc != 0 )
+ {
+#ifndef __EARLY_SLAUNCH__
+ printk(XENLOG_ERR "Extending PCR%u failed with TPM error: 0x%08x\n",
+ pcr, rc);
#endif
}
}
--
2.49.0

Sergii Dmytruk

May 13, 2025, 1:06:59 PM
to xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, trenchbo...@googlegroups.com
From: Michał Żygowski <michal....@3mdeb.com>

Check whether IA32_FEATURE_CONTROL has the proper bits enabled to run
VMX inside SMX when slaunch is active.

Signed-off-by: Michał Żygowski <michal....@3mdeb.com>
---
xen/arch/x86/hvm/vmx/vmcs.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index a44475ae15..ef38903775 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -30,6 +30,7 @@
#include <asm/msr.h>
#include <asm/processor.h>
#include <asm/shadow.h>
+#include <asm/slaunch.h>
#include <asm/spec_ctrl.h>
#include <asm/tboot.h>
#include <asm/xstate.h>
@@ -724,7 +725,7 @@ static int _vmx_cpu_up(bool bsp)
bios_locked = !!(eax & IA32_FEATURE_CONTROL_LOCK);
if ( bios_locked )
{
- if ( !(eax & (tboot_in_measured_env()
+ if ( !(eax & (tboot_in_measured_env() || slaunch_active
? IA32_FEATURE_CONTROL_ENABLE_VMXON_INSIDE_SMX
: IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX)) )
{
--
2.49.0

Sergii Dmytruk

May 13, 2025, 1:07:03 PM
to xen-...@lists.xenproject.org, Daniel P. Smith, Ross Philipson, Jan Beulich, Andrew Cooper, Roger Pau Monné, trenchbo...@googlegroups.com
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/include/asm/intel-txt.h | 33 ++++++
xen/arch/x86/tpm.c | 169 ++++++++++++++++++++++-----
2 files changed, 175 insertions(+), 27 deletions(-)

diff --git a/xen/arch/x86/include/asm/intel-txt.h b/xen/arch/x86/include/asm/intel-txt.h
index f0ec580558..bc51d2d287 100644
--- a/xen/arch/x86/include/asm/intel-txt.h
+++ b/xen/arch/x86/include/asm/intel-txt.h
@@ -202,6 +202,39 @@ struct txt_sinit_mle_data {
/* Ext Data Elements */
} __packed;

+/* Types of extended data. */
+#define TXT_HEAP_EXTDATA_TYPE_END 0
+#define TXT_HEAP_EXTDATA_TYPE_BIOS_SPEC_VER 1
+#define TXT_HEAP_EXTDATA_TYPE_ACM 2
+#define TXT_HEAP_EXTDATA_TYPE_STM 3
+#define TXT_HEAP_EXTDATA_TYPE_CUSTOM 4
+#define TXT_HEAP_EXTDATA_TYPE_MADT 6
+#define TXT_HEAP_EXTDATA_TYPE_EVENT_LOG_POINTER2_1 8
+#define TXT_HEAP_EXTDATA_TYPE_MCFG 9
+#define TXT_HEAP_EXTDATA_TYPE_TPR_REQ 13
+#define TXT_HEAP_EXTDATA_TYPE_DTPR 14
+#define TXT_HEAP_EXTDATA_TYPE_CEDT 15
+
+/*
+ * Self-describing data structure that is used for extensions to TXT heap
+ * tables.
+ */
+struct txt_ext_data_element {
+ uint32_t type; /* One of TXT_HEAP_EXTDATA_TYPE_*. */
+ uint32_t size;
+ uint8_t data[0]; /* size bytes. */
+} __packed;
+
+/*
+ * Extended data describing TPM 2.0 log.
+ */
+struct heap_event_log_pointer_element2_1 {
+ uint64_t physical_address;
+ uint32_t allocated_event_container_size;
+ uint32_t first_record_offset;
+ uint32_t next_record_offset;
+} __packed;
+
/*
* Functions to extract data from the Intel TXT Heap Memory. The layout
* of the heap is as follows:
diff --git a/xen/arch/x86/tpm.c b/xen/arch/x86/tpm.c
index ed49fe3ccf..47a9edef50 100644
--- a/xen/arch/x86/tpm.c
+++ b/xen/arch/x86/tpm.c
@@ -536,6 +536,44 @@ struct tpm2_log_hashes {
struct tpm2_log_hash hashes[MAX_HASH_COUNT];
};

+struct tpm2_pcr_event_header {
+ uint32_t pcrIndex;
+ uint32_t eventType;
+ uint32_t digestCount;
+ uint8_t digests[0];
+ /*
+ * Each hash is represented as:
+ * struct {
+ * uint16_t hashAlg;
+ * uint8_t hash[size of hashAlg];
+ * };
+ */
+ /* uint32_t eventSize; */
+ /* uint8_t event[0]; */
+} __packed;
+
+struct tpm2_digest_sizes {
+ uint16_t algId;
+ uint16_t digestSize;
+} __packed;
+
+struct tpm2_spec_id_event {
+ uint32_t pcrIndex;
+ uint32_t eventType;
+ uint8_t digest[20];
+ uint32_t eventSize;
+ uint8_t signature[16];
+ uint32_t platformClass;
+ uint8_t specVersionMinor;
+ uint8_t specVersionMajor;
+ uint8_t specErrata;
+ uint8_t uintnSize;
+ uint32_t digestCount;
+ struct tpm2_digest_sizes digestSizes[0]; /* variable number of members */
+ /* uint8_t vendorInfoSize; */
+ /* uint8_t vendorInfo[vendorInfoSize]; */
+} __packed;
+
#ifdef __EARLY_SLAUNCH__

union tpm2_cmd_rsp {
@@ -769,19 +807,11 @@ static uint32_t tpm2_hash_extend(unsigned loc, const uint8_t *buf,
}

if ( hash->alg == TPM_ALG_SHA1 )
- {
sha1_hash(hash->data, buf, size);
- }
else if ( hash->alg == TPM_ALG_SHA256 )
- {
sha2_256_digest(hash->data, buf, size);
- }
else
- {
- /* This is called "OneDigest" in TXT Software Development Guide. */
- memset(hash->data, 0, hash->size);
- hash->data[0] = 1;
- }
+ /* create_log_event20() took care of initializing the digest. */;

if ( supported_hashes.count == MAX_HASH_COUNT )
{
@@ -802,6 +832,102 @@ static uint32_t tpm2_hash_extend(unsigned loc, const uint8_t *buf,

#endif /* __EARLY_SLAUNCH__ */

+static struct heap_event_log_pointer_element2_1 *find_evt_log_ext_data(void)
+{
+ struct txt_os_sinit_data *os_sinit;
+ struct txt_ext_data_element *ext_data;
+
+ os_sinit = txt_os_sinit_data_start(__va(read_txt_reg(TXTCR_HEAP_BASE)));
+ ext_data = (void *)((uint8_t *)os_sinit + sizeof(*os_sinit));
+
+ /*
+ * Find TXT_HEAP_EXTDATA_TYPE_EVENT_LOG_POINTER2_1 which is necessary to
+ * know where to put the next entry.
+ */
+ while ( ext_data->type != TXT_HEAP_EXTDATA_TYPE_END )
+ {
+ if ( ext_data->type == TXT_HEAP_EXTDATA_TYPE_EVENT_LOG_POINTER2_1 )
+ break;
+ ext_data = (void *)&ext_data->data[ext_data->size];
+ }
+
+ if ( ext_data->type == TXT_HEAP_EXTDATA_TYPE_END )
+ return NULL;
+
+ return (void *)&ext_data->data[0];
+}
+
+static struct tpm2_log_hashes
+create_log_event20(struct tpm2_spec_id_event *evt_log, uint32_t evt_log_size,
+ uint32_t pcr, uint32_t type, const uint8_t *data,
+ unsigned data_size)
+{
+ struct tpm2_log_hashes log_hashes = {0};
+
+ struct heap_event_log_pointer_element2_1 *log_ext_data;
+ struct tpm2_pcr_event_header *new_entry;
+ uint32_t entry_size;
+ unsigned i;
+ uint8_t *p;
+
+ log_ext_data = find_evt_log_ext_data();
+ if ( log_ext_data == NULL )
+ return log_hashes;
+
+ entry_size = sizeof(*new_entry);
+ for ( i = 0; i < evt_log->digestCount; ++i )
+ {
+ entry_size += sizeof(uint16_t); /* hash type */
+ entry_size += evt_log->digestSizes[i].digestSize;
+ }
+ entry_size += sizeof(uint32_t); /* data size field */
+ entry_size += data_size;
+
+ /*
+ * Check if there is enough space left for the new entry.
+ * Note: it is possible to introduce a gap in the event log if an entry with
+ * a big data_size is followed by another entry with smaller data. Maybe we
+ * should cap the event log size in such a case?
+ */
+ if ( log_ext_data->next_record_offset + entry_size > evt_log_size )
+ return log_hashes;
+
+ new_entry = (void *)((uint8_t *)evt_log + log_ext_data->next_record_offset);
+ log_ext_data->next_record_offset += entry_size;
+
+ new_entry->pcrIndex = pcr;
+ new_entry->eventType = type;
+ new_entry->digestCount = evt_log->digestCount;
+
+ p = &new_entry->digests[0];
+ for ( i = 0; i < evt_log->digestCount; ++i )
+ {
+ uint16_t alg = evt_log->digestSizes[i].algId;
+ uint16_t size = evt_log->digestSizes[i].digestSize;
+
+ *(uint16_t *)p = alg;
+ p += sizeof(uint16_t);
+
+ log_hashes.hashes[i].alg = alg;
+ log_hashes.hashes[i].size = size;
+ log_hashes.hashes[i].data = p;
+ p += size;
+
+ /* This is called "OneDigest" in TXT Software Development Guide. */
+ memset(log_hashes.hashes[i].data, 0, size);
+ log_hashes.hashes[i].data[0] = 1;
+ }
+ log_hashes.count = evt_log->digestCount;
+
+ *(uint32_t *)p = data_size;
+ p += sizeof(uint32_t);
+
+ if ( data && data_size > 0 )
+ memcpy(p, data, data_size);
+
+ return log_hashes;
+}
+
/************************** end of TPM2.0 specific ****************************/

void tpm_hash_extend(unsigned loc, unsigned pcr, const uint8_t *buf,
@@ -832,26 +958,15 @@ void tpm_hash_extend(unsigned loc, unsigned pcr, const uint8_t *buf,
printk(XENLOG_ERR "Extending PCR%u failed\n", pcr);
#endif
}
- } else {
- uint8_t sha1_digest[SHA1_DIGEST_SIZE];
- uint8_t sha256_digest[SHA2_256_DIGEST_SIZE];
+ }
+ else
+ {
uint32_t rc;

- struct tpm2_log_hashes log_hashes = {
- .count = 2,
- .hashes = {
- {
- .alg = TPM_ALG_SHA1,
- .size = SHA1_DIGEST_SIZE,
- .data = sha1_digest,
- },
- {
- .alg = TPM_ALG_SHA256,
- .size = SHA2_256_DIGEST_SIZE,
- .data = sha256_digest,
- },
- },
- };
+ struct tpm2_spec_id_event *evt_log = evt_log_addr;
+ struct tpm2_log_hashes log_hashes =
+ create_log_event20(evt_log, evt_log_size, pcr, type, log_data,
+ log_data_size);

rc = tpm2_hash_extend(loc, buf, size, pcr, &log_hashes);
if ( rc != 0 )
--
2.49.0

Sergii Dmytruk

May 13, 2025, 1:07:04 PM
to xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, trenchbo...@googlegroups.com
From: Krystian Hebel <krystia...@3mdeb.com>

This is the first step toward making parallel AP bring-up possible.
It should be enough for the pre-C code.

Parallel AP bring-up is necessary because TXT by design releases all APs
at once. In addition, it reduces the number of IPIs (and, more
importantly, the delays between them) required to start all logical
processors. This results in a significant reduction of boot time, even
when DRTM is not used, with the performance gain growing with the number
of logical CPUs.
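
The APIC-ID-to-stack lookup added to x86_64.S is equivalent to this C
sketch (identifiers as used by the assembly; apic_id is the value the
trampoline saved in %esp):

    unsigned int i;

    for ( i = 0; i < NR_CPUS; i++ )
        if ( x86_cpu_to_apicid[i] == apic_id )
            break;

    if ( i == NR_CPUS || stack_base[i] == NULL )
        for ( ; ; )
            asm volatile ( "hlt" ); /* No stack for this AP, park it. */

    rsp = (unsigned long)stack_base[i] + STACK_SIZE - CPUINFO_sizeof;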

Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/boot/head.S | 1 +
xen/arch/x86/boot/trampoline.S | 21 +++++++++++++++++++++
xen/arch/x86/boot/x86_64.S | 28 +++++++++++++++++++++++++++-
xen/arch/x86/include/asm/apicdef.h | 4 ++++
xen/arch/x86/include/asm/msr-index.h | 3 +++
xen/arch/x86/setup.c | 7 +++++++
6 files changed, 63 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
index 9a272155e9..7376fa85d5 100644
--- a/xen/arch/x86/boot/head.S
+++ b/xen/arch/x86/boot/head.S
@@ -8,6 +8,7 @@
#include <asm/page.h>
#include <asm/processor.h>
#include <asm/msr-index.h>
+#include <asm/apicdef.h>
#include <asm/cpufeature.h>
#include <asm/trampoline.h>

diff --git a/xen/arch/x86/boot/trampoline.S b/xen/arch/x86/boot/trampoline.S
index a92e399fbe..ed593acc46 100644
--- a/xen/arch/x86/boot/trampoline.S
+++ b/xen/arch/x86/boot/trampoline.S
@@ -71,6 +71,27 @@ trampoline_protmode_entry:
mov $X86_CR4_PAE,%ecx
mov %ecx,%cr4

+ /*
+ * Get APIC ID while we're in non-paged mode. Start by checking if
+ * x2APIC is enabled.
+ */
+ mov $MSR_APIC_BASE, %ecx
+ rdmsr
+ test $APIC_BASE_EXTD, %eax
+ jnz .Lx2apic
+
+ /* Not x2APIC, read from MMIO */
+ and $APIC_BASE_ADDR_MASK, %eax
+ mov APIC_ID(%eax), %esp
+ shr $24, %esp
+ jmp 1f
+
+.Lx2apic:
+ mov $(MSR_X2APIC_FIRST + (APIC_ID >> MSR_X2APIC_SHIFT)), %ecx
+ rdmsr
+ mov %eax, %esp
+1:
+
/* Load pagetable base register. */
mov $sym_offs(idle_pg_table),%eax
add bootsym_rel(trampoline_xen_phys_start,4,%eax)
diff --git a/xen/arch/x86/boot/x86_64.S b/xen/arch/x86/boot/x86_64.S
index 08ae97e261..ac33576d8f 100644
--- a/xen/arch/x86/boot/x86_64.S
+++ b/xen/arch/x86/boot/x86_64.S
@@ -15,7 +15,33 @@ ENTRY(__high_start)
mov $XEN_MINIMAL_CR4,%rcx
mov %rcx,%cr4

- mov stack_start(%rip),%rsp
+ test %ebx,%ebx
+ cmovz stack_start(%rip), %rsp
+ jz .L_stack_set
+
+ /* APs only: get stack base from APIC ID saved in %esp. */
+ mov $-1, %rax
+ lea x86_cpu_to_apicid(%rip), %rcx
+1:
+ add $1, %rax
+ cmp $NR_CPUS, %eax
+ jb 2f
+ hlt
+2:
+ cmp %esp, (%rcx, %rax, 4)
+ jne 1b
+
+ /* %eax is now Xen CPU index. */
+ lea stack_base(%rip), %rcx
+ mov (%rcx, %rax, 8), %rsp
+
+ test %rsp,%rsp
+ jnz 1f
+ hlt
+1:
+ add $(STACK_SIZE - CPUINFO_sizeof), %rsp
+
+.L_stack_set:

/* Reset EFLAGS (subsumes CLI and CLD). */
pushq $0
diff --git a/xen/arch/x86/include/asm/apicdef.h b/xen/arch/x86/include/asm/apicdef.h
index 63dab01dde..e093a2aa3c 100644
--- a/xen/arch/x86/include/asm/apicdef.h
+++ b/xen/arch/x86/include/asm/apicdef.h
@@ -121,6 +121,10 @@

#define MAX_IO_APICS 128

+#ifndef __ASSEMBLY__
+
extern bool x2apic_enabled;

+#endif /* !__ASSEMBLY__ */
+
#endif
diff --git a/xen/arch/x86/include/asm/msr-index.h b/xen/arch/x86/include/asm/msr-index.h
index 22d9e76e55..794cf44abe 100644
--- a/xen/arch/x86/include/asm/msr-index.h
+++ b/xen/arch/x86/include/asm/msr-index.h
@@ -169,6 +169,9 @@
#define MSR_X2APIC_FIRST 0x00000800
#define MSR_X2APIC_LAST 0x000008ff

+/* MSR offset can be obtained by shifting MMIO offset this number of bits to the right. */
+#define MSR_X2APIC_SHIFT 4
+
#define MSR_X2APIC_TPR 0x00000808
#define MSR_X2APIC_PPR 0x0000080a
#define MSR_X2APIC_EOI 0x0000080b
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 479d2d744e..8e79d4be23 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -2096,6 +2096,7 @@ void asmlinkage __init noreturn __start_xen(void)
*/
if ( !pv_shim )
{
+ /* Separate loop to make parallel AP bringup possible. */
for_each_present_cpu ( i )
{
/* Set up cpu_to_node[]. */
@@ -2103,6 +2104,12 @@ void asmlinkage __init noreturn __start_xen(void)
/* Set up node_to_cpumask based on cpu_to_node[]. */
numa_add_cpu(i);

+ if ( stack_base[i] == NULL )
+ stack_base[i] = cpu_alloc_stack(i);
+ }
+
+ for_each_present_cpu ( i )
+ {
if ( (park_offline_cpus || num_online_cpus() < max_cpus) &&
!cpu_online(i) )
{
--
2.49.0

Sergii Dmytruk

May 13, 2025, 1:07:08 PM
to xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, Daniel P. Smith, Ross Philipson, trenchbo...@googlegroups.com
From: Krystian Hebel <krystia...@3mdeb.com>

On Intel TXT, APs are started in one of two ways, depending on the ACM,
which reports the supported method in its information table. In both
cases, all APs are started simultaneously after the BSP requests them to
do so. The two possible ways are:
- the GETSEC[WAKEUP] instruction,
- a MONITOR address.

GETSEC[WAKEUP] requires version >= 7 of the SINIT to MLE Data table, but
there is no clear mapping of that version to processor families and it's
not known which CPUs actually use it. It could have been designed for
TXT support on CPUs that lack MONITOR/MWAIT, because GETSEC[WAKEUP]
seems to be more complicated, in software and hardware alike.

This patch implements only the MONITOR approach; GETSEC[WAKEUP] support
will be added later, once more details and means of testing are
available, and if there is a practical need for it.

With this patch, every AP goes through the assembly part, and only in
start_secondary() in C do they re-enter MONITOR/MWAIT if they are not
the AP that was asked to boot. The same address is reused for
simplicity, and on the next wakeup call APs don't have to go through the
assembly part again (GDT, paging, stack setup).
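
For reference, the JOIN structure that wake_aps_in_txt() below fills in
as a plain uint32_t[4] has the following layout (a sketch only, with a
hypothetical struct name; the patch intentionally uses a bare array):

    /* TXT MLE JOIN structure, consumed when waking the APs. */
    struct txt_mle_join {
        uint32_t gdt_limit;   /* limit of the GDT for APs */
        uint32_t gdt_base;    /* physical base of the GDT */
        uint32_t seg_sel;     /* CS selector for APs; DS = CS + 8 */
        uint32_t entry_point; /* 32-bit physical entry point (EIP) */
    };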

Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/boot/trampoline.S | 19 ++++++++-
xen/arch/x86/include/asm/intel-txt.h | 6 +++
xen/arch/x86/include/asm/processor.h | 1 +
xen/arch/x86/smpboot.c | 63 ++++++++++++++++++++++++++++
4 files changed, 88 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/boot/trampoline.S b/xen/arch/x86/boot/trampoline.S
index ed593acc46..8cd9881828 100644
--- a/xen/arch/x86/boot/trampoline.S
+++ b/xen/arch/x86/boot/trampoline.S
@@ -58,6 +58,16 @@ GLOBAL(entry_SIPI16)
ljmpl $BOOT_CS32,$bootsym_rel(trampoline_protmode_entry,6)

.code32
+GLOBAL(txt_ap_entry)
+ /*
+ * APs enter here in protected mode without paging. The GDT is set in
+ * the JOIN structure and points to trampoline_gdt. Interrupts are
+ * disabled by TXT (including NMI and SMI), so the IDT doesn't matter at
+ * this point. The only thing left is to mark that we are an AP by
+ * storing a non-zero value in EBX.
+ */
+ mov $1, %ebx
+
trampoline_protmode_entry:
/* Set up a few descriptors: on entry only CS is guaranteed good. */
mov $BOOT_DS,%eax
@@ -143,7 +153,7 @@ start64:
.word 0
idt_48: .word 0, 0, 0 # base = limit = 0

-trampoline_gdt:
+GLOBAL(trampoline_gdt)
.word 0 /* 0x0000: unused (reused for GDTR) */
gdt_48:
.word .Ltrampoline_gdt_end - trampoline_gdt - 1
@@ -154,6 +164,13 @@ gdt_48:
.quad 0x00cf93000000ffff /* 0x0018: ring 0 data */
.quad 0x00009b000000ffff /* 0x0020: real-mode code @ BOOT_TRAMPOLINE */
.quad 0x000093000000ffff /* 0x0028: real-mode data @ BOOT_TRAMPOLINE */
+ /*
+ * Intel TXT requires these two in this exact order, which isn't compatible
+ * with the order required by SYSCALL; hence the duplicated entries. If the
+ * order ever changes, update the selector numbers in asm/intel-txt.h.
+ */
+ .quad 0x00cf9b000000ffff /* 0x0030: ring 0 code, 32-bit mode */
+ .quad 0x00cf93000000ffff /* 0x0038: ring 0 data */
.Ltrampoline_gdt_end:

/* Relocations for trampoline Real Mode segments. */
diff --git a/xen/arch/x86/include/asm/intel-txt.h b/xen/arch/x86/include/asm/intel-txt.h
index bc51d2d287..5d76443d35 100644
--- a/xen/arch/x86/include/asm/intel-txt.h
+++ b/xen/arch/x86/include/asm/intel-txt.h
@@ -82,6 +82,9 @@

#define SLAUNCH_BOOTLOADER_MAGIC 0x4c534254

+#define TXT_AP_BOOT_CS 0x0030
+#define TXT_AP_BOOT_DS 0x0038
+
#ifndef __ASSEMBLY__

#include <xen/slr-table.h>
@@ -96,6 +99,9 @@
#define _txt(x) __va(x)
#endif

+extern char txt_ap_entry[];
+extern uint32_t trampoline_gdt[];
+
/*
* Always use private space as some of registers are either read-only or not
* present in public space.
diff --git a/xen/arch/x86/include/asm/processor.h b/xen/arch/x86/include/asm/processor.h
index 75af7ea3c4..9957e3cb9e 100644
--- a/xen/arch/x86/include/asm/processor.h
+++ b/xen/arch/x86/include/asm/processor.h
@@ -473,6 +473,7 @@ void set_in_mcu_opt_ctrl(uint32_t mask, uint32_t val);
enum ap_boot_method {
AP_BOOT_NORMAL,
AP_BOOT_SKINIT,
+ AP_BOOT_TXT,
};
extern enum ap_boot_method ap_boot_method;

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 54207e6d88..5e53cce20b 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -29,6 +29,7 @@
#include <asm/flushtlb.h>
#include <asm/guest.h>
#include <asm/idt.h>
+#include <asm/intel-txt.h>
#include <asm/io_apic.h>
#include <asm/irq-vectors.h>
#include <asm/mc146818rtc.h>
@@ -37,6 +38,7 @@
#include <asm/mtrr.h>
#include <asm/prot-key.h>
#include <asm/setup.h>
+#include <asm/slaunch.h>
#include <asm/spec_ctrl.h>
#include <asm/tboot.h>
#include <asm/time.h>
@@ -325,6 +327,29 @@ void asmlinkage start_secondary(void *unused)
*/
unsigned int cpu = booting_cpu;

+ if ( ap_boot_method == AP_BOOT_TXT ) {
+ uint64_t misc_enable;
+ uint32_t my_apicid;
+ struct txt_sinit_mle_data *sinit_mle =
+ txt_sinit_mle_data_start(__va(read_txt_reg(TXTCR_HEAP_BASE)));
+
+ /* TXT released us with MONITOR disabled in IA32_MISC_ENABLE. */
+ rdmsrl(MSR_IA32_MISC_ENABLE, misc_enable);
+ wrmsrl(MSR_IA32_MISC_ENABLE,
+ misc_enable | MSR_IA32_MISC_ENABLE_MONITOR_ENABLE);
+
+ /* get_apic_id() reads from x2APIC if it thinks it is enabled. */
+ x2apic_ap_setup();
+ my_apicid = get_apic_id();
+
+ while ( my_apicid != x86_cpu_to_apicid[cpu] ) {
+ asm volatile ("monitor; xor %0,%0; mwait"
+ :: "a"(__va(sinit_mle->rlp_wakeup_addr)), "c"(0),
+ "d"(0) : "memory");
+ cpu = booting_cpu;
+ }
+ }
+
/* Critical region without IDT or TSS. Any fault is deadly! */

set_current(idle_vcpu[cpu]);
@@ -421,6 +446,28 @@ void asmlinkage start_secondary(void *unused)
startup_cpu_idle_loop();
}

+static int wake_aps_in_txt(void)
+{
+ struct txt_sinit_mle_data *sinit_mle =
+ txt_sinit_mle_data_start(__va(read_txt_reg(TXTCR_HEAP_BASE)));
+ uint32_t *wakeup_addr = __va(sinit_mle->rlp_wakeup_addr);
+
+ uint32_t join[4] = {
+ trampoline_gdt[1], /* GDT limit */
+ bootsym_phys(trampoline_gdt), /* GDT base */
+ TXT_AP_BOOT_CS, /* CS selector, DS = CS+8 */
+ bootsym_phys(txt_ap_entry) /* EIP */
+ };
+
+ write_txt_reg(TXTCR_MLE_JOIN, __pa(join));
+
+ smp_mb();
+
+ *wakeup_addr = 1;
+
+ return 0;
+}
+
static int wakeup_secondary_cpu(int phys_apicid, unsigned long start_eip)
{
unsigned long send_status = 0, accept_status = 0;
@@ -443,6 +490,9 @@ static int wakeup_secondary_cpu(int phys_apicid, unsigned long start_eip)
if ( tboot_in_measured_env() && !tboot_wake_ap(phys_apicid, start_eip) )
return 0;

+ if ( ap_boot_method == AP_BOOT_TXT )
+ return wake_aps_in_txt();
+
/*
* Be paranoid about clearing APIC errors.
*/
@@ -1150,6 +1200,13 @@ static struct notifier_block cpu_smpboot_nfb = {

void __init smp_prepare_cpus(void)
{
+ /*
+ * If the platform is performing a Secure Launch via TXT, secondary
+ * CPUs (APs) will need to be woken up in a TXT-specific way.
+ */
+ if ( slaunch_active && boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
+ ap_boot_method = AP_BOOT_TXT;
+
register_cpu_notifier(&cpu_smpboot_nfb);

mtrr_aps_sync_begin();
@@ -1418,6 +1475,12 @@ void __init smp_cpus_done(void)

mtrr_save_state();
mtrr_aps_sync_end();
+
+ /*
+ * After the initial startup the DRTM-specific method for booting APs
* should no longer be used unless the DRTM sequence is started again.
+ */
+ ap_boot_method = AP_BOOT_NORMAL;
}

void __init smp_intr_init(void)
--
2.49.0

Sergii Dmytruk

May 13, 2025, 1:07:11 PM
to xen-...@lists.xenproject.org, Daniel P. Smith, Ross Philipson, Jan Beulich, Andrew Cooper, Roger Pau Monné, trenchbo...@googlegroups.com
Go through entries in the DRTM policy of the SLRT to hash the data they
describe and extend it into the corresponding PCRs.

Addresses are zeroed when measuring platform-specific data to prevent
measurements from changing when the only thing that has changed is an
address. Addresses can vary due to the bootloader, firmware or user
doing something differently, or simply because GRUB grew in size due to
inclusion of more modules and ends up offsetting newly allocated memory.
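
A sketch of the pattern used for such entries (it mirrors the handling
of the Intel info entry in slaunch_measure_slrt() below):

    /* Measure a copy with the volatile address fields cleared. */
    struct slr_entry_intel_info tmp = *entry;

    tmp.boot_params_base = 0;  /* physical address, varies across boots */
    tmp.txt_heap = 0;          /* physical address, varies across boots */

    tpm_hash_extend(DRTM_LOC, DRTM_DATA_PCR, (uint8_t *)&tmp, sizeof(tmp),
                    DLE_EVTYPE_SLAUNCH, NULL, 0);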

Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/include/asm/slaunch.h | 14 ++
xen/arch/x86/setup.c | 15 ++
xen/arch/x86/slaunch.c | 213 +++++++++++++++++++++++++++++
3 files changed, 242 insertions(+)

diff --git a/xen/arch/x86/include/asm/slaunch.h b/xen/arch/x86/include/asm/slaunch.h
index 4ff1f9c7d5..1f98ec8de1 100644
--- a/xen/arch/x86/include/asm/slaunch.h
+++ b/xen/arch/x86/include/asm/slaunch.h
@@ -24,6 +24,8 @@
#define DLE_EVTYPE_SLAUNCH_START (DLE_EVTYPE_BASE + 0x103)
#define DLE_EVTYPE_SLAUNCH_END (DLE_EVTYPE_BASE + 0x104)

+struct boot_info;
+
/* Indicates an active Secure Launch boot. */
extern bool slaunch_active;

@@ -69,6 +71,18 @@ void slaunch_map_mem_regions(void);
/* Marks regions of memory as used to avoid their corruption. */
void slaunch_reserve_mem_regions(void);

+/* Measures essential parts of SLR table before making use of them. */
+void slaunch_measure_slrt(void);
+
+/*
+ * Takes measurements of DRTM policy entries except for MBI and SLRT which
+ * should have been measured by the time this is called. Also performs sanity
+ * checks of the policy and panics on failure. In particular, the function
+ * verifies that DRTM is consistent with modules obtained from MultibootInfo
+ * (MBI) and written to struct boot_info in setup.c.
+ */
+void slaunch_process_drtm_policy(const struct boot_info *bi);
+
/*
* This helper function is used to map memory using L2 page tables by aligning
* mapped regions to 2MB. This way page allocator (which at this point isn't
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 8e79d4be23..5d88cec6fc 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1425,6 +1425,13 @@ void asmlinkage __init noreturn __start_xen(void)
if ( slaunch_active )
{
slaunch_map_mem_regions();
+
+ /*
+ * SLRT needs to be measured here because it is used by init_e820(), the
+ * rest is measured slightly below by slaunch_process_drtm_policy().
+ */
+ slaunch_measure_slrt();
+
slaunch_reserve_mem_regions();
}

@@ -1446,6 +1453,14 @@ void asmlinkage __init noreturn __start_xen(void)
/* Create a temporary copy of the E820 map. */
memcpy(&boot_e820, &e820, sizeof(e820));

+ /*
+ * Process all yet unmeasured DRTM entries after E820 initialization to not
+ * do this while memory is uncached (too slow). This must also happen before
+ * modules are relocated or used.
+ */
+ if ( slaunch_active )
+ slaunch_process_drtm_policy(bi);
+
/* Early kexec reservation (explicit static start address). */
nr_pages = 0;
for ( i = 0; i < e820.nr_map; i++ )
diff --git a/xen/arch/x86/slaunch.c b/xen/arch/x86/slaunch.c
index 5f91fe5ad0..7cc1831f15 100644
--- a/xen/arch/x86/slaunch.c
+++ b/xen/arch/x86/slaunch.c
@@ -9,9 +9,11 @@
#include <xen/macros.h>
#include <xen/mm.h>
#include <xen/types.h>
+#include <asm/bootinfo.h>
#include <asm/e820.h>
#include <asm/intel-txt.h>
#include <asm/page.h>
+#include <asm/processor.h>
#include <asm/slaunch.h>
#include <asm/tpm.h>

@@ -107,6 +109,217 @@ void __init slaunch_reserve_mem_regions(void)
}
}

+void __init slaunch_measure_slrt(void)
+{
+ struct slr_table *slrt = slaunch_get_slrt();
+
+ if ( slrt->revision == 1 )
+ {
+ /*
+ * In revision one of the SLRT, only platform-specific info table is
+ * measured.
+ */
+ struct slr_entry_intel_info tmp;
+ struct slr_entry_intel_info *entry;
+
+ entry = (struct slr_entry_intel_info *)
+ slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_INTEL_INFO);
+ if ( entry == NULL )
+ panic("SLRT is missing Intel-specific information!\n");
+
+ tmp = *entry;
+ tmp.boot_params_base = 0;
+ tmp.txt_heap = 0;
+
+ tpm_hash_extend(DRTM_LOC, DRTM_DATA_PCR, (uint8_t *)&tmp,
+ sizeof(tmp), DLE_EVTYPE_SLAUNCH, NULL, 0);
+ }
+ else
+ {
+ /*
+ * slaunch_get_slrt() checks that the revision is valid, so we must not get
+ * here unless the code is wrong.
+ */
+ panic("Unhandled SLRT revision: %d!\n", slrt->revision);
+ }
+}
+
+static struct slr_entry_policy *__init slr_get_policy(struct slr_table *slrt)
+{
+ struct slr_entry_policy *policy;
+
+ policy = (struct slr_entry_policy *)
+ slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_DRTM_POLICY);
+ if ( policy == NULL )
+ panic("SLRT is missing DRTM policy!\n");
+
+ /* XXX: are newer revisions allowed? */
+ if ( policy->revision != SLR_POLICY_REVISION )
+ panic("DRTM policy in SLRT is of unsupported revision: %#04x!\n",
+ policy->revision);
+
+ return policy;
+}
+
+static void __init
+check_slrt_policy_entry(struct slr_policy_entry *policy_entry,
+ int idx,
+ struct slr_table *slrt)
+{
+ if ( policy_entry->entity_type != SLR_ET_SLRT )
+ panic("Expected DRTM policy entry #%d to describe SLRT, got %#04x!\n",
+ idx, policy_entry->entity_type);
+ if ( policy_entry->pcr != DRTM_DATA_PCR )
+ panic("SLRT was measured to PCR-%d instead of PCR-%d!\n", DRTM_DATA_PCR,
+ policy_entry->pcr);
+ if ( policy_entry->entity != (uint64_t)__pa(slrt) )
+ panic("SLRT address (%#08lx) differs from its DRTM entry (%#08lx)\n",
+ __pa(slrt), policy_entry->entity);
+}
+
+/* Returns number of policy entries that were already measured. */
+static unsigned int __init
+check_drtm_policy(struct slr_table *slrt,
+ struct slr_entry_policy *policy,
+ struct slr_policy_entry *policy_entry,
+ const struct boot_info *bi)
+{
+ uint32_t i;
+ uint32_t num_mod_entries;
+
+ if ( policy->nr_entries < 2 )
+ panic("DRTM policy in SLRT contains less than 2 entries (%d)!\n",
+ policy->nr_entries);
+
+ /*
+ * MBI policy entry must be the first one, so that measuring order matches
+ * policy order.
+ */
+ if ( policy_entry[0].entity_type != SLR_ET_MULTIBOOT2_INFO )
+ panic("First entry of DRTM policy in SLRT is not MBI: %#04x!\n",
+ policy_entry[0].entity_type);
+ if ( policy_entry[0].pcr != DRTM_DATA_PCR )
+ panic("MBI was measured to %d instead of %d PCR!\n", DRTM_DATA_PCR,
+ policy_entry[0].pcr);
+
+ /* SLRT policy entry must be the second one. */
+ check_slrt_policy_entry(&policy_entry[1], 1, slrt);
+
+ for ( i = 0; i < bi->nr_modules; i++ )
+ {
+ uint16_t j;
+ const struct boot_module *mod = &bi->mods[i];
+
+ if ( mod->relocated || mod->released )
+ {
+ panic("Multiboot module \"%s\" (at %d) was consumed before measurement\n",
+ (const char *)__va(mod->cmdline_pa), i);
+ }
+
+ for ( j = 2; j < policy->nr_entries; j++ )
+ {
+ if ( policy_entry[j].entity_type != SLR_ET_MULTIBOOT2_MODULE )
+ continue;
+
+ if ( policy_entry[j].entity == mod->start &&
+ policy_entry[j].size == mod->size )
+ break;
+ }
+
+ if ( j >= policy->nr_entries )
+ {
+ panic("Couldn't find Multiboot module \"%s\" (at %d) in DRTM of Secure Launch\n",
+ (const char *)__va(mod->cmdline_pa), i);
+ }
+ }
+
+ num_mod_entries = 0;
+ for ( i = 0; i < policy->nr_entries; i++ )
+ {
+ if ( policy_entry[i].entity_type == SLR_ET_MULTIBOOT2_MODULE )
+ num_mod_entries++;
+ }
+
+ if ( bi->nr_modules != num_mod_entries )
+ {
+ panic("Unexpected number of Multiboot modules: %d instead of %d\n",
+ (int)bi->nr_modules, (int)num_mod_entries);
+ }
+
+ /*
+ * MBI was measured in tpm_extend_mbi().
+ * SLRT was measured in slaunch_measure_slrt().
+ */
+ return 2;
+}
+
+void __init slaunch_process_drtm_policy(const struct boot_info *bi)
+{
+ struct slr_table *slrt;
+ struct slr_entry_policy *policy;
+ struct slr_policy_entry *policy_entry;
+ uint16_t i;
+ unsigned int measured;
+
+ slrt = slaunch_get_slrt();
+
+ policy = slr_get_policy(slrt);
+ policy_entry = (struct slr_policy_entry *)
+ ((uint8_t *)policy + sizeof(*policy));
+
+ measured = check_drtm_policy(slrt, policy, policy_entry, bi);
+ for ( i = 0; i < measured; i++ )
+ policy_entry[i].flags |= SLR_POLICY_FLAG_MEASURED;
+
+ for ( i = measured; i < policy->nr_entries; i++ )
+ {
+ int rc;
+ uint64_t start = policy_entry[i].entity;
+ uint64_t size = policy_entry[i].size;
+
+ /* No already measured entries are expected here. */
+ if ( policy_entry[i].flags & SLR_POLICY_FLAG_MEASURED )
+ panic("DRTM entry at %d was measured out of order!\n", i);
+
+ switch ( policy_entry[i].entity_type )
+ {
+ case SLR_ET_MULTIBOOT2_INFO:
+ panic("Duplicated MBI entry in DRTM of Secure Launch at %d\n", i);
+ case SLR_ET_SLRT:
+ panic("Duplicated SLRT entry in DRTM of Secure Launch at %d\n", i);
+
+ case SLR_ET_UNSPECIFIED:
+ case SLR_ET_BOOT_PARAMS:
+ case SLR_ET_SETUP_DATA:
+ case SLR_ET_CMDLINE:
+ case SLR_ET_UEFI_MEMMAP:
+ case SLR_ET_RAMDISK:
+ case SLR_ET_MULTIBOOT2_MODULE:
+ case SLR_ET_TXT_OS2MLE:
+ /* Measure this entry below. */
+ break;
+
+ case SLR_ET_UNUSED:
+ /* Skip this entry. */
+ continue;
+ }
+
+ if ( policy_entry[i].flags & SLR_POLICY_IMPLICIT_SIZE )
+ panic("Unexpected implicitly-sized DRTM entry of Secure Launch at %d (type %d, info: %s)\n",
+ i, policy_entry[i].entity_type, policy_entry[i].evt_info);
+
+ rc = slaunch_map_l2(start, size);
+ BUG_ON(rc != 0);
+
+ tpm_hash_extend(DRTM_LOC, policy_entry[i].pcr, __va(start), size,
+ DLE_EVTYPE_SLAUNCH, (uint8_t *)policy_entry[i].evt_info,
+ strnlen(policy_entry[i].evt_info,
+ TPM_EVENT_INFO_LENGTH));
+
+ policy_entry[i].flags |= SLR_POLICY_FLAG_MEASURED;
+ }
+}
+
int __init slaunch_map_l2(unsigned long paddr, unsigned long size)
{
unsigned long aligned_paddr = paddr & ~((1ULL << L2_PAGETABLE_SHIFT) - 1);
--
2.49.0

Sergii Dmytruk

May 13, 2025, 1:07:14 PM
to xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, trenchbo...@googlegroups.com
Secure Launch won't initiate DRTM on S3 resume (the code for starting
DRTM is not part of Xen), so abort any request to perform S3 suspend in
order to not lose the state of the DRTM PCRs.

Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/acpi/power.c | 8 ++++++++
1 file changed, 8 insertions(+)

diff --git a/xen/arch/x86/acpi/power.c b/xen/arch/x86/acpi/power.c
index 3196a33b19..81eb8f705a 100644
--- a/xen/arch/x86/acpi/power.c
+++ b/xen/arch/x86/acpi/power.c
@@ -28,6 +28,7 @@
#include <asm/irq.h>
#include <asm/microcode.h>
#include <asm/prot-key.h>
+#include <asm/slaunch.h>
#include <asm/spec_ctrl.h>
#include <asm/tboot.h>
#include <asm/trampoline.h>
@@ -357,6 +358,13 @@ int acpi_enter_sleep(const struct xenpf_enter_acpi_sleep *sleep)
PAGE_SIZE - acpi_sinfo.vector_width / 8)) )
return -EOPNOTSUPP;

+ /* Secure Launch won't initiate DRTM on S3 resume, so abort S3 suspend. */
+ if ( sleep->sleep_state == ACPI_STATE_S3 && slaunch_active )
+ {
+ printk(XENLOG_INFO "SLAUNCH: refusing to switch into ACPI S3 state\n");
+ return -EPERM;
+ }
+
if ( sleep->flags & XENPF_ACPI_SLEEP_EXTENDED )
{
if ( !acpi_sinfo.sleep_control.address ||
--
2.49.0

Sergii Dmytruk

May 13, 2025, 1:07:17 PM
to xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, Daniel P. Smith, Ross Philipson, trenchbo...@googlegroups.com
Use slr_entry_amd_info::boot_params_base on AMD with SKINIT to get the
MBI location.

Another thing of interest is the location of the SLRT, which is the
bootloader's data that follows SKL.

Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/boot/head.S | 38 ++++++++++++++++----
xen/arch/x86/boot/slaunch-early.c | 58 +++++++++++++++++++++++++++++++
2 files changed, 90 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
index 7376fa85d5..66e1a21033 100644
--- a/xen/arch/x86/boot/head.S
+++ b/xen/arch/x86/boot/head.S
@@ -354,10 +354,12 @@ cs32_switch:
jmp *%edi

/*
- * Entry point for TrenchBoot Secure Launch on Intel TXT platforms.
+ * Entry point for TrenchBoot Secure Launch, common for Intel TXT and
+ * AMD Secure Startup, but the entry state differs slightly.
*
+ * On Intel:
* CPU is in 32b protected mode with paging disabled. On entry:
- * - %ebx = %eip = MLE entry point,
+ * - %ebx = %eip = this entry point,
* - stack pointer is undefined,
* - CS is flat 4GB code segment,
* - DS, ES, SS, FS and GS are undefined according to TXT SDG, but this
@@ -375,13 +377,34 @@ cs32_switch:
* - trying to enter real mode results in reset
* - APs must be brought up by MONITOR or GETSEC[WAKEUP], depending on
* which is supported by a given SINIT ACM
+ *
+ * On AMD (as implemented by TrenchBoot's SKL):
+ * CPU is in 32b protected mode with paging disabled. On entry:
+ * - %ebx = %eip = this entry point,
+ * - %ebp holds base address of SKL
+ * - stack pointer is treated as undefined for parity with TXT,
+ * - CS is flat 4GB code segment,
+ * - DS, ES, SS are flat 4GB data segments, but treated as undefined for
+ * parity with TXT.
+ *
+ * Additional restrictions:
+ * - interrupts (including NMIs and SMIs) are disabled and must be
+ * enabled later
+ * - APs must be brought up by SIPI without an INIT
*/
slaunch_stub_entry:
/* Calculate the load base address. */
mov %ebx, %esi
sub $sym_offs(slaunch_stub_entry), %esi

- /* Mark Secure Launch boot protocol and jump to common entry. */
+ /* On AMD, %ebp holds the base address of SLB, save it for later. */
+ mov %ebp, %ebx
+
+ /*
+ * Mark Secure Launch boot protocol and jump to common entry. Note that
+ * all general purpose registers except %ebx and %esi are clobbered
+ * between here and .Lslaunch_proto.
+ */
mov $SLAUNCH_BOOTLOADER_MAGIC, %eax
jmp .Lset_stack

@@ -508,15 +531,18 @@ __start:
sub $8, %esp

push %esp /* pointer to output structure */
+ push %ebx /* Slaunch parameter on AMD */
lea sym_offs(__2M_rwdata_end), %ecx /* end of target image */
lea sym_offs(_start), %edx /* target base address */
mov %esi, %eax /* load base address */
/*
- * slaunch_early_init(load/eax, tgt/edx, tgt_end/ecx, ret/stk) using
- * fastcall calling convention.
+ * slaunch_early_init(load/eax, tgt/edx, tgt_end/ecx,
+ * slaunch/stk, ret/stk)
+ *
+ * Uses fastcall calling convention.
*/
call slaunch_early_init
- add $4, %esp /* pop the fourth parameter */
+ add $8, %esp /* pop last two parameters */

/* Move outputs of slaunch_early_init() from stack into registers. */
pop %eax /* physical MBI address */
diff --git a/xen/arch/x86/boot/slaunch-early.c b/xen/arch/x86/boot/slaunch-early.c
index b6f6deedc9..e886d1bba7 100644
--- a/xen/arch/x86/boot/slaunch-early.c
+++ b/xen/arch/x86/boot/slaunch-early.c
@@ -7,6 +7,20 @@
#include <xen/slr-table.h>
#include <xen/types.h>
#include <asm/intel-txt.h>
+#include <asm/x86-vendors.h>
+
+/*
+ * The AMD-defined structure layout for the SLB. The last two fields are
+ * SKL-specific.
+ */
+struct skinit_sl_header
+{
+ uint16_t skl_entry_point;
+ uint16_t length;
+ uint8_t reserved[62];
+ uint16_t skl_info_offset;
+ uint16_t bootloader_data_offset;
+} __packed;

struct early_init_results
{
@@ -14,9 +28,25 @@ struct early_init_results
uint32_t slrt_pa;
} __packed;

+static bool is_intel_cpu(void)
+{
+ /*
+ * asm/processor.h can't be included in early code, which means neither
+ * cpuid() function nor boot_cpu_data can be used here.
+ */
+ uint32_t eax, ebx, ecx, edx;
+ asm volatile ( "cpuid"
+ : "=a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx)
+ : "0" (0), "c" (0) );
+ return ebx == X86_VENDOR_INTEL_EBX
+ && ecx == X86_VENDOR_INTEL_ECX
+ && edx == X86_VENDOR_INTEL_EDX;
+}
+
void asmlinkage slaunch_early_init(uint32_t load_base_addr,
uint32_t tgt_base_addr,
uint32_t tgt_end_addr,
+ uint32_t slaunch_param,
struct early_init_results *result)
{
void *txt_heap;
@@ -26,6 +56,34 @@ void asmlinkage slaunch_early_init(uint32_t load_base_addr,
struct slr_entry_intel_info *intel_info;
uint32_t size = tgt_end_addr - tgt_base_addr;

+ if ( !is_intel_cpu() )
+ {
+ /*
+ * Not an Intel CPU. Currently the only other option is AMD with SKINIT
+ * and secure-kernel-loader (SKL).
+ */
+ struct slr_entry_amd_info *amd_info;
+ const struct skinit_sl_header *sl_header = (void *)slaunch_param;
+
+ /*
+ * slaunch_param holds a physical address of SLB.
+ * Bootloader's data is SLRT.
+ */
+ result->slrt_pa = slaunch_param + sl_header->bootloader_data_offset;
+ result->mbi_pa = 0;
+
+ slrt = (struct slr_table *)(uintptr_t)result->slrt_pa;
+
+ amd_info = (struct slr_entry_amd_info *)
+ slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_AMD_INFO);
+ /* Basic checks only, SKL checked and consumed the rest. */
+ if ( amd_info == NULL || amd_info->hdr.size != sizeof(*amd_info) )
+ return;
+
+ result->mbi_pa = amd_info->boot_params_base;
+ return;
+ }
+
txt_heap = txt_init();
os_mle = txt_os_mle_data_start(txt_heap);
os_sinit = txt_os_sinit_data_start(txt_heap);
--
2.49.0

Sergii Dmytruk

May 13, 2025, 1:07:20 PM
to xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, Daniel P. Smith, Ross Philipson, trenchbo...@googlegroups.com
This mostly involves not running Intel-specific code when on AMD.

There are only a few new AMD-specific implementation details:
- finding the SLB start and size, then mapping it and reserving it in e820
- managing the offset for adding the next TPM log entry (TXT-compatible
data prepared by SKL is stored inside the vendor data field of the TCG
header)

Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/e820.c | 2 +-
xen/arch/x86/slaunch.c | 90 ++++++++++++++++++++++++++++++++++--------
xen/arch/x86/tpm.c | 68 ++++++++++++++++++++++++++++++-
3 files changed, 141 insertions(+), 19 deletions(-)

diff --git a/xen/arch/x86/e820.c b/xen/arch/x86/e820.c
index 60f00e5259..cf13ab269a 100644
--- a/xen/arch/x86/e820.c
+++ b/xen/arch/x86/e820.c
@@ -444,7 +444,7 @@ static uint64_t __init mtrr_top_of_ram(void)
ASSERT(paddr_bits);
addr_mask = ((1ULL << paddr_bits) - 1) & PAGE_MASK;

- if ( slaunch_active )
+ if ( slaunch_active && boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
txt_restore_mtrrs(e820_verbose);

rdmsrl(MSR_MTRRcap, mtrr_cap);
diff --git a/xen/arch/x86/slaunch.c b/xen/arch/x86/slaunch.c
index 7cc1831f15..f0447f91d2 100644
--- a/xen/arch/x86/slaunch.c
+++ b/xen/arch/x86/slaunch.c
@@ -17,6 +17,10 @@
#include <asm/slaunch.h>
#include <asm/tpm.h>

+/* SLB is 64k, 64k-aligned */
+#define SKINIT_SLB_SIZE 0x10000
+#define SKINIT_SLB_ALIGN 0x10000
+
/*
* These variables are assigned to by the code near Xen's entry point.
*
@@ -39,6 +43,8 @@ struct slr_table *__init slaunch_get_slrt(void)

if (slrt == NULL) {
int rc;
+ bool intel_cpu = (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL);
+ uint16_t slrt_architecture = intel_cpu ? SLR_INTEL_TXT : SLR_AMD_SKINIT;

slrt = __va(slaunch_slrt);

@@ -50,9 +56,9 @@ struct slr_table *__init slaunch_get_slrt(void)
/* XXX: are newer revisions allowed? */
if ( slrt->revision != SLR_TABLE_REVISION )
panic("SLRT is of unsupported revision: %#04x!\n", slrt->revision);
- if ( slrt->architecture != SLR_INTEL_TXT )
- panic("SLRT is for unexpected architecture: %#04x!\n",
- slrt->architecture);
+ if ( slrt->architecture != slrt_architecture )
+ panic("SLRT is for unexpected architecture: %#04x != %#04x!\n",
+ slrt->architecture, slrt_architecture);
if ( slrt->size > slrt->max_size )
panic("SLRT is larger than its max size: %#08x > %#08x!\n",
slrt->size, slrt->max_size);
@@ -67,6 +73,23 @@ struct slr_table *__init slaunch_get_slrt(void)
return slrt;
}

+static uint32_t __init get_slb_start(void)
+{
+ /*
+ * The runtime computation relies on size being a power of 2 and equal to
+ * alignment. Make sure these assumptions hold.
+ */
+ BUILD_BUG_ON(SKINIT_SLB_SIZE != SKINIT_SLB_ALIGN);
+ BUILD_BUG_ON(SKINIT_SLB_SIZE == 0);
+ BUILD_BUG_ON((SKINIT_SLB_SIZE & (SKINIT_SLB_SIZE - 1)) != 0);
+
+ /*
+ * Rounding any address within SLB down to alignment gives SLB base and
+ * SLRT is inside SLB on AMD.
+ */
+ return slaunch_slrt & ~(SKINIT_SLB_SIZE - 1);
+}
+
void __init slaunch_map_mem_regions(void)
{
int rc;
@@ -77,7 +100,10 @@ void __init slaunch_map_mem_regions(void)
BUG_ON(rc != 0);

/* Vendor-specific part. */
- txt_map_mem_regions();
+ if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
+ txt_map_mem_regions();
+ else if ( boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
+ slaunch_map_l2(get_slb_start(), SKINIT_SLB_SIZE);

find_evt_log(slaunch_get_slrt(), &evt_log_addr, &evt_log_size);
if ( evt_log_addr != NULL )
@@ -95,7 +121,18 @@ void __init slaunch_reserve_mem_regions(void)
uint32_t evt_log_size;

/* Vendor-specific part. */
- txt_reserve_mem_regions();
+ if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
+ {
+ txt_reserve_mem_regions();
+ }
+ else if ( boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
+ {
+ uint64_t slb_start = get_slb_start();
+ uint64_t slb_end = slb_start + SKINIT_SLB_SIZE;
+ printk("SLAUNCH: reserving SLB (%#lx - %#lx)\n", slb_start, slb_end);
+ rc = reserve_e820_ram(&e820_raw, slb_start, slb_end);
+ BUG_ON(rc == 0);
+ }

find_evt_log(slaunch_get_slrt(), &evt_log_addr, &evt_log_size);
if ( evt_log_addr != NULL )
@@ -119,20 +156,41 @@ void __init slaunch_measure_slrt(void)
* In revision one of the SLRT, only platform-specific info table is
* measured.
*/
- struct slr_entry_intel_info tmp;
- struct slr_entry_intel_info *entry;
+ if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
+ {
+ struct slr_entry_intel_info tmp;
+ struct slr_entry_intel_info *entry;
+
+ entry = (struct slr_entry_intel_info *)
+ slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_INTEL_INFO);
+ if ( entry == NULL )
+ panic("SLRT is missing Intel-specific information!\n");

- entry = (struct slr_entry_intel_info *)
- slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_INTEL_INFO);
- if ( entry == NULL )
- panic("SLRT is missing Intel-specific information!\n");
+ tmp = *entry;
+ tmp.boot_params_base = 0;
+ tmp.txt_heap = 0;

- tmp = *entry;
- tmp.boot_params_base = 0;
- tmp.txt_heap = 0;
+ tpm_hash_extend(DRTM_LOC, DRTM_DATA_PCR, (uint8_t *)&tmp,
+ sizeof(tmp), DLE_EVTYPE_SLAUNCH, NULL, 0);
+ }
+ else if ( boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
+ {
+ struct slr_entry_amd_info tmp;
+ struct slr_entry_amd_info *entry;
+
+ entry = (struct slr_entry_amd_info *)
+ slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_AMD_INFO);
+ if ( entry == NULL )
+ panic("SLRT is missing AMD-specific information!\n");

- tpm_hash_extend(DRTM_LOC, DRTM_DATA_PCR, (uint8_t *)&tmp,
- sizeof(tmp), DLE_EVTYPE_SLAUNCH, NULL, 0);
+ tmp = *entry;
+ tmp.next = 0;
+ tmp.slrt_base = 0;
+ tmp.boot_params_base = 0;
+
+ tpm_hash_extend(DRTM_LOC, DRTM_DATA_PCR, (uint8_t *)&tmp,
+ sizeof(tmp), DLE_EVTYPE_SLAUNCH, NULL, 0);
+ }
}
else
{
diff --git a/xen/arch/x86/tpm.c b/xen/arch/x86/tpm.c
index 47a9edef50..e9ba073d55 100644
--- a/xen/arch/x86/tpm.c
+++ b/xen/arch/x86/tpm.c
@@ -11,6 +11,7 @@
#include <asm/intel-txt.h>
#include <asm/slaunch.h>
#include <asm/tpm.h>
+#include <asm/x86-vendors.h>

#ifdef __EARLY_SLAUNCH__

@@ -52,11 +53,31 @@ void *(memcpy)(void *dest, const void *src, size_t n)
return dest;
}

+static bool is_amd_cpu(void)
+{
+ /*
+ * asm/processor.h can't be included in early code, which means neither
+ * cpuid() function nor boot_cpu_data can be used here.
+ */
+ uint32_t eax, ebx, ecx, edx;
+ asm volatile ( "cpuid"
+ : "=a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx)
+ : "0" (0), "c" (0) );
+ return ebx == X86_VENDOR_AMD_EBX
+ && ecx == X86_VENDOR_AMD_ECX
+ && edx == X86_VENDOR_AMD_EDX;
+}
+
#else /* __EARLY_SLAUNCH__ */

#include <xen/mm.h>
#include <xen/pfn.h>

+static bool is_amd_cpu(void)
+{
+ return boot_cpu_data.x86_vendor == X86_VENDOR_AMD;
+}
+
#endif /* __EARLY_SLAUNCH__ */

#define TPM_LOC_REG(loc, reg) (0x1000 * (loc) + (reg))
@@ -241,6 +262,21 @@ struct TPM12_PCREvent {
uint8_t Data[];
};

+struct tpm1_spec_id_event {
+ uint32_t pcrIndex;
+ uint32_t eventType;
+ uint8_t digest[20];
+ uint32_t eventSize;
+ uint8_t signature[16];
+ uint32_t platformClass;
+ uint8_t specVersionMinor;
+ uint8_t specVersionMajor;
+ uint8_t specErrata;
+ uint8_t uintnSize;
+ uint8_t vendorInfoSize;
+ uint8_t vendorInfo[0]; /* variable number of members */
+} __packed;
+
struct txt_ev_log_container_12 {
char Signature[20]; /* "TXT Event Container", null-terminated */
uint8_t Reserved[12];
@@ -384,6 +420,16 @@ static void *create_log_event12(struct txt_ev_log_container_12 *evt_log,
{
struct TPM12_PCREvent *new_entry;

+ if ( is_amd_cpu() )
+ {
+ /*
+ * On AMD, TXT-compatible structure is stored as vendor data of
+ * TCG-defined event log header.
+ */
+ struct tpm1_spec_id_event *spec_id = (void *)evt_log;
+ evt_log = (struct txt_ev_log_container_12 *)&spec_id->vendorInfo[0];
+ }
+
new_entry = (void *)(((uint8_t *)evt_log) + evt_log->NextEventOffset);

/*
@@ -832,11 +878,29 @@ static uint32_t tpm2_hash_extend(unsigned loc, const uint8_t *buf,

#endif /* __EARLY_SLAUNCH__ */

-static struct heap_event_log_pointer_element2_1 *find_evt_log_ext_data(void)
+static struct heap_event_log_pointer_element2_1 *
+find_evt_log_ext_data(struct tpm2_spec_id_event *evt_log)
{
struct txt_os_sinit_data *os_sinit;
struct txt_ext_data_element *ext_data;

+ if ( is_amd_cpu() )
+ {
+ /*
+ * Event log pointer is defined by TXT specification, but
+ * secure-kernel-loader provides a compatible structure in vendor data
+ * of the log.
+ */
+ const uint8_t *data_size =
+ (void *)&evt_log->digestSizes[evt_log->digestCount];
+
+ if ( *data_size != sizeof(struct heap_event_log_pointer_element2_1) )
+ return NULL;
+
+ /* Vendor data directly follows one-byte size. */
+ return (void *)(data_size + 1);
+ }
+
os_sinit = txt_os_sinit_data_start(__va(read_txt_reg(TXTCR_HEAP_BASE)));
ext_data = (void *)((uint8_t *)os_sinit + sizeof(*os_sinit));

@@ -870,7 +934,7 @@ create_log_event20(struct tpm2_spec_id_event *evt_log, uint32_t evt_log_size,
unsigned i;
uint8_t *p;

- log_ext_data = find_evt_log_ext_data();
+ log_ext_data = find_evt_log_ext_data(evt_log);
if ( log_ext_data == NULL )
return log_hashes;

--
2.49.0

Sergii Dmytruk

May 13, 2025, 1:07:24 PM
to xen-...@lists.xenproject.org, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini, Nicola Vetrini, Doug Goldstein, Daniel P. Smith, Marek Marczykowski-Górecki, Ross Philipson, trenchbo...@googlegroups.com
When running on an EFI-enabled system, Xen needs to have access to Boot
Services in order to initialize itself properly and reach a state in
which a dom0 kernel can operate without issues.

This means that DRTM must be started in the middle of Xen's
initialization process. This effect is achieved via a callback into the
bootloader (GRUB), which is responsible for initiating DRTM and
continuing Xen's initialization process. The latter is done by branching
on a flag in the Secure Launch entry point to switch back into long mode
before calling the same function which Xen would have executed as the
next step without DRTM.

Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
.gitignore | 1 +
.../eclair_analysis/ECLAIR/out_of_scope.ecl | 1 +
docs/hypervisor-guide/x86/how-xen-boots.rst | 10 +-
xen/arch/x86/Makefile | 9 +-
xen/arch/x86/boot/head.S | 124 +++++++++++++++++
xen/arch/x86/boot/x86_64.S | 14 +-
xen/arch/x86/efi/efi-boot.h | 88 +++++++++++-
xen/arch/x86/efi/fixmlehdr.c | 127 ++++++++++++++++++
xen/arch/x86/slaunch.c | 74 +++++++++-
xen/common/efi/boot.c | 4 +
xen/common/efi/runtime.c | 1 +
xen/include/xen/efi.h | 1 +
12 files changed, 441 insertions(+), 13 deletions(-)
create mode 100644 xen/arch/x86/efi/fixmlehdr.c

diff --git a/.gitignore b/.gitignore
index 53f5df0003..dab829d7e1 100644
--- a/.gitignore
+++ b/.gitignore
@@ -201,6 +201,7 @@ xen/.xen.elf32
xen/System.map
xen/arch/x86/efi.lds
xen/arch/x86/efi/check.efi
+xen/arch/x86/efi/fixmlehdr
xen/arch/x86/efi/mkreloc
xen/arch/x86/include/asm/asm-macros.h
xen/arch/*/xen.lds
diff --git a/automation/eclair_analysis/ECLAIR/out_of_scope.ecl b/automation/eclair_analysis/ECLAIR/out_of_scope.ecl
index 9bcec4c69d..a09cf5442c 100644
--- a/automation/eclair_analysis/ECLAIR/out_of_scope.ecl
+++ b/automation/eclair_analysis/ECLAIR/out_of_scope.ecl
@@ -19,6 +19,7 @@

-doc_begin="Build tools are out of scope."
-file_tag+={out_of_scope_tools,"^xen/tools/.*$"}
+-file_tag+={out_of_scope_tools,"^xen/arch/x86/efi/fixmlehdr\\.c$"}
-file_tag+={out_of_scope_tools,"^xen/arch/x86/efi/mkreloc\\.c$"}
-file_tag+={out_of_scope_tools,"^xen/arch/x86/boot/mkelf32\\.c$"}
-doc_end
diff --git a/docs/hypervisor-guide/x86/how-xen-boots.rst b/docs/hypervisor-guide/x86/how-xen-boots.rst
index 050fe9c61f..63f81a8198 100644
--- a/docs/hypervisor-guide/x86/how-xen-boots.rst
+++ b/docs/hypervisor-guide/x86/how-xen-boots.rst
@@ -55,10 +55,12 @@ If ``CONFIG_PVH_GUEST`` was selected at build time, an Elf note is included
which indicates the ability to use the PVH boot protocol, and registers
``__pvh_start`` as the entrypoint, entered in 32bit mode.

-A combination of Multiboot 2 and MLE headers is used to implement DRTM for
-legacy (BIOS) boot. The separate entry point is used mainly to differentiate
-from other kinds of boots. It moves a magic number to EAX before jumping into
-common startup code.
+A combination of Multiboot 2 and MLE headers is used to implement DRTM. The
+separate entry point is used mainly to differentiate from other kinds of boots.
+For a legacy (BIOS) boot, it moves a magic number to EAX before jumping into
+common startup code. For an EFI boot, it resumes execution of Xen.efi, which
+was paused by handing control to the part of the bootloader responsible for
+initiating the DRTM sequence.


xen.gz
diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index 2527a1909c..7c2c0b5e4c 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -89,6 +89,7 @@ extra-y += xen.lds

hostprogs-y += boot/mkelf32
hostprogs-y += efi/mkreloc
+hostprogs-y += efi/fixmlehdr

$(obj)/efi/mkreloc: HOSTCFLAGS += -I$(srctree)/include

@@ -140,6 +141,10 @@ $(TARGET): $(TARGET)-syms $(efi-y) $(obj)/boot/mkelf32

CFLAGS-$(XEN_BUILD_EFI) += -DXEN_BUILD_EFI

+ifeq ($(XEN_BUILD_EFI),y)
+XEN_AFLAGS += -DXEN_BUILD_EFI
+endif
+
$(TARGET)-syms: $(objtree)/prelink.o $(obj)/xen.lds
$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds $< $(build_id_linker) \
$(objtree)/common/symbols-dummy.o -o $(dot-target).0
@@ -209,7 +214,7 @@ note_file_option ?= $(note_file)

extra-$(XEN_BUILD_PE) += efi.lds
ifeq ($(XEN_BUILD_PE),y)
-$(TARGET).efi: $(objtree)/prelink.o $(note_file) $(obj)/efi.lds $(obj)/efi/relocs-dummy.o $(obj)/efi/mkreloc
+$(TARGET).efi: $(objtree)/prelink.o $(note_file) $(obj)/efi.lds $(obj)/efi/relocs-dummy.o $(obj)/efi/mkreloc $(obj)/efi/fixmlehdr
ifeq ($(CONFIG_DEBUG_INFO),y)
$(if $(filter --strip-debug,$(EFI_LDFLAGS)),echo,:) "Will strip debug info from $(@F)"
endif
@@ -236,6 +241,8 @@ endif
$(LD) $(call EFI_LDFLAGS,$(VIRT_BASE)) -T $(obj)/efi.lds $< \
$(dot-target).1r.o $(dot-target).1s.o $(orphan-handling-y) \
$(note_file_option) -o $@
+ # take image offset into account
+ $(obj)/efi/fixmlehdr $@ $(XEN_IMG_OFFSET)
$(NM) -pa --format=sysv $@ \
| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
> $@.map
diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
index 66e1a21033..5ec2a272a9 100644
--- a/xen/arch/x86/boot/head.S
+++ b/xen/arch/x86/boot/head.S
@@ -397,6 +397,12 @@ slaunch_stub_entry:
mov %ebx, %esi
sub $sym_offs(slaunch_stub_entry), %esi

+#ifdef XEN_BUILD_EFI
+ /* If the flag is already set, then Xen should continue execution. */
+ cmpb $0, sym_esi(slaunch_active)
+ jne slaunch_efi_jumpback
+#endif
+
/* On AMD, %ebp holds the base address of SLB, save it for later. */
mov %ebp, %ebx

@@ -836,6 +842,124 @@ trampoline_setup:
/* Jump into the relocated trampoline. */
lret

+#ifdef XEN_BUILD_EFI
+
+ /*
+ * The state matches that of slaunch_stub_entry above, but with %esi
+ * already initialized.
+ */
+slaunch_efi_jumpback:
+ lea STACK_SIZE - CPUINFO_sizeof + sym_esi(cpu0_stack), %esp
+
+ /* Prepare gdt and segments. */
+ add %esi, sym_esi(gdt_boot_base)
+ lgdt sym_esi(gdt_boot_descr)
+
+ mov $BOOT_DS, %ecx
+ mov %ecx, %ds
+ mov %ecx, %es
+ mov %ecx, %ss
+
+ push $BOOT_CS32
+ lea sym_esi(.Lgdt_is_set),%edx
+ push %edx
+ lret
+.Lgdt_is_set:
+
+ /*
+ * Stash the TSC as above, because it was zeroed when jumping into the
+ * bootloader so as not to interfere with measurements.
+ */
+ rdtsc
+ mov %eax, sym_esi(boot_tsc_stamp)
+ mov %edx, 4 + sym_esi(boot_tsc_stamp)
+
+ /*
+ * Clear the pagetables before use. We are loaded below 4GiB, which
+ * avoids the need to write the higher dword of each entry.
+ * Additionally, this ensures those dwords are actually zero and the
+ * mappings aren't manipulated from outside.
+ */
+ lea sym_esi(bootmap_start), %edi
+ lea sym_esi(bootmap_end), %ecx
+ sub %edi, %ecx
+ xor %eax, %eax
+ shr $2, %ecx
+ rep stosl
+
+ /* 1x L1 page, 512 entries mapping total of 2M. */
+ lea sym_esi(l1_bootmap), %edi
+ mov $512, %ecx
+ mov $(__PAGE_HYPERVISOR + 512 * PAGE_SIZE), %edx
+.Lfill_l1_identmap:
+ sub $PAGE_SIZE, %edx
+ /* Loop runs for ecx=[512..1] for entries [511..0], hence -8. */
+ mov %edx, -8(%edi,%ecx,8)
+ loop .Lfill_l1_identmap
+
+ /* 4x L2 pages, each page mapping 1G of RAM. */
+ lea sym_esi(l2_bootmap), %edi
+ /* 1st entry points to L1. */
+ lea (sym_offs(l1_bootmap) + __PAGE_HYPERVISOR)(%esi), %edx
+ mov %edx, (%edi)
+ /* Other entries are 2MB pages. */
+ mov $(4 * 512 - 1), %ecx
+ /*
+ * The value below should be 4GB + flags, which wouldn't fit in a 32b
+ * register. To avoid a warning from the assembler, 4GB is omitted here.
+ * The subtraction in the first iteration makes the value wrap around
+ * and point to 4GB - 2MB + flags.
+ */
+ mov $(_PAGE_PSE + __PAGE_HYPERVISOR), %edx
+.Lfill_l2_identmap:
+ sub $(1 << L2_PAGETABLE_SHIFT), %edx
+ /* Loop runs for ecx=[2047..1] for entries [2047..1]. */
+ mov %edx, (%edi,%ecx,8)
+ loop .Lfill_l2_identmap
+
+ /* 1x L3 page, mapping the 4x L2 pages. */
+ lea sym_esi(l3_bootmap), %edi
+ mov $4, %ecx
+ lea (sym_offs(l2_bootmap) + 4 * PAGE_SIZE + __PAGE_HYPERVISOR)(%esi), %edx
+.Lfill_l3_identmap:
+ sub $PAGE_SIZE, %edx
+ /* Loop runs for ecx=[4..1] for entries [3..0], hence -8. */
+ mov %edx, -8(%edi,%ecx,8)
+ loop .Lfill_l3_identmap
+
+ /* 1x L4 page, mapping the L3 page. */
+ lea (sym_offs(l3_bootmap) + __PAGE_HYPERVISOR)(%esi), %edx
+ mov %edx, sym_esi(l4_bootmap)
+
+ /* Restore CR4, PAE must be enabled before IA-32e mode */
+ mov %cr4, %ecx
+ or $X86_CR4_PAE, %ecx
+ mov %ecx, %cr4
+
+ /* Load PML4 table location into PT base register */
+ lea sym_esi(l4_bootmap), %eax
+ mov %eax, %cr3
+
+ /* Enable IA-32e mode and paging */
+ mov $MSR_EFER, %ecx
+ rdmsr
+ or $EFER_LME >> 8, %ah
+ wrmsr
+
+ mov %cr0, %eax
+ or $X86_CR0_PG | X86_CR0_NE | X86_CR0_TS | X86_CR0_MP, %eax
+ mov %eax, %cr0
+
+ /* Now in IA-32e compatibility mode, use lret to jump to 64b mode */
+ lea sym_esi(start_xen_from_efi), %ecx
+ push $BOOT_CS64
+ push %ecx
+ lret
+
+.global start_xen_from_efi
+
+#endif /* XEN_BUILD_EFI */
+
ENTRY(trampoline_start)
#include "trampoline.S"
ENTRY(trampoline_end)
diff --git a/xen/arch/x86/boot/x86_64.S b/xen/arch/x86/boot/x86_64.S
index ac33576d8f..67896f5fe5 100644
--- a/xen/arch/x86/boot/x86_64.S
+++ b/xen/arch/x86/boot/x86_64.S
@@ -221,14 +221,22 @@ GLOBAL(__page_tables_end)
/* Init pagetables. Enough page directories to map into 4GB. */
.section .init.data, "aw", @progbits

-DATA_LOCAL(l1_bootmap, PAGE_SIZE)
+bootmap_start:
+
+DATA_LOCAL(l1_bootmap, PAGE_SIZE) /* 1x L1 page, mapping 2M of RAM. */
.fill L1_PAGETABLE_ENTRIES, 8, 0
END(l1_bootmap)

-DATA(l2_bootmap, PAGE_SIZE)
+DATA(l2_bootmap, PAGE_SIZE) /* 4x L2 pages, each mapping 1G of RAM. */
.fill 4 * L2_PAGETABLE_ENTRIES, 8, 0
END(l2_bootmap)

-DATA(l3_bootmap, PAGE_SIZE)
+DATA(l3_bootmap, PAGE_SIZE) /* 1x L3 page, mapping the 4x L2 pages. */
.fill L3_PAGETABLE_ENTRIES, 8, 0
END(l3_bootmap)
+
+DATA_LOCAL(l4_bootmap, PAGE_SIZE) /* 1x L4 page, mapping the L3 page. */
+ .fill L4_PAGETABLE_ENTRIES, 8, 0
+END(l4_bootmap)
+
+bootmap_end:
diff --git a/xen/arch/x86/efi/efi-boot.h b/xen/arch/x86/efi/efi-boot.h
index 1d8902a9a7..6aaba4a966 100644
--- a/xen/arch/x86/efi/efi-boot.h
+++ b/xen/arch/x86/efi/efi-boot.h
@@ -5,6 +5,12 @@
*/
#include <xen/vga.h>

+/*
+ * Tell <asm/intel-txt.h> to access TXT registers without address translation,
+ * which has not yet been set up.
+ */
+#define __EARLY_SLAUNCH__
+
#include <asm/boot-helpers.h>
#include <asm/e820.h>
#include <asm/edd.h>
@@ -13,8 +19,11 @@
#include <asm/setup.h>
#include <asm/trampoline.h>
#include <asm/efi.h>
+#include <asm/intel-txt.h>
+#include <asm/slaunch.h>

static struct file __initdata ucode;
+static uint64_t __initdata xen_image_size;
static multiboot_info_t __initdata mbi = {
.flags = MBI_MODULES | MBI_LOADERNAME
};
@@ -230,10 +239,29 @@ static void __init efi_arch_pre_exit_boot(void)
}
}

-static void __init noreturn efi_arch_post_exit_boot(void)
+void __init asmlinkage noreturn start_xen_from_efi(void)
{
u64 cr4 = XEN_MINIMAL_CR4 & ~X86_CR4_PGE, efer;

+ if ( slaunch_active )
+ {
+ struct slr_table *slrt = (struct slr_table *)efi.slr;
+ struct slr_entry_intel_info *intel_info;
+
+ intel_info = (struct slr_entry_intel_info *)
+ slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_INTEL_INFO);
+ if ( intel_info != NULL )
+ {
+ void *txt_heap = txt_init();
+ struct txt_os_mle_data *os_mle = txt_os_mle_data_start(txt_heap);
+ struct txt_os_sinit_data *os_sinit =
+ txt_os_sinit_data_start(txt_heap);
+
+ txt_verify_pmr_ranges(os_mle, os_sinit, intel_info, xen_phys_start,
+ xen_phys_start, xen_image_size);
+ }
+ }
+
efi_arch_relocate_image(__XEN_VIRT_START - xen_phys_start);
memcpy(_p(trampoline_phys), trampoline_start, cfg.size);

@@ -279,6 +307,63 @@ static void __init noreturn efi_arch_post_exit_boot(void)
unreachable();
}

+static void __init attempt_secure_launch(void)
+{
+ struct slr_table *slrt;
+ struct slr_entry_dl_info *dlinfo;
+ dl_handler_func handler_callback;
+
+ /* The presence of this table indicates a Secure Launch boot. */
+ slrt = (struct slr_table *)efi.slr;
+ if ( efi.slr == EFI_INVALID_TABLE_ADDR || slrt->magic != SLR_TABLE_MAGIC ||
+ slrt->revision != SLR_TABLE_REVISION )
+ return;
+
+ /* Avoid calls into firmware after DRTM. */
+ __clear_bit(EFI_RS, &efi_flags);
+
+ /*
+ * Make measurements less sensitive to hardware-specific details.
+ *
+ * Intentionally leaving efi_ct and efi_num_ct intact.
+ */
+ efi_ih = NULL;
+ efi_bs = NULL;
+ efi_bs_revision = 0;
+ efi_rs = NULL;
+ efi_version = 0;
+ efi_fw_vendor = NULL;
+ efi_fw_revision = 0;
+ StdOut = NULL;
+ StdErr = NULL;
+ boot_tsc_stamp = 0;
+
+ slaunch_active = true;
+ slaunch_slrt = efi.slr;
+
+ /* Jump through DL stub to initiate Secure Launch. */
+ dlinfo = (struct slr_entry_dl_info *)
+ slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_DL_INFO);
+
+ handler_callback = (dl_handler_func)dlinfo->dl_handler;
+ handler_callback(&dlinfo->bl_context);
+
+ unreachable();
+}
+
+static void __init noreturn efi_arch_post_exit_boot(void)
+{
+ /*
+ * If Secure Launch happens, attempt_secure_launch() doesn't return and
+ * start_xen_from_efi() is invoked after DRTM has been initiated.
+ * Otherwise, attempt_secure_launch() returns and execution continues as
+ * usual.
+ */
+ attempt_secure_launch();
+
+ start_xen_from_efi();
+}
+
static void __init efi_arch_cfg_file_early(const EFI_LOADED_IMAGE *image,
EFI_FILE_HANDLE dir_handle,
const char *section)
@@ -775,6 +860,7 @@ static void __init efi_arch_halt(void)
static void __init efi_arch_load_addr_check(const EFI_LOADED_IMAGE *loaded_image)
{
xen_phys_start = (UINTN)loaded_image->ImageBase;
+ xen_image_size = loaded_image->ImageSize;
if ( (xen_phys_start + loaded_image->ImageSize - 1) >> 32 )
blexit(L"Xen must be loaded below 4Gb.");
if ( xen_phys_start & ((1 << L2_PAGETABLE_SHIFT) - 1) )
diff --git a/xen/arch/x86/efi/fixmlehdr.c b/xen/arch/x86/efi/fixmlehdr.c
new file mode 100644
index 0000000000..60a91c6b73
--- /dev/null
+++ b/xen/arch/x86/efi/fixmlehdr.c
@@ -0,0 +1,127 @@
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+/*
+ * Depending on the toolchain and its configuration the header can end up quite
+ * far from the start of the file.
+ */
+#define PREFIX_SIZE (8*1024)
+
+struct mle_header
+{
+ uint8_t uuid[16];
+ uint32_t header_len;
+ uint32_t version;
+ uint32_t entry_point;
+ uint32_t first_valid_page;
+ uint32_t mle_start;
+ uint32_t mle_end;
+ uint32_t capabilities;
+ uint32_t cmdline_start;
+ uint32_t cmdline_end;
+} __attribute__ ((packed));
+
+static const uint8_t MLE_HEADER_UUID[] = {
+ 0x5a, 0xac, 0x82, 0x90, 0x6f, 0x47, 0xa7, 0x74,
+ 0x0f, 0x5c, 0x55, 0xa2, 0xcb, 0x51, 0xb6, 0x42
+};
+
+int main(int argc, char *argv[])
+{
+ FILE *fp;
+ struct mle_header header;
+ int i;
+ char *end_ptr;
+ long long correction;
+ const char *file_path;
+
+ if ( argc != 3 )
+ {
+ fprintf(stderr, "Usage: %s <xen.efi> <entry-correction>\n", argv[0]);
+ return 1;
+ }
+
+ correction = strtoll(argv[2], &end_ptr, 0);
+ if ( *end_ptr != '\0' )
+ {
+ fprintf(stderr, "Failed to parse '%s' as a number\n", argv[2]);
+ return 1;
+ }
+ if ( correction < INT32_MIN )
+ {
+ fprintf(stderr, "Correction '%s' is too small\n", argv[2]);
+ return 1;
+ }
+ if ( correction > INT32_MAX )
+ {
+ fprintf(stderr, "Correction '%s' is too large\n", argv[2]);
+ return 1;
+ }
+
+ file_path = argv[1];
+
+ fp = fopen(file_path, "r+");
+ if ( fp == NULL )
+ {
+ fprintf(stderr, "Failed to open %s\n", file_path);
+ return 1;
+ }
+
+ for ( i = 0; i < PREFIX_SIZE; i += 16 )
+ {
+ uint8_t bytes[16];
+
+ if ( fread(bytes, sizeof(bytes), 1, fp) != 1 )
+ {
+ fprintf(stderr, "Failed to find MLE header in %s\n", file_path);
+ goto fail;
+ }
+
+ if ( memcmp(bytes, MLE_HEADER_UUID, 16) == 0 )
+ {
+ break;
+ }
+ }
+
+ if ( i >= PREFIX_SIZE )
+ {
+ fprintf(stderr, "Failed to find MLE header in %s\n", file_path);
+ goto fail;
+ }
+
+ if ( fseek(fp, -16, SEEK_CUR) )
+ {
+ fprintf(stderr, "Failed to seek back to MLE header in %s\n", file_path);
+ goto fail;
+ }
+
+ if ( fread(&header, sizeof(header), 1, fp) != 1 )
+ {
+ fprintf(stderr, "Failed to read MLE header from %s\n", file_path);
+ goto fail;
+ }
+
+ if ( fseek(fp, -(int)sizeof(header), SEEK_CUR) )
+ {
+ fprintf(stderr, "Failed to seek back again to MLE header in %s\n",
+ file_path);
+ goto fail;
+ }
+
+ header.entry_point += correction;
+
+ if ( fwrite(&header, sizeof(header), 1, fp) != 1 )
+ {
+ fprintf(stderr, "Failed to write MLE header in %s\n", file_path);
+ goto fail;
+ }
+
+ fclose(fp);
+ return 0;
+
+fail:
+ fclose(fp);
+ return 1;
+}
diff --git a/xen/arch/x86/slaunch.c b/xen/arch/x86/slaunch.c
index f0447f91d2..15cfe944a7 100644
--- a/xen/arch/x86/slaunch.c
+++ b/xen/arch/x86/slaunch.c
@@ -5,6 +5,7 @@
*/

#include <xen/compiler.h>
+#include <xen/efi.h>
#include <xen/init.h>
#include <xen/macros.h>
#include <xen/mm.h>
@@ -244,10 +245,23 @@ check_drtm_policy(struct slr_table *slrt,
{
uint32_t i;
uint32_t num_mod_entries;
+ int min_entries;

- if ( policy->nr_entries < 2 )
- panic("DRTM policy in SLRT contains less than 2 entries (%d)!\n",
- policy->nr_entries);
+ min_entries = efi_enabled(EFI_BOOT) ? 1 : 2;
+ if ( policy->nr_entries < min_entries )
+ {
+ panic("DRTM policy in SLRT contains less than %d entries (%d)!\n",
+ min_entries, policy->nr_entries);
+ }
+
+ if ( efi_enabled(EFI_BOOT) )
+ {
+ check_slrt_policy_entry(&policy_entry[0], 0, slrt);
+ /* SLRT was measured in slaunch_measure_slrt(). */
+ return 1;
+ }
+
+ /* This must be a legacy Multiboot2 boot. */

/*
* MBI policy entry must be the first one, so that measuring order matches
@@ -316,6 +330,7 @@ void __init slaunch_process_drtm_policy(const struct boot_info *bi)
struct slr_table *slrt;
struct slr_entry_policy *policy;
struct slr_policy_entry *policy_entry;
+ int rc;
uint16_t i;
unsigned int measured;

@@ -331,7 +346,6 @@ void __init slaunch_process_drtm_policy(const struct boot_info *bi)

for ( i = measured; i < policy->nr_entries; i++ )
{
- int rc;
uint64_t start = policy_entry[i].entity;
uint64_t size = policy_entry[i].size;

@@ -376,6 +390,58 @@ void __init slaunch_process_drtm_policy(const struct boot_info *bi)

policy_entry[i].flags |= SLR_POLICY_FLAG_MEASURED;
}
+
+ /*
+ * On x86 EFI platforms Xen reads its command-line options and kernel/initrd
+ * from configuration files (several can be chained). The bootloader can't
+ * know the contents of the configuration beforehand without parsing it, so
+ * there will be no corresponding policy entries. Instead, measure the
+ * command line and all modules here.
+ */
+ if ( efi_enabled(EFI_BOOT) )
+ {
+#define LOG_DATA(str) (uint8_t *)(str), (sizeof(str) - 1)
+
+ tpm_hash_extend(DRTM_LOC, DRTM_DATA_PCR,
+ (const uint8_t *)bi->cmdline, strlen(bi->cmdline),
+ DLE_EVTYPE_SLAUNCH, LOG_DATA("Xen's command line"));
+
+ for ( i = 0; i < bi->nr_modules; i++ )
+ {
+ const struct boot_module *mod = &bi->mods[i];
+
+ paddr_t string = mod->cmdline_pa;
+ paddr_t start = mod->start;
+ size_t size = mod->size;
+
+ if ( mod->relocated || mod->released )
+ {
+ panic("A module \"%s\" (#%d) was consumed before measurement\n",
+ (const char *)__va(string), i);
+ }
+
+ /*
+ * Measure the module's name separately, because the module's
+ * command-line parameters are appended to its name when present.
+ *
+ * 2 MiB is the minimal mapped size and should more than suffice.
+ */
+ rc = slaunch_map_l2(string, 2 * 1024 * 1024);
+ BUG_ON(rc != 0);
+
+ tpm_hash_extend(DRTM_LOC, DRTM_DATA_PCR,
+ __va(string), strlen(__va(string)),
+ DLE_EVTYPE_SLAUNCH, LOG_DATA("MB module string"));
+
+ rc = slaunch_map_l2(start, size);
+ BUG_ON(rc != 0);
+
+ tpm_hash_extend(DRTM_LOC, DRTM_CODE_PCR, __va(start), size,
+ DLE_EVTYPE_SLAUNCH, LOG_DATA("MB module"));
+ }
+
+#undef LOG_DATA
+ }
}

int __init slaunch_map_l2(unsigned long paddr, unsigned long size)
diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
index e39fbc3529..35501ee4de 100644
--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -19,6 +19,7 @@
#if EFI_PAGE_SIZE != PAGE_SIZE
# error Cannot use xen/pfn.h here!
#endif
+#include <xen/slr-table.h>
#include <xen/string.h>
#include <xen/stringify.h>
#ifdef CONFIG_X86
@@ -1004,6 +1005,7 @@ static void __init efi_tables(void)
static EFI_GUID __initdata mps_guid = MPS_TABLE_GUID;
static EFI_GUID __initdata smbios_guid = SMBIOS_TABLE_GUID;
static EFI_GUID __initdata smbios3_guid = SMBIOS3_TABLE_GUID;
+ static EFI_GUID __initdata slr_guid = UEFI_SLR_TABLE_GUID;

if ( match_guid(&acpi2_guid, &efi_ct[i].VendorGuid) )
efi.acpi20 = (unsigned long)efi_ct[i].VendorTable;
@@ -1015,6 +1017,8 @@ static void __init efi_tables(void)
efi.smbios = (unsigned long)efi_ct[i].VendorTable;
if ( match_guid(&smbios3_guid, &efi_ct[i].VendorGuid) )
efi.smbios3 = (unsigned long)efi_ct[i].VendorTable;
+ if ( match_guid(&slr_guid, &efi_ct[i].VendorGuid) )
+ efi.slr = (unsigned long)efi_ct[i].VendorTable;
if ( match_guid(&esrt_guid, &efi_ct[i].VendorGuid) )
esrt = (UINTN)efi_ct[i].VendorTable;
}
diff --git a/xen/common/efi/runtime.c b/xen/common/efi/runtime.c
index 7e1fce291d..e1b339f162 100644
--- a/xen/common/efi/runtime.c
+++ b/xen/common/efi/runtime.c
@@ -70,6 +70,7 @@ struct efi __read_mostly efi = {
.mps = EFI_INVALID_TABLE_ADDR,
.smbios = EFI_INVALID_TABLE_ADDR,
.smbios3 = EFI_INVALID_TABLE_ADDR,
+ .slr = EFI_INVALID_TABLE_ADDR,
};

const struct efi_pci_rom *__read_mostly efi_pci_roms;
diff --git a/xen/include/xen/efi.h b/xen/include/xen/efi.h
index 160804e294..614dfce66a 100644
--- a/xen/include/xen/efi.h
+++ b/xen/include/xen/efi.h
@@ -19,6 +19,7 @@ struct efi {
unsigned long acpi20; /* ACPI table (ACPI 2.0) */
unsigned long smbios; /* SM BIOS table */
unsigned long smbios3; /* SMBIOS v3 table */
+ unsigned long slr; /* SLR table */
};

extern struct efi efi;
--
2.49.0

Sergii Dmytruk

unread,
May 13, 2025, 1:07:31 PM
to xen-...@lists.xenproject.org, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini, trenchbo...@googlegroups.com
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
MAINTAINERS | 15 +++++++++++++++
1 file changed, 15 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index c11b82eca9..347b3bcbb0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -542,6 +542,21 @@ F: */configure
F: */*.ac
F: tools/

+TRENCHBOOT SECURE LAUNCH
+M: Daniel P. Smith <dps...@apertussolutions.com>
+R: Ross Philipson <ross.ph...@oracle.com>
+R: Sergii Dmytruk <sergii....@3mdeb.com>
+S: Supported
+F: xen/include/xen/slr-table.h
+F: xen/arch/x86/boot/slaunch-early.c
+F: xen/arch/x86/efi/fixmlehdr.c
+F: xen/arch/x86/include/asm/intel-txt.h
+F: xen/arch/x86/include/asm/slaunch.h
+F: xen/arch/x86/include/asm/tpm.h
+F: xen/arch/x86/intel-txt.c
+F: xen/arch/x86/slaunch.c
+F: xen/arch/x86/tpm.c
+
VM EVENT, MEM ACCESS and MONITOR
M: Tamas K Lengyel <ta...@tklengyel.com>
R: Alexandru Isaila <ais...@bitdefender.com>
--
2.49.0

Sergii Dmytruk

unread,
May 13, 2025, 1:07:37 PM
to xen-...@lists.xenproject.org, Jan Beulich, Andrew Cooper, Roger Pau Monné, Daniel P. Smith, Ross Philipson, trenchbo...@googlegroups.com
From: Michał Żygowski <michal....@3mdeb.com>

Report TXT capabilities so that dom0 can query the Intel TXT or AMD
SKINIT support information using xl dmesg.

Signed-off-by: Michał Żygowski <michal....@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
xen/arch/x86/cpu/amd.c | 16 ++++++++++
xen/arch/x86/cpu/cpu.h | 1 +
xen/arch/x86/cpu/hygon.c | 1 +
xen/arch/x86/cpu/intel.c | 46 ++++++++++++++++++++++++++++
xen/arch/x86/include/asm/intel-txt.h | 5 +++
5 files changed, 69 insertions(+)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index 37d67dd15c..18b6cbb50b 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -684,6 +684,21 @@ void amd_log_freq(const struct cpuinfo_x86 *c)
#undef FREQ
}

+void amd_log_skinit(const struct cpuinfo_x86 *c)
+{
+ /*
+ * Run only on BSP and not during resume to report the capability only once.
+ */
+ if ( system_state == SYS_STATE_resume || smp_processor_id() )
+ return;
+
+ printk("CPU: SKINIT capability ");
+ if ( !test_bit(X86_FEATURE_SKINIT, &boot_cpu_data.x86_capability) )
+ printk("not supported\n");
+ else
+ printk("supported\n");
+}
+
void cf_check early_init_amd(struct cpuinfo_x86 *c)
{
if (c == &boot_cpu_data)
@@ -1333,6 +1348,7 @@ static void cf_check init_amd(struct cpuinfo_x86 *c)
check_syscfg_dram_mod_en();

amd_log_freq(c);
+ amd_log_skinit(c);
}

const struct cpu_dev __initconst_cf_clobber amd_cpu_dev = {
diff --git a/xen/arch/x86/cpu/cpu.h b/xen/arch/x86/cpu/cpu.h
index 8be65e975a..5bcf118a93 100644
--- a/xen/arch/x86/cpu/cpu.h
+++ b/xen/arch/x86/cpu/cpu.h
@@ -20,6 +20,7 @@ extern bool detect_extended_topology(struct cpuinfo_x86 *c);

void cf_check early_init_amd(struct cpuinfo_x86 *c);
void amd_log_freq(const struct cpuinfo_x86 *c);
+void amd_log_skinit(const struct cpuinfo_x86 *c);
void amd_init_lfence(struct cpuinfo_x86 *c);
void amd_init_ssbd(const struct cpuinfo_x86 *c);
void amd_init_spectral_chicken(void);
diff --git a/xen/arch/x86/cpu/hygon.c b/xen/arch/x86/cpu/hygon.c
index f7508cc8fc..6ebb8b5fab 100644
--- a/xen/arch/x86/cpu/hygon.c
+++ b/xen/arch/x86/cpu/hygon.c
@@ -85,6 +85,7 @@ static void cf_check init_hygon(struct cpuinfo_x86 *c)
}

amd_log_freq(c);
+ amd_log_skinit(c);
}

const struct cpu_dev __initconst_cf_clobber hygon_cpu_dev = {
diff --git a/xen/arch/x86/cpu/intel.c b/xen/arch/x86/cpu/intel.c
index 12c3ff65e0..a99aec80ad 100644
--- a/xen/arch/x86/cpu/intel.c
+++ b/xen/arch/x86/cpu/intel.c
@@ -14,6 +14,7 @@
#include <asm/apic.h>
#include <asm/i387.h>
#include <asm/trampoline.h>
+#include <asm/intel-txt.h>

#include "cpu.h"

@@ -605,6 +606,49 @@ static void init_intel_perf(struct cpuinfo_x86 *c)
}
}

+/*
+ * Print out the SMX and TXT capabilities, so that dom0 can determine if the
+ * system is DRTM-capable.
+ */
+static void intel_log_smx_txt(struct cpuinfo_x86 *c)
+{
+ unsigned long cr4_val, getsec_caps;
+
+ /*
+ * Run only on BSP and not during resume to report the capability only once.
+ */
+ if ( system_state == SYS_STATE_resume || smp_processor_id() )
+ return;
+
+ printk("CPU: SMX capability ");
+ if ( !test_bit(X86_FEATURE_SMX, &boot_cpu_data.x86_capability) )
+ {
+ printk("not supported\n");
+ return;
+ }
+ printk("supported\n");
+
+ /* Can't run GETSEC without VMX and SMX */
+ if ( !test_bit(X86_FEATURE_VMX, &boot_cpu_data.x86_capability) )
+ return;
+
+ cr4_val = read_cr4();
+ if ( !(cr4_val & X86_CR4_SMXE) )
+ write_cr4(cr4_val | X86_CR4_SMXE);
+
+ asm volatile ("getsec\n"
+ : "=a" (getsec_caps)
+ : "a" (GETSEC_CAPABILITIES), "b" (0) :);
+
+ if ( getsec_caps & GETSEC_CAP_TXT_CHIPSET )
+ printk("Chipset supports TXT\n");
+ else
+ printk("Chipset does not support TXT\n");
+
+ if ( !(cr4_val & X86_CR4_SMXE) )
+ write_cr4(cr4_val & ~X86_CR4_SMXE);
+}
+
static void cf_check init_intel(struct cpuinfo_x86 *c)
{
/* Detect the extended topology information if available */
@@ -619,6 +663,8 @@ static void cf_check init_intel(struct cpuinfo_x86 *c)
detect_ht(c);
}

+ intel_log_smx_txt(c);
+
/* Work around errata */
Intel_errata_workarounds(c);

diff --git a/xen/arch/x86/include/asm/intel-txt.h b/xen/arch/x86/include/asm/intel-txt.h
index 5d76443d35..c834e08903 100644
--- a/xen/arch/x86/include/asm/intel-txt.h
+++ b/xen/arch/x86/include/asm/intel-txt.h
@@ -85,6 +85,11 @@
#define TXT_AP_BOOT_CS 0x0030
#define TXT_AP_BOOT_DS 0x0038

+/* EAX value for GETSEC leaf functions. Intel SDM: GETSEC[CAPABILITIES] */
+#define GETSEC_CAPABILITIES 0
+/* Intel SDM: GETSEC Capability Result Encoding */
+#define GETSEC_CAP_TXT_CHIPSET 1
+
#ifndef __ASSEMBLY__

#include <xen/slr-table.h>
--
2.49.0

Demi Marie Obenour

unread,
May 13, 2025, 9:25:10 PM
to Sergii Dmytruk, xen-...@lists.xenproject.org, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini, Nicola Vetrini, Doug Goldstein, Daniel P. Smith, Marek Marczykowski-Górecki, Ross Philipson, trenchbo...@googlegroups.com
On 5/13/25 1:05 PM, Sergii Dmytruk wrote:
> When running on an EFI-enabled system, Xen needs to have access to Boot
> Services in order to initialize itself properly and reach a state in
> which a dom0 kernel can operate without issues.
>
> This means that DRTM must be started in the middle of Xen's
> initialization process. This effect is achieved via a callback into
> bootloader (GRUB) which is responsible for initiating DRTM and
> continuing Xen's initialization process. The latter is done by
> branching in Slaunch entry point on a flag to switch back into long mode
> before calling the same function which Xen would execute as the next
> step without DRTM.

Depending on the bootloader for this unnecessarily ties DRTM to GRUB.
Instead, it would be much better for Xen to be able to perform DRTM
itself, which would allow DRTM to work without GRUB. Pop! OS already
uses systemd-boot, and the trend seems to be away from GRUB toward
systemd-boot.
Furthermore, this would allow DRTM with Xen launched directly from
the UEFI firmware.
--
Sincerely,
Demi Marie Obenour (she/her/hers)

Jan Beulich

unread,
May 14, 2025, 2:36:44 AM
to Sergii Dmytruk, Andrew Cooper, Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné, Stefano Stabellini, trenchbo...@googlegroups.com, xen-...@lists.xenproject.org
On 13.05.2025 19:05, Sergii Dmytruk wrote:
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -542,6 +542,21 @@ F: */configure
> F: */*.ac
> F: tools/
>
> +TRENCHBOOT SECURE LAUNCH
> +M: Daniel P. Smith <dps...@apertussolutions.com>
> +R: Ross Philipson <ross.ph...@oracle.com>
> +R: Sergii Dmytruk <sergii....@3mdeb.com>
> +S: Supported
> +F: xen/include/xen/slr-table.h

Nit: This wants to move ...

> +F: xen/arch/x86/boot/slaunch-early.c
> +F: xen/arch/x86/efi/fixmlehdr.c
> +F: xen/arch/x86/include/asm/intel-txt.h
> +F: xen/arch/x86/include/asm/slaunch.h
> +F: xen/arch/x86/include/asm/tpm.h
> +F: xen/arch/x86/intel-txt.c
> +F: xen/arch/x86/slaunch.c
> +F: xen/arch/x86/tpm.c
> +

... to the bottom, for proper sorting.

Jan

Sergii Dmytruk

unread,
May 14, 2025, 10:25:16 AM
to Demi Marie Obenour, xen-...@lists.xenproject.org, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini, Nicola Vetrini, Doug Goldstein, Daniel P. Smith, Marek Marczykowski-Górecki, Ross Philipson, trenchbo...@googlegroups.com
That sentence in the commit message is worth rewording. GRUB isn't a
requirement; any TrenchBoot-enabled bootloader (or anything that wants
to act as a bootloader) can be used. systemd-boot could implement
Secure Launch specification [0] and start Xen/Linux/something else via
DRTM. Usage without a real bootloader could be implemented similarly
via some EFI stub that has binaries embedded into it or that can load
them from a drive.

Mind that at least the Intel and AMD DRTM implementations require a
DCE [1] binary that depends on the vendor, firmware version or CPU
generation. So even embedding all the code into every kernel-like piece
of software won't produce self-contained DRTM-capable images.

[0]: https://trenchboot.org/specifications/Secure_Launch/
[1]: https://trenchboot.org/theory/Glossary/#dynamic-configuration-environment-dce

Regards

Andrew Cooper

unread,
May 14, 2025, 10:55:55 AM
to Sergii Dmytruk, xen-...@lists.xenproject.org, Daniel P. Smith, Ross Philipson, Jan Beulich, Roger Pau Monné, Lukasz Hawrylko, Mateusz Mówka, trenchbo...@googlegroups.com
On 13/05/2025 6:05 pm, Sergii Dmytruk wrote:
> diff --git a/xen/arch/x86/include/asm/intel-txt.h b/xen/arch/x86/include/asm/intel-txt.h
> new file mode 100644
> index 0000000000..07be43f557
> --- /dev/null
> +++ b/xen/arch/x86/include/asm/intel-txt.h
> @@ -0,0 +1,277 @@
> +/* SPDX-License-Identifier: GPL-2.0-or-later */
> +

Please have at least a one-liner introduction to what TXT is.  Is there
a stable URL for the TXT spec?  (I can't spot an obvious one, googling
around)

> +#ifndef ASM__X86__INTEL_TXT_H
> +#define ASM__X86__INTEL_TXT_H

I realise this is what the documentation currently says, but we're just
in the process of finalising some changes.  Please make this
X86_INTEL_TXT_H.

> +
> +/*
> + * TXT configuration registers (offsets from TXT_{PUB, PRIV}_CONFIG_REGS_BASE)
> + */
> +#define TXT_PUB_CONFIG_REGS_BASE 0xfed30000U
> +#define TXT_PRIV_CONFIG_REGS_BASE 0xfed20000U
> +
> +/*
> + * The same set of registers is exposed twice (with different permissions) and
> + * they are allocated continuously with page alignment.
> + */
> +#define NR_TXT_CONFIG_SIZE \
> + (TXT_PUB_CONFIG_REGS_BASE - TXT_PRIV_CONFIG_REGS_BASE)

Commit message needs to note that you're swapping NR_TXT_CONFIG_PAGES
for NR_TXT_CONFIG_SIZE, hence the change in logic in
tboot_protect_mem_regions().

> +#ifndef __ASSEMBLY__
> +
> +/* Need to differentiate between pre- and post paging enabled. */
> +#ifdef __EARLY_SLAUNCH__
> +#include <xen/macros.h>
> +#define _txt(x) _p(x)
> +#else
> +#include <xen/types.h>
> +#include <asm/page.h> // __va()
> +#define _txt(x) __va(x)
> +#endif

I have to admit that I'm rather suspicious of this, but I'm going to
have to wait to see later patches before making a suggestion.  (I highly
suspect we'll want a fixmap entry).

> +
> +/*
> + * Always use private space as some of registers are either read-only or not
> + * present in public space.
> + */
> +static inline uint64_t read_txt_reg(int reg_no)

unsigned int reg

I'd be tempted to name it simply txt_read().  I don't think the extra
"_reg" is a helpful suffix, and it makes the APIs consistent with
txt_reset().  But I expect others may have opinions here.

> +{
> + volatile uint64_t *reg = _txt(TXT_PRIV_CONFIG_REGS_BASE + reg_no);
> + return *reg;
> +}
> +
> +static inline void write_txt_reg(int reg_no, uint64_t val)
> +{
> + volatile uint64_t *reg = _txt(TXT_PRIV_CONFIG_REGS_BASE + reg_no);
> + *reg = val;
> + /* This serves as TXT register barrier */
> + (void)read_txt_reg(TXTCR_ESTS);

What is this barrier for?

It's not for anything in the CPU side of things, because UC accesses are
strictly ordered.

I presume it's for LPC bus ordering then, but making every write be a
write/read pair seems very expensive.

> +}
> +
> +static inline void txt_reset(uint32_t error)
> +{
> + write_txt_reg(TXTCR_ERRORCODE, error);
> + write_txt_reg(TXTCR_CMD_NO_SECRETS, 1);
> + write_txt_reg(TXTCR_CMD_UNLOCK_MEM_CONFIG, 1);
> + write_txt_reg(TXTCR_CMD_RESET, 1);
> + while (1);

for ( ;; )
    cpu_relax();

please.

Will the write to CMD_RESET try to initiate a platform reset, or will we
just hang here forever?
This is quite horrible, but I presume this is as specified by TXT?

To confirm, it's a tightly packed data structure where each of the 4
blobs is variable size?  Are there any alignment requirements on the
table sizes?  I suspect not, given the __packed attributes.

> diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
> index d5db60d335..8a573d8c79 100644
> --- a/xen/arch/x86/tboot.c
> +++ b/xen/arch/x86/tboot.c
> @@ -15,6 +15,7 @@
> #include <asm/tboot.h>
> #include <asm/setup.h>
> #include <asm/trampoline.h>
> +#include <asm/intel-txt.h>
>
> #include <crypto/vmac.h>

I'll send a prep patch to fix tboot.c, but we're trying to keep includes
sorted (for sanity of backports).

~Andrew

Demi Marie Obenour

unread,
May 14, 2025, 11:58:13 AM
to Sergii Dmytruk, xen-...@lists.xenproject.org, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini, Nicola Vetrini, Doug Goldstein, Daniel P. Smith, Marek Marczykowski-Górecki, Ross Philipson, trenchbo...@googlegroups.com
Why is it better for Xen to rely on the bootloader to implement the
specification, instead of xen.efi itself implementing secure launch?
That would make secure launch significantly more usable. For an
initial implementation it makes sense to rely on the bootloader, but
in the future it would be better for xen.efi to have its own
implementation.

Is the code being added to GRUB for secure launch under a license
that would allow it to be used in Xen as well?

Marek Marczykowski-Górecki

unread,
May 14, 2025, 12:12:08 PM
to Demi Marie Obenour, Sergii Dmytruk, xen-...@lists.xenproject.org, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini, Nicola Vetrini, Doug Goldstein, Daniel P. Smith, Ross Philipson, trenchbo...@googlegroups.com

On Wed, May 14, 2025 at 11:58:49AM -0400, Demi Marie Obenour wrote:
> Why is it better for Xen to rely on the bootloader to implement the
> specification, instead of xen.efi itself implementing secure launch?
> That would make secure launch significantly more usable. For an
> initial implementation it makes sense to rely on the bootloader, but
> in the future it would be better for xen.efi to have its own
> implementation.

That might be true when looking at the very limited use case. But if you
look at the broader ecosystem, having a common part in the bootloader
makes much more sense, as it can be reused for different kernels without
duplicating a lot of work.

--
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

Andrew Cooper

unread,
May 14, 2025, 12:59:04 PM
to Sergii Dmytruk, xen-...@lists.xenproject.org, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini, trenchbo...@googlegroups.com
On 13/05/2025 6:05 pm, Sergii Dmytruk wrote:
> diff --git a/xen/include/xen/sha1.h b/xen/include/xen/sha1.h
> new file mode 100644
> index 0000000000..085f750a6a
> --- /dev/null
> +++ b/xen/include/xen/sha1.h
> @@ -0,0 +1,12 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#ifndef XEN__SHA1_H
> +#define XEN__SHA1_H
> +
> +#include <xen/inttypes.h>

Please crib from sha2.h as much as you can.  Use xen/types.h, drop the
double underscore in the guard, and provide a link to the spec.

I think it's https://csrc.nist.gov/pubs/fips/180-1/final

The rest of the header is fine; I don't think we need split-update()
support yet.
As it's uint64_t, the count field needs to be first to avoid padding.

> +
> +/* This "rolls" over the 512-bit array */
> +static void set_w(uint32_t w[SHA1_WORKSPACE_WORDS], size_t i, uint32_t val)
> +{
> +#ifdef CONFIG_X86
> + *(volatile uint32_t *)&w[i & SHA1_WORKSPACE_MASK] = val;
> +#else
> + w[i & SHA1_WORKSPACE_MASK] = val;
> +# ifdef CONFIG_ARM
> + barrier();
> +# endif
> +#endif

This is horrible.  I think the problems discussed are created by having
the loops in sha1_transform() broken in a wrong (read, unhelpful) way.  
The 5-way shuffle of the chaining variables probably is beyond the
compilers' ability to unroll given the multiples of 4 currently used.

See the implementation in SKL where I spent a while optimising the code
generation.  Admittedly that was optimising for size rather than speed,
but the end result looks to be good for both.
Xen style has this { on the next line.
Unsigned int please.

> +
> + memcpy(sctx->buffer + partial, msg, rem);
> + msg += rem;
> + len -= rem;
> +
> + sha1_transform(sctx->state, sctx->buffer);
> + }
> +
> + for ( ; len >= SHA1_BLOCK_SIZE; len -= SHA1_BLOCK_SIZE )
> + {
> + sha1_transform(sctx->state, msg);
> + msg += SHA1_BLOCK_SIZE;
> + }
> + partial = 0;
> + }
> +
> + /* Remaining data becomes partial. */
> + memcpy(sctx->buffer + partial, msg, len);
> +}
> +
> +static void sha1_final(struct sha1_state *sctx, void *out)

Please make this uint8_t digest[SHA1_DIGEST_SIZE] straight away.  This
was an oversight of mine in sha2-256.c which was fixed when exposing the
function (c/s aea52ce607fe).

~Andrew

Krystian Hebel

unread,
May 14, 2025, 4:09:07 PM
to Andrew Cooper, Sergii Dmytruk, xen-...@lists.xenproject.org, Daniel P. Smith, Ross Philipson, Jan Beulich, Roger Pau Monné, Lukasz Hawrylko, Mateusz Mówka, trenchbo...@googlegroups.com
On 14.05.2025 16:55, 'Andrew Cooper' via trenchboot-devel wrote:
On 13/05/2025 6:05 pm, Sergii Dmytruk wrote:
diff --git a/xen/arch/x86/include/asm/intel-txt.h b/xen/arch/x86/include/asm/intel-txt.h
new file mode 100644
index 0000000000..07be43f557
--- /dev/null
+++ b/xen/arch/x86/include/asm/intel-txt.h
@@ -0,0 +1,277 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
Please have at least a one-liner introduction to what TXT is.  Is there
a stable URL for the TXT spec?  (I can't spot an obvious one, googling
around)
Not an official source and I don't know if it would be OK to use it
here, but my go-to source is https://kib.kiev.ua/x86docs/Intel/TXT/;
it even has previous versions, which may be helpful in understanding
some of the deprecated (e.g. TPM 1.2 related) intricacies of the
implementation.

+{
+    volatile uint64_t *reg = _txt(TXT_PRIV_CONFIG_REGS_BASE + reg_no);
+    return *reg;
+}
+
+static inline void write_txt_reg(int reg_no, uint64_t val)
+{
+    volatile uint64_t *reg = _txt(TXT_PRIV_CONFIG_REGS_BASE + reg_no);
+    *reg = val;
+    /* This serves as TXT register barrier */
+    (void)read_txt_reg(TXTCR_ESTS);
What is this barrier for?

It's not for anything in the CPU side of things, because UC accesses are
strictly ordered.

I presume it's for LPC bus ordering then, but making every write be a
write/read pair seems very expensive.
According to the TXT spec, it is required only for TXT.CMD.CLOSE-PRIVATE
and TXT.CMD.UNLOCK-MEM-CONFIG, and it probably could be changed to any
other serializing instruction. IIRC, those registers are written to
only in txt_reset(), so maybe adding a barrier there would suffice.
Yes.
To confirm, it's a tightly packed data structure where each of the 4
blobs is variable size?  Are there any alignment requirements on the
table sizes?  I suspect not, given the __packed attributes.
Yes, those are all of variable size. The TXT specification notes that
all of the sizes must be a multiple of 8 bytes, but we've seen platforms
where BiosData (filled by the BIOS ACM, so beyond our control) isn't
aligned, which makes the following fields unaligned as well. Because of
that, we treated it as if there was no such requirement.
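
To illustrate, a minimal sketch (the helper name is hypothetical) of
reading a 64-bit size field without assuming natural alignment, using
memcpy() from <xen/string.h>:

    /* Fetch a potentially unaligned uint64_t size field from the heap. */
    static inline uint64_t txt_heap_field_size(const void *field)
    {
        uint64_t size;

        memcpy(&size, field, sizeof(size));
        return size;
    }
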
-- 
Krystian Hebel
Firmware Engineer
https://3mdeb.com | @3mdeb_com

Sergii Dmytruk

unread,
May 15, 2025, 10:39:16 AM
to Demi Marie Obenour, xen-...@lists.xenproject.org, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini, Nicola Vetrini, Doug Goldstein, Daniel P. Smith, Marek Marczykowski-Górecki, Ross Philipson, trenchbo...@googlegroups.com
That specification is not exactly about DRTM, which is specified by CPU
vendors. It's about an interface between what starts DRTM (a bootloader
in a broad sense) and what uses it (kernels, hypervisors, etc.). If
the whole process is performed by a single entity, there is no need
for the specification.

What starts DRTM needs to ensure the system supports DRTM and should
then put it in a state suitable for DRTM start. What uses DRTM
needs to know much less and can heavily rely on the information from a
bootloader. Ideally, Xen/Linux would be able to handle DRTM uniformly
on different hardware, but the reality is that abstracting away some
differences is nearly impossible.

> Is the code being added to GRUB for secure launch under a license
> that would allow it to be used in Xen as well?

GRUB's changes are GPL3-or-later, but that shouldn't be a problem,
authors will likely agree to relicense it as GPL2-or-later for Xen.

Regards

Sergii Dmytruk

unread,
May 17, 2025, 2:18:05 PM
to Andrew Cooper, xen-...@lists.xenproject.org, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini, trenchbo...@googlegroups.com
On Wed, May 14, 2025 at 05:58:59PM +0100, Andrew Cooper wrote:
> Please crib from sha2.h as much as you can.  Use xen/types.h, drop the
> double underscore in the guard, and provide a link to the spec.

Until yesterday CODING_STYLE instructed to use 2 underscores, so I
thought sha2.h was outdated. I'll switch to <xen/types.h>, although it's
overkill there and only grows the header dependency tree (it also brings
in extra symbols, thus hiding missing includes elsewhere).
Looks like https://csrc.nist.gov/pubs/fips/180-4/upd1/final is the
latest for all SHA algorithms.

> > +struct sha1_state {
> > + uint32_t state[SHA1_DIGEST_SIZE / 4];
> > + uint64_t count;
> > + uint8_t buffer[SHA1_BLOCK_SIZE];
> > +};
>
> As it's uint64_t, the count field needs to be first to avoid padding.

Will swap them.
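
I.e. a sketch of the reordered struct; with the uint64_t first, the
interior padding between state[] and count goes away:

    struct sha1_state {
        uint64_t count;
        uint32_t state[SHA1_DIGEST_SIZE / 4];
        uint8_t buffer[SHA1_BLOCK_SIZE];
    };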

> > +/* This "rolls" over the 512-bit array */
> > +static void set_w(uint32_t w[SHA1_WORKSPACE_WORDS], size_t i, uint32_t val)
> > +{
> > +#ifdef CONFIG_X86
> > + *(volatile uint32_t *)&w[i & SHA1_WORKSPACE_MASK] = val;
> > +#else
> > + w[i & SHA1_WORKSPACE_MASK] = val;
> > +# ifdef CONFIG_ARM
> > + barrier();
> > +# endif
> > +#endif
>
> This is horrible.  I think the problems discussed are created by having
> the loops in sha1_transform() broken in a wrong (read, unhelpful) way.  
> The 5-way shuffle of the chaining variables probably is beyond the
> compilers' ability to unroll given the multiples of 4 currently used.
>
> See the implementation in SKL where I spent a while optimising the code
> generation.  Admittedly that was optimising for size rather than speed,
> but the end result look to be good for both.

I tried just doing regular writes and didn't notice any massive
changes in the disassembly with GCC 14.2.0; in fact, the function got
shorter by 41 bytes and its stack frame size decreased by 8 bytes. That
comment was probably addressing misbehaviour of some older compiler
version.
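
The function then reduces to a plain write; a sketch of what v3 would
carry (same signature as the quoted code):

    /* This "rolls" over the 512-bit array. */
    static void set_w(uint32_t w[SHA1_WORKSPACE_WORDS], size_t i, uint32_t val)
    {
        w[i & SHA1_WORKSPACE_MASK] = val;
    }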

> > + /* Round 1 - iterations 0-16 take their input from 'data' */
> > + for ( ; i < 16; ++i ) {
>
> Xen style has this { on the next line.

Will fix.

> > +static void sha1_update(struct sha1_state *sctx, const uint8_t *msg, size_t len)
> > +{
> > + unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
> > +
> > + sctx->count += len;
> > +
> > + if ( (partial + len) >= SHA1_BLOCK_SIZE )
> > + {
> > + if ( partial )
> > + {
> > + int rem = SHA1_BLOCK_SIZE - partial;
>
> Unsigned int please.

Will do.

> > +static void sha1_final(struct sha1_state *sctx, void *out)
>
> Please make this uint8_t digest[SHA1_DIGEST_SIZE] straight away.  This
> was an oversight of mine in sha2-256.c which was fixed when exposing the
> function (c/s aea52ce607fe).

OK. I skipped that intentionally because it really only adds an
explicit cast in the body, which is not much different from casting to
`void *` and back.
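
For reference, a sketch of the adjusted prototype:

    static void sha1_final(struct sha1_state *sctx,
                           uint8_t digest[SHA1_DIGEST_SIZE]);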

Regards

Jan Beulich

unread,
May 18, 2025, 4:34:13 AM
to Sergii Dmytruk, xen-...@lists.xenproject.org, Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné, Stefano Stabellini, trenchbo...@googlegroups.com, Andrew Cooper
On 17.05.2025 20:17, Sergii Dmytruk wrote:
> On Wed, May 14, 2025 at 05:58:59PM +0100, Andrew Cooper wrote:
>> Please crib from sha2.h as much as you can.  Use xen/types.h, drop the
>> double underscore in the guard, and provide a link to the spec.
>
> Until yesterday CODING_STYLE instructed to have 2 underscores, so I
> thought sha2.h is outdated. I'll switch to <xen/types.h>, although it's
> overkill there and only grows header dependency tree (it also brings
> extra symbols thus hiding missing includes elsewhere).

If xen/stdint.h is sufficient here, I'm pretty sure Andrew would be okay
with its use instead of xen/types.h.

Jan

Sergii Dmytruk

unread,
May 18, 2025, 7:32:12 AMMay 18
to Jan Beulich, xen-...@lists.xenproject.org, Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné, Stefano Stabellini, trenchbo...@googlegroups.com, Andrew Cooper
Oh, there is size_t, so <xen/types.h> is actually appropriate.

Thanks

Sergii Dmytruk

unread,
May 18, 2025, 2:36:03 PMMay 18
to Andrew Cooper, xen-...@lists.xenproject.org, Daniel P. Smith, Ross Philipson, Jan Beulich, Roger Pau Monné, Lukasz Hawrylko, Mateusz Mówka, trenchbo...@googlegroups.com
On Wed, May 14, 2025 at 03:55:51PM +0100, Andrew Cooper wrote:
> Please have at least a one-liner introduction to what TXT is.  Is there
> a stable URL for the TXT spec?  (I can't spot an obvious one, googling
> around)

In addition to a short definition I'll add:
* https://www.intel.com/content/www/us/en/support/articles/000025873/processors.html
* unversioned link to Software Development Guide
* link to that unofficial archive (marked as such) mentioned by Krystian

> > +#ifndef ASM__X86__INTEL_TXT_H
> > +#define ASM__X86__INTEL_TXT_H
>
> I realise this is what the documentation currently says, but we're just
> in the process of finalising some changes.  Please make this
> X86_INTEL_TXT_H.

Will fix.

> Commit message needs to note that you're swapping NR_TXT_CONFIG_PAGES
> for NR_TXT_CONFIG_SIZE, hence the change in logic in
> tboot_protect_mem_regions().

Will clarify that.

> > +#ifndef __ASSEMBLY__
> > +
> > +/* Need to differentiate between pre- and post paging enabled. */
> > +#ifdef __EARLY_SLAUNCH__
> > +#include <xen/macros.h>
> > +#define _txt(x) _p(x)
> > +#else
> > +#include <xen/types.h>
> > +#include <asm/page.h> // __va()
> > +#define _txt(x) __va(x)
> > +#endif
>
> I have to admit that I'm rather suspicious of this, but I'm going to
> have to wait to see later patches before making a suggestion.  (I highly
> suspect we'll want a fixmap entry).

Leaving it as is for now.

> > +/*
> > + * Always use private space as some of registers are either read-only or not
> > + * present in public space.
> > + */
> > +static inline uint64_t read_txt_reg(int reg_no)
>
> unsigned int reg

OK, for both read and write functions.

> I'd be tempted to name it simply txt_read().  I don't think the extra
> "_reg" is a helpful suffix, and it makes the APIs consistent with
> txt_reset().  But I expect others may have opinions here.

Will be renamed to txt_read() and txt_write(), seems sensible to me.

> > +static inline void write_txt_reg(int reg_no, uint64_t val)
> > +{
> > + volatile uint64_t *reg = _txt(TXT_PRIV_CONFIG_REGS_BASE + reg_no);
> > + *reg = val;
> > + /* This serves as TXT register barrier */
> > + (void)read_txt_reg(TXTCR_ESTS);
>
> What is this barrier for?
>
> It's not for anything in the CPU side of things, because UC accesses are
> strictly ordered.
>
> I presume it's for LPC bus ordering then, but making every write be a
> write/read pair seems very expensive.

The barrier will be moved to txt_reset() as suggested by Krystian in his
reply.

> > +static inline void txt_reset(uint32_t error)
> > +{
> > + write_txt_reg(TXTCR_ERRORCODE, error);
> > + write_txt_reg(TXTCR_CMD_NO_SECRETS, 1);
> > + write_txt_reg(TXTCR_CMD_UNLOCK_MEM_CONFIG, 1);
> > + write_txt_reg(TXTCR_CMD_RESET, 1);
> > + while (1);
>
> for ( ;; )
>     cpu_relax();
>
> please.
>
> Will the write to CMD_RESET try to initiate a platform reset, or will we
> just hang here forever?

Will mark the function as `noreturn` and use `unreachable()` instead.

The write sends the platform into a warm reboot, which retains the
error code for later examination.
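
Putting the agreed points together, a sketch of one way this could look
in v3 (assuming the txt_read()/txt_write() renames from elsewhere in
this thread, and keeping the spin loop since the reset isn't instant):

    static inline void noreturn txt_reset(uint32_t error)
    {
        txt_write(TXTCR_ERRORCODE, error);
        txt_write(TXTCR_CMD_NO_SECRETS, 1);
        txt_write(TXTCR_CMD_UNLOCK_MEM_CONFIG, 1);
        /* Serializing read-back, required after UNLOCK-MEM-CONFIG. */
        (void)txt_read(TXTCR_ESTS);
        txt_write(TXTCR_CMD_RESET, 1);

        for ( ; ; )
            cpu_relax();

        unreachable();
    }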

> > +/*
> > + * Functions to extract data from the Intel TXT Heap Memory. The layout
> > + * of the heap is as follows:
>
> This is quite horrible, but I presume this is as specified by TXT?
>
> To confirm, it's a tightly packed data structure where each of the 4
> blobs is variable size?  Are there any alignment requirements on the
> table sizes?  I suspect not, given the __packed attributes.

Will improve the comment to state this explicitly, including that no
particular alignment is assumed and why (as in Krystian's reply).

> > diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
> > index d5db60d335..8a573d8c79 100644
> > --- a/xen/arch/x86/tboot.c
> > +++ b/xen/arch/x86/tboot.c
> > @@ -15,6 +15,7 @@
> > #include <asm/tboot.h>
> > #include <asm/setup.h>
> > #include <asm/trampoline.h>
> > +#include <asm/intel-txt.h>
> >
> > #include <crypto/vmac.h>
>
> I'll send a prep patch to fix tboot.c, but we're trying to keep includes
> sorted (for sanity of backports).
>
> ~Andrew

I've seen commits sorting includes and I try to do the same, but files
whose includes aren't sorted yet often don't have any good place for
a new entry, and I refrain from sorting them at the same time as adding
a new line.

Regards

Rich Persaud

unread,
May 18, 2025, 7:32:02 PMMay 18
to Sergii Dmytruk, Andrew Cooper, xen-...@lists.xenproject.org, Daniel P. Smith, Ross Philipson, Jan Beulich, Roger Pau Monné, Lukasz Hawrylko, Mateusz Mówka, trenchbo...@googlegroups.com
On May 18, 2025, at 2:36 PM, Sergii Dmytruk <sergii....@3mdeb.com> wrote:
>
> On Wed, May 14, 2025 at 03:55:51PM +0100, Andrew Cooper wrote:
>> Please have at least a one-liner introduction to what TXT is. Is there
>> a stable URL for the TXT spec? (I can't spot an obvious one, googling
>> around)
>
> In addition to a short definition I'll add:
> * https://www.intel.com/content/www/us/en/support/articles/000025873/processors.html
> * unversioned link to Software Development Guide
> * link to that unofficial archive (marked as such) mentioned by Krystian

If there's no stable URL for the TXT spec, can we mirror the relevant doc(s) somewhere in the Xen docs tree? A trusted archive of the spec for trusted execution.

Rich

Sergii Dmytruk

unread,
May 19, 2025, 9:43:46 AM
to Rich Persaud, Andrew Cooper, xen-...@lists.xenproject.org, Daniel P. Smith, Ross Philipson, Jan Beulich, Roger Pau Monné, Lukasz Hawrylko, Mateusz Mówka, trenchbo...@googlegroups.com
On Sun, May 18, 2025 at 07:31:49PM -0400, Rich Persaud wrote:
> If there's no stable URL for the TXT spec, can we mirror the relevant
> doc(s) somewhere in the Xen docs tree? A trusted archive of the spec
> for trusted execution.
>
> Rich

By "unversioned link to Software Development Guide" I meant
https://www.intel.com/content/www/us/en/content-details/315168/
which always provides the latest version.

Regards

Rich Persaud

unread,
May 19, 2025, 4:55:27 PMMay 19
to Sergii Dmytruk, Andrew Cooper, xen-...@lists.xenproject.org, Daniel P. Smith, Ross Philipson, Jan Beulich, Roger Pau Monné, Lukasz Hawrylko, Mateusz Mówka, trenchbo...@googlegroups.com
On May 19, 2025, at 9:43 AM, Sergii Dmytruk <sergii....@3mdeb.com> wrote:
By "trusted archive of the spec" I meant a server under control of Intel or the Xen community, hosting the specific version(s) of the spec that have been implemented in the Xen tree.

Unless Intel reference PDFs are digitally signed by an Intel certificate, we should not be linking to non-Intel mirrors of Intel PDFs, which could be targeted by attackers to relay malware onto the workstations of developers of trusted execution software. If Intel reference PDFs are signed and we are linking to non-Intel sources, we should include instructions for verifying their authenticity.

Rich

Jan Beulich

unread,
May 21, 2025, 11:05:43 AM
to Andrew Cooper, Sergii Dmytruk, Daniel P. Smith, Ross Philipson, Roger Pau Monné, Lukasz Hawrylko, Mateusz Mówka, trenchbo...@googlegroups.com, xen-...@lists.xenproject.org
On 14.05.2025 16:55, Andrew Cooper wrote:
> On 13/05/2025 6:05 pm, Sergii Dmytruk wrote:
>> +/*
>> + * Always use private space as some of registers are either read-only or not
>> + * present in public space.
>> + */
>> +static inline uint64_t read_txt_reg(int reg_no)
>
> unsigned int reg
>
> I'd be tempted to name it simply txt_read().  I don't think the extra
> "_reg" is a helpful suffix, and it makes the APIs consistent with
> txt_reset().  But I expect others may have opinions here.

+1

>> +static inline void txt_reset(uint32_t error)
>> +{
>> + write_txt_reg(TXTCR_ERRORCODE, error);
>> + write_txt_reg(TXTCR_CMD_NO_SECRETS, 1);
>> + write_txt_reg(TXTCR_CMD_UNLOCK_MEM_CONFIG, 1);
>> + write_txt_reg(TXTCR_CMD_RESET, 1);
>> + while (1);
>
> for ( ;; )
>     cpu_relax();
>
> please.

With (style nit) another blank between the semicolons.

Jan

Jan Beulich

unread,
May 21, 2025, 11:20:02 AMMay 21
to Sergii Dmytruk, Daniel P. Smith, Ross Philipson, Andrew Cooper, Roger Pau Monné, Lukasz Hawrylko, Mateusz Mówka, trenchbo...@googlegroups.com, xen-...@lists.xenproject.org
On 13.05.2025 19:05, Sergii Dmytruk wrote:
> From: Krystian Hebel <krystia...@3mdeb.com>
>
> The file contains TXT register spaces base address, registers offsets,
> error codes and inline functions for accessing structures stored on
> TXT heap.
>
> Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
> Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
> ---
> xen/arch/x86/include/asm/intel-txt.h | 277 +++++++++++++++++++++++++++
> xen/arch/x86/tboot.c | 20 +-
> 2 files changed, 279 insertions(+), 18 deletions(-)
> create mode 100644 xen/arch/x86/include/asm/intel-txt.h
>
> diff --git a/xen/arch/x86/include/asm/intel-txt.h b/xen/arch/x86/include/asm/intel-txt.h
> new file mode 100644
> index 0000000000..07be43f557
> --- /dev/null
> +++ b/xen/arch/x86/include/asm/intel-txt.h
> @@ -0,0 +1,277 @@
> +/* SPDX-License-Identifier: GPL-2.0-or-later */
> +
> +#ifndef ASM__X86__INTEL_TXT_H
> +#define ASM__X86__INTEL_TXT_H
> +
> +/*
> + * TXT configuration registers (offsets from TXT_{PUB, PRIV}_CONFIG_REGS_BASE)
> + */
> +#define TXT_PUB_CONFIG_REGS_BASE 0xfed30000U
> +#define TXT_PRIV_CONFIG_REGS_BASE 0xfed20000U
> +
> +/*
> + * The same set of registers is exposed twice (with different permissions) and
> + * they are allocated continuously with page alignment.
> + */
> +#define NR_TXT_CONFIG_SIZE \
> + (TXT_PUB_CONFIG_REGS_BASE - TXT_PRIV_CONFIG_REGS_BASE)

What does the NR_ in the identifier try to express?

> +/*
> + * Secure Launch defined OS/MLE TXT Heap table
> + */
> +struct txt_os_mle_data {
> + uint32_t version;
> + uint32_t reserved;
> + uint64_t slrt;
> + uint64_t txt_info;
> + uint32_t ap_wake_block;
> + uint32_t ap_wake_block_size;
> + uint8_t mle_scratch[64];
> +} __packed;

This being x86-specific, what's the __packed intended to achieve here?

> +/*
> + * TXT specification defined BIOS data TXT Heap table
> + */
> +struct txt_bios_data {
> + uint32_t version; /* Currently 5 for TPM 1.2 and 6 for TPM 2.0 */
> + uint32_t bios_sinit_size;
> + uint64_t reserved1;
> + uint64_t reserved2;
> + uint32_t num_logical_procs;
> + /* Versions >= 3 && < 5 */
> + uint32_t sinit_flags;
> + /* Versions >= 5 with updates in version 6 */
> + uint32_t mle_flags;
> + /* Versions >= 4 */
> + /* Ext Data Elements */
> +} __packed;

It does affect sizeof() here, which I'm unsure is going to matter.

> +/*
> + * TXT specification defined OS/SINIT TXT Heap table
> + */
> +struct txt_os_sinit_data {
> + uint32_t version; /* Currently 6 for TPM 1.2 and 7 for TPM 2.0 */
> + uint32_t flags; /* Reserved in version 6 */
> + uint64_t mle_ptab;
> + uint64_t mle_size;
> + uint64_t mle_hdr_base;
> + uint64_t vtd_pmr_lo_base;
> + uint64_t vtd_pmr_lo_size;
> + uint64_t vtd_pmr_hi_base;
> + uint64_t vtd_pmr_hi_size;
> + uint64_t lcp_po_base;
> + uint64_t lcp_po_size;
> + uint32_t capabilities;
> + /* Version = 5 */
> + uint64_t efi_rsdt_ptr; /* RSD*P* in versions >= 6 */
> + /* Versions >= 6 */
> + /* Ext Data Elements */
> +} __packed;

It does really affect the layout here, so can't be removed in this case.

> +/*
> + * Functions to extract data from the Intel TXT Heap Memory. The layout
> + * of the heap is as follows:
> + * +------------------------------------+
> + * | Size of Bios Data table (uint64_t) |
> + * +------------------------------------+
> + * | Bios Data table |
> + * +------------------------------------+
> + * | Size of OS MLE table (uint64_t) |
> + * +------------------------------------+
> + * | OS MLE table |
> + * +-------------------------------- +
> + * | Size of OS SINIT table (uint64_t) |
> + * +------------------------------------+
> + * | OS SINIT table |
> + * +------------------------------------+
> + * | Size of SINIT MLE table (uint64_t) |
> + * +------------------------------------+
> + * | SINIT MLE table |
> + * +------------------------------------+
> + *
> + * NOTE: the table size fields include the 8 byte size field itself.
> + */
> +static inline uint64_t txt_bios_data_size(void *heap)

Here, below, and in general: Please try to have code be const-correct, i.e.
use pointers-to-const wherever applicable.

> +{
> + return *((uint64_t *)heap) - sizeof(uint64_t);

For readability it generally helps if excess parentheses like the ones here
are omitted.

> +}
> +
> +static inline void *txt_bios_data_start(void *heap)
> +{
> + return heap + sizeof(uint64_t);
> +}
> +
> +static inline uint64_t txt_os_mle_data_size(void *heap)
> +{
> + return *((uint64_t *)(txt_bios_data_start(heap) +
> + txt_bios_data_size(heap))) -
> + sizeof(uint64_t);

This line wants indenting a little further, such that the "sizeof"
aligns with the start of the expression. (Same elsewhere below.)

Jan

Jan Beulich

unread,
May 21, 2025, 11:45:09 AMMay 21
to Sergii Dmytruk, Daniel P. Smith, Ross Philipson, trenchbo...@googlegroups.com, xen-...@lists.xenproject.org
On 13.05.2025 19:05, Sergii Dmytruk wrote:
> The file provides constants, structures and several helper functions for
> parsing SLRT.
>
> Signed-off-by: Ross Philipson <ross.ph...@oracle.com>
> Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
> ---
> xen/include/xen/slr-table.h | 268 ++++++++++++++++++++++++++++++++++++
> 1 file changed, 268 insertions(+)
> create mode 100644 xen/include/xen/slr-table.h
>
> --- /dev/null
> +++ b/xen/include/xen/slr-table.h
> @@ -0,0 +1,268 @@
> +/* SPDX-License-Identifier: GPL-2.0 */

GPL-2.0-only is, I think, the one to use for new code.

> +/*
> + * Copyright (c) 2025 Apertus Solutions, LLC
> + * Copyright (c) 2025 Oracle and/or its affiliates.
> + * Copyright (c) 2025 3mdeb Sp. z o.o

I'm curious: Considering the (just) 2 S-o-b, where's the 3rd copyright
line coming from?

> + * Secure Launch Resource Table definitions
> + */
> +
> +#ifndef XEN__SLR_TABLE_H
> +#define XEN__SLR_TABLE_H
> +
> +#include <xen/types.h>

Looks like xen/stdint.h would suffice?

> +#define UEFI_SLR_TABLE_GUID \
> + { 0x877a9b2aU, 0x0385, 0x45d1, { 0xa0, 0x34, 0x9d, 0xac, 0x9c, 0x9e, 0x56, 0x5f } }

I'm not sure this is a good place to put UEFI GUIDs. Considering e.g. ...

> +/* SLR table header values */
> +#define SLR_TABLE_MAGIC 0x4452544d
> +#define SLR_TABLE_REVISION 1
> +
> +/* Current revisions for the policy and UEFI config */
> +#define SLR_POLICY_REVISION 1
> +#define SLR_UEFI_CONFIG_REVISION 1

... this, is the whole concept perhaps bound to UEFI? In which case the
whole header may want to move to the efi/ subdir?

> +/* SLR defined architectures */
> +#define SLR_INTEL_TXT 1
> +#define SLR_AMD_SKINIT 2

These are both x86, yet the header is put in the common include dir?

> +/* SLR defined bootloaders */
> +#define SLR_BOOTLOADER_INVALID 0
> +#define SLR_BOOTLOADER_GRUB 1
> +
> +/* Log formats */
> +#define SLR_DRTM_TPM12_LOG 1
> +#define SLR_DRTM_TPM20_LOG 2
> +
> +/* DRTM Policy Entry Flags */
> +#define SLR_POLICY_FLAG_MEASURED 0x1
> +#define SLR_POLICY_IMPLICIT_SIZE 0x2
> +
> +/* Array Lengths */
> +#define TPM_EVENT_INFO_LENGTH 32
> +#define TXT_VARIABLE_MTRRS_LENGTH 32
> +
> +/* Tags */
> +#define SLR_ENTRY_INVALID 0x0000
> +#define SLR_ENTRY_DL_INFO 0x0001
> +#define SLR_ENTRY_LOG_INFO 0x0002
> +#define SLR_ENTRY_DRTM_POLICY 0x0003
> +#define SLR_ENTRY_INTEL_INFO 0x0004
> +#define SLR_ENTRY_AMD_INFO 0x0005
> +#define SLR_ENTRY_ARM_INFO 0x0006
> +#define SLR_ENTRY_UEFI_INFO 0x0007
> +#define SLR_ENTRY_UEFI_CONFIG 0x0008
> +#define SLR_ENTRY_END 0xffff
> +
> +/* Entity Types */
> +#define SLR_ET_UNSPECIFIED 0x0000
> +#define SLR_ET_SLRT 0x0001
> +#define SLR_ET_BOOT_PARAMS 0x0002
> +#define SLR_ET_SETUP_DATA 0x0003
> +#define SLR_ET_CMDLINE 0x0004
> +#define SLR_ET_UEFI_MEMMAP 0x0005
> +#define SLR_ET_RAMDISK 0x0006
> +#define SLR_ET_MULTIBOOT2_INFO 0x0007
> +#define SLR_ET_MULTIBOOT2_MODULE 0x0008
> +#define SLR_ET_TXT_OS2MLE 0x0010
> +#define SLR_ET_UNUSED 0xffff
> +
> +/*
> + * Primary SLR Table Header
> + */
> +struct slr_table
> +{
> + uint32_t magic;
> + uint16_t revision;
> + uint16_t architecture;
> + uint32_t size;
> + uint32_t max_size;
> + /* entries[] */
> +} __packed;

If x86-specific, the question on the need for some of the __packed arises
again.

> +/*
> + * Common SLRT Table Header
> + */
> +struct slr_entry_hdr
> +{
> + uint32_t tag;
> + uint32_t size;
> +} __packed;
> +
> +/*
> + * Boot loader context
> + */
> +struct slr_bl_context
> +{
> + uint16_t bootloader;
> + uint16_t reserved[3];
> + uint64_t context;
> +} __packed;
> +
> +/*
> + * Prototype of a function pointed to by slr_entry_dl_info::dl_handler.
> + */
> +typedef void (*dl_handler_func)(struct slr_bl_context *bl_context);

It being an internal header, ...

> +/*
> + * DRTM Dynamic Launch Configuration
> + */
> +struct slr_entry_dl_info
> +{
> + struct slr_entry_hdr hdr;
> + uint64_t dce_size;
> + uint64_t dce_base;
> + uint64_t dlme_size;
> + uint64_t dlme_base;
> + uint64_t dlme_entry;
> + struct slr_bl_context bl_context;
> + uint64_t dl_handler;

... why can't this type be used here then? This would presumably avoid a
typecast later.

> +static inline void *
> +slr_end_of_entries(struct slr_table *table)
> +{
> + return (uint8_t *)table + table->size;

Considering the function's return type, why not cast to void * (or perhaps
const void *, if the return type also can be such)?

> +}
> +
> +static inline struct slr_entry_hdr *
> +slr_next_entry(struct slr_table *table, struct slr_entry_hdr *curr)
> +{
> + struct slr_entry_hdr *next = (struct slr_entry_hdr *)
> + ((uint8_t *)curr + curr->size);
> +
> + if ( (void *)next >= slr_end_of_entries(table) )
> + return NULL;

Is this sufficient as a check? With it fulfilled, ...

> + if ( next->tag == SLR_ENTRY_END )

... this member access may still be out of bounds. IOW the question is what
level of checking is really adequate here.

> + return NULL;
> +
> + return next;
> +}
> +
> +static inline struct slr_entry_hdr *
> +slr_next_entry_by_tag (struct slr_table *table,
> + struct slr_entry_hdr *entry,
> + uint16_t tag)
> +{
> + if ( !entry ) /* Start from the beginning */
> + entry = (struct slr_entry_hdr *)((uint8_t *)table + sizeof(*table));

Extending from the earlier comment - if the inner cast was to void * here,
the outer one could be dropped altogether.

Jan

Andrew Cooper

unread,
May 21, 2025, 11:50:38 AMMay 21
to Jan Beulich, Sergii Dmytruk, Daniel P. Smith, Ross Philipson, trenchbo...@googlegroups.com, xen-...@lists.xenproject.org
On 21/05/2025 4:45 pm, Jan Beulich wrote:
> On 13.05.2025 19:05, Sergii Dmytruk wrote:
>> +/* SLR defined architectures */
>> +#define SLR_INTEL_TXT 1
>> +#define SLR_AMD_SKINIT 2
> These are both x86, yet the header is put in the common include dir?

ARM have a DRTM technology in progress, which will be 3 here in due course.

OpenPOWER and RISC-V are also investigating DRTM.

~Andrew

Sergii Dmytruk

unread,
May 23, 2025, 3:51:28 PMMay 23
to Jan Beulich, Daniel P. Smith, Ross Philipson, Andrew Cooper, Roger Pau Monné, Lukasz Hawrylko, Mateusz Mówka, trenchbo...@googlegroups.com, xen-...@lists.xenproject.org
On Wed, May 21, 2025 at 05:19:57PM +0200, Jan Beulich wrote:
> > +/*
> > + * The same set of registers is exposed twice (with different permissions) and
> > + * they are allocated continuously with page alignment.
> > + */
> > +#define NR_TXT_CONFIG_SIZE \
> > + (TXT_PUB_CONFIG_REGS_BASE - TXT_PRIV_CONFIG_REGS_BASE)
>
> What does the NR_ in the identifier try to express?

That's a leftover from the original name inside tboot.c; I'll rename it
to TXT_CONFIG_SPACE_SIZE.
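
I.e. a sketch of the renamed macro:

    #define TXT_CONFIG_SPACE_SIZE \
        (TXT_PUB_CONFIG_REGS_BASE - TXT_PRIV_CONFIG_REGS_BASE)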

> > +/*
> > + * Secure Launch defined OS/MLE TXT Heap table
> > + */
> > +struct txt_os_mle_data {
> > + uint32_t version;
> > + uint32_t reserved;
> > + uint64_t slrt;
> > + uint64_t txt_info;
> > + uint32_t ap_wake_block;
> > + uint32_t ap_wake_block_size;
> > + uint8_t mle_scratch[64];
> > +} __packed;
>
> This being x86-specific, what's the __packed intended to achieve here?

This structure is passed to Xen by a bootloader; __packed makes sure the
structure has a compatible layout.

> > +/*
> > + * TXT specification defined BIOS data TXT Heap table
> > + */
> > +struct txt_bios_data {
> > + uint32_t version; /* Currently 5 for TPM 1.2 and 6 for TPM 2.0 */
> > + uint32_t bios_sinit_size;
> > + uint64_t reserved1;
> > + uint64_t reserved2;
> > + uint32_t num_logical_procs;
> > + /* Versions >= 3 && < 5 */
> > + uint32_t sinit_flags;
> > + /* Versions >= 5 with updates in version 6 */
> > + uint32_t mle_flags;
> > + /* Versions >= 4 */
> > + /* Ext Data Elements */
> > +} __packed;
>
> It does affect sizeof() here, which I'm unsure is going to matter.

It doesn't hurt anything and makes sure offsets match those in the
specification.

> > +static inline uint64_t txt_bios_data_size(void *heap)
>
> Here, below, and in general: Please try to have code be const-correct, i.e.
> use pointers-to-const wherever applicable.

I assume this doesn't apply to functions returning `void *`. The
approach used in libc is to accept pointers-to-const but then cast the
constness away for the return value, but this header isn't widely-used
code.
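
To make the shape concrete, a sketch of the const-correct variant (note
that the void-pointer arithmetic is a GCC extension Xen already relies
on):

    static inline uint64_t txt_bios_data_size(const void *heap)
    {
        return *(const uint64_t *)heap - sizeof(uint64_t);
    }

    static inline const void *txt_bios_data_start(const void *heap)
    {
        return heap + sizeof(uint64_t);
    }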

> > +{
> > + return *((uint64_t *)heap) - sizeof(uint64_t);
>
> For readability it generally helps if excess parentheses like the ones here
> are omitted.

OK.

> > +static inline uint64_t txt_os_mle_data_size(void *heap)
> > +{
> > + return *((uint64_t *)(txt_bios_data_start(heap) +
> > + txt_bios_data_size(heap))) -
> > + sizeof(uint64_t);
>
> This line wants indenting a little further, such that the"sizeof" aligns
> with the start of the expression. (Same elsewhere below.)
>
> Jan

Will update.

Regards

Sergii Dmytruk

unread,
May 23, 2025, 6:19:33 PMMay 23
to Jan Beulich, Daniel P. Smith, Ross Philipson, trenchbo...@googlegroups.com, xen-...@lists.xenproject.org
On Wed, May 21, 2025 at 05:45:04PM +0200, Jan Beulich wrote:
> > +/* SPDX-License-Identifier: GPL-2.0 */
>
> GPL-2.0-only is, I think, the one to use for new code.

Right.

> > +/*
> > + * Copyright (c) 2025 Apertus Solutions, LLC
> > + * Copyright (c) 2025 Oracle and/or its affiliates.
> > + * Copyright (c) 2025 3mdeb Sp. z o.o
>
> I'm curious: Considering the (just) 2 S-o-b, where's the 3rd copyright
> line coming from?

I'll add "Daniel P. Smith" (already in CC), not sure why his S-o-B
wasn't there.

> > +#include <xen/types.h>
>
> Looks like xen/stdint.h would suffice?

It would for types, but there is also use of `NULL`.

> > +#define UEFI_SLR_TABLE_GUID \
> > + { 0x877a9b2aU, 0x0385, 0x45d1, { 0xa0, 0x34, 0x9d, 0xac, 0x9c, 0x9e, 0x56, 0x5f } }
>
> I'm not sure this is a good place to put UEFI GUIDs. Considering e.g ...

It's here because the GUID is related more to SLRT than to EFI. I can
move it if there is a more fitting place for table GUIDs.

> > +/* SLR table header values */
> > +#define SLR_TABLE_MAGIC 0x4452544d
> > +#define SLR_TABLE_REVISION 1
> > +
> > +/* Current revisions for the policy and UEFI config */
> > +#define SLR_POLICY_REVISION 1
> > +#define SLR_UEFI_CONFIG_REVISION 1
>
> ... this, is the whole concept perhaps bound to UEFI? In which casethe
> whole header may want to move to the efi/ subdir?

This isn't EFI-specific; legacy boot is supported. Some types of
entries are there to provide EFI-specific information.

> > +/* SLR defined architectures */
> > +#define SLR_INTEL_TXT 1
> > +#define SLR_AMD_SKINIT 2
>
> These are both x86, yet the header is put in the common include dir?

It's x86-specific, with the goal of adding more architectures in the
future. I don't know; maybe the header should start as arch-specific
and be moved later. Your call.

> > +/*
> > + * Primary SLR Table Header
> > + */
> > +struct slr_table
> > +{
> > + uint32_t magic;
> > + uint16_t revision;
> > + uint16_t architecture;
> > + uint32_t size;
> > + uint32_t max_size;
> > + /* entries[] */
> > +} __packed;
>
> If x86-specific, the question on the need for some of the __packed arises
> again.

The table is used to communicate data from the pre-DRTM world to the
DRTM world and is produced and consumed by unrelated software components
that don't necessarily pad structures the same way by default.

> > +/*
> > + * Prototype of a function pointed to by slr_entry_dl_info::dl_handler.
> > + */
> > +typedef void (*dl_handler_func)(struct slr_bl_context *bl_context);
>
> It being an internal header, ...
> > + uint64_t dl_handler;
>
> ... why can't this type be used here then? This would presumably avoid a
> typecast later.

It's not an internal header in my understanding of the phrase; Xen
parses what a bootloader has passed to it. In principle, pointers could
be 32-bit here.
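
For illustration, the typecast would then live at the call site; a
sketch (the wrapper name is hypothetical):

    /* Invoke the bootloader's dynamic-launch handler. */
    static void call_dl_handler(struct slr_entry_dl_info *dl_info)
    {
        dl_handler_func handler =
            (dl_handler_func)(uintptr_t)dl_info->dl_handler;

        handler(&dl_info->bl_context);
    }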

> > +static inline void *
> > +slr_end_of_entries(struct slr_table *table)
> > +{
> > + return (uint8_t *)table + table->size;
>
> Considering the function's return type, why not cast to void * (or perhaps
> const void *, if the return type also can be such)?

No particular reason other than that pointer arithmetic on
pointers-to-void typically causes build issues. Can be changed for Xen.

> > +static inline struct slr_entry_hdr *
> > +slr_next_entry(struct slr_table *table, struct slr_entry_hdr *curr)
> > +{
> > + struct slr_entry_hdr *next = (struct slr_entry_hdr *)
> > + ((uint8_t *)curr + curr->size);
> > +
> > + if ( (void *)next >= slr_end_of_entries(table) )
> > + return NULL;
>
> Is this sufficient as a check? With it fulfilled, ...
>
> > + if ( next->tag == SLR_ENTRY_END )
>
> ... this member access may still be out of bounds. IOW the question is what
> level of checking is really adequate here.

SLR_ENTRY_END should really end the table, but it won't hurt to check
for out-of-bounds accesses. Thanks, I will correct the checks along the
lines of the sketch below.
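
A sketch of the corrected walker, checking that the whole header lies
within the table before dereferencing it:

    static inline struct slr_entry_hdr *
    slr_next_entry(struct slr_table *table, struct slr_entry_hdr *curr)
    {
        struct slr_entry_hdr *next = (void *)curr + curr->size;

        /* The full header must fit before any of its fields are read. */
        if ( (void *)next + sizeof(*next) > slr_end_of_entries(table) )
            return NULL;

        if ( next->tag == SLR_ENTRY_END )
            return NULL;

        return next;
    }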

> > +static inline struct slr_entry_hdr *
> > +slr_next_entry_by_tag (struct slr_table *table,
> > + struct slr_entry_hdr *entry,
> > + uint16_t tag)
> > +{
> > + if ( !entry ) /* Start from the beginning */
> > + entry = (struct slr_entry_hdr *)((uint8_t *)table + sizeof(*table));
>
> Extending from the earlier comment - if the inner cast was to void * here,
> the outer one could be dropped altogether.
>
> Jan

Will update.

Regards

Jan Beulich

unread,
Jun 2, 2025, 3:17:36 AMJun 2
to Sergii Dmytruk, Daniel P. Smith, Ross Philipson, Andrew Cooper, Roger Pau Monné, Lukasz Hawrylko, Mateusz Mówka, trenchbo...@googlegroups.com, xen-...@lists.xenproject.org
On 23.05.2025 21:51, Sergii Dmytruk wrote:
> On Wed, May 21, 2025 at 05:19:57PM +0200, Jan Beulich wrote:
>>> +/*
>>> + * Secure Launch defined OS/MLE TXT Heap table
>>> + */
>>> +struct txt_os_mle_data {
>>> + uint32_t version;
>>> + uint32_t reserved;
>>> + uint64_t slrt;
>>> + uint64_t txt_info;
>>> + uint32_t ap_wake_block;
>>> + uint32_t ap_wake_block_size;
>>> + uint8_t mle_scratch[64];
>>> +} __packed;
>>
>> This being x86-specific, what's the __packed intended to achieve here?
>
> This structure is passed to Xen by a bootloader, __packed makes sure the
> structure has a compatible layout.

And it won't have a compatible layout without the attribute?

>>> +/*
>>> + * TXT specification defined BIOS data TXT Heap table
>>> + */
>>> +struct txt_bios_data {
>>> + uint32_t version; /* Currently 5 for TPM 1.2 and 6 for TPM 2.0 */
>>> + uint32_t bios_sinit_size;
>>> + uint64_t reserved1;
>>> + uint64_t reserved2;
>>> + uint32_t num_logical_procs;
>>> + /* Versions >= 3 && < 5 */
>>> + uint32_t sinit_flags;
>>> + /* Versions >= 5 with updates in version 6 */
>>> + uint32_t mle_flags;
>>> + /* Versions >= 4 */
>>> + /* Ext Data Elements */
>>> +} __packed;
>>
>> It does affect sizeof() here, which I'm unsure is going to matter.
>
> It doesn't hurt anything and makes sure offsets match those in the
> specification.

It similarly doesn't appear to hurt anything if the attribute were
omitted. Imo we ought to use compiler extensions only when there is a
need to do so.

>>> +static inline uint64_t txt_bios_data_size(void *heap)
>>
>> Here, below, and in general: Please try to have code be const-correct, i.e.
>> use pointers-to-const wherever applicable.
>
> I assume this doesn't apply to functions returning `void *`. The
> approach used in libc is to accept pointers-to-const but then cast the
> constness away for the return value, but this header isn't a widely-used
> code.

Which is, from all I know, bad practice, and not only in my view.

Jan

Jan Beulich

unread,
Jun 2, 2025, 3:31:07 AMJun 2
to Sergii Dmytruk, Daniel P. Smith, Ross Philipson, trenchbo...@googlegroups.com, xen-...@lists.xenproject.org
On 24.05.2025 00:19, Sergii Dmytruk wrote:
> On Wed, May 21, 2025 at 05:45:04PM +0200, Jan Beulich wrote:
>>> +/*
>>> + * Copyright (c) 2025 Apertus Solutions, LLC
>>> + * Copyright (c) 2025 Oracle and/or its affiliates.
>>> + * Copyright (c) 2025 3mdeb Sp. z o.o
>>
>> I'm curious: Considering the (just) 2 S-o-b, where's the 3rd copyright
>> line coming from?
>
> I'll add "Daniel P. Smith" (already in CC), not sure why his S-o-B
> wasn't there.

Just to mention it: Be careful there; aiui you can't simply add someone else's
S-o-b without their agreement.

>>> +#define UEFI_SLR_TABLE_GUID \
>>> + { 0x877a9b2aU, 0x0385, 0x45d1, { 0xa0, 0x34, 0x9d, 0xac, 0x9c, 0x9e, 0x56, 0x5f } }
>>
>> I'm not sure this is a good place to put UEFI GUIDs. Considering e.g ...
>
> It's here because the GUID is related more to SLRT than to EFI. I can
> move it if there is a more fitting place for table GUIDs.

It'll (at least somewhat) depend on where it's going to be used. That's
a common problem when definitions / declarations are introduced without
any use.

>>> +/*
>>> + * Primary SLR Table Header
>>> + */
>>> +struct slr_table
>>> +{
>>> + uint32_t magic;
>>> + uint16_t revision;
>>> + uint16_t architecture;
>>> + uint32_t size;
>>> + uint32_t max_size;
>>> + /* entries[] */
>>> +} __packed;
>>
>> If x86-specific, the question on the need for some of the __packed arises
>> again.
>
> The table is used to communicate data from the pre-DRTM world to the
> DRTM world and is produced and consumed by unrelated software components
> that don't necessarily pad structures the same way by default.

How do other environments matter when this header is solely used by Xen?

>>> +/*
>>> + * Prototype of a function pointed to by slr_entry_dl_info::dl_handler.
>>> + */
>>> +typedef void (*dl_handler_func)(struct slr_bl_context *bl_context);
>>
>> It being an internal header, ...
>>> +    uint64_t dl_handler;
>>
>> ... why can't this type be used here then? This would presumably avoid a
>> typecast later.
>
> It's not an internal header in my understanding of the phrase: Xen
> parses what a bootloader has passed to it. In principle, pointers could
> be 32-bit here.

"Internal" as opposed to "public", i.e. what's exposed to guests.

Jan

Sergii Dmytruk

unread,
Jun 2, 2025, 6:01:03 PM
to Jan Beulich, Daniel P. Smith, Ross Philipson, Andrew Cooper, Roger Pau Monné, Lukasz Hawrylko, Mateusz Mówka, trenchbo...@googlegroups.com, xen-...@lists.xenproject.org
On Mon, Jun 02, 2025 at 09:17:37AM +0200, Jan Beulich wrote:
> On 23.05.2025 21:51, Sergii Dmytruk wrote:
> > On Wed, May 21, 2025 at 05:19:57PM +0200, Jan Beulich wrote:
> >>> +/*
> >>> + * Secure Launch defined OS/MLE TXT Heap table
> >>> + */
> >>> +struct txt_os_mle_data {
> >>> +    uint32_t version;
> >>> +    uint32_t reserved;
> >>> +    uint64_t slrt;
> >>> +    uint64_t txt_info;
> >>> +    uint32_t ap_wake_block;
> >>> +    uint32_t ap_wake_block_size;
> >>> +    uint8_t mle_scratch[64];
> >>> +} __packed;
> >>
> >> This being x86-specific, what's the __packed intended to achieve here?
> >
> > This structure is passed to Xen by a bootloader, __packed makes sure the
> > structure has a compatible layout.
>
> And it won't have a compatible layout without the attribute?

It will, but presence of __packed makes it trivial to see.

> >>> +/*
> >>> + * TXT specification defined BIOS data TXT Heap table
> >>> + */
> >>> +struct txt_bios_data {
> >>> +    uint32_t version; /* Currently 5 for TPM 1.2 and 6 for TPM 2.0 */
> >>> +    uint32_t bios_sinit_size;
> >>> +    uint64_t reserved1;
> >>> +    uint64_t reserved2;
> >>> +    uint32_t num_logical_procs;
> >>> +    /* Versions >= 3 && < 5 */
> >>> +    uint32_t sinit_flags;
> >>> +    /* Versions >= 5 with updates in version 6 */
> >>> +    uint32_t mle_flags;
> >>> +    /* Versions >= 4 */
> >>> +    /* Ext Data Elements */
> >>> +} __packed;
> >>
> >> It does affect sizeof() here, which I'm unsure is going to matter.
> >
> > It doesn't hurt anything and makes sure offsets match those in the
> > specification.
>
> It similarly doesn't appear to hurt anything if the attribute was omitted.
> Imo we ought to use compiler extensions only when there is a need to do so.

I would argue that it hurts maintainability and code readability to some
extent:
* when the attribute is used, there is no need to verify compatibility
in any way (manually or using pahole), either now or on any future
modification
* when I see __packed, I immediately know the structure is defined
externally and can't be changed at will
* having the attribute only for some structures seems inconsistent

It would be nice if it were possible to verify that the structure is
packed via a static assert using only standard C, but without such means
I see __packed as a useful and harmless compiler extension.
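
The closest approximation I know of is spelling out every offset by
hand, which is exactly the sort of manual verification __packed avoids
(a sketch assuming C11 _Static_assert; Xen's BUILD_BUG_ON could
presumably serve similarly):

    #include <stddef.h>
    #include <stdint.h>

    /* The struct from above, with __packed omitted: */
    struct txt_os_mle_data {
        uint32_t version;
        uint32_t reserved;
        uint64_t slrt;
        uint64_t txt_info;
        uint32_t ap_wake_block;
        uint32_t ap_wake_block_size;
        uint8_t mle_scratch[64];
    };

    /* One assertion per field plus the total size, all of which has to
     * be kept in sync by hand on every future modification: */
    _Static_assert(offsetof(struct txt_os_mle_data, version) == 0, "layout");
    _Static_assert(offsetof(struct txt_os_mle_data, reserved) == 4, "layout");
    _Static_assert(offsetof(struct txt_os_mle_data, slrt) == 8, "layout");
    _Static_assert(offsetof(struct txt_os_mle_data, txt_info) == 16, "layout");
    _Static_assert(offsetof(struct txt_os_mle_data, ap_wake_block) == 24, "layout");
    _Static_assert(offsetof(struct txt_os_mle_data, ap_wake_block_size) == 28, "layout");
    _Static_assert(offsetof(struct txt_os_mle_data, mle_scratch) == 32, "layout");
    _Static_assert(sizeof(struct txt_os_mle_data) == 96, "layout");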

I can of course drop unnecessary attributes if that's the standard
practice for Xen's sources; I never thought it could be undesirable in
a context like this one.

> >>> +static inline uint64_t txt_bios_data_size(void *heap)
> >>
> >> Here, below, and in general: Please try to have code be const-correct, i.e.
> >> use pointers-to-const wherever applicable.
> >
> > I assume this doesn't apply to functions returning `void *`. The
> > approach used in libc is to accept pointers-to-const and then cast the
> > const-ness away for the return value, but this header isn't widely-used
> > code.
>
> Which is, from all I know, bad practice, and not only in my view.
>
> Jan

I actually ended up doing that to have const-correctness in v3. In the
absence of function overloads the casts have to be somewhere; I can put
them in the calling code instead.
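
For illustration, what I mean by putting them in the calling code, as a
rough sketch (placeholder body, not the real heap walk):

    #include <stdint.h>

    /* The helper stays fully const: */
    static inline const void *txt_os_mle_data_start(const void *heap)
    {
        return (const uint8_t *)heap + sizeof(uint64_t); /* placeholder */
    }

    /* A caller needing write access does the cast at its own site: */
    static void example_caller(void *heap)
    {
        uint8_t *os_mle = (uint8_t *)txt_os_mle_data_start(heap);

        os_mle[0] = 0; /* e.g. updating a scratch area */
    }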

Regards

Sergii Dmytruk

unread,
Jun 2, 2025, 6:19:29 PM
to Jan Beulich, Daniel P. Smith, Ross Philipson, trenchbo...@googlegroups.com, xen-...@lists.xenproject.org
On Mon, Jun 02, 2025 at 09:31:11AM +0200, Jan Beulich wrote:
> >>> +#define UEFI_SLR_TABLE_GUID \
> >>> +    { 0x877a9b2aU, 0x0385, 0x45d1, { 0xa0, 0x34, 0x9d, 0xac, 0x9c, 0x9e, 0x56, 0x5f } }
> >>
> >> I'm not sure this is a good place to put UEFI GUIDs. Considering e.g ...
> >
> > It's here because the GUID is related more to SLRT than to EFI. I can
> > move it if there is a more fitting place for table GUIDs.
>
> It'll (at least somewhat) depend on where it's going to be used. It's a
> common problem when definitions / declarations are introduced without any
> use.

It's only used in xen/common/efi/boot.c (patch #20), so it looks like it
should actually be defined there, like the rest of the GUIDs.
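
Moving it would presumably look roughly like this (a sketch, not the
actual patch; the attributes and style used in that file may differ):

    /* Defined next to its only user, as an EFI_GUID initializer: */
    static const EFI_GUID slr_table_guid =
        { 0x877a9b2a, 0x0385, 0x45d1,
          { 0xa0, 0x34, 0x9d, 0xac, 0x9c, 0x9e, 0x56, 0x5f } };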

> >>> +/*
> >>> + * Primary SLR Table Header
> >>> + */
> >>> +struct slr_table
> >>> +{
> >>> +    uint32_t magic;
> >>> +    uint16_t revision;
> >>> +    uint16_t architecture;
> >>> +    uint32_t size;
> >>> +    uint32_t max_size;
> >>> +    /* entries[] */
> >>> +} __packed;
> >>
> >> If x86-specific, the question on the need for some of the __packed arises
> >> again.
> >
> > The table is used to communicate data from the pre-DRTM world to the
> > DRTM world and is produced and consumed by unrelated software components
> > that don't necessarily pad structures the same way by default.
>
> How do other environments matter when this header is solely used by Xen?

Xen uses this header to read data prepared for it. Packing is an easy
way to ensure the data will be parsed consistently regardless of the
architecture or software component which prepared it (i.e., a way to
enforce proper "API").
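
As a hypothetical illustration of the kind of mismatch packing rules out:

    #include <stdint.h>

    struct example {
        uint32_t a;
        uint64_t b;
    };

    /* Without packing, an i386 producer (4-byte alignment for uint64_t)
     * places 'b' at offset 4 and gets sizeof == 12, while x86-64 Xen
     * would read 'b' at offset 8 with sizeof == 16. */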

Regards

Jan Beulich

unread,
Jun 3, 2025, 3:07:06 AM
to Sergii Dmytruk, Daniel P. Smith, Ross Philipson, Andrew Cooper, Roger Pau Monné, Lukasz Hawrylko, Mateusz Mówka, trenchbo...@googlegroups.com, xen-...@lists.xenproject.org
> I actually ended up doing that to have const-correctness in v3. In the
> absence of function overloads the casts have to be somewhere; I can put
> them in the calling code instead.

Casts of which kind? For context: There shouldn't be any casting away of
const-ness (or volatile-ness, for the sake of completeness).

Jan

Sergii Dmytruk

unread,
Jun 3, 2025, 4:50:33 AM
to Jan Beulich, Daniel P. Smith, Ross Philipson, Andrew Cooper, Roger Pau Monné, Lukasz Hawrylko, Mateusz Mówka, trenchbo...@googlegroups.com, xen-...@lists.xenproject.org
Casting away const-ness inside functions like

    static inline void *txt_bios_data_start(const void *heap)

If a function accepts a const pointer and returns it, that turns a
non-const incoming pointer into a const one. Without duplicating the
code (either having const and non-const versions or repeating code in
other ways), nothing in here can be made const cleanly, including the
*_size() functions, because they call the *_start() functions:

    static inline uint64_t txt_os_mle_data_size(const void *heap)
    {
        return *((const uint64_t *)(txt_bios_data_start(heap) +
        //        ^^^^^ -- const
                                    txt_bios_data_size(heap))) -
               sizeof(uint64_t);
    }

Regards

Jan Beulich

unread,
Jun 3, 2025, 4:52:22 AM
to Sergii Dmytruk, Daniel P. Smith, Ross Philipson, Andrew Cooper, Roger Pau Monné, Lukasz Hawrylko, Mateusz Mówka, trenchbo...@googlegroups.com, xen-...@lists.xenproject.org
Yet just to repeat: Besides myself (and maybe others), MISRA objects to
the casting away of const-ness.

Jan

Sergii Dmytruk

unread,
Jun 3, 2025, 11:20:43 AM
to Jan Beulich, Daniel P. Smith, Ross Philipson, Andrew Cooper, Roger Pau Monné, Lukasz Hawrylko, Mateusz Mówka, trenchbo...@googlegroups.com, xen-...@lists.xenproject.org
OK, I'll change it back then, unless I find a way to preserve some
const-ness without duplication.
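
One avenue might be C11 _Generic dispatch, though the trivial bodies
still end up duplicated, so it may not be a real win (an untested
sketch with placeholder bodies):

    #include <stdint.h>

    static inline const void *txt_bios_data_start_c(const void *heap)
    {
        return (const uint8_t *)heap + sizeof(uint64_t); /* placeholder */
    }

    static inline void *txt_bios_data_start_m(void *heap)
    {
        return (uint8_t *)heap + sizeof(uint64_t); /* placeholder */
    }

    /* Pick the const or non-const variant based on the argument type: */
    #define txt_bios_data_start(heap)                 \
        _Generic((heap),                              \
                 const void *: txt_bios_data_start_c, \
                 default:      txt_bios_data_start_m)(heap)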

Regards