[RFC PATCH v2 0/9] x86: Trenchboot Secure Launch DRTM for AMD SKINIT (Linux)


Sergii Dmytruk

Apr 30, 2025, 6:45:07 PM
to linux-...@vger.kernel.org, trenchbo...@googlegroups.com, H. Peter Anvin, Ard Biesheuvel, Borislav Petkov, Dave Hansen, Ingo Molnar, Joerg Roedel, Suravee Suthikulpanit, Thomas Gleixner, x...@kernel.org
NOTE: this patch set follows up on Intel TXT DRTM patches that are
currently under review in their 14th version [0]; therefore, it is not
standalone!

The publication of the patches at this point pursues several goals:
- Make anyone tracking upstream aware of the maturity of the support
for AMD SKINIT.
- Collect early feedback on the SKINIT implementation.
- Finally, demonstrate the extensibility of Secure Launch for
incorporating additional platforms.

As the RFC tag suggests, this series is provisional and will be updated
based on changes made to the initial Secure Launch series. Review comments
are greatly welcomed and will be addressed, but we would caution that
changes to the Secure Launch series will take precedence over review
comments. Once the Secure Launch series is merged, this series will
transition from RFC to a formally submitted series.

-----

The patches extend Secure Launch for legacy and UEFI boot with support
for AMD CPUs and their DRTM in two flavours: SKINIT on its own and SKINIT
with DRTM service running in PSP/ASP.

The code is adjusted to detect CPU type and handle AMD differently.
DRTM-specific differences relative to Intel TXT include:
- absence of DRTM-specific registers to pass data from bootloader to DLME,
resulting in passing some information via boot parameters
- use of a different SLRT entry
- not sending #INIT to APs
- special handling for TPM event logs to make them "compatible" with TXT logs
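The distinction between launch flavours is tracked with flag bits. As a plain-C sketch, the flag values below are the ones this series adds to include/linux/slaunch.h, while the helper names are shortened stand-ins for the kernel's slaunch_is_*() inlines (which read the flags via slaunch_get_flags()):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Flag values as defined in include/linux/slaunch.h by this series. */
#define SL_FLAG_ACTIVE      0x00000001
#define SL_FLAG_ARCH_TXT    0x00000002
#define SL_FLAG_ARCH_SKINIT 0x00000004
#define SL_FLAG_SKINIT_PSP  0x00000008

/* A launch counts as TXT only when both ACTIVE and ARCH_TXT are set. */
static bool is_txt_launch(uint32_t flags)
{
	uint32_t mask = SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT;

	return (flags & mask) == mask;
}

static bool is_skinit_launch(uint32_t flags)
{
	uint32_t mask = SL_FLAG_ACTIVE | SL_FLAG_ARCH_SKINIT;

	return (flags & mask) == mask;
}

/* SKINIT with the PSP/ASP DRTM service sets one extra bit. */
static bool is_skinit_psp(uint32_t flags)
{
	uint32_t mask = SL_FLAG_ACTIVE | SL_FLAG_ARCH_SKINIT |
			SL_FLAG_SKINIT_PSP;

	return (flags & mask) == mask;
}
```

Since a disabled Secure Launch build makes the flags read as 0, every helper correctly returns false in that case without needing a separate stubbed implementation.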

-----

[0]: https://lore.kernel.org/lkml/20250421162712.774...@oracle.com/

Changes in v2:
- rebase onto v14 of the main patch set
- cleaner handling of reset in sl_main.c leading to a smaller diff
- renamed slr_entry_amd_info::boot_params_{base,addr} for consistency with
slr_entry_intel_info
- slightly safer slaunch_reset() macro in slmodule.c

-----

Jagannathan Raman (1):
psp: Perform kernel portion of DRTM procedures

Michał Żygowski (1):
x86: Implement AMD support for Secure Launch

Ross Philipson (6):
x86: AMD changes for Secure Launch Resource Table header file
x86: Secure Launch main header file AMD support
x86: Split up Secure Launch setup and finalize functions
x86: Prepare CPUs for post SKINIT launch
x86/slmodule: Support AMD SKINIT
x86: AMD changes for EFI stub DRTM launch support

Sergii Dmytruk (1):
Documentation/x86: update Secure Launch for AMD SKINIT

.../secure_launch_details.rst | 83 +++++-
.../secure_launch_overview.rst | 61 ++--
arch/x86/Kconfig | 9 +-
arch/x86/boot/compressed/sl_main.c | 271 ++++++++++++++----
arch/x86/boot/compressed/sl_stub.S | 41 ++-
arch/x86/include/asm/svm.h | 2 +
arch/x86/include/uapi/asm/setup_data.h | 3 +-
arch/x86/kernel/Makefile | 1 +
arch/x86/kernel/setup.c | 2 +-
arch/x86/kernel/sl-psp.c | 239 +++++++++++++++
arch/x86/kernel/slaunch.c | 193 +++++++++++--
arch/x86/kernel/slmodule.c | 161 +++++++++--
arch/x86/kernel/smpboot.c | 15 +-
arch/x86/kernel/traps.c | 4 +
drivers/firmware/efi/libstub/x86-stub.c | 12 +-
drivers/iommu/amd/init.c | 12 +
include/linux/slaunch.h | 83 +++++-
include/linux/slr_table.h | 15 +
18 files changed, 1044 insertions(+), 163 deletions(-)
create mode 100644 arch/x86/kernel/sl-psp.c


base-commit: 616c6ae2fa0b736552873af08ad0e5532e04ad80
--
2.49.0

Sergii Dmytruk

Apr 30, 2025, 6:45:09 PM
to linux-...@vger.kernel.org, trenchbo...@googlegroups.com
* Switch ACM to DCE where not talking exclusively about Intel TXT
* Switch MLE to DLME where not talking exclusively about Intel TXT
* Add information about Secure Loader
* Update information about Secure Launch to account for AMD SKINIT

Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
.../secure_launch_details.rst | 83 ++++++++++++++++---
.../secure_launch_overview.rst | 61 +++++++++-----
2 files changed, 113 insertions(+), 31 deletions(-)

diff --git a/Documentation/security/launch-integrity/secure_launch_details.rst b/Documentation/security/launch-integrity/secure_launch_details.rst
index c58fa3a6a607..0936c29fd113 100644
--- a/Documentation/security/launch-integrity/secure_launch_details.rst
+++ b/Documentation/security/launch-integrity/secure_launch_details.rst
@@ -18,13 +18,13 @@ The settings to enable Secure Launch using Kconfig are under::
A kernel with this option enabled can still be booted using other supported
methods.

-To reduce the Trusted Computing Base (TCB) of the MLE [1]_, the build
+To reduce the Trusted Computing Base (TCB) of the DLME [1]_, the build
configuration should be pared down as narrowly as one's use case allows.
Fewer drivers (less active hardware) and features reduce the attack surface.
-As an example in the extreme, the MLE could only have local disk access with no
+As an example in the extreme, the DLME could only have local disk access with no
other hardware supports except optional network access for remote attestation.

-It is also desirable, if possible, to embed the initrd used with the MLE kernel
+It is also desirable, if possible, to embed the initrd used with the DLME kernel
image to reduce complexity.

The following are important configuration necessities to always consider:
@@ -39,7 +39,8 @@ other instabilities at boot. Even in cases where Secure Launch and KASLR work
together, it is still recommended that KASLR be disabled to avoid introducing
security concerns with unprotected kernel memory.

-If possible, a kernel being used as an MLE should be built with KASLR disabled::
+If possible, a kernel being used as a DLME should be built with KASLR
+disabled::

"Processor type and features" -->
"Build a relocatable kernel" -->
@@ -64,7 +65,7 @@ IOMMU Configuration

When doing a Secure Launch, the IOMMU should always be enabled and the drivers
loaded. However, IOMMU passthrough mode should never be used. This leaves the
-MLE completely exposed to DMA after the PMRs [2]_ are disabled. The current
+DLME completely exposed to DMA after the PMRs [2]_ are disabled. The current
default mode is to use IOMMU in lazy translated mode, but strict translated
mode, is the preferred IOMMU mode and this should be selected in the build
configuration::
@@ -109,9 +110,9 @@ Intel TXT Interface

The primary interfaces between the various components in TXT are the TXT MMIO
registers and the TXT heap. The MMIO register banks are described in Appendix B
-of the TXT MLE [1]_ Development Guide.
+of the TXT MLE Development Guide.

-The TXT heap is described in Appendix C of the TXT MLE [1]_ Development
+The TXT heap is described in Appendix C of the TXT MLE Development
Guide. Most of the TXT heap is predefined in the specification. The heap is
initialized by firmware and the pre-launch environment and is subsequently used
by the SINIT ACM. One section, called the OS to MLE Data Table, is reserved for
@@ -571,10 +572,68 @@ An error occurred in the Secure Launch module while mapping the Secure Launch
Resource table. The underlying issue is memremap() failure, most likely due to
a resource shortage.

+AMD SKINIT Interface
+====================
+
+This DRTM comes in two flavours: with a DRTM service running in the PSP/ASP
+and without one. The DRTM service effectively extends the basic functionality
+of the SKINIT instruction, providing stronger security guarantees at the cost
+of a more complicated programming interface.
+
+As of the end of 2024, the DRTM service is available on Milan/Genoa platforms
+running suitable firmware releases. When the firmware doesn't provide the
+service, a simpler DRTM process is used.
+
+The basic SKINIT DRTM workflow is straightforward in its design. It defines
+only the bare minimum necessary to perform the DRTM and to pass some data from
+pre- to post-launch code. The DRTM service extends the workflow by adding more
+metadata and performing some of the operations itself instead of leaving their
+implementation to user-provided code (Secure Loader or SL).
+
+The Secure Loader image is a binary to which the SKINIT instruction passes
+control. The binary must start with a short header, defined in the second
+volume of the AMD64 Architecture Programmer's Manual, which has only two
+required fields. The DRTM integration guide [4]_ adds an extended header which
+is mostly opaque and can be treated as a reserved area in the kernel. Together
+these fields can be presented as the following structure::
+
+ struct sl_header {
+ u16 entry_point;
+ u16 image_size;
+ u8 reserved[62];
+ /*
+ * Any other fields, if present, are implementation-specific.
+ */
+ } __packed;
+
+The Secure Loader is loaded into the Secure Loader Block (SLB), a 64 KiB area
+of RAM below 4 GiB that starts on a 64 KiB boundary. The smaller a particular
+SL image is, the more space is available for passing additional data, which is
+placed after the image so it doesn't get measured by SKINIT.
+
+Information is passed from the bootloader to the kernel via the SLRT, which is
+placed after the end of the Secure Loader. A platform-specific entry of the
+SLRT is additionally linked in as a `setup_data` structure, allowing the
+kernel to discover the location of the SLRT by traversing boot parameters
+looking for the entry.
+
+Description of the header:
+
+===================== ========================================================================
+Field Use
+===================== ========================================================================
+entry_point Offset from the start of the image
+image_size How much of the SLB area is actually occupied by the image
+reserved Data for DRTM service
+===================== ========================================================================
+
.. [1]
- MLE: Measured Launch Environment is the binary runtime that is measured and
- then run by the TXT SINIT ACM. The TXT MLE Development Guide describes the
- requirements for the MLE in detail.
+ DLME: Dynamic Launch Measured Environment (which Intel calls MLE for
+ Measured Launch Environment) is the binary runtime that is measured and
+ then run by the DCE. The TXT MLE Development Guide describes the
+ requirements for the MLE in detail. Because AMD SKINIT doesn't impose any
+ specific requirements of that sort, TXT's format of MLE is used on AMD
+ devices as well for simplicity.

.. [2]
PMR: Intel VTd has a feature in the IOMMU called Protected Memory Registers.
@@ -585,3 +644,7 @@ a resource shortage.

.. [3]
Secure Launch Specification: https://trenchboot.org/specifications/Secure_Launch/
+
+.. [4]
+ Dynamic Root of Trust Measurement (DRTM) Service Integration Guide:
+ https://www.amd.com/content/dam/amd/en/documents/epyc-technical-docs/user-guides/58453.pdf
diff --git a/Documentation/security/launch-integrity/secure_launch_overview.rst b/Documentation/security/launch-integrity/secure_launch_overview.rst
index 492f2b12e297..e9b8082314e1 100644
--- a/Documentation/security/launch-integrity/secure_launch_overview.rst
+++ b/Documentation/security/launch-integrity/secure_launch_overview.rst
@@ -47,9 +47,8 @@ documentation on these technologies can be readily found online; see
the `Resources`_ section below for references.

.. note::
- Currently, only Intel TXT is supported in this first release of the Secure
- Launch feature. AMD/Hygon SKINIT and Arm support will be added in a
- subsequent release.
+ Currently, only Intel TXT and AMD/Hygon SKINIT are supported by the Secure
+ Launch feature. Arm support will be added later.

To enable the kernel to be launched by GETSEC, the Secure Launch stub
must be built into the setup section of the compressed kernel to handle the
@@ -112,22 +111,26 @@ Pre-launch: *Phase where the environment is prepared and configured to initiate
the secure launch by the boot chain.*

- The SLRT is initialized, and dl_stub is placed in memory.
- - Load the kernel, initrd and ACM [2]_ into memory.
- - Set up the TXT heap and page tables describing the MLE [1]_ per the
+ - Load the kernel, initrd and DCE [1]_ into memory.
+ - For TXT, set up the TXT heap and page tables describing the DLME [2]_ per the
specification.
- If non-UEFI platform, dl_stub is called.
- If UEFI platform, SLRT registered with UEFI and efi-stub called.
- Upon completion, efi-stub will call EBS followed by dl_stub.
- The dl_stub will prepare the CPU and the TPM for the launch.
- - The secure launch is then initiated with the GETSET[SENTER] instruction.
+ - The secure launch is then initiated with either GETSEC[SENTER] (Intel) or
+ SKINIT (AMD) instruction.

-Post-launch: *Phase where control is passed from the ACM to the MLE and the secure
-kernel begins execution.*
+Post-launch: *Phase where control is passed from the DCE to the DLME and the
+secure kernel begins execution.*

- Entry from the dynamic launch jumps to the SL stub.
- - SL stub fixes up the world on the BSP.
+ - For TXT, SL stub fixes up the world on the BSP.
- For TXT, SL stub wakes the APs, fixes up their worlds.
- For TXT, APs are left in an optimized (MONITOR/MWAIT) wait state.
+ - For SKINIT, APs are woken up mostly as usual except that the INIT IPIs aren't
+ sent before Startup IPIs to avoid compromising security. INIT IPIs were sent
+ to APs in pre-launch before issuing SKINIT, thus halting them.
- SL stub jumps to startup_32.
- SL main does validation of buffers and memory locations. It sets
the boot parameter loadflag value SLAUNCH_FLAG to inform the main
@@ -137,16 +140,19 @@ kernel begins execution.*
- Kernel boot proceeds normally from this point.
- During early setup, slaunch_setup() runs to finish validation
and setup tasks.
- - The SMP bring up code is modified to wake the waiting APs via the monitor
- address.
+ - For AMD with the DRTM service, the Trusted Memory Region gets released after
+ successful configuration of the IOMMU.
+ - For TXT, the SMP bring up code is modified to wake the waiting APs via the
+ monitor address.
- APs jump to rmpiggy and start up normally from that point.
- SL platform module is registered as a late initcall module. It reads
the TPM event log and extends the measurements taken into the TPM PCRs.
- SL platform module initializes the securityfs interface to allow
- access to the TPM event log and TXT public registers.
+ access to the TXT public registers on Intel and TPM event log everywhere.
- Kernel boot finishes booting normally.
- - SEXIT support to leave SMX mode is present on the kexec path and
- the various reboot paths (poweroff, reset, halt).
+ - On Intel, SEXIT support to leave SMX mode is present on the kexec path and
+ the various reboot paths (poweroff, reset, halt). A similar finalization
+ (locking of DRTM localities) happens on AMD with the DRTM service.

PCR Usage
=========
@@ -224,17 +230,30 @@ GRUB Secure Launch support:

https://github.com/TrenchBoot/grub/tree/grub-sl-fc-38-dlstub

+secure-kernel-loader (Secure Loader for AMD SKINIT, a kind of DCE):
+
+https://github.com/TrenchBoot/secure-kernel-loader/
+
FOSDEM 2021: Secure Upgrades with DRTM

https://archive.fosdem.org/2021/schedule/event/firmware_suwd/

.. [1]
- MLE: Measured Launch Environment is the binary runtime that is measured and
- then run by the TXT SINIT ACM. The TXT MLE Development Guide describes the
- requirements for the MLE in detail.
+ DCE: Dynamic Configuration Environment. Either ACM (Intel's Authenticated
+ Code Module) for TXT or SKL (secure-kernel-loader) for AMD SKINIT.
+
+ ACM is a 32-bit binary blob that is run securely by the GETSEC[SENTER]
+ during a measured launch. It is described in the Intel documentation on TXT
+ and versions for various chipsets are signed and distributed by Intel.
+
+ SKL is an implementation of SL (Secure Loader) which is started securely by
+ the SKINIT instruction in flat 32-bit protected mode without paging. See
+ AMD's System Programming manual for more details on the format and operation.

.. [2]
- ACM: Intel's Authenticated Code Module. This is the 32b bit binary blob that
- is run securely by the GETSEC[SENTER] during a measured launch. It is described
- in the Intel documentation on TXT and versions for various chipsets are
- signed and distributed by Intel.
+ DLME: Dynamic Launch Measured Environment (which Intel calls MLE for
+ Measured Launch Environment) is the binary runtime that is measured and
+ then run by the DCE. The TXT MLE Development Guide describes the
+ requirements for the MLE in detail. Because AMD SKINIT doesn't impose any
+ specific requirements of that sort, TXT's format of MLE is used on AMD
+ devices as well for simplicity.
--
2.49.0

Sergii Dmytruk

Apr 30, 2025, 6:45:11 PM
to linux-...@vger.kernel.org, trenchbo...@googlegroups.com
From: Ross Philipson <ross.ph...@oracle.com>

Introduce the AMD info table that allows the SLRT to be linked in as a
setup_data entry. This allows the SLRT to be found, along with all the
DLME information needed by the SKL (Secure Kernel Loader).
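The discovery path this enables can be sketched in userspace C. The structures below are simplified stand-ins (the SETUP_SECURE_LAUNCH value is illustrative, the slr_entry_hdr that precedes the setup_data fields in the real slr_entry_amd_info is omitted, and find_slrt_base() is a hypothetical helper mirroring the walk the compressed-kernel stub performs):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define SETUP_SECURE_LAUNCH 11 /* illustrative value only */

/* Minimal model of the x86 boot protocol's chained setup_data node. */
struct setup_data {
	uint64_t next; /* address of the next node; 0 terminates the chain */
	uint32_t type;
	uint32_t len;
	/* payload follows */
};

/*
 * Simplified slr_entry_amd_info: the real one starts with a struct
 * slr_entry_hdr, and its next/type/len members double as the setup_data
 * node that gets linked into the boot_params chain.
 */
struct slr_entry_amd_info {
	struct setup_data data;
	uint64_t slrt_size;
	uint64_t slrt_base;
	uint64_t boot_params_addr;
	uint16_t psp_version;
	uint16_t reserved[3];
};

/* Walk the setup_data chain looking for the Secure Launch entry. */
static uint64_t find_slrt_base(struct setup_data *head)
{
	struct setup_data *d;

	for (d = head; d; d = (struct setup_data *)(uintptr_t)d->next) {
		if (d->type == SETUP_SECURE_LAUNCH) {
			struct slr_entry_amd_info *info =
				(struct slr_entry_amd_info *)d;
			return info->slrt_base;
		}
	}

	return 0; /* no Secure Launch entry present */
}
```

In the kernel proper the stub additionally subtracts sizeof(struct slr_entry_hdr) from the setup_data pointer to recover the enclosing info entry; the sketch folds that into the struct layout for brevity.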

Signed-off-by: Ross Philipson <ross.ph...@oracle.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
include/linux/slr_table.h | 15 +++++++++++++++
1 file changed, 15 insertions(+)

diff --git a/include/linux/slr_table.h b/include/linux/slr_table.h
index fea666250033..aecdb62b8a53 100644
--- a/include/linux/slr_table.h
+++ b/include/linux/slr_table.h
@@ -180,6 +180,21 @@ struct slr_entry_intel_info {
struct slr_txt_mtrr_state saved_bsp_mtrrs;
} __packed;

+/*
+ * AMD SKINIT Info table
+ */
+struct slr_entry_amd_info {
+ struct slr_entry_hdr hdr;
+ u64 next;
+ u32 type;
+ u32 len;
+ u64 slrt_size;
+ u64 slrt_base;
+ u64 boot_params_addr;
+ u16 psp_version;
+ u16 reserved[3];
+} __packed;
+
/*
* UEFI config measurement entry
*/
--
2.49.0

Sergii Dmytruk

Apr 30, 2025, 6:45:14 PM
to linux-...@vger.kernel.org, trenchbo...@googlegroups.com
From: Ross Philipson <ross.ph...@oracle.com>

Add additional Secure Launch definitions and declarations for AMD/SKINIT
support.

Use a single implementation of slaunch_is_txt_launch(), since
slaunch_get_flags() returns 0 if Secure Launch support isn't enabled.

Signed-off-by: Ross Philipson <ross.ph...@oracle.com>
Signed-off-by: Michał Żygowski <michal....@3mdeb.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
include/linux/slaunch.h | 81 +++++++++++++++++++++++++++++++++++------
1 file changed, 70 insertions(+), 11 deletions(-)

diff --git a/include/linux/slaunch.h b/include/linux/slaunch.h
index ae67314c2aad..ec7e0d736a03 100644
--- a/include/linux/slaunch.h
+++ b/include/linux/slaunch.h
@@ -14,11 +14,14 @@
*/
#define SL_FLAG_ACTIVE 0x00000001
#define SL_FLAG_ARCH_TXT 0x00000002
+#define SL_FLAG_ARCH_SKINIT 0x00000004
+#define SL_FLAG_SKINIT_PSP 0x00000008

/*
* Secure Launch CPU Type
*/
#define SL_CPU_INTEL 1
+#define SL_CPU_AMD 2

#define __SL32_CS 0x0008
#define __SL32_DS 0x0010
@@ -146,6 +149,8 @@
#define SL_ERROR_INVALID_SLRT 0xc0008022
#define SL_ERROR_SLRT_MISSING_ENTRY 0xc0008023
#define SL_ERROR_SLRT_MAP 0xc0008024
+#define SL_ERROR_MISSING_EVENT_LOG 0xc0008025
+#define SL_ERROR_MAP_SETUP_DATA 0xc0008026

/*
* Secure Launch Defined Limits
@@ -325,9 +330,25 @@ struct smx_rlp_mle_join {
u32 rlp_entry_point; /* phys addr */
} __packed;

+/* The TCG original Spec ID structure defined for TPM 1.2 */
+#define TCG_SPECID_SIG00 "Spec ID Event00"
+
+struct tpm_tcg_specid_event_head {
+ char signature[16];
+ u32 platform_class;
+ u8 spec_ver_minor;
+ u8 spec_ver_major;
+ u8 errata;
+ u8 uintn_size; /* reserved (must be 0) for 1.21 */
+ u8 vendor_info_size;
+ /* vendor_info[]; */
+} __packed;
+
/*
- * TPM event log structures defined in both the TXT specification and
- * the TCG documentation.
+ * TPM event log structures defined by the TXT specification derived
+ * from the TCG documentation. For TXT this is set up as the container
+ * header. On AMD this header is embedded into the vendor information
+ * after the TCG spec ID header.
*/
#define TPM_EVTLOG_SIGNATURE "TXT Event Container"

@@ -344,6 +365,25 @@ struct tpm_event_log_header {
/* PCREvents[] */
} __packed;

+/* TPM Event Log Size Macros */
+#define TCG_PCClientSpecIDEventStruct_SIZE \
+ (sizeof(struct tpm_tcg_specid_event_head))
+#define TCG_EfiSpecIdEvent_SIZE(n) \
+ ((n) * sizeof(struct tcg_efi_specid_event_algs) \
+ + sizeof(struct tcg_efi_specid_event_head) \
+ + sizeof(u8) /* vendorInfoSize */)
+#define TPM2_HASH_COUNT(base) (*((u32 *)(base) \
+ + (offsetof(struct tcg_efi_specid_event_head, num_algs) >> 2)))
+
+/* AMD Specific Structures and Definitions */
+struct sl_header {
+ u16 skl_entry_point;
+ u16 length;
+ u8 reserved[62];
+ u16 skl_info_offset;
+ u16 bootloader_data_offset;
+} __packed;
+
/*
* Functions to extract data from the Intel TXT Heap Memory. The layout
* of the heap is as follows:
@@ -512,16 +552,14 @@ void slaunch_fixup_jump_vector(void);
u32 slaunch_get_flags(void);
struct sl_ap_wake_info *slaunch_get_ap_wake_info(void);
struct acpi_table_header *slaunch_get_dmar_table(struct acpi_table_header *dmar);
+void slaunch_cpu_setup_skinit(void);
+void __noreturn slaunch_skinit_reset(const char *msg, u64 error);
void __noreturn slaunch_txt_reset(void __iomem *txt,
const char *msg, u64 error);
void slaunch_finalize(int do_sexit);
-
-static inline bool slaunch_is_txt_launch(void)
-{
- u32 mask = SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT;
-
- return (slaunch_get_flags() & mask) == mask;
-}
+bool slaunch_psp_tmr_release(void);
+void slaunch_psp_setup(void);
+void slaunch_psp_finalize(void);

#else

@@ -529,6 +567,10 @@ static inline void slaunch_setup_txt(void)
{
}

+static inline void slaunch_cpu_setup_skinit(void)
+{
+}
+
static inline void slaunch_fixup_jump_vector(void)
{
}
@@ -545,14 +587,31 @@ static inline struct acpi_table_header *slaunch_get_dmar_table(struct acpi_table

static inline void slaunch_finalize(int do_sexit)
{
+ (void)do_sexit;
}

+#endif /* !IS_ENABLED(CONFIG_SECURE_LAUNCH) */
+
static inline bool slaunch_is_txt_launch(void)
{
- return false;
+ u32 mask = SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT;
+
+ return (slaunch_get_flags() & mask) == mask;
}

-#endif /* !IS_ENABLED(CONFIG_SECURE_LAUNCH) */
+static inline bool slaunch_is_skinit_launch(void)
+{
+ u32 mask = SL_FLAG_ACTIVE | SL_FLAG_ARCH_SKINIT;
+
+ return (slaunch_get_flags() & mask) == mask;
+}
+
+static inline bool slaunch_is_skinit_psp(void)
+{
+ u32 mask = SL_FLAG_ACTIVE | SL_FLAG_ARCH_SKINIT | SL_FLAG_SKINIT_PSP;
+
+ return (slaunch_get_flags() & mask) == mask;
+}

#endif /* !__ASSEMBLY */

--
2.49.0

Sergii Dmytruk

Apr 30, 2025, 6:45:15 PM
to linux-...@vger.kernel.org, trenchbo...@googlegroups.com
From: Ross Philipson <ross.ph...@oracle.com>

Split up the setup and finalize functions internally to determine
the type of launch and call the appropriate function (TXT or SKINIT
version).
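The dispatch shape this introduces can be sketched as follows. The booleans stand in for boot_cpu_has(X86_FEATURE_SMX) and boot_cpu_has(X86_FEATURE_SKINIT), and the counters are test instrumentation only; the real functions take no arguments and read boot_params.hdr.loadflags directly:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the CPU feature checks. */
static bool cpu_has_smx;
static bool cpu_has_skinit;

/* Instrumented stand-ins for the per-vendor setup paths. */
static int txt_setups, skinit_setups;

static void setup_txt(void)    { txt_setups++; }
static void setup_skinit(void) { skinit_setups++; }

/* Mirrors the shape of the slaunch_setup() wrapper added by this patch. */
static void slaunch_setup(bool slaunch_flag_set)
{
	/*
	 * If not booted through the secure launch entry point, the
	 * SLAUNCH_FLAG loadflag is clear and there is nothing to do.
	 */
	if (!slaunch_flag_set)
		return;

	if (cpu_has_smx)
		setup_txt();
	else if (cpu_has_skinit)
		setup_skinit();
}
```

The finalize path in the patch follows the same pattern, with slaunch_finalize() picking slaunch_finalize_txt() or slaunch_finalize_skinit() off the same feature bits.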

Signed-off-by: Ross Philipson <ross.ph...@oracle.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
arch/x86/include/asm/svm.h | 2 ++
arch/x86/kernel/setup.c | 2 +-
arch/x86/kernel/slaunch.c | 69 +++++++++++++++++++++++++++++++-------
include/linux/slaunch.h | 4 +--
4 files changed, 62 insertions(+), 15 deletions(-)

diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index 9b7fa99ae951..da9536c5a137 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -584,6 +584,8 @@ static inline void __unused_size_checks(void)

#define SVM_CPUID_FUNC 0x8000000a

+#define SVM_VM_CR_INIT_REDIRECTION 1
+
#define SVM_SELECTOR_S_SHIFT 4
#define SVM_SELECTOR_DPL_SHIFT 5
#define SVM_SELECTOR_P_SHIFT 7
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index afb1b238202f..3bcf5a5fbac7 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -999,7 +999,7 @@ void __init setup_arch(char **cmdline_p)
early_gart_iommu_check();
#endif

- slaunch_setup_txt();
+ slaunch_setup();

/*
* partially used pages are not usable - thus
diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
index b6ba4c526aa3..d81433a9b699 100644
--- a/arch/x86/kernel/slaunch.c
+++ b/arch/x86/kernel/slaunch.c
@@ -18,6 +18,7 @@
#include <asm/tlbflush.h>
#include <asm/e820/api.h>
#include <asm/setup.h>
+#include <asm/svm.h>
#include <asm/realmode.h>
#include <linux/slr_table.h>
#include <linux/slaunch.h>
@@ -437,21 +438,11 @@ void __init slaunch_fixup_jump_vector(void)
* Intel TXT specific late stub setup and validation called from within
* x86 specific setup_arch().
*/
-void __init slaunch_setup_txt(void)
+static void __init slaunch_setup_txt(void)
{
u64 one = TXT_REGVALUE_ONE, val;
void __iomem *txt;

- if (!boot_cpu_has(X86_FEATURE_SMX))
- return;
-
- /*
- * If booted through secure launch entry point, the loadflags
- * option will be set.
- */
- if (!(boot_params.hdr.loadflags & SLAUNCH_FLAG))
- return;
-
/*
* See if SENTER was done by reading the status register in the
* public space. If the public register space cannot be read, TXT may
@@ -523,6 +514,42 @@ void __init slaunch_setup_txt(void)
pr_info("Intel TXT setup complete\n");
}

+/*
+ * AMD SKINIT specific late stub setup and validation called from within
+ * x86 specific setup_arch().
+ */
+static void __init slaunch_setup_skinit(void)
+{
+ u64 val;
+
+ /*
+ * If the platform is performing a Secure Launch via SKINIT
+ * INIT_REDIRECTION flag will be active.
+ */
+ rdmsrl(MSR_VM_CR, val);
+ if (!(val & (1 << SVM_VM_CR_INIT_REDIRECTION)))
+ return;
+
+ /* Set flags on BSP so subsequent code knows it was a SKINIT launch */
+ sl_flags |= (SL_FLAG_ACTIVE|SL_FLAG_ARCH_SKINIT);
+ pr_info("AMD SKINIT setup complete\n");
+}
+
+void __init slaunch_setup(void)
+{
+ /*
+ * If booted through secure launch entry point, the loadflags
+ * option will be set.
+ */
+ if (!(boot_params.hdr.loadflags & SLAUNCH_FLAG))
+ return;
+
+ if (boot_cpu_has(X86_FEATURE_SMX))
+ slaunch_setup_txt();
+ else if (boot_cpu_has(X86_FEATURE_SKINIT))
+ slaunch_setup_skinit();
+}
+
static inline void smx_getsec_sexit(void)
{
asm volatile ("getsec\n"
@@ -533,7 +560,7 @@ static inline void smx_getsec_sexit(void)
* Used during kexec and on reboot paths to finalize the TXT state
* and do an SEXIT exiting the DRTM and disabling SMX mode.
*/
-void slaunch_finalize(int do_sexit)
+static void slaunch_finalize_txt(int do_sexit)
{
u64 one = TXT_REGVALUE_ONE, val;
void __iomem *config;
@@ -594,3 +621,21 @@ void slaunch_finalize(int do_sexit)

pr_info("TXT SEXIT complete.\n");
}
+
+/*
+ * Used during kexec and on reboot paths to finalize the SKINIT.
+ */
+static void slaunch_finalize_skinit(void)
+{
+ /* AMD CPUs with PSP-supported DRTM */
+ if (!slaunch_is_skinit_psp())
+ return;
+}
+
+void slaunch_finalize(int do_sexit)
+{
+ if (boot_cpu_has(X86_FEATURE_SMX))
+ slaunch_finalize_txt(do_sexit);
+ else if (boot_cpu_has(X86_FEATURE_SKINIT))
+ slaunch_finalize_skinit();
+}
diff --git a/include/linux/slaunch.h b/include/linux/slaunch.h
index ec7e0d736a03..22e253960fdd 100644
--- a/include/linux/slaunch.h
+++ b/include/linux/slaunch.h
@@ -547,7 +547,7 @@ static inline int tpm2_log_event(struct txt_heap_event_log_pointer2_1_element *e
/*
* External functions available in mainline kernel.
*/
-void slaunch_setup_txt(void);
+void slaunch_setup(void);
void slaunch_fixup_jump_vector(void);
u32 slaunch_get_flags(void);
struct sl_ap_wake_info *slaunch_get_ap_wake_info(void);
@@ -563,7 +563,7 @@ void slaunch_psp_finalize(void);

#else

-static inline void slaunch_setup_txt(void)
+static inline void slaunch_setup(void)
{
}

--
2.49.0

Sergii Dmytruk

Apr 30, 2025, 6:45:19 PM
to linux-...@vger.kernel.org, trenchbo...@googlegroups.com
From: Michał Żygowski <michal....@3mdeb.com>

AMD SKINIT uses the same entry point as Intel TXT (sl_stub_entry). It
follows a similar but simpler path than TXT; there is no TXT heap and
APs are started by a standard INIT/SIPI/SIPI sequence.

Contrary to TXT, SKINIT does not use code provided by the CPU vendor.
Instead it requires an intermediate loader (SKL), whose task is to set
up memory protection and establish a proper CPU context before handing
control over to the kernel.

In order to simplify adding new entries and to minimize the number of
differences between AMD and Intel, the event logs have actually two
headers, both for TPM 1.2 and 2.0.

For TPM 1.2 this is TCG_PCClientSpecIDEventStruct [1] with Intel's own
TXT-specific header embedded inside its 'vendorInfo' field. The offset
to this field is added to the base address on the AMD path, making the code
for adding new events the same for both vendors.

TPM 2.0 in TXT uses HEAP_EVENT_LOG_POINTER_ELEMENT2_1 structure, which
is normally constructed on the TXT stack [2]. For AMD, this structure is
put inside TCG_EfiSpecIdEvent [3], also in 'vendorInfo' field. The
actual offset to this field depends on the number of hash algorithms
supported by the event log.

Other changes:
- update common code to handle reset on AMD as well
- reserve memory region occupied by SKL (called SLB) and event log

[1] https://www.trustedcomputinggroup.org/wp-content/uploads/TCG_PCClientImplementation_1-21_1_00.pdf
[2] http://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf
[3] https://trustedcomputinggroup.org/wp-content/uploads/TCG_PCClientSpecPlat_TPM_2p0_1p04_pub.pdf
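The TPM 1.2 double-header arithmetic can be illustrated with the spec-ID structure this series declares in include/linux/slaunch.h. This is a simplified userspace sketch (fixed-width stdint types replace the kernel's u8/u32, and txt_header_offset() is a hypothetical helper); in the real log the spec-ID structure is itself preceded by a TCG PCR event wrapper, which the kernel accounts for separately:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * TCG PC Client spec-ID event header for TPM 1.2. On AMD, Intel's
 * TXT-specific container header is embedded in the vendor-info bytes
 * that immediately follow this fixed part.
 */
struct tpm_tcg_specid_event_head {
	char signature[16];       /* "Spec ID Event00" */
	uint32_t platform_class;
	uint8_t spec_ver_minor;
	uint8_t spec_ver_major;
	uint8_t errata;
	uint8_t uintn_size;       /* reserved (must be 0) for 1.21 */
	uint8_t vendor_info_size;
	/* vendor_info[] follows */
} __attribute__((packed));

/*
 * Offset from the start of the spec-ID header to the embedded TXT
 * container header: vendor_info starts right after the fixed part,
 * so it is simply the size of the packed structure.
 */
static size_t txt_header_offset(void)
{
	return sizeof(struct tpm_tcg_specid_event_head);
}
```

Because the packed fixed part is constant-sized for TPM 1.2, the offset is a compile-time constant; the TPM 2.0 case differs in that the offset into TCG_EfiSpecIdEvent depends on the number of hash algorithms the log advertises.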

Signed-off-by: Michał Żygowski <michal....@3mdeb.com>
Signed-off-by: Krystian Hebel <krystia...@3mdeb.com>
Signed-off-by: Ross Philipson <ross.ph...@oracle.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
arch/x86/Kconfig | 9 +-
arch/x86/boot/compressed/sl_main.c | 271 ++++++++++++++++++++-----
arch/x86/boot/compressed/sl_stub.S | 41 +++-
arch/x86/include/uapi/asm/setup_data.h | 3 +-
arch/x86/kernel/slaunch.c | 99 ++++++++-
5 files changed, 347 insertions(+), 76 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index badde1e9742e..d521838c77db 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2006,11 +2006,12 @@ config SECURE_LAUNCH
depends on X86_64 && X86_X2APIC && TCG_TIS && TCG_CRB && CRYPTO_LIB_SHA1 && CRYPTO_LIB_SHA256
help
The Secure Launch feature allows a kernel to be loaded
- directly through an Intel TXT measured launch. Intel TXT
+ directly through a dynamic launch. Intel TXT or AMD SKINIT
establishes a Dynamic Root of Trust for Measurement (DRTM)
- where the CPU measures the kernel image. This feature then
- continues the measurement chain over kernel configuration
- information and init images.
+ where the CPU or a Dynamic Configuration Environment (DCE)
+ measures the kernel image. This feature then continues the
+ measurement chain over kernel configuration information and
+ init images.

source "kernel/Kconfig.hz"

diff --git a/arch/x86/boot/compressed/sl_main.c b/arch/x86/boot/compressed/sl_main.c
index 5e0fd0d7bd72..f0bb40b608be 100644
--- a/arch/x86/boot/compressed/sl_main.c
+++ b/arch/x86/boot/compressed/sl_main.c
@@ -13,6 +13,7 @@
#include <asm/msr.h>
#include <asm/mtrr.h>
#include <asm/processor-flags.h>
+#include <asm/svm.h>
#include <asm/asm-offsets.h>
#include <asm/bootparam.h>
#include <asm/bootparam_utils.h>
@@ -26,6 +27,14 @@
#define SL_TPM_LOG 1
#define SL_TPM2_LOG 2

+#define sl_reset(e) \
+ do { \
+ if (sl_cpu_type == SL_CPU_INTEL) \
+ sl_txt_reset(e); \
+ else \
+ sl_skinit_reset(); \
+ } while (0)
+
static void *evtlog_base;
static u32 evtlog_size;
static struct txt_heap_event_log_pointer2_1_element *log21_elem;
@@ -69,6 +78,14 @@ static void __noreturn sl_txt_reset(u64 error)
unreachable();
}

+static void __noreturn sl_skinit_reset(void)
+{
+ /* AMD does not have a reset mechanism or an error register */
+ asm volatile ("ud2");
+
+ unreachable();
+}
+
static u64 sl_rdmsr(u32 reg)
{
u64 lo, hi;
@@ -78,25 +95,41 @@ static u64 sl_rdmsr(u32 reg)
return (hi << 32) | lo;
}

-static struct slr_table *sl_locate_and_validate_slrt(void)
+static struct slr_table *sl_locate_and_validate_slrt(struct boot_params *bp)
{
struct txt_os_mle_data *os_mle_data;
+ struct slr_entry_amd_info *amd_info;
+ struct setup_data *data;
struct slr_table *slrt;
void *txt_heap;

- txt_heap = (void *)sl_txt_read(TXT_CR_HEAP_BASE);
- os_mle_data = txt_os_mle_data_start(txt_heap);
-
- if (!os_mle_data->slrt)
- sl_txt_reset(SL_ERROR_INVALID_SLRT);
+ if (sl_cpu_type & SL_CPU_AMD) {
+ slrt = NULL;
+ data = (struct setup_data *)bp->hdr.setup_data;
+ while (data) {
+ if (data->type == SETUP_SECURE_LAUNCH) {
+ amd_info =
+ (struct slr_entry_amd_info *)((u8 *)data -
+ sizeof(struct slr_entry_hdr));
+ slrt = (struct slr_table *)amd_info->slrt_base;
+ break;
+ }
+ data = (struct setup_data *)data->next;
+ }

- slrt = (struct slr_table *)os_mle_data->slrt;
+ if (!slrt || slrt->magic != SLR_TABLE_MAGIC ||
+ slrt->architecture != SLR_AMD_SKINIT)
+ sl_skinit_reset();
+ } else {
+ txt_heap = (void *)sl_txt_read(TXT_CR_HEAP_BASE);
+ os_mle_data = txt_os_mle_data_start(txt_heap);

- if (slrt->magic != SLR_TABLE_MAGIC)
- sl_txt_reset(SL_ERROR_INVALID_SLRT);
+ slrt = (struct slr_table *)os_mle_data->slrt;

- if (slrt->architecture != SLR_INTEL_TXT)
- sl_txt_reset(SL_ERROR_INVALID_SLRT);
+ if (!slrt || slrt->magic != SLR_TABLE_MAGIC ||
+ slrt->architecture != SLR_INTEL_TXT)
+ sl_txt_reset(SL_ERROR_INVALID_SLRT);
+ }

return slrt;
}
@@ -177,6 +210,26 @@ static void sl_txt_validate_msrs(struct txt_os_mle_data *os_mle_data)
sl_txt_reset(SL_ERROR_MSR_INV_MISC_EN);
}

+/*
+ * In order to simplify adding new entries and to minimize the number of
+ * differences between AMD and Intel, the event logs have actually two headers,
+ * both for TPM 1.2 and 2.0.
+ *
+ * For TPM 1.2 this is TCG_PCClientSpecIDEventStruct [1] with Intel's own
+ * TXT-specific header embedded inside its 'vendorInfo' field. The offset to
+ * this field is added to the base address in AMD path, making the code for
+ * adding new events the same for both vendors.
+ *
+ * TPM 2.0 in TXT uses HEAP_EVENT_LOG_POINTER_ELEMENT2_1 structure, which is
+ * normally constructed on the TXT stack [2]. For AMD, this structure is put
+ * inside TCG_EfiSpecIdEvent [3], also in 'vendorInfo' field. The actual offset
+ * to this field depends on number of hash algorithms supported by the event
+ * log.
+ *
+ * [1] https://www.trustedcomputinggroup.org/wp-content/uploads/TCG_PCClientImplementation_1-21_1_00.pdf
+ * [2] http://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf
+ * [3] https://trustedcomputinggroup.org/wp-content/uploads/TCG_PCClientSpecPlat_TPM_2p0_1p04_pub.pdf
+ */
static void sl_find_drtm_event_log(struct slr_table *slrt)
{
struct txt_os_sinit_data *os_sinit_data;
@@ -185,11 +238,31 @@ static void sl_find_drtm_event_log(struct slr_table *slrt)

log_info = slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_LOG_INFO);
if (!log_info)
- sl_txt_reset(SL_ERROR_SLRT_MISSING_ENTRY);
+ sl_reset(SL_ERROR_SLRT_MISSING_ENTRY);

evtlog_base = (void *)log_info->addr;
evtlog_size = log_info->size;

+ if (sl_cpu_type == SL_CPU_AMD) {
+ /* Check if it is TPM 2.0 event log */
+ if (!memcmp(evtlog_base + sizeof(struct tcg_pcr_event),
+ TCG_SPECID_SIG, sizeof(TCG_SPECID_SIG))) {
+ log21_elem = evtlog_base + sizeof(struct tcg_pcr_event)
+ + TCG_EfiSpecIdEvent_SIZE(
+ TPM2_HASH_COUNT(evtlog_base
+ + sizeof(struct tcg_pcr_event)));
+ tpm_log_ver = SL_TPM2_LOG;
+ } else {
+ evtlog_base += sizeof(struct tcg_pcr_event)
+ + TCG_PCClientSpecIDEventStruct_SIZE;
+ evtlog_size -= sizeof(struct tcg_pcr_event)
+ + TCG_PCClientSpecIDEventStruct_SIZE;
+ }
+
+ return;
+ }
+
+ /* Else it is Intel and TXT */
txt_heap = (void *)sl_txt_read(TXT_CR_HEAP_BASE);

/*
@@ -213,7 +286,23 @@ static void sl_find_drtm_event_log(struct slr_table *slrt)
tpm_log_ver = SL_TPM2_LOG;
}

-static void sl_validate_event_log_buffer(void)
+static bool sl_check_buffer_kernel_overlap(void *buffer_base, void *buffer_end,
+ void *kernel_base, void *kernel_end,
+ bool allow_inside)
+{
+ if (buffer_base >= kernel_end && buffer_end > kernel_end)
+ return false; /* above */
+
+ if (buffer_end <= kernel_base && buffer_base < kernel_base)
+ return false; /* below */
+
+ if (allow_inside && buffer_end <= kernel_end && buffer_base >= kernel_base)
+ return false; /* inside */
+
+ return true;
+}
+
+static void sl_txt_validate_event_log_buffer(void)
{
struct txt_os_sinit_data *os_sinit_data;
void *txt_heap, *txt_end;
@@ -235,11 +324,9 @@ static void sl_validate_event_log_buffer(void)
* This check is to ensure the event log buffer does not overlap with
* the MLE image.
*/
- if (evtlog_base >= mle_end && evtlog_end > mle_end)
- goto pmr_check; /* above */
-
- if (evtlog_end <= mle_base && evtlog_base < mle_base)
- goto pmr_check; /* below */
+ if (!sl_check_buffer_kernel_overlap(evtlog_base, evtlog_end,
+ mle_base, mle_end, false))
+ goto pmr_check;

sl_txt_reset(SL_ERROR_MLE_BUFFER_OVERLAP);

@@ -254,6 +341,38 @@ static void sl_validate_event_log_buffer(void)
sl_check_pmr_coverage(evtlog_base, evtlog_size, true);
}

+static void sl_skinit_validate_buffers(struct slr_table *slrt, void *bootparams)
+{
+ void *evtlog_end, *kernel_start, *kernel_end;
+ struct slr_entry_dl_info *dl_info;
+
+ /* On AMD, all the buffers must be below 4 GiB */
+ if ((u64)(evtlog_base + evtlog_size) > UINT_MAX)
+ sl_skinit_reset();
+
+ evtlog_end = evtlog_base + evtlog_size;
+
+ dl_info = slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_DL_INFO);
+ if (!dl_info)
+ sl_skinit_reset();
+
+ kernel_start = (void *)dl_info->dlme_base;
+ kernel_end = (void *)(dl_info->dlme_base + dl_info->dlme_size);
+
+ /*
+ * This check is to ensure the event log buffer and the bootparams do
+ * not overlap with the kernel image. Note that on an EFI stub boot,
+ * the bootparams will be fully inside the kernel image.
+ */
+ if (sl_check_buffer_kernel_overlap(bootparams, bootparams + PAGE_SIZE,
+ kernel_start, kernel_end, true))
+ sl_skinit_reset();
+
+ if (sl_check_buffer_kernel_overlap(evtlog_base, evtlog_end,
+ kernel_start, kernel_end, false))
+ sl_skinit_reset();
+}
+
static void sl_find_event_log_algorithms(void)
{
struct tcg_efi_specid_event_head *efi_head =
@@ -261,17 +380,17 @@ static void sl_find_event_log_algorithms(void)
u32 i;

if (efi_head->num_algs == 0)
- sl_txt_reset(SL_ERROR_TPM_INVALID_ALGS);
+ sl_reset(SL_ERROR_TPM_INVALID_ALGS);

tpm_algs = &efi_head->digest_sizes[0];
tpm_num_algs = efi_head->num_algs;

for (i = 0; i < tpm_num_algs; i++) {
if (tpm_algs[i].digest_size > TPM_MAX_DIGEST_SIZE)
- sl_txt_reset(SL_ERROR_TPM_INVALID_ALGS);
+ sl_reset(SL_ERROR_TPM_INVALID_ALGS);
/* Alg ID 0 is invalid and maps to TPM_ALG_ERROR */
if (tpm_algs[i].alg_id == TPM_ALG_ERROR)
- sl_txt_reset(SL_ERROR_TPM_INVALID_ALGS);
+ sl_reset(SL_ERROR_TPM_INVALID_ALGS);
}
}

@@ -301,7 +420,7 @@ static void sl_tpm_log_event(u32 pcr, u32 event_type,
total_size = sizeof(*pcr_event) + event_size;

if (tpm_log_event(evtlog_base, evtlog_size, total_size, pcr_event))
- sl_txt_reset(SL_ERROR_TPM_LOGGING_FAILED);
+ sl_reset(SL_ERROR_TPM_LOGGING_FAILED);
}

static void sl_tpm2_log_event(u32 pcr, u32 event_type,
@@ -360,7 +479,7 @@ static void sl_tpm2_log_event(u32 pcr, u32 event_type,
total_size += sizeof(*event) + event_size;

if (tpm2_log_event(log21_elem, evtlog_base, evtlog_size, total_size, &event_buf[0]))
- sl_txt_reset(SL_ERROR_TPM_LOGGING_FAILED);
+ sl_reset(SL_ERROR_TPM_LOGGING_FAILED);
}

static void sl_tpm_extend_evtlog(u32 pcr, u32 type,
@@ -385,6 +504,13 @@ static struct setup_data *sl_handle_setup_data(struct setup_data *curr,

next = (struct setup_data *)(unsigned long)curr->next;

+ /*
+ * If this is the Secure Launch setup_data, it is the AMD info entry in
+ * the SLR table, which is measured separately; skip it.
+ */
+ if (curr->type == SETUP_SECURE_LAUNCH)
+ return next;
+
/* SETUP_INDIRECT instances have to be handled differently */
if (curr->type == SETUP_INDIRECT) {
ind = (struct setup_indirect *)((u8 *)curr + offsetof(struct setup_data, data));
@@ -427,30 +553,54 @@ static void sl_extend_slrt(struct slr_policy_entry *entry)
struct slr_table *slrt = (struct slr_table *)entry->entity;
struct slr_entry_intel_info *intel_info;
struct slr_entry_intel_info intel_tmp;
+ struct slr_entry_amd_info *amd_info;
+ struct slr_entry_amd_info amd_tmp;

/*
* In revision one of the SLRT, the only table that needs to be
- * measured is the Intel info table. Everything else is meta-data,
- * addresses and sizes. Note the size of what to measure is not set.
- * The flag SLR_POLICY_IMPLICIT_SIZE leaves it to the measuring code
- * to sort out.
+ * measured is the platform-specific info table. Everything else is
+ * meta-data, addresses and sizes. Note the size of what to measure is
+ * not set. The flag SLR_POLICY_IMPLICIT_SIZE leaves it to the measuring
+ * code to sort out.
*/
if (slrt->revision == 1) {
- intel_info = slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_INTEL_INFO);
- if (!intel_info)
- sl_txt_reset(SL_ERROR_SLRT_MISSING_ENTRY);
+ if (sl_cpu_type == SL_CPU_INTEL) {
+ intel_info =
+ slr_next_entry_by_tag(slrt, NULL,
+ SLR_ENTRY_INTEL_INFO);
+ if (!intel_info)
+ sl_txt_reset(SL_ERROR_SLRT_MISSING_ENTRY);

- /*
- * Make a temp copy and zero out address fields since they should
- * not be measured.
- */
- intel_tmp = *intel_info;
- intel_tmp.boot_params_addr = 0;
- intel_tmp.txt_heap = 0;
+ /*
+ * Make a temp copy and zero out address fields since they should
+ * not be measured.
+ */
+ intel_tmp = *intel_info;
+ intel_tmp.boot_params_addr = 0;
+ intel_tmp.txt_heap = 0;
+
+ sl_tpm_extend_evtlog(entry->pcr, TXT_EVTYPE_SLAUNCH,
+ (void *)&intel_tmp, sizeof(*intel_info),
+ entry->evt_info);
+ } else if (sl_cpu_type == SL_CPU_AMD) {
+ amd_info = slr_next_entry_by_tag(slrt, NULL,
+ SLR_ENTRY_AMD_INFO);
+ if (!amd_info)
+ sl_skinit_reset();

- sl_tpm_extend_evtlog(entry->pcr, TXT_EVTYPE_SLAUNCH,
- (void *)&intel_tmp, sizeof(*intel_info),
- entry->evt_info);
+ /*
+ * Make a temp copy and zero out address fields since
+ * they should not be measured.
+ */
+ amd_tmp = *amd_info;
+ amd_tmp.next = 0;
+ amd_tmp.boot_params_addr = 0;
+ amd_tmp.slrt_base = 0;
+
+ sl_tpm_extend_evtlog(entry->pcr, TXT_EVTYPE_SLAUNCH,
+ (void *)&amd_tmp, sizeof(amd_tmp),
+ entry->evt_info);
+ }
}
}

@@ -480,7 +630,7 @@ static void sl_process_extend_policy(struct slr_table *slrt)

policy = slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_ENTRY_POLICY);
if (!policy)
- sl_txt_reset(SL_ERROR_SLRT_MISSING_ENTRY);
+ sl_reset(SL_ERROR_SLRT_MISSING_ENTRY);

for (i = 0; i < policy->nr_entries; i++) {
switch (policy->policy_entries[i].entity_type) {
@@ -545,20 +695,29 @@ asmlinkage __visible void sl_main(void *bootparams)
bp->hdr.loadflags &= ~SLAUNCH_FLAG;

/*
- * Currently only Intel TXT is supported for Secure Launch. Testing
+ * Intel TXT and AMD SKINIT are supported for Secure Launch. Testing
* this value also indicates that the kernel was booted successfully
- * through the Secure Launch entry point and is in SMX mode.
+ * through the Secure Launch entry point and is in SMX or SKINIT mode.
*/
- if (!(sl_cpu_type & SL_CPU_INTEL))
+ if (!(sl_cpu_type & (SL_CPU_INTEL | SL_CPU_AMD)))
return;

- slrt = sl_locate_and_validate_slrt();
+ slrt = sl_locate_and_validate_slrt(bp);

/* Locate the TPM event log. */
sl_find_drtm_event_log(slrt);

- /* Validate the location of the event log buffer before using it */
- sl_validate_event_log_buffer();
+ /*
+ * On a TXT launch, validate the logging buffer for overlaps with the
+ * MLE and proper PMR coverage before using it. On an SKINIT launch,
+ * the boot params have to be used here to find the base and extent of
+ * the launched kernel. These values can then be used to make sure the
+ * boot params and logging buffer do not overlap the kernel.
+ */
+ if (sl_cpu_type & SL_CPU_INTEL)
+ sl_txt_validate_event_log_buffer();
+ else
+ sl_skinit_validate_buffers(slrt, bootparams);

/*
* Find the TPM hash algorithms used by the ACM and recorded in the
@@ -585,13 +744,15 @@ asmlinkage __visible void sl_main(void *bootparams)

sl_tpm_extend_evtlog(17, TXT_EVTYPE_SLAUNCH_END, NULL, 0, "");

- /* No PMR check is needed, the TXT heap is covered by the DPR */
- txt_heap = (void *)sl_txt_read(TXT_CR_HEAP_BASE);
- os_mle_data = txt_os_mle_data_start(txt_heap);
+ if (sl_cpu_type & SL_CPU_INTEL) {
+ /* No PMR check is needed, the TXT heap is covered by the DPR */
+ txt_heap = (void *)sl_txt_read(TXT_CR_HEAP_BASE);
+ os_mle_data = txt_os_mle_data_start(txt_heap);

- /*
- * Now that the OS-MLE data is measured, ensure the MTRR and
- * misc enable MSRs are what we expect.
- */
- sl_txt_validate_msrs(os_mle_data);
+ /*
+ * Now that the OS-MLE data is measured, ensure the MTRR and
+ * misc enable MSRs are what we expect.
+ */
+ sl_txt_validate_msrs(os_mle_data);
+ }
}
diff --git a/arch/x86/boot/compressed/sl_stub.S b/arch/x86/boot/compressed/sl_stub.S
index 6c0f0b2a062d..7a6492bf04e4 100644
--- a/arch/x86/boot/compressed/sl_stub.S
+++ b/arch/x86/boot/compressed/sl_stub.S
@@ -23,6 +23,9 @@
/* CPUID: leaf 1, ECX, SMX feature bit */
#define X86_FEATURE_BIT_SMX (1 << 6)

+/* CPUID: leaf 0x80000001, ECX, SKINIT feature bit */
+#define X86_FEATURE_BIT_SKINIT (1 << 12)
+
#define IDT_VECTOR_LO_BITS 0
#define IDT_VECTOR_HI_BITS 6

@@ -71,7 +74,11 @@ SYM_FUNC_START(sl_stub_entry)
* On entry, %ebx has the entry abs offset to sl_stub_entry. The rva()
* macro is used to generate relative references using %ebx as a base, as
* to avoid absolute relocations, which would require fixups at runtime.
- * Only %cs and %ds segments are known good.
+ * Only %cs and %ds segments are known good. On Intel, the ACM guarantees
+ * this while on AMD the SKL (Secure Kernel Loader) likewise does.
+ *
+ * In addition, on Intel %ecx holds the MLE page directory pointer
+ * table and on AMD %edx holds the physical base address of the SKL.
*/

/* Load GDT, set segment regs and lret to __SL32_CS */
@@ -98,6 +105,12 @@ SYM_FUNC_START(sl_stub_entry)
lret

.Lsl_cs:
+ /*
+ * For AMD, save the SKL base before the CPUID instruction clobbers it.
+ * Storing the u32 into the zero-initialized quad effectively casts it
+ * to a 64-bit void * for simpler use later.
+ */
+ movl %edx, rva(sl_skl_base)(%ebx)
+
/* Save our base pointer reg and page table for MLE */
pushl %ebx
pushl %ecx
@@ -106,7 +119,7 @@ SYM_FUNC_START(sl_stub_entry)
movl $1, %eax
cpuid
testl $(X86_FEATURE_BIT_SMX), %ecx
- jz .Ldo_unknown_cpu
+ jz .Ldo_amd /* maybe AMD/SKINIT? */

popl %ecx
popl %ebx
@@ -189,9 +202,21 @@ SYM_FUNC_START(sl_stub_entry)

jmp .Lcpu_setup_done

-.Ldo_unknown_cpu:
- /* Non-Intel CPUs are not yet supported */
- ud2
+.Ldo_amd:
+ /* See if SKINIT feature is supported. */
+ movl $0x80000001, %eax
+ cpuid
+ testl $(X86_FEATURE_BIT_SKINIT), %ecx
+ jz .Ldo_unknown_cpu
+
+ popl %ecx
+ /* Base pointer reg saved in Intel check */
+ popl %ebx
+
+ /* Know it is AMD */
+ movl $(SL_CPU_AMD), rva(sl_cpu_type)(%ebx)
+
+ /* On AMD %esi is set up by the SKL, just go on */

.Lcpu_setup_done:
/*
@@ -201,6 +226,10 @@ SYM_FUNC_START(sl_stub_entry)

/* Done, jump to normal 32b pm entry */
jmp startup_32
+
+.Ldo_unknown_cpu:
+ /* Neither Intel nor AMD */
+ ud2
SYM_FUNC_END(sl_stub_entry)

SYM_FUNC_START(sl_find_mle_base)
@@ -722,6 +751,8 @@ SYM_DATA(sl_cpu_type, .long 0x00000000)

SYM_DATA(sl_mle_start, .long 0x00000000)

+SYM_DATA(sl_skl_base, .quad 0x0000000000000000)
+
SYM_DATA_LOCAL(sl_txt_spin_lock, .long 0x00000000)

SYM_DATA_LOCAL(sl_txt_stack_index, .long 0x00000000)
diff --git a/arch/x86/include/uapi/asm/setup_data.h b/arch/x86/include/uapi/asm/setup_data.h
index 50c45ead4e7c..6f376c050c76 100644
--- a/arch/x86/include/uapi/asm/setup_data.h
+++ b/arch/x86/include/uapi/asm/setup_data.h
@@ -13,7 +13,8 @@
#define SETUP_CC_BLOB 7
#define SETUP_IMA 8
#define SETUP_RNG_SEED 9
-#define SETUP_ENUM_MAX SETUP_RNG_SEED
+#define SETUP_SECURE_LAUNCH 10
+#define SETUP_ENUM_MAX SETUP_SECURE_LAUNCH

#define SETUP_INDIRECT (1<<31)
#define SETUP_TYPE_MAX (SETUP_ENUM_MAX | SETUP_INDIRECT)
diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
index d81433a9b699..3a031043d2f1 100644
--- a/arch/x86/kernel/slaunch.c
+++ b/arch/x86/kernel/slaunch.c
@@ -93,6 +93,20 @@ void __noreturn slaunch_txt_reset(void __iomem *txt,
unreachable();
}

+/*
+ * SKINIT has no sticky register to set an error code or a DRTM reset
+ * facility. The best that can be done is to trace an error and trigger
+ * a system reset using the undefined instruction.
+ */
+void __noreturn slaunch_skinit_reset(const char *msg, u64 error)
+{
+ pr_err("%s - error: 0x%llx\n", msg, error);
+
+ asm volatile ("ud2");
+
+ unreachable();
+}
+
/*
* The TXT heap is too big to map all at once with early_ioremap
* so it is done a table at a time.
@@ -217,7 +231,7 @@ static void __init slaunch_verify_pmrs(void __iomem *txt)
slaunch_txt_reset(txt, errmsg, err);
}

-static void __init slaunch_txt_reserve_range(u64 base, u64 size)
+static void __init slaunch_reserve_range(u64 base, u64 size)
{
int type;

@@ -255,15 +269,15 @@ static void __init slaunch_txt_reserve(void __iomem *txt)

base = TXT_PRIV_CONFIG_REGS_BASE;
size = TXT_PUB_CONFIG_REGS_BASE - TXT_PRIV_CONFIG_REGS_BASE;
- slaunch_txt_reserve_range(base, size);
+ slaunch_reserve_range(base, size);

memcpy_fromio(&heap_base, txt + TXT_CR_HEAP_BASE, sizeof(heap_base));
memcpy_fromio(&heap_size, txt + TXT_CR_HEAP_SIZE, sizeof(heap_size));
- slaunch_txt_reserve_range(heap_base, heap_size);
+ slaunch_reserve_range(heap_base, heap_size);

memcpy_fromio(&base, txt + TXT_CR_SINIT_BASE, sizeof(base));
memcpy_fromio(&size, txt + TXT_CR_SINIT_SIZE, sizeof(size));
- slaunch_txt_reserve_range(base, size);
+ slaunch_reserve_range(base, size);

field_offset = offsetof(struct txt_sinit_mle_data,
sinit_vtd_dmar_table_size);
@@ -288,14 +302,14 @@ static void __init slaunch_txt_reserve(void __iomem *txt)
for (i = 0; i < mdrnum; i++, mdr++) {
/* Spec says some entries can have length 0, ignore them */
if (mdr->type > 0 && mdr->length > 0)
- slaunch_txt_reserve_range(mdr->address, mdr->length);
+ slaunch_reserve_range(mdr->address, mdr->length);
}

txt_early_put_heap_table(mdrs, mdroffset + mdrslen - 8);

nomdr:
- slaunch_txt_reserve_range(ap_wake_info.ap_wake_block,
- ap_wake_info.ap_wake_block_size);
+ slaunch_reserve_range(ap_wake_info.ap_wake_block,
+ ap_wake_info.ap_wake_block_size);

/*
* Earlier checks ensured that the event log was properly situated
@@ -304,16 +318,16 @@ static void __init slaunch_txt_reserve(void __iomem *txt)
* already reserved.
*/
if (evtlog_addr < heap_base || evtlog_addr > (heap_base + heap_size))
- slaunch_txt_reserve_range(evtlog_addr, evtlog_size);
+ slaunch_reserve_range(evtlog_addr, evtlog_size);

for (i = 0; i < e820_table->nr_entries; i++) {
base = e820_table->entries[i].addr;
size = e820_table->entries[i].size;
if (base >= vtd_pmr_lo_size && base < 0x100000000ULL)
- slaunch_txt_reserve_range(base, size);
+ slaunch_reserve_range(base, size);
else if (base < vtd_pmr_lo_size && base + size > vtd_pmr_lo_size)
- slaunch_txt_reserve_range(vtd_pmr_lo_size,
- base + size - vtd_pmr_lo_size);
+ slaunch_reserve_range(vtd_pmr_lo_size,
+ base + size - vtd_pmr_lo_size);
}
}

@@ -514,6 +528,67 @@ static void __init slaunch_setup_txt(void)
pr_info("Intel TXT setup complete\n");
}

+static void slaunch_skinit_prepare(void)
+{
+ struct slr_entry_amd_info amd_info_temp;
+ struct slr_entry_amd_info *amd_info;
+ struct slr_entry_log_info *log_info;
+ struct setup_data *data;
+ struct slr_table *slrt;
+ u64 pa_data;
+
+ pa_data = (u64)boot_params.hdr.setup_data;
+ amd_info = NULL;
+
+ while (pa_data) {
+ data = (struct setup_data *)early_memremap(pa_data, sizeof(*data));
+ if (!data)
+ slaunch_skinit_reset("Error failed to early_memremap setup data\n",
+ SL_ERROR_MAP_SETUP_DATA);
+
+ if (data->type == SETUP_SECURE_LAUNCH) {
+ early_memunmap(data, sizeof(*data));
+ amd_info = (struct slr_entry_amd_info *)
+ early_memremap(pa_data - sizeof(struct slr_entry_hdr),
+ sizeof(*amd_info));
+ if (!amd_info)
+ slaunch_skinit_reset("Error failed to early_memremap AMD info\n",
+ SL_ERROR_MAP_SETUP_DATA);
+ break;
+ }
+
+ pa_data = data->next;
+ early_memunmap(data, sizeof(*data));
+ }
+
+ if (!amd_info)
+ slaunch_skinit_reset("Error failed to find AMD info\n",
+ SL_ERROR_MISSING_EVENT_LOG);
+
+ amd_info_temp = *amd_info;
+ early_memunmap(amd_info, sizeof(*amd_info));
+
+ slaunch_reserve_range(amd_info_temp.slrt_base, amd_info_temp.slrt_size);
+
+ /* Get the SLRT and remap it */
+ slrt = early_memremap(amd_info_temp.slrt_base, amd_info_temp.slrt_size);
+ if (!slrt)
+ slaunch_skinit_reset("Error failed to early_memremap SLR Table\n",
+ SL_ERROR_SLRT_MAP);
+
+ log_info = slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_LOG_INFO);
+ if (!log_info)
+ slaunch_skinit_reset("Error failed to find event log info SLR Table\n",
+ SL_ERROR_SLRT_MISSING_ENTRY);
+
+ slaunch_reserve_range(log_info->addr, log_info->size);
+
+ early_memunmap(slrt, amd_info_temp.slrt_size);
+
+ if (amd_info_temp.psp_version == 2 || amd_info_temp.psp_version == 3)
+ sl_flags |= SL_FLAG_SKINIT_PSP;
+}
+
/*
* AMD SKINIT specific late stub setup and validation called from within
* x86 specific setup_arch().
@@ -530,6 +605,8 @@ static void __init slaunch_setup_skinit(void)
if (!(val & (1 << SVM_VM_CR_INIT_REDIRECTION)))
return;

+ slaunch_skinit_prepare();
+
/* Set flags on BSP so subsequent code knows it was a SKINIT launch */
sl_flags |= (SL_FLAG_ACTIVE|SL_FLAG_ARCH_SKINIT);
pr_info("AMD SKINIT setup complete\n");
--
2.49.0

Sergii Dmytruk

Apr 30, 2025, 6:45:21 PM
to linux-...@vger.kernel.org, trenchbo...@googlegroups.com, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x...@kernel.org, H. Peter Anvin
From: Ross Philipson <ross.ph...@oracle.com>

The SKINIT instruction disables the GIF, which must be re-enabled
on the BSP and on the APs as they are started. Since enabling the
GIF also re-enables NMIs, it should only be done after a valid IDT
has been loaded on each CPU.

SKINIT has also already performed #INIT on the APs, so the #INIT
normally sent before the startup IPIs must be skipped.

Signed-off-by: Ross Philipson <ross.ph...@oracle.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
arch/x86/kernel/slaunch.c | 23 +++++++++++++++++++++++
arch/x86/kernel/smpboot.c | 15 ++++++++++++++-
arch/x86/kernel/traps.c | 4 ++++
3 files changed, 41 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
index 3a031043d2f1..a1c8be7de8d3 100644
--- a/arch/x86/kernel/slaunch.c
+++ b/arch/x86/kernel/slaunch.c
@@ -17,6 +17,7 @@
#include <asm/sections.h>
#include <asm/tlbflush.h>
#include <asm/e820/api.h>
#include <asm/setup.h>
#include <asm/svm.h>
#include <asm/realmode.h>
@@ -716,3 +717,25 @@ void slaunch_finalize(int do_sexit)
else if (boot_cpu_has(X86_FEATURE_SKINIT))
slaunch_finalize_skinit();
}
+
+/*
+ * AMD specific SKINIT CPU setup and initialization.
+ */
+void slaunch_cpu_setup_skinit(void)
+{
+ u64 val;
+
+ if (!slaunch_is_skinit_launch())
+ return;
+
+ /*
+ * We don't yet handle #SX. Disable INIT_REDIRECTION first, before
+ * enabling GIF, so a pending INIT resets us, rather than causing a
+ * panic due to an unknown exception.
+ */
+ rdmsrl(MSR_VM_CR, val);
+ wrmsrl(MSR_VM_CR, val & ~(1 << SVM_VM_CR_INIT_REDIRECTION));
+
+ /* Enable Global Interrupts flag */
+ asm volatile ("stgi" ::: "memory");
+}
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 219523884fdc..322fa4f8c5df 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -251,6 +251,12 @@ static void notrace __noendbr start_secondary(void *unused)

cpu_init_exception_handling(false);

+ /*
+ * If this is an AMD SKINIT secure launch, some extra work is done
+ * to prepare to start the secondary CPUs.
+ */
+ slaunch_cpu_setup_skinit();
+
/*
* Load the microcode before reaching the AP alive synchronization
* point below so it is not part of the full per CPU serialized
@@ -703,7 +709,14 @@ static int wakeup_secondary_cpu_via_init(u32 phys_apicid, unsigned long start_ei

preempt_disable();
maxlvt = lapic_get_maxlvt();
- send_init_sequence(phys_apicid);
+
+ /*
+ * If this is an SKINIT secure launch, #INIT is already done on the APs
+ * by issuing the SKINIT instruction. For security reasons #INIT
+ * should not be done again.
+ */
+ if (!slaunch_is_skinit_launch())
+ send_init_sequence(phys_apicid);

mb();

diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 9f88b8a78e50..0a4d218a426b 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -43,6 +43,7 @@
#include <linux/atomic.h>
#include <linux/iommu.h>
#include <linux/ubsan.h>
+#include <linux/slaunch.h>

#include <asm/stacktrace.h>
#include <asm/processor.h>
@@ -1564,5 +1565,8 @@ void __init trap_init(void)
if (!cpu_feature_enabled(X86_FEATURE_FRED))
idt_setup_traps();

+ /* If SKINIT was done on the BSP, this is the spot to enable GIF */
+ slaunch_cpu_setup_skinit();
+
cpu_init();
}
--
2.49.0

Sergii Dmytruk

Apr 30, 2025, 6:45:24 PM
to linux-...@vger.kernel.org, trenchbo...@googlegroups.com
From: Ross Philipson <ross.ph...@oracle.com>

Some of the changes are generalizations, such as a common macro for
resetting the platform. The rest are:
- an SKINIT-specific way of locating the SLRT
- handling of the TPM log, which has the TXT-specific header embedded
  as vendor data of a TCG-compliant one

Signed-off-by: Ross Philipson <ross.ph...@oracle.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
arch/x86/kernel/slmodule.c | 161 ++++++++++++++++++++++++++++++-------
1 file changed, 134 insertions(+), 27 deletions(-)

diff --git a/arch/x86/kernel/slmodule.c b/arch/x86/kernel/slmodule.c
index 64010bac038c..4d29c1628a90 100644
--- a/arch/x86/kernel/slmodule.c
+++ b/arch/x86/kernel/slmodule.c
@@ -18,12 +18,21 @@
#include <linux/security.h>
#include <linux/memblock.h>
#include <linux/tpm.h>
+#include <asm/setup.h>
#include <asm/segment.h>
#include <asm/sections.h>
#include <crypto/sha2.h>
#include <linux/slr_table.h>
#include <linux/slaunch.h>

+#define slaunch_reset(t, m, e) \
+ do { \
+ if (t) \
+ slaunch_txt_reset((t), (m), (e)); \
+ else \
+ slaunch_skinit_reset((m), (e)); \
+ } while (0)
+
/*
* The macro DECLARE_TXT_PUB_READ_U is used to read values from the TXT
* public registers as unsigned values.
@@ -83,6 +92,7 @@ struct memfile {

static struct memfile sl_evtlog = {"eventlog", NULL, 0};
static void *txt_heap;
+static void *skinit_evtlog;
static struct txt_heap_event_log_pointer2_1_element *evtlog21;
static DEFINE_MUTEX(sl_evt_log_mutex);
static struct tcg_efi_specid_event_head *efi_head;
@@ -239,12 +249,19 @@ static void slaunch_teardown_securityfs(void)
memunmap(txt_heap);
txt_heap = NULL;
}
+ } else if (slaunch_get_flags() & SL_FLAG_ARCH_SKINIT) {
+ if (skinit_evtlog) {
+ memunmap(skinit_evtlog);
+ skinit_evtlog = NULL;
+ }
+ sl_evtlog.addr = NULL;
+ sl_evtlog.size = 0;
}

securityfs_remove(slaunch_dir);
}

-static void slaunch_intel_evtlog(void __iomem *txt)
+static void slaunch_txt_evtlog(void __iomem *txt)
{
struct slr_entry_log_info *log_info;
struct txt_os_mle_data *params;
@@ -312,6 +329,88 @@ static void slaunch_intel_evtlog(void __iomem *txt)
efi_head = (struct tcg_efi_specid_event_head *)(sl_evtlog.addr + sizeof(struct tcg_pcr_event));
}

+static void slaunch_skinit_evtlog(void)
+{
+ struct slr_entry_amd_info amd_info_temp;
+ struct slr_entry_amd_info *amd_info;
+ struct slr_entry_log_info *log_info;
+ struct setup_data *data;
+ struct slr_table *slrt;
+ u64 pa_data;
+
+ pa_data = (u64)boot_params.hdr.setup_data;
+ amd_info = NULL;
+
+ while (pa_data) {
+ data = (struct setup_data *)memremap(pa_data, sizeof(*data), MEMREMAP_WB);
+ if (!data)
+ slaunch_skinit_reset("Error failed to memremap setup data\n",
+ SL_ERROR_MAP_SETUP_DATA);
+
+ if (data->type == SETUP_SECURE_LAUNCH) {
+ memunmap(data);
+ amd_info = (struct slr_entry_amd_info *)
+ memremap(pa_data - sizeof(struct slr_entry_hdr),
+ sizeof(*amd_info), MEMREMAP_WB);
+ if (!amd_info)
+ slaunch_skinit_reset("Error failed to memremap AMD info\n",
+ SL_ERROR_MAP_SETUP_DATA);
+ break;
+ }
+
+ pa_data = data->next;
+ memunmap(data);
+ }
+
+ if (!amd_info)
+ slaunch_skinit_reset("Error failed to find AMD info\n", SL_ERROR_MISSING_EVENT_LOG);
+
+ amd_info_temp = *amd_info;
+ memunmap(amd_info);
+
+ /* Get the SLRT and remap it */
+ slrt = memremap(amd_info_temp.slrt_base, amd_info_temp.slrt_size, MEMREMAP_WB);
+ if (!slrt)
+ slaunch_skinit_reset("Error failed to memremap SLR Table\n", SL_ERROR_SLRT_MAP);
+
+ log_info = slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_LOG_INFO);
+ if (!log_info)
+ slaunch_skinit_reset("Error failed to find event log info SLR Table\n",
+ SL_ERROR_SLRT_MISSING_ENTRY);
+
+ /* Finally map the actual event log and find the proper offsets */
+ skinit_evtlog = memremap(log_info->addr, log_info->size, MEMREMAP_WB);
+ if (!skinit_evtlog)
+ slaunch_skinit_reset("Error failed to memremap TPM event log\n",
+ SL_ERROR_EVENTLOG_MAP);
+
+ sl_evtlog.size = log_info->size;
+ sl_evtlog.addr = skinit_evtlog;
+
+ memunmap(slrt);
+
+ /*
+ * See the comment for the following function concerning the
+ * logic used here:
+ * arch/x86/boot/compressed/sl_main.c:sl_find_drtm_event_log()
+ */
+ if (!memcmp(skinit_evtlog + sizeof(struct tcg_pcr_event),
+ TCG_SPECID_SIG, sizeof(TCG_SPECID_SIG))) {
+ evtlog21 = skinit_evtlog + sizeof(struct tcg_pcr_event)
+ + TCG_EfiSpecIdEvent_SIZE(
+ TPM2_HASH_COUNT(skinit_evtlog
+ + sizeof(struct tcg_pcr_event)));
+ } else {
+ sl_evtlog.addr += sizeof(struct tcg_pcr_event)
+ + TCG_PCClientSpecIDEventStruct_SIZE;
+ sl_evtlog.size -= sizeof(struct tcg_pcr_event)
+ + TCG_PCClientSpecIDEventStruct_SIZE;
+ }
+
+ /* Save pointer to the EFI SpecID log header */
+ efi_head = (struct tcg_efi_specid_event_head *)(skinit_evtlog + sizeof(struct tcg_pcr_event));
+}
+
static void slaunch_tpm2_extend_event(struct tpm_chip *tpm, void __iomem *txt,
struct tcg_pcr_event2_head *event)
{
@@ -331,8 +430,7 @@ static void slaunch_tpm2_extend_event(struct tpm_chip *tpm, void __iomem *txt,

digests = kzalloc(efi_head->num_algs * sizeof(*digests), GFP_KERNEL);
if (!digests)
- slaunch_txt_reset(txt, "Failed to allocate array of digests\n",
- SL_ERROR_GENERIC);
+ slaunch_reset(txt, "Failed to allocate array of digests\n", SL_ERROR_GENERIC);

for (i = 0; i < event->count; i++) {
dptr = (u8 *)alg_id_field + sizeof(u16);
@@ -349,8 +447,7 @@ static void slaunch_tpm2_extend_event(struct tpm_chip *tpm, void __iomem *txt,
ret = tpm_pcr_extend(tpm, event->pcr_idx, digests);
if (ret) {
pr_err("Error extending TPM20 PCR, result: %d\n", ret);
- slaunch_txt_reset(txt, "Failed to extend TPM20 PCR\n",
- SL_ERROR_TPM_EXTEND);
+ slaunch_reset(txt, "Failed to extend TPM20 PCR\n", SL_ERROR_TPM_EXTEND);
}

kfree(digests);
@@ -372,8 +469,8 @@ static void slaunch_tpm2_extend(struct tpm_chip *tpm, void __iomem *txt)
while ((void *)event < sl_evtlog.addr + evtlog21->next_record_offset) {
size = __calc_tpm2_event_size(event, event_header, false);
if (!size)
- slaunch_txt_reset(txt, "TPM20 invalid event in event log\n",
- SL_ERROR_TPM_INVALID_EVENT);
+ slaunch_reset(txt, "TPM20 invalid event in event log\n",
+ SL_ERROR_TPM_INVALID_EVENT);

/*
* Marker events indicate where the Secure Launch early stub
@@ -400,8 +497,8 @@ static void slaunch_tpm2_extend(struct tpm_chip *tpm, void __iomem *txt)
}

if (!start || !end)
- slaunch_txt_reset(txt, "Missing start or end events for extending TPM20 PCRs\n",
- SL_ERROR_TPM_EXTEND);
+ slaunch_reset(txt, "Missing start or end events for extending TPM20 PCRs\n",
+ SL_ERROR_TPM_EXTEND);
}

static void slaunch_tpm_extend(struct tpm_chip *tpm, void __iomem *txt)
@@ -442,8 +539,8 @@ static void slaunch_tpm_extend(struct tpm_chip *tpm, void __iomem *txt)
ret = tpm_pcr_extend(tpm, event->pcr_idx, &digest);
if (ret) {
pr_err("Error extending TPM12 PCR, result: %d\n", ret);
- slaunch_txt_reset(txt, "Failed to extend TPM12 PCR\n",
- SL_ERROR_TPM_EXTEND);
+ slaunch_reset(txt, "Failed to extend TPM12 PCR\n",
+ SL_ERROR_TPM_EXTEND);
}
}

@@ -452,8 +549,8 @@ static void slaunch_tpm_extend(struct tpm_chip *tpm, void __iomem *txt)
}

if (!start || !end)
- slaunch_txt_reset(txt, "Missing start or end events for extending TPM12 PCRs\n",
- SL_ERROR_TPM_EXTEND);
+ slaunch_reset(txt, "Missing start or end events for extending TPM12 PCRs\n",
+ SL_ERROR_TPM_EXTEND);
}

static void slaunch_pcr_extend(void __iomem *txt)
@@ -463,13 +560,11 @@ static void slaunch_pcr_extend(void __iomem *txt)

tpm = tpm_default_chip();
if (!tpm)
- slaunch_txt_reset(txt, "Could not get default TPM chip\n",
- SL_ERROR_TPM_INIT);
+ slaunch_reset(txt, "Could not get default TPM chip\n", SL_ERROR_TPM_INIT);

rc = tpm_chip_set_locality(tpm, 2);
if (rc)
- slaunch_txt_reset(txt, "Could not set TPM chip locality 2\n",
- SL_ERROR_TPM_INIT);
+ slaunch_reset(txt, "Could not set TPM chip locality 2\n", SL_ERROR_TPM_INIT);

if (evtlog21)
slaunch_tpm2_extend(tpm, txt);
@@ -482,19 +577,31 @@ static int __init slaunch_module_init(void)
void __iomem *txt;

/* Check to see if Secure Launch happened */
- if ((slaunch_get_flags() & (SL_FLAG_ACTIVE|SL_FLAG_ARCH_TXT)) !=
- (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT))
+ if (!(slaunch_get_flags() & SL_FLAG_ACTIVE))
return 0;

- txt = ioremap(TXT_PRIV_CONFIG_REGS_BASE, TXT_NR_CONFIG_PAGES *
- PAGE_SIZE);
- if (!txt)
- panic("Error ioremap of TXT priv registers\n");
+ if (slaunch_get_flags() & SL_FLAG_ARCH_TXT) {
+ txt = ioremap(TXT_PRIV_CONFIG_REGS_BASE, TXT_NR_CONFIG_PAGES *
+ PAGE_SIZE);
+ if (!txt)
+ panic("Error ioremap of TXT priv registers\n");
+
+ slaunch_txt_evtlog(txt);
+
+ slaunch_pcr_extend(txt);
+
+ iounmap(txt);
+
+ pr_info("TXT Secure Launch module setup\n");
+ } else if (slaunch_get_flags() & SL_FLAG_ARCH_SKINIT) {
+ slaunch_skinit_evtlog();
+
+ slaunch_pcr_extend(NULL);
+
+ pr_info("SKINIT Secure Launch module setup\n");
+ } else
+ panic("Secure Launch unknown architecture\n");

- /* Only Intel TXT is supported at this point */
- slaunch_intel_evtlog(txt);
- slaunch_pcr_extend(txt);
- iounmap(txt);

return slaunch_expose_securityfs();
}
--
2.49.0

Sergii Dmytruk

Apr 30, 2025, 6:45:26 PM
to linux-...@vger.kernel.org, trenchbo...@googlegroups.com, Ard Biesheuvel
From: Ross Philipson <ross.ph...@oracle.com>

* Only do the TXT setup steps if this is a TXT launch, not an SKINIT one.
* Initialize the boot params address for SKINIT.

Signed-off-by: Ross Philipson <ross.ph...@oracle.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
drivers/firmware/efi/libstub/x86-stub.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c
index bfa36466a79c..0453be1ba58d 100644
--- a/drivers/firmware/efi/libstub/x86-stub.c
+++ b/drivers/firmware/efi/libstub/x86-stub.c
@@ -798,15 +798,21 @@ static bool efi_secure_launch_update_boot_params(struct slr_table *slrt,
struct boot_params *boot_params)
{
struct slr_entry_intel_info *txt_info;
+ struct slr_entry_amd_info *skinit_info;
struct slr_entry_policy *policy;
bool updated = false;
int i;

txt_info = slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_INTEL_INFO);
- if (!txt_info)
- return false;
+ if (txt_info)
+ txt_info->boot_params_addr = (u64)boot_params;
+
+ skinit_info = slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_AMD_INFO);
+ if (skinit_info)
+ skinit_info->boot_params_addr = (u64)boot_params;

- txt_info->boot_params_addr = (u64)boot_params;
+ if (!txt_info && !skinit_info)
+ return false;

policy = slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_ENTRY_POLICY);
if (!policy)
--
2.49.0

Sergii Dmytruk

Apr 30, 2025, 6:45:28 PM
to linux-...@vger.kernel.org, trenchbo...@googlegroups.com, Joerg Roedel, Suravee Suthikulpanit
From: Jagannathan Raman <jag....@oracle.com>

GRUB and AMD-SL will have executed the SKINIT instruction and performed
the DRTM setup procedures. As part of DRTM, GRUB sets up Trusted Memory
Regions (TMRs) covering the whole of the system's physical memory. The
kernel must release these TMRs to allow DMA between devices and their
drivers; they are released after the kernel has set up the IOMMU.

Releasing the TMRs concludes DRTM. The kernel should also execute
DRTM_CMD_TPM_LOCALITY_ACCESS to lock TPM locality 2 before releasing
the TMRs, but doing so would prevent the kernel's TPM driver (which
loads later) from extending PCRs. For that reason, the TPM locality
access command is skipped here.

Signed-off-by: Jagannathan Raman <jag....@oracle.com>
Signed-off-by: Ross Philipson <ross.ph...@oracle.com>
Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>
---
arch/x86/kernel/Makefile | 1 +
arch/x86/kernel/sl-psp.c | 239 ++++++++++++++++++++++++++++++++++++++
arch/x86/kernel/slaunch.c | 4 +-
drivers/iommu/amd/init.c | 12 ++
4 files changed, 255 insertions(+), 1 deletion(-)
create mode 100644 arch/x86/kernel/sl-psp.c

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index bed87b1c49a2..8ccad4f5c129 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -84,6 +84,7 @@ obj-y += step.o
obj-$(CONFIG_INTEL_TXT) += tboot.o
obj-$(CONFIG_SECURE_LAUNCH) += slaunch.o
obj-$(CONFIG_SECURE_LAUNCH) += slmodule.o
+obj-$(CONFIG_SECURE_LAUNCH) += sl-psp.o
obj-$(CONFIG_ISA_DMA_API) += i8237.o
obj-y += stacktrace.o
obj-y += cpu/
diff --git a/arch/x86/kernel/sl-psp.c b/arch/x86/kernel/sl-psp.c
new file mode 100644
index 000000000000..69d24f275042
--- /dev/null
+++ b/arch/x86/kernel/sl-psp.c
@@ -0,0 +1,239 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Secure Launch AMD PSP DRTM support.
+ *
+ * Copyright (c) 2024, Oracle and/or its affiliates.
+ */
+
+#include <linux/delay.h>
+#include <linux/pci.h>
+#include <linux/printk.h>
+#include <linux/slaunch.h>
+#include <asm/cpufeatures.h>
+#include <asm/msr.h>
+#include <asm/pci_x86.h>
+#include <asm/svm.h>
+
+#define DRTM_MBOX_READY_MASK 0x80000000
+#define DRTM_MBOX_TMR_INDEX_ID_MASK 0x0F000000
+#define DRTM_MBOX_CMD_MASK 0x00FF0000
+#define DRTM_MBOX_STATUS_MASK 0x0000FFFF
+
+#define DRTM_MBOX_CMD_SHIFT 16
+
+#define DRTM_NO_ERROR 0x00000000
+#define DRTM_NOT_SUPPORTED 0x00000001
+#define DRTM_LAUNCH_ERROR 0x00000002
+#define DRTM_TMR_SETUP_FAILED_ERROR 0x00000003
+#define DRTM_TMR_DESTROY_FAILED_ERROR 0x00000004
+#define DRTM_GET_TCG_LOGS_FAILED_ERROR 0x00000007
+#define DRTM_OUT_OF_RESOURCES_ERROR 0x00000008
+#define DRTM_GENERIC_ERROR 0x00000009
+#define DRTM_INVALID_SERVICE_ID_ERROR 0x0000000A
+#define DRTM_MEMORY_UNALIGNED_ERROR 0x0000000B
+#define DRTM_MINIMUM_SIZE_ERROR 0x0000000C
+#define DRTM_GET_TMR_DESCRIPTOR_FAILED 0x0000000D
+#define DRTM_EXTEND_OSSL_DIGEST_FAILED 0x0000000E
+#define DRTM_SETUP_NOT_ALLOWED 0x0000000F
+#define DRTM_GET_IVRS_TABLE_FAILED 0x00000010
+
+#define DRTM_CMD_GET_CAPABILITY 0x1
+#define DRTM_CMD_TMR_SETUP 0x2
+#define DRTM_CMD_TMR_RELEASE 0x3
+#define DRTM_CMD_LAUNCH 0x4
+#define DRTM_CMD_GET_TCG_LOGS 0x7
+#define DRTM_CMD_TPM_LOCALITY_ACCESS 0x8
+#define DRTM_CMD_GET_TMR_DESCRIPTORS 0x9
+#define DRTM_CMD_ALLOCATE_SHARED_MEMORY 0xA
+#define DRTM_CMD_EXTEND_OSSL_DIGEST 0xB
+#define DRTM_CMD_GET_IVRS_TABLE_INFO 0xC
+
+#define DRTM_TMR_INDEX_0 0
+#define DRTM_TMR_INDEX_1 1
+#define DRTM_TMR_INDEX_2 2
+#define DRTM_TMR_INDEX_3 3
+#define DRTM_TMR_INDEX_4 4
+#define DRTM_TMR_INDEX_5 5
+#define DRTM_TMR_INDEX_6 6
+#define DRTM_TMR_INDEX_7 7
+
+#define DRTM_CMD_READY 0
+#define DRTM_RESPONSE_READY 1
+
+static bool slaunch_psp_early_setup_done;
+
+static u32 __iomem *c2pmsg_72;
+
+static void slaunch_smn_register_read(u32 address, u32 *value)
+{
+ u32 val;
+
+ val = address;
+ pci_direct_conf1.write(0, 0, 0, 0xB8, 4, val);
+ pci_direct_conf1.read(0, 0, 0, 0xBC, 4, &val);
+
+ *value = val;
+}
+
+#define IOHC0NBCFG_SMNBASE 0x13B00000
+#define PSP_BASE_ADDR_LO_SMN_ADDRESS (IOHC0NBCFG_SMNBASE + 0x102E0)
+
+static u32 slaunch_get_psp_bar_addr(void)
+{
+ u32 pspbaselo = 0;
+
+ slaunch_smn_register_read(PSP_BASE_ADDR_LO_SMN_ADDRESS, &pspbaselo);
+
+ /* Mask out the lower bits */
+ pspbaselo &= 0xFFF00000;
+
+ return pspbaselo;
+}
+
+static void slaunch_clear_c2pmsg_regs(void)
+{
+ if (c2pmsg_72)
+ iounmap(c2pmsg_72);
+
+ c2pmsg_72 = NULL;
+}
+
+static bool slaunch_setup_c2pmsg_regs(void)
+{
+ phys_addr_t bar2;
+
+ bar2 = (phys_addr_t)slaunch_get_psp_bar_addr();
+ if (!bar2)
+ return false;
+
+ c2pmsg_72 = ioremap(bar2 + 0x10a20, 4);
+ if (!c2pmsg_72) {
+ slaunch_clear_c2pmsg_regs();
+ return false;
+ }
+
+ return true;
+}
+
+static const char *const slaunch_status_strings[] = {
+ "DRTM_NO_ERROR",
+ "DRTM_NOT_SUPPORTED",
+ "DRTM_LAUNCH_ERROR",
+ "DRTM_TMR_SETUP_FAILED_ERROR",
+ "DRTM_TMR_DESTROY_FAILED_ERROR",
+ "UNDEFINED",
+ "UNDEFINED",
+ "DRTM_GET_TCG_LOGS_FAILED_ERROR",
+ "DRTM_OUT_OF_RESOURCES_ERROR",
+ "DRTM_GENERIC_ERROR",
+ "DRTM_INVALID_SERVICE_ID_ERROR",
+ "DRTM_MEMORY_UNALIGNED_ERROR",
+ "DRTM_MINIMUM_SIZE_ERROR",
+ "DRTM_GET_TMR_DESCRIPTOR_FAILED",
+ "DRTM_EXTEND_OSSL_DIGEST_FAILED",
+ "DRTM_SETUP_NOT_ALLOWED",
+ "DRTM_GET_IVRS_TABLE_FAILED"
+};
+
+static const char *slaunch_status_string(u32 status)
+{
+ if (status > DRTM_GET_IVRS_TABLE_FAILED)
+ return "UNDEFINED";
+
+ return slaunch_status_strings[status];
+}
+
+static bool slaunch_wait_for_psp_ready(u32 *status)
+{
+ u32 reg_val = 0;
+ int retry = 5;
+
+ if (readl(c2pmsg_72) == 0xFFFFFFFF)
+ return false;
+
+ while (--retry) {
+ reg_val = readl(c2pmsg_72);
+ if (reg_val & DRTM_MBOX_READY_MASK)
+ break;
+
+ /* TODO: select wait time appropriately */
+ mdelay(100);
+ }
+
+ if (!retry)
+ return false;
+
+ if (status)
+ *status = reg_val & DRTM_MBOX_STATUS_MASK;
+
+ return true;
+}
+
+static bool slaunch_tpm_locality_access(void)
+{
+ u32 status;
+
+ writel(DRTM_CMD_TPM_LOCALITY_ACCESS << DRTM_MBOX_CMD_SHIFT, c2pmsg_72);
+
+ if (!slaunch_wait_for_psp_ready(&status)) {
+ pr_err("Failed to execute DRTM_CMD_TPM_LOCALITY_ACCESS\n");
+ return false;
+ }
+
+ if (status != DRTM_NO_ERROR) {
+ pr_err("DRTM_CMD_TPM_LOCALITY_ACCESS failed - %s\n",
+ slaunch_status_string(status));
+ return false;
+ }
+
+ return true;
+}
+
+bool slaunch_psp_tmr_release(void)
+{
+ u32 status;
+
+ if (!slaunch_psp_early_setup_done)
+ return false;
+
+ writel(DRTM_CMD_TMR_RELEASE << DRTM_MBOX_CMD_SHIFT, c2pmsg_72);
+
+ if (!slaunch_wait_for_psp_ready(&status)) {
+ pr_err("Failed to execute DRTM_CMD_TMR_RELEASE\n");
+ return false;
+ }
+
+ if (status != DRTM_NO_ERROR) {
+ pr_err("DRTM_CMD_TMR_RELEASE failed - %s\n",
+ slaunch_status_string(status));
+ return false;
+ }
+
+ return true;
+}
+
+void slaunch_psp_setup(void)
+{
+ if (slaunch_psp_early_setup_done)
+ return;
+
+ if (!slaunch_setup_c2pmsg_regs())
+ return;
+
+ if (!slaunch_wait_for_psp_ready(NULL)) {
+ pr_err("PSP not ready to take commands\n");
+ return;
+ }
+
+ slaunch_psp_early_setup_done = true;
+}
+
+void slaunch_psp_finalize(void)
+{
+ if (!slaunch_tpm_locality_access()) {
+ pr_err("PSP failed to lock TPM DRTM localities\n");
+ return;
+ }
+
+ slaunch_clear_c2pmsg_regs();
+}
diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
index a1c8be7de8d3..0a806df74586 100644
--- a/arch/x86/kernel/slaunch.c
+++ b/arch/x86/kernel/slaunch.c
@@ -701,13 +701,15 @@ static void slaunch_finalize_txt(int do_sexit)
}

/*
- * Used during kexec and on reboot paths to finalize the SKINIT.
+ * Used during kexec and on reboot paths to finalize the SKINIT PSP state.
*/
static void slaunch_finalize_skinit(void)
{
/* AMD CPUs with PSP-supported DRTM */
if (!slaunch_is_skinit_psp())
return;
+
+ slaunch_psp_finalize();
}

void slaunch_finalize(int do_sexit)
diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
index dd9e26b7b718..c7c60183a27f 100644
--- a/drivers/iommu/amd/init.c
+++ b/drivers/iommu/amd/init.c
@@ -21,6 +21,7 @@
#include <linux/kmemleak.h>
#include <linux/cc_platform.h>
#include <linux/iopoll.h>
+#include <linux/slaunch.h>
#include <asm/pci-direct.h>
#include <asm/iommu.h>
#include <asm/apic.h>
@@ -3357,6 +3358,17 @@ int __init amd_iommu_enable(void)
if (ret)
return ret;

+#if IS_ENABLED(CONFIG_SECURE_LAUNCH)
+ if (slaunch_is_skinit_psp()) {
+ /* Initialize PSP access to SKINIT DRTM functions */
+ slaunch_psp_setup();
+
+ /* Release the Trusted Memory Region since IOMMU is configured */
+ if (!slaunch_psp_tmr_release())
+ return -ENODEV;
+ }
+#endif
+
irq_remapping_enabled = 1;
return amd_iommu_xt_mode;
}
--
2.49.0

Ard Biesheuvel

May 9, 2025, 5:23:28 AM
to Sergii Dmytruk, linux-...@vger.kernel.org, trenchbo...@googlegroups.com
On Thu, 1 May 2025 at 00:45, Sergii Dmytruk <sergii....@3mdeb.com> wrote:
>
> From: Ross Philipson <ross.ph...@oracle.com>
>
> * Only do the TXT setup steps if this is a TXT launch not an SKINIT one.
> * Initialize boot params address for SKINIT.
>
> Signed-off-by: Ross Philipson <ross.ph...@oracle.com>
> Signed-off-by: Sergii Dmytruk <sergii....@3mdeb.com>

Acked-by: Ard Biesheuvel <ar...@kernel.org>