

Katariina Washuk

unread,
Aug 2, 2024, 7:27:58 AM8/2/24
to ertepufo

Thank you so much for helping me out in pursuing this issue. If you ever find a solution it will be highly appreciated; it might even help others who are in the same situation as me. Please do not forget that, along with Bodhi, I have installed the Cinnamon DE. Please let me know.

@uboyalu
Do you have any option to go 64 bit on that computer?
If your computer is so old that it is incapable of running a 64-bit OS, Netflix video playback would be choppy anyway. If it has only a little memory, say 2 GB, but the CPU can run a 64-bit OS, I would advise you to go 64-bit. In that case you could also try Electron Player, which is a dedicated player.

Thank you for replying with some sound suggestions regarding playing Netflix on FF. I will try playing Netflix on Electron Player as suggested. As for the computer, it is very old, with an Intel Pentium 4 chip and 1.5 GB of RAM, and it probably will not be able to upgrade to 64-bit.

I hate to tell you this, but Netflix was working just fine when I had Lubuntu 18.04 LTS. I had to replace it, and I figured it would be nice to try Bodhi Linux for a change. The downside of the change is that USB flash drives stopped displaying on the desktop and remained mounted under /mnt rather than /media; hence, I was unable to save any documents on them. Otherwise, Netflix was working just fine.

Note: If you're here because you just want to watch Netflix on Asahi, install this (Edit: for Arch users, grab this. Fedora users can install the widevine-installer package), and then scroll down to the "Netflix-Specific Annoyances" section.

Truth be told, I don't care very much about Netflix - the UX offered by BitTorrent is superior. On the other hand, I'm quite invested in the Spotify ecosystem, and while there are 3rd party open-source clients that run great on Asahi, I do prefer the official interface (which is a controversial take, I gather).

In the case of Widevine-in-Chrome-on-Linux, this CDM takes the form of a dynamic library called libwidevinecdm.so. This library is an opaque proprietary blob that we are forbidden to look inside of (at least, that's how they'd prefer it to be).

Graciously, as part of the Chromium project, Google provides the C++ headers required to interface with The Blob. This interface allows other projects like Firefox to implement support for Widevine, via the EME API, using the exact same libwidevinecdm.so blob as Chrome does. This is convenient, but it doesn't help us on aarch64 Linux - there is no corresponding libwidevinecdm.so for us to borrow from Chrome, because Widevine-in-Chrome-on-Linux-on-aarch64 is not an officially supported platform.

The short answer is: the DRM works differently on Android. Because of the differences between the Android platform and desktop Linux, the Android implementations of Widevine will not be useful to us, unless you're planning on losing the Do Not Violate The DMCA Challenge (this is left as an exercise to the reader).

Chromebooks exist, many have aarch64 CPUs, they run Chrome on Linux (more or less), and they officially support Widevine. At some point, somebody noticed that there's a perfectly cromulent libwidevinecdm.so available inside Google's Chromebook recovery images, and wrote tools to download and extract it from the publicly available images. As far as I can tell, the Raspberry Pi folks use scripts like that to obtain the CDM, and helpfully package it in a .deb file for easy installation on a Pi.

However, there's another catch: Although Chromebooks have aarch64 CPUs and kernels, they run an armv7l userspace. This means that the CDM blob is also armv7l. This is fine on Pis because they can also run an armv7l userspace (I think it's even the default still?). Unfortunately, Apple Silicon cannot run an armv7l userspace natively, the hardware simply does not support it.

Or rather, that was all true at the time when I first investigated Widevine-on-Asahi, several months ago. A few weeks ago, Google decided to enter the 21st century and started shipping aarch64 userspaces on certain Chromebook models. This means that "Widevine-in-Chrome-on-Linux-on-aarch64" does exist. The ChromeOS blob extraction process works as before, and the Pi Foundation conveniently packages it as a .deb for Pi users.

The next hurdle is that ChromeOS is not quite the same thing as desktop Linux. ChromeOS's glibc has some weird patches that make it incompatible with the glibc you'll find on a standard Arch Linux ARM box. If you try to load libwidevinecdm.so, you'll be hit with a hard-to-debug segfault somewhere in glibc.

The glibc-widevine AUR package neatly solves this problem, by patching glibc to work around ChromeOS's idiosyncrasies. As far as I can tell, Raspbian's glibc has similar compatibility patches out-of-the-box.

Since it's not an officially supported platform, the upstream Chromium sources are not configured to support Widevine on aarch64 by default. However, re-enabling that support at build time is trivial, and that's exactly what the Arch Linux ARM Chromium package does, via a forward-thinking patch.

If you haven't looked at the position of the scroll bar on this article, you might think we're almost there! If you have a non-Apple-Silicon aarch64 device (a Linux'd Nintendo Switch?), you can probably just install the widevine-aarch64 AUR package (which grabs the CDM blob from the Pi repos and sets things up), and you'll have a fully functioning Widevine install.

Not quite: there is one last showstopper for Asahi users, and it's a big one. Asahi Linux is built to use 16K page sizes, while the Widevine blobs available to us only support 4K pages. It is possible to run a 4K kernel on Apple Silicon, but it's a bit of a bodge right now, and it's not something I'm prepared to do on my daily-driver machine; I assume most other people don't want to either.
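You can check which side of this mismatch your own system is on. A minimal sketch (the 16384 and 4096 values are the 16K/4K cases discussed above):

```python
import mmap

# The kernel's page size, as seen by userspace. On an Asahi 16K kernel
# this is 16384; on most other Linux systems it is 4096.
page_size = mmap.PAGESIZE
print(page_size)
```

If this prints 16384, a blob whose LOAD segments assume 4K pages cannot be mapped as-is.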

We will not gaze directly into the abyss today, but we will tiptoe around the edge - both in an effort to preserve our sanity, and to make sure we don't get nerd-sniped into losing the Do Not Violate The DMCA Challenge.

In general, an ELF is loaded by a loader (wow!), which could be the Linux kernel itself, or a runtime loader like ld.so. In either case, the ELF headers tell the loader how to load the code (and data) into memory, and prepare it for execution.

The ELF contains a table of Program Headers, each of which describes a Segment of the program. These headers are parsed by the loader, and each LOAD segment (a specific segment type) contains metadata to tell the loader how it should be loaded into memory, and which permissions the memory for the Segment should have (read/write/execute). It's the alignment of these LOAD Segments that's causing issues with Asahi's 16K pages (more on this later!)
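To make the Program Header table concrete, here is a minimal sketch of parsing ELF64 LOAD segments with Python's `struct` module. The table below is hand-built for illustration (the offset/vaddr values echo the segment discussed later in this article, but the rest are made up):

```python
import struct

# ELF64 program header layout (little-endian):
# p_type, p_flags, p_offset, p_vaddr, p_paddr, p_filesz, p_memsz, p_align
PHDR_FMT = "<IIQQQQQQ"
PT_LOAD = 1

def load_segments(data, e_phoff, e_phnum):
    """Parse the program header table and return only the LOAD segments."""
    size = struct.calcsize(PHDR_FMT)
    segs = []
    for i in range(e_phnum):
        fields = struct.unpack_from(PHDR_FMT, data, e_phoff + i * size)
        if fields[0] == PT_LOAD:
            segs.append(fields)
    return segs

# A hand-built table containing one LOAD segment:
table = struct.pack(PHDR_FMT, PT_LOAD, 5, 0x904290, 0x905290, 0x905290,
                    0x1000, 0x1000, 0x1000)
segs = load_segments(table, 0, 1)
print(hex(segs[0][2]), hex(segs[0][3]))  # p_offset, p_vaddr
```

In a real ELF, `e_phoff` and `e_phnum` come from the ELF file header; `readelf -l` shows the same table in human-readable form.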

Libraries get loaded at a random base address. The purpose of the linker is to adjust the code and data to account for this arbitrary new location, resolving references within the library, but also potentially references to other libraries (e.g. where one library calls a function from another). The dynamic linker (part of glibc, in this case) parses a bunch of ELF structures to make this happen, including but not limited to the Section Header Table, and the Dynamic Section. These headers (of various types) enumerate Relocations, which tell the linker specifically where and how the code/data should be adjusted, to make everything work.
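As a toy illustration of the simplest relocation kind, here is a sketch of applying RELATIVE-style relocations (where the linker just stores base-plus-addend at each target); real dynamic linkers handle many more relocation types than this:

```python
import struct

def apply_relative_relocs(image, base, relocs):
    """For each (r_offset, r_addend) pair, store base + r_addend at
    r_offset in the mapped image. This models RELATIVE relocations,
    which fix up addresses after the library lands at a random base."""
    for r_offset, r_addend in relocs:
        struct.pack_into("<Q", image, r_offset, base + r_addend)

image = bytearray(32)
apply_relative_relocs(image, base=0x7F0000000000, relocs=[(8, 0x1234)])
print(hex(struct.unpack_from("<Q", image, 8)[0]))
```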

Side note: Earlier, chadmed meant to say "LOAD segments", not "LOAD sections". In short, Segments relate to how code is mapped into memory, and Sections relate to how it is linked and relocated. It's an easy mistake to make, and honestly I use them interchangeably when I'm not writing articles like this one. Whoever designed ELF really ought to have picked better names for things. Anyway, moving on...

Although a shared library is not an "executable" per se, it still has an INIT_ARRAY, which is an array of function pointers that are called sequentially by the loader, after the ELF has been linked. The library uses these INIT_ARRAY entries to do whatever startup initialisation tasks it requires.
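A toy Python model of the INIT_ARRAY mechanism, just to show the shape of it (the real thing is an array of raw function pointers walked by the loader, not Python callables):

```python
# Model: registration fills the "array", then the "loader" calls each
# entry in order after linking is done.
init_array = []
log = []

def register_init(fn):
    init_array.append(fn)
    return fn

@register_init
def init_a():
    log.append("a")

@register_init
def init_b():
    log.append("b")

for fn in init_array:  # what the loader does after relocation
    fn()
print(log)
```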

During this process, the loader uses the mmap() system call to map subsections of the ELF file into memory at a particular address, as directed by the LOAD segments. If we RTFM (for mmap), we find out that:

To avoid becoming a goto loser, we must ensure that (vaddr - offset) % pagesize == 0, where vaddr is the Virtual (memory) Address to map a given segment at, and offset is the offset within the ELF file where the data to be mapped resides.
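A minimal sketch of this constraint in action. The pre-padding offset 0x904290 is inferred from the numbers given later in this article; the page sizes are the 4K and 16K cases:

```python
def mappable(vaddr, offset, pagesize):
    """mmap() maps at page granularity, so a segment is directly
    mappable only if its virtual address and file offset are congruent
    modulo the page size."""
    return (vaddr - offset) % pagesize == 0

print(mappable(0x905290, 0x904290, 0x1000))   # fine with 4K pages
print(mappable(0x905290, 0x904290, 0x4000))   # broken with 16K pages
```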

You may notice that none of the individual fields have any particular alignment properties. This is fine, because there isn't a 1:1 correspondence between segments and calls to mmap() - the loader has logic to figure out the actual mappings required.
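A sketch of one piece of that loader logic: rounding a segment's mapping down to a page boundary before calling mmap(). This only works when the congruence constraint above holds, which is exactly why that constraint matters:

```python
def round_to_pages(vaddr, offset, filesz, pagesize):
    """Expand a segment's mapping down to a page boundary, as a loader
    must before calling mmap(). Assumes (vaddr - offset) % pagesize == 0,
    so vaddr and offset can be shifted down by the same slack."""
    slack = vaddr % pagesize
    return vaddr - slack, offset - slack, filesz + slack

print([hex(x) for x in round_to_pages(0x905290, 0x905290, 0x400, 0x1000)])
```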

We can't do anything that affects the relative positions of the segments in memory - this would make the CDM very angry, because it would break the relative offsets used to reference data in one segment from another. Furthermore, CDMs generally get pissed off if you make any changes to their code that are detectable at runtime. Soothing the CDM's rage could run afoul of the DMCA (we may not "circumvent" any "technological protection measures"), and so we must avoid enraging it in the first place.

As a reminder, we need some way to uphold the (vaddr - offset) % pagesize == 0 constraint. Changing vaddr is not an option (because it would affect the runtime memory layout), but we can adjust offset - i.e. by moving the data around in the ELF file itself.

We can fix this by adding 0x1000 bytes of padding into the ELF file, in between where the first and second segments are stored. We adjust the offset field accordingly, giving 0x00905290 - 0x00905290 == 0. We do a similar thing to fix the third and final segment.

This is slightly easier said than done, because once you start inserting padding bytes into the ELF file, lots of other offsets in the ELF headers become invalid and need to be adjusted (since anything after the padding-insertion point has moved).
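A simplified sketch of the offset fix-up, operating on (p_offset, p_vaddr) pairs rather than raw ELF bytes. Virtual addresses are deliberately untouched, so the runtime memory layout (and the CDM's view of itself) stays intact:

```python
def shift_file_offsets(phdrs, insert_at, pad):
    """After inserting `pad` bytes of padding at file offset `insert_at`,
    bump every p_offset that pointed at or past the insertion point.
    A real tool must also fix the section header table offset and any
    other file-relative fields in the ELF."""
    fixed = []
    for p_offset, p_vaddr in phdrs:
        if p_offset >= insert_at:
            p_offset += pad
        fixed.append((p_offset, p_vaddr))
    return fixed

# The second segment: offset 0x904290, vaddr 0x905290. Padding 0x1000
# bytes just before it makes (vaddr - offset) == 0, as described above.
phdrs = [(0x0, 0x0), (0x904290, 0x905290)]
fixed = shift_file_offsets(phdrs, 0x904290, 0x1000)
print([(hex(o), hex(v)) for o, v in fixed])
```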
