On Fri, Jan 29, 2016 at 10:29 PM, Chris H-C <chu...@mozilla.com> wrote:
> tl;dr - Around 99.5% of Firefox Desktop clients on the release channel
> represented by (a 20% sample of) pings submitted on January 21, 2016 had
> "hasSSE2" detected.
>
> Here's the analysis and results on GitHub. Please feel free to check my
> work: https://gist.github.com/chutten/4959c873d7fbbec0785a
Thank you. Do I read this correctly that the percentage is calculated
relative to all installations that send telemetry, and not just relative
to 32-bit x86 installations?
On Mon, Feb 1, 2016 at 10:04 PM, Benjamin Smedberg <benj...@smedbergs.us> wrote:
> It might be worth reconsidering now: I'm not willing to throw away 0.5% of
> our users without good cause, but perhaps there is a good case to be made
> here? What would the performance gain be for the remaining 99.5% of users,
> given that we already have dynamic SSE2/non-SSE switching in place for
> some of our hottest paths?
My interest is two-fold:
First, I'm working on some Rust code with the intent of getting it
into Gecko unconditionally on all platforms. It seems that the
official Rust toolchain for 32-bit x86 feels free to generate SSE2
instructions (for floating point or, I suppose, whenever the LLVM
autovectorizer decides to use SSE2). My understanding is that the new
MP4 demuxer written in Rust is currently only built for x86_64, so,
AFAICT (I'd love to be wrong!), Rust code in 32-bit Firefox isn't a
solved problem yet.
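For illustration, this is what arch-gating looks like on the Rust side
(a hypothetical sketch; as far as I know, the demuxer's actual gating
lives in the Gecko build system rather than in the Rust source):

    // Hypothetical: provide the real entry point only on x86_64,
    // and a stub everywhere else.
    #[cfg(target_arch = "x86_64")]
    pub fn demuxer_available() -> bool { true }

    #[cfg(not(target_arch = "x86_64"))]
    pub fn demuxer_available() -> bool { false }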
For Rust code that doesn't explicitly try to use SSE2, are we going to
use the default official rustc, which emits SSE2-requiring code and
therefore makes Firefox require SSE2 on 32-bit x86? Or are we going to
configure rustc so that SSE2 instructions are absent from its output
and, therefore, ship using a non-default compiler configuration? (I'm
hoping this gets decided before figuring it out becomes a blocker. If
it has already been figured out, awesome.)
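For concreteness, the two options might look roughly like this on the
command line (a sketch based on my reading of rustc's -C options; the
exact flags and target triple would depend on the actual Gecko build
setup):

    # Default i686 target: rustc/LLVM may emit SSE2 freely.
    rustc --target=i686-unknown-linux-gnu lib.rs

    # Non-default: pin the baseline CPU below SSE2 and subtract the
    # features, so floating point goes through x87 instead.
    rustc --target=i686-unknown-linux-gnu \
          -C target-cpu=pentium -C target-feature=-sse,-sse2 lib.rs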
Second, the Rust code that I'm working on would (if things go as I
hope) replace C++ code that has an explicitly SSE2-optimized code
path. If we decide that rustc isn't allowed to emit SSE2
unconditionally, I'd need to figure out how to choose between SSE2 and
non-SSE2 function implementations at run time. AFAICT, this would
entail compiling the SSE2-enabled function as a separate crate and
having the two Rust crates see each other as C code behind the FFI.
I'd rather not put in the effort if we are on the verge of just
allowing SSE2 unconditionally.
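To make the two-crate idea concrete, here's roughly what I have in
mind (a sketch only; the function names are made up, and the SSE2
detection itself would presumably come from Gecko's existing
CPU-feature check rather than from Rust):

    // In a helper crate compiled *with* SSE2 enabled:
    #[no_mangle]
    pub extern "C" fn convert_sse2(src: *const u8, dst: *mut u16, len: usize) {
        // Written so LLVM can autovectorize the loop with SSE2.
        unsafe {
            for i in 0..len {
                *dst.offset(i as isize) = *src.offset(i as isize) as u16;
            }
        }
    }

    // In the main crate, compiled *without* SSE2:
    extern "C" {
        fn convert_sse2(src: *const u8, dst: *mut u16, len: usize);
    }

    fn convert(src: &[u8], dst: &mut [u16], has_sse2: bool) {
        let len = std::cmp::min(src.len(), dst.len());
        if has_sse2 {
            // Only safe to call when the CPU actually supports SSE2.
            unsafe { convert_sse2(src.as_ptr(), dst.as_mut_ptr(), len); }
        } else {
            for i in 0..len {
                dst[i] = src[i] as u16; // scalar fallback
            }
        }
    }

The C ABI boundary is what would let the two crates be built with
different codegen flags without the two rustc invocations having to
agree on Rust-internal ABI details.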
As for the consequences of requiring SSE2 unconditionally, I'm
personally more worried about a conflict with Linux distros that don't
already require SSE2 (even if nearly 100% of their users actually have
SSE2-capable hardware; this concern is about distro policy, not the
actual numbers) than about dropping support for XP boxes. Curiously,
Fedora seems to document that llvmpipe requires SSE2 as if the distro
as a whole didn't. I wonder if there actually exist non-SSE2 boxes
with GNOME 3-compatible OpenGL on the GPU. Ubuntu also relies on
llvmpipe in the absence of suitable GPU-based OpenGL support. This
suggests that the major distros are de facto pretty far along toward
requiring SSE2, but I don't know what their policies are or how
unhappy they'd be about Firefox requiring SSE2 (or how unhappy we'd be
if distros shipped a de-optimized Firefox and users thought it
performed the same as the one from Mozilla).