Hello!
The next FPTalks Community Meeting is Thursday, April 2 from 9–10am Pacific Time on Zoom:
https://washington.zoom.us/j/99708186928?pwd=HbhpebAtCWvoP4VQahYb8G1QpQnTgm.1
We're super excited to welcome Anastasia Isychev from TU Wien to present on the
cost of soundness in mixed-precision tuning. Here's the abstract:
Numerical code is often executed repetitively and on hardware with limited resources, which makes it a perfect target for optimizations. One of the most effective ways to boost performance—especially in terms of runtime—is by reducing the precision of computations. However, low precision can introduce significant rounding errors, potentially compromising the correctness of results. Mixed-precision tuning addresses this trade-off by assigning the lowest possible precision to a subset of variables and arithmetic operations in the program while ensuring that the overall error remains within acceptable bounds. State-of-the-art tools validate the accuracy of optimized programs using either sound static analysis or dynamic sampling. While sound methods are often considered safer but overly conservative, and dynamic methods are more aggressive and potentially more effective, the question remains: how do these approaches compare in practice?
We present the first comprehensive evaluation of existing mixed-precision tuning tools for floating-point programs, offering a **quantitative comparison** between sound static and (unsound) dynamic approaches. We measure the trade-offs between performance gains, exploited optimization potential, and soundness guarantees on accuracy—what we refer to as the cost of soundness. Our experiments on the standard FPBench benchmark suite challenge the common belief that dynamic optimizers consistently generate faster programs. In fact, for small straight-line numerical programs, we find that sound tools enhanced with regime inference match or outperform dynamic ones while providing formal correctness guarantees, albeit at the cost of increased optimization time. Standalone sound tools, however, are overly conservative, especially when accuracy constraints are tight; dynamic tools are consistently effective for different targets, but exceed the maximum allowed error by up to 9 orders of magnitude.
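As a rough illustration of the precision/error trade-off the abstract describes (this sketch is ours, not from the talk or paper): lowering an expression from double to single precision can speed up evaluation, but each rounded operation introduces error that a tuner must keep within a bound. The polynomial and input below are arbitrary examples.

```python
import numpy as np

def poly_f64(x):
    # Reference: evaluate x*x*x - 2*x entirely in double precision.
    return x * x * x - 2.0 * x

def poly_f32(x):
    # "Tuned" version: the same arithmetic, but every operand and
    # operation is lowered to single precision (float32).
    x32 = np.float32(x)
    return float(x32 * x32 * x32 - np.float32(2.0) * x32)

x = 1.0000001
err = abs(poly_f64(x) - poly_f32(x))
print(f"absolute rounding error from lowering precision: {err:.2e}")
```

A mixed-precision tuner automates this kind of decision per variable and per operation, accepting the lowered version only if the resulting error (bounded statically by sound tools, estimated by sampling in dynamic ones) stays under the user's threshold.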
The work is based on Anastasia's
paper at OOPSLA 2025. Looking forward to seeing everyone!
As a reminder, if you would like to give a talk, or know of someone who would be a great speaker for an FPBench Community meeting, please fill out the speaker suggestion form!