Quick reminder that this year's annual FPTalks workshop will be held on Thursday, July 11th, starting around 7:45am Pacific Time!
Somehow we have an even more exciting lineup of the latest and greatest research from across numerical computing, including tools, program analyses, hardware design, and formal verification. It's going to be a fantastic event :)
Talks will be over Zoom (we will send the link to all registered attendees) and streamed to YouTube.
A snapshot of the full program is below. Please help us out by spreading the word!
8:00–9:00 PDT: Session 1
8-bit Transformer Inference and Fine-tuning for Edge Accelerators
Jeffrey Yu, Stanford University
Precision Learning for DNN Compression via Adaptive Quantization
Cédric Gernigon, Inria Rennes
Accumulator-Aware Quantization with Guaranteed Overflow Avoidance
Ian Colbert, AMD Software Architecture
FTTN: Feature-Targeted Testing of NVIDIA & AMD Matrix Accelerators
Xinyi Li, Pacific Northwest National Laboratory
9:30–10:30 PDT: Session 2
Type-based approaches to rounding error analysis
Ariel E. Kellison, Cornell University
End-to-End Verification of a Fast and Accurate Floating-Point Approximation
Guillaume Melquiond, Université Paris-Saclay, Inria
Bit Blasting Probabilistic Programs
Guy Van den Broeck, University of California, Los Angeles
A Formal Specification of Tensor Cores via Satisfiability Modulo Theories
Benjamin Valpey, University of Rochester
10:30–11:00 PDT: Session 3
Verification of Digital Numerics for High Consequence Systems
Sam Pollard, Sandia National Laboratories
Predicting Performance and Accuracy for Precision Tuning
Yutong Wang, University of California, Davis
An Overview of the NASA LaRC Tool Suite for Floating-Point Analysis
Mariano Moscato, NASA LaRC / AMA Inc.
Customizing Elementary Function Approximations for Hardware Accelerators
Benjamin Carleton, Cornell University