Insights on Implementing Orthogonal Polynomial Sequences in Function Approximation

Stephen Crowley

Dec 2, 2023, 6:04:36 PM
to flint-devel

Dear FLINT connoisseurs,

I am currently working on a method of function approximation using orthogonal polynomial sequences as part of arb4j (which, by the way, has passed a significant milestone: it can now translate mathematical expressions directly into JVM bytecode via the Expression.express method in the expression compiler package), specifically within contexts like the Galerkin method. A significant part of this work involves Legendre polynomials. For these, I have adopted the approach of memoizing the polynomial coefficients and then using Arb's polynomial evaluation routines, since that appears to be the most efficient and practical strategy, especially when applying transformations to the coefficients such as various forms of linear operators.
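
For concreteness, here is a minimal C sketch of what I mean (FLINT 3 header layout; the fixed cache size, the hard-coded 256-bit precision, and the names legendre_poly/legendre_eval are mine for illustration, not anything in Arb's API):

#include <flint/fmpq_poly.h>
#include <flint/arith.h>
#include <flint/arb_poly.h>

#define MAX_DEGREE 64
#define PREC 256

static arb_poly_struct cache[MAX_DEGREE + 1];
static int cached[MAX_DEGREE + 1];

/* Return P_n with 256-bit coefficients, computing it exactly via FLINT's
   arith module and memoizing it on first use. */
static const arb_poly_struct *
legendre_poly(ulong n)
{
    if (!cached[n])
    {
        fmpq_poly_t p;
        fmpq_poly_init(p);
        arith_legendre_polynomial(p, n);  /* exact rational coefficients */
        arb_poly_init(cache + n);
        arb_poly_set_fmpq_poly(cache + n, p, PREC);
        fmpq_poly_clear(p);
        cached[n] = 1;
    }
    return cache + n;
}

/* Evaluate P_n(x) from the cached coefficients. */
static void
legendre_eval(arb_t y, ulong n, const arb_t x)
{
    arb_poly_evaluate(y, legendre_poly(n), x, PREC);
}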

While I am proceeding with this approach, I would greatly value the opinions and advice of the FLINT development community, especially regarding:

  1. Efficiency and Scalability: How does memoization of polynomial coefficients fare in terms of computational efficiency and scalability, particularly for large-scale applications? I suspect I may be re-inventing some of Arb's internal caching of polynomial coefficients, but I think this approach is cleaner given the API I'm trying to design.

  2. Evaluation of High-Degree Polynomials: What can I expect when evaluating the 20th Legendre polynomial from its coefficients stored as a 256-bit real polynomial, versus using the optimized 256-bit C method that evaluates it directly? Do the coefficients have to be computed to more than 256-bit precision for the result to be fully accurate at 256 bits? (The first sketch after this list shows the comparison I have in mind.)

  3. Memory Management: Any advice on efficiently managing the memory overhead that comes with storing these coefficients, especially in large computations? Does Arb have any support for block matrices or tridiagonal matrices? I'm thinking of implementing the Jacobi operator matrix here (second sketch below).

  4. Specific Considerations for Function Approximation: Are there particular nuances or additional tips that might benefit my specific application in function approximation? I have derived formulas for the projection of a function onto each of the eigenfunctions of a reproducing kernel Hilbert space (RKHS), but in many instances the projections must be computed numerically (third sketch below), which is where the technique described in the excellent book Stochastic Finite Elements: A Spectral Approach comes in.

  5. Adapting to Dynamic Computational Needs: How can one best adapt the memoization approach in environments where the required precision or polynomial degree may change? Is it possible, for instance, to speed up higher-precision calculations by starting from lower-precision ones (fourth sketch below)? This does not seem pressing, though, because in the applications I am envisioning the need to vary the accuracy at runtime does not appear critical.
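
Regarding point 2, this is the comparison I have in mind, with arb_rel_accuracy_bits showing how many of the 256 bits each route actually retains (I picked x = 0.99 since I expect cancellation among the large alternating coefficients to be worst near the endpoints):

#include <flint/fmpq_poly.h>
#include <flint/arith.h>
#include <flint/arb_poly.h>
#include <flint/arb_hypgeom.h>

int main(void)
{
    slong prec = 256;
    ulong n = 20;
    fmpq_poly_t pq;
    arb_poly_t p;
    arb_t x, ya, yb;

    fmpq_poly_init(pq);
    arb_poly_init(p);
    arb_init(x);
    arb_init(ya);
    arb_init(yb);

    /* route (a): exact coefficients rounded to 256 bits, then evaluated
       with arb_poly_evaluate */
    arith_legendre_polynomial(pq, n);
    arb_poly_set_fmpq_poly(p, pq, prec);
    arb_set_d(x, 0.99);
    arb_poly_evaluate(ya, p, x, prec);

    /* route (b): direct evaluation of P_n(x); NULL skips the derivative */
    arb_hypgeom_legendre_p_ui(yb, NULL, n, x, prec);

    flint_printf("from coefficients: ");
    arb_printd(ya, 30);
    flint_printf("  (%wd accurate bits)\n", arb_rel_accuracy_bits(ya));

    flint_printf("direct:            ");
    arb_printd(yb, 30);
    flint_printf("  (%wd accurate bits)\n", arb_rel_accuracy_bits(yb));

    fmpq_poly_clear(pq);
    arb_poly_clear(p);
    arb_clear(x);
    arb_clear(ya);
    arb_clear(yb);
    return 0;
}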
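
Regarding point 3, what I mean by the Jacobi operator matrix is the symmetric tridiagonal matrix for the Legendre weight, with zero diagonal and off-diagonal entries b_k = k / sqrt(4k^2 - 1). As far as I can tell Arb's matrices are dense only, so this sketch (legendre_jacobi_matrix is my own name) simply stores the zeros explicitly:

#include <flint/arb_mat.h>

/* Fill an N x N matrix (already arb_mat_init'ed by the caller) with the
   symmetric Jacobi matrix for the Legendre weight: zero diagonal and
   off-diagonal entries b_k = k / sqrt(4*k^2 - 1). */
static void
legendre_jacobi_matrix(arb_mat_t J, slong N, slong prec)
{
    arb_t b, t;
    arb_init(b);
    arb_init(t);
    arb_mat_zero(J);

    for (slong k = 1; k < N; k++)
    {
        arb_set_si(t, 4 * k * k - 1);
        arb_sqrt(t, t, prec);
        arb_set_si(b, k);
        arb_div(b, b, t, prec);  /* b_k = k / sqrt(4k^2 - 1) */
        arb_set(arb_mat_entry(J, k - 1, k), b);
        arb_set(arb_mat_entry(J, k, k - 1), b);
    }

    arb_clear(b);
    arb_clear(t);
}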
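
Regarding point 4, the numerically computed projections would look roughly like this sketch, which rigorously computes a single Legendre coefficient c_n = (2n+1)/2 * integral over [-1,1] of f(x) P_n(x) dx via acb_calc_integrate; f = exp is just a stand-in here for the actual RKHS eigenfunction integrands:

#include <flint/acb_calc.h>
#include <flint/acb_hypgeom.h>

/* integrand f(z) * P_n(z); both factors are entire, so no extra
   holomorphy check is needed */
static int
integrand(acb_ptr out, const acb_t z, void * param, slong order, slong prec)
{
    acb_t n, m, p;

    if (order > 1)
        flint_abort();  /* should never happen for an entire integrand */

    acb_init(n); acb_init(m); acb_init(p);  /* m stays 0, giving plain P_n */
    acb_set_ui(n, *(ulong *) param);
    acb_hypgeom_legendre_p(p, n, m, z, 0, prec);
    acb_exp(out, z, prec);                  /* stand-in f(z) = e^z */
    acb_mul(out, out, p, prec);

    acb_clear(n); acb_clear(m); acb_clear(p);
    return 0;
}

int main(void)
{
    slong prec = 256;
    ulong n = 4;
    acb_t a, b, c;
    mag_t tol;

    acb_init(a); acb_init(b); acb_init(c);
    mag_init(tol);

    acb_set_si(a, -1);
    acb_set_si(b, 1);
    mag_set_ui_2exp_si(tol, 1, -prec);

    acb_calc_integrate(c, integrand, &n, a, b, prec, tol, NULL, prec);

    acb_mul_ui(c, c, 2 * n + 1, prec);  /* c_n = (2n+1)/2 * integral */
    acb_mul_2exp_si(c, c, -1);

    acb_printd(c, 30);
    flint_printf("\n");

    acb_clear(a); acb_clear(b); acb_clear(c);
    mag_clear(tol);
    return 0;
}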
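
And regarding point 5, the idiom I assume applies here is re-running at doubled working precision rather than upgrading a finished result in place, keeping the exact rational coefficients around so they can be re-rounded at each precision (evaluate_adaptive is my own name):

#include <flint/fmpq_poly.h>
#include <flint/arb_poly.h>

/* Evaluate an exactly known polynomial at x to at least `goal` accurate
   bits, doubling the working precision until the enclosure is tight
   enough.  The cap avoids looping forever when x itself is too imprecise
   for the goal to be reachable. */
static void
evaluate_adaptive(arb_t y, const fmpq_poly_t exact, const arb_t x, slong goal)
{
    arb_poly_t p;
    arb_poly_init(p);

    for (slong prec = goal + 16; prec <= 64 * goal; prec *= 2)
    {
        arb_poly_set_fmpq_poly(p, exact, prec);
        arb_poly_evaluate(y, p, x, prec);
        if (arb_rel_accuracy_bits(y) >= goal)
            break;
    }

    arb_poly_clear(p);
}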

Thanks very much for your input and guidance on this.


Best regards,

Stephen
