Chebyshev interpolation beyond 3D


kars hofstra

Feb 10, 2026, 7:55:20 AM
to chebfun-users
Dear all,

I am currently writing my MSc thesis, in which I plan to use Chebyshev interpolation, and by extension Chebfun, to price financial derivatives. As part of this work, I am evaluating methods for approximating higher-dimensional functions using Chebyshev techniques.

Since Chebfun is limited to three dimensions, I am looking for papers or software packages that address Chebyshev interpolation in dimensions beyond this. If you are aware of published work or existing implementations that perform higher‑dimensional interpolation, I would be very grateful for your suggestions.

I am familiar with research that attempts to alleviate the curse of dimensionality using some form of low-rank tensor format, such as [1] in option pricing (as well as the methodology underlying chebfun3 itself). The framework in [1] can be applied to high-dimensional problems, but I am curious about other implementations as well, specifically those with ready-to-use code available.
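
To make concrete the kind of structure I mean, here is a minimal, hypothetical C++ sketch of a rank-R separated (CP-style) representation, in which the d-dimensional interpolant is a sum of products of 1D Chebyshev series, so that evaluation costs O(R*d*N) instead of the O(N^d) of a full tensor grid; the coefficients below are placeholders, and [1] itself works with a tensor-train rather than a CP decomposition.

// Hypothetical sketch: f(x_1,...,x_d) ~= sum_{r=1}^{R} prod_{k=1}^{d} g_{r,k}(x_k),
// where each factor g_{r,k} is a 1D Chebyshev series (coefficients are placeholders).
#include <iostream>
#include <vector>

// Evaluate a 1D Chebyshev series sum_n c_n T_n(x) on [-1,1] via the Clenshaw recurrence.
double clenshaw(const std::vector<double>& c, double x) {
    double b1 = 0.0, b2 = 0.0;
    for (int n = (int)c.size() - 1; n >= 1; --n) {
        double b0 = c[n] + 2.0 * x * b1 - b2;
        b2 = b1;
        b1 = b0;
    }
    return c[0] + x * b1 - b2;
}

// factors[r][k] holds the Chebyshev coefficients of the factor g_{r,k}.
double eval_separated(const std::vector<std::vector<std::vector<double>>>& factors,
                      const std::vector<double>& x) {
    double total = 0.0;
    for (const auto& term : factors) {            // sum over rank-1 terms
        double prod = 1.0;
        for (size_t k = 0; k < x.size(); ++k)     // product over dimensions
            prod *= clenshaw(term[k], x[k]);
        total += prod;
    }
    return total;
}

int main() {
    // Placeholder rank-2 approximation of a 5-variable function on [-1,1]^5.
    std::vector<std::vector<std::vector<double>>> factors(
        2, std::vector<std::vector<double>>(5, {1.0, 0.5, 0.25}));
    std::vector<double> x(5, 0.3);
    std::cout << eval_separated(factors, x) << std::endl;
    return 0;
}

Any pointers to libraries that construct such factorizations adaptively (and come with ready-to-use code) would be exactly what I am after.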

Thank you very much in advance for your help.

Kind regards,
Kars Hofstra, MSc student in Quantitative Finance at Erasmus University Rotterdam.

Reference
[1] Glau, K., Kressner, D., and Statti, F. (2020). Low-rank tensor approximation for Chebyshev interpolation in parametric option pricing. SIAM Journal on Financial Mathematics, 11(3):897–927.

Behnam Hashemi

Feb 11, 2026, 12:41:37 AM
to chebfun-users
Dear Kars,

I'm aware of the following two extensions: 
Best wishes,
Behnam


kars hofstra

Feb 11, 2026, 5:27:39 AM
to chebfun-users
Dear Behnam, 

Thank you for your quick response. I will have a look at your suggestions; they both seem promising. Thank you again for the help.

Kind regards,
Kars

On Wednesday, February 11, 2026 at 06:41:37 UTC+1, Behnam Hashemi wrote:

Ian Ajzenszmidt

Feb 11, 2026, 9:11:47 AM
to kars hofstra, chebfun-users
Here is a C++ program and a demonstration run that implement a Tasmanian-style sparse-grid algorithm from first principles, written with assistance from the gemini.google.com AI LLM.
userland@localhost:~$ rm tfp.cpp
userland@localhost:~$ nano tfp.cpp
userland@localhost:~$ g++ tfp.cpp
userland@localhost:~$ ./a.out
Initializing 5D Chebyshev Sparse Grid (Level 3)...
Interpolant constructed.

Validation vs Closed Form:
True Price     Interp Price   Abs Error
--------------------------------------------
7.7060         7.6996         6.39e-03
0.0785         0.0929         1.44e-02
2.1704         2.1703         1.01e-04
0.0579         0.0594         1.46e-03
0.0364         0.0435         7.15e-03
17.1757        17.1771        1.38e-03
3.8619         3.8584         3.44e-03
0.0230         0.0356         1.26e-02
1.0964         1.0519         4.45e-02
0.3457         0.3332         1.25e-02
--------------------------------------------
Max Error: 4.45e-02
userland@localhost:~$ cat tfp.cpp
#include <iostream>
#include <vector>
#include <cmath>
#include <map>
#include <functional>
#include <numeric>
#include <iomanip>
#include <random>

const double PI = 3.14159265358979323846;

// ============================================================================
// 1. Core Mathematics: Barycentric Lagrange Basis
// ============================================================================
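// The barycentric form of the Lagrange interpolant through nodes x_j with data f_j is
//     p(x) = [ sum_j (w_j / (x - x_j)) * f_j ] / [ sum_j w_j / (x - x_j) ],
// and for Chebyshev points the weights are proportional to (-1)^j, halved at the
// two endpoints. eval_basis() below returns the normalized terms
//     L_j(x) = (w_j / (x - x_j)) / sum_k (w_k / (x - x_k)),
// so the interpolant is simply sum_j L_j(x) * f_j.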
struct BarycentricBasis {
    std::vector<double> nodes;
    std::vector<double> weights;

    // Initialize Chebyshev nodes and weights (Clenshaw-Curtis)
    void setup(int N) {
        nodes.resize(N + 1);
        weights.resize(N + 1);
        for (int j = 0; j <= N; ++j) {
            // Nodes: -cos(j * pi / N), increasing from -1 to 1
            nodes[j] = (N == 0) ? 0.0 : -std::cos(j * PI / N);

            // Barycentric weights for Chebyshev points: (-1)^j, halved at the endpoints
            double w = (j % 2 == 0) ? 1.0 : -1.0;
            if (j == 0 || j == N) w *= 0.5;
            weights[j] = w;
        }
    }

    // Evaluate Lagrange basis functions L_j(x) at a specific point x
    // Stores result in 'out' vector
    void eval_basis(double x, std::vector<double>& out) const {
        out.resize(nodes.size());
        double den = 0.0;
        int match = -1;

        for (size_t j = 0; j < nodes.size(); ++j) {
            double diff = x - nodes[j];
            if (std::abs(diff) < 1e-14) {
                match = j;
                break;
            }
            double t = weights[j] / diff;
            out[j] = t;
            den += t;
        }

        if (match != -1) {
            std::fill(out.begin(), out.end(), 0.0);
            out[match] = 1.0;
        } else {
            for (size_t j = 0; j < nodes.size(); ++j) out[j] /= den;
        }
    }
};

// ============================================================================
// 2. Sparse Grid Interpolator (Smolyak Combination Technique)
// ============================================================================
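// Smolyak combination technique: writing |l| = levels[0] + ... + levels[dim-1],
// the sparse interpolant keeps every tensor-product grid whose level sum satisfies
//     depth <= |l| <= depth + dim - 1,
// weighted by the signed binomial coefficient
//     c(l) = (-1)^(depth + dim - 1 - |l|) * C(dim - 1, depth + dim - 1 - |l|),
// which is what build() computes below via 'target' and 'diff'.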
class SparseGrid {
    struct GridIndex {
        std::vector<int> levels; // Level per dimension
        double coeff;            // Smolyak coefficient
    };

    int dim;
    std::vector<GridIndex> indices;

    // Cache: Map from Degree N -> Basis Structure
    std::map<int, BarycentricBasis> bases;

    // Stored function values for each component grid
    std::vector<std::vector<double>> grid_values;

    std::vector<double> domain_min, domain_max;

public:
    SparseGrid(int d, const std::vector<double>& min, const std::vector<double>& max)
        : dim(d), domain_min(min), domain_max(max) {}

    // Build the grid and compute pricing values
    void build(int depth, std::function<double(const std::vector<double>&)> f) {
        indices.clear();
        grid_values.clear();
        bases.clear();

        // Helper: Binomial Coefficient
        auto nCr = [](int n, int r) {
            if (r < 0 || r > n) return 0.0;
            double res = 1;
            for (int i = 1; i <= r; ++i) res = res * (n - i + 1) / i;
            return res;
        };

        // Recursive generator for Smolyak indices
        // Logic: Includes grids where (depth <= sum(levels) <= depth + dim - 1)
        std::function<void(std::vector<int>&, int)> generate;
        generate = [&](std::vector<int>& current, int d_idx) {
            if (d_idx == dim) {
                int sum_k = std::accumulate(current.begin(), current.end(), 0);

                // Calculate Smolyak Coefficient
                int target = depth + dim - 1;
                int diff = target - sum_k;

                // Check if valid index
                if (diff >= 0 && diff < dim) {
                    double c = std::pow(-1.0, diff) * nCr(dim - 1, diff);
                    if (std::abs(c) > 1e-9) {
                        indices.push_back({current, c});
                    }
                }
                return;
            }

            // Optimization: Prune recursion
            int current_sum = std::accumulate(current.begin(), current.begin() + d_idx, 0);
            int max_rem = depth + dim - 1 - current_sum;

            for (int l = 0; l <= max_rem; ++l) {
                current[d_idx] = l;
                generate(current, d_idx + 1);
            }
        };

        std::vector<int> buf(dim);
        generate(buf, 0);

        // Compute Values & Store Bases
        for (const auto& idx : indices) {
            std::vector<int> pts_per_dim(dim);
            int total_pts = 1;

            for (int d = 0; d < dim; ++d) {
                int l = idx.levels[d];
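                // Growth rule used here: level l gives polynomial degree N = 2^l
                // (N = 1 at level 0), i.e. N + 1 Chebyshev points; successive
                // levels produce nested point sets.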
                int N = (l == 0) ? 1 : (1 << l); // Degree
                pts_per_dim[d] = N + 1;
                total_pts *= (N + 1);

                if (bases.find(N) == bases.end()) {
                    bases[N].setup(N);
                }
            }

            // Evaluate Function on this Tensor Grid
            std::vector<double> vals(total_pts);
            for (int i = 0; i < total_pts; ++i) {
                std::vector<double> x(dim);
                int temp = i;
                for (int d = dim - 1; d >= 0; --d) {
                    int p_idx = temp % pts_per_dim[d];
                    temp /= pts_per_dim[d];

                    int N = (idx.levels[d] == 0) ? 1 : (1 << idx.levels[d]);
                    double node = bases[N].nodes[p_idx];

                    // Map [-1, 1] -> Physical Domain
                    x[d] = domain_min[d] + (node + 1.0) * 0.5 * (domain_max[d] - domain_min[d]);
                }
                vals[i] = f(x);
            }
            grid_values.push_back(vals);
        }
    }

    // Evaluate the Interpolant
    double evaluate(const std::vector<double>& x_phys) {
        double total = 0.0;

        // Map x to canonical [-1, 1]
        std::vector<double> x(dim);
        for(int d=0; d<dim; ++d)
            x[d] = 2.0 * (x_phys[d] - domain_min[d]) / (domain_max[d] - domain_min[d]) - 1.0;

        // Sum weighted contributions from all sub-grids
        for (size_t g = 0; g < indices.size(); ++g) {
            const auto& idx = indices[g];
            const auto& vals = grid_values[g];

            // Compute 1D basis weights for this specific grid's levels
            // w[d][k] = Value of k-th Lagrange basis for dim 'd' at point x[d]
            std::vector<std::vector<double>> w(dim);
            std::vector<int> pts(dim);

            for(int d=0; d<dim; ++d) {
                int N = (idx.levels[d] == 0) ? 1 : (1 << idx.levels[d]);
                bases[N].eval_basis(x[d], w[d]);
                pts[d] = w[d].size();
            }

            // Tensor contraction
            double grid_sum = 0.0;
            int n_vals = vals.size();

            for(int i=0; i<n_vals; ++i) {
                double weight = 1.0;
                int temp = i;
                for(int d=dim-1; d>=0; --d) {
                    int p = temp % pts[d];
                    temp /= pts[d];
                    weight *= w[d][p];
                }
                grid_sum += vals[i] * weight;
            }
            total += idx.coeff * grid_sum;
        }
        return total;
    }
};

// ============================================================================
// 3. Financial Logic & Main
// ============================================================================

// Helper: standard normal CDF via erfc (M_SQRT1_2 = 1/sqrt(2) from <cmath>;
// define it yourself if your compiler does not provide it)
double norm_cdf(double x) {
    return 0.5 * std::erfc(-x * M_SQRT1_2);
}

// Target Function: Geometric Basket Option
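// Closed form: for d uncorrelated GBM assets with common volatility sigma, the
// geometric average B = (S_1 * ... * S_d)^(1/d) is itself lognormal with
// volatility sigma_b = sigma / sqrt(d) and risk-neutral drift
// mu_b = r - sigma^2/2 + sigma_b^2/2, so the basket call price is given by the
// Black-Scholes formula applied to B.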
double geometric_basket(const std::vector<double>& S) {
    double K = 100.0, T = 1.0, r = 0.05, sigma = 0.2;
    int d = S.size();

    double prod = 1.0;
    for(double val : S) prod *= val;
    double B = std::pow(prod, 1.0/d);

    double sigma_b = sigma / std::sqrt((double)d);
    double mu_b = r - 0.5*sigma*sigma + 0.5*sigma_b*sigma_b;

    double d1 = (std::log(B/K) + (mu_b + 0.5*sigma_b*sigma_b)*T) / (sigma_b*std::sqrt(T));
    double d2 = d1 - sigma_b*std::sqrt(T);

    return std::exp(-r*T) * (std::exp(mu_b*T)*B*norm_cdf(d1) - K*norm_cdf(d2));
}

int main() {
    // 1. Setup
    int dim = 5;
    int level = 3; // Sparse Grid Level

    std::vector<double> min_s(dim, 50.0);
    std::vector<double> max_s(dim, 150.0);

    std::cout << "Initializing 5D Chebyshev Sparse Grid (Level " << level << ")..." << std::endl;

    SparseGrid pricer(dim, min_s, max_s);

    // 2. Build Interpolant
    // We pass the function pointer/lambda to the builder
    pricer.build(level, geometric_basket);

    std::cout << "Interpolant constructed." << std::endl;

    // 3. Validation
    std::cout << "\nValidation vs Closed Form:\n";
    std::cout << std::left << std::setw(15) << "True Price"
              << std::setw(15) << "Interp Price"
              << "Abs Error" << std::endl;
    std::cout << "--------------------------------------------" << std::endl;

    // Random Number Generator
    std::mt19937 gen(42);
    std::uniform_real_distribution<> dis(50.0, 150.0);

    double max_err = 0.0;
    for(int i=0; i<10; ++i) {
        std::vector<double> p(dim);
        for(int k=0; k<dim; ++k) p[k] = dis(gen);

        double exact = geometric_basket(p);
        double approx = pricer.evaluate(p);
        double err = std::abs(exact - approx);

        if(err > max_err) max_err = err;

        std::cout << std::left << std::setw(15) << std::fixed << std::setprecision(4) << exact
                  << std::setw(15) << approx
                  << std::scientific << std::setprecision(2) << err << std::endl;
    }

    std::cout << "--------------------------------------------" << std::endl;
    std::cout << "Max Error: " << std::scientific << max_err << std::endl;

    return 0;
}
userland@localhost:~$
