Chase Coleman and I (Spencer Lyon) have been working on some routines to do Smolyak interpolation/function approximation in Julia and Python. We are still in the early stages of the project, but have some promising results so far.
Smolyak Basics

Smolyak interpolation consists of two objects: a grid and an interpolating polynomial. Typically, the sparse grid is generated using nested sets of the extrema of the Chebyshev polynomials. Similarly, the polynomial is constructed using nested sets of one-dimensional basis polynomials (generally Chebyshev polynomials of the first kind). There have been numerous applications of this procedure to economic models. A few examples are:

- Krueger and Kubler (2004)
- Malin, Krueger, and Kubler (2007)
- Malin, Krueger, and Kubler (2011)
- Gordon (2011)
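For concreteness, here is a minimal Python sketch of the nested Chebyshev-extrema points mentioned above (function names are ours for illustration, not the package's API):

```python
import numpy as np

def cheb_extrema(i):
    """Chebyshev-extrema (Clenshaw-Curtis) points for nesting level i.

    Level i uses m_i = 2^(i-1) + 1 points (with m_1 = 1), the standard
    choice that makes successive levels nested.
    """
    if i == 1:
        return np.array([0.0])
    m = 2 ** (i - 1) + 1
    j = np.arange(m)
    # extrema of the degree-(m-1) Chebyshev polynomial on [-1, 1]
    return -np.cos(np.pi * j / (m - 1))

# nesting: every level-2 point reappears at level 3
lvl2 = {round(float(x), 12) for x in cheb_extrema(2)}
lvl3 = {round(float(x), 12) for x in cheb_extrema(3)}
assert lvl2 <= lvl3
```

The nesting is what the classical construction exploits: coarser levels are subsets of finer ones.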
To make the algorithm more efficient, Judd et al. (2013) suggest grid and interpolating-polynomial construction routines that rely on disjoint sets of points and basis elements instead of the traditional nested sets.
Smolyak.jl

The novel idea from Judd et al. (2013) is to avoid repeating calculations or function evaluations by constructing the grid and polynomial from unique elements only. We have taken this idea and improved upon it using some combinatorics tricks. We’ll be sure to document them soon (sorry I don’t do it here: it takes some background, which I don’t have time to lay out right now). So far, this has enabled us to build grids with many more dimensions or much greater “density” (which typically translates to greater accuracy in interpolation) than is possible with classical nested sets.
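To illustrate the disjoint construction, here is a rough Python sketch (our own illustration, not the package's code): each level's increment keeps only the points that did not already appear at the previous level, so no point is ever stored or evaluated twice.

```python
import numpy as np

def chebyshev_level(i):
    # nested Chebyshev-extrema points, m_i = 2^(i-1) + 1 of them
    if i == 1:
        return np.array([0.0])
    m = 2 ** (i - 1) + 1
    return -np.cos(np.pi * np.arange(m) / (m - 1))

def disjoint_increment(i):
    """Points that are new at level i (the set A_i = S_i \\ S_{i-1})."""
    if i == 1:
        return chebyshev_level(1)
    prev = {round(float(x), 12) for x in chebyshev_level(i - 1)}
    return np.array([x for x in chebyshev_level(i)
                     if round(float(x), 12) not in prev])

# increment sizes 1, 2, 2, 4, ...; their cumulative sums recover m_i
print([len(disjoint_increment(i)) for i in range(1, 5)])  # [1, 2, 2, 4]
```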
At this stage, the project is very young and we have some more work to do, but we wanted to make everyone aware of it and invite all who are interested to contribute. The code is on GitHub under the EconForge organization and can be found here: https://github.com/EconForge/Smolyak.jl.
As this is a bilingual project, the structure of the repository is a little unusual. We try to follow conventions for both Julia and Python packages, as explained in the README.
We will be updating the issue list on github soon, which should give a good idea of where the project currently stands and what still needs to happen. We also wanted to open this thread so we can have another location where people can give input.
Let us know if you are interested in using/contributing to the project!
These are actually near the top of our TODO list. I have updated the GitHub issue list to reflect this. I will be filling in the details on the issues soon. A summary of the TODO list is:
- Develop a more efficient algorithm for constructing the matrix used for the interpolating polynomial
- Implement anisotropic grids. This allows different parameters (density of grid) along each dimension. Should be fairly simple
- Implement an adaptive domain. Right now our grids are defined on [-1, 1]^d. We will have two methods of generalizing this. I will give details on the GH issue soon
- Implement linear interpolation. All the pieces are there for this; we just need to design a good interface and write a few functions.
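As a rough illustration of the anisotropic item above, the Smolyak multi-index selection can be sketched like this (a hypothetical helper written for this post, not the package's code; the exact anisotropic rule implemented may differ):

```python
from itertools import product

def smolyak_indices(d, mu):
    """Multi-indices kept by the Smolyak rule: sum(i) <= d + max(mu).

    `mu` may be a scalar (isotropic) or a length-d sequence; in the
    anisotropic case dimension k is additionally capped at 1 + mu[k],
    so the grid can be denser along some dimensions than others.
    """
    mu_vec = [mu] * d if isinstance(mu, int) else list(mu)
    ranges = [range(1, 2 + m) for m in mu_vec]
    bound = d + max(mu_vec)
    return [i for i in product(*ranges) if sum(i) <= bound]

print(len(smolyak_indices(2, 2)))       # isotropic: 6 multi-indices
print(len(smolyak_indices(2, (2, 1))))  # finer along the first dimension: 5
```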
I am not sure what you mean by hierarchical sparse grids. Can you elaborate?
On Thursday, November 14, 2013 3:54:05 AM UTC-5, Pablo Winant wrote:
Excellent! That's a very good project; I'm eager to see its progress.
The old Smolyak code in dolo needed a refresh anyway, since it was consuming too much memory. It's probably good to take a fresh start here. Have you thought about implementing linear interpolation using hierarchical sparse grids too (as in Matlab's sparse grid interpolation toolbox)? It has automatic dimension refinement, and there may be some inspiration to take from there.
Pablo
On Wednesday, November 13, 2013 7:37:52 PM UTC+1, Spencer Lyon wrote:
--
You received this message because you are subscribed to the Google Groups "econforge" group.
To unsubscribe from this group and stop receiving emails from it, send an email to econforge+...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
I was able to confirm with the Julia guys that repository names do not need to end in .jl. I have changed the name of the repo to just Smolyak (see my post for discussion).
Also, we will have to think carefully and do some testing to come up with the best solution regarding the possible use of generators. As you said, Pablo, Julia does have coroutines, which it calls Tasks, and we are already using them a bit in the code wherever we have generators on the Python side (see util.jl, and check out the pmute function and the comb_with_replacement function). I did, however, read in a Julia Google Groups post that the Task/produce/consume interface has a bit of overhead in Julia, and we might be better off implementing an iterator by defining the start(), next(), and done() methods for a custom type.
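For reference, the Python-side generators mentioned above look roughly like this (a hand-rolled sketch written for this post, not the code in util.jl; the stdlib's itertools.combinations_with_replacement does the same job):

```python
from itertools import combinations_with_replacement

def comb_with_replacement_gen(items, k):
    """Lazily yield size-k multisets drawn from `items`.

    Being a generator, it never materializes the full list of
    combinations up front -- the property that Julia's Tasks (or a
    start/next/done iterator) would replicate on the Julia side.
    """
    if k == 0:
        yield ()
        return
    for idx, x in enumerate(items):
        # recurse on items[idx:] so elements may repeat but order is fixed
        for rest in comb_with_replacement_gen(items[idx:], k - 1):
            yield (x,) + rest

# agrees with the stdlib version, element for element
assert list(comb_with_replacement_gen((1, 2, 3), 2)) == \
       list(combinations_with_replacement((1, 2, 3), 2))
```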
Cartesian is fantastic. We are actually using one of its macros, @forcartesian, in the cartesian function in util.jl. I think @nloops and @nref are quite flexible and will help us write concise, devectorized, performant code in these spline and interpolation routines.
Hi, I thought some of you guys might be interested in this. It's linear interpolation on Clenshaw-Curtis or no-boundary grids, but as it's based on some stuff I used in Matlab, the core of it is in a C++ library; this code is also dumped at the above link. It's got a fairly quick routine for generating the grids, allows you to use different bounds for variables (not [0, 1]), and has some rudimentary grid-size reduction based on hierarchical surpluses.
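For readers unfamiliar with hierarchical surpluses, here is a 1-D Python sketch of the idea (our own illustration, not code from Zac's library):

```python
import numpy as np

def hierarchical_surpluses(f, levels=4):
    """1-D sketch of hierarchical surpluses on a dyadic grid over [0, 1].

    The surplus at each newly added point is f(x) minus the piecewise-
    linear interpolant through the coarser points; points whose surplus
    falls below a tolerance are candidates for removal in an adaptive
    grid-size-reduction scheme.
    """
    xs = [0.0, 1.0]            # level-0 boundary points
    ys = [f(0.0), f(1.0)]
    surpluses = {}
    for lvl in range(1, levels + 1):
        step = 0.5 ** lvl
        new_x = [step * j for j in range(1, 2 ** lvl, 2)]  # odd multiples
        for x in new_x:
            # interpolate using only points from the coarser levels
            surpluses[x] = f(x) - float(np.interp(x, xs, ys))
        for x in new_x:
            k = int(np.searchsorted(xs, x))
            xs.insert(k, x)
            ys.insert(k, f(x))
    return surpluses

s = hierarchical_surpluses(lambda x: x * (1 - x), levels=3)
# for this smooth f the surpluses shrink by a factor of ~4 per level
```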
Hi Zac,
Thanks for the post, this does seem like it could be quite useful! The things Chase and I are working on are still very early in development, as you can see from the fact that our code doesn't actually "do" anything yet besides build the grid and basis functions.
Lately, most of our time working on this project (which is quite limited because we are both first-year PhD students) has been spent designing an algorithm that should drastically decrease the times you have reported. We will also have parallel implementations soon (our algorithm makes it trivial to parallelize). Hopefully these raw benchmarks will then be more meaningful.
Do you mind posting the code used to run the benchmark? I am not sure what your object Q is or how it enters into our code.
Also, we are more interested in making sure the code is efficient in high-dimensionality cases than in "high density" cases. Therefore, the current code scales much better in what we call d than in what we call mu.
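To illustrate the scaling claim, here is a quick count of isotropic grid sizes (a sketch based on the standard nested Chebyshev-extrema rule, not our actual implementation):

```python
from itertools import product

def smolyak_size(d, mu):
    """Points in an isotropic Smolyak grid, counted via the disjoint
    increments A_i with |A_1| = 1, |A_2| = 2, |A_i| = 2^(i-2) for i >= 3.
    """
    def a_size(i):
        return 1 if i == 1 else 2 if i == 2 else 2 ** (i - 2)
    total = 0
    # each i_k is at most mu + 1 under the constraint sum(i) <= d + mu
    for idx in product(range(1, mu + 2), repeat=d):
        if sum(idx) <= d + mu:
            block = 1
            for i in idx:
                block *= a_size(i)
            total += block
    return total

# grows only quadratically in d for fixed mu = 2
print([smolyak_size(d, 2) for d in (2, 5, 10)])  # [13, 61, 221]
```

Compare this with a tensor grid, whose size 5^d for the same one-dimensional resolution would already exceed nine million points at d = 10.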