Spline Alternatives

Emerenciana Mcgreal

Aug 3, 2024, 5:33:15 PM
to arylniquan

Find the top alternatives to Spline currently available. Compare ratings, reviews, pricing, and features of Spline alternatives in 2024. Slashdot lists the best Spline alternatives on the market that offer competing products that are similar to Spline. Sort through Spline alternatives below to make the best choice for your needs.

Compare Spline alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to Spline in 2024. Compare features, ratings, user reviews, pricing, and more from Spline competitors and alternatives in order to make an informed decision for your business.

Was this ironic? If not, well, there are more reasons, but this one is enough: standardization. Splines are waaaay too ubiquitous in games nowadays for them to not have a native implementation with shared code and UX across all their uses.

Splines integrate very nicely with existing core features like terrain and cameras, as well as basic scene construction, which is why spline support should definitely be tightly integrated into the core engine itself.

One common pattern in all the examples you gave is that Unity provided a flawed or insufficient solution initially, and then the Asset Store provided better ones. After that, Unity tried solving that exact issue again, often in a lackluster way. E.g. the Unity Input System isn't much better than Rewired, and NGUI and UGUI suffer from the same issues. Who knows what Multiplayer is up to these days; nobody in their right mind uses it.

And I see it like this.
Implementing splines is a relatively low effort yet profoundly useful feature.
Built in splines allow for interoperability between separately developed tools by different vendors.

Implementing a spline system is arguably more similar to creating a Camera. The design is known and the feature set decades old; we don't need options here, we just need a good core feature that can tie into other core features, but can also be utilized to enhance any 3rd-party plugin that can think of a way to use it.

I am porting a script written in R over to Python. In R I am using smooth.spline and in Python I am using SciPy's UnivariateSpline. They don't produce the same results (even though they are both based on a cubic spline method). Is there a way, or an alternative to UnivariateSpline, to make the Python spline return the same spline as R?

smooth.spline in R is a "smoothing spline", which is an overparametrized natural spline (knots at every data point, cubic spline in the interior, linear extrapolation), with penalized least squares used to choose the parameters. You can read the help page for the details of how the penalty is computed.

These are completely different algorithms, and I wouldn't expect them to give equal results. I don't know if there's an R package that uses the same adaptive choice of knots as Python does. This answer claims to reference a natural smoothing spline implementation in Python, but I don't know if it matches R's implementation.
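
If an exact match isn't required, it's worth knowing that newer SciPy versions (1.10+) ship make_smoothing_spline, which fits a penalized natural cubic spline with knots at every data point and a GCV-chosen penalty when lam is omitted. That is much closer in spirit to R's smooth.spline than UnivariateSpline is. A minimal sketch comparing the two (toy data, my own example):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline, make_smoothing_spline

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = np.sin(x) + rng.normal(scale=0.1, size=x.size)

# UnivariateSpline: adaptive knot placement controlled by the smoothing
# factor `s` (an upper bound on the sum of squared residuals)
us = UnivariateSpline(x, y, k=3, s=x.size * 0.01)

# make_smoothing_spline (SciPy >= 1.10): penalized natural cubic spline
# with knots at every data point, lambda chosen by GCV when omitted --
# conceptually much closer to R's smooth.spline
ss = make_smoothing_spline(x, y)

# the two fits are similar but generally not identical
print(np.max(np.abs(us(x) - ss(x))))
```

Even with make_smoothing_spline, bit-for-bit agreement with R is unlikely, since the penalty parameterization differs, but the model class is the same.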

Note that this code is not fully compatible with Jupyter notebooks for the latest versions of rpy2. You can fix this by using !pip install -Iv rpy2==3.4.2, as described in "NotImplementedError: Conversion 'rpy2py' not defined for objects of type ''", though in my case only after running the code twice.

I have had two sets of lug nuts with internal engagement tools, and both worked well. One used a regular 12 mm Allen wrench, and the other used a special 8-point drive that was really two overlapping 1/2" square drives, so they could be loosened with a 1/2" ratchet extension.

I have a set of Enkei wheels that require spline drive nuts. They have two different hole PCDs, so the holes for each nut are small. My narrowest 17 mm socket doesn't come close to going into the holes.

You know how, if you have both envelope deformation chains and spline deformation chains in the same deformation group and you hook something up to said group via a kinematic output, the envelope chains will affect that object too, and often in unfortunate ways?

Far-away points may, in the aggregate rather than individually, have an effect on filling in the "holes" of the corresponding scale. In the presence of enough neighboring points, the influence of far trends becomes negligible. As a demonstration, write a univariate linear or cubic spline as an N-term sum of RBFs, formally including the "far-away" terms.
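
To make that concrete, here is a small sketch (my own, assuming SciPy is available) of the classical fact that in 1D the cubic RBF interpolant with a linear polynomial tail coincides with the natural cubic spline, i.e., the spline really is an N-sum of RBF terms:

```python
import numpy as np
from scipy.interpolate import CubicSpline, RBFInterpolator

x = np.linspace(0.0, 5.0, 8)
y = np.cos(x)

# cubic RBF (phi(r) = r^3) with a degree-1 polynomial tail: in 1D this
# reproduces the natural cubic spline interpolant exactly
rbf = RBFInterpolator(x[:, None], y, kernel="cubic", degree=1)
nat = CubicSpline(x, y, bc_type="natural")

xs = np.linspace(0.0, 5.0, 100)
print(np.max(np.abs(rbf(xs[:, None]) - nat(xs))))  # agree to roundoff
```

The RBF form makes the "far-away terms" explicit: every |x - x_i|^3 term is globally supported, yet the combined weights conspire so that distant data barely moves the local fit.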

The evaluation of a large RBF sum can be done with the fast multipole method (FMM) to a given precision. The idea is to pre-compute the influences of hierarchical groups of far-away points. This is similar to multipole decomposition in electrostatics: far-away charges are approximated as one net charge or dipole; the finer structure of the remote cloud doesn't matter there.
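
A toy sketch of that far-field idea (my own illustration, not FMM itself): the combined influence of a tight far-away cluster on a target is well approximated by a single aggregate source placed at the cluster's weighted centroid:

```python
import numpy as np

rng = np.random.default_rng(0)
sources = rng.uniform(100.0, 101.0, 50)   # tight cluster, far from the target
weights = rng.uniform(0.5, 1.5, 50)       # positive source strengths
target = 0.0

def kernel(r):
    return 1.0 / r   # decaying radial kernel, as in electrostatics

# exact sum over every source vs. one lumped source at the centroid
exact = np.sum(weights * kernel(np.abs(sources - target)))
approx = weights.sum() * kernel(abs(np.average(sources, weights=weights) - target))

rel_err = abs(exact - approx) / exact
print(rel_err)   # tiny: the cluster's internal structure barely matters
```

FMM applies this lumping recursively over a hierarchy of clusters, which is what brings the evaluation cost down from O(N^2) to roughly O(N).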

Fitting a large globally supported RBF interpolant is indeed not an easy problem. One current approach is to use various matrix preconditioning methods. I, for example, am developing with my mentor an alternative approach, a greedy algorithm, which should allow fitting very large oversampled datasets.

If your data is large but has more or less regular sparsity, or you can estimate the scale of the holes, I would recommend compactly supported RBFs, or a series of them with different radii; that is how it's done, I think, in the ALGLIB library, for example.
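
As an illustration of the compact-support point (my own sketch; the Wendland C2 function is standard, everything else is toy data):

```python
import numpy as np

def wendland_c2(r, radius):
    # Wendland's C^2 compactly supported RBF: (1 - t)^4 (4t + 1) for
    # t = r / radius <= 1, zero outside; positive definite in up to 3D
    t = np.clip(r / radius, 0.0, 1.0)
    return (1.0 - t) ** 4 * (4.0 * t + 1.0)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 10.0, 40))
y = np.sin(x)

radius = 3.0   # support radius should be tied to the local point spacing
A = wendland_c2(np.abs(x[:, None] - x[None, :]), radius)  # mostly zeros
w = np.linalg.solve(A, y)

xs = np.linspace(0.0, 10.0, 200)
fit = wendland_c2(np.abs(xs[:, None] - x[None, :]), radius) @ w
```

Shrinking the radius well below the typical gap between points makes the fit collapse toward zero between the data, which is exactly the failure mode the Buhmann quote below describes.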

"...if the data are much further apart than the support size of the compactly supported radial basis functions, we will get a useless approximation to the data, although it always interpolates... Conversely, if the data are far closer to each other on average than the support size of the radial basis function, we lose the benefits of compact support. Thus, the scaling should be related to the local spacing of the centres." -- Buhmann, Radial Basis Functions. See also here.

If you don't have an estimate of the diameter of the largest possible gap and can't use a uniform radius, you may want to look for variable scaling schemes, for example: "This algorithm builds multiscale (hierarchical) RBF models... with decreasing radii... Values predicted by the first layer of the RBF model are subtracted from the function values at nodes ... residuals are passed to the next iteration."
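
A condensed sketch of that residual-cascade scheme (my own toy version: coarse-to-fine center subsets with shrinking Wendland radii, not the actual algorithm from the paper):

```python
import numpy as np

def wendland_c2(r, radius):
    # Wendland's C^2 compactly supported RBF, zero for r > radius
    t = np.clip(r / radius, 0.0, 1.0)
    return (1.0 - t) ** 4 * (4.0 * t + 1.0)

def fit_multiscale(x, y, radii, steps):
    """Each layer fits the residual left by the coarser layers, using a
    denser center subset and a smaller radius than the layer before."""
    layers, resid = [], y.copy()
    for radius, step in zip(radii, steps):
        centers = x[::step]
        A = wendland_c2(np.abs(centers[:, None] - centers[None, :]), radius)
        w = np.linalg.solve(A, resid[::step])
        layers.append((radius, centers, w))
        resid = resid - wendland_c2(np.abs(x[:, None] - centers[None, :]), radius) @ w
    return layers, resid

x = np.linspace(0.0, 10.0, 60)
y = np.sin(x) + 0.3 * np.sin(5.0 * x)
layers, resid = fit_multiscale(x, y, radii=[8.0, 3.0, 1.0], steps=[8, 3, 1])
print(np.max(np.abs(resid)))   # residual at the nodes after the last layer
```

The coarse wide-radius layer captures the slow trend; each finer layer only has to explain what is left over, which keeps every individual system small and well scaled.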

Even though CSRBFs may look visually unpleasant when the radius is too small, they still minimize a norm, i.e., they are the best approximants in a so-called native function space, which for Wendland's CSRBFs of minimal degree is a classical Sobolev space. This is Theorem 10.35 in Wendland, Scattered Data Approximation.

There's been a fair bit of buzz about Kolmogorov-Arnold networks online lately. Some research papers were posted around claiming that they offer better accuracy or faster training compared to traditional neural networks/MLPs for the same parameter count.

That being said, KANs can usually come close to or match the performance of regular neural networks at the same parameter count. However, they are much more complicated to implement than neural networks and require a lot of tricks and hacky-feeling techniques to make them work.

I do believe that there are specialized use cases where they could be objectively better than NNs and be worth pursuing, but in my opinion the brutal simplicity of NNs makes them a much stronger default choice.

One big difference to note is that there are far fewer connections between nodes in KANs compared to neural networks/MLPs. KANs move the majority of the learnable parameters into the nodes/activation functions themselves.

There are a lot of things you can customize with B-splines. You can pick the degree of the polynomial used to represent the different grid segments, you can pick the number of knots, which determines how many polynomials are strung together, and you can specify the domain/range of the spline however you want.
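
Those knobs map directly onto SciPy's BSpline, for instance (a sketch; the specific degree, grid, and domain here are arbitrary choices of mine):

```python
import numpy as np
from scipy.interpolate import BSpline

k = 3                                  # polynomial degree of each segment
grid = np.linspace(-1.0, 1.0, 8)       # knot grid over the chosen domain
# repeat the boundary knots k extra times so the spline spans the domain
knots = np.r_[np.full(k, grid[0]), grid, np.full(k, grid[-1])]
n_coeffs = len(knots) - k - 1          # number of learnable coefficients

coeffs = np.random.default_rng(0).normal(size=n_coeffs)
spline = BSpline(knots, coeffs, k)
print(spline(0.0))                     # evaluate anywhere in [-1, 1]
```

More knots means more, shorter polynomial pieces and therefore more coefficients for the KAN to learn per activation function.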

Another nice thing about B-splines is that they are entirely differentiable. That means that the autograd implementations in machine learning frameworks like Torch, Jax, and Tinygrad can optimize the coefficients used to define the splines directly. This is how the "learning" in machine learning happens, so definitely something that's needed for an activation function to be usable in a KAN.
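
Because a spline's output is linear in its coefficients, the gradient that a framework like Torch or Tinygrad derives is simply the transposed basis matrix times the residual. Here is a framework-free sketch (my own, using SciPy only to build the basis) that trains the coefficients with that hand-written gradient:

```python
import numpy as np
from scipy.interpolate import BSpline

k = 3
grid = np.linspace(-1.0, 1.0, 10)
knots = np.r_[np.full(k, grid[0]), grid, np.full(k, grid[-1])]
n = len(knots) - k - 1

x = np.linspace(-1.0, 1.0, 200)
# basis matrix: column j is the j-th B-spline basis function at x
B = BSpline(knots, np.eye(n), k)(x)

target = np.sin(3.0 * x)
c = np.zeros(n)                        # the "learnable parameters"
for _ in range(2000):
    resid = B @ c - target
    c -= 5.0 * (B.T @ resid) / len(x)  # gradient step on the MSE
print(np.mean((B @ c - target) ** 2))  # far below the initial ~0.5
```

An autograd framework computes exactly this gradient for you, which is what lets the spline coefficients sit anywhere inside a larger KAN and still be trained end to end.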

After reading up enough on KANs to feel like I understood what was going on, I decided to try implementing them myself from scratch and try them out on some toy problems. I decided to build it in Tinygrad, a minimal ML framework I've had success working with in the past. What I ended up with is here.

The basic KAN architecture wasn't too complicated, and the only really tricky part was the B-spline implementation. For that, I just ported the implementation that the authors of the original KAN research paper created for their PyKAN library.

After a bit of effort and some debugging, I had a working KAN implementation. To test it out, I set up a small KAN with 2 layers and trained it to fit some relatively simple 1D functions, and it did a pretty good job:

Pretty solid results! The first layer's outputs are pretty simple and represent a single spline each. Then the second layer creates more complicated representations that are stitched together from the outputs of the previous layer, and the final layer has a single spline which combines it all together and returns the model's output.

Inspired by this early success, I decided to try turning up the complexity. I set up a training pipeline to parameterize images - learning a function like (normalizedXCoord, normalizedYCoord) -> pixelLuminance.
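
The dataset construction for that setup looks roughly like this (a hypothetical sketch; img is a random placeholder standing in for a real grayscale image):

```python
import numpy as np

img = np.random.default_rng(0).random((32, 32))  # stand-in grayscale image
h, w = img.shape

ys, xs = np.mgrid[0:h, 0:w]
# each row: (normalizedXCoord, normalizedYCoord); target: pixel luminance
coords = np.stack([xs.ravel() / (w - 1), ys.ravel() / (h - 1)], axis=1)
targets = img.ravel()
print(coords.shape, targets.shape)  # (1024, 2) (1024,)
```

The model then just regresses targets on coords; rendering the learned image back out is a single batched forward pass over the same coordinate grid.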
