symjit


Shahriar Iravanian

Feb 19, 2025, 5:14:37 PM
to sympy
Could you please add symjit (https://github.com/siravan/symjit) to the list of SymPy projects?

Symjit is a lightweight just-in-time (JIT) compiler that directly translates basic sympy expressions into x86-64 and AArch64 machine code (and, optionally, WebAssembly). Currently, its main use is to generate fast numerical functions to feed into numerical solvers (quadrature, ODE solvers, ...). It has minimal dependencies on external libraries and does not use a separate compiler, such as LLVM. It also works very well in a REPL environment.

In addition, if anyone is interested in collaborating to improve and extend it, please contact me. There are many possibilities for future work, such as adding modular arithmetic for fast polynomial computations, complex numbers, SIMD instructions, and other instruction sets.

Thanks,

-- Shahriar

Oscar Benjamin

Feb 19, 2025, 5:41:54 PM
to sy...@googlegroups.com
Hi Shahriar,

The symjit package sounds very interesting. I will have to take a look at it.

I'm not sure what the list of packages you are referring to is.
Presumably a PR to the website can add this?

https://github.com/sympy/sympy.github.com

Oscar
> --
> You received this message because you are subscribed to the Google Groups "sympy" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to sympy+un...@googlegroups.com.
> To view this discussion visit https://groups.google.com/d/msgid/sympy/7468597d-a73a-4aa6-ae0c-c0dd04105cb5n%40googlegroups.com.

Jason Moore

Feb 19, 2025, 11:05:24 PM
to sy...@googlegroups.com
Yes, this looks interesting, especially that you chose a sane function name, "compile_func". We should have named lambdify that.

Jason


Shahriar Iravanian

Feb 20, 2025, 6:48:36 AM
to sy...@googlegroups.com
Thanks a lot. Yes, I meant https://www.sympy.org/en/index.html. I will send a PR.

Regarding the name, I was thinking about a variation of lambdify but couldn't come up with one, so I went with compile_func. 

-- Shahriar


Jason Moore

Feb 20, 2025, 7:50:55 AM
to sy...@googlegroups.com
Dear Shahriar,

I opened a PR to package symjit for conda here: https://github.com/conda-forge/staged-recipes/pull/29211

I've never tried building any Rust packages. There are a couple of issues, but maybe they have to do with conda-forge; I'm not sure. If you have any tips, you can comment there.

I much prefer the name "compile_func" over lambdify.

Jason


Shahriar Iravanian

Feb 20, 2025, 9:21:20 AM
to sy...@googlegroups.com
Hi,

I’m using setuptools. I should try conda next.

Currently, it comes with binaries for Windows, Linux x86-64 (built on Ubuntu), and Raspbian Linux (AArch64). No Mac yet. I will try to compile it on a Mac.

In the long run, it might be easier to rewrite it in pure Python, with hardware dependencies confined to the mmap module.
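The mmap idea is workable in pure Python today. Here is a minimal sketch (not symjit's actual code, and Linux/x86-64 specific): map an anonymous executable page, write raw machine code into it, and call it through ctypes.

```python
import ctypes
import mmap

# x86-64 machine code for: mov eax, 42; ret
code = bytes([0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3])

# Anonymous read/write/execute mapping (one page). PROT_* flags are
# Unix-only; Windows would need a different path.
buf = mmap.mmap(-1, mmap.PAGESIZE,
                prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
buf.write(code)

# Turn the mapped address into a Python-callable function returning a C int.
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
func = ctypes.CFUNCTYPE(ctypes.c_int)(addr)
print(func())  # 42
```

A real backend would of course emit the function body from an expression tree instead of hard-coding the bytes, and would need per-OS handling of executable memory (e.g. W^X policies on some systems).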

Shahriar





Jason Moore

Feb 20, 2025, 10:35:48 AM
to sy...@googlegroups.com
Hi,

The way conda-forge works is that it starts from a source distribution and compiles the code using their toolchain (they have a Rust toolchain). This generates conda binaries, not wheels.

So, to get it to build on conda-forge we have to debug any Rust compilation or Rust/setuptools/Python issues. Right now it does not compile for Linux or Mac; it does compile on Windows, but it doesn't seem to run the test file, so something is not quite right. If you have tips to fix the build issues, you can post them to that PR and we can eventually get it built for distribution on conda-forge.

Jason


Shahriar Iravanian

Feb 20, 2025, 7:24:04 PM
to sy...@googlegroups.com
Hi Jason,

Thanks for your help. I got conda-forge working. You can install symjit on Windows and Linux (Mac will be coming soon). 

Here is the meta.yaml that works:

```
{% set name = "symjit" %}
{% set version = "1.2.1" %}

package:
  name: {{ name|lower }}
  version: {{ version }}

source:
  url: https://pypi.org/packages/source/{{ name[0] }}/{{ name }}/symjit-{{ version }}.tar.gz
  sha256: 8398ceb11d557c5b5bdce4e7d358b741d2d5278d1e374588f5bd6bbb5a581ebd

build:
  noarch: python
  script: {{ PYTHON }} -m pip install . -vv --no-deps --no-build-isolation
  number: 0

requirements:
  build:    
    - {{ compiler('rust') }}
    - {{ compiler('c') }}
    - {{ stdlib('c') }}
  host:
    - python >=3.7
    - setuptools
    - setuptools-rust
    - pip
  run:
    - python >=3.7  
    - numpy
    - sympy
    - libgcc
   
test:
  imports:
    - symjit
  commands:
    - pip check
  requires:
    - pip

about:
  home: https://github.com/siravan/symjit
  summary: a light-weight jit for sympy expressions
  license: MIT
  license_file: LICENSE

extra:
  recipe-maintainers:
    - shahriariravanian
```

Jason Moore

Feb 20, 2025, 10:54:31 PM
to sy...@googlegroups.com
Dear Shahriar,

We have already debugged everything and merged the PR to conda forge:


You can install symjit with:

conda install -c conda-forge symjit

Enjoy!

Jason


Shahriar Iravanian

Apr 11, 2025, 3:49:52 PM
to sy...@googlegroups.com
The latest version of symjit (1.5.0) has just been published. By now, the Rust backend has stabilized and generates code on Linux/Darwin/Windows, for both x86-64 and ARM64 machines.

Symjit also has a new plain Python backend, which depends only on the Python standard library and numpy (the numpy dependency is not strictly necessary) but can still generate and run machine-code routines. Currently, the Python backend is used as a fallback in cases where the compiled Rust code is unavailable. However, it already works very well, with minimal performance loss compared to the Rust backend.

I would like your suggestions and recommendations about the next steps. I hope to add features that align with the maintainers' goals for sympy. Some possibilities:

1. Expanding on the current focus on numerical computation and numpy/scipy/matplotlib inter-operability, for example, adding other data types besides double (single floats, complex numbers...).

2. Fast polynomial evaluation, not only for floating-point types but also over Z, Zp, and Q. The Python-only backend can be tightly coupled to the polynomial subsystem. I don't know how useful such a fast polynomial evaluation function would be, but, for example, it may help in the combinatorial phase of the Zassenhaus algorithm. On the other hand, it seems that sympy is pivoting toward Flint for many such computations.

3. A different area would be the Satisfiability module, where writing a fast SAT/SMT solver, with or without interfacing with Z3 or other solvers, is possible.

Thanks,

Shahriar Iravanian




peter.st...@gmail.com

Apr 12, 2025, 1:24:21 AM
to sy...@googlegroups.com

Dear Shahriar,

 

If I understand correctly, symjit is similar to sympy.lambdify(...) but faster?

 

I found this on Anaconda’s website:

So I could install it like this and it will install the dependencies? I have Windows.

 

Thanks a lot!

 

Peter

[attachment: image001.png]

Shahriar Iravanian

Apr 12, 2025, 7:27:37 AM
to sy...@googlegroups.com
That's right. This, or equivalently, `conda install -c conda-forge symjit`, should work on Windows, Linux, and macOS. Even `python -m pip install symjit` may work, but the conda route is preferable. However, there are always corner cases; please let me know if you have any problems.

It has three main exported functions: `compile_func` is similar to lambdify, while `compile_ode` and `compile_jac` generate functions to pass to scipy's ODE solvers.

-- Shahriar



Oscar Benjamin

Apr 12, 2025, 9:02:06 AM
to sy...@googlegroups.com
On Fri, 11 Apr 2025 at 20:49, Shahriar Iravanian <irvani...@gmail.com> wrote:
>
> The latest version of symjit (1.5.0) has just been published. By now, the Rust backend is stabilized and generates code on Linux/Darwin/Windows and x86-64 and arm64 machines.

Wow this is amazing. I have been thinking for a long time that exactly
this is needed for precisely the reasons you show in the README but I
was working under the assumption that it would need something like
llvmlite. In protosym I added a lambdify function based on llvmlite
but symjit is just as fast without the massive llvmlite dependency and
can even be pure Python so super-portable.

I am amazed at how simple the symjit code seems to be for what it
achieves. Maybe these things are not as complicated as they seem if
you are someone who just knows how to write machine code...

I have a prototype of how I wanted this to work for sympy in protosym:

https://github.com/oscarbenjamin/protosym

For comparison this is how protosym does it:

# pip install protosym llvmlite
from protosym.simplecas import x, y, cos, sin, lambdify, Matrix, Expr

e = x**2 + x
for _ in range(10):
    e = e**2 + e
ed = e.diff(x)

f = lambdify([x], ed)
print(f(.0001))

The expression here is converted to LLVM IR and compiled with
llvmlite. I'll show a simpler expression as a demonstration:

In [9]: print((sin(x)**2 + cos(x)).to_llvm_ir([x]))

; ModuleID = "mod1"
target triple = "unknown-unknown-unknown"
target datalayout = ""

declare double @llvm.pow.f64(double %Val1, double %Val2)
declare double @llvm.sin.f64(double %Val)
declare double @llvm.cos.f64(double %Val)

define double @"jit_func1"(double %"x")
{
%".0" = call double @llvm.sin.f64(double %"x")
%".1" = call double @llvm.pow.f64(double %".0", double 0x4000000000000000)
%".2" = call double @llvm.cos.f64(double %"x")
%".3" = fadd double %".1", %".2"
ret double %".3"
}

For the particular benchmark ed shown above protosym is faster both at
compilation and evaluation:

This is protosym:

In [3]: %time f(0.001)
CPU times: user 37 μs, sys: 4 μs, total: 41 μs
Wall time: 51 μs
Out[3]: 1.0223342283660657

In [4]: %timeit f(0.001)
657 ns ± 18.6 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)

This is the equivalent with symjit's compile_func:

In [3]: %time f(0.001)
CPU times: user 306 μs, sys: 8 μs, total: 314 μs
Wall time: 257 μs
Out[3]: array([0.00100401])

In [4]: %timeit f(0.001)
25.1 μs ± 148 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)

I think the reason for the speed difference here is that protosym first
converts the expression into a forward graph, much like sympy's cse
function, which handles all the repeating subexpressions efficiently. I
think symjit generates the code recursively without handling the
repeating subexpressions. Also, the number from symjit here is
incorrect, as confirmed by using exact rational numbers:

In [6]: ed.subs({x: Rational(0.001)}).evalf()
Out[6]: 1.02233422836607

I'm not sure whether that difference is due to the forward graph being
more numerically accurate or to a bug in symjit.
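The forward-graph idea corresponds to sympy's cse function mentioned above. A small standalone illustration (using N=3 rather than 10 to keep the output short, and independent of symjit):

```python
from sympy import symbols, cse

x = symbols('x')
e = x**2 + x
for _ in range(3):
    e = e**2 + e

# cse factors out repeated subexpressions so each is computed only once.
replacements, (reduced,) = cse(e)
for sym, sub in replacements:
    print(sym, '=', sub)
print('reduced:', reduced)
```

Each replacement symbol names a shared subexpression, so the reduced form grows linearly with N even though the fully expanded tree grows exponentially.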

> Symjit also has a new plain Python-based backend, which depends only on the Python standard library and numpy (the numpy dependency is not strictly necessary) but can generate and run machine code routines. Currently, the Python backend is used as a backup in cases where the compiled Rust code is unavailable. However, it already works very well with a minimum performance drop compared to the Rust backend.

Possibly the most useful thing to do is to publish this as two
separate packages, like symjit and symjit-rust. Then anyone can pip
install symjit for any Python version without needing binaries on PyPI
or a Rust toolchain installed locally. The symjit-rust backend can be
an optional dependency that makes things faster if installed. It would
also be possible to have symjit depend conditionally on symjit-rust,
but only on platforms where a binary is provided on PyPI. That way no
one ever ends up doing pip install symjit and having it fail due to a
missing Rust toolchain.

> I would like to have your suggestions and recommendations about the next steps. I hope to add features that align with the maintainers' goals for sympy. Some possibilities:
>
> 1. Expanding on the current focus on numerical computation and numpy/scipy/matplotlib inter-operability, for example, adding other data types besides double (single floats, complex numbers...).

I don't know how difficult it is to do these things but generally yes
those would be useful.

> 2. Fast polynomial evaluation, not only for floating point types, but also over Z, Zp, and Q. The Python-only backend can be tightly coupled to the polynomial subsystem. However, I don't know how useful having such a fast polynomial evaluation function is, but, for example, it may be useful in the combinatorial phase of the Zassenhaus algorithm. On the other hand, it seems that sympy pivots toward using Flint for many such computations.

Generally SymPy is going to use FLINT for these things, but FLINT is
only an optional dependency. Some downstream users may prefer not to
use FLINT because it has a different license (LGPL), whereas symjit has
the MIT license, which pairs better with SymPy's BSD license.

If symjit provided more general capability to just generate machine
code then I am sure that SymPy could make use of it for many of these
things. It would probably make more sense for the code that implements
those things to be in SymPy itself though with symjit as an optional
dependency that provides the code generation.

> 3. A different area would be the Satisfiability module, where writing fast SAT/SMT solver, with or without interfacing with Z3 or other solvers, is possible.

That would also be great but again I wonder if it makes sense to
include such specific things in symjit itself.

I think that what you have made here in symjit is something that
people will want to use more broadly than SymPy. Maybe the most useful
thing would be for symjit to focus on the core code generation and
execution as a primitive that other libraries can build on. In other
words the ideal thing here would be that symjit provides a general
interface so that e.g. sympy's lambdify function could use symjit to
generate the code that it wants rather than symjit providing a
compile_func function directly.

One downside of compile_func is precisely the fact that its input has
to be a sympy expression and just creating sympy expressions is slow.
This is something that we want to improve in sympy but realistically
the way to improve that is by using other types/representations like
symengine or protosym etc. I have some ideas for building new
representations of expressions so that many internal parts of sympy
could use those instead of the current slow expressions. Unfortunately
it is not going to be possible to make the user-facing sympy
expressions much faster unless at some point there is a significant
break in compatibility.

The ideal thing here would be for symjit to provide an interface that
can be used to generate the code without needing a SymPy expression as
input. For example how would protosym use symjit without needing to
create a SymPy expression?

I think that the reason that protosym is faster for the benchmark
shown above is because of the forward graph and so symjit could use
the same idea. What might be best though is to leave that sort of
thing for other libraries that would build on symjit and for symjit to
focus on being very good at providing comprehensive code generation
capabilities. I think that many Python libraries would want to build
on symjit to do all sorts of things because being able to generate
machine code directly like this is better in many ways than existing
approaches like llvmlite, numba, numexpr etc.

The thing that is nice about generating the LLVM IR as compared to
generating machine code directly is that it gives you unlimited
registers but then LLVM figures out how to use a finite number of
registers on the backend. This makes the IR particularly suitable for
dumping in the forward graph without needing to think about different
architectures. Can symjit's machine code builders achieve the same
sort of thing? It's not clear to me exactly how the registers are
being managed.

There is one important architecture for SymPy that symjit does not yet
generate code for which is wasm e.g. so you can run it in the browser:

https://live.sympy.org/

I don't know whether this sort of thing is even possible in wasm
though with its different memory safety rules.

Does symjit work with PyPy or GraalPython or can it only be for CPython?

--
Oscar

Peter Stahlecker

Apr 12, 2025, 9:52:25 AM
to sy...@googlegroups.com
Thanks a lot!
I will try to install it and if I run into trouble I'll take the liberty to contact you again.

Best regards,

Peter


Isuru Fernando

Apr 12, 2025, 11:32:37 AM
to sy...@googlegroups.com
Hey Oscar,

SymEngine's lambdify can be used too. It uses numpy arrays to support broadcasting
and other features of sympy.

from symengine import *
x = symbols('x')

e = x**2 + x
for _ in range(10):
    e = e**2 + e
ed = e.diff(x)

f = lambdify([x], [ed])
print(f(.0001))

To avoid a bit of overhead,

import numpy as np
from symengine.lib.symengine_wrapper import LLVMDouble
a = np.array([0.0001])
b = a.copy()
f = LLVMDouble([x], ed, cse=True, opt_level=3)
f.unsafe_real(a, b)

The last one gives me

In [43]: %time f.unsafe_real(a, b)
CPU times: user 25 µs, sys: 2 µs, total: 27 µs
Wall time: 30.5 µs

In [49]: %timeit f.unsafe_real(a, b)
470 ns ± 170 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)

compared to protosym's

In [45]: %time f(.0001)
CPU times: user 0 ns, sys: 44 µs, total: 44 µs
Wall time: 47.7 µs

In [47]: %timeit f(.0001)
399 ns ± 131 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)

SymEngine is a bit slower than protosym due to using memoryviews, but
we can add an interface to avoid those.

Regards,
Isuru




Oscar Benjamin

Apr 12, 2025, 12:08:16 PM
to sy...@googlegroups.com
On Sat, 12 Apr 2025 at 16:32, Isuru Fernando <isu...@gmail.com> wrote:
>
> SymEngine is a bit slower than protosym due to using memoryviews, but
> we can add an interface to avoid those.

I'm sure it can be made faster. To be clear for anyone reading this:
both SymEngine and protosym are using LLVM for this. I could also have
demonstrated SymPy's own LLVM printing module.

It could be possible for SymPy, SymEngine etc to use symjit instead of
LLVM. I see potential advantages here given that symjit is a very
light-weight install (1MB or more like 200KB without the Rust binary)
and could be a pure Python package that is easily installed. I can
imagine lots of libraries wanting to use this. Despite the name,
llvmlite takes up 128 MB here and is much harder to build.

My main point though is that it would be better if symjit can be used
starting from something other than a SymPy expression because just
creating a SymPy expression is slow and also SymPy is a beefy
dependency, slow to import etc, compared to symjit which is small and
lean.

Isuru, would you consider using symjit in SymEngine?

--
Oscar

Shahriar Iravanian

Apr 12, 2025, 1:01:19 PM
to sy...@googlegroups.com
Hi Oscar,

Thank you very much for your response. I appreciate it. There is a lot to think about. 

Regarding the example, this is a tough test! It shows that there is a bug in the x86-64 Rust backend. Interestingly, the Python backend and the Rust one on ARM64 (macOS) give the correct answer:

e = x**2 + x
for _ in range(10):
    e = e**2 + e
ed = e.diff(x)
f = compile_func([x], [ed], backend='python')

In [23]: %time f(0.001)
CPU times: user 94 μs, sys: 9 μs, total: 103 μs
Wall time: 105 μs
Out[23]: array([1.02233423])

In the short term, my goal is to work on correctness. I appreciate this example and similar ones that stress test the code generator. 

However, the bigger question is where symjit fits in the Python/sympy ecosystem. It is lightweight because sympy expressions act as an intermediate representation. LLVM and other compilers do a lot of work to recreate the control-flow graph (first as a tree and later as a DAG) from a linear sequence of instructions. Symjit doesn't do this because it starts from a tree representation. Of course, as you mentioned, the downside is that generating sympy expressions can be computationally expensive.

I don't know what the right abstraction is. Symjit already converts sympy expressions into its internal tree structure (with nodes like Unary and Binary). We could expose this structure to the users. Moreover, it is possible to augment the tree structure by adding loops, aggregate functions, and various functional accessories to allow for more complex programs. However, this interface will be key and must be designed carefully.

Register allocation is a critical part of the code generator. It is tricky because it depends on the exact ABI (which registers are used to pass arguments, which are caller-saved and which are callee-saved, the stack frame...), which differs across architectures and, of course, between Windows and Linux even on the same processor architecture. At least macOS follows the *nix systems. Symjit uses a simple algorithm for register allocation: it generates code like a stack machine, but then shadows some stack slots with scratch registers when possible. LLVM uses a much more elaborate register-allocation algorithm, but for simple use cases like this one the results are not that different.
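The stack-machine-with-scratch-registers scheme can be illustrated with a toy lowering pass. This is purely illustrative: the instruction syntax, register names, and slot layout are invented for the sketch and are not symjit's actual output.

```python
SCRATCH = ["r0", "r1", "r2", "r3"]          # hypothetical scratch registers

def slot(i):
    """Stack slot i lives in a scratch register while the pool lasts,
    otherwise in a real memory slot in the stack frame."""
    if i < len(SCRATCH):
        return SCRATCH[i]
    return f"[sp+{8 * (i - len(SCRATCH))}]"

def codegen(e, depth=0, out=None):
    """Lower a tuple expression tree ('op', lhs, rhs) to stack-machine
    style three-address code; leaves are variable names or constants."""
    if out is None:
        out = []
    if isinstance(e, (int, float)):
        out.append(f"load {slot(depth)}, #{e}")
    elif isinstance(e, str):                  # a variable name
        out.append(f"load {slot(depth)}, {e}")
    else:
        op, lhs, rhs = e
        codegen(lhs, depth, out)              # result lands in slot(depth)
        codegen(rhs, depth + 1, out)          # result lands in slot(depth + 1)
        out.append(f"{op} {slot(depth)}, {slot(depth)}, {slot(depth + 1)}")
    return out

# x + x*x
for instr in codegen(("add", "x", ("mul", "x", "x"))):
    print(instr)
```

Evaluation depth maps directly to a slot index, so shallow expressions never touch memory; only when the depth exceeds the scratch pool do values spill to the stack frame, which matches the "shadow stack slots with registers" description.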

Thanks again,

Shahriar


Isuru Fernando

Apr 12, 2025, 2:47:40 PM
to sy...@googlegroups.com
Hi Oscar,

Yes, we can add symjit as another backend if it offers a C/C++ API.
We also have a pure C/C++ backend, just replace `LLVMDouble` by `LambdaDouble`.

Isuru


Oscar Benjamin

Apr 12, 2025, 3:59:46 PM
to sy...@googlegroups.com
On Sat, 12 Apr 2025 at 18:01, Shahriar Iravanian <irvani...@gmail.com> wrote:
>
> Regarding the example, this is a tough test!

It is, and while in some ways it is not realistic, in other ways it
is. A common case will certainly be small expressions, e.g. for
simple ODEs as you show in the README. Another case though is large
uncanonicalised expressions that result from things like solving a
system of linear equations or differentiating large expressions etc.
In these cases it is very common that large expressions will have many
repeating subexpressions and it is important to have an implementation
that can handle that.

In my example if N is the number of iterations in the loop then the
tree for e grows exponentially like O(2^N) whereas the DAG grows
linearly i.e. O(N). The differentiation to make ed at the end explodes
this even further. You can see the sizes of these from protosym for
N=10:

In [5]: e.count_ops_graph()
Out[5]: 24

In [6]: e.count_ops_tree()
Out[6]: 8189

In [7]: ed.count_ops_graph()
Out[7]: 68

In [8]: ed.count_ops_tree()
Out[8]: 55291

Those numbers correspond to the number of instructions in the IR if
repeating subexpressions are handled or not. For ed the difference is
a factor of 1000 but if you increase N it can be arbitrarily large.

Usually more realistic expressions will not be as extreme as this but
this sort of effect is very often there so if we want to evaluate
large expressions numerically it is generally important to take it
into account somehow.
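The blow-up can be reproduced with plain Python tuples by counting operations with and without sharing. The node names and the counting convention here are illustrative, so the totals differ slightly from protosym's count_ops numbers, but the growth rates are the same:

```python
x = ('sym', 'x')

def build(n):
    """e = x**2 + x, then e = e**2 + e repeated n times (subtrees shared)."""
    e = ('add', ('pow', x, 2), x)
    for _ in range(n):
        e = ('add', ('pow', e, 2), e)
    return e

def tree_ops(e):
    """Operation count if every repeated subexpression is re-expanded."""
    if not isinstance(e, tuple) or e[0] == 'sym':
        return 0
    return 1 + sum(tree_ops(c) for c in e[1:])

def graph_ops(e, seen=None):
    """Operation count when shared subexpressions are evaluated once."""
    if seen is None:
        seen = set()
    if not isinstance(e, tuple) or e[0] == 'sym' or id(e) in seen:
        return 0
    seen.add(id(e))
    return 1 + sum(graph_ops(c, seen) for c in e[1:])

e10 = build(10)
print(tree_ops(e10), graph_ops(e10))   # exponential vs. linear growth
```

Each iteration doubles the tree count (roughly O(2^N)) but adds only two new nodes to the graph count (O(N)), which is exactly the gap between recursive codegen and a forward graph.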

> It shows that there is a bug in the x64-86 rust backend. Interestingly, the Python backend and the rust one on ARM64 (MacOS) give the correct answer:
>
> e = x**2 + x
> for _ in range(10):
>     e = e**2 + e
> ed = e.diff(x)
> f = compile_func([x], [ed], backend='python')
>
> In [23]: %time f(0.001)
> CPU times: user 94 μs, sys: 9 μs, total: 103 μs
> Wall time: 105 μs
> Out[23]: array([1.02233423])

Sorry I should have said that I was testing on Linux x86-64. I confirm
that the Python backend gives the correct answer:

In [7]: from symjit import compile_func

In [8]: f = compile_func([x], [ed], backend='python')

In [9]: f(0.001)
Out[9]: array([1.02233423])

In [10]: f = compile_func([x], [ed])
In [11]: f(0.001)
Out[11]: array([0.00100401])

> However, the bigger question is where symjit fits in the Python/sympy ecosystem. It is lightweight because sympy expressions act as an intermediate representation. LLVM and other compilers do a lot of work to recreate the control flow graph (first as a tree and later on as a DAG) from a linear sequence of instructions. Symjit doesn't do this because it starts from a tree representation. Of course, as you mentioned, the downside is that generating sympy expressions can be computationally expensive. I don't know what the right abstraction is. Symjit already converts sympy expressions into its internal tree structure (with nodes like Unary and Binary). We could expose this structure to the users. Moreover, it is possible to augment the tree structure by adding loops, aggregate functions, and various functional accessories to allow for more complex programs. However, this interface will be the key and must be designed carefully.

Is it possible to have an interface that builds instructions one at a
time? You could imagine something like we have the expression
sin(x**2) + x**2 and we can compile this with something like:

from symjit import FuncBuilder

B, [x] = FuncBuilder('x')
a = B.pow(x, 2)
b = B.sin(a)
c = B.add(a, b)
f = B.compile(c)

print(f(1.0))

In my mind each step here appends a new operation with a new
destination operand. The intermediate variables a, b and c refer to
the output of a particular step rather than being trees. My hope would
be to get something equivalent to this LLVM IR which reuses %".0" to
avoid computing x**2 twice:

In [13]: print((sin(x**2) + x**2).to_llvm_ir([x]))
...
define double @"jit_func1"(double %"x")
{
%".0" = call double @llvm.pow.f64(double %"x", double 0x4000000000000000)
%".1" = call double @llvm.sin.f64(double %".0")
%".2" = fadd double %".1", %".0"
ret double %".2"
}

--
Oscar

Shahriar Iravanian

Apr 13, 2025, 10:04:57 AM
to sy...@googlegroups.com
Hi Oscar,

I wrote a light wrapper around the symjit Python backend. It can be installed using `pip install funcbuilder`. The only dependency is numpy. The GitHub repo is https://github.com/siravan/funcbuilder.

It can compile your example:

In [1]: from funcbuilder import FuncBuilder
In [2]: B, [x] = FuncBuilder('x')
In [3]: a = B.pow(x, 2)
In [4]: b = B.sin(a)
In [5]: c = B.add(a, b)
In [6]: f = B.compile(c)
In [7]: f(1.0)
Out[7]: 1.8414709848078965

The builder exports the following construction functions:

add, sub, mul, div, pow, exp, log, sqrt, square, cube, recip,
sin, cos, tan, sinh, cosh, tanh, asin, acos, atan, asinh, acosh, atanh,
lt, leq, gt, geq, eq, neq, logical_and, logical_or, logical_xor, ifelse

compile_func is also available and works as before. 

The bug you mentioned yesterday is fixed in the upcoming version of symjit (1.5.1). 

Thanks,

Shahriar








Shahriar Iravanian

Apr 13, 2025, 10:17:38 AM
to sympy
Hi Isuru,

The Rust backend has a C API for communication with Python. However, it expects the whole model as a JSON string, which may not be the most convenient form for SymEngine. I think we need to design a better API, one that is useful to SymEngine.

Thanks,

Shahriar

Oscar Benjamin

Apr 13, 2025, 11:56:14 AM
to sy...@googlegroups.com
On Sun, 13 Apr 2025 at 15:04, Shahriar Iravanian <irvani...@gmail.com> wrote:
>
> Hi Oscar,
>
> I wrote a light wrapper around the symjit python backend. It can be installed using `pip install funcbuilder`. The only dependency is numpy. The GitHub repo is https://github.com/siravan/funcbuilder.

Thanks Shahriar. I will try adding it to protosym to do some testing.

--
Oscar

Shahriar Iravanian

Jul 20, 2025, 9:42:44 AM
to sympy
The latest version of symjit (v2.3.0) is now available (with thanks to Jason). You can install it with `conda install -c conda-forge symjit`.

In addition to being more stable, Symjit should be significantly faster, especially for certain workloads relevant to SymPy. The main improvements are:

1. Better codegen, with an optimized register allocator and fewer memory-copy operations.
2. Multi-threading is enabled on all platforms by default. Note that this happens at the level of the Rust code and is neither visible to Python nor affected by the GIL. Multi-threading can significantly improve the performance of parallel workloads when the compiled function is called on vectors.
3. Symjit now recognizes and emits specialized code for exponentiation to an integer power, with or without a mod operation. This is especially important when evaluating polynomials, where we observe a speed-up of more than 100x for high-degree polynomial evaluation.
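The integer-power specialization in item 3 is presumably along the lines of the standard square-and-multiply technique; the exact generated code is not shown in the thread, so here is a plain-Python sketch of the idea, with the optional modular reduction relevant to polynomial evaluation over Zp:

```python
def pow_int(base, n, mod=None):
    """Square-and-multiply exponentiation: O(log n) multiplications
    for base**n, optionally reducing mod m at every step so the
    intermediate values stay small."""
    result = 1
    if mod is not None:
        base %= mod
    while n > 0:
        if n & 1:                 # this bit is set: fold base into result
            result *= base
            if mod is not None:
                result %= mod
        base *= base              # square for the next bit
        if mod is not None:
            base %= mod
        n >>= 1
    return result

print(pow_int(3, 13), pow_int(3, 13, 7))  # matches 3**13 and pow(3, 13, 7)
```

Compared with a naive loop of n-1 multiplications, the logarithmic multiply count is where the large speed-ups for high-degree polynomial evaluation come from.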

Planned additions for the near future:

1. RISC-V support.
2. Common sub-expression elimination. 

Comments and recommendations are welcome, especially if there is a need for a specific feature.

Thanks,

Shahriar
