Assumptions modules


Davide Sandona'

Jun 6, 2020, 10:18:28 AM
to sympy
Until a couple of days ago, I had always used the old assumptions module defined in sympy.core.assumptions, where we create symbols with assumptions and the expression then infers the values of its own assumptions. For example, I know that with this old module it's not possible to create a symbol x assuming x > y.

Now I'm exploring the new assumptions module, defined in sympy.assumptions. Looking at the documentation, at questions on Stack Overflow and in this mailing list, I understand that this module should be more powerful. I also read this interesting thread [1] from 2016 to understand how the two modules interact with each other. Back then, the two modules were separate entities. Also, judging by questions on Stack Overflow, people say the new module is not very well integrated with the rest of SymPy.

What I understand so far is that the new module is great for asking queries about expressions and their assumptions, and good for refining certain kinds of expressions, but it is not really suited to performing computations under given assumptions. Let me explain with an example:

x = symbols("x")
a = symbols("a")
exp(a * x).integrate(x)

This is going to return a piecewise result, with one branch for a = 0 and another for a != 0. Suppose I use the old assumptions system instead:

x = symbols("x")
a = symbols("a", nonzero=True)
exp(a * x).integrate(x)

This is going to produce a single expression, exp(a*x)/a.

Is there any way to produce the same single-expression result by using the new assumption module? I tried with this code:

x = symbols("x")
a = symbols("a")
with assuming(Q.nonzero(a)):
    r = exp(a * x).integrate(x)
r

However, it returns the piecewise result, so the assumption was not applied.

In light of this, my questions are:
  1. Are the two modules interacting together now?
  2. Is the new module able to do everything the old module does?
  3. What are the limitations of the new module? Where is it not implemented?
Thank you for your time,

Davide.

Oscar Benjamin

Jun 7, 2020, 6:08:33 PM
to sympy
Hi Davide,

I'll answer your questions one by one and then summarise the situation below.

1. Are the two modules interacting together now?

The old and new assumptions interact in one direction: the new
assumptions will query the old assumptions if it helps but the old
assumptions are ignorant of the new. The rest of sympy does not really
use the new assumptions though so any assumptions stated using e.g.
"assuming" will be ignored by almost all of the code in sympy (e.g.
integrate, solve etc).
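
To make the asymmetry concrete, a minimal sketch (outputs from memory, but this is the behaviour I'd expect):

>>> from sympy import Symbol, ask, Q, assuming
>>> n = Symbol('n', positive=True)   # old-style assumption attached to the symbol
>>> ask(Q.positive(n))               # the new system queries the old one
True
>>> x = Symbol('x')
>>> with assuming(Q.positive(x)):
...     print(x.is_positive)         # the old system never sees the new assumption
None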

2. Is the new module able to do everything the old module does?

Since the new assumptions query the old they can make use of any
information that is available in the old assumptions. In that sense a
query using "ask" from the new assumptions API can answer all the same
questions that the old assumptions API can. The new assumptions can
also be used for a wider range of queries for example queries about
matrices. The old assumptions are really just about numbers and can
only answer questions like "is this a positive number".
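
For instance, a matrix query like this has no counterpart in the old assumptions at all (small sketch using the Q.orthogonal and Q.invertible predicates):

>>> from sympy import MatrixSymbol, ask, Q
>>> X = MatrixSymbol('X', 3, 3)
>>> ask(Q.invertible(X), Q.orthogonal(X))
True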

3. What are the limitations of the new module? Where is it not implemented?

The new assumptions were made a lot faster last Summer. Their slowness
was one major limitation before. They are still slow in comparison to
the old assumptions. The real limitation is the fact that the old
assumptions are used all across the codebase in both elementary
evaluations and in more complex routines. A huge amount of code would
need to be changed in order to make it so that general routines like
"integrate" use the new assumptions.

I guess the original plan was for the new assumptions to replace the old as a drop-in replacement: the old assumptions API would still be there, but it would call into the new assumptions in order to resolve its queries. That's not possible though, because the new assumptions actually depend on the old. There is also a huge amount of work invested in making the old assumptions work the way that they do, so replicating that in the new assumptions, so that they can stand on their own without the old, is not a small task.

My main problem with the idea that the new assumptions can replace the
old is about the internal organisation of the code base itself. The
old assumptions are used as part of the internals of evaluating an
expression such as "cos(n*pi)". To me that makes sense because
evaluation creates Basic instances and the old assumptions sit on a
"level below" symbolic evaluation and (mostly) do not create any Basic
instances. The new assumptions API uses Basic instances even in order
to express a query. That means that using the new assumptions as part
of evaluation creates an inherent circularity and it also explains why
the new assumptions are so slow (or at least limits how much faster
they can get).

There are other problems with the new assumptions such as the idea
that we can have a global flag as the "with assuming(...)" API
implies. The "assuming" context manager seems like a fundamentally bad
idea to me because the intention is that it will bring in a global
flag that will alter the behaviour of every other part of the codebase.
That means that it breaks thread-safety, that the cache would need to
be cleared every time anything is assumed, that code is unpredictable
because its behaviour can be manipulated from outside, and many more
problems. Basically the assuming API implies a massive and very
complicated global variable, and those are not usually good.

If we want to use the new assumptions properly as part of lots of
different functions around the codebase then the clean way is for most
APIs to grow an assumptions keyword e.g.:

sol = solve(a*x - b, x, assumptions=Q.nonzero(a))

Adding the assumptions keyword across all APIs and making use of it in
the relevant functions is a lot of work but nowhere near as much as
rewriting the old assumptions as the new. Then the new assumptions can
be used judiciously at key places in key APIs so they won't cause a
slowdown like they would if they were used as part of every elementary
evaluation.

The future I see for the old and new assumptions is that they will
coexist. The old assumptions will be there for simple cases and will
be used internally as part of evaluation. The new assumptions will be
able to express more complex queries but will not be used
automatically although some higher level APIs like integrate/solve
could use the new assumptions for simplification or refinement. That
gives us a clear path forward that can be implemented incrementally.

So to summarise the situation:

1. The new assumptions are basically not used anywhere in sympy.
2. You should not expect them to alter the behaviour of evaluation or
of higher-level APIs like integrate.
3. You can try to refine the results from integrate yourself using
"refine" but the "assuming" context manager basically has no effect
because none of the code inside integrate uses "ask".
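
For example, refine does handle simple things under an assumption:

>>> from sympy import Symbol, sqrt, refine, Q
>>> x = Symbol('x')
>>> refine(sqrt(x**2), Q.positive(x))
x

Whether it can actually collapse the Piecewise that integrate returns in your case is something you'd have to try; I wouldn't promise it.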

Oscar

Davide Sandona'

Jun 11, 2020, 5:56:35 AM
to sy...@googlegroups.com
Thank you Oscar for this really exhaustive answer! You clarified all my doubts.

Davide.


Aaron Meurer

Jun 12, 2020, 7:22:50 PM
to sympy
On Sun, Jun 7, 2020 at 4:08 PM Oscar Benjamin
<oscar.j....@gmail.com> wrote:
>
> Hi Davide,
>
> I'll answer your questions one by one and then summarise the situation below.
>
> 1. Are the two modules interacting together now?
>
> The old and new assumptions interact in one direction: the new
> assumptions will query the old assumptions if it helps but the old
> assumptions are ignorant of the new. The rest of sympy does not really
> use the new assumptions though so any assumptions stated using e.g.
> "assuming" will be ignored by almost all of the code in sympy (e.g.
> integrate, solve etc).
>
> 2. Is the new module able to do everything the old module does?
>
> Since the new assumptions query the old they can make use of any
> information that is available in the old assumptions. In that sense a
> query using "ask" from the new assumptions API can answer all the same
> questions that the old assumptions API can. The new assumptions can
> also be used for a wider range of queries for example queries about
> matrices. The old assumptions are really just about numbers and can
> only answer questions like "is this a positive number".

The real benefit of the new assumptions is that assumptions can be
made on expressions, not just symbols. So it is now possible to say
something like Q.positive(x - y). You can also use arbitrary logical
predicates, like Or(Q.integer(x), Q.integer(y)).
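
A small sketch of what that looks like in practice (outputs from memory; both should be resolved by the logical inference layer):

>>> from sympy import symbols, ask, Q, Or
>>> x, y = symbols('x y')
>>> ask(Q.real(x - y), Q.positive(x - y))
True
>>> ask(Q.real(x), Or(Q.integer(x), Q.rational(x)))
True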

>
> 3. What are the limitation of the new module? Where is it not implemented?
>
> The new assumptions were made a lot faster last Summer. Their slowness
> was one major limitation before. They are still slow in comparison to
> the old assumptions. The real limitation is the fact that the old
> assumptions are used all across the codebase in both elementary
> evaluations and in more complex routines. A huge amount of code would
> need to be changed in order to make it so that general routines like
> "integrate" use the new assumptions.

The biggest thing that is missing is the other direction, to make
expr.is_integer check ask(Q.integer(expr)). With the improved
performance, this may now be possible to do, although I suspect more
work is still required to make it so we don't slow down SymPy. For
instance, IMO we need a better model of assumptions evaluation so that
we can do fast checks vs. slow checks. The (new) assumptions do
various levels of checks starting from a simple "is this already
assumed" going all the way to a full SAT solve. But there's no way to
limit how far it tries to go to get an answer. It's either all or
nothing.

>
> I guess that the original plan was for the new assumptions to replace
> the old so that they could be a dropin replacement and then the old
> assumptions API would still be there but it would call into the new
> assumptions in order to resolve its queries. That's not possible
> though because the new assumptions actually depend on the old and
> there is a huge amount of work invested in making the old assumptions
> work the way that they do so replicating that in the new assumptions
> so that they can stand on their own without the old is not a small
> task.
>
> My main problem with the idea that the new assumptions can replace the
> old is about the internal organisation of the code base itself. The
> old assumptions are used as part of the internals of evaluating an
> expression such as "cos(n*pi)". To me that makes sense because
> evaluation creates Basic instances and the old assumptions sit on a
> "level below" symbolic evaluation and (mostly) do not create any Basic
> instances. The new assumptions API uses Basic instances even in order
> to express a query. That means that using the new assumptions as part
> of evaluation creates an inherent circularity and it also explains why
> the new assumptions are so slow (or at least limits how much faster
> they can get).

I don't think the new system should completely replace the old one.
The syntax for the old assumptions is very useful and shouldn't go
away. It is also useful in many cases to store assumptions logic on
the relevant object itself.

A benefit of the new assumptions, though, is that the handler logic is
often much simpler to write, especially the sathandlers style, because
you can write things down as logical statements. In the old
assumptions you have to carefully do the logic evaluation yourself,
and on top of it, you have to be careful everywhere to handle the
three-valued logic correctly. With a logical predicate, you don't have
to worry about three-valued logic. The top-level solver takes care of
the None case.

One goal is to remove assumptions based evaluation, or at least limit
it to only cases where the assumptions can be computed quickly. The
"n" in cos(n*pi) can be an arbitrarily complicated expression. With
the new assumptions, it may require a lot of computation to compute
whether or not n is an integer. This is due to the fact that the new
assumptions allow assuming more complicated logical predicates and
more complicated logical queries.

>
> There are other problems with the new assumptions such as the idea
> that we can have a global flag as the "with assuming(...)" API
> implies. The "assuming" context manager seems like a fundamentally bad
> idea to me because the intention is that it will bring in a global
> flag that will alter the behaviour of every other part of the codebase.
> That means that it breaks thread-safety, that the cache would need to
> be cleared every time anything is assumed, that code is unpredictable
> because its behaviour can be manipulated from outside and many more
> problems. Basically the assuming API implies a massive and very
> complicated global variable and those are not usually good.
>
> If we want to use the new assumptions properly as part of lots of
> different functions around the codebase then the clean way is for most
> APIs to grow an assumptions keyword e.g.:
>
> sol = solve(a*x - b, x, assumptions=Q.nonzero(a))

But then any function that calls solve() must also have that keyword
argument, or you won't be able to make use of the assumptions. The end
result is that you have to extend the API of hundreds of functions,
and you can still easily have functions that don't support the API.
The context manager is simpler API-wise, although I can see how there
are technical issues with things like the cache.

I don't think the "globalness" of the new assumptions can be avoided.
With the old assumptions, you can only assume things about symbols, so
it's easy to make Symbol('x') and Symbol('x', positive=True) compare
unequal. But for something like, x > y (Q.positive(x - y)), it has to
be stored separately from the expressions that contain x and y.
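
A trivial example of why the old mechanism can stay local to the object while the new one cannot:

>>> from sympy import Symbol
>>> Symbol('x') == Symbol('x', positive=True)
False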

Aaron Meurer

Oscar Benjamin

Jun 13, 2020, 7:50:12 AM
to sympy
On Sat, 13 Jun 2020 at 00:22, Aaron Meurer <asme...@gmail.com> wrote:
>
> On Sun, Jun 7, 2020 at 4:08 PM Oscar Benjamin
> <oscar.j....@gmail.com> wrote:
> >
> > 3. What are the limitation of the new module? Where is it not implemented?
> >
> > The new assumptions were made a lot faster last Summer. Their slowness
> > was one major limitation before. They are still slow in comparison to
> > the old assumptions. The real limitation is the fact that the old
> > assumptions are used all across the codebase in both elementary
> > evaluations and in more complex routines. A huge amount of code would
> > need to be changed in order to make it so that general routines like
> > "integrate" use the new assumptions.
>
> The biggest thing that is missing is the other direction, to make
> expr.is_integer check ask(Q.integer(expr)). With the improved
> performance, this may now be possible to do, although I suspect more
> work is still required to make it so we don't slow down SymPy. For
> instance, IMO we need a better model of assumptions evaluation so that
> we can do fast checks vs. slow checks. The (new) assumptions do
> various levels of checks starting from a simple "is this already
> assumed" going all the way to a full SAT solve. But there's no way to
> limit how far it tries to go to get an answer. It's either all or
> nothing.

We can't have `expr.is_integer` call `ask(Q.integer(expr))` without
first making the new assumptions free standing. At the moment the new
assumptions are propped up by the old and depend heavily on them for
their queries.

The old assumptions for something like `expr.is_integer` are *already*
too slow. This is the main cause of slowness when working with large
expressions as you can see if you profile e.g. a slow call to expand:
https://github.com/sympy/sympy/pull/19485#issuecomment-639416355

If the new assumptions are any slower then I don't think that we
should use them by default in evaluation. Instead I would much rather
remove the slower parts of the old assumptions, implement the more
complicated ways of evaluating queries in the new assumptions, and
then have those used judiciously: core evaluation can use the old
assumptions and be fast, while key algorithms or APIs can choose, at
the appropriate place, to use the new assumptions to answer a critical
query or to simplify something important like a Piecewise.

> One goal is to remove assumptions based evaluation, or at least limit
> it to only cases where the assumptions can be computed quickly. The
> "n" in cos(n*pi) can be an arbitrarily complicated expression. With
> the new assumptions, it may require a lot of computation to compute
> whether or not n is an integer. This is due to the fact that the new
> assumptions allow assuming more complicated logical predicates and
> more complicated logical queries.

It would be very difficult to remove assumptions based evaluation but
it would be potentially worthwhile. The goal as I would put it is to
limit evaluation itself. We should also remove the slower parts of the
old assumptions when the new assumptions can pick up the slack.

> > There are other problems with the new assumptions such as the idea
> > that we can have a global flag as the "with assuming(...)" API
> > implies. The "assuming" context manager seems like a fundamentally bad
> > idea to me because the intention is that it will bring in a global
> > flag that will alter the behaviour of every other part of the codebase.
> > That means that it breaks thread-safety, that the cache would need to
> > be cleared every time anything is assumed, that code is unpredictable
> > because its behaviour can be manipulated from outside and many more
> > problems. Basically the assuming API implies a massive and very
> > complicated global variable and those are not usually good.
> >
> > If we want to use the new assumptions properly as part of lots of
> > different functions around the codebase then the clean way is for most
> > APIs to grow an assumptions keyword e.g.:
> >
> > sol = solve(a*x - b, x, assumptions=Q.nonzero(a))
>
> But then any function that calls solve() must also have that keyword
> argument, or you won't be able to make use of the assumptions. The end
> result is that you have to extend the API of hundreds of functions,
> and you can still easily have functions that don't support the API.

That's correct. It will be a lot of work. Changing the API is the
easier part compared to identifying and implementing support for the
new assumptions in the right places.

> The context manager is simpler API wise although I can see how there
> are technical issues with things like the cache.
>
> I don't think the "globalness" of the new assumptions can be avoided.
> With the old assumptions, you can only assume things about symbols, so
> it's easy to make Symbol('x') and Symbol('x', positive=True) compare
> unequal. But for something like, x > y (Q.positive(x - y)), it has to
> be stored separately from the expressions that contain x and y.

The assumptions should be stored in the call stack as an explicit
local variable. They should be provided as an explicit parameter to
API that needs them. That API should document what it will potentially
use the assumptions for.
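
Roughly what I mean, as a purely hypothetical sketch (none of these names exist in SymPy; it only illustrates passing the assumptions down explicitly):

from sympy import ask, Q, Piecewise, Ne, nan

def solve_linear_with_assumptions(a, b, assumptions=True):
    # hypothetical helper: solve a*x + b == 0 for x, consulting the
    # new assumptions through an explicit parameter rather than a global
    if ask(Q.nonzero(a), assumptions):
        return -b/a
    # undecided: return a conditional result instead of guessing
    return Piecewise((-b/a, Ne(a, 0)), (nan, True))

A call like solve_linear_with_assumptions(a, b, Q.nonzero(a)) then returns -b/a with no Piecewise and nothing global involved.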

The other problem with "with assuming" is that it's very confusing for
users to know when it should have any effect. The "new" assumptions
have been new for years and throughout that time people have been
trying to use "with assuming" because it claims to be able to affect
everything when even now it still affects nothing. An explicit
parameter can clearly document when and how the new assumptions are
being used as support for using them is added (which will still take
years from now).

--
Oscar

Aaron Meurer

Jun 14, 2020, 2:10:19 AM
to sympy
I didn't realize that. Do we know which parts are slow? Is it the
logic in the core, or are the _eval handlers slow? I have a hard time
reading the graph on that comment.

>
> If the new assumptions are any slower then I don't think that we
> should use them by default in evaluation. Instead I would much rather
> remove the slower parts of the old assumptions and implement the more
> complicated ways of evaluating queries in the new assumptions and then
> have those used judiciously so core evaluation can use the old
> assumptions and be fast while key algorithms or APIs can choose at the
> appropriate place to use the new assumptions to answer a critical
> query or to simplify something important like a Piecewise.

That's why I think we need a clear stepping stone of assumptions
evaluations from

faster <====> more powerful

which makes it possible to avoid using the more powerful evaluation
unless it is in a place where the user will likely want to wait for it
to happen. We currently have a hodgepodge of different methods but it
isn't easy to only do the fastest evaluation. It also isn't clear
which methods are completely redundant and are either too slow or
insufficiently powerful relative to other methods. For example, the
ask() quick fact check is likely completely unnecessary if it were to
be combined with the old assumptions, which do the same thing (or
vice versa, whichever is slower).
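
Something like this is what I'm imagining, with a purely hypothetical keyword (nothing like it exists in ask() today):

>>> ask(Q.integer(expr), assumptions, max_cost='fast')  # direct fact lookup only
>>> ask(Q.integer(expr), assumptions, max_cost='full')  # handlers plus the full SAT solve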

>
> > One goal is to remove assumptions based evaluation, or at least limit
> > it to only cases where the assumptions can be computed quickly. The
> > "n" in cos(n*pi) can be an arbitrarily complicated expression. With
> > the new assumptions, it may require a lot of computation to compute
> > whether or not n is an integer. This is due to the fact that the new
> > assumptions allow assuming more complicated logical predicates and
> > more complicated logical queries.
>
> It would be very difficult to remove assumptions based evaluation but
> it would be potentially worthwhile. The goal as I would put it is to
> limit evaluation itself. We should also remove the slower parts of the
> old assumptions when the new assumptions can pick up the slack.

Yes, definitely. Assumptions-based evaluation is among the more
egregious evaluations because it can be non-constant time (cf.
https://github.com/sympy/sympy/wiki/Automatic-Simplification).

That's fair. I agree the new assumptions haven't been sufficiently
documented as being experimental. The problem is that no one
anticipated how slow it would be to fully integrate them into the core.

Aaron Meurer


Oscar Benjamin

Jun 14, 2020, 6:46:07 AM
to sympy
The call graph is complicated (too complicated? - note that what you
see is already a pruned graph). The key points are that:

1. 99% of the time is spent in expand, so we're profiling a slow call to expand.
2. 84% of the time is spent in _ask, so the bulk of the time taken by
expand is in the old assumptions.
3. 83% of the time is spent in
Expr._eval_is_extended_positive_negative, so most of the time is spent
checking e.g. expr.is_positive.
4. 86% of the time is spent in evalf, so is_positive is slow because it
is attempting to evaluate the expression numerically.

You can see where that all happens here:
https://github.com/sympy/sympy/blob/master/sympy/core/expr.py#L888-L918

Note that this is on the handlers at the Expr level so it affects any
subclass that does not override these handlers. This means that
assumptions queries on objects with is_number=True (i.e. those that
can in principle be evalf()ed to a complex number) will very often end
up leading to numerical evaluation. The numerical evaluation seems to
be reliable enough in this usage but it can be slow and fails for
really large/small numbers like

>>> (exp(exp(exp(exp(100*pi))))-1).is_positive
OverflowError

You can also see that if numerical evaluation fails but the expression
is evalf()able and is algebraic then the handler will try to construct
its minimal polynomial which can be very expensive. It's rare to hit
this code path (expression is algebraic and evalf's to zero) but when
we do it can lead to a substantial slowdown. From memory there is one
test that depends on using the minimal polynomial and it is something
to do with assumptions on an expression involving the golden ratio.
There is also at least one issue about something being very slow and
it's because it also hits this path.

These handlers are depended upon heavily and in particular are used by
things like Add.is_positive which overrides the method but calls it
via super
https://github.com/sympy/sympy/blob/master/sympy/core/add.py#L643
before going on to the next potentially slow function which is _monotonic_sign:
https://github.com/sympy/sympy/blob/f7318dbfef2bf61e5ed8bc55f3e374880ec6ea67/sympy/core/exprtools.py#L29

In the non-is_number case (expressions with symbols) the
_monotonic_sign function will often dominate the run time for
expressions with large Adds. The _monotonic_sign function does all
kinds of things like attempting to compute the roots of a polynomial:
https://github.com/sympy/sympy/blob/f7318dbfef2bf61e5ed8bc55f3e374880ec6ea67/sympy/core/exprtools.py#L123

The other thing that is potentially slow in the old assumptions is
that in the indeterminate case we often end up evaluating every
handler e.g. if x and y are vanilla symbols with no assumptions then
(x*y).is_positive calls the following handlers:

x.is_positive
x.is_negative
x.is_extended_negative
x.is_extended_positive
x*y.is_positive
x*y.is_finite
y.is_negative
y.is_positive
y.is_extended_positive
y.is_extended_negative
x*y.is_even
x*y.is_integer
x*y.is_odd
x*y.is_imaginary
x*y.is_zero
x*y.is_irrational
x*y.is_complex
x*y.is_negative
x*y.is_extended_negative
x*y.is_composite
x*y.is_extended_real
x*y.is_rational
x*y.is_commutative
x*y.is_hermitian
x*y.is_algebraic
x*y.is_infinite
x*y.is_extended_positive
x*y.is_antihermitian

There is no Mul._eval_is_prime handler but if there were then it would
also be called, because prime -> positive. This is something that I think
few people really understand: adding a handler slows
everything else down, because the handler will be called in situations
that you didn't expect, e.g. expr.is_imaginary will check expr.is_prime
and conversely expr.is_prime will check expr.is_imaginary. This means
that we need to make every handler simple and fast because it will
potentially be checked as part of unrelated queries during basic
operations like multiplying two expressions.

Another potential slowdown is that some queries evaluate new
expressions as part of the query e.g.:
https://github.com/sympy/sympy/blob/f7318dbfef2bf61e5ed8bc55f3e374880ec6ea67/sympy/core/power.py#L580-L583
When you already have a large expression, even something as simple as
taking the abs or adding one can be expensive.

Progress on this can be made but we need other ways to replace the
handlers that are being removed in the situations where they are
depended upon. The new assumptions have a role to play in this: if we
can use them sparingly for more complicated queries then we can
gradually remove the slow handlers from the old assumptions while
using and extending the new assumptions to take up the slack.

Another important step that I think we need is this PR:
https://github.com/sympy/sympy/pull/19107
which reworks the comparison <, <=, ==, !=, >, >= hierarchy. With that
we can significantly reduce the dependence on Add.is_positive
throughout the codebase. Currently every comparison lhs < rhs goes
through (lhs - rhs).is_negative which then goes through all the slow
things I've listed above. Still for all of that complexity it isn't
possible to resolve something as simple as:

>>> x = Symbol('x', integer=True)
>>> 2**x < 2**(x + 1)
2**x < 2**(x + 1)

The PR would make that kind of case trivial without depending on any
of the slow things listed above.

--
Oscar

Aaron Meurer

Jun 17, 2020, 6:54:31 PM
to sympy
Thanks for the writeup here. So it sounds like the problem is that a
lot of assumptions handlers are calling out to other functions, which
are usually fast but can be slow, such as

- numerical evaluation
- root finding
- other polynomial functions
- you didn't mention it but isprime() would likely be another example

This is the sort of thing I mean by saying we need a hierarchy of
evaluation from faster but less powerful to slower but more powerful.
Currently we have two systems, old and new, which are separated by
their syntax and expressibility. But I don't think that same
separation needs to be what is done for the power hierarchy. One
should be able to do both "fast" or "slow" assumptions evaluations
with ask. The old assumptions syntax doesn't allow keywords, so we
would have to just pick a good default.

I think what we need is an assumptions solver that has this notion
built into it. The old assumptions solver clearly doesn't. It is able
to test the user-defined assumptions before calling _eval handlers,
but that's it. It doesn't know how to not call the _eval handlers if
the user only wants a fast evaluation, and it also doesn't know how to
not go too deep in the evaluation tree in an attempt to get what it
wants.

ask() does have more "levels" of what it tries, but it also doesn't
have any way to control how deep into these levels it goes. satask()
has an iterations flag that controls this to some degree, although not
completely (iterations tells it how "deep" to go in the expressions,
but doesn't control things like calling out to numeric evaluation of
isprime()). Generally speaking, we need a more unified notion.

Once we do, then we can have rules for the codebase like "automatic
evaluation should only use the fastest evaluation", "user calls to
ask() would use the most powerful evaluation by default", or "library
calls to ask() should use some middle evaluation that balances power
and speed".

I wonder if SMT solver systems like Z3 expose any notions like this.
It would be good to copy something else that is out there if it
exists.

Aaron Meurer