High-level extension API and internal operations


Andreas Sodeur

Feb 23, 2017, 12:24:38 PM
to Numba Public Discussion - Public
Hi

trying to overload '*' for an internal operation fails at the lowering stage (simplified code):
@overload('*')
def mul_quantity(xty, yty):
    if isinstance(xty, QuantityType) and isinstance(yty, QuantityType):
        def impl(x, y):
            return 1234.
        return impl


jitmul = jit(types.float64(QuantityType(m), QuantityType(km)), nopython=True)(lambda x, y: x * y)

....

numba.errors.LoweringError: Failed at nopython (nopython mode backend)
Internal error:
NotImplementedError: No definition for lowering *(QuantityType(m), QuantityType(m)) -> float64

Is there a way to overload internal operations without resorting to type_callable/lower_builtin? The docs for type_callable/lower_builtin explicitly mention internal operations, while the docs for overload do not, so I suspect overload might not be the right choice. (I also tried implementing __mul__ on the user-defined type via overload_method, but had no luck getting x[QuantityType] * y[QuantityType] to work either.)

Thanks,
Andreas





Siu Kwan Lam

Feb 23, 2017, 12:33:52 PM
to Numba Public Discussion - Public
How is your user-defined type defined?  Is it a jitclass?

--
You received this message because you are subscribed to the Google Groups "Numba Public Discussion - Public" group.
To unsubscribe from this group and stop receiving emails from it, send an email to numba-users...@continuum.io.
To post to this group, send email to numba...@continuum.io.
To view this discussion on the web visit https://groups.google.com/a/continuum.io/d/msgid/numba-users/630b894e-d22c-46ca-bdb3-f3d79cdcb3c8%40continuum.io.
For more options, visit https://groups.google.com/a/continuum.io/d/optout.
--
Siu Kwan Lam
Software Engineer
Continuum Analytics

Andreas Sodeur

Feb 23, 2017, 12:50:40 PM
to Numba Public Discussion - Public
No, I extended Type and implemented boxing etc., like the Interval example in the docs. (The code works when I overload operator.mul and change the lambda to lambda x, y: mul(x, y).)
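For context, the workaround relies on `operator.mul` being the functional form of `*` in plain Python, so rewriting a call site from `x * y` to `mul(x, y)` preserves behavior. A minimal illustration, independent of numba:

```python
import operator

# In plain Python, operator.mul(x, y) is equivalent to x * y for any
# type that defines multiplication, which is why changing the lambda
# from (lambda x, y: x * y) to (lambda x, y: mul(x, y)) is a
# behavior-preserving rewrite at the call site.
def mul_via_operator(x, y):
    return operator.mul(x, y)

def mul_via_star(x, y):
    return x * y

assert mul_via_operator(3.0, 4.0) == mul_via_star(3.0, 4.0) == 12.0
```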

Andreas

Siu Kwan Lam

Feb 23, 2017, 1:37:23 PM
to Numba Public Discussion - Public
To overload operators, you will have to use lower_builtin for now.  There is no user API for overloading operators.  If you need examples, you can find all the internal `lower_builtin("*", ...)` calls in http://numba.pydata.org/numba-doc/latest/developer/autogen_lower_listing.html#id21

Andreas Sodeur

Feb 25, 2017, 7:48:23 AM
to Numba Public Discussion - Public
Siu,

after some poking around in the dispatchers, I came up with the decorator below, which seems to work. Essentially I apply overload to a dummy function, jit that, and shoehorn the result into type_callable/lower_builtin.

However, I am slightly worried I might be missing more subtle issues (explicitly keeping weakrefs alive etc. does not sound like a good idea...). Any ideas?

Andreas

def overload_binop(op_name):
    def decorator(func):
        # overload a dummy function with the provided func (cannot use operator.mul etc.
        # as numba will resolve this as '*', ending in a recursion...)
        def dummy(x, y):
            raise NotImplementedError

        overload(dummy)(func)

        # define dummy2 calling dummy and jit dummy2 (using the overloaded func)
        def dummy2(x, y):
            return dummy(x, y)

        disp = jit(nopython=True)(dummy2)
        disp_type = types.Dispatcher(disp)  # will access the impl via this in lower_builtin below
        dispatcher = disp_type.dispatcher   # keep a reference around for use in typer; Dispatcher.dispatcher is a weakref

        cache = {}

        @type_callable(op_name)
        def typer(ctx):
            def type_binop(x, y):
                sig = cache.get((x, y), -1)
                if sig == -1:
                    try:
                        # try if dummy2's dispatcher can type the call (compiles func for the signature if possible)
                        template, pysig, args, kws = dispatcher.get_call_template((x, y), {})
                        sig = template(ctx).apply((x, y), {})
                    except TypingError:
                        cache[x, y] = None
                        return None

                    # if dummy2's dispatcher can type the call, provide an impl for op_name
                    # that calls the corresponding dummy2 impl
                    @lower_builtin(op_name, *sig.args)
                    def lower_mul_quantity(ctx, builder, sig, args):
                        impl = ctx.get_function(disp_type, sig)
                        return impl(builder, args)

                    cache[x, y] = sig

                return sig

            return type_binop

        return func

    return decorator
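The caching inside type_binop above is doing double duty: a sentinel value distinguishes "never attempted" from "attempted and failed" (cached as None), so typing failures are not retried on every query. That pattern can be sketched in plain Python, independent of numba (all names here are illustrative, not numba API):

```python
# Numba-free sketch of the caching pattern in type_binop: a sentinel
# marks "never attempted", while None marks "attempted and failed",
# so both outcomes are cached and the expensive typing step runs at
# most once per argument-type pair.
_MISSING = object()

def make_cached_typer(try_type):
    """try_type(xty, yty) returns a signature or raises TypeError."""
    cache = {}

    def typer(xty, yty):
        sig = cache.get((xty, yty), _MISSING)
        if sig is _MISSING:
            try:
                sig = try_type(xty, yty)
            except TypeError:
                sig = None
            cache[xty, yty] = sig
        return sig

    return typer

calls = []

def try_type(xty, yty):
    calls.append((xty, yty))
    if xty != yty:
        raise TypeError("mismatched operand types")
    return f"({xty}, {yty}) -> {xty}"

typer = make_cached_typer(try_type)
assert typer("float64", "float64") == "(float64, float64) -> float64"
assert typer("float64", "int64") is None
# repeated queries hit the cache; try_type ran only twice
assert typer("float64", "float64") == "(float64, float64) -> float64"
assert len(calls) == 2
```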

Siu Kwan Lam

Feb 27, 2017, 1:29:35 PM
to Numba Public Discussion - Public
That's a creative way to go about this problem.  I am not sure you need the `overload` in there.  

I am sorry it has to be so complicated.  I think it would be better for numba to solve this internally.  Perhaps it would be best if users could do `@overload(operator.mul)`, with numba doing some refactoring to change the way operators are defined on the builtin types.  But before that is done, I don't have any better suggestion.


Andreas Sodeur

Apr 4, 2017, 8:13:18 AM
to Numba Public Discussion - Public
Siu,

sorry for the late response, but I was busy with other things in the meantime. Your proposed solution actually seems a lot cleaner. I settled for overloading operator.* and changed the performance-critical sections of my code to use operator.* as well. This is a little invasive, but luckily isolated to just a few places in my code.

Thx,
Andreas  


Saul Shanabrook

Jul 30, 2018, 10:56:11 AM
to Numba Public Discussion - Public
Hey Andreas,

Do you have a copy of this updated approach you could post? Your approach above is what I was about to implement, so thanks for providing that!

Best,
Saul

Andreas Sodeur

Aug 1, 2018, 11:30:27 AM
to Numba Public Discussion - Public
Saul,

I did not do more work on the decorator and eventually switched to using functions from `operator.*` in anticipation of #2297. I have since opened PR2983, which is slowly moving towards resolving #2297. There are still some questions about how to implement this in a clean fashion, but I am optimistic it can be pulled off with some support from Siu.

Andreas