np.dot() only supported on contiguous arrays


Nuno Calaim

Jan 26, 2016, 11:23:06 AM
to Numba Public Discussion - Public
Hello all,

I've run into the following error while running this code:

import numpy as np
from numba import njit
A = np.random.randn(10, 5)
B = np.random.randn(5, 20)
@njit
def f(i):
    return np.dot(A, B[:, i])
f(3)

TypingError: Failed at nopython (nopython frontend)
np.dot() only supported on contiguous arrays

What is a contiguous array? How can I solve this? Is np.dot() not yet fully implemented? I thought it was!

Thank you
Nuno

Antoine Pitrou

Jan 26, 2016, 11:26:34 AM
to numba...@continuum.io

Hi Nuno,
A contiguous array is an array whose data can be scanned "naturally" by
incrementing a memory pointer. It's a Numpy concept and is required to
invoke high-speed linear algebra routines provided by BLAS.
(see https://docs.python.org/dev/glossary.html#term-contiguous)

To check whether an array is contiguous, you can query the "flags"
attribute on the array.

To get a contiguous array from a non-contiguous one, simply call the
copy() method.
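
A minimal sketch of both suggestions, reusing the array B from Nuno's snippet (column slices of a C-ordered 2-D array are strided views, which is what triggers the error):

```python
import numpy as np

B = np.random.randn(5, 20)

col = B[:, 3]                        # a view: neighbouring elements are a full row apart in memory
print(col.flags['C_CONTIGUOUS'])     # False

col_c = col.copy()                   # copy() packs the data into fresh contiguous memory
print(col_c.flags['C_CONTIGUOUS'])   # True
```

Passing col_c (or calling .copy() before the jitted function) lets np.dot() hit the fast BLAS path.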

Regards

Antoine.


Stanley Seibert

Jan 26, 2016, 11:28:12 AM
to Numba Public Discussion - Public
Also, just to elaborate: Our plan is to eventually support non-contiguous arrays, but this will require us to make a copy of the array in cases where BLAS can't handle a strided array.  This will incur some performance overhead, of course.
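
(Not from the thread, but one way to sidestep that copy overhead for a use case like Nuno's: store the matrix in Fortran order, so its column slices are already contiguous views.)

```python
import numpy as np

# In Fortran (column-major) order each column occupies consecutive memory,
# so a column slice is a contiguous view and no copy is needed.
Bf = np.asfortranarray(np.random.randn(5, 20))
print(Bf[:, 3].flags['C_CONTIGUOUS'])   # True
```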



--
You received this message because you are subscribed to the Google Groups "Numba Public Discussion - Public" group.
To unsubscribe from this group and stop receiving emails from it, send an email to numba-users...@continuum.io.
To post to this group, send email to numba...@continuum.io.
To view this discussion on the web visit https://groups.google.com/a/continuum.io/d/msgid/numba-users/20160126172606.2c9b44dd%40fsol.

Naveen Michaud-Agrawal

Jan 26, 2016, 11:33:00 AM
to numba...@continuum.io
I thought the plan was to supplant BLAS by jit-compiling optimized linear-algebra kernels using the shape of the data as it already is in memory ;)




--
-----------------------------------
Naveen Michaud-Agrawal

Stanley Seibert

Jan 26, 2016, 12:20:22 PM
to Numba Public Discussion - Public
Hah, no.  We have a lot of respect for people who create optimized BLAS routines, and really don't want to get into that business unless we absolutely have to.  :)

Tem Pl

Jan 26, 2016, 12:25:16 PM
to Numba Public Discussion - Public
Naveen - I think that is more a DyND thing. IIRC, there will eventually be integration with Numba.

Stanley Seibert

Jan 26, 2016, 12:26:27 PM
to Numba Public Discussion - Public
At this point, I would think of DyND as a data container that goes beyond the data type limitations of NumPy and ndarrays.  And yes, we definitely are looking into how to ensure that DyND and Numba work together.

Tem Pl

Jan 26, 2016, 12:31:48 PM
to Numba Public Discussion - Public
I thought there would be multiple dispatch/pattern matching on user-defined types/patterns, so numerical code could be reused by implementing interfaces? That is a generalization of what Naveen asked for, and seems quite a bit more involved than the Numba dispatch and type system.

Naveen Michaud-Agrawal

Jan 27, 2016, 11:56:23 AM
to numba...@continuum.io
Actually, what I'm hoping for is something like Terra (http://terralang.org/) or Halide (http://halide-lang.org/) but within the Python ecosystem - using data descriptors (DyND), computation descriptors (Blaze), and templated computation kernels (Numba) to create a fused, runtime-optimized kernel.

Stanley - have you seen the work on Lightweight Modular Staging (in either Scala - https://scala-lms.github.io/ or Lua - http://terralang.org/)? Do you think it's possible to meta-program Numba using Python?

Naveen




--
-----------------------------------
Naveen Michaud-Agrawal