How to enable NRT?


Ewan Douglas

Nov 6, 2017, 7:41:57 PM
to Numba Public Discussion - Public
Hello,

I'm trying to implement the following fftshift algorithm and am getting an "NRT required but not enabled" error:

from numba import cuda
import numpy as np


@cuda.jit(debug=True)
def cufftShift_2D_kernel(data, N):
    # adapting CUDA shift code from:
    # GNU Lesser Public License
    
    #// 2D Slice & 1D Line
    sLine = N
    sSlice = N * N
    #// Transformations Equations
    sEq1 = int((sSlice + sLine) / 2)
    sEq2 = int((sSlice - sLine) / 2)
    x, y = cuda.grid(2)
    #// Thread Index Converted into 1D Index
    index = (y * N) + x
    #T regTemp;
    #data[index]=0
    if (x < N / 2):
        if (y < N / 2):
            #// First Quad
            temp = data[index]
            data[index] = data[index + sEq1]
            #// Third Quad
            data[index + sEq1] = temp
    else:
        if (y < N / 2):
            #// Second Quad
            temp = data[index]
            data[index] = data[index + sEq2]
            data[index + sEq2] = temp

    
n = 4
array = np.ones([n, n])  # ,dtype=np.complex128)

cufftShift_2D_kernel(array, n)

which returns:

LoweringError: Failed at nopython (nopython mode backend)
NRT required but not enabled
File "<ipython-input-7-78335da43e0e>", line 26
[1] During: lowering "data[index] = $94.8" at <ipython-input-7-78335da43e0e> (26)

GitHub issue 1118 indicates that NRT is not enabled by default, but I don't see anything in the documentation (https://docs.anaconda.com/docs_oss/numba/developer/numba-runtime) saying how to enable it.


How do I enable NRT? Or is this error a symptom of another problem with the code above?
Thanks in advance.

Stanley Seibert

Nov 7, 2017, 10:25:52 AM
to Numba Public Discussion - Public
The NRT is used to manage reference-counted memory allocation and is not available on the GPU.

This error is coming up because you are reading and writing 1D slices of your array.  Numba doesn't support array expressions in GPU kernels (and it is quite likely that you will want to control the looping to ensure memory coalescing), but you could rewrite the code around line 26 to loop over the column index to do the swap.  In fact, you might want to pull that out into a device function so you can reuse the same code in your else branch.
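To illustrate the suggestion, here is a host-side NumPy sketch of the per-element swap (my own sketch, not code from the thread): instead of assigning whole row slices, each pair of quadrant elements is swapped in an explicit loop over the column index. In a real kernel the loop body would move into a @cuda.jit device function and (y, x) would come from cuda.grid(2), but the index arithmetic is the same.

```python
import numpy as np


def fftshift2d_inplace(data):
    """Quadrant-swap fftshift for a square 2D array with even side N.

    Host-side sketch of the elementwise swap a GPU thread would perform;
    no row slices are created, so nothing needs the NRT.
    """
    n = data.shape[0]
    half = n // 2
    for y in range(half):          # only the top half of rows...
        for x in range(n):         # ...but every column
            if x < half:
                # first quadrant <-> third quadrant
                yy, xx = y + half, x + half
            else:
                # second quadrant <-> fourth quadrant
                yy, xx = y + half, x - half
            data[y, x], data[yy, xx] = data[yy, xx], data[y, x]
    return data


a = np.arange(16, dtype=np.float64).reshape(4, 4)
assert np.array_equal(fftshift2d_inplace(a.copy()), np.fft.fftshift(a))
```

The sanity check against np.fft.fftshift only holds for even N; odd sizes need a different (non-involutive) shift.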

--
You received this message because you are subscribed to the Google Groups "Numba Public Discussion - Public" group.
To unsubscribe from this group and stop receiving emails from it, send an email to numba-users+unsubscribe@continuum.io.
To post to this group, send email to numba...@continuum.io.
To view this discussion on the web visit https://groups.google.com/a/continuum.io/d/msgid/numba-users/48947545-6549-49fd-8dac-504cdc93eb8f%40continuum.io.
For more options, visit https://groups.google.com/a/continuum.io/d/optout.

Ewan Douglas

Nov 12, 2017, 12:41:43 AM
to Numba Public Discussion - Public
Thanks for the response; that helped clear up the error message.
The code worked once I remembered to flatten the input array. The last line should have been:
cufftShift_2D_kernel(array.ravel(), n)
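For anyone hitting the same error: with a flattened array, data[index] is a scalar rather than a row slice, so no NRT-managed allocation is needed. As a quick check (my own verification, not from the thread), the kernel's sEq1/sEq2 index math applied to a raveled array does reproduce np.fft.fftshift for even N:

```python
import numpy as np


def fftshift_flat(data, n):
    """Apply the kernel's quadrant-swap index math to a raveled n*n array.

    Mirrors what each GPU thread does for its own (x, y); here the thread
    grid is replaced by two host-side loops.
    """
    sLine = n
    sSlice = n * n
    sEq1 = (sSlice + sLine) // 2  # offset: first <-> third quadrant
    sEq2 = (sSlice - sLine) // 2  # offset: second <-> fourth quadrant
    for y in range(n // 2):
        for x in range(n):
            index = y * n + x
            partner = index + sEq1 if x < n // 2 else index + sEq2
            data[index], data[partner] = data[partner], data[index]
    return data


n = 4
a = np.arange(n * n, dtype=np.float64)
out = fftshift_flat(a.copy(), n).reshape(n, n)
assert np.array_equal(out, np.fft.fftshift(a.reshape(n, n)))
```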
