Re: [Numba] Re: Cuda- Type Object is not subscriptable


Stanley Seibert

Jun 15, 2017, 3:00:11 PM
to Numba Public Discussion - Public
Hi Bobby,

It looks like there are a few issues with your example code:
  • Your CUDA kernel never reads the block or thread index variables that let each CUDA thread work on its own slice of the data, so every thread would do identical work (see the first sketch below).
  • Because of the way the GPU executes code, Numba does not support the full range of Python features available on the CPU. Things like lists and the numpy random module are not usable inside a kernel (see the random-number sketch further down).
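For illustration, here is a minimal sketch of a kernel that does index its threads. The kernel and data are made up for this example; cuda.grid(1) is the Numba helper that returns a thread's global index:

from numba import cuda
import numpy as np

@cuda.jit
def square_kernel(x, out):
    i = cuda.grid(1)   # this thread's global index in the launch grid
    if i < x.size:     # guard: the grid may be larger than the array
        out[i] = x[i] * x[i]

x = np.arange(1024, dtype=np.int32)
out = np.zeros_like(x)
threads_per_block = 32
blocks = (x.size + threads_per_block - 1) // threads_per_block  # round up
square_kernel[blocks, threads_per_block](x, out)  # Numba transfers host arrays automatically

Each thread handles exactly one element, which is why the guard on i matters whenever the array length is not a multiple of the block size.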
If you are just getting started with Numba GPU programming, I would encourage you to take a look at the tutorial notebooks we presented at GTC 2017:


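For the random draws specifically: recent Numba releases ship a device-side generator in numba.cuda.random that you can use in place of the numpy random module. A rough sketch, with arbitrary sizes and seed:

from numba import cuda
from numba.cuda.random import create_xoroshiro128p_states, xoroshiro128p_uniform_float32
import numpy as np

@cuda.jit
def random_fill(rng_states, out):
    i = cuda.grid(1)
    if i < out.size:
        # each thread draws from its own per-thread RNG state
        out[i] = xoroshiro128p_uniform_float32(rng_states, i)

n = 1024
threads_per_block = 32
blocks = (n + threads_per_block - 1) // threads_per_block
rng_states = create_xoroshiro128p_states(n, seed=1)  # one RNG state per thread
out = np.zeros(n, dtype=np.float32)
random_fill[blocks, threads_per_block](rng_states, out)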
On Thu, Jun 15, 2017 at 1:39 PM, Bobby Garza <garza...@gmail.com> wrote:


On Thursday, June 15, 2017 at 1:28:31 PM UTC-5, Bobby Garza wrote:
I'm having a lot of trouble just getting my code to work with CUDA. I know that CUDA works on my machine because I found some code on GitHub that uses @cuda.jit, and it runs fine. The error I keep receiving is "cannot type empty list", but I made sure all of my arrays are populated and that they are numpy arrays rather than Python lists. Any help would be amazing!
P.S. This is just a code snippet, so it won't run with a copy-paste.

def main():
    hash_for_y_axis()
    hash_nums()
    
    #fi = []
    #trying to encrypt Issac Asimov's short story, "The Last Question"
    #The code below puts each string as an input value in a list
    with open("theLastQuestion.txt") as f:
        fi = [word for line in f for word in line.split()]
    short_story = np.asarray(fi)
    for x in range(0, len(fi)):    
        map_key(short_story[x]) #the call to map_key assigns ascii values to each character in the short story, preps data
    encrypted_message_array = np.empty_like(iterate_list) #iterate_list is filled in map_key
    np_array = np.asarray(iterate_list)
    #print(short_story) #contains the story in a numpy array
    #print(np_array) # contains the story in ascii
    
    stream = cuda.stream()
    with stream.auto_synchronize():
        gpu_array = cuda.to_device(np_array, stream)
        run_encryption[32,32](gpu_array, encrypted_message_array) #i'm not sure about my griddim and blockdim values...
        gpu_array.copy_to_host(encrypted_message_array, stream)
    print(encrypted_message_array)

#function takes an ascii value and an output array and returns the number of times it took to get in the range of x/256    
@cuda.jit("int32(int32, int32[:])", device=True)
def run_encryption(y_coord_list, output_array):
    r = 3.8    
    b = rand.uniform(0,1) #random initial condition
    b_list.append(b)
    j = 0
    while(j < len(iterate_list)):
        for i in range(1,max_iterate): 
            result = r * b * (1 - b)
            b = r * b * (1 - b)
            if check_range(result, y_coord_list[j]):
                output_array[j] = i
                print(i)
                j = j + 1
    return output_array





Bobby Garza

Jun 15, 2017, 3:20:04 PM
to Numba Public Discussion - Public
OK, I am just getting started. Thanks for the link!