Numba CUDA: Dynamic data structures on the device side


az...@unileon.es

Nov 18, 2016, 12:02:47 AM
to Numba Public Discussion - Public
Hello,

Is it possible to create dynamic data structures on the device side in Numba CUDA?

That is, can memory be allocated on the device side, and what is the best way to do it?

With kind regards,

Alexander Zhdanov

Researcher, University of Leon, Spain

Siu Kwan Lam

Nov 18, 2016, 1:41:49 AM
to Numba Public Discussion - Public
Numba supports allocating arrays on the device through the API described in http://numba.pydata.org/numba-doc/latest/cuda/memory.html. These APIs are called from the host to allocate memory on the device.

Numba does not support dynamic memory allocation from the device (i.e. from within a kernel). CUDA-C supports it with malloc() inside a kernel, as described in http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#dynamic-global-memory-allocation-and-operations, but it is not efficient (see the discussion in http://stackoverflow.com/questions/7476560/efficiency-of-malloc-function-in-cuda).



--
Siu Kwan Lam
Software Engineer
Continuum Analytics