Tinker Board, Armv7, FFT on Mali GPU


ivb...@gmail.com

Jan 24, 2020, 8:46:33 AM
to reikna
Hi,
I'm trying to use reikna on a Tinker Board with a Mali-T760 GPU.
I started with some very simple code:

import numpy
import reikna.cluda as cluda
from reikna.fft import FFT

api = cluda.ocl_api()

for platform in api.get_platforms():
    for device in platform.get_devices():
        if device.type == api.cl.device_type.GPU:
            gpu_device = device

thr = api.Thread(gpu_device)

arr = numpy.random.normal(size=2048).astype(numpy.complex64)
fft = FFT(arr)
cfft = fft.compile(thr)

arr_dev = thr.to_device(arr)
res_dev = thr.array(arr.shape, numpy.complex64)
cfft(res_dev, arr_dev)
result = res_dev.get()

reference = numpy.fft.fft(arr)

print('Error:',numpy.linalg.norm(result - reference) / numpy.linalg.norm(reference))


I receive a compilation error on the Mali GPU:

ERROR:root:Failed to compile:
Traceback (most recent call last):
  File "reikna-fft-simple.py", line 27, in <module>
    cfft = fft.compile(thr)
  File "/home/linaro/.local/lib/python3.5/site-packages/reikna/core/computation.py", line 207, in compile
    self._tr_tree, translator, thread, fast_math, compiler_options, keep).finalize()
  File "/home/linaro/.local/lib/python3.5/site-packages/reikna/core/computation.py", line 192, in _get_plan
    return self._build_plan(plan_factory, thread.device_params, *args)
  File "/home/linaro/.local/lib/python3.5/site-packages/reikna/fft/fft.py", line 581, in _build_plan
    plan_factory, device_params, local_kernel_limit, output, input_, inverse)
  File "/home/linaro/.local/lib/python3.5/site-packages/reikna/fft/fft.py", line 553, in _build_limited_plan
    global_size=gsize, local_size=lsize, render_kwds=kwds)
  File "/home/linaro/.local/lib/python3.5/site-packages/reikna/core/computation.py", line 473, in kernel_call
    keep=self._keep)
  File "/home/linaro/.local/lib/python3.5/site-packages/reikna/cluda/api.py", line 535, in compile_static
    constant_arrays=constant_arrays, keep=keep)
  File "/home/linaro/.local/lib/python3.5/site-packages/reikna/cluda/api.py", line 755, in __init__
    constant_arrays=constant_arrays, keep=keep)
  File "/home/linaro/.local/lib/python3.5/site-packages/reikna/cluda/api.py", line 624, in __init__
    self.source, fast_math=fast_math, compiler_options=compiler_options, keep=keep)
  File "/home/linaro/.local/lib/python3.5/site-packages/reikna/cluda/api.py", line 473, in _create_program
    src, fast_math=fast_math, compiler_options=compiler_options, keep=keep)
  File "/home/linaro/.local/lib/python3.5/site-packages/reikna/cluda/ocl.py", line 145, in _compile
    return cl.Program(self._context, src).build(options=options, cache_dir=temp_dir)
  File "/home/linaro/.local/lib/python3.5/site-packages/pyopencl/__init__.py", line 510, in build
    options_bytes=options_bytes, source=self._source)
  File "/home/linaro/.local/lib/python3.5/site-packages/pyopencl/__init__.py", line 554, in _build_and_catch_errors
    raise err
pyopencl._cl.RuntimeError: clBuildProgram failed: BUILD_PROGRAM_FAILURE - clBuildProgram failed: BUILD_PROGRAM_FAILURE - clBuildProgram failed: BUILD_PROGRAM_FAILURE

Build on <pyopencl.Device 'Mali-T760' on 'ARM Platform' at 0x-4abe06c8>:

<source>:878:5: error: casting to void is not allowed
    VIRTUAL_SKIP_THREADS;
    ^
<source>:148:30: note: expanded from here
#define VIRTUAL_SKIP_THREADS MARK_VIRTUAL_FUNCTIONS_AS_USED; if(virtual_skip_local_threads() || virtual_skip_groups() || virtual_skip_global_threads()) return
                             ^
<source>:143:46: note: expanded from here
#define MARK_VIRTUAL_FUNCTIONS_AS_USED (void)(virtual_num_groups(0)); (void)(virtual_global_flat_id()); (void)(virtual_global_flat_size())
                                             ^

<source>:878:5: error: casting to void is not allowed
<source>:148:30: note: expanded from here
#define VIRTUAL_SKIP_THREADS MARK_VIRTUAL_FUNCTIONS_AS_USED; if(virtual_skip_local_threads() || virtual_skip_groups() || virtual_skip_global_threads()) return
                             ^
<source>:143:77: note: expanded from here
#define MARK_VIRTUAL_FUNCTIONS_AS_USED (void)(virtual_num_groups(0)); (void)(virtual_global_flat_id()); (void)(virtual_global_flat_size())
                                                                            ^

<source>:878:5: error: casting to void is not allowed
<source>:148:30: note: expanded from here
#define VIRTUAL_SKIP_THREADS MARK_VIRTUAL_FUNCTIONS_AS_USED; if(virtual_skip_local_threads() || virtual_skip_groups() || virtual_skip_global_threads()) return
                             ^
<source>:143:111: note: expanded from here
#define MARK_VIRTUAL_FUNCTIONS_AS_USED (void)(virtual_num_groups(0)); (void)(virtual_global_flat_id()); (void)(virtual_global_flat_size())
                                                                                                              ^

error: Compiler frontend failed (error code 59)

(options: -I /home/linaro/.local/lib/python3.5/site-packages/pyopencl/cl)
(source saved as /tmp/tmppp30_m1z.cl)



This is an output of clinfo for my board

linaro@tinkerboard:~/fft_gpu$ clinfo
Number of platforms                               1
  Platform Name                                   ARM Platform
  Platform Vendor                                 ARM
  Platform Version                                OpenCL 1.2 v1.r9p0-05rel0-git(f980191).e4ba9e4c6ff8005348d0332aae160089
  Platform Profile                                FULL_PROFILE
  Platform Extensions                             cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_byte_addressable_store cl_khr_3d_image_writes cl_khr_fp64 cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_fp16 cl_khr_gl_sharing cl_khr_icd cl_khr_egl_event cl_khr_egl_image cl_arm_core_id cl_arm_printf cl_arm_thread_limit_hint cl_arm_non_uniform_work_group_size cl_arm_import_memory
  Platform Extensions function suffix             ARM

  Platform Name                                   ARM Platform
Number of devices                                 1
  Device Name                                     Mali-T760
  Device Vendor                                   ARM
  Device Vendor ID                                0x7500001
  Device Version                                  OpenCL 1.2 v1.r9p0-05rel0-git(f980191).e4ba9e4c6ff8005348d0332aae160089
  Driver Version                                  1.2
  Device OpenCL C Version                         OpenCL C 1.2 v1.r9p0-05rel0-git(f980191).e4ba9e4c6ff8005348d0332aae160089
  Device Type                                     GPU
  Device Profile                                  FULL_PROFILE
  Max compute units                               4
  Max clock frequency                             99MHz
  Device Partition                                (core)
    Max number of sub-devices                     0
    Supported partition types                     None
  Max work item dimensions                        3
  Max work item sizes                             256x256x256
  Max work group size                             256
  Preferred work group size multiple              4
  Preferred / native vector sizes
    char                                                16 / 16
    short                                                8 / 8
    int                                                  4 / 4
    long                                                 2 / 2
    half                                                 8 / 8        (cl_khr_fp16)
    float                                                4 / 4
    double                                               2 / 2        (cl_khr_fp64)
  Half-precision Floating-point support           (cl_khr_fp16)
    Denormals                                     Yes
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
    Correctly-rounded divide and sqrt operations  No
  Single-precision Floating-point support         (core)
    Denormals                                     Yes
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
    Correctly-rounded divide and sqrt operations  No
  Double-precision Floating-point support         (cl_khr_fp64)
    Denormals                                     Yes
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
    Correctly-rounded divide and sqrt operations  No
  Address bits                                    64, Little-Endian
  Global memory size                              2109886464 (1.965GiB)
  Error Correction support                        No
  Max memory allocation                           527471616 (503MiB)
  Unified memory for Host and Device              Yes
  Minimum alignment for any data type             128 bytes
  Alignment of base address                       1024 bits (128 bytes)
  Global Memory cache type                        Read/Write
  Global Memory cache size                        <printDeviceInfo:89: get CL_DEVICE_GLOBAL_MEM_CACHE_SIZE : error -30>
  Global Memory cache line                        64 bytes
  Image support                                   Yes
    Max number of samplers per kernel             16
    Max size for 1D images from buffer            65536 pixels
    Max 1D or 2D image array size                 2048 images
    Max 2D image size                             65536x65536 pixels
    Max 3D image size                             65536x65536x65536 pixels
    Max number of read image args                 128
    Max number of write image args                8
  Local memory type                               Global
  Local memory size                               32768 (32KiB)
  Max constant buffer size                        65536 (64KiB)
  Max number of constant args                     8
  Max size of kernel argument                     1024
  Queue properties
    Out-of-order execution                        Yes
    Profiling                                     Yes
  Prefer user sync for interop                    No
  Profiling timer resolution                      1000ns
  Execution capabilities
    Run OpenCL kernels                            Yes
    Run native kernels                            No
  printf() buffer size                            1048576 (1024KiB)
  Built-in kernels
  Device Available                                Yes
  Compiler Available                              Yes
  Linker Available                                Yes
  Device Extensions                               cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_byte_addressable_store cl_khr_3d_image_writes cl_khr_fp64 cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_fp16 cl_khr_gl_sharing cl_khr_icd cl_khr_egl_event cl_khr_egl_image cl_arm_core_id cl_arm_printf cl_arm_thread_limit_hint cl_arm_non_uniform_work_group_size cl_arm_import_memory

NULL platform behavior
  clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...)  ARM Platform
  clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...)   Success [ARM]
  clCreateContext(NULL, ...) [default]            Success [ARM]
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU)  Success (1)
    Platform Name                                 ARM Platform
    Device Name                                   Mali-T760
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL)  Success (1)
    Platform Name                                 ARM Platform
    Device Name                                   Mali-T760

ICD loader properties
  ICD loader Name                                 OpenCL ICD Loader
  ICD loader Vendor                               OCL Icd free software
  ICD loader Version                              2.2.11
  ICD loader Profile                              OpenCL 2.1
linaro@tinkerboard:~/fft_gpu$

Can someone give me an idea of how to start troubleshooting this?

Thanks,
Igor

Bogdan Opanchuk

Jan 24, 2020, 2:19:38 PM
to reikna
Basically, there are some helper functions that are added to every kernel. Since not all of them are used by each kernel, in OpenCL they produce warnings, which are kind of annoying and can obscure other, more important problems. So the prelude contains lines like

(void)(function_name());

which make the compiler believe the function actually was used.

The problem you mentioned has occurred before, but I can't find the corresponding thread or issue. Apparently some OpenCL compilers do not quite follow the C99 standard in this case and don't allow such expressions. Conditionally turning it off for certain devices would be easy; the question is what the condition should be. And I don't think the OpenCL standard supports any options for disabling specific warnings.

Could you go to wherever you installed reikna, and in `cluda/vsize.mako` comment out the line `#define MARK_VIRTUAL_FUNCTIONS_AS_USED ${mark_used}` (line 192)? This should fix the issue (as a temporary workaround). Could you also tell me if you get compiler warnings about unused functions when you do that?

Bogdan Opanchuk

Jan 24, 2020, 2:25:53 PM
to reikna
> comment the line `#define MARK_VIRTUAL_FUNCTIONS_AS_USED ${mark_used}`

Scratch that, instead just delete the "${mark_used}" part.
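
For anyone applying this later: the change is inside the template `cluda/vsize.mako` (the exact line number may differ between reikna versions). A sketch of the before/after, not an official patch:

```
/* before (expands to "(void)(...)" casts that the Mali compiler rejects): */
#define MARK_VIRTUAL_FUNCTIONS_AS_USED ${mark_used}

/* after (the macro expands to nothing): */
#define MARK_VIRTUAL_FUNCTIONS_AS_USED
```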


ivb...@gmail.com

Jan 24, 2020, 2:41:09 PM
to reikna
Thanks a lot, it worked perfectly. No warnings.

ivb...@gmail.com

Jan 24, 2020, 3:25:25 PM
to reikna
Hi,
I continued with the same simple example to get some performance measurements:

import numpy
import reikna.cluda as cluda
from reikna.fft import FFT
import timeit

n_run = 100
# Pick the first available GPGPU API and make a Thread on it.
api = cluda.ocl_api()

for platform in api.get_platforms():
    for device in platform.get_devices():
        if device.type == api.cl.device_type.GPU:
            gpu_device = device

thr = api.Thread(gpu_device)

arr = numpy.random.normal(size=2048).astype(numpy.complex64)
fft = FFT(arr)
cfft = fft.compile(thr)

arr_dev = thr.to_device(arr)
res_dev = thr.array(arr.shape, numpy.complex64)
tic = timeit.default_timer()
for i in range(n_run):
    cfft(res_dev, arr_dev)
    result = res_dev.get()

toc = timeit.default_timer()
t_gpu_ms = 1e3*(toc-tic)/n_run

# calculate FFT with numpy on CPU to compare results and performance
tic = timeit.default_timer()
for i in range(n_run):
    reference = numpy.fft.fft(arr)
toc = timeit.default_timer()
t_numpy_ms = 1e3*(toc-tic)/n_run

print('For', n_run, 'runs:\nNumpy average time:', t_numpy_ms, 'ms;\nCL average time:  ', t_gpu_ms, 'ms')

print('Error:',numpy.linalg.norm(result - reference) / numpy.linalg.norm(reference))

Surprisingly, the performance of the CPU is better than the GPU.

Mali-T760
For 100 runs:
Numpy average time: 0.3925104700101656 ms;
CL average time:   1.590604369994253 ms

What am I doing wrong?

Bogdan Opanchuk

Jan 24, 2020, 5:04:55 PM
to reikna
For starters, in the GPU case you are measuring the time of the FFT plus copying the buffer back to CPU memory. That's not usually what you want to do in your application - the number of transfers between CPU and GPU should be minimized (although, of course, if you do need to get the result back every time, it means that GPU usage doesn't have any benefits here). What are the results for just a sequence of FFT calls? (Don't forget to call thr.synchronize() before measuring the time.)

Also, the results may be better for the GPU if you batch several FFTs - a single 2048-point FFT probably doesn't use all 4 compute units, plus there is global memory access latency, so the occupancy is not great.

ivb...@gmail.com

Jan 25, 2020, 6:57:20 AM
to reikna
Thanks a lot for prompt answer. And special thanks for amazing software :-)

ivb...@gmail.com

Jan 30, 2020, 4:41:01 PM
to reikna
Hi,

Is it possible to use pinned host memory for GPU buffers? I don't understand how to do this with the reikna API.

Thanks

Bogdan Opanchuk

Jan 30, 2020, 6:16:33 PM
to reikna
I haven't used it myself, so I don't know much about it. How would you use it in PyOpenCL? For now you can just use its arrays/buffers with Reikna, but I will include it in the plan for the Cluda refactoring I'm working on, to make it easier to use (if PyOpenCL exposes it, of course).

ivb...@gmail.com

Jan 31, 2020, 5:13:46 AM
to reikna
PyOpenCL exposes this via

class pyopencl.Buffer(context, flags, size=0, hostbuf=None)

with mem_flags = ALLOC_HOST_PTR.

I tried to use this from reikna with api.cl.Buffer, but I don't know where to get the context from:

api.cl.Buffer()
TypeError: __init__(): incompatible constructor arguments. The following argument types are supported:
    1. pyopencl._cl.Buffer(context: pyopencl._cl.Context, flags: int, size: int = 0, hostbuf: object = None)

Bogdan Opanchuk

Jan 31, 2020, 7:49:54 PM
to reikna
The PyOpenCL context can be found in `thr._context`; that should work for now. Thanks for drawing my attention to this, I will need to make it more convenient somehow.

ivb...@gmail.com

Feb 1, 2020, 4:30:52 AM
to reikna
Thanks. I'll try to use this.

ivb...@gmail.com

Feb 1, 2020, 11:14:35 AM
to reikna
Hi,
I don't know exactly which FFT implementation you used, but the performance of the GPU FFT on arrays whose size is not a power of 2 is significantly
lower than Numpy on the CPU.

See example below (run on Intel desktop)

Intel(R) UHD Graphics 630
array shape = (220000,)
100 runs. GPU:  12.47 ms; Numpy:  7.32 ms

The measurement above does not include transfers to/from the GPU.

Is this reasonable?
Thanks

Bogdan Opanchuk

Feb 5, 2020, 1:50:04 PM
to reikna
Yeah, the non-power-of-2 sizes are a weak point. For now, Reikna's FFT uses Bluestein's algorithm for all non-power-of-2 sizes, which essentially amounts to two power-of-2 FFTs and some scaling. I do want to add radix-3, 5, 7, etc. kernels, but I just never have time for that (and for my purposes, powers of 2 were always enough). In fact, in some computational books you can even find the statement that you should just always use a power of 2 and pad your data :) But I admit that non-2 radices are occasionally useful.
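
The idea can be sketched in pure numpy: Bluestein rewrites a length-N DFT as a chirp multiplication plus a circular convolution, and the convolution is evaluated with power-of-2 FFTs of length >= 2N-1 (which is why it costs roughly two transforms of about double the size). A toy reference implementation, not Reikna's actual kernel:

```python
import numpy as np

def bluestein_fft(x):
    """DFT of arbitrary length via Bluestein's chirp-z algorithm."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    k = np.arange(n)
    w = np.exp(-1j * np.pi * k * k / n)  # "chirp" sequence

    # Circular convolution length: next power of 2 >= 2n - 1.
    m = 1 << (2 * n - 2).bit_length()

    a = np.zeros(m, dtype=complex)
    a[:n] = x * w
    b = np.zeros(m, dtype=complex)
    b[:n] = np.conj(w)
    b[m - n + 1:] = np.conj(w[1:][::-1])  # wrap-around part of the chirp

    # Two forward power-of-2 FFTs + one inverse;
    # pointwise product in the frequency domain = circular convolution.
    conv = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))
    return conv[:n] * w

x = np.random.normal(size=220) + 1j * np.random.normal(size=220)
print(np.allclose(bluestein_fft(x), np.fft.fft(x)))  # expect True
```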

How does a power-of-2 size compare to the CPU? What is the problem size, and are you using batching? (If you could show me the full benchmark code, that would be even better.)

ivb...@gmail.com

Feb 5, 2020, 4:33:43 PM
to reikna
Hi,

You mean all the axes should be a power of 2?
It seems a power-of-2 size helps even for the axes over which the transform is not performed.

Bogdan Opanchuk

Feb 5, 2020, 8:52:21 PM
to reikna
I meant the problem size specifically, that is, yes, the axes over which the transformation is performed. It's strange that it matters for the batch size too - I don't see why it should. Either there's some strange effect with the global size derivation (but for OpenCL the limits should be at least 2^32 in each dimension), or some weird OpenCL driver behavior.