Binary contains no PTX


Rod

Aug 11, 2012, 2:33:51 PM
to gpuo...@googlegroups.com
Hello,

I am trying to merge the AMD backend with the Ocelot trunk, but I am running into the following error:

$ cd gpuocelot/ocelot
$ sudo ./build.py --install --no_llvm -d
$ cd ..
$ ./build.py --install --no_llvm -d --build_target=tests/cuda4.1sdk/
$ cd tests/cuda4.1sdk/
$ ../../.debug_build/tests/cuda4.1sdk/MatrixMul

(0.006513) FatBinaryContext.cpp:79:   Assertion message: Binary contains no PTX.
MatrixMul: ocelot/cuda/implementation/FatBinaryContext.cpp:79: cuda::FatBinaryContext::FatBinaryContext(const void*): Assertion `entry->type & FATBIN_2_PTX' failed.

My system is:

$ nvcc --version
Cuda compilation tools, release 4.2, V0.2.1221

$ cat /etc/issue
Ubuntu 10.04.4 LTS \n \l

$ uname -a
Linux tayrona 2.6.32-41-generic #89-Ubuntu SMP Fri Apr 27 22:18:56 UTC 2012 x86_64 GNU/Linux

$ g++ --version
g++ (Ubuntu 4.4.3-4ubuntu5.1) 4.4.3
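
For reference, one way I know of to check whether the binary actually embeds any PTX is cuobjdump from the same toolkit (assuming the debug build path above):

$ cuobjdump --dump-ptx ../../.debug_build/tests/cuda4.1sdk/MatrixMul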


Thank you,

Rod

Rod

Aug 13, 2012, 3:52:16 PM
to gpuo...@googlegroups.com
More information on this issue:

I am running the 'emulated' device. The problem doesn't happen with other CUDA 4.1 benchmarks like VectorAdd and Transpose, so it seems to be related to the CUBLAS library. Turning on some of Ocelot's reporting messages, I get:

(3.943287) CudaRuntime.cpp:593:  Loading module (fatbin) - /home/buildmeister/build/rel/gpgpu/toolkit/r4.2/cublas/src/magma_fermi_zgemm.cu
(3.943316) CudaRuntime.cpp:736:  Registered kernel - _Z24fermiZgemm_v3_kernel_refILb1ELb1ELb1ELb1ELi16ELi24ELi8ELi8ELi8ELb0EEviiiPK7double2iS2_iPS0_iS2_S2_ii in module '/home/buildmeister/build/rel/gpgpu/toolkit/r4.2/cublas/src/magma_fermi_zgemm.cu'
<a bunch of messages like the previous one>
(3.943854) CudaRuntime.cpp:736:  Registered kernel - _Z24fermiZgemm_v3_kernel_valILb0ELb0ELb0ELb0ELi16ELi24ELi8ELi8ELi8ELb0EEviiiPK7double2iS2_iPS0_iS0_S0_ii in module '/home/buildmeister/build/rel/gpgpu/toolkit/r4.2/cublas/src/magma_fermi_zgemm.cu'
(3.943874) CudaRuntime.cpp:672:  cudaRegisterTexture('cublasZgemmMagmaTexA, dim: 1, norm: 0, ext: 0
(3.943896) CudaRuntime.cpp:672:  cudaRegisterTexture('cublasZgemmMagmaTexB, dim: 1, norm: 0, ext: 0
(3.943923) FatBinaryContext.cpp:60:   Found new fat binary format!
(3.943940) FatBinaryContext.cpp:65:    binary size is: 390152 bytes
(3.943954) FatBinaryContext.cpp:79:   Assertion message: Binary contains no PTX.

MatrixMul: ocelot/cuda/implementation/FatBinaryContext.cpp:79: cuda::FatBinaryContext::FatBinaryContext(const void*): Assertion `entry->type & FATBIN_2_PTX' failed.
Aborted
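
Since the failing module is coming from CUBLAS, it may also be worth checking whether the library itself ships PTX for those modules. Something like this should show it (the path is a guess for a default CUDA 4.2 install):

$ cuobjdump --dump-ptx /usr/local/cuda/lib64/libcublas.so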

Andrew R Kerr

Aug 20, 2012, 4:16:41 PM
to gpuo...@googlegroups.com
Rodrigo,

This was a problem with how the list of entries in the CUBIN object was traversed. I just committed a fix (see r2019) and demonstrated that the MatrixMul benchmark runs with CUBLAS from CUDA 4.2.

I did notice that some entries within CUBLAS contain ELF objects but no PTX; these are rare, however, and I'm not aware of any programs that call those kernels. For instance, the module barbieri_sgemm.cu does not seem to contain any PTX.
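
If you are building from an SVN checkout of the trunk, picking up the change should just be a matter of updating to r2019 or later and rebuilding with the same flags as in the earlier posts, e.g.:

$ svn update
$ sudo ./build.py --install --no_llvm -d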

Kind regards.



Greg Diamos

Aug 20, 2012, 5:28:47 PM
to gpuo...@googlegroups.com
Andy,

Any chance you could add a regression test that uses CUBLAS to one of the SDK test lists?

Thanks,

Greg

Rod

Aug 20, 2012, 8:30:41 PM
to gpuo...@googlegroups.com
Andy,

I no longer get the error. Thank you.

Rodrigo

Luis Freire

Sep 18, 2018, 11:49:31 AM
to gpuocelot
Hi. I am having the same problem. Can you tell me how to access your fix?

Andrew Kerr

Sep 18, 2018, 7:40:28 PM
to gpuo...@googlegroups.com
I don't think you'll find much PTX in cuBLAS these days. I'm afraid there's no workaround.
