None of the below worked, and apologies if this is too silly or
documented somewhere, but: how does one do a dot product of two vectors?
(I've been doing some Copperhead tests for a class project.)
There's also a secondary question about the continued development of
Copperhead. A friend of mine who attended Bryan's talk at SC'11 says
the project is very much alive, but the Google Code commits are from a
year ago. Is there another place one should be looking?
Thanks!
Sajith.
from copperhead import *
from itertools import imap
from operator import mul
import numpy
import timeit
import sys
@cu
def dot_product(x, y):
    return sum(imap(mul, x, y))

@cu
def dot_product(x, y):
    return sum([x[i] * y[i] for i in range(len(x))])

@cu
def dot_product(x, y):
    return reduce(lambda s, p: s + p[0] * p[1], zip(x, y), 0)

@cu
def dot_product(x, y):
    return sum(map(lambda a, b: a * b, x, y))

@cu
def dot_product(x, y):
    def m(xi, yi):
        prod = xi * yi
        return prod
    return sum(map(m, x, y))

@cu
def dot_product(x, y):
    return numpy.dot(x, y)
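(For what it's worth, this is roughly how I'm driving the @cu versions --
the sizes, dtype, and host-side comparison below are just illustrative,
not the exact script I'm running:)

x = numpy.arange(1000, dtype=numpy.float64)
y = numpy.arange(1000, dtype=numpy.float64)

# Copperhead result vs. host reference
print dot_product(x, y)
print numpy.dot(x, y)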
--
"the lyf so short, the craft so long to lerne."
-- Chaucer.
However, neither quite worked for me. This is what I get:
Traceback (most recent call last):
  File "./dot.py", line 39, in <module>
    t1 = do_run(dogpu, "GPU")
  File "./dot.py", line 27, in do_run
    n = t.timeit(count)
  File "/usr/lib64/python2.7/timeit.py", line 194, in timeit
    timing = self.inner(it, self.timer)
  File "/usr/lib64/python2.7/timeit.py", line 100, in inner
    _func()
  File "./dot.py", line 19, in dogpu
    gpu = dot_product(x, y)
  File "/home/sasasidh/software/lib64/python2.7/site-packages/copperhead-0.1a1-py2.7.egg/copperhead/runtime/cufunction.py", line 56, in __call__
    return P.execute(self, args, kwargs)
  File "/home/sasasidh/software/lib64/python2.7/site-packages/copperhead-0.1a1-py2.7.egg/copperhead/runtime/driver.py", line 60, in execute
    return execute(cufn, *args, **kwargs)
  File "/home/sasasidh/software/lib64/python2.7/site-packages/copperhead-0.1a1-py2.7.egg/copperhead/runtime/driver.py", line 86, in execute
    return_value = compiled_fn(*cu_inputs)
  File "<string>", line 10, in dot_product
  File "/home/sasasidh/software/lib64/python2.7/site-packages/copperhead-0.1a1-py2.7.egg/copperhead/runtime/cubox.py", line 31, in __call__
    return self.fn(*args_cache)
  File "/home/sasasidh/software/lib64/python2.7/site-packages/copperhead-0.1a1-py2.7.egg/copperhead/thrust/reduce.py", line 63, in sum
    result = module.entryPoint(array)
TypeError: No registered converter was able to produce a C++ rvalue of type unsigned long long from this Python object of type PooledDeviceAllocation
(The code I'm trying is attached, if you can risk looking at some
poor greenhorn lines of Python.)
I'm using Copperhead from the main Google Code repository. Perhaps I
should switch to another clone?
Lately I've had a chance to look at several GPU programming DSLs
(the Copperhead papers, including your dissertation -- though admittedly
I haven't spent a lot of time with them -- along with Accelerate, Nikola,
etc.), and Copperhead is certainly among the most promising. Good
luck with the new direction!
I've found Thrust to be very interesting and useful, and I'm looking
forward to seeing Copperhead in the official CUDA distribution too
one day. Particularly so since (no offense!) I've found the whole
thing a pain to set up -- but that's only to be expected of new code. :)
Out of curiosity, do you think there will ever be an OpenCL
backend?
Regards,
Sajith.
> TypeError: No registered converter was able to produce a C++ rvalue of type unsigned long long from this Python object of type PooledDeviceAllocation
Oh, in fact this is the same error I've been getting from all sample
programs except simple_tests.py. Does it suggest that something is
wrong with my Copperhead install?
Thanks,
Sajith.
Linux localhost 3.0.4-gentoo-r1 #1 SMP Fri Sep 30 12:05:35 EDT 2011 x86_64 Intel(R) Xeon(R) CPU X5365 @ 3.00GHz GenuineIntel GNU/Linux
CUDA 4.0, Codepy 2011.1, PyCUDA 2011.1.3, cgen 2011.1.
Thank you for the additional pointers also -- they are very helpful.
Sajith.
Bryan Catanzaro <bryan.c...@gmail.com> wrote:
> I've seen this bug before - it arises from changes in the way PyCUDA and
> Boost export the functions PyCUDA provides, which Copperhead programs
> expect to use. In the past, I've solved it by:
> 1. Not using PyCUDA's shipped Boost library, and instead using the system
> Boost library when building PyCUDA.
> 2. Sometimes I have had to use an older version of Boost. 1.41 has worked
> for me. I'm not sure if this is absolutely necessary, or if just building
> PyCUDA with the system Boost library is good enough.
>
>
> For what it's worth, the new version of the Copperhead runtime and compiler
> do not use PyCUDA (although they still use Codepy, another of Andreas
> Klöckner's projects). In other words, this particular issue is solved in
Ah, yes -- disabling the shipped Boost library, using the system Boost
(I used 1.42), and then rebuilding and reinstalling PyCUDA did the trick.
Thanks!
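In case it's useful to anyone else hitting the same converter error, my
PyCUDA siteconf.py ended up roughly like the following -- the paths and
library name are what I remember using on this box, so treat them as
guesses rather than a recipe:

# siteconf.py (values illustrative; edited before rebuilding PyCUDA)
USE_SHIPPED_BOOST = False
BOOST_INC_DIR = ['/usr/include']
BOOST_LIB_DIR = ['/usr/lib64']
BOOST_PYTHON_LIBNAME = ['boost_python-mt']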
Thank you for the additional pointers also -- they are very helpful.
Thank you for your patience. I guess I should try testing it to the
extreme -- you know, the way people are supposed to conduct themselves
on mailing lists. So I've got the next set of questions!
First, what would it take to make something like this work?
@cu
def vector_sum(x):
    sum(map((lambda xi: xi if xi > 0 else xi * -1), x))
It dumps a bunch of traceback on me, ending with:
"ValueError: visiting unknown node: <_ast.Expr object at 0x2999950>".
I can send the whole thing if you're interested.
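(My own guess, for what it's worth: the body above is a bare expression
with no return, and a statement consisting only of an expression shows up
in the AST as exactly the Expr node the error complains about. So perhaps
the fix is as small as the following -- though I haven't managed to verify
whether the conditional expression inside the lambda is accepted either:)

@cu
def vector_sum(x):
    return sum(map(lambda xi: xi if xi > 0 else -xi, x))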
Second, have you tried to make the Black-Scholes kernel (the one shipped
with the Nvidia SDK) work with Copperhead? It doesn't look like a
line-by-line translation to Copperhead would work in the absence of abs(),
exp(), sqrt(), etc. Do you have suggestions on how to approach this?
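To make the question concrete, the per-option computation I'd want to
express is roughly the following (plain Python here, with the same
polynomial approximation of the cumulative normal as the SDK sample; the
sticking point is that log, exp, sqrt and the CND helper have no @cu
counterparts that I can find):

import math

def cnd(d):
    # Abramowitz & Stegun style polynomial approximation of the
    # cumulative normal distribution, as used in the SDK sample
    a1, a2, a3, a4, a5 = (0.31938153, -0.356563782, 1.781477937,
                          -1.821255978, 1.330274429)
    rsqrt2pi = 0.3989422804014327
    k = 1.0 / (1.0 + 0.2316419 * abs(d))
    w = rsqrt2pi * math.exp(-0.5 * d * d) * (
        k * (a1 + k * (a2 + k * (a3 + k * (a4 + k * a5)))))
    return 1.0 - w if d > 0 else w

def black_scholes_call(s, x, t, r, v):
    # s: spot, x: strike, t: years to expiry, r: risk-free rate, v: volatility
    d1 = (math.log(s / x) + (r + 0.5 * v * v) * t) / (v * math.sqrt(t))
    d2 = d1 - v * math.sqrt(t)
    return s * cnd(d1) - x * math.exp(-r * t) * cnd(d2)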
Regards,
Sajith.
Bryan Catanzaro <bryan.c...@gmail.com> wrote:
> Glad to hear that worked!
>
That was the first thing I tried, but it didn't work; doing map(abs,
x) outside Copperhead did. I've attached the code I've been trying to
run and the traceback, in case you want to see them.
(I realize that numpy.arange() does not generate negative numbers here;
I wasn't exactly interested in that...)
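(Paraphrasing the attached script rather than quoting it, the shape that
did work was roughly this -- the sizes and dtype are illustrative:)

import numpy
from copperhead import *

@cu
def vector_sum(x):
    return sum(x)

# take absolute values on the host, outside Copperhead
data = numpy.array(map(abs, numpy.arange(-1000.0, 1000.0)),
                   dtype=numpy.float64)
total = vector_sum(data)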
I haven't switched to the new bryancatanzaro-copperhead clone repo
yet; maybe I should try doing that?
(For whatever it's worth, a friend of mine and I have been doing a
timing comparison between Accelerate and Copperhead for a class
project. Copperhead seems to be doing really well in our tests,
though it's surely too soon to draw conclusions, since neither of us
is experienced at writing well-performing Haskell and/or Python
and/or GPU programs. Still, I thought you might be interested.)
Thanks,
Sajith.