TypeError: __init__(): incompatible constructor arguments in PySPH simulations with OpenCL


Stephan

Jan 28, 2022, 11:00:38 AM
to pysph-users
Dear all,

I am currently trying to run some code on my GPU, and I have set the whole system up with both CUDA and OpenCL. When I run the PySPH tests I get no failures (although I get around 9000 warnings), and when I run some test code to check CuPy, PyOpenCL and PyCUDA independently, all of them seem to work fine. I also successfully ran the dam-break test case that Prabhu and Marina discussed here in the group.

Now when I run square_droplet.py from the examples, it shows some errors and in some cases the simulation crashes completely. Some schemes seem to work while others don't. I tried the following runs:

  • python square_droplet.py --opencl
    • the simulation worked fine but gave some warnings
  • python square_droplet.py --scheme morris --opencl
    • the simulation behaved the same as the previous one, which makes sense because the morris scheme is the default in square_droplet.py
  • python square_droplet.py --scheme morris --opencl --use-double
    • the simulation gave the same results as the previous one but, as expected, took a little longer than the run without --use-double
  • python square_droplet.py --scheme adami --opencl
    • the simulation finished with some new warnings not seen before and took a very long time
  • python square_droplet.py --scheme adami --opencl --use-double
    • the simulation crashed with a TypeError
The exact terminal output containing the warnings and error messages can be found in the attachments. Could someone help me out here, please?

Kind regards,

Stephan


square_droplet_adami_opencl.txt
square_droplet_morris_opencl_double.txt
square_droplet_adami_opencl_double.txt
square_droplet_morris_opencl.txt

Stephan

Mar 4, 2022, 4:49:22 AM
to pysph-users
Dear developers,

I have more or less figured out what the issue was. Apparently, when running on a GPU it is not possible to assign values to particle properties after a ParticleArray has been created. Furthermore, constants cannot be created and then changed during the simulation, as you would normally do to compute and store reduced properties: for example, calculating the center of mass via a reduce method and overwriting a constant with the result every time step.

Normally I was used to doing this:

def create_particles(self):
    # assumes: import numpy as np
    # assumes: from pysph.base.utils import get_particle_array
    x, y = np.mgrid[dxb2:domain_width:dx, dyb2:domain_height:dy]
    x = x.ravel()
    y = y.ravel()
    m = np.ones_like(x) * volume * rho0
    # ... other arrays ...

    additional_props = ['V', 'alpha']  # plus the other extra properties
    fluid = get_particle_array(name='fluid', x=x, y=y, m=m, h=h,
                               # ... other keyword arguments ...
                               additional_props=additional_props,
                               constants=constants)

    fluid.V[:] = 1./volume
    fluid.alpha[:] = alpha0
    # ... other property assignments ...

    return [fluid]

Now what happens when you do it like this is one of two things: (a) sometimes the simulation seems to start, but when you check the actual values of V and alpha they are still at their default value of 0 and the results are obviously wrong; or (b) the simulation does not even start and gives the TypeError mentioned in the title of this thread. I figured out that the workaround is the following:

def create_particles(self):
    # assumes: import numpy as np
    # assumes: from pysph.base.utils import get_particle_array
    x, y = np.mgrid[dxb2:domain_width:dx, dyb2:domain_height:dy]
    x = x.ravel()
    y = y.ravel()
    m = np.ones_like(x) * volume * rho0
    V = np.ones_like(x) / volume
    alpha = np.ones_like(x) * alpha0
    # ... other arrays ...

    additional_props = []  # the remaining extra properties
    fluid = get_particle_array(name='fluid', x=x, y=y, m=m, h=h,
                               V=V, alpha=alpha,
                               additional_props=additional_props,
                               constants=constants)

    return [fluid]

This is very important for properties that really need a preset value. For example, viscosity is often assigned to each particle at the beginning of a simulation and never updated afterwards; if it were declared as in the first example, it would remain 0 throughout the simulation.
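
A minimal sketch of that pattern for viscosity (hypothetical names: nu is a per-particle kinematic viscosity, nu0 an assumed scalar constant):

nu = np.ones_like(x) * nu0  # preset value baked in at creation time
fluid = get_particle_array(name='fluid', x=x, y=y, m=m, h=h, nu=nu)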

My workaround for reduce operations in GPU computations, for example finding the maximum velocity over all particles, is to add an extra property called max_v and, at every time step, have each particle check whether it exceeded its previous velocity and overwrite the stored value if so. In the post-processing I can then take the maximum of max_v over all particles for every output file, but this is a rather slow workaround, and it makes it impossible to evaluate max_v at every iteration. A sketch of this tracking equation is given below. I hope we can figure out a better way to solve these issues, as it would really benefit the project. The GPU capabilities really speed up the simulations a lot!
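
A minimal sketch of that per-particle tracking equation (TrackMaxVelocity and the max_v property are names I made up for illustration):

from math import sqrt

from pysph.sph.equation import Equation

class TrackMaxVelocity(Equation):
    # Keep, per particle, the largest velocity magnitude seen so far.
    def loop(self, d_idx, d_u, d_v, d_w, d_max_v):
        vmag = sqrt(d_u[d_idx]*d_u[d_idx] + d_v[d_idx]*d_v[d_idx] +
                    d_w[d_idx]*d_w[d_idx])
        if vmag > d_max_v[d_idx]:
            d_max_v[d_idx] = vmag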

Thanks and kind regards,

Stephan

Prabhu Ramachandran

Mar 4, 2022, 12:39:20 PM
to Stephan, pysph-users
Hi Stephan,

I think our documentation of the GPU support is sorely wanting; my apologies for
not doing this carefully. GPU support does require keeping a few points in mind,
and I somehow missed mentioning them anywhere. Here are a few pointers so you
can see how to use the GPU support a bit better.

Look at the tests here:
https://github.com/pypr/pysph/blob/master/pysph/sph/tests/test_acceleration_eval.py

The tests are pretty extensive; we test every feature that is available on both
the CPU and GPU backends, as you can see from the sheer length of that file.
There you can see the minor changes that are needed for GPU support. Search for
"gpu" and you will notice a .gpu attribute. For example, see this test:

https://github.com/pypr/pysph/blob/master/pysph/sph/tests/test_acceleration_eval.py#L684

You will see a pa.gpu.pull('u').

Similarly see this:
https://github.com/pypr/pysph/blob/master/pysph/sph/tests/test_acceleration_eval.py#L910

You will see a pa.gpu.push('u').

This is one major point that we have not documented properly. Basically, if you
choose a GPU option, the particle array has a ".gpu" attribute that manages the
data moving between the host and the device. For the CPU backends, .gpu is None,
so you can use that as a simple check, as in the sketch below.
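
For example (pa is a particle array; the pull call itself is from the tests above):

if pa.gpu is not None:
    # A GPU backend is active: copy 'u' from the device to the host.
    pa.gpu.pull('u')
print(pa.u)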

When you change a value on the host, e.g. with

pa.u[1] = 123.0

you must follow it up with pa.gpu.push('u'). This pushes just that attribute,
and you can pass multiple arguments, as in pa.gpu.push('u', 'v'). If you just
call pa.gpu.push() it will push all the arrays, which can mean a large amount
of data transfer and may not be what you want.

Conversely, when the data changes on the GPU you need to pull it back to the
host, which explains the pa.gpu.pull calls. Once you get this right, the rest
is pretty straightforward.
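
Putting the two together, an illustrative round trip (assuming a GPU backend is active):

pa.u[1] = 123.0   # modify the value on the host
pa.gpu.push('u')  # send just the 'u' array to the device
# ... run a step that updates the data on the GPU ...
pa.gpu.pull('u')  # bring the updated 'u' back to the host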

Apart from this change, you need to be careful when writing reductions, again
keeping these points in mind. The tests check all of these cases and show
examples of them, so they are a good resource. They are also fairly easy to
read (I hope) and show how you can use the framework to do things.
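
For instance, a minimal reduction sketch in the style used in those tests (TotalMass and the total_mass constant are assumed names; check the linked tests for the exact GPU idiom):

from pysph.base.reduce_array import serial_reduce_array
from pysph.sph.equation import Equation

class TotalMass(Equation):
    def reduce(self, dst, t, dt):
        # 'total_mass' is an assumed constant declared on the array.
        dst.total_mass[0] = serial_reduce_array(dst.m, op='sum')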

Outside of the push/pull, we try to make all the methods in the particle array
class do the right thing. If you find specific bugs, please do let us know.

I hope this clarifies things and makes the GPU support easier to use.

cheers,
Prabhu