Hi Mariana,
I am no specialist in hardware (or software, for that matter), but I work with large-scale SPH simulations, and at some point parallelism is a must, so you might consider running your simulations (if large enough) on an HPC cluster if you have access to one. Nonetheless, if you want to buy a new machine for your own use, I would recommend a desktop with at least an 8-core processor (Xeon?) and a minimum of 32 GB of RAM. If you need to generate lots of output, disk space will be an issue, and you might consider more than 2-3 TB of storage. As for GPUs, if you can get a top card with CUDA/OpenCL support, it might be helpful in the future. Finally, get a Linux-based system or create a dual-boot with Linux on your dedicated machine; it will help you much more than a Windows-based one.
I hope this helps a little, and maybe other people with more experience can give more details (names, models) of specific processors, etc.
To add to the excellent points above, with GPUs it all depends
on your budget and requirements. You can get a much cheaper
gaming card like the GeForce 1080 Ti for a fraction of the price
of a P100, but the catch is that it does not perform nearly as
well on double-precision calculations. Its single-precision
performance can be equivalent to that of the much more expensive
Tesla cards. You can see the specs here:
https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units
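To get a rough feel for the single- vs. double-precision gap, here is a small Python sketch. The TFLOPS figures are ballpark numbers from public spec sheets (exact values vary with clocks and board), so treat them as illustrative only:

```python
# Approximate peak throughput in TFLOPS (ballpark figures from public
# spec sheets, not measurements).
specs = {
    # name: (single-precision TFLOPS, double-precision TFLOPS)
    "GeForce 1080 Ti": (11.3, 0.35),  # DP is ~1/32 of SP on consumer Pascal
    "Tesla P100":      (9.3, 4.7),    # DP is ~1/2 of SP on the HPC part
}

for name, (sp, dp) in specs.items():
    print(f"{name}: SP={sp} TFLOPS, DP={dp} TFLOPS, DP/SP={dp / sp:.3f}")
```

The DP/SP ratio is the key number: roughly 1/32 on the gaming card versus 1/2 on the Tesla, which is why the price comparison only makes sense once you know how much double precision your problem actually needs.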
cheers,
Prabhu
It's been a while since I last asked, but I am still unsure about the GPU. Does it then make sense to get a gaming GPU with something like 300 GFLOPS of double precision if a Tesla with TFLOPS is beyond the budget? Is double-precision processing power the only feature that matters? In that case AMD GPUs have a much better price/performance ratio...
We recently bought a few gaming GPUs, specifically 1050 and 1070
Ti's -- they are old, but my budget is limited and our goal is to
make sure that PySPH performs reasonably on this hardware. They
are quite fast though (the equivalent of a 40-50 core CPU for a
large problem). With this hardware, double precision slows
things down by a factor of 1.7 or so, which is not too bad
considering that the cards are cheap. The performance is
comparable to a P100; I think a 1080 Ti is very close in
performance to a P100. I am sure people with hand-tuned codes
may be able to extract more performance from these. My numbers
are all based on some simple tests with PySPH. It is true that
the gaming GPUs are in theory much worse at double precision than
the Tesla ones; however, most of our CFD problems are not compute
limited but limited by memory bandwidth, so a simplistic
comparison is not enough. You also need to make sure you have
enough particles to feed the GPU; with too few particles it may
not give you any speed-up.
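The memory-bandwidth point can be made concrete with a back-of-the-envelope roofline estimate. The sketch below uses hypothetical round numbers (not measurements of any particular card or PySPH kernel) just to show why the double-precision penalty often disappears in practice:

```python
# Roofline-style estimate: a kernel is memory-bound when its arithmetic
# intensity (FLOPs per byte moved) is below the machine balance
# (peak FLOP/s divided by peak bytes/s).

def machine_balance(peak_gflops, bandwidth_gbs):
    """FLOPs the card can perform per byte it can move from memory."""
    return peak_gflops / bandwidth_gbs

def is_memory_bound(flops_per_particle, bytes_per_particle,
                    peak_gflops, bandwidth_gbs):
    intensity = flops_per_particle / bytes_per_particle
    return intensity < machine_balance(peak_gflops, bandwidth_gbs)

# Hypothetical round numbers for a gaming card: 400 double-precision
# GFLOPS and 480 GB/s of memory bandwidth -> balance of ~0.83 FLOPs/byte.
# An SPH neighbour loop doing, say, 200 FLOPs per particle while
# streaming 600 bytes of neighbour data has intensity ~0.33 FLOPs/byte,
# well below the balance point.
print(is_memory_bound(200, 600, 400, 480))  # True: memory bound
```

When the kernel is memory bound like this, even a card with a poor double-precision rating spends most of its time waiting on memory, which is consistent with the modest 1.7x slowdown above rather than the 32x the raw spec ratio would suggest.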
We do not yet support multiple GPUs with PySPH but hope to
support that in the future. Unfortunately, things are a bit
tricky when it comes to finding the right hardware. Your best
bet may be to test your own code and then decide if it works well
enough for you.
cheers,
Prabhu
Thanks Prabhu, your experience is very helpful! Is there a particular reason why you prefer Nvidia graphics cards? I am thinking about the AMD Radeon VII, which has an interesting performance/price ratio. It supports OpenCL 2.0; this should also be useful for the PyOpenCL libraries, right?
No particular reason; I think these Nvidia cards were simply
easily available. I haven't tried the Radeon VII. I have a
Radeon 560 on my MacBook; it is not too fast, but it does work.
I would need to test the Radeon VII, but yes, if it supports
OpenCL it should work.
Regards,
Prabhu
Well, in this case I will give it a try :) If you have a standardized test example and are interested, I could run the tests once I get the system up and running.
Sure, once you install PyOpenCL you should install compyle from master and PySPH also from master. Here are some quick instructions, assuming you have a suitable Python environment -- a miniconda env works very well with the latest Python 3.7, for example.
pip install cyarray pyopencl
git clone https://github.com/pypr/pysph
cd pysph
pip install -r requirements.txt
python setup.py develop
Once this is all set up you should be able to run the following:
pysph run cube --opencl --np 1e6 --tf 2e-3 --disable-output
This will take 20 timesteps with 1 million particles and not dump any output; it's a silly test but useful, as you can just assess raw performance.
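To turn the reported wall-clock time into a single number you can compare across machines, one option is particle-steps per second. This tiny helper is just a convenience I'm sketching here, not part of PySPH:

```python
def particle_steps_per_second(n_particles, n_steps, wall_time_s):
    """Throughput metric: particle updates performed per second."""
    return n_particles * n_steps / wall_time_s

# e.g. 1e6 particles, 20 timesteps, finishing in 10 seconds of wall time:
print(particle_steps_per_second(1e6, 20, 10.0))  # 2000000.0
```

The same metric works for the OpenMP run below, so you can compare the GPU and CPU directly even if the two runs use different particle counts or step counts.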
You can compare with the CPU by running, for instance:
pysph run cube --openmp --np 1e6 --tf 2e-3 --disable-output
You could run a more realistic case if you want, for example:
pysph run dam_break_3d --opencl --tf 0.5
or
pysph run sphysics.dam_break --opencl --tf 0.5
The 3D benchmarks perform much better, and we are still optimizing the GPU performance, but it does work. You can change the --tf option to suit your needs. The new progress bar is pretty handy for getting a quick sense of the performance. The default on the GPU is single precision. For double precision you can do:
pysph run sphysics.dam_break --opencl --tf 0.5 --use-double
HTH.
cheers,
Prabhu