So far I've read through the "CUDA Support" issue, the "Roadmap for inclusion of GPU implementation of matrix free in Deal.II" issue, the Doxygen documentation for classes in the CUDAWrappers namespace, and the manuscript by Karl Ljungkvist. Are there any other pages I should be looking at?
My understanding from these pages is that deal.II has partial support for matrix-free computations on CUDA GPUs. Currently, calculations can be done with scalar-valued (but not vector-valued) finite elements, and adaptively refined meshes are supported.
A few (somewhat inter-related) questions:
1). Do all of the tools exist to create a GPU version of step-48? Has anyone done so?
2). What exactly would be involved in creating a GPU version of step-48? Is it just a matter of swapping the CPU Vector, MatrixFree, and FEEvaluation classes for their GPU counterparts, plus packaging some data (and the local apply functions?) into a CUDAWrappers::MatrixFree<dim, Number>::Data struct?
3). Most of the discussions seemed to revolve around linear solves. For something like step-48 with explicit updates, will the current paradigm work well? Or would that require shuttling data between the GPU and CPU every time step, causing too much overhead? (I know that in general GPUs can work very well for explicit codes.)
Steve,
Just to add to what Bruno said: explicit time integration typically requires only a subset of what an iterative solver needs (no decision making such as the convergence test in conjugate gradient, and no preconditioner), so running that should be pretty straightforward.
Best,
Martin
--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see https://groups.google.com/d/forum/dealii?hl=en