switching from Epetra to Tpetra in a deal.II application

Massimo Bernaschi

Dec 24, 2018, 2:13:22 AM
to deal.II User Group
First of all, I beg your pardon if the question has been asked before or somewhere else.
I have inherited a fully working deal.II application using Trilinos and Epetra for the solution of the system (using the GMRES method).
I would like to port the application to GPU (currently it runs on multiple CPUs using MPI). Initially it would be
enough to use a single GPU (for relatively small problems), but the final goal is to use multiple GPUs.
I understand that Trilinos supports CUDA through the Tpetra package, but I am familiar with neither deal.II nor Trilinos
(although I am very familiar with multi-GPU programming), and I wonder whether there is a (relatively) easy path to migrate from Epetra to Tpetra
(I could not find specific documentation in Trilinos) and then activate the CUDA support in Tpetra.
Has anyone had a similar problem, and how did you manage it?
Is there a specific deal.II wrapper for Trilinos using Tpetra? 
Do you have any other suggestions or examples of how to use CUDA for the solution of the system in a deal.II application that uses Trilinos/Epetra?
Thanks in advance for any help/pointers, and best Season's Greetings,
Massimo

Daniel Arndt

Dec 24, 2018, 4:23:14 AM
to deal.II User Group
Massimo,

We are intending to add Tpetra support in deal.II, but this is not yet available.
What you can use already are the CUDA wrappers deal.II itself provides, i.e. the cuSPARSE-based CUDAWrappers::SparseMatrix together with LinearAlgebra::distributed::Vector (with MemorySpace::CUDA).
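
Just to give an idea, creating and using a GPU-backed vector looks roughly like the sketch below. It is untested, the names (v_dev, v_host, the vector size) are only illustrative, and the classes come from the current development sources, so the details may change:

#include <deal.II/base/memory_space.h>
#include <deal.II/base/mpi.h>
#include <deal.II/lac/la_parallel_vector.h>
#include <deal.II/lac/read_write_vector.h>

using namespace dealii;

int main(int argc, char **argv)
{
  Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);

  const unsigned int n = 100;

  // A distributed vector whose entries live in GPU memory.
  LinearAlgebra::distributed::Vector<double, MemorySpace::CUDA> v_dev(n);

  // Host<->device transfers are explicit and go through a ReadWriteVector.
  LinearAlgebra::ReadWriteVector<double> v_host(n);
  for (unsigned int i = 0; i < n; ++i)
    v_host[i] = i;
  v_dev.import(v_host, VectorOperation::insert);

  // Vector operations are now executed on the device.
  v_dev *= 2.0;
  const double nrm = v_dev.l2_norm();
  (void)nrm;
}

The point is that there is no element-wise access from the host: data is moved to and from the device explicitly via ReadWriteVector, and all arithmetic then runs on the GPU.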

Best,
Daniel

Massimo Bernaschi

Dec 24, 2018, 10:07:20 AM
to dea...@googlegroups.com
Thanks Daniel. Do you mean that I can use CUDA through the cuSPARSE wrappers to keep the matrix and vectors on the GPU (and then implement GMRES myself),
or is it still possible to use the Trilinos solvers and have them run automagically on the GPU? Please forgive me if this is a dumb question.
As I already mentioned, I am familiar with neither deal.II nor Trilinos...
Thanks again and best regards,
Massimo

--
Massimo Bernaschi, Istituto Applicazioni del Calcolo (IAC-CNR)
Via dei Taurini 19, 00185 Roma, Italy
e-mail: massimo....@cnr.it | phone: +39 06 49937350 | fax: +39 06 4404306
Skype: m.bernaschi
See http://www.iac.cnr.it/~massimo for my GPG public key, or check the keyserver (keyserver.linux.it):
pub 1024/CAA3FB48 2001/01/04 Massimo Bernaschi <mas...@iac.rm.cnr.it>
Key fingerprint = 3EFF 7AFF F8A4 F34E 382B  DD81 57F3 700A CAA3 FB48

Daniel Arndt

Dec 24, 2018, 3:26:43 PM
to deal.II User Group
Massimo,

> Thanks Daniel. Do you mean that I can use CUDA through the cuSPARSE wrappers to keep the matrix and vectors on the GPU (and then implement GMRES myself),
> or is it still possible to use the Trilinos solvers and have them run automagically on the GPU? Please forgive me if this is a dumb question.
> As I already mentioned, I am familiar with neither deal.II nor Trilinos...

You can use the cuSPARSE wrappers, and deal.II's SolverGMRES should work out of the box (see also tests/cuda/solver_03.cu).
Currently, these classes don't work with multiple MPI processes or multiple CUDA devices.
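
To make that concrete, a device-side solve in the spirit of that test looks roughly like the following. Take it as an untested sketch rather than a copy of the test: the little 1-D Laplacian assembled here is just a stand-in for whatever matrix your application already builds, and the file has to be compiled as CUDA code against a deal.II configured with DEAL_II_WITH_CUDA.

#include <deal.II/base/cuda.h>
#include <deal.II/lac/cuda_sparse_matrix.h>
#include <deal.II/lac/cuda_vector.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/precondition.h>
#include <deal.II/lac/read_write_vector.h>
#include <deal.II/lac/solver_control.h>
#include <deal.II/lac/solver_gmres.h>
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/sparsity_pattern.h>

using namespace dealii;

int main()
{
  const unsigned int n = 32;

  // Assemble a simple 1-D Laplacian on the host, as a stand-in for
  // the matrix the application already has.
  DynamicSparsityPattern dsp(n, n);
  for (unsigned int i = 0; i < n; ++i)
    {
      dsp.add(i, i);
      if (i > 0)     dsp.add(i, i - 1);
      if (i < n - 1) dsp.add(i, i + 1);
    }
  SparsityPattern sp;
  sp.copy_from(dsp);

  SparseMatrix<double> A_host(sp);
  for (unsigned int i = 0; i < n; ++i)
    {
      A_host.set(i, i, 2.0);
      if (i > 0)     A_host.set(i, i - 1, -1.0);
      if (i < n - 1) A_host.set(i, i + 1, -1.0);
    }

  // Copy the matrix to the GPU; cuSPARSE is used underneath.
  Utilities::CUDA::Handle cuda_handle;
  CUDAWrappers::SparseMatrix<double> A_dev(cuda_handle, A_host);

  // Right-hand side and solution vectors living in GPU memory,
  // filled through a host-side ReadWriteVector.
  LinearAlgebra::ReadWriteVector<double> rw(n);
  for (unsigned int i = 0; i < n; ++i)
    rw[i] = 1.0;
  LinearAlgebra::CUDAWrappers::Vector<double> b_dev(n), x_dev(n);
  b_dev.import(rw, VectorOperation::insert);

  // deal.II's own GMRES, instantiated for the device vector type.
  SolverControl control(200, 1e-10);
  SolverGMRES<LinearAlgebra::CUDAWrappers::Vector<double>> solver(control);
  solver.solve(A_dev, x_dev, b_dev, PreconditionIdentity());
}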

These are not dumb questions. The CUDA support is pretty new and we should certainly improve the description of its capabilities.

Best,
Daniel