deal.II User Group
Labels
basics
boundary_conditions
bug
complex_number
cpp
cuda
development
dg_methods
eclipse
eigen_problem
electro_magnetics
fe_spaces
feature_request
fluid_mechanics
fluid_structure_interation
h-refinement
hp_adaptivity
installation
laplace_poisson
linear_algebra
mac
manifold
matrix-free
mesh_generator
meshworker
mpi
multigrid
multithreading
news
p4est
parameter_handler
petsc
post-processing
pre-processing
slepc
solid_mechanics
suggestion
thermo_mechanics
time_integration
trilinos
tutorials
windows
Conversations 1–30 of 5084
Welcome to the deal.II mailing list. If you are new to the mailing list, please take the time to read these posts: "Getting started and posting guidelines for new users" and "deal.II discussion group: Feedback and guidelines".
deal.II website: http://dealii.org
Github: https://github.com/dealii/dealii

Q&A · Installation on cray XC50 | linking to petsc, lapack and blas libraries with different names
vachan potluri, …, Bruno Turcksin · 13 messages · 2/14/20 · labels: installation, petsc
"Here is a summary of the installation process on Cray XC50. I have configured deal.II with MPI, …"

Q&A · "PETSc installation does not include a copy of the hypre package" while running the step-40 program
Ihar Suvorau, …, David Wells · 12 messages · 1/30/20 · labels: installation, mac, petsc, tutorials
"Yup - deal.II did not pick up HYPRE or MUMPS even though you configured PETSc with both (which can be …"

Q&A · Different shape representations with manifolds on the same triangulation
Juan Carlos Araujo Cabarcas, Jean-Paul Pelteret · 5 messages · 1/20/20 · labels: hp_adaptivity, manifold, petsc
"Dear Jean-Paul, thanks again for your support and kind suggestions. I have worked with …"

Q&A · Vector conversion problem: between dealii::PETScWrappers::MPI::Vector and a PETSc Vec
Zhidong Brian Zhang, David Wells · 5 messages · 12/10/19 · labels: basics, mpi, petsc
"It makes much sense! Right now, it works by using the deprecated function (generating MPI::Vector …"

Q&A · petsc & trilinos blocksparsematrix reinit with zero locally owned components
richard....@gmx.at, Daniel Arndt · 3 messages · 7/31/19 · labels: basics, fluid_mechanics, mpi, petsc, trilinos
"Dear Daniel, thank you very much for your quick and concise answer! Just for the record & other …"

Q&A · Compatibility of Petsc with step 18
Ramprasad R, …, Bruno Turcksin · 9 messages · 7/19/19 · labels: cpp, mpi, petsc
"Hi Daniel, The problem is now solved. The issue was that, the bash rc did not have the location of …"

Q&A · Porting tutorials to PETSc from Trilinos
Franco Milicchio, …, Daniel Arndt · 11 messages · 7/30/19 · labels: petsc, trilinos, tutorials
"Thanks Daniel, now it runs. Of course it won't converge, lacking preconditiones, but this is for …"

Q&A · Equivalent option for local_range() for Trilinos vectors
Vivek Kumar, Daniel Arndt · 3 messages · 7/11/19 · labels: boundary_conditions, petsc, trilinos
"Thanks Daniel, it worked. On Wednesday, July 10, 2019 at 9:59:08 PM UTC-4, Daniel Arndt wrote: Vivek, …"

Q&A · How to manually create sparsity pattern for PETSc sparsity matrix in parallel
Pai Liu, …, Wolfgang Bangerth · 7 messages · 4/4/19 · labels: mpi, petsc
"Hi Wolfgang, Thank you so much for your kind help. I tried the dynamic sparsity pattern, and with the …"
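
For readers skimming the index: the snippet suggests this thread settled on the dynamic sparsity pattern. The following is a minimal illustrative sketch in the style of the step-40 tutorial, not the poster's actual code; the mesh, element and dimension are placeholders, and header and function names follow recent deal.II releases (older releases spell a few of them differently).

#include <deal.II/base/index_set.h>
#include <deal.II/base/mpi.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/petsc_sparse_matrix.h>
#include <deal.II/lac/sparsity_tools.h>

using namespace dealii;

int main(int argc, char **argv)
{
  Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);
  const MPI_Comm mpi_communicator = MPI_COMM_WORLD;

  // A small distributed mesh and a Q1 space, just to have DoFs to work with.
  parallel::distributed::Triangulation<2> triangulation(mpi_communicator);
  GridGenerator::hyper_cube(triangulation);
  triangulation.refine_global(5);

  FE_Q<2>       fe(1);
  DoFHandler<2> dof_handler(triangulation);
  dof_handler.distribute_dofs(fe);

  const IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs();
  IndexSet       locally_relevant_dofs;
  DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);

  // Collect couplings locally, ship rows owned by other processes to their
  // owners, then hand the finished pattern to the PETSc matrix.
  DynamicSparsityPattern dsp(locally_relevant_dofs);
  DoFTools::make_sparsity_pattern(dof_handler, dsp);
  SparsityTools::distribute_sparsity_pattern(dsp,
                                             locally_owned_dofs,
                                             mpi_communicator,
                                             locally_relevant_dofs);

  PETScWrappers::MPI::SparseMatrix system_matrix;
  system_matrix.reinit(locally_owned_dofs,
                       locally_owned_dofs,
                       dsp,
                       mpi_communicator);
}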

Q&A · Applying boundary values in parll::distr:triang setting for two dof_handler Sparsematrux
gabriel...@koeln.de, …, Gabriel Peters · 17 messages · 4/5/19 · labels: boundary_conditions, p4est, petsc
"Gabriel Peters Endenicher Str. 310 53121 Bonn 00491525/5478185 Gabriel...@koeln.de Am 05.04.19 um …"

Q&A · Is parallel direct solver extremely slow?
Pai Liu, …, David F · 16 messages · 3/26/19 · labels: mpi, petsc, solid_mechanics
"Dear Pai, I'm very interested in solving a problem with characteristics very similar to yours. …"

Q&A · Getting RHS values at nodes with DBC
RAJAT ARORA, …, Giorgos Kourakos · 10 messages · 9/12/19 · labels: mpi, petsc
"As always there is a "deal.ii" way of doing the calculations. The FEValues:: …"

Q&A · Dealii Installtion fails because it uses the wrong MPI Version of the intel compiler. It uses debug_mt instead of release_mt.
Eva Lilje, …, 张嘉宁 · 3 messages · 8/5/19 · labels: petsc
"Hi, recently, I have the same problem. When I complie the PETsc, it link the debug_mt/linmpi.so. So I …"

Q&A · KellyErrorEstimator failure when running multiple processes
mrjonm...@gmail.com, Daniel Arndt · 3 messages · 6/28/18 · labels: basics, mpi, petsc
"Thank you. I don't know how I missed item 1. That's a bit embarrassing. Your first suggestion …"

Q&A · Deprecated function PETScWrappers::VectorBase::ratio()
Feimi Yu, …, Wolfgang Bangerth · 7 messages · 5/15/18 · labels: petsc
"Oh, yes. Sorry I did not say it clearly. What I did is using an identity vector whose elements are …"

Q&A · Mac OS X 10.13.4 Installation problem
mrjonm...@gmail.com, Denis Davydov · 2 messages · 5/11/18 · labels: installation, mac, p4est, petsc
"Hi Jon, Try this .dmg https://github.com/luca-heltai/dealii/releases/tag/v9.0.0-rc1 If that won't …"

Q&A · Reason for SolverGMRES being slower in parallel?
Feimi Yu, …, Weixiong Zheng · 8 messages · 4/8/18 · labels: linear_algebra, petsc
"Hi Weixiong, I did consider this problem so I wanted to avoid using a fake ILU like BlockJacobi. As I …"

Contributing back a small feature enhancement (MGTransferPrebuilt parallel PETSc support)
Alexander Knieps, Wolfgang Bangerth · 4 messages · 3/20/18 · labels: development, multigrid, petsc
"On 03/20/2018 10:43 AM, Alexander Knieps wrote: > > I think that makes sense. Are the other MG …"

Q&A · Iterating over all the entries in a PETScWrapper::MPI::SparseMatrix in parallel
Feimi Yu, Wolfgang Bangerth · 13 messages · 3/21/18 · labels: linear_algebra, mpi, petsc
"Got it. Thank you so much! Thanks, Feimi On Wednesday, March 21, 2018 at 10:51:24 AM UTC-4, Wolfgang …"

Q&A · Problem with parallelization when using hyper_cube_slit
Roberto Porcù, …, Timo Heister · 7 messages · 4/17/18 · labels: boundary_conditions, petsc
"Dear Timo. thank you very much. I removed the check on the cell when setting the boundary indicators …"

Q&A · PetSc Hybrid MPI-OPENMP Parallelization with Spack Dealii
Sukhminder Singh, Denis Davydov · 2 messages · 1/24/18 · labels: installation, petsc
"Hi, On Wednesday, January 24, 2018 at 9:13:06 PM UTC+1, Sukhminder Singh wrote: I installed Spack …"

Q&A · surprising results from DoFHandler.locally_owned_dofs() calls in the fully distributed triangulation
Marek Čapek, Denis Davydov · 2 messages · 12/16/17 · labels: bug, petsc, trilinos
"Hi, On Friday, December 15, 2017 at 11:14:24 PM UTC+1, Marek Čapek wrote: Hello, I have downloaded …"

Q&A · Temporary distributed vectors
Jie Cheng, Wolfgang Bangerth · 11 messages · 12/18/17 · labels: mpi, petsc
"On 12/17/2017 10:10 PM, Jie Cheng wrote: > > The way to deal with the sparsity pattern is to …"
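
A recurring distinction in threads like this one is between a writable, purely locally owned PETSc vector and a read-only ghosted one. The sketch below is illustrative only: it assumes IndexSets and a communicator set up as in step-40, the helper function name is made up for this example, and it paraphrases common practice rather than quoting the thread.

#include <deal.II/base/index_set.h>
#include <deal.II/lac/petsc_vector.h>

using namespace dealii;

// Hypothetical helper, only to give the snippet a self-contained home.
void make_temporaries(const IndexSet &locally_owned_dofs,
                      const IndexSet &locally_relevant_dofs,
                      const MPI_Comm  mpi_communicator)
{
  // Writable temporary: each process stores only its locally owned entries;
  // this is the kind of vector one assembles into or hands to a solver.
  PETScWrappers::MPI::Vector temporary(locally_owned_dofs, mpi_communicator);

  // Ghosted vector: also stores locally relevant (ghost) entries and is
  // read-only; use it to evaluate the solution on locally owned cells.
  PETScWrappers::MPI::Vector ghosted(locally_owned_dofs,
                                     locally_relevant_dofs,
                                     mpi_communicator);

  // Assigning a writable vector to a ghosted one communicates ghost values.
  ghosted = temporary;
}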

Q&A · PETSc Sparse LU Preallocation
Lucas Campos, …, Jie Cheng · 10 messages · 12/17/17 · labels: linear_algebra, mpi, petsc
"Hi Lucas and Wolfgang I have something to say on this issue because I think it might be helpful to …"

Q&A · General questions in distributed parallelization
Jie Cheng, Wolfgang Bangerth · 3 messages · 12/6/17 · labels: mpi, p4est, petsc
"Hi Wolfgang Thank you so much for the clear answer! Jie On Wednesday, December 6, 2017 at 3:14:46 PM …"

Q&A · Errors when using MUMPS/PETSc LU
Lucas Campos, …, Timo Heister · 9 messages · 11/30/17 · labels: linear_algebra, mpi, multithreading, petsc
">> Then you have to simplify your problem as much as possible until we >> can reproduce …"

Q&A · LU Decomposition on multiple processors
Lucas Campos, …, Timo Heister · 4 messages · 11/13/17 · labels: mpi, multithreading, petsc
"> However I still do not understand what that line in the documentation means. > Maybe it is a …"
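
The two LU threads above revolve around running a direct LU solve across several processes, which in deal.II is usually done through the MUMPS interface of the PETSc wrappers. A minimal sketch, assuming PETSc was configured with MUMPS and that the matrix and vectors were built elsewhere; the function name is a placeholder, not code from the threads.

#include <deal.II/lac/petsc_solver.h>
#include <deal.II/lac/petsc_sparse_matrix.h>
#include <deal.II/lac/petsc_vector.h>
#include <deal.II/lac/solver_control.h>

using namespace dealii;

void solve_direct(const PETScWrappers::MPI::SparseMatrix &system_matrix,
                  PETScWrappers::MPI::Vector             &solution,
                  const PETScWrappers::MPI::Vector       &system_rhs,
                  const MPI_Comm                          mpi_communicator)
{
  // A direct solver ignores iteration counts and tolerances; SolverControl
  // is only needed to satisfy the solver interface.
  SolverControl solver_control;
  PETScWrappers::SparseDirectMUMPS solver(solver_control, mpi_communicator);
  solver.solve(system_matrix, solution, system_rhs);
}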

Q&A · CellDataStorage with mesh refinement
Frederik S., …, Jean-Paul Pelteret · 9 messages · 11/10/17 · labels: h-refinement, mpi, multithreading, petsc
"Hey Jean-Paul! Sorry it took so long to answer, I was on a conference this week and didn't come …"

Q&A · SolutionTranfer with PETScWrappers::MPI::Vector
Carlo Marcati, Bruno Turcksin · 5 messages · 10/18/17 · labels: mpi, petsc
"Dear Bruno, thank you. I ended up using prepare_for_pure_refinement() and refine_interpolate(), and …"

Q&A · adaptive mesh refinement doubts
RAJAT ARORA, Wolfgang Bangerth · 4 messages · 10/3/17 · labels: boundary_conditions, cpp, petsc, tutorials
"Rajat, > 1. My problem involves the mesh movement in every time step. But with > adaptive mesh …"