deal.II User Group
Labels: basics, boundary_conditions, bug, complex_number, cpp, cuda, development, dg_methods, eclipse, eigen_problem, electro_magnetics, fe_spaces, feature_request, fluid_mechanics, fluid_structure_interation, h-refinement, hp_adaptivity, installation, laplace_poisson, linear_algebra, mac, manifold, matrix-free, mesh_generator, meshworker, mpi, multigrid, multithreading, news, p4est, parameter_handler, petsc, post-processing, pre-processing, slepc, solid_mechanics, suggestion, thermo_mechanics, time_integration, trilinos, tutorials, windows
Welcome to the deal.II mailing list. If you are new to the mailing list, please take the time to read these posts: "Getting started and posting guidelines for new users" and "deal.II discussion group: Feedback and guidelines".
deal.II website: http://dealii.org
GitHub: https://github.com/dealii/dealii
Conversations (1–30 of 5077)
Paras Kumar, Wolfgang Bangerth · 3 messages · 5/21/20 · Q&A · mpi
Broadcast arbitrary types
Dear Wolfgang, Thank you for the reply. I will try to add it soon. Best, Paras On Thursday, May 21, …
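A minimal sketch, not taken from the thread: one hand-rolled way to broadcast an arbitrary serializable object, combining deal.II's Utilities::pack()/unpack() with plain MPI_Bcast calls. The helper name broadcast_object is hypothetical.

```cpp
#include <deal.II/base/utilities.h>

#include <mpi.h>

#include <vector>

// Broadcast any object that deal.II's Utilities::pack() can serialize
// (i.e. anything boost::serialization understands) from 'root' to all
// ranks of 'comm'.
template <typename T>
T broadcast_object(const T &object, const MPI_Comm comm, const int root = 0)
{
  int rank = 0;
  MPI_Comm_rank(comm, &rank);

  // Serialize on the root only, then ship the buffer size and contents.
  std::vector<char> buffer;
  if (rank == root)
    buffer = dealii::Utilities::pack(object);

  unsigned long size = buffer.size();
  MPI_Bcast(&size, 1, MPI_UNSIGNED_LONG, root, comm);

  buffer.resize(size);
  MPI_Bcast(buffer.data(), static_cast<int>(size), MPI_CHAR, root, comm);

  // Deserialize on every rank (harmless on the root as well).
  return dealii::Utilities::unpack<T>(buffer);
}
```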
Konrad Simon, …, heena patel · 7 messages · 9/27/20 · Q&A · fe_spaces, mpi
Evaluating FE-solution on distributed mesh, semi-Lagrangian method
Hi Konrad, I have the following suggestion. You can use (1) https://www.dealii.org/current/doxygen/ …
Doug Shi-Dong, …, Bruno Turcksin · 5 messages · 4/24/20 · Q&A · mpi, trilinos
LinearOperator MPI Parallel Vector
On Thursday, April 23, 2020 at 4:50:40 AM UTC-4, peterrum wrote: Regarding GPU: currently there is …
Michał Wichrowski · 3/21/20 · development, mpi
Coarse direct solver for MatrixFree (also block version)
Dear all, I've wrote a interface for Trilinos direct solver so that it may be used as coarse …
Ahmad Shahba, Wolfgang Bangerth · 3 messages · 3/25/20 · Q&A · basics, boundary_conditions, mpi
Imposing Dirichlet-type conditions via AffineConstraints: Expected behavior or bug?
Thanks Wolfgang for your help! I tried it out and it worked. Regards, Ahmad On Fri, Mar 20, 2020 at 7 …
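For readers searching this topic, a minimal sketch, not taken from the thread, of collecting Dirichlet-type conditions in an AffineConstraints object. Assumptions: a scalar field on a distributed DoFHandler, homogeneous Dirichlet data on boundary id 0.

```cpp
#include <deal.II/base/function.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/numerics/vector_tools.h>

template <int dim>
dealii::AffineConstraints<double>
make_dirichlet_constraints(const dealii::DoFHandler<dim> &dof_handler)
{
  using namespace dealii;

  // Constraints must cover all locally relevant (owned + ghost) DoFs.
  IndexSet locally_relevant_dofs;
  DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);

  AffineConstraints<double> constraints(locally_relevant_dofs);
  DoFTools::make_hanging_node_constraints(dof_handler, constraints);

  // Interpolate u = 0 on boundary id 0 into the constraints object.
  VectorTools::interpolate_boundary_values(
    dof_handler, 0, Functions::ZeroFunction<dim>(), constraints);

  constraints.close();
  return constraints;
}
```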
vachan potluri, …, Wolfgang Bangerth · 8 messages · 2/10/20 · Q&A · installation, mpi
deal.II installation on cray XC50 giving MPI_VERSION=0.0
On 2/7/20 11:43 PM, vachan potluri wrote: > I really appreciate and value your involvement in this …
Chaitanya Dev, …, Marc Fehling · 5 messages · 2/20/20 · Q&A · hp_adaptivity, mpi
Error in make_hanging_node_constraints() while using parallel::distributed::Triangulation with hp::DoFHandler
Hi Marc, Thank you for providing the fix to the problem in #8365. I am happy that my code was useful …
Feimi Yu, …, Daniel Arndt · 8 messages · 1/28/20 · Q&A · mpi
Instantiation problem for Utilities::MPI::sum (const ArrayView< const T > &values, const MPI_Comm &mpi_communicator, const ArrayView< T > &sums)
Hi Daniel, Thanks for the solution! From my understanding, it's picking template<typename T, …
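A minimal sketch, not from the thread, that calls the ArrayView overload of Utilities::MPI::sum named in the subject line to form element-wise sums of a local array across all MPI ranks.

```cpp
#include <deal.II/base/array_view.h>
#include <deal.II/base/mpi.h>

#include <vector>

void sum_across_ranks(const MPI_Comm comm)
{
  std::vector<double> local_values = {1.0, 2.0, 3.0};
  std::vector<double> global_sums(local_values.size());

  // Build ArrayView<const double> / ArrayView<double> explicitly so the
  // intended template instantiation is selected.
  dealii::Utilities::MPI::sum(
    dealii::ArrayView<const double>(local_values.data(), local_values.size()),
    comm,
    dealii::ArrayView<double>(global_sums.data(), global_sums.size()));
}
```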
Ellen M. Price, …, luca.heltai · 6 messages · 12/22/19 · Q&A · basics, mpi
Parallelizing step-33 with MPI
Dear Ellen, you may want to compare with this: https://github.com/luca-heltai/dealii/pull/91/files# …
Zhidong Brian Zhang, David Wells · 5 messages · 12/10/19 · Q&A · basics, mpi, petsc
Vector conversion problem: between dealii::PETScWrappers::MPI::Vector and a PETSc Vec
It makes much sense! Right now, it works by using the deprecated function (generating MPI::Vector …
vachan potluri, …, Matthias Maier · 5 messages · 11/25/19 · Q&A · linear_algebra, mpi
Is a call to compress() required after scale()?
On Mon, Nov 25, 2019, at 00:23 CST, vachan potluri <vachanpo...@gmail.com> wrote: > I …
vachan potluri, …, Doug Shi-Dong · 4 messages · 10/10/19 · Q&A · dg_methods, mpi
Query regarding DoFTools::dof_indices_with_subdomain_association()
Hello Vachan, Sounds like you're implementing nodal DG, hence why you only need values and …
Bruno Blais, Wolfgang Bangerth · 5 messages · 9/4/19 · Q&A · linear_algebra, mpi, trilinos
Difference between BlockDynamicSparsityPattern and TrilinosWrappers::BlockSparsityPattern
Dear Wolfgang, Thank you, everything is clear now and i managed to accomplish what I wanted. Thanks! …
Maxi Miller, Timo Heister · 3 messages · 9/2/19 · Q&A · meshworker, mpi
VectorTools::Compress() fails after mesh_loop and distribute_local_to_global
Will test that, just need to recompile my debug-version of deal.II On Saturday, August 31, 2019 18:17: …
张嘉宁, Wolfgang Bangerth · 2 messages · 8/17/19 · Q&A · boundary_conditions, mpi, multigrid, solid_mechanics, suggestion
When Lame and nu are big number, the result is always zero.
On 8/17/19 2:49 AM, 张嘉宁 wrote: > I am the new one here, and recently I test a program solving …
richard....@gmx.at, Daniel Arndt · 3 messages · 7/31/19 · Q&A · basics, fluid_mechanics, mpi, petsc, trilinos
petsc & trilinos blocksparsematrix reinit with zero locally owned components
Dear Daniel, thank you very much for your quick and concise answer! Just for the record & other …
Maxi Miller, Wolfgang Bangerth · 11 messages · 7/23/19 · Q&A · mpi, multigrid
Code tries to access DoF indices on artificial cells, even though those cells should not be artificial
On 7/22/19 5:23 AM, 'Maxi Miller' via deal.II User Group wrote: > Furthermore, based on my …
Ramprasad R, …, Bruno Turcksin · 9 messages · 7/19/19 · Q&A · cpp, mpi, petsc
Compatibility of Petsc with step 18
Hi Daniel, The problem is now solved. The issue was that, the bash rc did not have the location of …
Reza Rastak, Daniel Arndt · 6 messages · 8/5/19 · Q&A · boundary_conditions, mpi
constraining dofs across distributed mesh
Thank you Danial for the explanation. I used the IndexSet::add_index() method and its seems to be …
insaneSwami, Bruno Turcksin · 6 messages · 6/3/19 · Q&A · installation, mpi
trouble configuring lapack over a server
Swami, On Mon, June 3, 2019 at 12:11, insaneSwami <manuj...@gmail.com> wrote: > If I …
Mathias Anselmann, …, Daniel Arndt · 7 messages · 5/23/19 · Q&A · hp_adaptivity, mpi, trilinos
hp::DoFHandler with FESystem and FE_Nothing - crash in parallel when compressing Trilinos sparsity pattern
Daniel this patch works great for me! Both the little example that I uploaded here and my problem …
bobsp...@gmail.com, …, Wolfgang Bangerth · 15 messages · 4/16/19 · Q&A · mpi, multithreading, solid_mechanics, time_integration
Mass matrix for a distributed vector problem
On 4/16/19 12:15 PM, Robert Spartus wrote: > > Thanks a ton for your input! You are completely …
Pai Liu, …, Wolfgang Bangerth · 7 messages · 4/4/19 · Q&A · mpi, petsc
How to manually create sparsity pattern for PETSc sparsity matrix in parallel
Hi Wolfgang, Thank you so much for your kind help. I tried the dynamic sparsity pattern, and with the …
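A minimal sketch of the usual step-40-style way to build such a sparsity pattern by hand, not taken from the thread. Assumptions: a recent deal.II, with dof_handler, constraints, and mpi_communicator already set up for a scalar problem.

```cpp
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/petsc_sparse_matrix.h>
#include <deal.II/lac/sparsity_tools.h>

template <int dim>
void setup_petsc_matrix(const dealii::DoFHandler<dim>            &dof_handler,
                        const dealii::AffineConstraints<double>  &constraints,
                        const MPI_Comm                            mpi_communicator,
                        dealii::PETScWrappers::MPI::SparseMatrix &system_matrix)
{
  using namespace dealii;

  const IndexSet &locally_owned_dofs = dof_handler.locally_owned_dofs();
  IndexSet        locally_relevant_dofs;
  DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);

  // Collect couplings on locally relevant rows first...
  DynamicSparsityPattern dsp(locally_relevant_dofs);
  DoFTools::make_sparsity_pattern(dof_handler, dsp, constraints, false);

  // ...then ship entries for non-owned rows to the ranks that own them.
  SparsityTools::distribute_sparsity_pattern(dsp,
                                             locally_owned_dofs,
                                             mpi_communicator,
                                             locally_relevant_dofs);

  system_matrix.reinit(locally_owned_dofs,
                       locally_owned_dofs,
                       dsp,
                       mpi_communicator);
}
```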
Jean Ragusa, …, Denis Davydov · 5 messages · 3/4/19 · Q&A · mpi
L2 norm of distribution solution
Hi Jean, Minor unrelated note to your snippet of manually written L2 norm: have a look at https://www …
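A minimal sketch, not from the thread, of computing the global L2 norm of a distributed scalar solution with VectorTools::integrate_difference() plus VectorTools::compute_global_error() instead of summing local norms by hand. Assumptions: the vector is ghosted and the field is scalar.

```cpp
#include <deal.II/base/function.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/grid/tria.h>
#include <deal.II/lac/vector.h>
#include <deal.II/numerics/vector_tools.h>

template <int dim, typename VectorType>
double global_l2_norm(const dealii::Triangulation<dim> &triangulation,
                      const dealii::DoFHandler<dim>    &dof_handler,
                      const VectorType                 &ghosted_solution)
{
  using namespace dealii;

  // Per-cell contributions; locally owned cells are filled, others stay zero.
  Vector<float> cellwise_norms(triangulation.n_active_cells());

  VectorTools::integrate_difference(dof_handler,
                                    ghosted_solution,
                                    Functions::ZeroFunction<dim>(),
                                    cellwise_norms,
                                    QGauss<dim>(dof_handler.get_fe().degree + 1),
                                    VectorTools::L2_norm);

  // Accumulates over all MPI ranks for parallel triangulations.
  return VectorTools::compute_global_error(triangulation,
                                           cellwise_norms,
                                           VectorTools::L2_norm);
}
```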
gabriel...@koeln.de, David Wells · 5 messages · 2/27/19 · Q&A · fe_spaces, mpi, p4est
Sparsematrix initialization in P4est program
Hi Gabriel, In case you are interested, I have written a patch and a new test for this issue: its …
Daniel, …, Daniel · 4 messages · 1/21/19 · Q&A · mpi
MPI processes: use only a part of the available processes for dealii
Dear Wolfgang, thanks for pointing this out; I completely missed that aspect. assuming all processes …
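A minimal sketch, not from the thread, of restricting the deal.II part of a program to a subset of the available MPI processes by splitting MPI_COMM_WORLD into a sub-communicator; the choice of 4 participating ranks is hypothetical.

```cpp
#include <mpi.h>

int main(int argc, char *argv[])
{
  MPI_Init(&argc, &argv);

  int world_rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

  // Only the first n_dealii_ranks processes take part in the deal.II
  // computation; all other ranks receive MPI_COMM_NULL.
  const int n_dealii_ranks = 4;
  const int color = (world_rank < n_dealii_ranks) ? 0 : MPI_UNDEFINED;

  MPI_Comm dealii_comm = MPI_COMM_NULL;
  MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &dealii_comm);

  if (dealii_comm != MPI_COMM_NULL)
    {
      // Construct parallel triangulations, matrices, solvers, ... with
      // 'dealii_comm' instead of MPI_COMM_WORLD here.
      MPI_Comm_free(&dealii_comm);
    }

  MPI_Finalize();
  return 0;
}
```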
Michał Wichrowski, Daniel Arndt · 3 messages · 1/10/19 · Q&A · bug, mesh_generator, mpi, multigrid
copy_triangulation removes limit_level_difference_at_vertices flag
Done: https://github.com/dealii/dealii/issues/7581 On Tuesday, January 8, 2019 at 23:42:06 UTC+1 …
giovann...@hotmail.it, Bruno Turcksin · 5 messages · 1/3/19 · Q&A · linear_algebra, mpi, multigrid, multithreading, trilinos
Operations between sparse matrices with incompatible IndexSets
The version I am using is the developeing one, maybe a couple of days old, so it should work...Thank …
Maxi Miller, …, Wolfgang Bangerth · 9 messages · 1/10/19 · Q&A · mpi
Usage of FEFieldFunction.vector_value_list on a parallel::distributed::Triangulation
On 1/8/19 12:43 AM, 'Maxi Miller' via deal.II User Group wrote: > > I assume it tries …
ky...@math.uh.edu, …, Wolfgang Bangerth · 4 messages · 12/30/18 · Q&A · boundary_conditions, mpi
Interpolated Boundary Conditions from Distributed Solution
On 12/13/18 4:24 AM, ky...@math.uh.edu wrote: > > I will report back with a minimal test case. …