Solving Maxwell's equation.

qince168

Jul 30, 2013, 9:54:50 PM
to dea...@googlegroups.com
Dear all,

I would like to solve Maxwell's equations in the frequency domain with homogeneous Dirichlet boundary conditions; it is a complex vector-valued equation. After reading the tutorials, I implemented it on locally refined meshes, but I encountered some problems.
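
The system, as assembled below, is a secondary-field formulation: writing E = E_r + i E_i, and letting E_p be the primary field of the background model (bg_rho in the code),

    \nabla\times(\nabla\times\mathbf{E}) - i\,\omega\mu\sigma\,\mathbf{E}
        = i\,\omega\mu\,(\sigma - \sigma_{bg})\,\mathbf{E}_p,
    \qquad \mathbf{E} = \mathbf{E}_r + i\,\mathbf{E}_i,

split into real and imaginary parts so that everything can be assembled in real arithmetic.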

I got an exception like this:

          ...
Cycle 3:
   Number of active cells:       33018
   Number of degrees of freedom: 273182
   Number of constraints       : 104952
terminate called after throwing an instance of 'dealii::SparseDirectUMFPACK::ExcUMFPACKError'
  what():  
--------------------------------------------------------
An error occurred in line <288> of file </.../deal.II/source/lac/sparse_direct.cc> in function
void dealii::SparseDirectUMFPACK::factorize(const Matrix&) [with Matrix = dealii::SparseMatrix<double>]                           
The violated condition was: 
    status == UMFPACK_OK
The name and call sequence of the exception was:
    ExcUMFPACKError("umfpack_dl_numeric", status)
Additional Information: 
UMFPACK routine umfpack_dl_numeric returned error status -1. See the file
<bundled/umfpack/UMFPACK/Include/umfpack.h> for a description of 'status codes'.
Stacktrace:
-----------
#0  /opt/deal.II.8.0.0/lib/libdeal_II.g.so.8.0.0: void dealii::SparseDirectUMFPACK::factorize<dealii::SparseMatrix<double> >(dealii::SparseMatrix<double> const&)
#1  /opt/deal.II.8.0.0/lib/libdeal_II.g.so.8.0.0: void dealii::SparseDirectUMFPACK::initialize<dealii::SparseMatrix<double> >(dealii::SparseMatrix<double> const&, dealii::SparseDirectUMFPACK::AdditionalData)
#2  ./h_csem: MaxwellProblem<3>::run()
#3  ./h_csem: main
--------------------------------------------------------

The following is my implementation:

template <int dim>
void MaxwellProblem<dim>::setup_system ()
{
  dof_handler.distribute_dofs (fe);

  solution.reinit (dof_handler.n_dofs());
  system_rhs.reinit (dof_handler.n_dofs());

  // Homogeneous tangential (curl-conforming) boundary constraints for the
  // real part (first vector component 0) and the imaginary part (first
  // vector component dim), plus hanging-node constraints.
  constraints.clear ();
  VectorTools::project_boundary_values_curl_conforming(dof_handler, 0,
      ZeroFunction<dim>(fe.n_components()), 0, constraints);
  VectorTools::project_boundary_values_curl_conforming(dof_handler, dim,
      ZeroFunction<dim>(fe.n_components()), 0, constraints);
  DoFTools::make_hanging_node_constraints (dof_handler, constraints);
  constraints.close ();

  CompressedSetSparsityPattern csp (dof_handler.n_dofs(), dof_handler.n_dofs());
  DoFTools::make_sparsity_pattern (dof_handler, csp, constraints, false);
  sparsity_pattern.copy_from (csp);

  system_matrix.reinit (sparsity_pattern);
}

template <int dim>
void MaxwellProblem<dim>::assemble_system ()
{
  QGauss<dim>  quadrature_formula(2);

  FEValues<dim> fe_values (fe, quadrature_formula,
      update_values    |  update_gradients |
      update_quadrature_points  |  update_JxW_values);

  const unsigned int dofs_per_cell = fe.dofs_per_cell;
  const unsigned int n_q_points    = quadrature_formula.size();

  FullMatrix<double> cell_matrix(dofs_per_cell, dofs_per_cell);
  Vector<double> cell_rhs(dofs_per_cell);

  std::vector<types::global_dof_index> local_dof_indices (dofs_per_cell);

  // Real and imaginary parts of the primary (background) field E_p,
  // evaluated at the quadrature points.
  std::vector< Point<dim> > epr_rhs(n_q_points);
  std::vector< Point<dim> > epi_rhs(n_q_points);

  // Extractors for the real (components 0..dim-1) and imaginary
  // (components dim..2*dim-1) parts of the field.
  const FEValuesExtractors::Vector Er(0);
  const FEValuesExtractors::Vector Ei(dim);

  unsigned int i, j, q_point;
  double omega, sigma, d_sigma;

  omega = 2 * numbers::PI * f;   // angular frequency

  for (auto cell = dof_handler.begin_active();
       cell != dof_handler.end(); ++cell)
  {
    fe_values.reinit (cell);

    cell_matrix = 0;
    cell_rhs = 0;

    // Conductivity on this cell and its deviation from the background
    // (primary) model; material ids are 1-based here.
    sigma = 1.0 / rho[cell->material_id() - 1];
    d_sigma = sigma - 1.0 / bg_rho[cell->material_id() - 1];

    // Call some functions to calculate the right hand side epr_rhs,
    // epi_rhs here.

    for (i = 0; i < dofs_per_cell; ++i) {
      const auto base_index_i = fe.system_to_base_index(i);

      for (j = 0; j < dofs_per_cell; ++j) {
        const auto base_index_j = fe.system_to_base_index(j);

        if (base_index_i.first.first == base_index_j.first.first) {
          // Diagonal blocks: curl-curl terms within the real (base 0)
          // or the imaginary (base 1) component.
          if (base_index_i.first.first == 0) {
            for (q_point = 0; q_point < n_q_points; ++q_point) {
              cell_matrix(i, j) += fe_values[Er].curl(i, q_point) *
                fe_values[Er].curl(j, q_point) * fe_values.JxW(q_point);
            }
          } else {
            for (q_point = 0; q_point < n_q_points; ++q_point) {
              cell_matrix(i, j) += fe_values[Ei].curl(i, q_point) *
                fe_values[Ei].curl(j, q_point) * fe_values.JxW(q_point);
            }
          }
        } else {
          // Off-diagonal blocks: the +/- omega*mu*sigma mass coupling
          // between the real and imaginary parts.
          if (base_index_i.first.first == 0) {
            for (q_point = 0; q_point < n_q_points; ++q_point) {
              cell_matrix(i, j) += omega * mu * sigma *
                fe_values[Er].value(i, q_point) *
                fe_values[Ei].value(j, q_point) * fe_values.JxW(q_point);
            }
          } else {
            for (q_point = 0; q_point < n_q_points; ++q_point) {
              cell_matrix(i, j) += -omega * mu * sigma *
                fe_values[Ei].value(i, q_point) *
                fe_values[Er].value(j, q_point) * fe_values.JxW(q_point);
            }
          }
        }
      }

      // Right-hand side: one contribution per test function i. This loop
      // must sit outside the j loop, otherwise every entry would be
      // accumulated dofs_per_cell times.
      for (q_point = 0; q_point < n_q_points; ++q_point) {
        if (base_index_i.first.first == 0) {
          cell_rhs(i) += -omega * mu * d_sigma * fe_values[Er].value(i, q_point) *
            epi_rhs[q_point] * fe_values.JxW(q_point);
        } else {
          cell_rhs(i) += omega * mu * d_sigma * fe_values[Ei].value(i, q_point) *
            epr_rhs[q_point] * fe_values.JxW(q_point);
        }
      }
    }

    cell->get_dof_indices (local_dof_indices);
    constraints.distribute_local_to_global (cell_matrix, local_dof_indices, system_matrix);
    constraints.distribute_local_to_global (cell_rhs, local_dof_indices, system_rhs);
  }
}

template <int dim>
void MaxwellProblem<dim>::solve ()
{
  SparseDirectUMFPACK A_direct;
  A_direct.initialize(system_matrix);

  A_direct.vmult (solution, system_rhs);

  constraints.distribute (solution);
}

template <int dim>
void MaxwellProblem<dim>::postprocess ()
{
  Vector<float> estimated_error_per_cell (triangulation.n_active_cells());
  KellyErrorEstimator<dim>::estimate (dof_handler, QGauss<dim - 1>(2),
      typename FunctionMap<dim>::type(), solution, estimated_error_per_cell);

  GridRefinement::refine_and_coarsen_fixed_number (triangulation,
      estimated_error_per_cell,
      0.3, 0.03);
  triangulation.execute_coarsening_and_refinement ();
}

Can someone provide some hints?

Thanks!

Best,
Ce Qin


Wolfgang Bangerth

Aug 13, 2013, 4:11:51 PM
to dea...@googlegroups.com, qince168
>[...]
> Can someone provide some hints ?

What have you already tried? Did you, for example, try to solve a problem on
only 4 cells in 2d? If so, did you try to compute the matrix by hand and see
whether it matches the one you compute with deal.II? Have you tried to vary
parts of your program to find out which part it is that makes the matrix
singular?
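
For example (just a sketch, reusing your triangulation and the functions
from your post):

  // Solve on 4 cells in 2d and dump the matrix so it can be compared
  // entry by entry with a hand computation.
  GridGenerator::hyper_cube (triangulation);
  triangulation.refine_global (1);            // 2x2 = 4 cells in 2d

  setup_system ();
  assemble_system ();
  system_matrix.print_formatted (std::cout);  // SparseMatrix<double>::print_formatted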

Best
W.

--
------------------------------------------------------------------------
Wolfgang Bangerth email: bang...@math.tamu.edu
www: http://www.math.tamu.edu/~bangerth/

qince168

Aug 15, 2013, 10:38:30 AM
to Wolfgang Bangerth, dea...@googlegroups.com
Dear Prof. Wolfgang,

Thanks for your reply.

I have tried to solve it in 2D and printed the local matrix. I found that some entries of the local matrix are zero.
Is that reasonable? I have tried my best to solve this problem, but I failed.

I have attached my implementation. Could you take some time to look at it and give me some hints?

Thanks a lot!

Best,
Ce Qin
(Attachment: maxwell_complex.cc)

Wolfgang Bangerth

Aug 15, 2013, 12:33:26 PM
to qince168, dea...@googlegroups.com

> I have tried to solve it in 2D and print the local matrix out. I found
> that some entries of the local matrix are zero.
> Is it reasonable? I have tried my best to solve this problem, but I failed.

Of course the matrix will have zero entries. We are dealing with sparse
matrices, after all. Or do you mean the local matrix?


> I'll attach my implementation. Could you take some time to see it and
> provide some hints for me?

No, you need to work independently some more. I suggested computing the
matrix for a small problem by hand and comparing it with what you get from
your program, say for a problem on only one cell, or maybe 4. Your
program refines the mesh 4 times, and it has not a single comment. You
will need to make your case as simple as possible in order to find out
what the problem is.
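
For the local matrix, something as simple as this inside your assembly
loop would do (a sketch):

  // Dump the cell matrix right after assembling it on the current cell.
  std::cout << "cell " << cell->index() << ":\n";
  cell_matrix.print_formatted (std::cout);   // FullMatrix<double>::print_formatted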

qince168

Aug 16, 2013, 10:42:14 PM
to Wolfgang Bangerth, dea...@googlegroups.com
> Of course the matrix will have zero entries. We are dealing with sparse
> matrices, after all. Or do you mean the local matrix?

Yes, I mean the local matrix (cell_matrix).

Thank you very much!

Best Regards,
Ce Qin

qinc...@gmail.com

Aug 24, 2013, 8:34:15 AM
to dea...@googlegroups.com
Dear all,
Dear Wolfgang,

Now I can solve this problem in 2D with the number of DoFs up to a million, but in 3D it still does not work. I have already tried both direct solvers (UMFPACK, MUMPS) and iterative solvers (GMRES, BiCGStab).

With SparseDirectMUMPS, the error message is:
PETSC ERROR: Error in external library!
PETSC ERROR: Error reported by MUMPS in numerical factorization phase: Cannot allocate required memory 382825844 megabytes
...
and with the iterative solvers it did not converge.

So, does anyone know what kind of solver one should use for such a problem in 3D?

Any help will be appreciated. Thanks.

Best regards,
Ce Qin


Timo Heister

Aug 24, 2013, 9:10:37 PM
to dea...@googlegroups.com
What preconditioners did you try with the iterative methods like
GMRES? ILU or AMG come to my mind.
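
With the serial deal.II matrices from your earlier post, that combination
could look roughly like this (a sketch; the tolerances are placeholders):

  // GMRES preconditioned with an ILU decomposition of the system matrix.
  // Needs <deal.II/lac/sparse_ilu.h> and <deal.II/lac/solver_gmres.h>.
  SparseILU<double> preconditioner;
  preconditioner.initialize (system_matrix,
                             SparseILU<double>::AdditionalData());

  SolverControl solver_control (1000, 1e-8 * system_rhs.l2_norm());
  SolverGMRES<> solver (solver_control);
  solver.solve (system_matrix, solution, system_rhs, preconditioner);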



--
Timo Heister
http://www.math.clemson.edu/~heister/

Ce Qin

Aug 24, 2013, 9:22:26 PM
to dea...@googlegroups.com
> What preconditioners did you try with the iterative methods like
> GMRES? ILU or AMG come to my mind.

I tried PETScWrappers::PreconditionBoomerAMG with GMRES, but it did not work.
 
Thanks.



Timo Heister

Aug 24, 2013, 9:37:54 PM
to dea...@googlegroups.com
Well, you can investigate what the ILU is doing by setting
output_details to true in the AdditionalData structure. You can play
with the parameters to make sure a reasonable number of levels are
created.
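
For the BoomerAMG preconditioner you tried earlier, for instance, the flag
is output_details in its AdditionalData (a sketch, assuming a PETScWrappers
matrix):

  // Turn on diagnostic output during preconditioner setup.
  PETScWrappers::PreconditionBoomerAMG::AdditionalData data;
  data.output_details = true;   // report, e.g., the multigrid hierarchy
  PETScWrappers::PreconditionBoomerAMG preconditioner (system_matrix, data);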

Additionally, I see you have an FESystem. You might need to look into
assembling into a block system and preconditioning the individual blocks
(somewhat similar to step-32), as sketched below. This requires some more
effort, though.
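
In outline, and only as a sketch along the lines of step-20/step-22 (the
two blocks being the real and imaginary components):

  // Group the real components into block 0 and the imaginary ones into
  // block 1, then size the block objects accordingly.
  std::vector<unsigned int> block_component (2 * dim, 0);
  for (unsigned int d = dim; d < 2 * dim; ++d)
    block_component[d] = 1;

  DoFRenumbering::component_wise (dof_handler, block_component);

  std::vector<types::global_dof_index> dofs_per_block (2);
  DoFTools::count_dofs_per_block (dof_handler, dofs_per_block, block_component);

  BlockVector<double> block_solution (dofs_per_block);
  // ... and a BlockSparsityPattern/BlockSparseMatrix in the same way.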


Ce Qin

Aug 24, 2013, 9:46:45 PM
to dea...@googlegroups.com

Thanks, I will try that.

I have another question: it works fine in 2D, but in 3D it does not. Why? I changed nothing except the template parameter dim.

Best regards,
Ce Qin

Wolfgang Bangerth

Aug 24, 2013, 11:44:53 PM
to dea...@googlegroups.com, Ce Qin

> I have another question: it works fine in 2D, but in 3D it does not. Why?
> I changed nothing except the template parameter dim.

For the direct solver, you ran out of memory. Try a smaller problem.

Ce Qin

Aug 25, 2013, 12:59:17 AM
to dea...@googlegroups.com
> For the direct solver, you ran out of memory. Try a smaller problem.

Maybe. In fact, it runs well in 3D when the number of DoFs is less than 50,000; once it exceeds 50,000, it does not work. In 2D it runs properly with the number of DoFs up to a million. I ran it on my laptop with 4 GB of memory.
 
Thanks.
 
Best regards,
Ce Qin

Wolfgang Bangerth

Aug 25, 2013, 1:05:00 AM
to dea...@googlegroups.com, Ce Qin

> Maybe. In fact, it runs well in 3D when the number of DoFs is less than
> 50,000; once it exceeds 50,000, it does not work. In 2D it runs properly
> with the number of DoFs up to a million. I ran it on my laptop with 4 GB
> of memory.

I am not surprised that you can't go very far in 3d. You have many more
entries per row, and the sparse decomposition takes much more memory in 3d as
well. See, for example, here:
http://www.math.tamu.edu/~bangerth/videos.676.34.html

As for iterative solvers: GMRES is definitely the right choice, but AMG may or
may not work, depending on the structure of the equations. You may wish to try
an ILU as a preconditioner first. Constructing preconditioners for non-trivial
problems is definitely not a trivial task.

Ce Qin

Aug 25, 2013, 4:00:40 AM
to dea...@googlegroups.com

Thanks for your fast reply!
 
Best Regards,
Ce Qin