On 03/28/2018 03:48 PM, Qing Yin wrote:
>
> I would like to post that snippet of code. To be honest, I verified the
> solution in a naive way, that is, I compared the solutions obtained with
> two different numbers of processes in ParaView. The comparison (including
> the data range and a plot of the curve over time at a point) shows that
> they are exactly the same.
>
> template <int dim>
> Tensor<1, dim> Postprocessor<dim>::compute_load_force(
>     const DoFHandler<dim> &dof_handler,
>     const LA::MPI::Vector &solution,
>     const double          &lame_mu,
>     const double          &lame_lambda,
>     const unsigned int    &degree,
>     const unsigned int    &boundary_id,
>     const MPI_Comm        &mpi_comm) {
>
> const QGauss<dim - 1> face_quadrature_formula(degree + 1);
> FEFaceValues<dim> fe_face_values(dof_handler.get_fe(),
>                                  face_quadrature_formula,
>                                  update_gradients | update_normal_vectors |
>                                  update_JxW_values);
>
> const unsigned int dofs_per_cell   = dof_handler.get_fe().dofs_per_cell;
> const unsigned int n_face_q_points = face_quadrature_formula.size();
>
> std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
>
> // Gradients of the dim solution components at each face quadrature point
> std::vector<std::vector<Tensor<1, dim>>> u_grads_per_cell(
>     n_face_q_points, std::vector<Tensor<1, dim>>(dim));
>
> // Load force contribution of the cells owned by this processor
> Tensor<1, dim> locally_owned_load_force;
>
> // Rank-2 identity tensor
> const SymmetricTensor<2, dim> identity = unit_symmetric_tensor<dim>();
>
> for (const auto &cell : dof_handler.active_cell_iterators())
>   if (cell->is_locally_owned()) {
>     for (unsigned int f = 0; f < GeometryInfo<dim>::faces_per_cell; ++f)
>       // Only integrate over faces on the boundary with the given id
>       if (cell->face(f)->at_boundary() &&
>           cell->face(f)->boundary_id() == boundary_id) {
>         fe_face_values.reinit(cell, f);
>         fe_face_values.get_function_gradients(solution, u_grads_per_cell);
>
>         for (unsigned int q = 0; q < n_face_q_points; ++q) {
>           // Assemble the gradient of the displacement field
>           Tensor<2, dim> u_grad;
>           for (unsigned int d = 0; d < dim; ++d)
>             u_grad[d] = u_grads_per_cell[q][d];
>
>           // Linearized strain: symmetrize() computes (grad u + grad u^T)/2
>           const SymmetricTensor<2, dim> epsilon = symmetrize(u_grad);
>
>           // Linear elastic stress
>           const SymmetricTensor<2, dim> sigma =
>               2.0 * lame_mu * epsilon +
>               lame_lambda * trace(epsilon) * identity;
>
>           // Traction times area element, summed over quadrature points
>           locally_owned_load_force +=
>               sigma * fe_face_values.normal_vector(q) * fe_face_values.JxW(q);
>         }
>       }
>   }
>
> // Sum up the contributions of all processors, component by component
> // (a loop over components also covers the dim == 3 case)
> Tensor<1, dim> load_force;
> for (unsigned int d = 0; d < dim; ++d)
>   load_force[d] = Utilities::MPI::sum(locally_owned_load_force[d], mpi_comm);
>
> return load_force;
> }
That's pretty much exactly how I would have written this code myself. I
don't see anything that is obviously wrong. You'll have to debug this a
bit, for example by outputting epsilon or sigma or sigma \cdot n for each
quadrature point and comparing what you get on one processor with what
you get on two or four processors. If you choose the mesh coarse enough,
you should be able to see where these quadrature points are and what the
values are. Once you know whether the stresses are the same, you
know whether the problem is with the integration routine above, or
whether it is with the solution that is passed to the function. There is
no easier way to find out what the issue is, I believe, given that the
code "looks ok to me".
I assume that you are, of course, running in debug mode.
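Debug mode would, among other things, complain if 'solution' were a
completely distributed vector without the ghost entries that
get_function_gradients needs on cells at processor boundaries, and that is
exactly the kind of mistake that only shows up when running with more than
one processor.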