Getting an error while running code following step-55


Md Mahmudul Islam

Jul 10, 2025, 1:40:53 AM
to dea...@googlegroups.com
Hello Everyone,
I hope this email finds you all well. I am working on Navier-Stokes (NSE) flow. I wrote my code in parallel following step-55. The code compiles, but when I run it I get the following error about ghost elements:

An error occurred in line <941> of file </usr/src/dealii-v9.6.0/include/deal.II/lac/petsc_vector_base.h> in function

    const dealii::PETScWrappers::internal::VectorReference& dealii::PETScWrappers::internal::VectorReference::operator+=(const PetscScalar&) const

The violated condition was:

    !vector.has_ghost_elements()

Additional information:

    You are trying an operation on a vector that is only allowed if the
    vector has no ghost elements, but the vector you are operating on does
    have ghost elements.

    Specifically, there are two kinds of operations that are typically not
    allowed on vectors with ghost elements. First, vectors with ghost
    elements are read-only and cannot appear in operations that write into
    these vectors. Second, reduction operations (such as computing the
    norm of a vector, or taking dot products between vectors) are not
    allowed to ensure that each vector element is counted only once (as
    opposed to once for the owner of the element plus once for each
    process on which the element is stored as a ghost copy).




I can send part of my code if that helps to find out which vector might be ghosted.


Thanks,

Md Mahmudul Islam


Wolfgang Bangerth

Jul 10, 2025, 3:08:34 AM
to dea...@googlegroups.com
I think that the error message is actually quite clear. It explains what the
problem is, and at the top it also says which function causes trouble. Which
part is not clear to you, and what stops you from fixing the issue yourself?

Best
W.

--
------------------------------------------------------------------------
Wolfgang Bangerth email: bang...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/


Md Mahmudul Islam

Jul 11, 2025, 11:36:34 PM
to dea...@googlegroups.com
Thanks a lot, Professor Bangerth. It's my pleasure to hear back from you. I have read about ghost elements, which are related to locally_owned_dofs and locally_relevant_dofs. I was confused about whether we need to create a block vector for each of the vectors that are used in the computation later. For example, in solve() we create completely_distributed_solution over locally_owned_dofs only and then copy it into locally_relevant_solution, which we use later to compute the output results.
In my case, I have present_mean_solution, which I compute inside run(), so I need to create a block vector over locally_owned_dofs, compute into it, and then copy it into present_mean_solution.
Please correct me if I am wrong.

And thank you again for creating such a nice community.

Best regards,
Md Mahmudul Islam 

--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see https://groups.google.com/d/forum/dealii?hl=en
---
You received this message because you are subscribed to the Google Groups "deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email to dealii+un...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/dealii/f27e9cb4-be79-429c-a98f-a059ca094114%40colostate.edu.

Wolfgang Bangerth

Jul 14, 2025, 5:59:46 AM
to dea...@googlegroups.com
On 7/11/25 21:36, Md Mahmudul Islam wrote:
> I have read about the ghost element which is related to the locally_owned_dofs
> and locally_relevant_dofs. I was confused if we need to create block vector
> for each of the vectors that are used in the computation later. Like in solve
> we have created completely_distributed_solution for locally_owned_dofs only
> and then passed it to locally_relevant_solution which we used to compute
> output results later.

Whether a vector does/does not have ghost entries is an entirely separate
question from whether a vector is/is not substructured into blocks. In other
words, you need to follow the same kind of scheme as programs such as step-40
regarding where you need a ghosted and where you need a completely distributed
vector. The only difference lies in how you *create* the vector: step-40
creates non-block ghosted and completely distributed vectors, whereas other
programs create block ghosted and completely distributed vectors.
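
[Editor's note: for readers following along, the pattern in question looks roughly like this. This is a minimal sketch based on step-55's setup_system() and solve(); names such as owned_partitioning, relevant_partitioning, and LA::MPI::BlockVector are taken from that tutorial, not from the poster's code.]

```cpp
// Ghosted vector: created with both the owned and the relevant index
// sets. Ghost entries can be read, but the vector is otherwise read-only.
locally_relevant_solution.reinit(owned_partitioning,
                                 relevant_partitioning,
                                 mpi_communicator);

// Completely distributed vector: created with the owned index set only.
// This is the one you write into and hand to the solver.
LA::MPI::BlockVector completely_distributed_solution(owned_partitioning,
                                                     mpi_communicator);

// ... solve into completely_distributed_solution ...
constraints.distribute(completely_distributed_solution);

// Assigning to the ghosted vector imports the ghost values, so it can
// afterwards be used for output and for evaluating old solutions.
locally_relevant_solution = completely_distributed_solution;
```

The same scheme applies to any other vector such as present_mean_solution: accumulate into a vector built on owned_partitioning only, then assign it to a ghosted vector before reading it during assembly or output.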

Md Mahmudul Islam

Jul 17, 2025, 2:19:43 PM
to dea...@googlegroups.com
Thank you again, Professor, for your kind reply. I believe I have solved the ghost element problem, but now I am encountering a new error:

[0]PETSC ERROR: Argument out of range
[0]PETSC ERROR: Inserting a new nonzero at (6,7) in the matrix

I have reviewed the previous messages but found it challenging to pinpoint the exact problem.

Md Mahmudul Islam
parameter-file.prm.txt
step-55.cc

Wolfgang Bangerth

Jul 17, 2025, 4:11:39 PM
to dea...@googlegroups.com
On 7/17/25 12:19, Md Mahmudul Islam wrote:
> Thank you again, Professor, for your kind reply. I hope I have solved the
> ghost element problem. But now I am encountering a new error of
>
> [0]PETSC ERROR: Argument out of range
> [0]PETSC ERROR: Inserting a new nonzero at (6,7) in the matrix
>
> I have reviewed the previous messages and found it challenging to pinpoint the
> exact problem.

Mahmudul:
as before, I think that the error is actually quite clear: You are adding an
entry to the matrix in a place where it is not allowed to add one. I assume
this is a sparse matrix and that you need to make sure that the sparsity
pattern knows that this is an entry you want to write into.

Best
W.
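
[Editor's note: the place where such entries are allocated in step-55 is the coupling table passed to DoFTools::make_sparsity_pattern() in setup_system(). The sketch below follows the tutorial from memory, not the poster's program, so details may differ:]

```cpp
// The coupling table says which vector components may couple; only those
// (component_i, component_j) pairs get entries in the sparsity pattern.
// Writing anywhere else triggers PETSc's "Inserting a new nonzero" error.
Table<2, DoFTools::Coupling> coupling(dim + 1, dim + 1);
for (unsigned int c = 0; c < dim + 1; ++c)
  for (unsigned int d = 0; d < dim + 1; ++d)
    if (c == dim && d == dim)
      coupling[c][d] = DoFTools::none;     // no pressure-pressure block
    else if (c == dim || d == dim || c == d)
      coupling[c][d] = DoFTools::always;   // velocity-pressure and diagonal
    else
      coupling[c][d] = DoFTools::none;     // no cross-velocity coupling

BlockDynamicSparsityPattern dsp(relevant_partitioning);
DoFTools::make_sparsity_pattern(dof_handler, coupling, dsp, constraints, false);
```

If any newly added term writes into a pair marked DoFTools::none (for example, a term coupling two different velocity components, or a pressure-pressure term), the coupling table must be relaxed accordingly; that could be exactly the (6,7) nonzero PETSc complains about.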

Md Mahmudul Islam

Jul 30, 2025, 12:52:54 AM
to dea...@googlegroups.com
Thanks a lot, Professor, for your kind reply again. I have checked the block sparsity pattern but couldn't find any issue; it is identical to step-55 in my code. Professor, my problem is uncertainty quantification with the time-dependent NSE. I have checked that if I keep the assembly routine exactly as in step-55, the code runs. So I think my problem is in assemble_system() or inside run().

template <int dim>
void StochasticNavierStokesProblem<dim>::assemble_system(double viscosity, double mean_viscosity)
{
  TimerOutput::Scope t(computing_timer, "assembly");

  system_matrix         = 0;
  system_rhs            = 0;
  preconditioner_matrix = 0;

  QGauss<dim> quadrature_formula(degree + 2);

  FEValues<dim> fe_values(fe,
                          quadrature_formula,
                          update_values | update_quadrature_points |
                            update_JxW_values | update_gradients);

  const unsigned int dofs_per_cell = fe.dofs_per_cell;
  const unsigned int n_q_points    = quadrature_formula.size();

  FullMatrix<double> local_matrix(dofs_per_cell, dofs_per_cell);
  FullMatrix<double> local_preconditioner_matrix(dofs_per_cell, dofs_per_cell);
  Vector<double>     local_rhs(dofs_per_cell);
  std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);

  std::vector<Vector<double>> rhs_values(n_q_points, Vector<double>(dim + 1));

  const FEValuesExtractors::Vector velocity(0);
  const FEValuesExtractors::Scalar pressure(dim);

  std::vector<Tensor<2, dim>> grad_phi(dofs_per_cell);
  std::vector<double>         div_phi(dofs_per_cell);
  std::vector<double>         phi_p(dofs_per_cell);
  std::vector<Tensor<1, dim>> phi(dofs_per_cell);

  std::vector<Tensor<1, dim>> old_time_values(n_q_points);
  std::vector<Tensor<2, dim>> old_gradients_values(n_q_points);
  std::vector<Tensor<1, dim>> old_old_time_values(n_q_points);
  std::vector<Tensor<2, dim>> old_old_gradients_values(n_q_points);
  std::vector<Tensor<1, dim>> old_time_mean_values(n_q_points);
  std::vector<Tensor<1, dim>> old_time_fluc_u_values(n_q_points);

  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_locally_owned())
      {
        fe_values.reinit(cell);

        local_matrix                = 0;
        local_rhs                   = 0;
        local_preconditioner_matrix = 0;

        fe_values[velocity].get_function_values(old_old_timestep_solution, old_old_time_values);
        fe_values[velocity].get_function_gradients(old_old_timestep_solution, old_old_gradients_values);
        fe_values[velocity].get_function_values(present_mean_solution, old_time_mean_values);
        fe_values[velocity].get_function_values(old_timestep_solution, old_time_values);
        fe_values[velocity].get_function_gradients(old_timestep_solution, old_gradients_values);

        right_hand_side.vector_value_list(fe_values.get_quadrature_points(), rhs_values);

        for (unsigned int q = 0; q < n_q_points; ++q)
          {
            for (unsigned int k = 0; k < dofs_per_cell; ++k)
              {
                grad_phi[k] = fe_values[velocity].gradient(k, q);
                div_phi[k]  = fe_values[velocity].divergence(k, q);
                phi_p[k]    = fe_values[pressure].value(k, q);
                phi[k]      = fe_values[velocity].value(k, q);
              }

            double prandtl_mixing_length = 0.0;
            for (unsigned int j = 0; j < Number_of_realizations; ++j)
              {
                fe_values[velocity].get_function_values(fluctuation_solution[j], old_time_fluc_u_values);
                prandtl_mixing_length += old_time_fluc_u_values[q] * old_time_fluc_u_values[q];
              }

            for (unsigned int i = 0; i < dofs_per_cell; ++i)
              {
                for (unsigned int j = 0; j < dofs_per_cell; ++j)
                  {
                    local_matrix(i, j) +=
                      (1.5 * phi[i] * phi[j] +
                       time_step * mean_viscosity * scalar_product(grad_phi[i], grad_phi[j]) -
                       time_step * div_phi[i] * phi_p[j] +
                       time_step * gamma * div_phi[i] * div_phi[j] +
                       0.5 * time_step * grad_phi[j] * old_time_mean_values[q] * phi[i] -
                       0.5 * time_step * grad_phi[i] * old_time_mean_values[q] * phi[j] -
                       time_step * phi_p[i] * div_phi[j] +
                       2.0 * mu * std::pow(time_step, 2) * prandtl_mixing_length *
                         scalar_product(grad_phi[i], grad_phi[j])) *
                      fe_values.JxW(q);

                    local_preconditioner_matrix(i, j) +=
                      phi_p[i] * phi_p[j] * fe_values.JxW(q) / mean_viscosity;
                  }

                const unsigned int component_i = fe.system_to_component_index(i).first;

                local_rhs(i) +=
                  (time_step * fe_values.shape_value(i, q) * rhs_values[q](component_i) +
                   2.0 * old_time_values[q] * phi[i] -
                   0.5 * old_old_time_values[q] * phi[i] -
                   2.0 * time_step * (viscosity - mean_viscosity) * scalar_product(grad_phi[i], old_gradients_values[q]) +
                   time_step * (viscosity - mean_viscosity) * scalar_product(grad_phi[i], old_old_gradients_values[q]) -
                   2.0 * time_step * old_time_values[q] * old_gradients_values[q] * phi[i] +
                   time_step * old_old_time_values[q] * old_gradients_values[q] * phi[i] +
                   time_step * old_time_mean_values[q] * old_gradients_values[q] * phi[i] +
                   time_step * old_time_values[q] * old_old_gradients_values[q] * phi[i] -
                   0.5 * time_step * old_old_time_values[q] * old_old_gradients_values[q] * phi[i] -
                   0.5 * time_step * old_time_mean_values[q] * old_old_gradients_values[q] * phi[i] +
                   2.0 * time_step * old_time_values[q] * grad_phi[i] * old_time_values[q] -
                   time_step * old_time_values[q] * grad_phi[i] * old_old_time_values[q] -
                   time_step * old_time_mean_values[q] * grad_phi[i] * old_time_values[q] -
                   time_step * old_old_time_values[q] * grad_phi[i] * old_time_values[q] +
                   0.5 * time_step * old_old_time_values[q] * grad_phi[i] * old_old_time_values[q] +
                   0.5 * time_step * old_time_mean_values[q] * grad_phi[i] * old_old_time_values[q]) *
                  fe_values.JxW(q);
              }
          }

        cell->get_dof_indices(local_dof_indices);
        constraints.distribute_local_to_global(local_matrix, local_rhs,
                                               local_dof_indices,
                                               system_matrix, system_rhs);
        constraints.distribute_local_to_global(local_preconditioner_matrix,
                                               local_dof_indices,
                                               preconditioner_matrix);
      }

  system_matrix.compress(VectorOperation::add);
  preconditioner_matrix.compress(VectorOperation::add);
  system_rhs.compress(VectorOperation::add);
}

and here is the run() function:

template <int dim>
void StochasticNavierStokesProblem<dim>::run()
{
  double mean_viscosity = 0.0;

  pFile = fopen("energy.txt", "w");

  pcout << "Problem Data:" << std::endl;
  pcout << "--------------------------------------------------" << std::endl;
  pcout << "expected_viscosity = " << expected_viscosity << std::endl;
  pcout << "gamma = " << gamma << std::endl;
  pcout << "epsilon = " << epsilon << std::endl;
  pcout << "mu = " << mu << std::endl;
  pcout << "n_global_refines = " << n_global_refines << std::endl;
  pcout << "--------------------------------------------------" << std::endl;

  double random1[20] = {0.975712,-0.345349,-0.166474,-0.081588,0.088924,0.640705,-0.607515,-0.022931,0.635968,0.847934,0.242202,0.055026,-0.695076,0.886830,-0.724688,-0.162158,0.521596,-0.688013,0.453780,-0.325601};
  double random2[20] = {0.414552,-0.233426,0.786402,0.208497,-0.381272,0.370817,-0.265752,0.935250,-0.470776,-0.226254,-0.039371,0.277783,0.804871,0.365681,0.284943,0.551865,0.075924,-0.262991,-0.381761,0.284678};

  double       factor                 = 1.0;
  unsigned int Number_of_realizations = 20;

  for (unsigned int j = 0; j < Number_of_realizations; ++j)
    {
      mean_viscosity += expected_viscosity / Number_of_realizations +
                        factor * random1[j] * expected_viscosity / 10.0 / Number_of_realizations;
    }

  GridGenerator::hyper_cube(triangulation, 0, 1);
  boundary_ids = triangulation.get_boundary_ids();
  triangulation.refine_global(n_global_refines);

  setup_dofs();

  time = 0.0;
  initial_condition.set_time(time);

  for (unsigned int j = 0; j < Number_of_realizations; ++j)
    {
      double ep = epsilon * random2[j];
      initial_condition.set_parameter(ep);
      LA::MPI::BlockVector tmp(owned_partitioning, mpi_communicator);
      VectorTools::interpolate(dof_handler, initial_condition, tmp);
      Storage_nn[j] = tmp;
    }

  time = time_step;
  pcout << "Time step size = " << time_step << ", Number of time steps = " << timestep_number << std::endl;
  initial_condition.set_time(time);

  for (unsigned int j = 0; j < Number_of_realizations; ++j)
    {
      double ep = epsilon * random2[j];
      initial_condition.set_parameter(ep);
      LA::MPI::BlockVector tmp(owned_partitioning, mpi_communicator);
      VectorTools::interpolate(dof_handler, initial_condition, tmp);
      Storage_n[j] = tmp;
    }

  old_old_timestep_solution = 0;
  old_timestep_solution     = 0;
  initial_condition.set_parameter(0.0);

  for (unsigned int n = 2; n <= timestep_number; ++n)
    {
      present_mean_solution = 0;
      time                  = time_step * n;

      right_hand_side.set_time(time);
      zero_fxn.set_time(time);
      initial_condition.set_time(time);
      LA::MPI::BlockVector mean_solution(owned_partitioning, mpi_communicator);
      mean_solution = 0;

      for (unsigned int j = 0; j < Number_of_realizations; ++j)
        {
          double viscosity = expected_viscosity + random1[j] * expected_viscosity / 10.0;
          double ep        = epsilon * random2[j];

          right_hand_side.set_parameter(ep, viscosity);
          zero_fxn.set_parameter(ep);
          initial_condition.set_parameter(ep);

          old_old_timestep_solution = Storage_nn[j];
          old_timestep_solution     = Storage_n[j];

          assemble_system(viscosity, mean_viscosity);
          solve();

          {
            LA::MPI::BlockVector tmp(owned_partitioning, mpi_communicator);
            tmp           = Storage_n[j];
            Storage_nn[j] = tmp;

            tmp          = solution;
            Storage_n[j] = tmp;
          }

          LA::MPI::BlockVector tmp_mean(owned_partitioning, mpi_communicator);
          tmp_mean.add(2.0 / Number_of_realizations, Storage_n[j],
                       -1.0 / Number_of_realizations, Storage_nn[j]);
          mean_solution += tmp_mean;

          LA::MPI::BlockVector tmp_fluct(owned_partitioning, mpi_communicator);
          tmp_fluct               = 0;
          fluctuation_solution[j] = tmp_fluct;
        }

      for (unsigned int j = 0; j < Number_of_realizations; ++j)
        {
          LA::MPI::BlockVector tmp_fluct(owned_partitioning, mpi_communicator);
          tmp_fluct.add(2.0, Storage_n[j], -1.0, Storage_nn[j]);
          tmp_fluct.add(-1.0, mean_solution);
          fluctuation_solution[j] = tmp_fluct;
        }

      present_mean_solution = mean_solution;

      output_results(n);
      Energy(time);
    }

  pcout << "\nTotal number of timesteps: " << timestep_number << std::endl;
}

Md Mahmudul Islam


Wolfgang Bangerth

Jul 30, 2025, 11:47:31 AM
to dea...@googlegroups.com
On 7/29/25 22:52, Md Mahmudul Islam wrote:
> Thanks a lot, Professor, for your kind reply again. I have checked the block
> sparsity pattern but couldn't find any issue. It is identical to step-55 in my
> code. Professor, my problem is uncertainty quantification with time-dependent
> NSE. I have checked if I keep the assembly system as it is, like step-55 it
> runs. So, I think my problem is in assemble system or inside run.

Well, yes, of course. In your assembly, you are apparently writing into more
matrix entries than step-55 does, but you have not previously told the
sparsity pattern that you want to write into these entries. That can have two
reasons:
* You really *do* want to write into these entries, but you have not allocated
these entries in the sparsity pattern.
* You *mistakenly* write into these entries.

I don't know which it is -- you will need to figure out in which specific ways
the assembly you perform differs from the one in step-55, for example by
isolating the terms you have added and thinking about their correctness.

Best
W.