Changes in SolutionTransfer

Winnifried Wollner

Jan 23, 2025, 6:16:51 AM
to deal.II User Group
Dear all,
I noticed that a change in the SolutionTransfer class is apparently
coming with the next version. In particular, the function
'prepare_for_pure_refinement' seems to have been removed (it was present
in version 9.6.2 but is gone in git)?
My code currently relies on the possibility to use this mesh transfer,
since it can be prepared without prior knowledge of the vectors to be
interpolated. Is there a fundamental reason why this function was
removed? If so, do you have a suggestion for resolving the following
problem:

In a time-dependent problem (on a single mesh for all timesteps), I have
the possibility to store finite element vectors associated with certain
timesteps on disk, to avoid keeping all vectors in RAM.
With 'prepare_for_coarsening_and_refinement', it seems that I would have
to load all timesteps into memory, since the vectors need to be passed
to that function. Is there a way to avoid this?

Thanks
Winni


Marc Fehling

Jan 29, 2025, 9:04:59 AM
to deal.II User Group
Hello Winnifried,

> I noticed that apparently a change in the SolutionTransfer class is coming with the next version. Particularly, the function 'prepare_for_pure_refinement' seems to be removed (was present in version 9.6.2 but is gone in git)?

We had separate implementations of `SolutionTransfer`, one for each Triangulation class. With the coming release, we decided to merge them into one. This helps us maintain the functionality better, but it also meant that a common interface had to be found. Unfortunately, `prepare_for_pure_refinement` did not make it. See also

If you need the old implementation, you can use the `Legacy::SolutionTransfer` class. However, it is deprecated and will be removed in the future.
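
For reference, a minimal sketch of that workflow with the legacy class, assuming it keeps the 9.6 serial interface (`dim`, `triangulation`, `dof_handler`, `fe`, and `old_solution` are placeholders for the usual objects in your setup; the header location is the one from 9.6 and may differ in the development version):

```cpp
#include <deal.II/numerics/solution_transfer.h>

// The transfer can be prepared without knowing the vectors yet:
Legacy::SolutionTransfer<dim, Vector<double>> soltrans(dof_handler);

triangulation.set_all_refine_flags(); // or flag individual cells
triangulation.prepare_coarsening_and_refinement();
soltrans.prepare_for_pure_refinement(); // no vectors passed here
triangulation.execute_coarsening_and_refinement();
dof_handler.distribute_dofs(fe);

// Later, one vector at a time (e.g. reloaded from disk):
Vector<double> interpolated(dof_handler.n_dofs());
soltrans.refine_interpolate(old_solution, interpolated);
```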

> Now with 'prepare_for_coarsening_and_refinement' it seems to me that I have to load all timesteps to memory as the vectors need to be passed to the function? Is there a way to avoid this?

So you set up a SolutionTransfer object once with `prepare_for_pure_refinement`, and then loaded one vector after the other from disk into memory. The mesh doesn't change, so you are not actually using mesh refinement.

The serial SolutionTransfer class was only responsible for transferring vectors between differently refined meshes. How did you use it to load vectors from disk? I don't remember it having such functionality.

If the mesh doesn't change, and you only store and load the vectors during the runtime of the program, then I would suggest serializing them with Boost. We have a new tutorial on serialization; please have a look at step-83 for more information: https://dealii.org/developer/doxygen/deal.II/step_83.html
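
For example, along the lines of step-83 (a sketch, not a definitive recipe; the file name is arbitrary and `solution` is assumed to be a `Vector<double>`):

```cpp
#include <deal.II/lac/vector.h>

#include <boost/archive/binary_iarchive.hpp>
#include <boost/archive/binary_oarchive.hpp>

#include <fstream>

// Store the vector on disk ...
{
  std::ofstream                   out("solution.dat", std::ios::binary);
  boost::archive::binary_oarchive archive(out);
  archive << solution;
}

// ... and load it back later. This assumes the mesh (and thus the
// DoFHandler) is unchanged between storing and loading.
{
  std::ifstream                   in("solution.dat", std::ios::binary);
  boost::archive::binary_iarchive archive(in);
  archive >> solution;
}
```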

Let me know if that helps.

Best,
Marc

Winnifried Wollner

Jan 29, 2025, 9:28:17 AM
to dea...@googlegroups.com
Dear Marc,
thanks for the information; I think I missed this merging of functionality.

> The serial SolutionTransfer class just had the responsibility to transfer
> vectors between differently refined meshes. How did you use it to load
> vectors from disk? I don't remember that it had such a functionality.
Sorry, I was not clear in my explanation. We have implemented a storage
logic that stores vectors that are currently not needed on disk and
reloads them when they are required by the computation within an
optimization loop. However, the mesh may change between storing and
loading.

With the old functionality it was possible to check whether the mesh had
been refined between storing and loading, and then to interpolate the
vector onto the refined mesh used for the computation. With this logic,
a vector is only loaded into memory (and, if necessary, interpolated)
when it is needed in the computation.

If I understood the change correctly, this will no longer be possible
(apart from a short-term switch to Legacy::...), and I would need to
load the entire trajectory into main memory to be interpolated
immediately?

> If the mesh doesn't change, and you only store and load the vectors during
> the runtime of the program, then I would suggest to serialize them with
> boost. We have a new tutorial on serialization. Please have a look at
> step-83 for more information.
> https://dealii.org/developer/doxygen/deal.II/step_83.html
I am aware of that functionality - my difficulty lies in the situation
where the mesh is supposed to change between storing and loading.

To clarify: is the loss of functionality due to a lack of time and
contributors, so that it could be fixed if I am willing to write the
code? Or does it stem from the problem that locally relevant DoFs can
move to different processes in a distributed setting, so that it cannot
(reasonably) be resolved in a unified manner for 'standard' and
'distributed' vectors?

Thanks Winni

Marc Fehling

Feb 5, 2025, 7:19:12 AM
to deal.II User Group
Hello Winnifried,


> With the old functionality it was possible to check if a refinement of
> the mesh was performed between storage and loading and then interpolate
> the vector onto the refined mesh used for computation.

The new SolutionTransfer workflow performs the transfer of data during
the Triangulation::execute_coarsening_and_refinement() call. The initial
implementation for the parallel::distributed versions of
SolutionTransfer and Triangulation featured this workflow, I believe
because of a requirement of early versions of p4est. Unfortunately, the
new SolutionTransfer class cannot currently be used for what you
described.
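
To illustrate the difference: in the new workflow, all vectors have to be in memory before the mesh changes, because the data is shipped during the refinement call. A sketch, assuming the merged class keeps the parallel::distributed-style overloads for several vectors (`load_all_timesteps` is a hypothetical helper for your storage logic):

```cpp
// All timestep vectors must be alive *before* the mesh changes:
std::vector<Vector<double>> timesteps = load_all_timesteps(); // hypothetical

std::vector<const Vector<double> *> all_in;
for (const auto &v : timesteps)
  all_in.push_back(&v);

// flag cells for refinement/coarsening as usual, then:
triangulation.prepare_coarsening_and_refinement();
SolutionTransfer<dim, Vector<double>> soltrans(dof_handler);
soltrans.prepare_for_coarsening_and_refinement(all_in);
triangulation.execute_coarsening_and_refinement(); // data transferred here
dof_handler.distribute_dofs(fe);

std::vector<Vector<double>> interpolated(
  timesteps.size(), Vector<double>(dof_handler.n_dofs()));
std::vector<Vector<double> *> all_out;
for (auto &v : interpolated)
  all_out.push_back(&v);
soltrans.interpolate(all_out);
```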

The main reason for the merge was the following: we did not yet have a
serialization mechanism for the ParticleHandler class in the serial
case, but we already had one in the parallel distributed case. Instead
of adding another specialized implementation, we decided that merging it
with the previous implementation would be a reasonable approach, and
that it would also be more maintainable in the future to handle only one
implementation.

However, this choice is not a permanent one. We are considering
redesigning our current interface and implementation of the
SolutionTransfer functionality:
https://github.com/dealii/dealii/issues/15280

With the most recent versions of p4est (>= 2.0), it should now be
possible to separate the data transfer stage from the refinement stage,
which would make the implementation more flexible. So what you describe
is potentially possible with a unified SolutionTransfer. I will add a
comment to the above issue to make sure that we follow up on this design
choice.

For now, I would propose keeping the Legacy::SolutionTransfer class
'un-deprecated' until we have the new implementation. I will open an
issue on GitHub and put it to a vote among the main developers.

Alternatively, you can extract the functionality from the old SolutionTransfer
class and copy it into your project, provided you follow the licensing terms
of deal.II.



> To clarify: is the loss in functionality due to a lack in time and
> people contributing but it could be fixed if I am willing to write the
> code or is it coming from the problem that locally relevant dof's can
> move to different cores in a distributed situation and thus it can't be
> (reasonably) resolved in a unified manner for 'standard' and
> 'distributed' vectors?

To pick up on my comments above: it was not our intention to remove a
feature, and the clash with your code was an unintended side effect. We
believed that the effect of the prepare_for_pure_refinement() function
could be reproduced by setting refinement flags on all cells and then
walking through the general SolutionTransfer process.
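
As a sketch of that idea, assuming the merged class follows the parallel::distributed-style single-vector interface (note that the vector still has to be in memory at refinement time, which is the limitation discussed above):

```cpp
// Emulate "pure refinement": flag every cell, then use the generic workflow.
triangulation.set_all_refine_flags();
triangulation.prepare_coarsening_and_refinement();

SolutionTransfer<dim, Vector<double>> soltrans(dof_handler);
soltrans.prepare_for_coarsening_and_refinement(old_solution);
triangulation.execute_coarsening_and_refinement(); // data transferred here
dof_handler.distribute_dofs(fe);

Vector<double> new_solution(dof_handler.n_dofs());
soltrans.interpolate(new_solution);
```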

To make sure that we retain your use case, may I suggest that you submit
a small test case to our test suite? It should recreate your scenario
using Legacy::SolutionTransfer::prepare_for_pure_refinement(). This way
we will be alerted to breaking changes.

In conclusion, an overhaul of SolutionTransfer means a lot of work. I
speak only for myself when I say that I can't find the time to start
working on it. Further, since it is 'just' a redesign, no new
functionality would be added, so it is hard to justify spending time and
resources on it if it doesn't help us with any of our projects.

If you would like to help us with the process, we would welcome any
contribution. The above-mentioned minimal test would be a good start.

Marc