Exporting an HDF5 file in a loop


Uclus Heis

Feb 15, 2022, 11:04:54
to deal.II User Group
Good afternoon,

I want to store my results in an HDF5 file using a distributed implementation. I am computing different frequencies, so I have a loop in my run() function where I solve for each frequency. When computing, for example, 5 frequencies, I get 5 results with the same values in my HDF5 file, even though the solution vector inside deal.II is correct. Could you help me? I do not know what I am doing wrong when exporting the results. My code looks like the following:

DataOut<dim> data_out;
data_out.attach_dof_handler(dof_handler);
for (int freq_iter = 0; ...)   // solver loop
{
  ...  // compute the solution for this frequency
  string nname  = "rb";
  string nit    = to_string(freq_iter);
  string fitame = nname + nit;
  data_out.add_data_vector(locally_relevant_solution, fitame);
}
data_out.build_patches();
DataOutBase::DataOutFilterFlags flags(true, true);
DataOutBase::DataOutFilter      data_filter(flags);
data_out.write_filtered_data(data_filter);
data_out.write_hdf5_parallel(data_filter, "solution.h5", MPI_COMM_WORLD);


Thank you for your time.

Timo Heister

Feb 15, 2022, 13:43:02
to dea...@googlegroups.com
The call to data_out.add_data_vector() does not copy the contents of the vector; it just keeps a pointer to it until the data is actually written.
You will need to store your solutions in different vectors, without overwriting the old ones.
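
A minimal sketch of that idea, assuming it is feasible to keep all frequency solutions in memory until the single write at the end; "VectorType" and "n_frequencies" below are placeholders for whatever ghosted vector type and loop bound the actual program uses:

// Keep one (ghosted) solution vector per frequency alive until the write.
std::vector<VectorType> frequency_solutions;

for (unsigned int freq_iter = 0; freq_iter < n_frequencies; ++freq_iter)
  {
    // ... assemble and solve for this frequency into locally_relevant_solution ...
    frequency_solutions.push_back(locally_relevant_solution);
  }

// Only now hand the vectors to DataOut; it stores pointers, not copies,
// so the vectors above must stay alive until write_hdf5_parallel() is done.
DataOut<dim> data_out;
data_out.attach_dof_handler(dof_handler);
for (unsigned int f = 0; f < frequency_solutions.size(); ++f)
  data_out.add_data_vector(frequency_solutions[f], "rb" + std::to_string(f));

data_out.build_patches();
DataOutBase::DataOutFilterFlags flags(true, true);
DataOutBase::DataOutFilter      data_filter(flags);
data_out.write_filtered_data(data_filter);
data_out.write_hdf5_parallel(data_filter, "solution.h5", MPI_COMM_WORLD);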


Uclus Heis

Feb 15, 2022, 15:02:06
to deal.II User Group
Thank you very much for the answer.
I need to evaluate a large number of frequencies, so in that case I would need to keep a large number of vectors around to track the results, which is not optimal in my case.
Is there any other way to do this? Would it be possible to call data_out.add_data_vector() and data_out.write_hdf5_parallel() in each iteration? Would that keep the previously written information?

Thank you

Timo Heister

Feb 15, 2022, 17:45:39
to dea...@googlegroups.com
What we typically do in this situation is write one visualization output file per iteration, with a different filename. This is done in many of the examples, especially for time-dependent problems.
I don't think there is an easy way to append data to an existing file as you suggested; that would overwrite the existing data. With the HDF5 output you can store the mesh only once, assuming it doesn't change. See DataOutBase::DataOutFilterFlags.
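
A hedged sketch of the "mesh only once" variant: recent deal.II versions have an overload of write_hdf5_parallel() that takes separate mesh and solution file names plus a flag saying whether to write the mesh file at all. File names below are illustrative, and the mesh is assumed not to change between frequencies:

// Inside the per-frequency output code, after write_filtered_data():
const bool write_mesh = (freq_iter == 0);   // write the mesh file only once
data_out.write_hdf5_parallel(data_filter,
                             write_mesh,
                             "mesh.h5",
                             "solution-" + std::to_string(freq_iter) + ".h5",
                             MPI_COMM_WORLD);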


Uclus Heis

Feb 16, 2022, 05:07:56
to deal.II User Group
Dear Timo, 

Thank you for the comments.

I am now writing one file per iteration. However, when running with mpirun on more than 1 MPI rank, I get an error in the writing function.
My code currently looks like this:

DataOut<dim> data_out;
data_out.attach_dof_handler(dof_handler);
for (int freq_iter = 0; ...)   // solver loop
{
  ...  // compute the solution for this frequency
  string nname  = "rb";
  string nit    = to_string(freq_iter);
  string fitame = nname + nit;

  string f_output("./solution-" + std::to_string(freq_iter) + ".h5");
  data_out.add_data_vector(locally_relevant_solution, fitame);

  data_out.build_patches();
  DataOutBase::DataOutFilterFlags flags(true, true);
  DataOutBase::DataOutFilter      data_filter(flags);
  data_out.write_filtered_data(data_filter);
  data_out.write_hdf5_parallel(data_filter, "solution.h5", MPI_COMM_WORLD);
}

The error is the following:

An error occurred in line <7811> of file </zhome/32/9/115503/dealii-candi/tmp/unpack/deal.II-v9.3.1/source/base/data_out_base.cc> in function

void dealii::DataOutBase::write_hdf5_parallel(const std::vector<dealii::DataOutBase::Patch<dim, spacedim> >&, const dealii::DataOutBase::DataOutFilter&, bool, const string&, const string&, ompi_communicator_t* const&) [with int dim = 2; int spacedim = 2; std::string = std::__cxx11::basic_string<char>; MPI_Comm = ompi_communicator_t*]

The violated condition was: 

patches.size() > 0

Additional information: 

You are trying to write graphical data into a file, but no data is available in the intermediate format that the DataOutBase functions require.

Did you forget to call a function such as DataOut::build_patches()?

My intention is to save the solution on the whole domain, for each iteration, when using a parallel distributed implementation.
Could you please help me or point out what I am doing wrong?

Thank you again

Wolfgang Bangerth

Feb 16, 2022, 11:09:02
to dea...@googlegroups.com

Uclus,
the usual style is to create the DataOut object every time you are writing
into a file, and then let the variable die at the end of the scope again,
rather than keeping it around for the next time you want to create output. You
might want to take a look at how time dependent programs do this, say step-26.

Of course, in that case you will want to write to a different file in each
iteration. Right now, your code snippet suggests that you don't actually ever
write into the f_output stream.
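
A rough sketch of the loop body with both points applied (illustrative only; the DataOut object now lives inside the loop, and the write call actually uses the per-iteration file name):

for (int freq_iter = 0; ...)   // solver loop
{
  // ... compute the solution for this frequency ...

  const std::string field_name = "rb" + std::to_string(freq_iter);
  const std::string f_output   = "./solution-" + std::to_string(freq_iter) + ".h5";

  DataOut<dim> data_out;                       // local to this iteration
  data_out.attach_dof_handler(dof_handler);
  data_out.add_data_vector(locally_relevant_solution, field_name);
  data_out.build_patches();

  DataOutBase::DataOutFilterFlags flags(true, true);
  DataOutBase::DataOutFilter      data_filter(flags);
  data_out.write_filtered_data(data_filter);
  data_out.write_hdf5_parallel(data_filter, f_output, MPI_COMM_WORLD);
}                                              // data_out goes out of scope here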

For your particular error, I don't know whether that is going to solve the
problem. It may be that you have no cells on a particular MPI process and that
that is what the error indicates -- that would be a bug. But I would first try
to address the issue mentioned above.

Best
W.



--
Wolfgang Bangerth    email: bang...@colostate.edu
                     www:   http://www.math.colostate.edu/~bangerth/

Uclus Heis

Feb 16, 2022, 12:16:45
to deal.II User Group
Dear Wolfgang, 

Thank you for the suggestions. I updated the code as you mentioned. I do not fully understand your remark that I am not writing into the f_output stream. Is data_out.write_hdf5_parallel() not writing the data? However, I still get the error when running with more than 1 MPI rank. If it is a bug, I cannot do much about it.

At this point I do not care much whether the output file is HDF5 or not. I just need to export the raw data of the whole domain for each frequency using a parallel approach. Do you have any suggestions on how to do that?

I also tried to save the locally_relevant_solution vector into a FullMatrix that stores each DoF per frequency and then print it at the end, but the data is also wrong for more than 1 MPI rank:


for (int freq_iter = 0; ...)   // solver loop
{
  // solve ...
  testvec = locally_relevant_solution;   // testvec is a vector of size n_dofs

  for (unsigned int j = 0; j < dof_handler.n_dofs(); ++j)
    testsol(freq_iter, j) += testvec(j); // testsol is a FullMatrix of size n_frequencies x n_dofs
}

if (Utilities::MPI::this_mpi_process(mpi_communicator) == 0)
  testsol.print_formatted(mydomain, 12, true, 0, "0"); // mydomain is an ofstream


Is there anything wrong in this approach of exporting the results?

Thank you


Wolfgang Bangerth

Feb 16, 2022, 12:25:40
to dea...@googlegroups.com

> Thank you for the suggestions. I updated the code as you mentioned. I do not
> fully understand your remark that I am not writing into the f_output stream.

In the code snippet you showed, you are always writing to "solution.h5", not
to the f_output stream.


> Is data_out.write_hdf5_parallel() not writing the data?
> However, I still get the error when running with more than 1 MPI rank.
> If it is a bug, I cannot do much about it.

It is meant to do exactly that, and we use it all the time. Just not for the
corner case of a process that has no cells. But you can use any number of
other formats to output your solutions. In parallel, we generally use the VTU
file format. Take a look at any of the parallel programs, starting with
step-40, step-32, etc, and see how they create output.
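
A hedged sketch of the VTU route in the style of step-40; write_vtu_with_pvtu_record() and its argument order are as in recent deal.II releases, so check against the version in use:

// One VTU/PVTU record per frequency, written collectively by all MPI ranks.
DataOut<dim> data_out;
data_out.attach_dof_handler(dof_handler);
data_out.add_data_vector(locally_relevant_solution,
                         "rb" + std::to_string(freq_iter));
data_out.build_patches();
data_out.write_vtu_with_pvtu_record("./",             // output directory
                                    "solution",       // base file name
                                    freq_iter,        // counter appended to the name
                                    mpi_communicator,
                                    4);               // digits used for the counter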


> I also tried to save the locally_relevant_solution vector into a FullMatrix
> that stores each DoF per frequency and then print it at the end, but the data
> is also wrong for more than 1 MPI rank:
>
>
> for (int freq_iter = 0; ...)   // solver loop
> {
>   // solve ...
>   testvec = locally_relevant_solution;   // testvec is a vector of size n_dofs
>
>   for (unsigned int j = 0; j < dof_handler.n_dofs(); ++j)
>     testsol(freq_iter, j) += testvec(j); // testsol is a FullMatrix of size n_frequencies x n_dofs
> }
>
> if (Utilities::MPI::this_mpi_process(mpi_communicator) == 0)
>   testsol.print_formatted(mydomain, 12, true, 0, "0"); // mydomain is an ofstream
>
>
> Is there anything wrong in this approach of exporting the results?

I don't know what "the data is wrong" means, but this process could in
principle work. Except you end up with just vectors without knowing the
physical location at which each degree of freedom is located. You don't know
how to visualize these vectors any more if you don't also output node data.
This is of course exactly what DataOut is there for.
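
If raw per-DoF values are really all that is needed, one way to also record where each DoF lives is DoFTools::map_dofs_to_support_points(); a sketch under the assumptions that the finite element has support points and that each rank writes only the DoFs it owns (the per-rank pieces then have to be combined afterwards):

// Map each locally relevant DoF index to the coordinates of its support point.
std::map<types::global_dof_index, Point<dim>> support_points;
DoFTools::map_dofs_to_support_points(MappingQ1<dim>(), dof_handler, support_points);

// Write "dof_index  coordinates  value" for the locally owned DoFs,
// e.g. into a rank-specific file rather than a single stream on rank 0.
for (const auto index : dof_handler.locally_owned_dofs())
  mydomain << index << ' '
           << support_points[index] << ' '
           << locally_relevant_solution(index) << '\n';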

Best
W.

Uclus Heis

Feb 16, 2022, 12:44:02
to deal.II User Group
Dear Wolfgang,

It seems that "solution.h5" instead of f_output was a typo in my post; I do have f_output in my code. I can see that the problem here is that there are processes without cells, which may be a bug in my implementation, as I would like to use all the processes. How could that happen?

Thank you again

Wolfgang Bangerth

Feb 16, 2022, 13:07:12
to dea...@googlegroups.com
On 2/16/22 10:44, Uclus Heis wrote:
>
> It seems that "solution.h5" instead of f_output was a typo in my post; I do
> have f_output in my code. I can see that the problem here is that there are
> processes without cells, which may be a bug in my implementation, as I would
> like to use all the processes. How could that happen?

No, it's not a bug in your code. If you have only 1 cell, for example, and you run on
multiple processes, then there are necessarily processes that own no cells.

The bug lies in the deal.II library:
https://github.com/dealii/dealii/issues/13404

Like I said, the VTU output writer works even in the case of empty processes.