HDF5 output fails for large meshes


Jalil Khan

Sep 16, 2025, 7:04:42 AM
to deal.II User Group
Dear all,

I am encountering an issue with HDF5/XDMF output using DataOut::write_hdf5_parallel() on large meshes. Up to a resolution of 256×256×256, everything works correctly, but at 512×512×512 the program aborts with the following HDF5 error:

```
#014 HDF5-DIAG: Error ... unable to set chunk sizes
    major: Dataset
    minor: Bad value
#015 HDF5-DIAG: ... chunk size must be < 4GB

```
I have attached the error log as well.

From inspecting the source in data_out_base.cc, it appears that deal.II writes the HDF5 datasets with a chunked layout, with the chunk dimensions set internally. At 512³, the computed chunk size exceeds the HDF5 hard limit of 4 GB per chunk, which leads to the failure.
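As a rough back-of-the-envelope check (assuming the cells dataset is written as a single chunk of 4-byte unsigned ints, with 8 vertices per hex cell): 512^3 cells x 8 vertices x 4 bytes = 2^32 bytes = 4 GiB, which already violates the "chunk size must be < 4GB" constraint, whereas 256^3 x 8 x 4 bytes = 512 MiB fits comfortably.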

Is there a recommended way for users to avoid this error when writing large output files?

I would really appreciate any suggestions.

Thank you.

Best Regards,
JRK
errornew.txt

Wolfgang Bangerth

Sep 16, 2025, 1:59:50 PM
to dea...@googlegroups.com

On 9/16/25 05:04, Jalil Khan wrote:
>
> From inspecting the source in data_out_base.cc, HDF5 uses chunked
> layout, with chunk dimensions set internally in deal.II. At 512³, the
> computed chunk size exceeds the HDF5 hard limit of 4 GB per chunk, which
> leads to the failure.
>
> Is there a recommended way for users to avoid this error when writing
> large output files?

Jalil:
that likely falls in the category of "the person who wrote these HDF5
calls didn't think of this scenario, and nobody has run into it since
then". Can you identify which line in data_out_base.cc specifically
triggers the issue, and can you show how you call deal.II functions to
run into this issue?

In the end, this ought to be fixed of course. Any help is obviously
always welcome!

Best
Wolfgang


Jalil Khan

Sep 17, 2025, 12:56:33 AM
to dea...@googlegroups.com
Here is how I call my data_out object to write HDF5:

```
// Filter duplicate vertices and enable XDMF/HDF5 output.
DataOutBase::DataOutFilter data_filter(
  DataOutBase::DataOutFilterFlags(true, true));

data_out.write_filtered_data(data_filter);

data_out.write_hdf5_parallel(data_filter,
                             write_mesh_file,
                             mesh_filename,
                             solution_filename,
                             mpi_comm);
```


Here are the relevant lines pulled from data_out_base.cc (9.7.0 sources):

```
    node_dataset_id = H5Pcreate(H5P_DATASET_CREATE);
#  ifdef DEAL_II_WITH_ZLIB
    H5Pset_deflate(node_dataset_id,
                   get_zlib_compression_level(flags.compression_level));
    H5Pset_chunk(node_dataset_id, 2, cell_ds_dim); // <------- Here it sets some chunk size?
#  endif
    cell_dataset = H5Dcreate(h5_mesh_file_id,
                             "cells",
                             H5T_NATIVE_UINT,
                             cell_dataspace,
                             H5P_DEFAULT,
                             node_dataset_id,
                             H5P_DEFAULT);
    H5Pclose(node_dataset_id);
#  endif

    AssertThrow(cell_dataset >= 0, ExcIO());   // <--- line 8330 in my error
```



--
Regards
J.R.Khan
8803985667

Jalil Khan

Sep 17, 2025, 12:59:36 AM
to dea...@googlegroups.com
I want to correct the line numbers: the excerpt is from lines 8333 - 8348.
--
Regards
J.R.Khan
8803985667

Praveen C

Sep 17, 2025, 1:15:54 AM
to Deal. II Googlegroup
This limit of 4 GB seems to be a hard limit set within HDF5:

https://support.hdfgroup.org/documentation/hdf5/latest/group___d_c_p_l.html#title22

In this case, how can we save larger files? The problem we see is with data on a grid of the order of 512^3 with 5 unknowns at each node.

Should we switch to some other format?

Thanks
praveen

Wolfgang Bangerth

Sep 17, 2025, 10:46:39 PM
to dea...@googlegroups.com
On 9/16/25 22:56, Jalil Khan wrote:
> Here is how I call my data_out object to write HDF5:
>
> DataOutBase::DataOutFilter
>   data_filter(DataOutBase::DataOutFilterFlags(true, true));
> data_out.write_filtered_data(data_filter);
> data_out.write_hdf5_parallel(data_filter, write_mesh_file, mesh_filename,
>                              solution_filename, mpi_comm);

Jalil:
Can you make this into a self-contained testcase that illustrates the problem?
Best
W.

Wolfgang Bangerth

Sep 17, 2025, 10:54:17 PM
to dea...@googlegroups.com
On 9/16/25 23:15, Praveen C wrote:
>
> This limit of 4GB seems to be a hard limit set within hdf5
>
> https://support.hdfgroup.org/documentation/hdf5/latest/group___d_c_p_l.html#title22
>
> In this case, how can we save larger files? The problem we see is with data
> on a grid of the order of 512^3 with 5 unknowns at each node.
>
> Should we switch to some other format ?

If you want to output stuff on regular NxNxN grids, you really only have the
choice between HDF5 and NetCDF. If HDF5 works for you, it seems easier to fix
the underlying issue than to switch to something else.

I must admit that I don't know the HDF5 interfaces. In fact, I don't know
whether there is anyone left who does. But it seems like a reasonable choice
to change the deal.II code to just limit the chunk size to a maximum of 4 GB.
In the place where that is currently happening, lines 8314 and 8337, you
define the chunk size via a 2d array with sizes
cell_ds_dim[0] = global_node_cell_count[1];
cell_ds_dim[1] = patches[0].reference_cell.n_vertices();

If the product of these two numbers, times the size of the stored data type,
happens to exceed the 4 GB limit, then call H5Pset_chunk with a different
array that has one of the two dimensions reduced to a point where the chunk
is no longer bigger than 4 GB. I *think*, but it
might require some reading, that the chunk size is only relevant for how the
data is stored, but not what is being stored and how it is to be interpreted
by any reader. As a consequence, you might get away with just setting a
smaller cell_ds_dim in these calls, and everything else might still work.
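
A minimal sketch of what such a cap could look like (untested; the helper name set_capped_chunk is made up for illustration, and the 4 GB figure comes from the error message above). It only shrinks the chunk's leading dimension, so the dataspace, and therefore the data a reader sees, stays the same:

```
#include <hdf5.h>

// Hypothetical helper: shrink a 2-D chunk until its byte size is below
// HDF5's hard limit of 4 GB per chunk. Only the storage layout changes;
// the dataset's dataspace and contents are untouched.
void set_capped_chunk(const hid_t   dataset_creation_plist,
                      hsize_t       dim0,          // e.g. global cell count
                      const hsize_t dim1,          // e.g. vertices per cell
                      const hsize_t element_size)  // e.g. sizeof(unsigned int)
{
  const hsize_t max_chunk_bytes = (hsize_t(1) << 32) - 1; // chunk must be < 4 GB

  // Halve the leading dimension until the chunk fits under the limit.
  while (dim0 > 1 && dim0 * dim1 * element_size > max_chunk_bytes)
    dim0 = (dim0 + 1) / 2;

  const hsize_t chunk_dims[2] = {dim0, dim1};
  H5Pset_chunk(dataset_creation_plist, 2, chunk_dims);
}
```

In data_out_base.cc this would replace the plain H5Pset_chunk call, e.g. set_capped_chunk(node_dataset_id, cell_ds_dim[0], cell_ds_dim[1], sizeof(unsigned int)) for the cells dataset (and analogously for the node coordinates, which are presumably stored as doubles).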

Want to try this out?

Cheers
W.

Jalil Khan

Sep 18, 2025, 11:17:16 AM
to dea...@googlegroups.com
Here is a minimal, self-contained example that produces the error (attached as example.cc).
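
(The attached example.cc is the authoritative version; below is only a rough, untested sketch of what such a reproducer might look like, assuming a 512^3 hex mesh, a single dummy scalar field, and the file names mesh.h5 / solution.h5 picked for illustration.)

```
#include <deal.II/base/index_set.h>
#include <deal.II/base/mpi.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/lac/la_parallel_vector.h>
#include <deal.II/numerics/data_out.h>

using namespace dealii;

int main(int argc, char *argv[])
{
  Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);
  const MPI_Comm mpi_comm = MPI_COMM_WORLD;

  // 2^9 = 512 cells per direction, i.e. 512^3 cells in total.
  parallel::distributed::Triangulation<3> tria(mpi_comm);
  GridGenerator::hyper_cube(tria);
  tria.refine_global(9);

  FE_Q<3>       fe(1);
  DoFHandler<3> dof_handler(tria);
  dof_handler.distribute_dofs(fe);

  // Dummy ghosted solution vector so DataOut has something to write.
  const IndexSet locally_relevant =
    DoFTools::extract_locally_relevant_dofs(dof_handler);
  LinearAlgebra::distributed::Vector<double> solution(
    dof_handler.locally_owned_dofs(), locally_relevant, mpi_comm);
  solution.update_ghost_values();

  DataOut<3> data_out;
  data_out.attach_dof_handler(dof_handler);
  data_out.add_data_vector(solution, "u");
  data_out.build_patches();

  // Same output calls as earlier in the thread; the HDF5 chunk-size
  // error appears inside write_hdf5_parallel().
  DataOutBase::DataOutFilter data_filter(
    DataOutBase::DataOutFilterFlags(true, true));
  data_out.write_filtered_data(data_filter);
  data_out.write_hdf5_parallel(data_filter,
                               /* write_mesh_file = */ true,
                               "mesh.h5",
                               "solution.h5",
                               mpi_comm);
}
```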




--
Regards,
JRK
example.cc