Need advice: errno = 22


Djibril Togola

Apr 1, 2019, 12:05:31 PM4/1/19
to liam2...@googlegroups.com
Dear Liam2 users,

The error below persists when I try to run a model with relatively big data: 2.5 million individuals, within a total of 62 million rows at the starting period of the simulation.

The issue seems to disappear when the amount of data decreases.

I desperately need your feedback.

Many thanks in advance.


Traceback (most recent call last):
  File "Console.py", line 27, in <module>
  File "main.py", line 375, in <module>
  File "main.py", line 365, in main
  File "main.py", line 141, in simulate
  File "simulation.py", line 689, in run
  File "simulation.py", line 496, in run_single
  File "utils.py", line 206, in timed
  File "utils.py", line 201, in gettime
  File "data.py", line 920, in prepare
  File "data.py", line 533, in copy_table
  File "data.py", line 511, in append_table
  File "utils.py", line 674, in loop_wh_progress
  File "data.py", line 507, in copy_chunk
  File "tables\table.py", line 2269, in append
  File "tables\table.py", line 2196, in _save_buffered_rows
  File "tableextension.pyx", line 486, in tables.tableextension.Table._append_records (tables\tableextension.c:5978)
HDF5ExtError: HDF5 error back trace

  File "..\..\hdf5-1.8.11\src\H5Dio.c", line 234, in H5Dwrite
    can't prepare for writing data
  File "..\..\hdf5-1.8.11\src\H5Dio.c", line 366, in H5D__pre_write
    can't write data
  File "..\..\hdf5-1.8.11\src\H5Dio.c", line 774, in H5D__write
    can't write data
  File "..\..\hdf5-1.8.11\src\H5Dchunk.c", line 1969, in H5D__chunk_write
    unable to read raw data chunk
  File "..\..\hdf5-1.8.11\src\H5Dchunk.c", line 2954, in H5D__chunk_lock
    unable to preempt chunk(s) from cache
  File "..\..\hdf5-1.8.11\src\H5Dchunk.c", line 2740, in H5D__chunk_cache_prune
    unable to preempt one or more raw data cache entry
  File "..\..\hdf5-1.8.11\src\H5Dchunk.c", line 2607, in H5D__chunk_cache_evict
    cannot flush indexed storage buffer
  File "..\..\hdf5-1.8.11\src\H5Dchunk.c", line 2535, in H5D__chunk_flush_entry
    unable to write raw data to file
  File "..\..\hdf5-1.8.11\src\H5Fio.c", line 158, in H5F_block_write
    write through metadata accumulator failed
  File "..\..\hdf5-1.8.11\src\H5Faccum.c", line 816, in H5F_accum_write
    file write failed
  File "..\..\hdf5-1.8.11\src\H5FDint.c", line 185, in H5FD_write
    driver write request failed
  File "..\..\hdf5-1.8.11\src\H5FDsec2.c", line 822, in H5FD_sec2_write
    file write failed: time = Mon Apr 01 17:19:43 2019
, filename = 'T:\Commun\Travail\Tests_Divers_DT\Serveur\2019_01_11_modele_DT\output/simulation7.h5', file descriptor = 4, errno = 22, error message = 'Invalid argument', buf = 0000000040D31E98, total write size = 65120, bytes this sub-write = 65120, bytes actually written = 18446744073709551615, offset = 42374451840

End of HDF5 error back trace

Problems appending the records.

------
Djibril T.

Raphaël Desmet

Apr 2, 2019, 4:03:44 AM4/2/19
to liam2...@googlegroups.com

Dear Djibril,

I am not sure, but I would say it is a problem of hard disk space.

If you don’t need your output hdf5 file, use this in the simulation block:

output:
    path: output\blablabla\
    file: ''

No output file will be written on your hard disk.
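
If you want to test the disk-space hypothesis directly, something like this quick check may help (just a sketch; 'T:' is the output drive taken from your traceback, adjust as needed):

import shutil

# check the free space on the output drive ('T:' from the traceback)
usage = shutil.disk_usage('T:\\')
print('free: %.1f GiB' % (usage.free / 1024.0 ** 3))

For what it is worth, the failing write in your traceback is at offset 42374451840, i.e. the output file had already grown to roughly 39.5 GiB when the write failed.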

I hope it will help.

Raphaël  


Gaëtan de Menten

Apr 2, 2019, 5:14:27 AM4/2/19
to liam2...@googlegroups.com

62 million rows is uncharted territory for LIAM2 as far as I know. All the pieces should work (provided you have enough RAM to hold two periods' worth of data, plus some for the index and some buffers), but since this has never been tested, there might be silly mistakes lurking somewhere.
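
To give a rough order of magnitude (the field count below is just a guess): with, say, 20 eight-byte fields per row, one period of 62 million rows is about 62e6 * 160 bytes ~= 9.2 GiB, so two periods plus the index and buffers would already require well over 20 GiB of RAM.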

That said, you can only hope to make this work with a 64-bit version of LIAM2. Speaking of which, it usually helps to report the version of LIAM2 you are using when reporting a problem. Note that I will not be able to help with this for a few months, and will only be able to investigate if you provide me with an example of LIAM2 code which creates a dataset (using new()) and reproduces the problem.

See for example:

https://github.com/liam2/liam2/blob/master/liam2/tests/functional/generate.yml

(but it can be much simpler than that, of course; the generated data does not need to be anywhere near realistic)
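
Alternatively, if you cannot share LIAM2 code, even a stand-alone PyTables script that reproduces the append pattern on the same drive would tell us whether the problem is in LIAM2 at all. A rough sketch (file name, row count and fields are made up; adapt them to your data):

import numpy as np
import tables

N_ROWS = 62 * 10 ** 6
CHUNK = 10 ** 6
# two dummy fields; replace with something closer to your real row size
dtype = np.dtype([('id', np.int32), ('age', np.int32)])

with tables.open_file('T:\\repro_test.h5', mode='w') as f:
    table = f.create_table('/', 'person', description=dtype,
                           expectedrows=N_ROWS)
    buf = np.zeros(CHUNK, dtype=dtype)
    for start in range(0, N_ROWS, CHUNK):
        buf['id'] = np.arange(start, start + CHUNK, dtype=np.int32)
        # this is the same Table.append() call that raised HDF5ExtError
        table.append(buf)
        table.flush()

If that script fails with the same errno 22, the problem lies below LIAM2 (PyTables, HDF5 or the file system of the T: drive); if it succeeds, the problem is more likely on our side.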

Gaëtan



Djibril Togola

Apr 2, 2019, 12:57:10 PM4/2/19
to liam2...@googlegroups.com
Dear Raphaël, 

Thank you very much for your helpful comment.
Your approach seems to work, but I am running a long simulation (55 periods) which reuses the output file and the duration() function. What's more, I need to store the output file for several reasons.

In any case, I can now reasonably assert that the problem is related to the hard disk.

Many thanks.
------
Djibril TOGOLA


Djibril Togola

Apr 2, 2019, 1:25:49 PM4/2/19
to liam2...@googlegroups.com
Dear Gaëtan,

Thank you for your helpful comment. I am actually using LIAM2 0.12.0 alpha2 (64-bit).

Unfortunately, the dataset (62 million rows) is not created with new(); instead, it is read from an .h5 file. Also, the code is embedded in a larger model which involves too many files. So, I regret that I am not able to provide a working example. Thank you anyway for your kind attention.

Many thanks, 

------
Djibril TOGOLA

