java.io.IOException Bad file number


Viji

Dec 2, 2011, 12:46:55 AM
to H2 Database, vnata...@lbl.gov
Hi,
We are getting this exception a lot while inserting into the table.
It only happens on one particular machine; can you please shed some
light on this?

Is it related to IO on the local machine? What is meant by "Bad file
number"?
Please let me know.

GsiFTPParallelBrowsing.Exception.parseListingAndTransferFilesForGet=org.h2.jdbc.JdbcSQLException:
IO Exception: "java.io.IOException: Bad file number"; "/export/home/
dean/.bdm/bdmDB.ffdf5ac1c3b2f4c5.4439.temp.db" [90031-159]
GsiFTPParallelBrowsing.Exception.parseListingAndTransferFilesForGet=org.h2.jdbc.JdbcSQLException:
IO Exception: "java.io.IOException: Bad file number"; "/export/home/
dean/.bdm/bdmDB.ffdf5ac1c3b2f4c5.4440.temp.db" [90031-159]
GsiFTPParallelBrowsing.Exception.parseListingAndTransferFilesForGet=org.h2.jdbc.JdbcSQLException:
IO Exception: "java.io.IOException: Bad file number"; "/export/home/
dean/.bdm/bdmDB.ffdf5ac1c3b2f4c5.4441.temp.db" [90031-159]

Thanks.
Viji

andreis

Dec 2, 2011, 2:43:11 AM
to H2 Database
> Is it related to IO on the local machine? What is meant by "Bad file
> number"?
http://docs.oracle.com/cd/E19455-01/806-1075/msgs-1050/index.html
"Either a file descriptor refers to no open file,
or a read(2)--or a write(2)--request is made to a file that is open
only for writing or reading."
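
The same class of failure can be reproduced in plain Java by using a file handle after it has been closed (a minimal sketch; in the NFS case the descriptor is presumably invalidated by the remote side rather than by an explicit close, but the symptom is similar):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedChannelException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class BadDescriptorDemo {
    // Returns the name of the exception raised when reading from a closed channel.
    static String readAfterClose() throws IOException {
        Path tmp = Files.createTempFile("demo", ".db");
        FileChannel ch = FileChannel.open(tmp, StandardOpenOption.READ);
        ch.close(); // the descriptor is released here
        try {
            // Fails: no open file backs the channel any more
            ch.read(ByteBuffer.allocate(16));
            return "no exception";
        } catch (ClosedChannelException e) {
            return e.getClass().getSimpleName();
        } finally {
            Files.deleteIfExists(tmp);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readAfterClose());
    }
}
```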

I think the problem is in your environment. Is the home directory
mounted via NFS?

Thomas Mueller

Dec 3, 2011, 12:48:56 PM
to h2-da...@googlegroups.com
Hi,

It looks like a temporary file was closed too early. Could you post
the complete stack trace please, and tell us what statement(s) you
ran?

Regards,
Thomas

vijaya natarajan

Dec 4, 2011, 12:03:28 PM
to h2-da...@googlegroups.com
Hi Thomas,
Thanks for your reply. Here is the stack trace. It is happening during an insert statement.
The table is not that big either: about 20k+ rows. The disk space we have is 60TB,
and it is NFS mounted.

We really don't know what the problem is. Thanks for helping us.

ts=2011-12-01T21:35:27.376Z level=Excep class=gov.lbl.bdm.gsiftp.GsiFTPParallelBrowsing IO Exception: "java.io.IOException: Bad file number"; "/export/home/dean/.bdm/bdmDB.f062c268ab6e375a.17.temp.db" [90031-159]
org.h2.message.DbException.getJdbcSQLException(DbException.java:329)
org.h2.message.DbException.get(DbException.java:158)
org.h2.message.DbException.convertIOException(DbException.java:315)
org.h2.store.FileStore.readFully(FileStore.java:287)
org.h2.result.ResultDiskBuffer.readRow(ResultDiskBuffer.java:198)
org.h2.result.ResultDiskBuffer.nextUnsorted(ResultDiskBuffer.java:216)
org.h2.result.ResultDiskBuffer.next(ResultDiskBuffer.java:209)
org.h2.result.LocalResult.next(LocalResult.java:229)
org.h2.jdbc.JdbcResultSet.nextRow(JdbcResultSet.java:2986)
org.h2.jdbc.JdbcResultSet.next(JdbcResultSet.java:116)
gov.lbl.bdm.BDM_DB.insertIntoDirectory(BDM_DB.java:787)
gov.lbl.bdm.gsiftp.GsiFTPParallelBrowsing.parseListingsAndTransferFileForGet(GsiFTPParallelBrowsing.java:914)
gov.lbl.bdm.gsiftp.GsiFTPParallelBrowsing.browseDirsNow(GsiFTPParallelBrowsing.java:666)

Thanks.
Viji



Viji

Dec 4, 2011, 4:16:12 PM
to H2 Database
Hi Andreis,
Yes, the home directory is NFS mounted, and it has a lot of disk space
too.
Thanks.
Viji

andreis

Dec 7, 2011, 6:58:37 AM
to H2 Database
I've got that exception a couple of times. It was not related to H2.
We were not able to find the cause or a solution; we decided it was
related to high load on NFS.
And here is my question: Thomas, what is the purpose of keeping temporary
files next to the DB file? It creates unnecessary pressure on the network
if the DB is placed on an NFS volume (or ZFS on a SAN). I'm talking about
the current implementation of Database.createTempFile(). A parameter
to set a directory for temp files would reduce remote IO on big inserts/
updates/selects.
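
Something like the following sketch, perhaps. Note that "h2.tempDir" is an imagined property name for illustration only, not an existing H2 setting; the fallback to "java.io.tmpdir" is what File.createTempFile uses by default:

```java
import java.io.File;
import java.io.IOException;

public class TempDirSketch {
    // Hypothetical sketch: resolve a configurable temp directory,
    // falling back to the JVM's standard temp location.
    static File createTempFile(String prefix) throws IOException {
        String dir = System.getProperty("h2.tempDir",            // imagined setting
                     System.getProperty("java.io.tmpdir"));      // JVM default
        File f = File.createTempFile(prefix, ".temp.db", new File(dir));
        f.deleteOnExit();
        return f;
    }
}
```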

Noel Grandin

Dec 7, 2011, 9:01:52 AM
to h2-da...@googlegroups.com, andreis

That sounds like a reasonable suggestion to me.
(Not that I'm volunteering to implement it; I'm a little busy on another open-source project right now.)

Viji

Dec 7, 2011, 10:00:54 PM
to H2 Database
Hi Noel,
We are under great pressure in one of the projects where we are bumping
into this error. Otherwise, we really like H2. Can you please suggest
a solution? It is kind of URGENT.

Thanks a lot.
Viji

Noel Grandin

Dec 8, 2011, 1:15:43 AM
to h2-da...@googlegroups.com
Not much we can do about a flaky NFS connection.

Why exactly are you writing what looks like a temporary database to an
NFS server? Why not save it on local storage?

Donal Tobin

Dec 8, 2011, 4:20:22 AM
to H2 Database
Hi Viji,
The NFS FAQ: http://www.sunhelp.org/faq/nfs.html
Section 4.7 indicates that it is a misconfigured NFS daemon (that is,
the server serving the NFS mount point).

I would question why an NFS mount is used for temporary files; this
would be a huge performance bottleneck and can also cause the process
to lock up (i.e. an NFS hard mount plus network packet loss).

Donal.

andreis

Dec 8, 2011, 8:03:22 AM
to H2 Database
You can try fixing the library yourself. In the method
{{Database.createTempFile()}}, change "boolean inTempDir =
readOnly" to "boolean inTempDir = true". It will then use "java.io.tmpdir"
for temporary files. (But, of course, I would prefer introducing an
H2 parameter for that.)
Second, you can try reconfiguring your H2 instance to avoid buffering.
Look at "SET MAX_MEMORY_ROWS": http://www.h2database.com/html/grammar.html#set_max_memory_rows.
Also look at "SET MAX_MEMORY_UNDO" and "SET MAX_OPERATION_MEMORY".
Third, you can try optimizing your queries to reduce the number of rows
in a result set.
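
Since the stack trace goes through ResultDiskBuffer, raising these limits should keep more of the intermediate results in memory instead of spilling to temp files. The values below are illustrative only; tune them for your workload and heap size:

```sql
-- Rows a result set may hold in memory before buffering to disk
SET MAX_MEMORY_ROWS 100000;
-- Undo log entries kept in memory per session
SET MAX_MEMORY_UNDO 100000;
-- Bytes a single operation may use before resorting to temp files
SET MAX_OPERATION_MEMORY 10000000;
```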

Viji

Dec 13, 2011, 12:19:57 AM
to H2 Database
Hi,
Thanks a lot for all of your valuable suggestions.
We are using local storage for the database files.
Finally, we understood that we were using too many threads
to update the database. We reduced the number of threads,
and the problem seems to be solved now.

But our setup will soon scale to a much larger degree.
We are really keeping our fingers crossed.
We will definitely try your suggestions later.

Thanks.
Viji

Thomas Mueller

Dec 14, 2011, 3:02:00 PM
to h2-da...@googlegroups.com
Hi,

> Thomas, what is the purpose to keep temporary files next to the DB file?

The idea was simplicity, but now I see it would probably make sense to
always store them in the temp directory (some are already stored in
the temp dir, but not all). I will check whether that's possible.

Generally, though, keeping the database file on a slow file system is
not a good idea.

Regards,
Thomas
