proper shutdown for PageFile?

Justin

Aug 2, 2010, 12:07:27 PM
to hawtdb
In writing some test cases, I find that once an exception happens with
a PageFile, it is not possible to close the file using the API: I keep
getting the same exception because close() keeps trying to flush and
re-encounters the original failure during flush().

PageFileFactory factory = ...;
PageFile pf = factory.open();
try {
    doSomethingThatFails(pf); // this somehow triggers an exception
} finally {
    pf.close(); // won't work
}
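
The best I can do in the tests is wrap the close so the original
exception isn't masked, something like this sketch (quietClose is just
a hypothetical helper I'd add, not part of the HawtDB API):

// Test helper (my own code, not HawtDB API); assumes PageFile is the
// type returned by factory.open().
static void quietClose(PageFile pf) {
    if (pf == null) {
        return;
    }
    try {
        pf.close();
    } catch (Exception e) {
        // close() re-runs flush() and re-throws the original
        // IOPagingException; ignore it here so the test reports the
        // first failure instead.
    }
}

But that only hides the close failure; the underlying mapping still
seems to be stuck.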

Stranger yet, this even happens if I use a different (empty) file for
each test.
first exception: IOPagingException: position: 0@0 bufferSize: 67108864
...
Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method)
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:758)
... 29 more

Every subsequent exception:
org.fusesource.hawtdb.api.IOPagingException: position: 1543503872@23
bufferSize: 67108864
Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method)
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:758)
... 29 more

It almost seems like the memory-mapped I/O machinery in Java gets into
a bad state from which there is no recovery.
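
My guess at what is going on (a standalone sketch, nothing
HawtDB-specific): on a 32-bit JVM, mapping regions without ever
releasing them should produce the same "Map failed" error, since there
is no public unmap in Java 6 and each mapping holds on to address
space until the buffer is garbage collected.

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.util.ArrayList;
import java.util.List;

public class MapExhaustion {
    public static void main(String[] args) throws Exception {
        RandomAccessFile raf = new RandomAccessFile("map-test.dat", "rw");
        raf.setLength(64 * 1024 * 1024); // same 64 MB buffer size as in the trace
        FileChannel channel = raf.getChannel();
        List<MappedByteBuffer> buffers = new ArrayList<MappedByteBuffer>();
        // Each map() call reserves virtual address space; until the buffers
        // are garbage collected that space is never returned, so a 32-bit
        // JVM soon fails with "java.lang.OutOfMemoryError: Map failed".
        for (int i = 0; ; i++) {
            buffers.add(channel.map(FileChannel.MapMode.READ_WRITE, 0, 64 * 1024 * 1024));
            System.out.println("mapped " + (i + 1) + " x 64 MB");
        }
    }
}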

I've also tried adding a forceClose() method that calls close() on the
PageFileFactory; this allows the tests to complete. I've noticed that
the close() method spends a long time in sync() while closing all the
buffers. I am not sure whether this is normal.

Hiram Chirino

Aug 2, 2010, 1:19:35 PM
to haw...@googlegroups.com
On Mon, Aug 2, 2010 at 12:07 PM, Justin <justin....@gmail.com> wrote:
> In writing some test cases, I find that once an exception happens with
> a PageFile, it is not possible to close the file using the API, I keep
> getting the same exception because it keeps trying to flush (and re-
> encountering the original exception during flush()).
>

interesting..

> PageFileFactory  factory = ...;
> PageFile pf = factory.open();
> try { // this somehow triggers an exception
> doSomethingThatFails(pf);
> }
> finally {  // won't work
>  pf.close();
> }
>
> Stranger yet, this even happens if I use a different (empty) file for
> each test.
> first exception: IOPagingException: position: 0@0 bufferSize: 67108864
> ...
> Caused by: java.lang.OutOfMemoryError: Map failed
>        at sun.nio.ch.FileChannelImpl.map0(Native Method)
>        at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:758)
>        ... 29 more
>
> Every subsequent exception:
> org.fusesource.hawtdb.api.IOPagingException: position: 1543503872@23
> bufferSize: 67108864
> Caused by: java.lang.OutOfMemoryError: Map failed
>        at sun.nio.ch.FileChannelImpl.map0(Native Method)
>        at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:758)
>        ... 29 more
>
> It almost seems like the memory mapped IO stuff in java is getting
> into a bad state, which there is no recovery from.
>

What OS are you on? Could you get me the test case?

> I've also tried added a forceClose() method which calls close() on the
> PageFileFactory, this allows the tests to complete.  Ive noticed that
> the close() method spends a long time in sync() while closing all the
> buffers.  I am not sure if this is normal or not.

Yes, it typically is. Unless you're calling flush() in your app, the OS
will buffer almost all of your writes. On close() we flush to make sure
everything is on disk before we return.
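
If you want to see where the time goes, here's a little standalone
sketch (plain NIO, not HawtDB code) that writes through a mapped buffer
and then times force(), which is roughly the kind of work a close-time
sync has to do:

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class SyncCost {
    public static void main(String[] args) throws Exception {
        RandomAccessFile raf = new RandomAccessFile("sync-test.dat", "rw");
        raf.setLength(64 * 1024 * 1024);
        FileChannel channel = raf.getChannel();
        MappedByteBuffer buffer =
            channel.map(FileChannel.MapMode.READ_WRITE, 0, 64 * 1024 * 1024);

        // Writes to a mapped buffer only dirty pages in the OS cache...
        for (int i = 0; i < buffer.capacity(); i += 4096) {
            buffer.put(i, (byte) 1);
        }

        // ...the real cost shows up when the dirty pages are forced to disk.
        long start = System.nanoTime();
        buffer.force();
        System.out.println("force() took "
            + (System.nanoTime() - start) / 1000000 + " ms");

        channel.close();
        raf.close();
    }
}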

--
Regards,
Hiram

Blog: http://hiramchirino.com

Open Source SOA
http://fusesource.com/

Justin

Aug 2, 2010, 2:10:05 PM
to hawtdb
I am running on JDK 1.6.13 (Win XP).
I _think_ I have figured out git enough to have pushed my test case to
my personal fork:
g...@github.com:JustinSands/hawtdb.git

In order to see the close failure, just don't call forceClose().

Note that closing the channel after the file is mapped does not have
any effect on the buffer, which is why you must release the
MappedByteBuffer itself in order to get the test unstuck.
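
For what it's worth, the only way I know of to release a
MappedByteBuffer eagerly on Java 6 is the usual non-portable cleaner
hack, something like this sketch (it relies on Sun JDK internals, not
a public API, so it may break on other JVMs):

import java.nio.MappedByteBuffer;

// Sketch only: uses sun.misc.Cleaner / sun.nio.ch.DirectBuffer, which
// are Sun JDK internals.
public class Unmapper {
    public static void unmap(MappedByteBuffer buffer) {
        sun.misc.Cleaner cleaner = ((sun.nio.ch.DirectBuffer) buffer).cleaner();
        if (cleaner != null) {
            cleaner.clean(); // releases the mapping now instead of waiting for GC
        }
    }
}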