memory usage


Justin

Jul 29, 2010, 3:46:43 PM
to hawtdb
I am running into some OutOfMemory problems during a basic test. It
seems like HawtDB is hanging onto some references, but I can't quite
pin it down.

The test creates a TxPageFile with one Index<Long, Integer> (which
maps object id to page id) and stores the object data in data pages:

for (int i = 0; i < n; i++) {
    HawtStore.store(i, new DataObject(i));
}

Memory consumption should be constant and very low, yet after about
3000 objects the test runs out of memory.

Here are the relevant settings:
Page Size = 10430
PageFactory.isSync = false
indexFactory = HashIndexFactory
indexFactory.isDeferredEncoding = true
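
For context, a minimal sketch of this setup (assuming the same
TxPageFileFactory/HashIndexFactory API used in the full test posted later
in this thread; the file name here is illustrative):

    TxPageFileFactory factory = new TxPageFileFactory();
    factory.setFile(new File("test-memory"));   // illustrative file name
    factory.setPageSize((short) 10430);         // page size from the settings above
    factory.setSync(false);                     // PageFactory.isSync = false
    factory.open();
    TxPageFile pageFile = factory.getTxPageFile();

    HashIndexFactory<Long, Integer> indexFactory = new HashIndexFactory<Long, Integer>();
    indexFactory.setDeferredEncoding(true);     // isDeferredEncoding = true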

I'm using Java VisualVM to count object allocations while the test
runs; this is what I see:

type                                                   live bytes         live objects    generations
-----------------------------------------------------------------------------------------------------
java.lang.Object[]                                     64,752 B (19.6%)   1,291 (12.4%)   195
char[]                                                 46,520 B (13.7%)   1,270 (12%)     199
java.lang.String                                       28,680 B (8.3%)    1,195 (11%)     205
java.util.ArrayList                                    24,552 B (6.8%)    1,023 (9.3%)    187
java.util.concurrent.ConcurrentHashMap$HashEntry       24,384 B (8.1%)    1,016 (10.5%)   187
org.fusesource.hawtdb.internal.page.Update             23,232 B (7.5%)    968 (9.8%)      182
java.lang.Integer                                      15,248 B (4.7%)    953 (9.5%)      185
java.util.concurrent.ConcurrentHashMap$HashEntry[]     14,184 B (4.2%)    429 (4.1%)      180
java.util.concurrent.ConcurrentHashMap$Segment         13,696 B (4%)      428 (4.1%)      182
java.util.concurrent.locks.ReentrantLock$NonfairSync   10,104 B (2.9%)    421 (3.9%)      182
java.util.TreeMap$Entry                                10,208 B (2.8%)    319 (2.9%)      6

Looking into the largest object counts, I see some unnecessary
strings being stored as notes. Here are some stack-trace elements
showing where the high object counts are coming from:

char[]
---------------------------------
java.util.Arrays.copyOfRange(char[], int, int)
java.lang.String.<init>(char[], int, int)
java.lang.StringBuilder.toString()
org.fusesource.hawtdb.internal.page.HawtTransaction$1.alloc(int) (713 instances)
org.fusesource.hawtdb.internal.page.HawtTransaction.put(org.fusesource.hawtdb.api.PagedAccessor, int, Object) (289 instances)
org.fusesource.hawtdb.internal.page.HawtTransaction$1.free(int, int) (110 instances)

java.util.ArrayList
---------------------------------
org.fusesource.hawtdb.internal.page.Update.<init>()
org.fusesource.hawtdb.internal.page.Update.update() (964 instances)

java.util.concurrent.ConcurrentHashMap$HashEntry
---------------------------------
java.util.concurrent.ConcurrentHashMap$Segment.put(Object, int, Object, boolean) (841 instances)
java.util.concurrent.ConcurrentHashMap.put(Object, Object)
org.fusesource.hawtdb.internal.page.Commit.merge(org.fusesource.hawtdb.api.Allocator, int, org.fusesource.hawtdb.internal.page.Update) (557 instances)

java.util.concurrent.locks.ReentrantLock$NonfairSync (421 instances)
---------------------------------
java.util.concurrent.locks.ReentrantLock.<init>()
java.util.concurrent.ConcurrentHashMap$Segment.<init>(int, float)
java.util.concurrent.ConcurrentHashMap.<init>(int, float, int)
java.util.concurrent.ConcurrentHashMap.<init>()
org.fusesource.hawtdb.internal.page.HawtTransaction.getUpdates() (418 instances)

Justin

Jul 29, 2010, 4:06:41 PM
to hawtdb
I should probably include the stack trace. Since something is trapping
the OOM exception, I can't get an hprof dump.

Exception in thread "main" org.fusesource.hawtdb.api.IOPagingException: java.io.IOException: Map failed
    at org.fusesource.hawtdb.internal.io.MemoryMappedFile.loadBuffer(MemoryMappedFile.java:222)
    at org.fusesource.hawtdb.internal.io.MemoryMappedFile.slice(MemoryMappedFile.java:107)
    at org.fusesource.hawtdb.internal.page.HawtPageFile.slice(HawtPageFile.java:85)
    at org.fusesource.hawtdb.internal.page.HawtTransaction.slice(HawtTransaction.java:272)
    at org.fusesource.hawtdb.internal.page.Extent.writeOpen(Extent.java:105)
    at org.fusesource.hawtdb.internal.page.ExtentOutputStream.<init>(ExtentOutputStream.java:54)
    at org.fusesource.hawtdb.api.AbstractStreamPagedAccessor.store(AbstractStreamPagedAccessor.java:42)
    at org.fusesource.hawtdb.internal.index.BTreeIndex.storeNode(BTreeIndex.java:199)
    at org.fusesource.hawtdb.internal.index.BTreeIndex.create(BTreeIndex.java:77)
    at org.fusesource.hawtdb.api.BTreeIndexFactory.create(BTreeIndexFactory.java:63)
    at org.fusesource.hawtdb.internal.index.HashIndex$Buckets.create(HashIndex.java:275)
    at org.fusesource.hawtdb.internal.index.HashIndex.changeCapacity(HashIndex.java:198)
    at org.fusesource.hawtdb.internal.index.HashIndex.putIfAbsent(HashIndex.java:131)
    at justin.HawtStore.write(HawtStore.java:207)
    at justin.TestKaha.writePerson(TestKaha.java:60)
    at justin.TestKaha.createInitial(TestKaha.java:76)
    at justin.TestKaha.testPerformance(TestKaha.java:101)
    at justin.TestKaha.main(TestKaha.java:141)
Caused by: java.io.IOException: Map failed
    at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:761)
    at org.fusesource.hawtdb.internal.io.MemoryMappedFile.loadBuffer(MemoryMappedFile.java:218)
    ... 17 more
Caused by: java.lang.OutOfMemoryError: Map failed
    at sun.nio.ch.FileChannelImpl.map0(Native Method)
    at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:758)
    ... 18 more

Hiram Chirino

Jul 29, 2010, 4:08:05 PM
to haw...@googlegroups.com
Does it happen if isDeferredEncoding=false?

--
Regards,
Hiram

Blog: http://hiramchirino.com

Open Source SOA
http://fusesource.com/

Justin

Jul 29, 2010, 4:20:31 PM
to hawtdb
Hmmm... no, I get a totally different exception. I shortened the
stack so it's hopefully readable in Google Groups.

Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: -2617
    at java.util.ArrayList.get(ArrayList.java:324)
    at hawt.io.MemoryMappedFile.loadBuffer(MemoryMappedFile.java:214)
    at hawt.io.MemoryMappedFile.read(MemoryMappedFile.java:73)
    at hawt.io.MemoryMappedFile.read(MemoryMappedFile.java:67)
    at hawt.page.HawtPageFile.read(HawtPageFile.java:69)
    at hawt.page.HawtTransaction.read(HawtTransaction.java:228)
    at hawt.index.BTreeIndex.loadNode(BTreeIndex.java:235)
    at hawt.index.BTreeIndex.root(BTreeIndex.java:158)
    at hawt.index.BTreeIndex.get(BTreeIndex.java:85)
    at hawt.index.HashIndex.get(HashIndex.java:80)
    at justin.HawtStore.write(HawtStore.java:180)
    at justin.TestKaha.writePerson(TestKaha.java:60)
    at justin.TestKaha.createInitial(TestKaha.java:76)
    at justin.TestKaha.testPerformance(TestKaha.java:101)
    at justin.TestKaha.main(TestKaha.java:141)

I had put some tracing into the exception handler to show the position
and buffer size:

Exception in thread "main" org.fusesource.hawtdb.api.IOPagingException: position: 134217728 bufferSize: 67108864

Justin

Jul 29, 2010, 4:33:07 PM
to hawtdb
OK, the test runs to completion (still going) when using
BTreeIndexFactory instead of HashIndexFactory.
It seems to be a pretty good test, as it fails after 3 insertions; if
you like, I can send it to you off-list.
I also verified that I'm not overwriting the byte-buffer bounds in my
custom allocator.

Hiram Chirino

Jul 29, 2010, 4:40:37 PM
to haw...@googlegroups.com
FYI, I have not been using hash indexes much, so they have not received
as much testing as the btree bits.


Justin

Jul 29, 2010, 4:54:30 PM
to hawtdb
Actually, the BTree failed eventually too, after a long time (5291
writes).
I noticed I was not writing my custom MAGIC on each page; however, I
don't see how that would matter, since both types of indexes only use
the magic as a sanity check -- not as a guide to which pages to read.

Exception in thread "main" org.fusesource.hawtdb.api.IOPagingException: position: 67108864 bufferSize: 67108864
    at hawt.io.MemoryMappedFile.loadBuffer(MemoryMappedFile.java:223)
    at hawt.io.MemoryMappedFile.slice(MemoryMappedFile.java:107)
    at hawt.page.HawtPageFile.slice(HawtPageFile.java:85)
    at hawt.page.HawtTransaction.slice(HawtTransaction.java:272)
    at hawt.page.Extent.writeOpen(Extent.java:105)
    at hawt.page.ExtentOutputStream.write(ExtentOutputStream.java:82)
    at java.io.DataOutputStream.write(DataOutputStream.java:90)
    at java.io.FilterOutputStream.write(FilterOutputStream.java:80)
    at org.fusesource.hawtbuf.codec.ObjectCodec.encode(ObjectCodec.java:40)
    at hawt.index.BTreeNode.write(BTreeNode.java:170)
    at hawt.index.BTreeNode$DataPagedAccessor.encode(BTreeNode.java:223)
    at hawt.index.BTreeNode$DataPagedAccessor.encode(BTreeNode.java:1)
    at hawt.api.AbstractStreamPagedAccessor.store(AbstractStreamPagedAccessor.java:45)
    at hawt.index.BTreeIndex.storeNode(BTreeIndex.java:199)
    at hawt.index.BTreeNode.putIfAbsent(BTreeNode.java:454)
    at hawt.index.BTreeNode.putIfAbsent(BTreeNode.java:444)
    at hawt.index.BTreeIndex.putIfAbsent(BTreeIndex.java:93)
    at justin.HawtStore.write(HawtStore.java:208)
    at justin.TestKaha.writePerson(TestKaha.java:60)
    at justin.TestKaha.createInitial(TestKaha.java:79)
    at justin.TestKaha.testPerformance(TestKaha.java:109)
    at justin.TestKaha.main(TestKaha.java:149)
Caused by: java.io.IOException: Map failed
    at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:761)
    at hawt.io.MemoryMappedFile.loadBuffer(MemoryMappedFile.java:219)
    ... 21 more
Caused by: java.lang.OutOfMemoryError: Map failed
    at sun.nio.ch.FileChannelImpl.map0(Native Method)
    at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:758)
    ... 22 more

Justin

Jul 29, 2010, 5:11:26 PM
to hawtdb
OK, so I explicitly set the key and value codecs for my index, and it
no longer fails:
m_indexFactory.setKeyCodec(LongCodec.INSTANCE);
m_indexFactory.setValueCodec(IntegerCodec.INSTANCE);

Is this expected behavior, or have I exposed a bug?
I had read that it would fall back to Java serialization for keys and
values, so it seems like it should not have mattered.

Justin

Jul 29, 2010, 5:26:31 PM
to hawtdb
HashIndex still fails when using the appropriate codecs.

Hiram Chirino

Jul 29, 2010, 5:40:44 PM
to haw...@googlegroups.com
Could be a bug.. mind posting your test case? The best way to submit it
would be to fork the project at GitHub and commit your test case to
your copy. That way I can test it out on your branch.

--

Justin

Jul 30, 2010, 10:39:25 AM
to hawtdb
I am new to git, so I will post it here as well. It fails after 3
insertions:
---------------------
package org.fusesource.hawtdb.api;

import java.io.File;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.Collections;
import java.util.List;

import org.fusesource.hawtbuf.Buffer;
import org.fusesource.hawtbuf.codec.IntegerCodec;
import org.fusesource.hawtbuf.codec.LongCodec;
import org.fusesource.hawtdb.api.Paged.SliceType;
import org.junit.Test;

public class MixedUseTest {
    public static final Buffer MAGIC = new Buffer(new byte[] {'c'});

    private TxPageFile m_pageFile;
    private IndexFactory<Long, Integer> m_indexFactory;
    private short m_pageSize;
    private short m_dataSize;

    private IndexDataAccessor m_dataAccessor = new IndexDataAccessor();

    private static IndexFactory<Long, Integer> createIndexFactory() {
        HashIndexFactory<Long, Integer> indexFactory = new HashIndexFactory<Long, Integer>();
        indexFactory.setDeferredEncoding(false);
        indexFactory.setKeyCodec(LongCodec.INSTANCE);
        indexFactory.setValueCodec(IntegerCodec.INSTANCE);
        return indexFactory;
    }

    private static TxPageFile createFileFactory(File path, short pageSize) throws IOException {
        TxPageFileFactory factory = new TxPageFileFactory();
        factory.setFile(path);
        factory.setPageSize(pageSize);
        factory.setSync(false);
        factory.open();

        return factory.getTxPageFile();
    }

    public void setUp(short pageSize) throws IOException {
        File path = new File("test-mixed");
        path.delete();

        m_pageSize = pageSize;
        m_dataSize = (short) (pageSize - (MAGIC.length + Long.SIZE / 8));
        m_pageFile = createFileFactory(path, pageSize);
        m_indexFactory = createIndexFactory();

        // create the index
        Transaction tx = m_pageFile.tx();
        try {
            Index<Long, Integer> index = m_indexFactory.create(tx);

            // seed the index with some metadata about the test
            index.put(-2L, (int) m_pageSize);
            index.put(-3L, (int) m_dataSize);
            index.put(-4L, MAGIC.length);
        }
        finally {
            tx.commit();
        }

        m_pageFile.flush();
    }

    private class CustomObject {
        private long id;
        private byte[] data;

        public CustomObject(long id) {
            this.id = id;
            this.data = new byte[m_dataSize];
        }
    }

    private class IndexDataAccessor implements PagedAccessor<CustomObject> {
        public IndexDataAccessor() {
        }

        @Override
        public CustomObject load(Paged p, int page) {
            ByteBuffer bb = p.slice(SliceType.READ, page, 1);
            try {
                // check the magic
                Buffer magicBuffer = new Buffer(MAGIC.length);
                bb.get(magicBuffer.data);
                if (!MAGIC.equals(magicBuffer))
                    throw new IllegalArgumentException("unknown page type");

                CustomObject co = new CustomObject(bb.getLong());
                bb.get(co.data);
                return co;
            }
            finally {
                p.unslice(bb);
            }
        }

        @Override
        public List<Integer> store(Paged p, int page, CustomObject value) {
            ByteBuffer bb = p.slice(SliceType.WRITE, page, 1);
            try {
                bb.put(MAGIC.data);
                bb.putLong(value.id);
                bb.put(value.data);
            }
            finally {
                p.unslice(bb);
            }
            return Collections.EMPTY_LIST;
        }

        @Override
        public List<Integer> pagesLinked(Paged paged, int page) {
            return Collections.EMPTY_LIST;
        }
    }

    private void store(CustomObject c) {
        Transaction tx = m_pageFile.tx();
        try {
            Index<Long, Integer> idx = m_indexFactory.open(tx);
            Integer pageNumber = idx.get(c.id);

            CustomObject obj;

            // see if the index contains this object
            if (pageNumber == null) {
                pageNumber = tx.alloc();
                obj = c;
            }
            else {
                // read the previous object
                obj = tx.get(m_dataAccessor, pageNumber);
            }

            // data update
            obj.data = c.data;

            // add this object to the index
            idx.putIfAbsent(c.id, pageNumber);

            // write the data to the page
            m_dataAccessor.store(tx, pageNumber, obj);

            tx.commit();
        }
        finally {
            tx.close();
        }
    }

    @Test
    public void testSingleIndex() throws IOException {
        setUp((short) (1024 * 29));

        System.out.println("building db");
        for (int i = 0; i < 10000; i++) {
            System.out.println("creating: " + i);
            CustomObject c = new CustomObject(i);
            store(c);

            if (i % 10 == 0)
                m_pageFile.flush();
        }
    }
}

Hiram Chirino

Jul 30, 2010, 11:47:03 AM
to haw...@googlegroups.com
Thanks,

It's easier to see what's going on that way. OK, #1: it looks like
there's a bug in hash-based indexes; I will track that down soon.
But I noticed some other usage issues. Firstly, the paged get/put
methods are there to do deferred updates and cached object reads.
They rely on the keys and values being immutable. It's actually
simpler if you don't use them at all. For example, the following
should work for you: http://gist.github.com/500742
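
The linked gist is not reproduced here; as a rough, hypothetical sketch of
what "not using the transaction's get/put" could look like, built only from
calls that already appear in the test above:

    private void store(CustomObject c) {
        Transaction tx = m_pageFile.tx();
        try {
            Index<Long, Integer> idx = m_indexFactory.open(tx);
            Integer pageNumber = idx.get(c.id);
            if (pageNumber == null) {
                // first sighting: allocate a page and record it in the index
                pageNumber = tx.alloc();
                idx.put(c.id, pageNumber);
            }
            // write through the accessor directly instead of tx.put(), so the
            // transaction's update cache never holds the mutable object
            m_dataAccessor.store(tx, pageNumber, c);
            tx.commit();
        }
        finally {
            tx.close();
        }
    }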


Justin

Jul 30, 2010, 2:09:31 PM
to hawtdb
Thanks, that example is basically all I need to get working.
However, there is more going on here than just mutation ;)

The original test did not fail because the CustomObject is mutable;
note that when the test runs, it never mutates the objects (it only
creates them) -- pageNumber is always null (I have verified this via
assertions).

It turns out that I must set setDeferredEncoding(true) on the index
factory if I am to use my PagedAccessor; using the PagedAccessor
with setDeferredEncoding(false) causes the test to fail.
But why is this? The pages the index operates on and the set of
pages my custom PagedAccessor operates on _should_ be orthogonal.

This suggests something deeper is going on, within the page allocator.

I still don't understand the allocator very well -- for example, why
am I able to allocate and write to pages without writing some basic
page header information? I notice that most of the built-in
implementations of PagedAccessor use ExtentOutputStream when creating
pages, but in this test we just write our own format -- could this be
confusing the allocator?

I also notice that performance keeps getting considerably slower the
more objects get created, unless I have setDeferredEncoding(true).
It starts out fine at around 10 creates/sec, but after 9000 objects it
crawls along at 1/sec.

- Justin

Justin

Jul 30, 2010, 3:41:28 PM
to hawtdb
The modified test (http://gist.github.com/500742) fails if you make
n = 1000000.
I'm still trying to figure out git so that I can update the test.

Justin

Aug 2, 2010, 1:38:24 PM
to hawtdb
I believe I have traced this problem to MemoryMappedFile.loadBuffer().

The design of MemoryMappedFile seems to be: permanently allocate a
MappedByteBuffer for each 64MB region of the file (ever touched) and
never release them. (The javadoc comment seems to confirm this:
"Multiple direct buffers are used to deal with OS and Java
restrictions.")
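
A minimal sketch of that pattern as I understand it (illustrative only,
not HawtDB's actual code):

    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.util.ArrayList;
    import java.util.List;

    class SegmentedMapping {
        static final int SEGMENT_SIZE = 64 * 1024 * 1024; // one buffer per 64MB region

        private final List<MappedByteBuffer> buffers = new ArrayList<MappedByteBuffer>();

        MappedByteBuffer segmentFor(FileChannel channel, long position) throws IOException {
            int index = (int) (position / SEGMENT_SIZE);
            while (buffers.size() <= index) {
                // each map() call reserves a contiguous 64MB chunk of the process
                // address space, and nothing here ever unmaps it
                buffers.add(channel.map(FileChannel.MapMode.READ_WRITE,
                        (long) buffers.size() * SEGMENT_SIZE, SEGMENT_SIZE));
            }
            return buffers.get(index);
        }
    }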

My test fails after allocating 21-23 of these buffers. The count
varies a bit if I change the default JVM memory setting to -Xmx156M
(I get 21 or 22 instead of 23) or if I have other things running, but
it does not seem to be true that giving the JVM more or less heap
affects this in any reliable way. Do NIO buffers come off the Java
heap?

Since the Javadoc on MappedByteBuffer is nonspecific due to system
dependencies, it's tough to say exactly how many I should expect to
be able to allocate (depending on my OS).

I suspect that each MappedByteBuffer requires a bit of dedicated
non-paged system memory, and that is what I am running short on.

Does anyone know where I can get more info on OS limitations of
FileChannel.map()? Sure, it is implementation dependent, but what
about the default implementation; should that not state its
limitations?

Is there any way to avoid using mmap entirely and still use HawtDB?
I don't really need fast access (read or write), but I do need atomic
file access.

Hiram Chirino

Aug 2, 2010, 1:50:38 PM
to haw...@googlegroups.com
What OS are you on, and which JDK?


Hiram Chirino

Aug 2, 2010, 2:03:23 PM
to haw...@googlegroups.com
BTW, try using the default page size of 512.


Justin

Aug 2, 2010, 2:18:04 PM
to hawtdb
If I use the default page size I can allocate more _pages_, since my
file grows more slowly. But if I make the number of pages to allocate
larger (to compensate for the small page size), I run out of Java
heap space around object 161500/9970000.


Hiram Chirino

Aug 2, 2010, 3:09:33 PM
to haw...@googlegroups.com
What does the stack trace look like when you get that OOME?

Justin

Aug 2, 2010, 3:41:13 PM
to hawtdb
I'm only considering the first test (which uses an in-memory map for
the index), since it reduces the number of moving parts.
The test's internal map requires more memory if there are more
objects. The number of MappedByteBuffer objects, however, should be
pretty constant, since I _think_ they depend on the file size (and
the max buffer size, which seems pretty steady at 67108864 bytes).

The particular place it hits the OOM is JVM-heap-size dependent, and
not very informative. It is a regular heap OOM (not caused by map),
and it does not always happen in the same place; it occurs somewhere
between creating object 957000 and 1116500.

creating: 0 159500 319000 478500 638000 797500 957000
java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOf(Arrays.java:2882)
    at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
    at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:390)
    at java.lang.StringBuilder.append(StringBuilder.java:119)
    at java.io.ObjectStreamClass.getClassSignature(ObjectStreamClass.java:1457)
    at java.io.ObjectStreamClass.getMethodSignature(ObjectStreamClass.java:1471)
    at java.io.ObjectStreamClass.access$2400(ObjectStreamClass.java:52)
    at java.io.ObjectStreamClass$MemberSignature.<init>(ObjectStreamClass.java:1798)
    at java.io.ObjectStreamClass.computeDefaultSUID(ObjectStreamClass.java:1705)
    at java.io.ObjectStreamClass.access$100(ObjectStreamClass.java:52)
    at java.io.ObjectStreamClass$1.run(ObjectStreamClass.java:205)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.io.ObjectStreamClass.getSerialVersionUID(ObjectStreamClass.java:202)
    at java.io.ObjectStreamClass.writeNonProxy(ObjectStreamClass.java:667)
    at java.io.ObjectOutputStream.writeClassDescriptor(ObjectOutputStream.java:640)
    at java.io.ObjectOutputStream.writeNonProxyDesc(ObjectOutputStream.java:1245)
    at java.io.ObjectOutputStream.writeClassDesc(ObjectOutputStream.java:1203)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1387)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:326)
    at java.util.ArrayList.writeObject(ArrayList.java:570)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:945)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1461)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1392)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:326)
    at org.fusesource.hawtdb.internal.page.Batch.writeExternal(Batch.java:93)
    at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1421)

So if I bump up the JVM memory to 1000M, I instead get the NIO OOM
when allocating buffer #6; if I set the JVM heap to 500M, I get as
far as buffer #17.
So perhaps NIO buffers come out of the JVM heap after all.

org.fusesource.hawtdb.api.IOPagingException: position: 603979776@9 bufferSize: 67108864
    at org.fusesource.hawtdb.internal.io.MemoryMappedFile.loadBuffer(MemoryMappedFile.java:224)
    at org.fusesource.hawtdb.internal.io.MemoryMappedFile.slice(MemoryMappedFile.java:107)
    at org.fusesource.hawtdb.internal.page.HawtPageFile.slice(HawtPageFile.java:85)
    at org.fusesource.hawtdb.internal.page.HawtTransaction.slice(HawtTransaction.java:272)
    at org.fusesource.hawtdb.api.MixedUseTest.store(MixedUseTest.java:151)
    at org.fusesource.hawtdb.api.MixedUseTest.store(MixedUseTest.java:172)
    at org.fusesource.hawtdb.api.MixedUseTest.createObjects(MixedUseTest.java:212)
    at org.fusesource.hawtdb.api.MixedUseTest.createObjectsTest(MixedUseTest.java:228)
    at org.fusesource.hawtdb.api.MixedUseTest.testNoIndex(MixedUseTest.java:238)
    ...
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196)
Caused by: java.io.IOException: Map failed
    at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:761)
    at org.fusesource.hawtdb.internal.io.MemoryMappedFile.loadBuffer(MemoryMappedFile.java:219)
    ... 28 more
Caused by: java.lang.OutOfMemoryError: Map failed
    at sun.nio.ch.FileChannelImpl.map0(Native Method)
    at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:758)
    ... 29 more


Justin

Aug 2, 2010, 4:15:57 PM
to hawtdb
Is there anything I can do about posting stack traces to Google
Groups? It looks horrible; is there some setting that controls the
max line length?

Hiram Chirino

Aug 2, 2010, 4:17:37 PM
to haw...@googlegroups.com
No, sorry. Best to just paste it into an external pasting service like Gist:
http://gist.github.com/


Hiram Chirino

Aug 2, 2010, 4:25:51 PM
to haw...@googlegroups.com
So the first test fails for me with:

Caused by: java.io.IOException: Map failed

at the 20th buffer; the file size is about 1.5 GB.

This could be a JVM issue, BTW. I'm on 1.6.0 update 21, 32-bit, Windows 7.
I'm about to test on a 64-bit system to see if that makes a difference.


Hiram Chirino

Aug 2, 2010, 7:45:17 PM
to haw...@googlegroups.com
So on a Windows 7 system using a 64-bit JVM, it goes past 2.5 GB
easily without a problem.
Are you on a 32-bit or 64-bit JVM?

Justin

Aug 3, 2010, 10:11:31 AM
to hawtdb
I'm on a 32-bit JVM. How did you decide on 64M as the segment size?
I can allocate bigger segments but can get fewer of them, e.g.
5 @ 128M = 640M vs. 24 @ 64M = 1536M.
It's not proportional; the relationship is not clear.

Hiram Chirino

Aug 3, 2010, 10:28:20 AM
to haw...@googlegroups.com
The only reason is that the index file will grow by that segment-size
amount. So I did not want something too big, but not something too
small either (to avoid creating too many segments).


Hiram Chirino

Aug 3, 2010, 10:31:16 AM
to haw...@googlegroups.com
BTW, I think we may be running up against maximum address-space
limits in 32-bit processes. With memory fragmentation and whatnot, it
could just be that at some point the OS can't find a contiguous chunk
of process address space big enough to map in the requested segment.
This could be why you can create more of the smaller segments than of
the bigger ones.
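
A rough back-of-the-envelope check with the numbers from this thread
(assuming a 32-bit process has at most 4 GB of virtual address space,
with often only about 2 GB usable on 32-bit Windows): 24 segments at
64 MB come to 24 * 64 MB = 1536 MB of mappings, while 5 segments at
128 MB reach only 640 MB before fragmentation gets in the way. Add a
500M-1000M Java heap plus the JVM's own overhead, and the address
space is essentially exhausted; that would also explain why a 1000M
heap fails at buffer #6 while a 500M heap reaches #17.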


Justin

Aug 3, 2010, 11:13:39 AM
to hawtdb
That totally explains it: we are running out of contiguous address
space in the JVM.

I'm beginning to think that mmap really is only worth doing in a
64-bit environment. In a 32-bit JVM you might want to map the
beginning, the end, and maybe some chunks of an index; other than
that, you are better off using seek + write.
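
For reference, a minimal sketch of the seek + write alternative
(hypothetical; plain java.io, not HawtDB's API):

    import java.io.IOException;
    import java.io.RandomAccessFile;

    class SeekWritePageFile {
        private final RandomAccessFile file;
        private final int pageSize;

        SeekWritePageFile(String path, int pageSize) throws IOException {
            this.file = new RandomAccessFile(path, "rw");
            this.pageSize = pageSize;
        }

        // writes a page without mapping any address space; only one
        // pageSize-byte array lives on the heap at a time
        void writePage(int pageId, byte[] page) throws IOException {
            file.seek((long) pageId * pageSize);
            file.write(page, 0, pageSize);
        }

        byte[] readPage(int pageId) throws IOException {
            byte[] page = new byte[pageSize];
            file.seek((long) pageId * pageSize);
            file.readFully(page);
            return page;
        }
    }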

Hiram Chirino

Aug 3, 2010, 11:16:09 AM
to haw...@googlegroups.com
Agreed... a 32-bit environment may need to avoid mmap altogether.

