Not enough available space for writing 61 bytes


Chris

Apr 23, 2015, 6:09:18 AM
to java-ch...@googlegroups.com
Hi,

I was wondering if there is a way to calculate an object's size (possibly in advance) in order to avoid the exception below; the stack trace follows. I am storing around 10 different Externalizable classes in Chronicle, each containing various fields.

Caused by: java.lang.IllegalStateException: java.io.IOException: Not enough available space for writing 61 bytes
        at net.openhft.lang.io.AbstractBytes.writeObject(AbstractBytes.java:2091)
        at tradetest.model.OrderCreateResponse.writeExternal(OrderCreateResponse.java:98)
        at net.openhft.lang.io.serialization.impl.ExternalizableMarshaller.write(ExternalizableMarshaller.java:50)
        at net.openhft.lang.io.serialization.impl.ExternalizableMarshaller.write(ExternalizableMarshaller.java:33)
        at net.openhft.lang.io.serialization.BytesMarshallableSerializer.writeSerializable(BytesMarshallableSerializer.java:97)
        at net.openhft.lang.io.AbstractBytes.writeObject(AbstractBytes.java:2089)
        ... 11 more
Caused by: java.io.IOException: Not enough available space for writing 61 bytes
        at net.openhft.lang.io.view.BytesOutputStream.checkAvailable(BytesOutputStream.java:85)
        at net.openhft.lang.io.view.BytesOutputStream.write(BytesOutputStream.java:99)
        at java.util.zip.DeflaterOutputStream.deflate(DeflaterOutputStream.java:253)
        at java.util.zip.DeflaterOutputStream.finish(DeflaterOutputStream.java:226)
        at java.util.zip.DeflaterOutputStream.close(DeflaterOutputStream.java:238)
        at java.io.ObjectOutputStream$BlockDataOutputStream.close(ObjectOutputStream.java:1827)
        at java.io.ObjectOutputStream.close(ObjectOutputStream.java:741)
        at net.openhft.lang.io.serialization.JDKZObjectSerializer.writeSerializable(JDKZObjectSerializer.java:40)
        at net.openhft.lang.io.serialization.BytesMarshallableSerializer.writeSerializable(BytesMarshallableSerializer.java:102)
        at net.openhft.lang.io.AbstractBytes.writeObject(AbstractBytes.java:2089)
        ... 16 more
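If you do want to size an excerpt in advance, one option is to serialize into a throwaway buffer first and use the resulting byte count as a hint. Below is a minimal sketch using plain JDK serialization; `Payload` is a hypothetical stand-in for `OrderCreateResponse`, and note that Chronicle's `JDKZObjectSerializer` (visible in the stack trace) also deflates the stream, so the actual on-disk size may differ:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SizeProbe {
    // Hypothetical payload standing in for OrderCreateResponse.
    static class Payload implements Serializable {
        private static final long serialVersionUID = 1L;
        final long unixTimeSec = 1_429_790_958L;
        final String code = "OK";
    }

    /** Serializes obj into a throwaway buffer and returns the byte count. */
    static int serializedSize(Object obj) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(obj);
        }
        return buf.size();
    }

    public static void main(String[] args) throws IOException {
        int size = serializedSize(new Payload());
        System.out.println("serialized size in bytes: " + size);
        // The count could serve as a hint for startExcerpt(size + margin),
        // if sizing in advance turns out to be necessary.
    }
}
```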


The class named in the exception, OrderCreateResponse, has a java.util.List field containing other objects (TradeClosed) which are also Externalizable. I've changed the writeExternal (and readExternal) methods as follows:

        class OrderCreateResponse implements Externalizable {

            private List<TradeClosed> tradesClosed;

            // other fields

            @Override
            public void writeExternal(ObjectOutput out) throws IOException {
                out.writeObject(code);
                out.writeLong(unixTimeSec);

                // do not serialize the List itself, just its contents
                // out.writeObject(tradesClosed);

                out.writeInt(tradesClosed.size());
                for (TradeClosed tc : tradesClosed) {
                    out.writeObject(tc);
                }
            }
        }


However, I am not sure whether this actually saves space, and more generally, what is the correct approach to ensure this exception cannot occur?
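Whether writing the size plus the elements saves space over writeObject(List) can be measured with a self-contained round trip using plain JDK streams. `Trade` below is a hypothetical stand-in for `TradeClosed`; as a sketch, it also calls `writeExternal` directly per element, which skips the per-element object headers entirely and is a stronger optimization than `out.writeObject(tc)`:

```java
import java.io.*;
import java.util.*;

public class ExternalizableListDemo {
    // Hypothetical stand-in for TradeClosed.
    public static class Trade implements Externalizable {
        long id;
        double price;
        public Trade() {}  // public no-arg constructor required by Externalizable
        public Trade(long id, double price) { this.id = id; this.price = price; }
        @Override public void writeExternal(ObjectOutput out) throws IOException {
            out.writeLong(id);
            out.writeDouble(price);
        }
        @Override public void readExternal(ObjectInput in) throws IOException {
            id = in.readLong();
            price = in.readDouble();
        }
    }

    // Writes the list as size + elements, mirroring the writeExternal in the post.
    public static byte[] writeManually(List<Trade> trades) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeInt(trades.size());
            for (Trade t : trades) t.writeExternal(out); // no per-element class descriptor
        }
        return buf.toByteArray();
    }

    // The matching readExternal counterpart: read the count, then each element.
    public static List<Trade> readManually(byte[] bytes) throws IOException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            int n = in.readInt();
            List<Trade> trades = new ArrayList<>(n);
            for (int i = 0; i < n; i++) {
                Trade t = new Trade();
                t.readExternal(in);
                trades.add(t);
            }
            return trades;
        }
    }

    // Baseline: serialize the whole List, including wrapper and class descriptors.
    public static byte[] writeViaWriteObject(List<Trade> trades) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(trades);
        }
        return buf.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        List<Trade> trades = new ArrayList<>();
        for (int i = 0; i < 100; i++) trades.add(new Trade(i, i * 1.5));
        byte[] manual = writeManually(trades);
        byte[] auto = writeViaWriteObject(trades);
        System.out.println("manual: " + manual.length + " bytes, writeObject: " + auto.length + " bytes");
        System.out.println("round trip size: " + readManually(manual).size());
    }
}
```

For 100 elements the manual form is noticeably smaller, because writeObject(List) additionally emits the ArrayList wrapper, class descriptors, and per-element object headers.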

My writing code is this:

try {
    appender.startExcerpt(512);
    appender.writeUTFΔ(obj.getAccountId());
    appender.writeObject(obj);
    appender.finish();
} catch (Exception any) {
    logger.log(Level.WARNING, String.format("Exception in appender: %s", any.getMessage()), any);
}




Thanks,
Chris


Peter Lawrey

Apr 23, 2015, 7:01:04 AM
to java-ch...@googlegroups.com

The intent is that you provide the size if you know what it is likely to be; otherwise, I suggest not providing it.
Try instead

appender.startExcerpt();


Chris

Apr 23, 2015, 9:05:03 AM
to java-ch...@googlegroups.com
Hi Peter,

Thank you very much for clarifying that; I was under the impression the space had to be provided in advance.

In this regard, my Vanilla Chronicle is constructed as follows:


int cycleLength = (int) TimeUnit.DAYS.toMillis(7); // 604800000 ms == 1 week

chronicle = ChronicleQueueBuilder.vanilla(adapterDescriptor.getQueueBasePath())
        .cycleLength(cycleLength, true)
        .defaultMessageSize(1024)
        .dataBlockSize(1024L << 20) // 1024 MB
        .indexBlockSize(256L << 20) // 256 MB
        .build();
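As a sanity check on the arithmetic in the builder comments above (strictly speaking, `1024L << 20` is 1024 MiB, i.e. 1 GiB):

```java
import java.util.concurrent.TimeUnit;

public class ChronicleSizing {
    public static void main(String[] args) {
        int cycleLength = (int) TimeUnit.DAYS.toMillis(7);
        long dataBlockSize = 1024L << 20;  // 1024 MiB = 1 GiB
        long indexBlockSize = 256L << 20;  // 256 MiB
        System.out.println(cycleLength);   // 604800000 (fits in an int)
        System.out.println(dataBlockSize); // 1073741824
        System.out.println(indexBlockSize); // 268435456
    }
}
```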

Knowing your answer now, I've removed the 1024-byte setting. In Chronicle's source the default capacity is 128 KB, which I guess should be fine for most cases.

Also, in my case the messages in the Chronicle will be used for administrative queries and reporting operations. I am thinking of using replication to another node for this purpose (and also to reduce load on the main node). I have only ~3 GB of free RAM left on the main node. Would you recommend using Chronicle replication and querying messages on a separate node in such a scenario?


Thanks,
Chris

Peter Lawrey

Apr 23, 2015, 9:20:40 AM
to java-ch...@googlegroups.com
The amount of free memory shouldn't matter too much; 1 MB should be enough in most cases. What matters is whether your write rate stays well below what your disk subsystem can handle. If you are getting close to the limits of your disk's write speed (or its capacity), then you have to reconsider a few things.

Because the Chronicle queue is memory-mapped, you can map in any amount of memory and the OS will keep in memory only the 4 KB pages you used recently, i.e. the resident set could be orders of magnitude smaller than the chunk size. Note: the more memory you have, the better the performance and the less jitter you get, but it should all still work. In theory you don't even need the whole message to fit into main memory at once, though we haven't tested this.

Peter Lawrey

Apr 23, 2015, 9:22:52 AM
to java-ch...@googlegroups.com
To clarify: it keeps in memory only the 4 KB pages you use on Linux and Mac OS X. Windows uses the memory more eagerly, in which case you might reconsider making the chunk sizes so large. On Linux this shouldn't cause much of a problem.

Chris

Apr 23, 2015, 10:01:49 AM
to java-ch...@googlegroups.com
I take your point. In my case I don't expect the write rate to be high enough to cause heavy disk writes. I plan to run test clients simulating several hundred concurrent sessions and the corresponding Chronicle write operations ("atop" will then show the disk-write percentage). My system runs on Debian servers without a swap partition. (I only ran the Chronicle replication demo app on my Windows machine, and all 8 GB was consumed at some point and released a bit later.)

Thank you once again for your help!
Chris

Peter Lawrey

Apr 23, 2015, 11:16:54 AM
to java-ch...@googlegroups.com
While you don't want to be swapping much, I am of the opinion that a small swap partition is a good pressure valve.
I am often amazed that some programs get portions swapped out and they seem to just stay there; without swap space, that memory would have been wasted.