Tips on debugging -- IllegalStateException: not enough space to write

Andrew Oswald

Sep 16, 2016, 12:19:13 PM
to Chronicle
Greetings Chronicle folks,

Hoping you might be able to provide some insight on how I might figure out what's going on here:

java.lang.IllegalStateException: not enough space to write 1073741823 was 1073741766
at net.openhft.chronicle.wire.AbstractWire.throwNotEnoughSpace(AbstractWire.java:67)
at net.openhft.chronicle.wire.AbstractWire.writeHeader(AbstractWire.java:210)
at net.openhft.chronicle.queue.impl.single.StoreRecovery.writeHeader(StoreRecovery.java:24)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueStore.writeHeader(SingleChronicleQueueStore.java:305)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreAppender.writingDocument(SingleChronicleQueueExcerpts.java:167)
at net.openhft.chronicle.wire.MethodWriterInvocationHandler.invoke(MethodWriterInvocationHandler.java:51)
at com.sun.proxy.$Proxy27.onPut(Unknown Source)

<dependency>
  <groupId>net.openhft</groupId>
  <artifactId>chronicle-queue</artifactId>
  <version>4.4.2</version>
</dependency>

The AbstractMarshallable instances being sent through the proxy are per Thread, à la ThreadLocal.

The machines running this code have plenty of memory and oodles of disk space.  The inbound data is logically partitioned and each partition has its own SingleChronicleQueue, which writes to one of 6 disks on the machine.  As partitions are initialized, the disk each partition's queue writes to is chosen via round-robin.  More specifically:

queue = SingleChronicleQueueBuilder.binary("/data/" + (diskNumber.getAndIncrement() % 6 + 1) + "/some/event/dir").build();

(where diskNumber is an AtomicInteger)
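
To make that concrete, the write path described above amounts to roughly the following (a sketch only, not the actual classes: PutEvent, Partition and publish are made-up names, while onPut, MutationEventHandler and the builder/appender calls come from the stack trace and the later messages in this thread):

// Sketch of the write path described above. PutEvent, Partition and publish(...) are
// hypothetical; onPut / MutationEventHandler come from the stack trace and later messages.
public class PutEvent extends AbstractMarshallable {      // net.openhft.chronicle.wire.AbstractMarshallable
    String key;
    byte[] value;
}

public interface MutationEventHandler {
    void onPut(PutEvent event);
}

public class Partition {
    // one reusable DTO instance per thread, as described above
    private static final ThreadLocal<PutEvent> EVENT = ThreadLocal.withInitial(PutEvent::new);

    private final ChronicleQueue queue;                   // built round-robin across the 6 disks, as above
    private final MutationEventHandler mutationEventHandler;

    Partition(int diskNumber) {
        queue = SingleChronicleQueueBuilder
                .binary("/data/" + (diskNumber % 6 + 1) + "/some/event/dir").build();
        mutationEventHandler = queue.createAppender().methodWriter(MutationEventHandler.class);
    }

    void publish(String key, byte[] value) {
        PutEvent e = EVENT.get();                         // per-thread, reused instance
        e.key = key;
        e.value = value;
        mutationEventHandler.onPut(e);                    // the proxy serialises the event into the queue
    }
}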

Back to the question at hand, I've only seen this happen when the load gets high.  It's after 10K events get persisted, so I assume the code has warmed up.

Any suggestions on what I can look into to solve this?

thanks in advance!
-andy

Rob Austin

Sep 16, 2016, 12:23:31 PM
to java-ch...@googlegroups.com
I'll take a look and get back to you. 

Sent from my iPhone

Andrew Oswald

Sep 19, 2016, 8:18:19 AM
to Chronicle
Thanks, Rob!

Rob Austin

Sep 19, 2016, 10:07:12 AM
to java-ch...@googlegroups.com
Andrew

In this code, net.openhft.chronicle.wire.AbstractWire#writeHeader:

It looks to me like the reason for the error is that length == UNKNOWN_LENGTH;
this then sets maxlen = MAX_LENGTH, which causes the subsequent check to fail:

if (maxlen > bytes.writeRemaining())
return throwNotEnoughSpace(maxlen, bytes);

I believe that this check (above) should ONLY be carried out if length != UNKNOWN_LENGTH, so I've updated the code accordingly:
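
That is, something along the lines of the following (a sketch of the described condition, not necessarily the exact committed change):

// Only enforce the remaining-space check when the caller supplied a real length.
if (length != UNKNOWN_LENGTH && maxlen > bytes.writeRemaining())
    return throwNotEnoughSpace(maxlen, bytes);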

 

We will have to fully test this fix before we release, but if you could help us by testing with the latest snapshot, I'd appreciate it:
<dependency>
  <groupId>net.openhft</groupId>
  <artifactId>chronicle-wire</artifactId>
  <version>1.7.14-SNAPSHOT</version>
</dependency>

Rob

Andrew Oswald

Sep 19, 2016, 10:12:39 AM
to Chronicle
Will do, Rob; your help is much appreciated!

While I'm at it, any reasons I shouldn't go ahead and start using
<dependency>
  <groupId>net.openhft</groupId>
  <artifactId>chronicle-queue</artifactId>
  <version>4.5.14</version>
</dependency>

to coincide w/ this testing?

thanks!

Rob Austin

Sep 19, 2016, 11:26:37 AM
to java-ch...@googlegroups.com
Yes, it'd be worth testing with the latest. I'm not aware of any issues (apart from the one discussed in this thread) with this version.

<dependency>
  <groupId>net.openhft</groupId>
  <artifactId>chronicle-queue</artifactId>
  <version>4.5.14</version>
</dependency>

Rob

Andrew Oswald

Sep 19, 2016, 11:52:14 AM
to Chronicle
Now getting
java.lang.AssertionError: you cant put a header inside a header, check that you have not nested the documents.
at net.openhft.chronicle.wire.AbstractWire.writeHeader(AbstractWire.java:259)
at net.openhft.chronicle.queue.impl.single.StoreRecovery.writeHeader(StoreRecovery.java:45)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueStore.writeHeader(SingleChronicleQueueStore.java:332)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreAppender.writingDocument(SingleChronicleQueueExcerpts.java:303)
at net.openhft.chronicle.wire.MethodWriterInvocationHandler.invoke(MethodWriterInvocationHandler.java:51)
at com.sun.proxy.$Proxy26.onPut(Unknown Source)


Is it intentional that the only place where the AbstractWire's insideHeader boolean is made false is within the updateHeader method?

thanks!

Rob Austin

Sep 19, 2016, 12:16:36 PM
to java-ch...@googlegroups.com
Re:
java.lang.AssertionError: you cant put a header inside a header, check that you have not nested the documents.
at net.openhft.chronicle.wire.AbstractWire.writeHeader(AbstractWire.java:259)
at net.openhft.chronicle.queue.impl.single.StoreRecovery.writeHeader(StoreRecovery.java:45)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueStore.writeHeader(SingleChronicleQueueStore.java:332)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreAppender.writingDocument(SingleChronicleQueueExcerpts.java:303)
at net.openhft.chronicle.wire.MethodWriterInvocationHandler.invoke(MethodWriterInvocationHandler.java:51)
at com.sun.proxy.$Proxy26.onPut(Unknown Source)


When you are calling:

MethodWriterInvocationHandler(MarshallableOut appender);

are you using the same appender across threads, or do you have an appender per thread?


> Is it intentional that the only place where the AbstractWire's insideHeader boolean is made false is within the updateHeader method?

Snippet from net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts.StoreAppender#writeBytes(long, net.openhft.chronicle.bytes.BytesStore):
position(store.writeHeader(wire, length, timeoutMS())); // start the header
wireBytes.write(bytes);
wire.updateHeader(length, position, false); // end the header

Yes, updateHeader is effectively the “end header”; it is also called in the document.close() method.
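
For reference, with the usual appender pattern that lifecycle looks roughly like this (a sketch; the payload is simplified to text):

try (DocumentContext dc = appender.writingDocument()) {   // writeHeader: a header is opened here
    dc.wire().write().text("some event payload");         // message body (simplified for the sketch)
}                                                          // close() calls updateHeader, ending the header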

Andrew Oswald

Sep 19, 2016, 1:15:50 PM
to Chronicle
Hey Rob,

My use of the appender by way of method writer:
mutationEventHandler = queue.createAppender().methodWriter(MutationEventHandler.class);

The above queue (its creation code was previously provided) as well as mutationEventHandler are class instance members.

I'm not aware of Thread safety guarantees for the classes I'm implementing (for this framework), but wouldn't be surprised if numerous Threads are making calls.  I'll log Thread names to find out.

So I gather an appender should not be used across Threads?  My apologies if I missed that in the documentation; I was under the assumption the new version of Chronicle-Queue would handle that internally.

thanks again!

Rob Austin

Sep 19, 2016, 1:29:47 PM
to java-ch...@googlegroups.com
The queue.createAppender() has, in later versions, been renamed to

appender = chronicle.acquireAppender();

which will reuse an existing appender on the same thread, or create a new appender when called from another thread. The problem with checking, when the mutationEventHandler is used, which thread it was created on is that this check adds latency; however, we could do this as an assertion check. I'll also take a look at the Javadoc to see if we can be a bit clearer on the use of threads around this.
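
In other words, with the later API each thread simply acquires its own appender (and builds its own method writer) at the point of use, along these lines (a sketch; in practice you would cache the method writer per thread, as in the example further down the thread):

// acquireAppender() gives the calling thread its own appender, so the method writer
// built from it is safe to use on that thread. PutEvent/event are hypothetical names;
// MutationEventHandler is the handler interface used earlier in the thread.
void publishOnCurrentThread(ChronicleQueue queue, PutEvent event) {
    ExcerptAppender appender = queue.acquireAppender();          // per-thread appender
    MutationEventHandler handler = appender.methodWriter(MutationEventHandler.class);
    handler.onPut(event);                                        // proxy writes the event to the queue
}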

Rob


Andrew Oswald

Sep 20, 2016, 11:59:13 AM
to Chronicle
In my particular situation, establishing affinity between an arbitrary Thread and an appender results in enough latency to cause problems (or something else, as a byproduct of establishing that affinity, is causing problems).

For the case of object creation causing enough latency to potentially cause issues: on the bright side, I do know how many Threads are in play here.  I was thinking I might try something like initializing an appender for each of those Threads and keeping them in an array.  The problem with that idea is that SingleChronicleQueue's newAppender method has default (package-private) access.  Am I barking up the wrong tree with this type of approach?

For the case of a byproduct of the Thread -> appender mapping causing issues, could multiple appenders actually be the problem?  In other words, at what point does multiple-appender contention become a significant bottleneck?

thanks!

Rob Austin

Sep 20, 2016, 12:05:32 PM
to java-ch...@googlegroups.com
I assume this is when you are using a method writer. Can you not just use a ThreadLocal, something like this:

public class GatewayQueue implements GatewayEventHandler {

    @NotNull
    private final ChronicleQueue queue;

    // one method-writer proxy per thread, each built over that thread's own appender
    private ThreadLocal<GatewayEventHandler> gatewayEvents =
            ThreadLocal.withInitial(this::newGatewayEventHandler);

    public GatewayQueue(@NotNull final String path,
                        @NotNull final EventGroup eg,
                        @NotNull final Object... objects) {
        queue = ((ChronicleQueueBuilder) ChronicleQueueBuilder.single(path)).build();
        final MethodReader methodReader = queue.createTailer().toEnd().methodReader(objects);
        eg.addHandler(methodReader::readOne);
    }

    private GatewayEventHandler newGatewayEventHandler() {
        ExcerptAppender appender = queue.acquireAppender();   // per-thread appender
        final MethodWriterBuilder<GatewayEventHandler> builder =
                appender.methodWriterBuilder(GatewayEventHandler.class);
        builder.recordHistory(true);
        return builder.get();
    }

    @Override
    public MessageGenerator onNewOrderSingle(NewOrderSingle newOrderSingle) {
        gatewayEvents.get().onNewOrderSingle(newOrderSingle);
        return null;
    }

    @Override
    public MessageGenerator onOrderCancelReject(OrderCancelReject orderCancelReject) {
        gatewayEvents.get().onOrderCancelReject(orderCancelReject);
        return null;
    }

    @Override
    public MessageGenerator onOrderCancelRequest(OrderCancelRequest orderCancelRequest) {
        gatewayEvents.get().onOrderCancelRequest(orderCancelRequest);
        return null;
    }
}
Rob

Andrew Oswald

Sep 20, 2016, 12:20:51 PM
to Chronicle
I tried something similar, albeit with a capturing lambda as opposed to a method reference.  If it makes any difference, I'll give the method reference a try.

thanks.

Andrew Oswald

Sep 27, 2016, 11:28:28 AM
to Chronicle
> at what point does multiple-appender contention become a significant bottleneck?

Still curious about when to be concerned about creating too many appenders.  Synchronization is not really an option, so as I see it thus far, perhaps an object pool might be my best bet?  Also, I can't necessarily go the ThreadLocal route, as the Threads will actually be used across queues.

thanks!

Rob Austin

Sep 27, 2016, 11:31:24 AM
to java-ch...@googlegroups.com
You can create as many appenders as you require; it just makes no sense to create more than one appender per thread for the same queue, which is why createAppender was changed to

appender = chronicle.acquireAppender();

Rob

Andrew Oswald

Sep 27, 2016, 12:29:31 PM
to Chronicle
I get you on that, but (arguably) that's defective if you have multiple queues that get serviced by a common set of threads, no?

Rob Austin

Sep 27, 2016, 12:35:11 PM
to java-ch...@googlegroups.com
I don't understand; can you explain what you mean?

I also suggest, where possible, that you try to write everything to the same queue; Chronicle Queue lets you mix different DTOs in the same queue. When you read, you can filter out everything not required; this way you store a time-ordered record of your messages.
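
For example, something along these lines (a sketch; AllEvents and the DTO/handler names are made up):

// Sketch of the single-queue approach: different DTO types interleave in one time-ordered
// queue, and the reader filters simply by ignoring the methods it doesn't care about.
// AllEvents, PutEvent, RemoveEvent and handlePut are hypothetical names.
interface AllEvents {
    void onPut(PutEvent e);
    void onRemove(RemoveEvent e);
}

// write side: every event type goes through the same queue, preserving time order
AllEvents out = queue.acquireAppender().methodWriter(AllEvents.class);
out.onPut(putEvent);
out.onRemove(removeEvent);

// read side: only act on what you need; everything else is effectively filtered out
MethodReader reader = queue.createTailer().methodReader(new AllEvents() {
    @Override public void onPut(PutEvent e)       { handlePut(e); }  // interesting
    @Override public void onRemove(RemoveEvent e) { /* ignore */ }   // filtered out
});
while (reader.readOne()) { /* drain until caught up */ }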

Andrew Oswald

Sep 28, 2016, 8:24:21 AM
to Chronicle
My apologies, Rob.  The example you provided earlier works, and of course does so across queues.  I was getting confused due to Threads being arbitrarily renamed... great debugging info if you know it's explicitly being done, but very confusing if you don't.  Thanks again!