IndexOutOfBoundsException: writerIndex: (expected: readerIndex(0) <= writerIndex <= capacity(8192))

Pavan Kumar

May 30, 2017, 2:12:04 AM
to vert.x

I have a Vert.x based application, and after upgrading to the latest 3.4.1 release I am receiving an IndexOutOfBoundsException from the Netty packages.


The application works by using the Pump API: the ReadStream side of the Pump is the request received by the HTTP server the application opens, and the WriteStream side is the HTTP client request to a different server.


Earlier versions of Vert.x used Netty 4.1.1.Final, but the current one uses 4.1.8.Final. I therefore tried several Netty versions in between and found that everything works up to 4.1.6.Final but breaks with 4.1.7.Final and above. I could not find anything specific about this in the release notes.


All my efforts to build a reproducer using similar and different read/write streams have failed to produce the error, so I am posting here to seek support.


The exception stack trace is pasted below:


java.lang.IndexOutOfBoundsException: writerIndex: 10614 (expected: readerIndex(0) <= writerIndex <= capacity(8192))
        at io.netty.buffer.AbstractByteBuf.writerIndex(AbstractByteBuf.java:118)
        at io.netty.buffer.CompositeByteBuf.writerIndex(CompositeByteBuf.java:1686)
        at io.vertx.core.http.impl.HttpClientRequestImpl.write(HttpClientRequestImpl.java:851)
        at io.vertx.core.http.impl.HttpClientRequestImpl.write(HttpClientRequestImpl.java:228)
        at io.vertx.core.http.impl.HttpClientRequestImpl.write(HttpClientRequestImpl.java:51)
        at io.vertx.core.streams.impl.PumpImpl.lambda$new$1(PumpImpl.java:64)
        at io.vertx.core.http.impl.HttpServerRequestImpl.handleData(HttpServerRequestImpl.java:373)
        at io.vertx.core.http.impl.ServerConnection.handleChunk(ServerConnection.java:293)
        at io.vertx.core.http.impl.ServerConnection.processMessage(ServerConnection.java:435)
        at io.vertx.core.http.impl.ServerConnection.handleMessage(ServerConnection.java:131)
        at io.vertx.core.http.impl.HttpServerImpl$ServerHandler.doMessageReceived(HttpServerImpl.java:678)
        at io.vertx.core.http.impl.HttpServerImpl$ServerHandler.doMessageReceived(HttpServerImpl.java:573)
        at io.vertx.core.http.impl.VertxHttpHandler.lambda$channelRead$0(VertxHttpHandler.java:71)
        at io.vertx.core.impl.ContextImpl.lambda$wrapTask$2(ContextImpl.java:335)
        at io.vertx.core.impl.ContextImpl.executeFromIO(ContextImpl.java:193)
        at io.vertx.core.http.impl.VertxHttpHandler.channelRead(VertxHttpHandler.java:71)
        at io.vertx.core.net.impl.VertxHandler.channelRead(VertxHandler.java:122)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341)
        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341)
        at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
        at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341)
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341)
        at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1228)
        at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1039)
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:411)
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:642)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:565)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:479)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:441)
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
        at java.lang.Thread.run(Unknown Source)
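
The first frame in the trace (AbstractByteBuf.writerIndex) enforces the ByteBuf index invariant readerIndex <= writerIndex <= capacity. A minimal plain-Java sketch of that check, with simplified names and no Netty dependency (the real class carries much more state):

```java
// Simplified model of the writerIndex(int) bounds check that throws in the
// trace above. Class and field names are illustrative, not Netty's own code.
class SimpleByteBuf {
    private final int capacity;
    private int readerIndex;
    private int writerIndex;

    SimpleByteBuf(int capacity) {
        this.capacity = capacity;
    }

    // Rejects any value outside readerIndex <= writerIndex <= capacity.
    void writerIndex(int newIndex) {
        if (newIndex < readerIndex || newIndex > capacity) {
            throw new IndexOutOfBoundsException(String.format(
                "writerIndex: %d (expected: readerIndex(%d) <= writerIndex <= capacity(%d))",
                newIndex, readerIndex, capacity));
        }
        this.writerIndex = newIndex;
    }

    int writerIndex() {
        return writerIndex;
    }
}
```

Setting writerIndex to 10614 on a buffer with capacity 8192 reproduces the message seen in the trace.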

Tim Fox

May 30, 2017, 2:47:13 AM
to vert.x
If you can't provide a complete reproducer, at least provide some kind of code example describing what you are doing; otherwise I will have to get my crystal ball and tarot cards out... ;)

Pavan Kumar

May 30, 2017, 3:27:53 AM
to vert.x
Hi Tim,

Here is a sample that depicts my application. The startServer method reflects the kind of operation in the real application: listening on one port and pumping to a different upstream server. The code in the main method plays the role of the client, which uploads a file. The code below does not fail, because all the buffers are generated by Vert.x and are of size 8192 (or less at EOF), but in real life the client is a remotely located mobile device and such buffer sizes are not guaranteed.

// Required imports (Vert.x 3.x):
import io.vertx.core.Vertx;
import io.vertx.core.file.AsyncFile;
import io.vertx.core.file.OpenOptions;
import io.vertx.core.http.HttpClient;
import io.vertx.core.http.HttpClientRequest;
import io.vertx.core.http.HttpServer;
import io.vertx.core.streams.Pump;
import static io.vertx.core.http.HttpHeaders.CONTENT_LENGTH;

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        startServer(vertx);

        HttpClient client = vertx.createHttpClient();
        HttpClientRequest localReq = client.postAbs("http://localhost:3030");
        localReq.handler(resp -> {
            resp.endHandler(a -> {
                System.out.println("Done");
                vertx.close();
            });
        });

        String path = "D:\\temp\\file.txt";
        long fileSize = vertx.fileSystem().lpropsBlocking(path).size();
        localReq.headers().add(CONTENT_LENGTH, String.valueOf(fileSize));
        vertx.fileSystem().open(path, new OpenOptions(), res -> {
            AsyncFile file = res.result();
            file.endHandler(a -> localReq.end());
            Pump.pump(file, localReq).start();
//            file.handler(buffer -> System.out.println("buffer... " + buffer.length()));
        });
    }

    private static void startServer(Vertx vertx) {
        HttpServer server = vertx.createHttpServer();
        HttpClient client = vertx.createHttpClient();
        server.requestHandler(req -> {
            HttpClientRequest remoteReq = client.postAbs("http://httpbin.org/post");
            remoteReq.headers().add(CONTENT_LENGTH, req.headers().get(CONTENT_LENGTH));
            Pump.pump(req, remoteReq).start();
            req.endHandler(a -> remoteReq.end());
            remoteReq.handler(resp -> {
                resp.bodyHandler(body -> {
                    System.out.println("body....\n" + body);
                    req.response().end();
                });
            });
        }).listen(3030);
    }

Tim Fox

May 30, 2017, 3:52:44 AM
to vert.x
I notice you are not writing your server as a verticle; any reason for this? It's always recommended to use verticles: you get the best performance that way and a sane, simple threading model. It's the path of least resistance.

In your embedded case here the server and the client will be assigned different event loops; this could be causing your problems, as they concurrently try to access the same buffer.
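
As an illustration of that suggestion, a minimal, untested sketch of wrapping the server in a verticle so all of its handlers run on one event loop. This assumes Vert.x 3.x on the classpath; ProxyVerticle is a hypothetical name, and the proxy logic from the earlier snippet would go inside start():

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;

// Hypothetical verticle: everything created inside start() is bound to this
// verticle's event loop, avoiding cross-event-loop access to shared buffers.
public class ProxyVerticle extends AbstractVerticle {
    @Override
    public void start() {
        vertx.createHttpServer()
             .requestHandler(req -> {
                 // pump to the upstream server here, as in the earlier snippet
                 req.response().end();
             })
             .listen(3030);
    }

    public static void main(String[] args) {
        Vertx.vertx().deployVerticle(new ProxyVerticle());
    }
}
```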

Pavan Kumar

May 30, 2017, 4:00:27 AM
to vert.x
It does not seem like an issue due to concurrent access. The exception states that the writer index (10614) is beyond the capacity (8192). If it were a concurrent-access issue, it could just as well have failed with the code I shared.

Also, the code works fine with Netty 4.1.6.Final but fails with all newer versions. There was definitely a change in Netty, but I am not sure of the exact detail.

Tim Fox

May 30, 2017, 10:07:17 AM
to vert.x
In my experience, this is the kind of exception I have seen when Netty buffers are used concurrently by different threads.

Pavan Kumar

Aug 23, 2017, 4:26:44 AM
to vert.x
For the benefit of others with the same issue: I could work around this by setting the max chunk size (httpServerOptions.setMaxChunkSize(16384)). The error occurred only in one environment, where the server was behind an F5, and specifically on ports where SSL was not offloaded by the F5. The environment also exposes port 443, where the F5 itself offloads the SSL and forwards a plain HTTP request to our Vert.x application on port 80; the issue did not occur in that scenario. I do not have access to the F5 and am not sure which exact F5 configuration affects this.
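
The workaround above as a small configuration fragment (assuming Vert.x 3.x; the class name and port are illustrative):

```java
import io.vertx.core.Vertx;
import io.vertx.core.http.HttpServerOptions;

public class ChunkSizeWorkaround {
    public static void main(String[] args) {
        // Workaround from the message above: raise the maximum HTTP chunk
        // size from the default 8192 so larger incoming chunks fit.
        HttpServerOptions options = new HttpServerOptions().setMaxChunkSize(16384);
        Vertx vertx = Vertx.vertx();
        vertx.createHttpServer(options)
             .requestHandler(req -> { /* pump to upstream as in the earlier snippet */ })
             .listen(3030);
    }
}
```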

Tim Fox

Aug 23, 2017, 5:26:42 AM
to vert.x
Changing the configuration probably just makes the race less likely; I strongly suspect the underlying problem is a concurrency issue. Without seeing a reproducer it is very hard to diagnose properly.

Pavan Kumar

Aug 23, 2017, 5:41:15 AM
to vert.x
True. It's hard to debug without a reproducer, but the issue cannot even be reproduced on my machine; it occurs only in that specific environment, to which I have limited access.