Object mapper. InvalidQueryException: Key may not be empty


Alexandr Porunov

Sep 12, 2016, 10:06:20 AM
to DataStax Java Driver for Apache Cassandra User Mailing List
Hello,

I am using the DataStax Java Driver and cannot execute the "save" command with the object mapper.

I have the following table:

CREATE TABLE media_upload.audio_info (
  swift_id blob,
  size bigint,
  audio_id blob,
  last_succeed_segment bigint,
  PRIMARY KEY (swift_id)
);

Here is my entity class in Java:

package com.loader.entity.tmp;

import com.datastax.driver.mapping.annotations.Column;
import com.datastax.driver.mapping.annotations.PartitionKey;
import com.datastax.driver.mapping.annotations.Table;
import com.datastax.driver.mapping.annotations.Transient;

import javax.xml.bind.DatatypeConverter;
import java.nio.ByteBuffer;

@Table(keyspace = "media_upload", name = "audio_info",
        readConsistency = "QUORUM",
        writeConsistency = "QUORUM",
        caseSensitiveKeyspace = false,
        caseSensitiveTable = false)
public class AudioInfo {

    private ByteBuffer swiftId;

    private Long size;

    private ByteBuffer audioId;

    private Long lastSucceedSegment;

    @PartitionKey
    @Column(name = "swift_id")
    public ByteBuffer getSwiftId() {
        return swiftId;
    }

    @Column(name = "size")
    public Long getSize() {
        return size;
    }

    @Column(name = "audio_id")
    public ByteBuffer getAudioId() {
        return audioId;
    }

    @Column(name = "last_succeed_segment")
    public Long getLastSucceedSegment() {
        return lastSucceedSegment;
    }

    public void setSwiftId(ByteBuffer swiftId) {
        this.swiftId = swiftId;
    }

    @Transient
    public String getHexSwiftId() {
        return DatatypeConverter.printHexBinary(swiftId.array());
    }

    @Transient
    public void setSize(Long size) {
        this.size = size;
    }

    @Transient
    public String getHexAudioId() {
        return DatatypeConverter.printHexBinary(audioId.array());
    }

    @Transient
    public void setAudioId(ByteBuffer audioId) {
        this.audioId = audioId;
    }

    @Transient
    public void setLastSucceedSegment(Long lastSucceedSegment) {
        this.lastSucceedSegment = lastSucceedSegment;
    }

    @Transient
    @Override
    public String toString() {
        return "{swift_id='"+getHexSwiftId()+
                "',size='"+getSize()+
                "',audio_id='"+getHexAudioId()+
                "',last_succeed_segment='"+lastSucceedSegment+"'}";
    }
}

Here is what I am doing to save AudioInfo:

AudioInfo audioInfo = new AudioInfo();
audioInfo.setSwiftId(ByteBuffer.allocate(Long.BYTES).putLong(123));
audioInfo.setAudioId(ByteBuffer.allocate(Long.BYTES).putLong(124));
audioInfo.setLastSucceedSegment(0L);
audioInfo.setSize(100L);
mapper.save(audioInfo);

After "mapper.save(audioInfo);" I am getting an error:
com.datastax.driver.core.exceptions.InvalidQueryException: Key may not be empty
        at com.datastax.driver.core.Responses$Error.asException(Responses.java:136)
        at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:179)
        at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:174)
        at com.datastax.driver.core.RequestHandler.access$2600(RequestHandler.java:43)
        at com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:793)
        at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:627)
        at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1012)
        at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:935)
        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:328)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:321)
        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:328)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:321)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:328)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:321)
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:328)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:321)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1280)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:328)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:890)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:564)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:505)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:419)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:391)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
        at java.lang.Thread.run(Thread.java:745)

The strange thing happens when I use the following code:
AudioInfo audioInfo = new AudioInfo();
audioInfo.setSwiftId(ByteBuffer.allocate(Long.SIZE).putLong(123));
audioInfo.setAudioId(ByteBuffer.allocate(Long.SIZE).putLong(124));
audioInfo.setLastSucceedSegment(0L);
audioInfo.setSize(100L);
mapper.save(audioInfo);

It inserts without error, but in the database I see that both swift_id and audio_id have the following value:
0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000

What am I doing wrong?

Sincerely,
Alexandr

Kevin Gallardo

Sep 12, 2016, 10:19:10 AM
to java-dri...@lists.datastax.com
Hi,

Which version of the driver are you using? If it's earlier than 3.1.0, you may need to define your @Column and @PartitionKey annotations on the class's fields rather than on the getters/setters. If you are using 3.1.0 we'd need to investigate further, but I thought I'd mention that first.
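
For illustration, a minimal sketch of what the field-annotated style looks like (same entity as above; getters and setters omitted):

import java.nio.ByteBuffer;

import com.datastax.driver.mapping.annotations.Column;
import com.datastax.driver.mapping.annotations.PartitionKey;
import com.datastax.driver.mapping.annotations.Table;

@Table(keyspace = "media_upload", name = "audio_info")
public class AudioInfo {

    @PartitionKey
    @Column(name = "swift_id")
    private ByteBuffer swiftId;   // annotations on the field, not the getter

    @Column(name = "size")
    private Long size;

    @Column(name = "audio_id")
    private ByteBuffer audioId;

    @Column(name = "last_succeed_segment")
    private Long lastSucceedSegment;

    // plain getters and setters go here, with no annotations
}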

Thanks.


--
Kévin Gallardo.
Software Engineer in Drivers and Tools Team at DataStax.
 

Alexandre Dutra

Sep 12, 2016, 10:31:44 AM
to DataStax Java Driver for Apache Cassandra User Mailing List
Hello Alexandr,

Assuming (as Kevin pointed out) that you are using 3.1.0:

1) Are you sure that ByteBuffer is the right type for your mapped fields? In other words, are you sure that columns "swift_id" and "audio_id" are both of type blob?
2) If ByteBuffer is the right type, then your problem comes from a very common mistake: you forgot to flip your buffers:

audioInfo.setSwiftId((ByteBuffer) ByteBuffer.allocate(Long.BYTES).putLong(123).flip());
audioInfo.setAudioId((ByteBuffer) ByteBuffer.allocate(Long.BYTES).putLong(124).flip());
I suggest reading our FAQ on ByteBuffers.
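
To make the mechanics concrete, here is a small standalone sketch (plain java.nio, nothing driver-specific): the driver serializes the bytes between a buffer's position and its limit, so a freshly written, unflipped buffer looks empty to it.

import java.nio.ByteBuffer;

public class FlipDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(Long.BYTES).putLong(123L);
        // After putLong: position == 8 and limit == 8, so remaining() == 0.
        // Zero remaining bytes is why Cassandra rejected the key as empty.
        System.out.println(buf.remaining()); // prints 0

        buf.flip();
        // After flip: position == 0 and limit == 8 -> all 8 bytes are visible.
        System.out.println(buf.remaining()); // prints 8

        // The Long.SIZE variant is different because Long.SIZE is 64 (bits, not
        // bytes): allocate(Long.SIZE) creates a 64-byte buffer, and without a
        // flip the driver sees bytes 8..63 -- 56 zero bytes, which would explain
        // the all-zero value observed in the table.
        ByteBuffer big = ByteBuffer.allocate(Long.SIZE).putLong(123L);
        System.out.println(big.remaining()); // prints 56
    }
}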

On a side note, assuming again that your driver version is 3.1.0, you don't need to annotate any method other than getters with @Transient.
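
In other words, a trimmed sketch of the entity (assuming the 3.1.0 behavior just described):

import java.nio.ByteBuffer;

import javax.xml.bind.DatatypeConverter;

import com.datastax.driver.mapping.annotations.Column;
import com.datastax.driver.mapping.annotations.PartitionKey;
import com.datastax.driver.mapping.annotations.Table;
import com.datastax.driver.mapping.annotations.Transient;

@Table(keyspace = "media_upload", name = "audio_info")
public class AudioInfo {

    private ByteBuffer swiftId;

    @PartitionKey
    @Column(name = "swift_id")
    public ByteBuffer getSwiftId() {
        return swiftId;
    }

    // Setters need no annotation at all:
    public void setSwiftId(ByteBuffer swiftId) {
        this.swiftId = swiftId;
    }

    // @Transient is only needed on a getter that has no matching column:
    @Transient
    public String getHexSwiftId() {
        return DatatypeConverter.printHexBinary(swiftId.array());
    }
}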

Hope that helps,
Alexandre



--
Alexandre Dutra
Driver & Tools Engineer @ DataStax

Alexandr Porunov

Sep 12, 2016, 12:17:38 PM
to DataStax Java Driver for Apache Cassandra User Mailing List
Hello Alexandre,

Thank you very much for the help!
I am using version 3.1.0. I changed my code as you suggested and added "flip()". Now it works, but not always.
It always adds rows to the database, but 4-6 requests out of 10 show the following error:

Cassandra timeout during write query at consistency QUORUM (3 replica were required but only 2 acknowledged the write)] with root cause
com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout during write query at consistency QUORUM (3 replica were required but only 2 acknowledged the write)
        at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:59)
        at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:37)
        at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:277)
        at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:257)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:328)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:321)
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:328)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:321)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1280)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:328)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:890)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:564)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:505)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:419)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:391)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
        at java.lang.Thread.run(Thread.java:745)

Even after this error I can see the new rows in the database. Why are these errors shown?

Sincerely,
Alexandr

Alexandre Dutra

Sep 12, 2016, 12:26:12 PM
to DataStax Java Driver for Apache Cassandra User Mailing List
That's not a driver issue anymore, but something related to your Cassandra cluster. A write timeout means that some replicas did not reply to the coordinator in time. I suggest that you take a look at the logs on the Cassandra side to find out why there weren't enough replicas alive.

It's not unusual to see your rows in the database even when a write timeout occurs. This is because at least one replica acknowledged the write, and then eventual consistency kicked in and did the rest of the job.
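
If you want to react to such timeouts on the client side, here is a sketch (the exception carries the acknowledgement counts; adapt it to your own error handling):

import com.datastax.driver.core.exceptions.WriteTimeoutException;

try {
    mapper.save(audioInfo);
} catch (WriteTimeoutException e) {
    // At least one replica acknowledged the write before the timeout, so the
    // row may still be persisted; the remaining replicas can catch up later.
    System.err.printf("Write timed out at %s: %d of %d acknowledgements received%n",
            e.getConsistencyLevel(),
            e.getReceivedAcknowledgements(),
            e.getRequiredAcknowledgements());
}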

Regards,
Alexandre


Alexandr Porunov

Sep 12, 2016, 1:45:32 PM
to DataStax Java Driver for Apache Cassandra User Mailing List
I don't think so, because I have changed my code to this:
audioInfo.setSwiftId(ByteBuffer.wrap(ByteBuffer.allocate(Long.BYTES).putLong(123L).array()));
audioInfo.setAudioId(ByteBuffer.wrap(ByteBuffer.allocate(Long.BYTES).putLong(124L).array()));

List<String> tableNames = new LinkedList<>();
List<Object> tableValues = new LinkedList<>();
tableNames.add("swift_id");
tableNames.add("size");
tableNames.add("audio_id");
tableNames.add("last_succeed_segment");
tableValues.add(ByteBuffer.wrap(idGenerator.generateId64AsByteArray()));
tableValues.add(50L);
tableValues.add(ByteBuffer.wrap(idGenerator.generateId64AsByteArray()));
tableValues.add(2L);

cassandraSession.execute(
        QueryBuilder.insertInto("media_upload", "audio_info").values(tableNames, tableValues)
);

It works perfectly fine. If I use the QueryBuilder it works fast and without problems, but if I use the Mapper it shows this error very often (4-5 requests out of 10).

So it is more likely that I have configured the mapper wrongly, or something like that.

Sincerely,
Alexandr

Alexandr Porunov

Sep 12, 2016, 1:46:56 PM
to DataStax Java Driver for Apache Cassandra User Mailing List
Sorry, I meant without these lines:
audioInfo.setSwiftId(ByteBuffer.wrap(ByteBuffer.allocate(Long.BYTES).putLong(123L).array()));
audioInfo.setAudioId(ByteBuffer.wrap(ByteBuffer.allocate(Long.BYTES).putLong(124L).array()));

Alexandr Porunov

Sep 13, 2016, 3:14:11 AM
to DataStax Java Driver for Apache Cassandra User Mailing List
It was because the Object Mapper uses an asynchronous save by default in version 3.1.0. So I changed the save call from:

mapper.save(entity);

to:

getSession().execute(mapper.saveQuery(entity));

Now it is working fine.
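
For reference, a sketch of the mapper calls involved (driver 3.1.0 API; Guava is already on the driver's classpath):

import com.datastax.driver.core.Statement;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;

// Build the INSERT statement without executing it, then run it on the session:
Statement insert = mapper.saveQuery(entity);
getSession().execute(insert);

// There is also an explicitly asynchronous variant that returns a future:
ListenableFuture<Void> future = mapper.saveAsync(entity);
Futures.getUnchecked(future); // block until the write completes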

Sincerely,
Alexandr