Re: Druid Cassandra as Deep Storage - InvalidRequestException.


Brian O'Neill

Oct 31, 2017, 9:10:32 AM
to Abhay Girnara, druid-de...@googlegroups.com
Abhay,

This is Brian O'Neill.  I wrote the initial Cassandra deep storage proof of concept.  

Last time I checked, we needed to upgrade Astyanax, which is the underlying framework I used for Cassandra access.

I didn't do the upgrade because there is some question as to whether we should continue down that path or switch over to CQL and the DataStax driver.

I'm happy to help if you want to try upgrading the library or switching over to CQL.

However, without updating the driver, I cannot advise using Cassandra for deep storage.
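
For what it's worth, InvalidRequestException(why:You have not logged in) means the Thrift session never authenticated — as far as I remember, the extension doesn't pass any credentials to Astyanax at all. If you want to experiment with a patch, a rough sketch of wiring credentials into the Astyanax connection pool might look like this (untested; class names are from the Astyanax API, while the pool name, port, and sizing are just illustrative):

```java
// Sketch only: supply Thrift credentials to Astyanax's connection pool
// so the session logs in instead of connecting anonymously.
import com.netflix.astyanax.AstyanaxContext;
import com.netflix.astyanax.Keyspace;
import com.netflix.astyanax.connectionpool.impl.ConnectionPoolConfigurationImpl;
import com.netflix.astyanax.connectionpool.impl.CountingConnectionPoolMonitor;
import com.netflix.astyanax.connectionpool.impl.SimpleAuthenticationCredentials;
import com.netflix.astyanax.impl.AstyanaxConfigurationImpl;
import com.netflix.astyanax.thrift.ThriftFamilyFactory;

public class AuthedContext {
  public static AstyanaxContext<Keyspace> build(String host, String keyspace,
                                                String user, String pass) {
    ConnectionPoolConfigurationImpl pool =
        new ConnectionPoolConfigurationImpl("DruidPool")
            .setSeeds(host)          // e.g. "localhost:9160"
            .setPort(9160)
            .setMaxConnsPerHost(10)
            // the missing piece: authenticate the Thrift session
            .setAuthenticationCredentials(
                new SimpleAuthenticationCredentials(user, pass));
    return new AstyanaxContext.Builder()
        .forKeyspace(keyspace)
        .withAstyanaxConfiguration(new AstyanaxConfigurationImpl())
        .withConnectionPoolConfiguration(pool)
        .withConnectionPoolMonitor(new CountingConnectionPoolMonitor())
        .buildKeyspace(ThriftFamilyFactory.getInstance());
  }
}
```

If I remember right, the extension builds a similar context; the setAuthenticationCredentials call is the piece that's missing.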

-brian

-- 
Brian O'Neill
CTO @ Monetate
m: 215.588.6024
bon...@monetate.com


On Oct 31, 2017, at 1:27 AM, Abhay Girnara <ab...@msg.ai> wrote:

Hi Team,

We were using Amazon S3 as deep storage for Druid events (Druid 0.10.0), but now we want to use Cassandra as deep storage.

We have enabled the Thrift server with the associated credentials.

Common properties configuration:

druid.extensions.loadList=["druid-cassandra-storage", "druid-histogram", "druid-datasketches", "druid-kafka-indexing-service"]
druid.storage.type=c*
druid.storage.host=localhost:9160
druid.storage.keyspace=druid
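
For reference, authentication is enabled on the cluster; the relevant cassandra.yaml settings look roughly like this (excerpt reconstructed for illustration, not copied verbatim from our config):

```yaml
# cassandra.yaml (relevant excerpt): clients must authenticate
authenticator: PasswordAuthenticator
authorizer: CassandraAuthorizer
```

Note that none of the druid.storage.* properties above carries a username or password.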


We have created the keyspace and tables associated with the Druid events:


CREATE KEYSPACE druid WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'}  AND durable_writes = true;


CREATE TABLE druid.index_storage (
    key text,
    chunk text,
    value blob,
    PRIMARY KEY (key, chunk)
) WITH COMPACT STORAGE
    AND CLUSTERING ORDER BY (chunk ASC)
    AND bloom_filter_fp_chance = 0.01
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
    AND comment = ''
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
    AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99PERCENTILE';


CREATE TABLE druid.descriptor_storage (
    key text PRIMARY KEY,
    descriptor text,
    lastmodified timestamp
) WITH COMPACT STORAGE
    AND bloom_filter_fp_chance = 0.01
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
    AND comment = ''
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
    AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99PERCENTILE';
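
For completeness, a login role with access to this keyspace would be created along these lines (role name and password are placeholders; this is the Cassandra 2.2+ role syntax):

```sql
-- placeholders for illustration; Cassandra 2.2+ role syntax
CREATE ROLE druid_user WITH PASSWORD = 'druid_pass' AND LOGIN = true;
GRANT ALL PERMISSIONS ON KEYSPACE druid TO druid_user;
```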


But when we try to submit an indexing task to Druid (curl -X 'POST' -H 'Content-Type:application/json' -d @quickstart/wikiticker-index.json localhost:8090/druid/indexer/v1/task), we get the following error in the task logs:


2017-10-31T05:15:44,228 INFO [appenderator_merge_0] io.druid.storage.cassandra.CassandraDataSegmentPusher - Wrote compressed file [/Users/abhay/Downloads/imply-2.3.6/var/tmp/druid1630467194234044329index.zip] to [druid/wikiticker/2016-06-27T00:00:00.000Z_2016-06-28T00:00:00.000Z/2017-10-31T05:13:38.844Z/0]
2017-10-31T05:15:44,231 ERROR [ChunkWriter-druid/wikiticker/2016-06-27T00:00:00.000Z_2016-06-28T00:00:00.000Z/2017-10-31T05:13:38.844Z/0-0] com.netflix.astyanax.recipes.storage.ObjectWriter - BadRequestException: [host=localhost(127.0.0.1):9160, latency=1(1), attempts=1] InvalidRequestException(why:You have not logged in)
2017-10-31T05:15:44,232 ERROR [ChunkWriter-druid/wikiticker/2016-06-27T00:00:00.000Z_2016-06-28T00:00:00.000Z/2017-10-31T05:13:38.844Z/0-1] com.netflix.astyanax.recipes.storage.ObjectWriter - BadRequestException: [host=localhost(127.0.0.1):9160, latency=1(1), attempts=1] InvalidRequestException(why:You have not logged in)
2017-10-31T05:15:44,232 ERROR [ChunkWriter-druid/wikiticker/2016-06-27T00:00:00.000Z_2016-06-28T00:00:00.000Z/2017-10-31T05:13:38.844Z/0-2] com.netflix.astyanax.recipes.storage.ObjectWriter - BadRequestException: [host=localhost(127.0.0.1):9160, latency=1(1), attempts=1] InvalidRequestException(why:You have not logged in)
2017-10-31T05:15:44,236 WARN [appenderator_merge_0] com.netflix.astyanax.recipes.storage.ObjectWriter - BadRequestException: [host=localhost(127.0.0.1):9160, latency=1(1), attempts=1] InvalidRequestException(why:You have not logged in)
com.netflix.astyanax.connectionpool.exceptions.BadRequestException: BadRequestException: [host=localhost(127.0.0.1):9160, latency=1(1), attempts=1] InvalidRequestException(why:You have not logged in)
	at com.netflix.astyanax.thrift.ThriftConverter.ToConnectionPoolException(ThriftConverter.java:159)
	at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:60)
	at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:27)
	at com.netflix.astyanax.thrift.ThriftSyncConnectionFactoryImpl$1.execute(ThriftSyncConnectionFactoryImpl.java:132)
	at com.netflix.astyanax.connectionpool.impl.AbstractExecuteWithFailoverImpl.tryOperation(AbstractExecuteWithFailoverImpl.java:52)
	at com.netflix.astyanax.connectionpool.impl.AbstractHostPartitionConnectionPool.executeWithFailover(AbstractHostPartitionConnectionPool.java:229)
	at com.netflix.astyanax.thrift.ThriftKeyspaceImpl.executeOperation(ThriftKeyspaceImpl.java:446)
	at com.netflix.astyanax.thrift.ThriftKeyspaceImpl.access$400(ThriftKeyspaceImpl.java:62)
	at com.netflix.astyanax.thrift.ThriftKeyspaceImpl$1.execute(ThriftKeyspaceImpl.java:115)
	at com.netflix.astyanax.recipes.storage.CassandraChunkedStorageProvider.writeChunk(CassandraChunkedStorageProvider.java:82)
	at com.netflix.astyanax.recipes.storage.ObjectWriter$2.run(ObjectWriter.java:118)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: InvalidRequestException(why:You have not logged in)
	at org.apache.cassandra.thrift.Cassandra$batch_mutate_result.read(Cassandra.java:19479)
	at org.apache.cassandra.thrift.Cassandra$Client.recv_batch_mutate(Cassandra.java:1035)
	at org.apache.cassandra.thrift.Cassandra$Client.batch_mutate(Cassandra.java:1009)
	at com.netflix.astyanax.thrift.ThriftKeyspaceImpl$1$1.internalExecute(ThriftKeyspaceImpl.java:121)
	at com.netflix.astyanax.thrift.ThriftKeyspaceImpl$1$1.internalExecute(ThriftKeyspaceImpl.java:118)
	at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:55)
	... 14 more
2017-10-31T05:15:44,239 WARN [appenderator_merge_0] com.netflix.astyanax.recipes.storage.ObjectWriter - BadRequestException: [host=localhost(127.0.0.1):9160, latency=1(1), attempts=1] InvalidRequestException(why:You have not logged in)
2017-10-31T05:15:44,240 WARN [appenderator_merge_0] io.druid.segment.realtime.appenderator.AppenderatorImpl - Failed to push merged index for segment[wikiticker_2016-06-27T00:00:00.000Z_2016-06-28T00:00:00.000Z_2017-10-31T05:13:38.844Z].



