History segment not loading segments, coordinator says "No segments found in the database"


Amey Jain

Jan 15, 2016, 1:44:40 AM
to Druid User
Hi,
I am running the basic wikipedia example. I started four nodes so that each service runs on its own node.
I am pushing data to the realtime node using Kafka. The following is the spec file:
[
  {
    "dataSchema": {
      "dataSource": "wikipedia",
      "parser": {
        "type": "string",
        "parseSpec": {
          "format": "json",
          "timestampSpec": {
            "column": "timestamp",
            "format": "auto"
          },
          "dimensionsSpec": {
            "dimensions": [
              "page",
              "language",
              "user",
              "unpatrolled",
              "newPage",
              "robot",
              "anonymous",
              "namespace",
              "continent",
              "country",
              "region",
              "city"
            ],
            "dimensionExclusions": [],
            "spatialDimensions": []
          }
        }
      },
      "metricsSpec": [
        { "type": "count", "name": "count" },
        { "type": "doubleSum", "name": "added", "fieldName": "added" },
        { "type": "doubleSum", "name": "deleted", "fieldName": "deleted" },
        { "type": "doubleSum", "name": "delta", "fieldName": "delta" }
      ],
      "granularitySpec": {
        "type": "uniform",
        "segmentGranularity": "FIVE_MINUTE",
        "queryGranularity": "NONE"
      }
    },
    "ioConfig": {
      "type": "realtime",
      "firehose": {
        "type": "kafka-0.8",
        "consumerProps": {
          "zookeeper.connect": "betatest1.bpaas-dev.local:2181",
          "zookeeper.connection.timeout.ms": "15000",
          "zookeeper.session.timeout.ms": "15000",
          "zookeeper.sync.time.ms": "5000",
          "group.id": "druid-example-druid4",
          "fetch.message.max.bytes": "1048586",
          "auto.offset.reset": "largest",
          "auto.commit.enable": "false"
        },
        "feed": "wikipedia"
      },
      "plumber": {
        "type": "realtime"
      }
    },
    "tuningConfig": {
      "type": "realtime",
      "maxRowsInMemory": 500000,
      "intermediatePersistPeriod": "PT3m",
      "windowPeriod": "PT1m",
      "basePersistDirectory": "/tmp/realtime/basePersist",
      "rejectionPolicy": {
        "type": "messageTime"
      }
    }
  }
]






The realtime node logs look like this. They show it trying to hand off segments, with no errors:
Pushing [wikipedia_2014-01-30T11:55:00.000Z_2014-01-30T12:00:00.000Z_2014-01-30T11:55:00.000Z] to deep storage
2016-01-15T05:46:00,317 INFO [wikipedia-2014-01-30T11:55:00.000Z-persist-n-merge] io.druid.storage.hdfs.HdfsDataSegmentPusher - Copying segment[wikipedia_2014-01-30T11:55:00.000Z_2014-01-30T12:00:00.000Z_2014-01-30T11:55:00.000Z] to HDFS at location[hdfs://10.116.123.109:8020/tmp/druid4/wikipedia/20140130T115500.000Z_20140130T120000.000Z/2014-01-30T11_55_00.000Z/0]
2016-01-15T05:46:00,912 INFO [wikipedia-2014-01-30T11:55:00.000Z-persist-n-merge] io.druid.storage.hdfs.HdfsDataSegmentPusher - Compressing files from[/tmp/realtime/basePersist/wikipedia/2014-01-30T11:55:00.000Z_2014-01-30T12:00:00.000Z/merged] to [hdfs://10.116.123.109:8020/tmp/druid4/wikipedia/20140130T115500.000Z_20140130T120000.000Z/2014-01-30T11_55_00.000Z/0/index.zip]
2016-01-15T05:46:00,983 INFO [wikipedia-2014-01-30T11:55:00.000Z-persist-n-merge] com.metamx.common.CompressionUtils - Adding file[/tmp/realtime/basePersist/wikipedia/2014-01-30T11:55:00.000Z_2014-01-30T12:00:00.000Z/merged/version.bin] with size[4].  Total size so far[0]
2016-01-15T05:46:00,987 INFO [wikipedia-2014-01-30T11:55:00.000Z-persist-n-merge] com.metamx.common.CompressionUtils - Adding file[/tmp/realtime/basePersist/wikipedia/2014-01-30T11:55:00.000Z_2014-01-30T12:00:00.000Z/merged/meta.smoosh] with size[365].  Total size so far[4]
2016-01-15T05:46:00,987 INFO [wikipedia-2014-01-30T11:55:00.000Z-persist-n-merge] com.metamx.common.CompressionUtils - Adding file[/tmp/realtime/basePersist/wikipedia/2014-01-30T11:55:00.000Z_2014-01-30T12:00:00.000Z/merged/00000.smoosh] with size[8,552].  Total size so far[369]
2016-01-15T05:46:38,253 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2016-01-15T05:46:38.253Z","service":"realtime","host":"druid4.bpaas-dev.local:8080","metric":"ingest/events/thrownAway","value":0,"dataSource":"wikipedia"}]


The coordinator logs look like this:
Starting coordination. Getting available segments.
2016-01-15T05:42:22,844 INFO [Coordinator-Exec--0] io.druid.server.coordinator.helper.DruidCoordinatorSegmentInfoLoader - Found [0] available segments.
2016-01-15T05:43:11,163 WARN [DatabaseSegmentManager-Exec--0] io.druid.metadata.SQLMetadataSegmentManager - No segments found in the database!
2016-01-15T05:43:11,467 INFO [DatabaseRuleManager-Exec--0] io.druid.metadata.SQLMetadataRuleManager - Polled and found rules for 1 datasource(s)
2016-01-15T05:43:22,849 INFO [Coordinator-Exec--0] io.druid.server.coordinator.ReplicationThrottler - [_default_tier]: Replicant terminate queue is empty.
2016-01-15T05:43:22,849 INFO [Coordinator-Exec--0] io.druid.server.coordinator.helper.DruidCoordinatorBalancer - [_default_tier]: One or fewer servers found.  Cannot balance.
2016-01-15T05:43:22,850 INFO [Coordinator-Exec--0] io.druid.server.coordinator.helper.DruidCoordinatorLogger - Load Queues:
2016-01-15T05:43:22,850 INFO [Coordinator-Exec--0] io.druid.server.coordinator.helper.DruidCoordinatorLogger - Server[1.1.1.229:8080, historical, _default_tier] has 0 left to load, 0 left to drop, 0 bytes queued, 0 bytes served.
...
2016-01-15T05:47:22,896 INFO [Coordinator-Exec--0] io.druid.server.coordinator.helper.DruidCoordinatorLogger - Server[1.1.1.229:8080, historical, _default_tier] has 0 left to load, 0 left to drop, 0 bytes queued, 0 bytes served.
2016-01-15T05:48:11,184 WARN [DatabaseSegmentManager-Exec--0] io.druid.metadata.SQLMetadataSegmentManager - No segments found in the database!
2016-01-15T05:48:11,502 INFO [DatabaseRuleManager-Exec--0] io.druid.metadata.SQLMetadataRuleManager - Polled and found rules for 1 datasource(s)


The historical node shows nothing and is sitting idle:

2016-01-15T05:06:46,622 INFO [main-SendThread(betaTest1.bpaas-dev.local:2181)] org.apache.zookeeper.ClientCnxn - Socket connection established to betaTest1.bpaas-dev.local/1.1.1.249:2181, initiating session
2016-01-15T05:06:46,633 INFO [main-SendThread(betaTest1.bpaas-dev.local:2181)] org.apache.zookeeper.ClientCnxn - Session establishment complete on server betaTest1.bpaas-dev.local/1.1.1.249:2181, sessionid = 0x1523de15b19004c, negotiated timeout = 30000
2016-01-15T05:06:46,643 INFO [main-EventThread] org.apache.curator.framework.state.ConnectionStateManager - State change: CONNECTED
2016-01-15T05:06:46,788 INFO [main] io.druid.server.coordination.ZkCoordinator$BackgroundSegmentAnnouncer - Completed background segment announcing
2016-01-15T05:06:46,789 INFO [main] io.druid.server.coordination.ZkCoordinator - Cache load took 100 ms
2016-01-15T05:06:46,800 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking start method[public void io.druid.server.coordination.AbstractDataSegmentAnnouncer.start()] on object[io.druid.server.coordination.BatchDataSegmentAnnouncer@2a0b901c].
2016-01-15T05:06:46,801 INFO [main] io.druid.server.coordination.AbstractDataSegmentAnnouncer - Announcing self[DruidServerMetadata{name='1.1.1.229:8080', host='1.1.1.229:8080', maxSize=10000000000, tier='_default_tier', type='historical', priority='0'}] at [/druid/announcements/1.1.1.229:8080]


The table druid_segments in mysql is empty. There is one entry in druid_rules

mysql> select * from druid_rules;
+-----------------------------------+------------+--------------------------+-----------------------------------------------------------------+
| id                                | dataSource | version                  | payload                                                         |
+-----------------------------------+------------+--------------------------+-----------------------------------------------------------------+
| _default_2016-01-15T05:05:24.542Z | _default   | 2016-01-15T05:05:24.542Z | [{"tieredReplicants":{"_default_tier":2},"type":"loadForever"}] |
+-----------------------------------+------------+--------------------------+-----------------------------------------------------------------+
1 row in set (0.00 sec)


The realtime node has written the data to deep storage (HDFS):
Found 5 items
drwxr-xr-x   - root hdfs          0 2016-01-15 05:15 hdfs://10.116.123.109:8020/tmp/druid4/wikipedia/20140130T010000.000Z_20140130T010500.000Z
drwxr-xr-x   - root hdfs          0 2016-01-15 05:19 hdfs://10.116.123.109:8020/tmp/druid4/wikipedia/20140130T033000.000Z_20140130T033500.000Z
drwxr-xr-x   - root hdfs          0 2016-01-15 05:23 hdfs://10.116.123.109:8020/tmp/druid4/wikipedia/20140130T071000.000Z_20140130T071500.000Z
drwxr-xr-x   - root hdfs          0 2016-01-15 05:11 hdfs://10.116.123.109:8020/tmp/druid4/wikipedia/20140130T115500.000Z_20140130T120000.000Z
drwxr-xr-x   - root hdfs          0 2016-01-15 05:50 hdfs://10.116.123.109:8020/tmp/druid4/wikipedia/20140330T010000.000Z_20140330T010500.000Z



Please help. I searched all the previously asked questions regarding this problem, but none of the solutions helped. It must be some very small mistake somewhere.

Thanks, and let me know if you need more info.

Nishant Bangarwa

Jan 15, 2016, 10:39:56 AM
to Druid User
Hi Amey,
Looks like the realtime node was able to write the segment to deep storage but could not make the metadata entry in MySQL.
Can you verify that you have correctly configured the metadata storage to MySQL in runtime.properties for the realtime node?
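For reference, a minimal sketch of what the MySQL metadata settings in the realtime node's runtime.properties would look like. The host, database name, and password here are placeholders for your own values, and depending on your Druid version the extension is enabled via druid.extensions.loadList or druid.extensions.coordinates:

```properties
# Load the MySQL metadata storage extension (property name varies by Druid version).
druid.extensions.loadList=["mysql-metadata-storage"]

# Point the metadata storage at MySQL; every node type that touches metadata
# (realtime, coordinator, overlord) must agree on these values.
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://localhost:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=diurd
```

Note that if the realtime node defaults to a different metadata store (e.g. the embedded Derby) while the coordinator reads MySQL, you would see exactly this symptom: segments in deep storage but an empty druid_segments table.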


--
You received this message because you are subscribed to the Google Groups "Druid User" group.
To unsubscribe from this group and stop receiving emails from it, send an email to druid-user+...@googlegroups.com.
To post to this group, send email to druid...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/druid-user/e6cc804d-5142-4d33-ad8f-ad472bb5b7fb%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Davor Poldrugo

Jan 16, 2016, 9:16:34 AM
to Druid User
Are your coordinator and historical nodes on the same machine as the realtime node?
Does your historical node point to the same deep storage directory as the realtime node?

Amey Jain

Jan 18, 2016, 5:16:52 PM
to Druid User

Yes, that is what I found: it is not writing data to the metadata store, which is why the coordinator does not hand anything over to the historical node. But in the logs I also see that the realtime node has successfully connected to MySQL:
2016-01-18T22:02:05,449 INFO [main] io.druid.guice.JsonConfigurator - Loaded class[class io.druid.metadata.MetadataStorageConnectorConfig] from props[druid.metadata.storage.connector.] as [DbConnectorConfig{createTables=true, connectURI='jdbc:mysql://localhost:3306/druid', user='druid', passwordProvider=io.druid.metadata.DefaultPasswordProvider}]
2016-01-18T22:02:05,493 INFO [main] io.druid.metadata.storage.mysql.MySQLConnector - Configured MySQL as metadata storage

but it writes nothing to MySQL. Do you see any other reason why this might be happening?
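One way to narrow this down from the MySQL side (a diagnostic sketch, assuming the default druid_ table prefix and a database named druid, as in the connectURI above) is to check which schema actually holds the segment tables, in case the realtime node and coordinator are pointed at different databases:

```sql
-- List every database that contains Druid metadata tables,
-- in case the entry is landing in an unexpected schema.
SELECT table_schema, table_name
FROM information_schema.tables
WHERE table_name LIKE 'druid\_%';

-- If the expected database has the table, check whether any segment
-- rows exist at all (used = 1 means the coordinator should load them).
SELECT id, dataSource, used
FROM druid.druid_segments
LIMIT 10;
```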

Thanks for the help
...

Amey Jain

Jan 18, 2016, 5:18:14 PM
to Druid User

Yes, they are all currently running on the same machine. The deep storage is also the same, but the problem is that the coordinator never finds anything in the metadata store, so it does not put anything in the load queue of the historical node.

Nishant Bangarwa

Jan 19, 2016, 5:11:57 AM
to Druid User
Hi Amey, 

Since you have configured MySQL as the metadata storage, it should write metadata to MySQL. Possibly the realtime node is hitting some exception, or it is adding the entry to one database while the coordinator is looking at a different one.
Can you share your runtime.properties and the complete logs for the realtime node and coordinator?


Amey Jain

Jan 21, 2016, 4:00:04 PM
to Druid User
Hi Nishant,
I am not able to figure out what went wrong. I restarted all the services again with a new database and a new basePersist folder, and luckily it is working now. I used the same configuration as before.

Jakub Liska

Feb 7, 2016, 4:23:26 PM
to Druid User
I have this issue too. It happens with the defaults in https://github.com/implydata/distribution-docker, no properties changed. I'm just indexing custom data, not the wikiticker samples, but that shouldn't affect anything.

Jakub Liska

Feb 7, 2016, 5:17:25 PM
to Druid User
Btw, this happens even if I do:

sed -i 's/druid.indexer.storage.type=metadata/druid.indexer.storage.type=local/g' conf-quickstart/druid/overlord/runtime.properties

cat coordinator.log | grep WARN
2016-02-07T21:57:10,693 WARN [main] org.apache.curator.retry.ExponentialBackoffRetry - maxRetries too large (30). Pinning to 29
2016-02-07T21:57:15,473 WARN [main] org.apache.curator.utils.ZKPaths - The version of ZooKeeper being used doesn't support Container nodes. CreateMode.PERSISTENT will be used instead.
2016-02-07T21:58:15,272 WARN [DatabaseSegmentManager-Exec--0] io.druid.metadata.SQLMetadataSegmentManager - No segments found in the database!
...
2016-02-07T22:15:15,394 WARN [DatabaseSegmentManager-Exec--0] io.druid.metadata.SQLMetadataSegmentManager - No segments found in the database!

Fangjin Yang

Feb 11, 2016, 12:38:23 PM
to Druid User
Hi Jakub, thanks for reporting this, we are investigating.

Fangjin Yang

Feb 11, 2016, 1:09:48 PM
to Druid User
Jakub, can you share your configuration and the ingestion spec you are using? I think the problem is there.

Jakub Liska

Feb 16, 2016, 11:36:07 AM
to Druid User
Hey,

I was using this task and this Dockerfile, where the last line didn't fix the warnings.

I also tried:

    "tuningConfig" : {
      "type" : "index",
      "targetPartitionSize" : -1,
      "rowFlushBoundary" : 500000,
      "numShards": 1
    }

Gian Merlino

Feb 18, 2016, 2:04:20 AM
to druid...@googlegroups.com
Hey Jakub,

"SQLMetadataSegmentManager - No segments found in the database!" is a normal message that gets logged before you've inserted any data. It should stop getting logged after an indexing task successfully completes. If the indexing task is not successfully completing, then checking its logs for error messages could give some clues as to what is going wrong.

Gian
