After startup, which looks successful as far as I can tell, the following output loops endlessly. No data is ever written to S3.
2017-05-12 13:17:22,747 [secor_backup_mac-dev-mbailey.local_14611_18-leader-finder-thread] (kafka.utils.Logging$class:70) INFO Disconnecting from 10.3.230.71:9093
2017-05-12 13:17:22,749 [secor_backup_mac-dev-mbailey.local_14611_18-leader-finder-thread] (kafka.utils.Logging$class:70) INFO [ConsumerFetcherManager-1494616636717] Added fetcher for partitions ArrayBuffer([bestmatch.xml.sp.matches-36, initOffset 1021 to broker BrokerEndPoint(1,10.3.230.71,9092)] , [bestmatch.xml.sp.matches-33, initOffset 989 to broker BrokerEndPoint(1,10.3.230.71,9092)] , [bestmatch.xml.sp.matches-38, initOffset 979 to broker BrokerEndPoint(1,10.3.230.71,9092)] , [bestmatch.xml.sp.matches-35, initOffset 952 to broker BrokerEndPoint(1,10.3.230.71,9092)] , [bestmatch.xml.sp.matches-32, initOffset 982 to broker BrokerEndPoint(1,10.3.230.71,9092)] , [bestmatch.xml.sp.matches-37, initOffset 1025 to broker BrokerEndPoint(1,10.3.230.71,9092)] , [bestmatch.xml.sp.matches-34, initOffset 1053 to broker BrokerEndPoint(1,10.3.230.71,9092)] )
2017-05-12 13:17:22,749 [ConsumerFetcherThread-secor_backup_mac-dev-mbailey.local_14611_18-0-1] (kafka.utils.Logging$class:78) INFO Reconnect due to error:
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:85)
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83)
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:132)
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:132)
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:132)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:131)
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:131)
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:131)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:130)
at kafka.consumer.ConsumerFetcherThread.fetch(ConsumerFetcherThread.scala:111)
at kafka.consumer.ConsumerFetcherThread.fetch(ConsumerFetcherThread.scala:31)
at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:118)
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:103)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
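One detail I can't explain (and maybe it's the problem): within the same loop, the disconnect line references port 9093, while every fetcher is registered against port 9092. Just to make sure I'm reading the log right, here is a quick check over two lines copied from the excerpt above:

```shell
# Two lines copied verbatim from the log excerpt; compare the broker ports.
disconnect='INFO Disconnecting from 10.3.230.71:9093'
fetcher='to broker BrokerEndPoint(1,10.3.230.71,9092)'

# Port after the last ':' in the disconnect line.
port1=$(printf '%s' "$disconnect" | grep -oE '[0-9]+$')
# Last number in the BrokerEndPoint(...) tuple.
port2=$(printf '%s' "$fetcher" | grep -oE '[0-9]+' | tail -n 1)

echo "disconnect port: $port1"   # 9093
echo "fetcher port:    $port2"   # 9092
```

Port 9093 is commonly used as a second (e.g. SSL) listener on a Kafka broker, so I wonder whether the client is being handed an endpoint it can't actually complete a handshake on — but that's just a guess on my part.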
As far as I can tell, my configuration is identical whether I run from IntelliJ or from the shell.
IntelliJ run configuration (VM options):
-ea
-Dsecor_group=secor_backup
-Dlog4j.configuration=log4j.dev.properties
-Dconfig=secor.dev.backup.properties
My shell command:
java -ea -Dsecor_group=secor_backup \
-Dlog4j.configuration=log4j.dev.properties \
-Dconfig=secor.dev.backup.properties \
-cp secor-0.23-SNAPSHOT.jar:lib/* \
com.pinterest.secor.main.ConsumerMain
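For completeness, my understanding is that Secor layers its properties files (`secor.dev.backup.properties` includes `secor.dev.properties`, which in turn includes `secor.common.properties`), so the broker and ZooKeeper endpoints come from those files rather than the command line. These are the keys I believe are relevant (property names as I read them in `secor.common.properties`; the values below are placeholders for my environment):

```properties
# Broker/ZooKeeper endpoints Secor connects with (placeholder values).
kafka.seed.broker.host=10.3.230.71
kafka.seed.broker.port=9092
zookeeper.quorum=10.3.230.71:2181
```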
I'm completely blocked and flummoxed by this issue. Is there some configuration that Secor pulls in from the environment that isn't specified on the command line?