ERROR org.apache.kafka.connect.cli.ConnectStandalone - Failed to create job


Robin Moffatt

Sep 8, 2016, 2:50:38 AM9/8/16
to Confluent Platform
Hi, 

I'm trying to run the Elasticsearch sink, and hitting a problem invoking it. It works fine on another machine - annoyingly, one which is a straight clone of the one on which it's not working. Point being, I have had this working...
 
/usr/bin/connect-standalone /etc/kafka/connect-standalone.properties /opt/elasticsearch-2.4.0/config/elasticsearch-kafka-connect.properties

[...]
[main] INFO org.eclipse.jetty.server.ServerConnector - Started ServerConnector@641a7160{HTTP/1.1}{0.0.0.0:8083}
[main] INFO org.eclipse.jetty.server.Server - Started @10706ms
[main] INFO org.apache.kafka.connect.runtime.rest.RestServer - REST server listening at http://127.0.0.1:8083/, advertising URL http://127.0.0.1:8083/
[main] INFO org.apache.kafka.connect.runtime.Connect - Kafka Connect started
[main] INFO org.apache.kafka.connect.runtime.ConnectorConfig - ConnectorConfig values:
        connector.class = io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
        tasks.max = 1
        name = elasticsearch-sink

[main] INFO org.apache.kafka.connect.runtime.Worker - Creating connector elasticsearch-sink of type io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
[main] INFO org.apache.kafka.connect.runtime.Worker - Instantiated connector elasticsearch-sink with version 3.1.0-SNAPSHOT of type io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
[main] INFO io.confluent.connect.elasticsearch.ElasticsearchSinkConnectorConfig - ElasticsearchSinkConnectorConfig values:
        type.name = kafka-connect
        batch.size = 2000
        max.retries = 5
        key.ignore = false
        max.in.flight.requests = 5
        retry.backoff.ms = 100
        max.buffered.records = 20000
        schema.ignore = false
        flush.timeout.ms = 10000
        topic.index.map = [ORCL.SOE.LOGON:soe.logon]
        topic.key.ignore = [ORCL.SOE.LOGON]
        connection.url = http://localhost:9200
        topic.schema.ignore = []
        linger.ms = 1

[main] INFO org.apache.kafka.connect.runtime.Worker - Finished creating connector elasticsearch-sink
[main] ERROR org.apache.kafka.connect.cli.ConnectStandalone - Failed to create job for /opt/elasticsearch-2.4.0/config/elasticsearch-kafka-connect.properties
[main] ERROR org.apache.kafka.connect.cli.ConnectStandalone - Stopping after connector error
java.util.concurrent.ExecutionException: org.apache.kafka.connect.errors.ConnectException: Connector elasticsearch-sink   not found in this worker.
        at org.apache.kafka.connect.util.ConvertingFutureCallback.result(ConvertingFutureCallback.java:80)
        at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:67)
        at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:97)
Caused by: org.apache.kafka.connect.errors.ConnectException: Connector elasticsearch-sink   not found in this worker.
        at org.apache.kafka.connect.runtime.Worker.isRunning(Worker.java:300)
        at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.updateConnectorTasks(StandaloneHerder.java:297)
        at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:165)
        at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:94)
[main] INFO org.apache.kafka.connect.runtime.Connect - Kafka Connect stopping
[...]


I've amended /etc/kafka/connect-log4j.properties to set 

log4j.rootLogger=TRACE, stdout

But in the console output of connect I only see INFO/ERROR messages and don't see any information as to *why* the job create failed.

I've compared the env, including CLASSPATH, of both environments, and it's the same. I've compared the console output of executing the above on the two environments, and it's identical (bar the expected timings, etc.) right up until the error.

Any suggestions where to go looking for the cause of this failure? 

thanks, Robin.

Robin Moffatt

Sep 8, 2016, 2:56:33 AM9/8/16
to Confluent Platform

konst...@confluent.io

Sep 12, 2016, 6:09:13 PM9/12/16
to Confluent Platform

Hi Robin, 

the version of the Connect framework that you are using contains a bug: the connector name is trimmed in one place but used untrimmed (i.e., exactly as given in the configuration file) elsewhere.

The bug has been fixed in the latest snapshot releases, but not yet in the latest Kafka or Confluent Platform releases. If you cannot upgrade to the latest snapshot release, a quick fix for now is to make sure your configuration files do not include extra whitespace around the connector's name. With the fix, this isn't necessary, since the connector name is read exactly as it appears in the properties file.
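As a sketch of that quick fix (file path and contents below are illustrative, not the actual config), note that the extra spaces visible in the error message itself ("Connector elasticsearch-sink   not found") are the telltale sign. Trailing whitespace after the name= value can be found and stripped like so:

```shell
# Illustrative reproduction: a connector name with trailing spaces,
# matching the extra spaces visible in the "not found" error above.
printf 'name=elasticsearch-sink   \ntasks.max=1\n' > /tmp/demo-connector.properties

# Find lines ending in whitespace (the likely culprit)
grep -n '[[:space:]]$' /tmp/demo-connector.properties

# Strip trailing whitespace in place (GNU sed syntax; BSD sed needs -i '')
sed -i 's/[[:space:]]*$//' /tmp/demo-connector.properties
```

After the sed pass, the name= line no longer carries trailing spaces, so the worker looks up the same string it registered.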

Konstantine