using confluent from kafka to hdfs: ERROR Task test-mysql-jdbc-autoincrement-source-0 threw an uncaught and unrecoverable exception


Kotesh Banoth

May 5, 2016, 2:30:32 PM
to Confluent Platform


nohup /home/hpvertica1/confluent-2.0.1/bin/zookeeper-server-start /home/hpvertica1/confluent-2.0.1/etc/kafka/zookeeper.properties &
[1] 325213
[hpvertica1@hpvertica3 confluent-2.0.1]$ nohup: ignoring input and appending output to 'nohup.out'

[hpvertica1@hpvertica3 confluent-2.0.1]$ jps
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
325267 Jps
325213 QuorumPeerMain
[hpvertica1@hpvertica3 confluent-2.0.1]$ nohup /home/hpvertica1/confluent-2.0.1/bin/kafka-server-start /home/hpvertica1/confluent-2.0.1/etc/kafka/server.properties &
[2] 325309
[hpvertica1@hpvertica3 confluent-2.0.1]$ nohup: ignoring input and appending output to 'nohup.out'

[hpvertica1@hpvertica3 confluent-2.0.1]$
[hpvertica1@hpvertica3 confluent-2.0.1]$ jps
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
325309 SupportedKafka
325357 Jps
325213 QuorumPeerMain
[hpvertica1@hpvertica3 confluent-2.0.1]$ nohup /home/hpvertica1/confluent-2.0.1/bin/schema-registry-start /home/hpvertica1/confluent-2.0.1/etc/schema-registry/schema-registry.properties &
[3] 325441
[hpvertica1@hpvertica3 confluent-2.0.1]$ nohup: ignoring input and appending output to 'nohup.out'

[hpvertica1@hpvertica3 confluent-2.0.1]$ jps
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
325475 Jps
325441 SchemaRegistryMain
325309 SupportedKafka
325213 QuorumPeerMain
[hpvertica1@hpvertica3 confluent-2.0.1]$ bin/connect-standalone etc/schema-registry/connect-avro-standalone.properties etc/kafka-connect-jdbc/quickstart-mysql.properties
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hpvertica1/confluent-2.0.1/share/java/confluent-common/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hpvertica1/confluent-2.0.1/share/java/kafka-serde-tools/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hpvertica1/confluent-2.0.1/share/java/kafka-connect-hdfs/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hpvertica1/confluent-2.0.1/share/java/kafka/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
[2016-05-05 19:40:56,624] INFO StandaloneConfig values:
        rest.advertised.port = null
        rest.advertised.host.name = null
        bootstrap.servers = [localhost:9092]
        value.converter = class io.confluent.connect.avro.AvroConverter
        task.shutdown.graceful.timeout.ms = 5000
        internal.value.converter = class org.apache.kafka.connect.json.JsonConverter
        rest.host.name = null
        cluster = connect
        internal.key.converter = class org.apache.kafka.connect.json.JsonConverter
        key.converter = class io.confluent.connect.avro.AvroConverter
        offset.flush.timeout.ms = 5000
        rest.port = 8083
        offset.flush.interval.ms = 60000
 (org.apache.kafka.connect.runtime.standalone.StandaloneConfig:165)
[2016-05-05 19:40:57,322] INFO Logging initialized @1948ms (org.eclipse.jetty.util.log:186)
[2016-05-05 19:40:57,380] INFO Kafka Connect starting (org.apache.kafka.connect.runtime.Connect:53)
[2016-05-05 19:40:57,381] INFO Worker starting (org.apache.kafka.connect.runtime.Worker:89)
[2016-05-05 19:40:57,413] INFO ProducerConfig values:
        request.timeout.ms = 2147483647
        retry.backoff.ms = 100
        buffer.memory = 33554432
        ssl.truststore.password = null
        batch.size = 16384
        ssl.keymanager.algorithm = SunX509
        receive.buffer.bytes = 32768
        ssl.cipher.suites = null
        ssl.key.password = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        ssl.provider = null
        sasl.kerberos.service.name = null
        max.in.flight.requests.per.connection = 1
        sasl.kerberos.ticket.renew.window.factor = 0.8
        bootstrap.servers = [localhost:9092]
        client.id =
        max.request.size = 1048576
        acks = all
        linger.ms = 0
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        metadata.fetch.timeout.ms = 60000
        ssl.endpoint.identification.algorithm = null
        ssl.keystore.location = null
        value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
        ssl.truststore.location = null
        ssl.keystore.password = null
        key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
        block.on.buffer.full = false
        metrics.sample.window.ms = 30000
        metadata.max.age.ms = 300000
        security.protocol = PLAINTEXT
        ssl.protocol = TLS
        sasl.kerberos.min.time.before.relogin = 60000
        timeout.ms = 30000
        connections.max.idle.ms = 540000
        ssl.trustmanager.algorithm = PKIX
        metric.reporters = []
        compression.type = none
        ssl.truststore.type = JKS
        max.block.ms = 9223372036854775807
        retries = 2147483647
        send.buffer.bytes = 131072
        partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
        reconnect.backoff.ms = 50
        metrics.num.samples = 2
        ssl.keystore.type = JKS
 (org.apache.kafka.clients.producer.ProducerConfig:165)
[2016-05-05 19:40:57,485] INFO Kafka version : 0.9.0.1-cp1 (org.apache.kafka.common.utils.AppInfoParser:82)
[2016-05-05 19:40:57,485] INFO Kafka commitId : 7113452b3e7d5638 (org.apache.kafka.common.utils.AppInfoParser:83)
[2016-05-05 19:40:57,487] INFO Starting FileOffsetBackingStore with file /tmp/connect.offsets (org.apache.kafka.connect.storage.FileOffsetBackingStore:53)
[2016-05-05 19:40:57,489] INFO Worker started (org.apache.kafka.connect.runtime.Worker:111)
[2016-05-05 19:40:57,489] INFO Herder starting (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:57)
[2016-05-05 19:40:57,489] INFO Herder started (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:58)
[2016-05-05 19:40:57,489] INFO Starting REST server (org.apache.kafka.connect.runtime.rest.RestServer:91)
[2016-05-05 19:40:57,765] INFO jetty-9.2.12.v20150709 (org.eclipse.jetty.server.Server:327)
May 05, 2016 7:40:59 PM org.glassfish.jersey.internal.Errors logErrors
WARNING: The following warnings have been detected: WARNING: The (sub)resource method listConnectors in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method createConnector in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method serverInfo in org.apache.kafka.connect.runtime.rest.resources.RootResource contains empty path annotation.

[2016-05-05 19:40:59,091] INFO Started o.e.j.s.ServletContextHandler@17253394{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:744)
[2016-05-05 19:40:59,104] INFO Started ServerConnector@65c42778{HTTP/1.1}{0.0.0.0:8083} (org.eclipse.jetty.server.ServerConnector:266)
[2016-05-05 19:40:59,105] INFO Started @3733ms (org.eclipse.jetty.server.Server:379)
[2016-05-05 19:40:59,986] INFO REST server listening at http://198.105.244.11:8083/, advertising URL http://198.105.244.11:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:132)
[2016-05-05 19:40:59,986] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:60)
[2016-05-05 19:40:59,992] INFO ConnectorConfig values:
        topics = []
        name = test-mysql-jdbc-autoincrement-source
        tasks.max = 10
        connector.class = class io.confluent.connect.jdbc.JdbcSourceConnector
 (org.apache.kafka.connect.runtime.ConnectorConfig:165)
[2016-05-05 19:40:59,993] INFO Creating connector test-mysql-jdbc-autoincrement-source of type io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:170)
[2016-05-05 19:40:59,995] INFO Instantiated connector test-mysql-jdbc-autoincrement-source with version 2.0.1 of type io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:183)
[2016-05-05 19:41:00,004] INFO JdbcSourceConnectorConfig values:
        table.poll.interval.ms = 60000
        incrementing.column.name = id
        connection.url = jdbc:mysql://localhost:3306/test?user=root&password=hpvertica
        timestamp.column.name =
        query =
        poll.interval.ms = 5000
        topic.prefix = test-mysql-jdbc-
        batch.max.rows = 100
        table.whitelist = []
        mode = incrementing
        table.blacklist = []
 (io.confluent.connect.jdbc.JdbcSourceConnectorConfig:135)
Thu May 05 19:41:00 IST 2016 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
[2016-05-05 19:41:00,373] INFO Finished creating connector test-mysql-jdbc-autoincrement-source (org.apache.kafka.connect.runtime.Worker:193)
[2016-05-05 19:41:00,387] INFO TaskConfig values:
        task.class = class io.confluent.connect.jdbc.JdbcSourceTask
 (org.apache.kafka.connect.runtime.TaskConfig:165)
[2016-05-05 19:41:00,388] INFO Creating task test-mysql-jdbc-autoincrement-source-0 (org.apache.kafka.connect.runtime.Worker:256)
[2016-05-05 19:41:00,389] INFO Instantiated task test-mysql-jdbc-autoincrement-source-0 with version 2.0.1 of type io.confluent.connect.jdbc.JdbcSourceTask (org.apache.kafka.connect.runtime.Worker:267)
[2016-05-05 19:41:00,398] INFO JdbcSourceTaskConfig values:
        tables = [accounts]
        table.poll.interval.ms = 60000
        incrementing.column.name = id
        connection.url = jdbc:mysql://localhost:3306/test?user=root&password=hpvertica
        timestamp.column.name =
        query =
        poll.interval.ms = 5000
        topic.prefix = test-mysql-jdbc-
        batch.max.rows = 100
        table.whitelist = []
        mode = incrementing
        table.blacklist = []
 (io.confluent.connect.jdbc.JdbcSourceTaskConfig:135)
[2016-05-05 19:41:00,400] INFO Created connector test-mysql-jdbc-autoincrement-source (org.apache.kafka.connect.cli.ConnectStandalone:82)
Thu May 05 19:41:00 IST 2016 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
[2016-05-05 19:41:00,477] INFO Source task Thread[WorkerSourceTask-test-mysql-jdbc-autoincrement-source-0,5,main] finished initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:342)
[2016-05-05 19:41:00,610] ERROR Task test-mysql-jdbc-autoincrement-source-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerSourceTask:362)
[2016-05-05 19:41:00,611] ERROR Task is being killed and will not recover until manually restarted: (org.apache.kafka.connect.runtime.WorkerSourceTask:363)
org.apache.kafka.connect.errors.DataException: Failed to serialize Avro data:

        at io.confluent.connect.avro.AvroConverter.fromConnectData(AvroConverter.java:92)
        at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:142)
        at org.apache.kafka.connect.runtime.WorkerSourceTask.access$600(WorkerSourceTask.java:50)
        at org.apache.kafka.connect.runtime.WorkerSourceTask$WorkerSourceTaskThread.execute(WorkerSourceTask.java:356)
        at org.apache.kafka.connect.util.ShutdownableThread.run(ShutdownableThread.java:82)
Caused by: org.apache.kafka.common.errors.SerializationException: Error serializing Avro message
Caused by: java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:579)
        at java.net.Socket.connect(Socket.java:528)
        at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
        at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
        at sun.net.www.http.HttpClient.New(HttpClient.java:308)
        at sun.net.www.http.HttpClient.New(HttpClient.java:326)
        at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:997)
        at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:933)
        at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:851)
        at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1092)
        at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:139)
        at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:174)
        at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:225)
        at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:217)
        at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:212)
        at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.registerAndGetId(CachedSchemaRegistryClient.java:57)
        at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:89)
        at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:50)
        at io.confluent.connect.avro.AvroConverter$Serializer.serialize(AvroConverter.java:120)
        at io.confluent.connect.avro.AvroConverter.fromConnectData(AvroConverter.java:90)
        at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:142)
        at org.apache.kafka.connect.runtime.WorkerSourceTask.access$600(WorkerSourceTask.java:50)
        at org.apache.kafka.connect.runtime.WorkerSourceTask$WorkerSourceTaskThread.execute(WorkerSourceTask.java:356)
        at org.apache.kafka.connect.util.ShutdownableThread.run(ShutdownableThread.java:82)

Liquan Pei

May 5, 2016, 3:58:09 PM
to confluent...@googlegroups.com
It seems that you have an issue connecting to the Schema Registry. Can you access the Schema Registry through its REST API?
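For example, a quick check with curl (assuming the registry is running on its default port 8081 on the same host; adjust host and port to your setup):

curl http://localhost:8081/subjects

A healthy registry answers with a JSON array of registered subjects (possibly just []), rather than "Connection refused".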

Liquan




--
Liquan Pei | Software Engineer | Confluent | +1 413.230.6855
Download Apache Kafka and Confluent Platform: www.confluent.io/download

Kotesh Banoth

May 6, 2016, 6:47:46 AM
to Confluent Platform
Hi Liquan,

     In schema-registry.properties I have set the following. Can you please help me out?

port=8081
kafkastore.connection.url=localhost:2181
kafkastore.topic=_schemas
debug=false

Kotesh Banoth

May 7, 2016, 1:57:08 AM
to Confluent Platform
I followed the steps below, but I got an error while running the standalone process.

nohup ~/confluent-2.0.1/bin/zookeeper-server-start ~/confluent-2.0.1/etc/kafka/zookeeper.properties &
nohup ~/confluent-2.0.1/bin/kafka-server-start ~/confluent-2.0.1/etc/kafka/server.properties &
nohup ~/confluent-2.0.1/bin/schema-registry-start ~/confluent-2.0.1/etc/schema-registry/schema-registry.properties &


bin/connect-standalone etc/schema-registry/connect-avro-standalone.properties etc/kafka-connect-jdbc/quickstart-mysql.properties
Error output from the schema-registry process:

Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hpvertica1/confluent-2.0.1/share/java/confluent-common/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hpvertica1/confluent-2.0.1/share/java/schema-registry/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
[2016-05-07 12:21:06,065] INFO SchemaRegistryConfig values:
        master.eligibility = true
        port = 8081
        kafkastore.timeout.ms = 500
        kafkastore.init.timeout.ms = 60000
        debug = false
        kafkastore.zk.session.timeout.ms = 30000
        schema.registry.zk.namespace = schema_registry
        request.logger.name = io.confluent.rest-utils.requests
        metrics.sample.window.ms = 30000
        kafkastore.topic = _schemas
        avro.compatibility.level = backward
        shutdown.graceful.ms = 1000
        access.control.allow.origin =
        response.mediatype.preferred = [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json]
        metrics.jmx.prefix = kafka.schema.registry
        host.name = 198.105.244.11
        metric.reporters = []
        kafkastore.commit.interval.ms = -1
        kafkastore.connection.url = localhost:2181
        metrics.num.samples = 2
        response.mediatype.default = application/vnd.schemaregistry.v1+json
        kafkastore.topic.replication.factor = 3
 (io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig:135)
[2016-05-07 12:21:07,381] INFO Initialized the consumer offset to -1 (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:86)
[2016-05-07 12:21:08,789] WARN The replication factor of the schema topic _schemas is less than the desired one of 3. If this is a production environment, it's crucial to add more brokers and increase the replication factor of the topic. (io.confluent.kafka.schemaregistry.storage.KafkaStore:205)
[2016-05-07 12:21:09,143] INFO [kafka-store-reader-thread-_schemas], Starting  (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:68)
[2016-05-07 12:22:09,289] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication:57)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
        at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:166)
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:55)
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:37)
        at io.confluent.rest.Application.createServer(Application.java:109)
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:43)
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
        at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:155)
        at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:164)
        ... 4 more
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
        at io.confluent.kafka.schemaregistry.storage.KafkaStore.getLatestOffset(KafkaStore.java:367)
        at io.confluent.kafka.schemaregistry.storage.KafkaStore.waitUntilKafkaReaderReachesLastOffset(KafkaStore.java:224)
        at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:153)
        ... 5 more
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
        at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:686)
        at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:449)
        at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:339)
        at io.confluent.kafka.schemaregistry.storage.KafkaStore.getLatestOffset(KafkaStore.java:362)
        ... 7 more
Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.



[2016-05-07 12:22:07,488] INFO StandaloneConfig values:

        rest.advertised.port = null
        rest.advertised.host.name = null
        bootstrap.servers = [localhost:9092]
        value.converter = class io.confluent.connect.avro.AvroConverter
        task.shutdown.graceful.timeout.ms = 5000
        internal.value.converter = class org.apache.kafka.connect.json.JsonConverter
        rest.host.name = null
        cluster = connect
        internal.key.converter = class org.apache.kafka.connect.json.JsonConverter
        key.converter = class io.confluent.connect.avro.AvroConverter
        offset.flush.timeout.ms = 5000
        rest.port = 8083
        offset.flush.interval.ms = 60000
 (org.apache.kafka.connect.runtime.standalone.StandaloneConfig:165)
[2016-05-07 12:22:08,185] INFO Logging initialized @1955ms (org.eclipse.jetty.util.log:186)
[2016-05-07 12:22:08,247] INFO Kafka Connect starting (org.apache.kafka.connect.runtime.Connect:53)
[2016-05-07 12:22:08,248] INFO Worker starting (org.apache.kafka.connect.runtime.Worker:89)
[2016-05-07 12:22:08,286] INFO ProducerConfig values:
[2016-05-07 12:22:08,363] INFO Kafka version : 0.9.0.1-cp1 (org.apache.kafka.common.utils.AppInfoParser:82)
[2016-05-07 12:22:08,363] INFO Kafka commitId : 7113452b3e7d5638 (org.apache.kafka.common.utils.AppInfoParser:83)
[2016-05-07 12:22:08,365] INFO Starting FileOffsetBackingStore with file /tmp/connect.offsets (org.apache.kafka.connect.storage.FileOffsetBackingStore:53)
[2016-05-07 12:22:08,367] INFO Worker started (org.apache.kafka.connect.runtime.Worker:111)
[2016-05-07 12:22:08,367] INFO Herder starting (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:57)
[2016-05-07 12:22:08,367] INFO Herder started (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:58)
[2016-05-07 12:22:08,367] INFO Starting REST server (org.apache.kafka.connect.runtime.rest.RestServer:91)
[2016-05-07 12:22:08,658] INFO jetty-9.2.12.v20150709 (org.eclipse.jetty.server.Server:327)
May 07, 2016 12:22:10 PM org.glassfish.jersey.internal.Errors logErrors
WARNING: The following warnings have been detected: WARNING: The (sub)resource method createConnector in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method listConnectors in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.

WARNING: The (sub)resource method serverInfo in org.apache.kafka.connect.runtime.rest.resources.RootResource contains empty path annotation.

[2016-05-07 12:22:10,031] INFO Started o.e.j.s.ServletContextHandler@70ea4a6b{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:744)
[2016-05-07 12:22:10,049] INFO Started ServerConnector@512f2e80{HTTP/1.1}{0.0.0.0:8083} (org.eclipse.jetty.server.ServerConnector:266)
[2016-05-07 12:22:10,050] INFO Started @3822ms (org.eclipse.jetty.server.Server:379)
[2016-05-07 12:22:10,912] INFO REST server listening at http://198.105.244.11:8083/, advertising URL http://198.105.244.11:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:132)
[2016-05-07 12:22:10,912] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:60)
[2016-05-07 12:22:10,919] INFO ConnectorConfig values:

        topics = []
        name = test-mysql-jdbc-autoincrement-source
        tasks.max = 10
        connector.class = class io.confluent.connect.jdbc.JdbcSourceConnector
 (org.apache.kafka.connect.runtime.ConnectorConfig:165)
[2016-05-07 12:22:10,920] INFO Creating connector test-mysql-jdbc-autoincrement-source of type io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:170)
[2016-05-07 12:22:10,922] INFO Instantiated connector test-mysql-jdbc-autoincrement-source with version 2.0.1 of type io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:183)
[2016-05-07 12:22:10,948] INFO JdbcSourceConnectorConfig values:

        table.poll.interval.ms = 60000
        incrementing.column.name = id
        connection.url = jdbc:mysql://localhost:3306/test?user=root&password=hpvertica
        timestamp.column.name =
        query =
        poll.interval.ms = 5000
        topic.prefix = test-mysql-jdbc-
        batch.max.rows = 100
        table.whitelist = []
        mode = incrementing
        table.blacklist = []
 (io.confluent.connect.jdbc.JdbcSourceConnectorConfig:135)
Sat May 07 12:22:11 IST 2016 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
[2016-05-07 12:22:11,338] INFO Finished creating connector test-mysql-jdbc-autoincrement-source (org.apache.kafka.connect.runtime.Worker:193)
[2016-05-07 12:22:11,354] INFO TaskConfig values:

        task.class = class io.confluent.connect.jdbc.JdbcSourceTask
 (org.apache.kafka.connect.runtime.TaskConfig:165)
[2016-05-07 12:22:11,354] INFO Creating task test-mysql-jdbc-autoincrement-source-0 (org.apache.kafka.connect.runtime.Worker:256)
[2016-05-07 12:22:11,355] INFO Instantiated task test-mysql-jdbc-autoincrement-source-0 with version 2.0.1 of type io.confluent.connect.jdbc.JdbcSourceTask (org.apache.kafka.connect.runtime.Worker:267)
[2016-05-07 12:22:11,365] INFO Created connector test-mysql-jdbc-autoincrement-source (org.apache.kafka.connect.cli.ConnectStandalone:82)
[2016-05-07 12:22:11,366] INFO JdbcSourceTaskConfig values:

        tables = [accounts]
        table.poll.interval.ms = 60000
        incrementing.column.name = id
        connection.url = jdbc:mysql://localhost:3306/test?user=root&password=hpvertica
        timestamp.column.name =
        query =
        poll.interval.ms = 5000
        topic.prefix = test-mysql-jdbc-
        batch.max.rows = 100
        table.whitelist = []
        mode = incrementing
        table.blacklist = []
 (io.confluent.connect.jdbc.JdbcSourceTaskConfig:135)
Sat May 07 12:22:11 IST 2016 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
[2016-05-07 12:22:11,464] INFO Source task Thread[WorkerSourceTask-test-mysql-jdbc-autoincrement-source-0,5,main] finished initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:342)
[2016-05-07 12:22:11,599] ERROR Task test-mysql-jdbc-autoincrement-source-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerSourceTask:362)
[2016-05-07 12:22:11,600] ERROR Task is being killed and will not recover until manually restarted: (org.apache.kafka.connect.runtime.WorkerSourceTask:363)
        at org.apache.kafka.connect.util.ShutdownableThread.run(ShutdownableThread.java:82)







kotesh banoth

May 9, 2016, 7:51:56 AM
to confluent...@googlegroups.com
Kindly reply regarding this issue.

Banoth Kotesh
Computer Science and Engineering(2010-14),
NIT Rourkela,


David Tucker

May 9, 2016, 1:18:54 PM
to confluent...@googlegroups.com
Is there any chance that the Zookeeper service is misbehaving (or that the ZK configuration within the other services is incorrect)?

You can confirm that the individual ZK nodes are running with

echo "ruok" | nc <zk_host> 2181

All hosts should return "imok".
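To check several nodes in one pass, a small loop works (zk1, zk2, zk3 are placeholder hostnames; substitute your own):

for h in zk1 zk2 zk3; do printf '%s: ' "$h"; echo "ruok" | nc "$h" 2181; echo; done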

You can also confirm proper formation of the ZK quorum with the FourLetterWordMain class. The "Mode" reported by each ZK node will be either "standalone" (for a one-node cluster) or "leader"/"follower" in a multi-node cluster. I often check ZK status with the command
kafka-run-class -name zookeeper org.apache.zookeeper.client.FourLetterWordMain localhost ${clientPort:-2181} srvr | grep ^Mode

on all of the ZK nodes.

The zookeeper log (logs/zookeeper.out) should show activity as the Kafka service and Schema Registry get started.   

NOTE: I've seen multiple occasions where the zookeeper quorum takes some time to become stable, and launching the other services within that window fails.

— David


Kotesh Banoth

May 10, 2016, 8:19:18 AM
to Confluent Platform
Hi, I have checked that ZooKeeper is running perfectly, but when I run the Schema Registry (./bin/schema-registry-start ./etc/schema-registry/schema-registry.properties) I get the error below in ZooKeeper:


INFO Got user-level KeeperException when processing sessionid:0x1549aa8ffcd0004 type:create cxid:0x3 zxid:0xcd txntype:-1 reqpath:n/a Error Path:/consumers/schema-registry-198.105.244.11-8081/ids Error:KeeperErrorCode = NodeExists for /consumers/schema-registry-198.105.244.11-8081/ids (org.apache.zookeeper.server.PrepRequestProcessor)

Can you please help me

-Kotesh

Kotesh Banoth

May 11, 2016, 4:42:57 AM
to Confluent Platform

Liquan Pei

May 11, 2016, 4:47:40 AM
to confluent...@googlegroups.com
That ZooKeeper log line is normal; it is at the INFO level. It just means the ids path in ZooKeeper had already been created when the Schema Registry tried to create it.
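If you want to see what is already registered there, the Confluent distribution ships Kafka's zookeeper-shell wrapper; a quick inspection might look like this (a sketch, assuming ZooKeeper on localhost:2181):

~/confluent-2.0.1/bin/zookeeper-shell localhost:2181 ls /consumers
~/confluent-2.0.1/bin/zookeeper-shell localhost:2181 ls /brokers/ids

The second command is also a useful sanity check for the "Failed to update metadata" error above: if no broker ids are listed, the Kafka broker has not registered itself in ZooKeeper.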

kotesh banoth

May 11, 2016, 6:27:56 AM
to confluent...@googlegroups.com
Thank you so much, Liquan.

I am trying hard to import data from MySQL to HDFS in real time using Confluent 2.0. It would be a great favour if you could guide me, as I am still in the learning stage.

I think the problem is with the Schema Registry; I have attached the error below.


[hpvertica1@hpvertica3 confluent-2.0.1]$  ./bin/schema-registry-start ./etc/schema-registry/schema-registry.properties

Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hpvertica1/confluent-2.0.1/share/java/confluent-common/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hpvertica1/confluent-2.0.1/share/java/schema-registry/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
[2016-05-11 16:52:49,247] INFO SchemaRegistryConfig values:
    master.eligibility = true
    port = 8081
    kafkastore.timeout.ms = 500
    kafkastore.init.timeout.ms = 60000
    debug = false
    kafkastore.zk.session.timeout.ms = 30000
    schema.registry.zk.namespace = schema_registry
    request.logger.name = io.confluent.rest-utils.requests
    metrics.sample.window.ms = 30000
    kafkastore.topic = _schemas
    avro.compatibility.level = backward
    shutdown.graceful.ms = 1000
    access.control.allow.origin =
    response.mediatype.preferred = [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json]
    metrics.jmx.prefix = kafka.schema.registry
    host.name = 198.105.244.11
    metric.reporters = []
    kafkastore.commit.interval.ms = -1
    kafkastore.connection.url = localhost:2181
    metrics.num.samples = 2
    response.mediatype.default = application/vnd.schemaregistry.v1+json
    kafkastore.topic.replication.factor = 3
 (io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig:135)
[2016-05-11 16:52:50,609] INFO Initialized the consumer offset to -1 (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:86)
[2016-05-11 16:52:52,029] WARN The replication factor of the schema topic _schemas is less than the desired one of 3. If this is a production environment, it's crucial to add more brokers and increase the replication factor of the topic. (io.confluent.kafka.schemaregistry.storage.KafkaStore:205)
[2016-05-11 16:52:52,378] INFO [kafka-store-reader-thread-_schemas], Starting  (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:68)

[2016-05-11 16:53:52,510] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication:57)

io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry

    at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:166)
    at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:55)
    at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:37)
    at io.confluent.rest.Application.createServer(Application.java:109)
    at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:43)
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
    at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:155)
    at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:164)
    ... 4 more
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
    at io.confluent.kafka.schemaregistry.storage.KafkaStore.getLatestOffset(KafkaStore.java:367)
    at io.confluent.kafka.schemaregistry.storage.KafkaStore.waitUntilKafkaReaderReachesLastOffset(KafkaStore.java:224)
    at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:153)
    ... 5 more
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
    at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:686)

    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:449)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:339)
    at io.confluent.kafka.schemaregistry.storage.KafkaStore.getLatestOffset(KafkaStore.java:362)
    ... 7 more
Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms


 Thank you
 Kotesh.







Banoth Kotesh
Computer Science and Engineering(2010-14),
NIT Rourkela,


Bartek B

May 13, 2016, 3:13:31 AM
to Confluent Platform
Hi Kotesh, I had the same problem. Make sure you did the steps from the Confluent Platform quickstart.

Doing steps 1-4 (starting ZooKeeper, etc.) solved the problem for me; see the recap below.
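For reference, a recap of those steps as run earlier in this thread (paths assume the confluent-2.0.1 layout; give each service a few seconds to come up, e.g. verify with jps, before starting the next):

nohup ~/confluent-2.0.1/bin/zookeeper-server-start ~/confluent-2.0.1/etc/kafka/zookeeper.properties &
nohup ~/confluent-2.0.1/bin/kafka-server-start ~/confluent-2.0.1/etc/kafka/server.properties &
nohup ~/confluent-2.0.1/bin/schema-registry-start ~/confluent-2.0.1/etc/schema-registry/schema-registry.properties &
~/confluent-2.0.1/bin/connect-standalone ~/confluent-2.0.1/etc/schema-registry/connect-avro-standalone.properties ~/confluent-2.0.1/etc/kafka-connect-jdbc/quickstart-mysql.properties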
Hope this will help you.