[hpvertica1@hpvertica3 confluent-2.0.1]$ nohup: ignoring input and appending output to 'nohup.out'
true
325267 Jps
325213 QuorumPeerMain
[hpvertica1@hpvertica3 confluent-2.0.1]$ nohup /home/hpvertica1/confluent-2.0.1/bin/kafka-server-start /home/hpvertica1/confluent-2.0.1/etc/kafka/server.properties &
[2] 325309
[hpvertica1@hpvertica3 confluent-2.0.1]$ nohup: ignoring input and appending output to 'nohup.out'
[hpvertica1@hpvertica3 confluent-2.0.1]$
[hpvertica1@hpvertica3 confluent-2.0.1]$ jps
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
325309 SupportedKafka
325357 Jps
325213 QuorumPeerMain
[hpvertica1@hpvertica3 confluent-2.0.1]$ nohup /home/hpvertica1/confluent-2.0.1/bin/schema-registry-start /home/hpvertica1/confluent-2.0.1/etc/schema-registry/schema-registry.properties &
[3] 325441
[hpvertica1@hpvertica3 confluent-2.0.1]$ nohup: ignoring input and appending output to 'nohup.out'
[hpvertica1@hpvertica3 confluent-2.0.1]$ jps
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
325475 Jps
325441 SchemaRegistryMain
325309 SupportedKafka
325213 QuorumPeerMain
[hpvertica1@hpvertica3 confluent-2.0.1]$ bin/connect-standalone etc/schema-registry/connect-avro-standalone.properties etc/kafka-connect-jdbc/quickstart-mysql.properties
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hpvertica1/confluent-2.0.1/share/java/confluent-common/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hpvertica1/confluent-2.0.1/share/java/kafka-serde-tools/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hpvertica1/confluent-2.0.1/share/java/kafka-connect-hdfs/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hpvertica1/confluent-2.0.1/share/java/kafka/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type
[org.slf4j.impl.Log4jLoggerFactory]
[2016-05-05 19:40:56,624] INFO StandaloneConfig values:
	rest.advertised.port = null
	rest.advertised.host.name = null
	bootstrap.servers = [localhost:9092]
	value.converter = class io.confluent.connect.avro.AvroConverter
	task.shutdown.graceful.timeout.ms = 5000
	internal.value.converter = class org.apache.kafka.connect.json.JsonConverter
	rest.host.name = null
	cluster = connect
	internal.key.converter = class org.apache.kafka.connect.json.JsonConverter
	key.converter = class io.confluent.connect.avro.AvroConverter
	offset.flush.timeout.ms = 5000
	rest.port = 8083
	offset.flush.interval.ms = 60000
 (org.apache.kafka.connect.runtime.standalone.StandaloneConfig:165)
[2016-05-05 19:40:57,322] INFO Logging initialized @1948ms (org.eclipse.jetty.util.log:186)
[2016-05-05 19:40:57,380] INFO Kafka Connect starting (org.apache.kafka.connect.runtime.Connect:53)
[2016-05-05 19:40:57,381] INFO Worker starting (org.apache.kafka.connect.runtime.Worker:89)
[2016-05-05 19:40:57,413] INFO ProducerConfig values:
	request.timeout.ms = 2147483647
	retry.backoff.ms = 100
	buffer.memory = 33554432
	ssl.truststore.password = null
	batch.size = 16384
	ssl.keymanager.algorithm = SunX509
	receive.buffer.bytes = 32768
	ssl.cipher.suites = null
	ssl.key.password = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	ssl.provider = null
	sasl.kerberos.service.name = null
	max.in.flight.requests.per.connection = 1
	sasl.kerberos.ticket.renew.window.factor = 0.8
	bootstrap.servers = [localhost:9092]
	client.id =
	max.request.size = 1048576
	acks = all
	linger.ms = 0
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	metadata.fetch.timeout.ms = 60000
	ssl.endpoint.identification.algorithm = null
	ssl.keystore.location = null
	value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
	ssl.truststore.location = null
	ssl.keystore.password = null
	key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
	block.on.buffer.full = false
	metrics.sample.window.ms = 30000
	metadata.max.age.ms = 300000
	security.protocol = PLAINTEXT
	ssl.protocol = TLS
	sasl.kerberos.min.time.before.relogin = 60000
	timeout.ms = 30000
	connections.max.idle.ms = 540000
	ssl.trustmanager.algorithm = PKIX
	metric.reporters = []
	compression.type = none
	ssl.truststore.type = JKS
	max.block.ms = 9223372036854775807
	retries = 2147483647
	send.buffer.bytes = 131072
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	reconnect.backoff.ms = 50
	metrics.num.samples = 2
	ssl.keystore.type = JKS
 (org.apache.kafka.clients.producer.ProducerConfig:165)
[2016-05-05 19:40:57,485] INFO Kafka version : 0.9.0.1-cp1 (org.apache.kafka.common.utils.AppInfoParser:82)
[2016-05-05 19:40:57,485] INFO Kafka commitId : 7113452b3e7d5638 (org.apache.kafka.common.utils.AppInfoParser:83)
[2016-05-05 19:40:57,487] INFO Starting FileOffsetBackingStore with file /tmp/connect.offsets (org.apache.kafka.connect.storage.FileOffsetBackingStore:)
[2016-05-05 19:40:57,489] INFO Worker started (org.apache.kafka.connect.runtime.Worker:111)
[2016-05-05 19:40:57,489] INFO Herder starting (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:57)
[2016-05-05 19:40:57,489] INFO Herder started (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:58)
[2016-05-05 19:40:57,489] INFO Starting REST server (org.apache.kafka.connect.runtime.rest.RestServer:91)
[2016-05-05 19:40:57,765] INFO jetty-9.2.12.v20150709 (org.eclipse.jetty.server.Server:327)
May 05, 2016 7:40:59 PM org.glassfish.jersey.internal.Errors logErrors
WARNING: The following warnings have been detected: WARNING: The (sub)resource method listConnectors in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method createConnector in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method serverInfo in org.apache.kafka.connect.runtime.rest.resources.RootResource contains empty path annotation.
[2016-05-05 19:40:59,091] INFO Started o.e.j.s.ServletContextHandler@17253394{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:744)
[2016-05-05 19:40:59,104] INFO Started ServerConnector@65c42778{HTTP/1.1}{0.0.0.0:8083} (org.eclipse.jetty.server.ServerConnector:266)
[2016-05-05 19:40:59,105] INFO Started @3733ms (org.eclipse.jetty.server.Server:379)
[2016-05-05 19:40:59,986] INFO REST server listening at http://198.105.244.11:8083/, advertising URL http://198.105.244.11:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:132)
[2016-05-05 19:40:59,986] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:60)
[2016-05-05 19:40:59,992] INFO ConnectorConfig values:
	topics = []
	name = test-mysql-jdbc-autoincrement-source
	tasks.max = 10
	connector.class = class io.confluent.connect.jdbc.JdbcSourceConnector
 (org.apache.kafka.connect.runtime.ConnectorConfig:165)
[2016-05-05 19:40:59,993] INFO Creating connector test-mysql-jdbc-autoincrement-source of type io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:170)
[2016-05-05 19:40:59,995] INFO Instantiated connector test-mysql-jdbc-autoincrement-source with version 2.0.1 of type io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:183)
[2016-05-05 19:41:00,004] INFO JdbcSourceConnectorConfig values:
	table.poll.interval.ms = 60000
	incrementing.column.name = id
	connection.url = jdbc:mysql://localhost:3306/test?user=root&password=hpvertica
	timestamp.column.name =
	query =
	poll.interval.ms = 5000
	topic.prefix = test-mysql-jdbc-
	batch.max.rows = 100
	table.whitelist = []
	mode = incrementing
	table.blacklist = []
 (io.confluent.connect.jdbc.JdbcSourceConnectorConfig:135)
Thu May 05 19:41:00 IST 2016 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
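Aside: the Connector/J warning above offers two remedies; the simpler one is appending `useSSL=false` to the JDBC URL. A minimal sketch of what the connector properties file could look like with that change (the keys and values below are taken from the JdbcSourceConnectorConfig dump in this log; the exact filename and any keys not shown in the log are assumptions):

```properties
# Hypothetical JDBC source connector config, mirroring the values logged above.
name=test-mysql-jdbc-autoincrement-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=10
# useSSL=false appended to silence the Connector/J SSL warning (disables SSL).
connection.url=jdbc:mysql://localhost:3306/test?user=root&password=hpvertica&useSSL=false
mode=incrementing
incrementing.column.name=id
topic.prefix=test-mysql-jdbc-
```

Note this warning is unrelated to the Avro serialization failure later in the log; it only concerns the MySQL connection.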
[2016-05-05 19:41:00,373] INFO Finished creating connector test-mysql-jdbc-autoincrement-source (org.apache.kafka.connect.runtime.Worker:193)
[2016-05-05 19:41:00,387] INFO TaskConfig values:
	task.class = class io.confluent.connect.jdbc.JdbcSourceTask
 (org.apache.kafka.connect.runtime.TaskConfig:165)
[2016-05-05 19:41:00,388] INFO Creating task test-mysql-jdbc-autoincrement-source-0 (org.apache.kafka.connect.runtime.Worker:256)
[2016-05-05 19:41:00,389] INFO Instantiated task test-mysql-jdbc-autoincrement-source-0 with version 2.0.1 of type io.confluent.connect.jdbc.JdbcSourceTask (org.apache.kafka.connect.runtime.Worker:267)
[2016-05-05 19:41:00,398] INFO JdbcSourceTaskConfig values:
	tables = [accounts]
	table.poll.interval.ms = 60000
	incrementing.column.name = id
	connection.url = jdbc:mysql://localhost:3306/test?user=root&password=hpvertica
	timestamp.column.name =
	query =
	poll.interval.ms = 5000
	topic.prefix = test-mysql-jdbc-
	batch.max.rows = 100
	table.whitelist = []
	mode = incrementing
	table.blacklist = []
 (io.confluent.connect.jdbc.JdbcSourceTaskConfig:135)
[2016-05-05 19:41:00,400] INFO Created connector test-mysql-jdbc-autoincrement-source (org.apache.kafka.connect.cli.ConnectStandalone:82)
Thu May 05 19:41:00 IST 2016 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
[2016-05-05 19:41:00,477] INFO Source task Thread[WorkerSourceTask-test-mysql-jdbc-autoincrement-source-0,5,main] finished initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:342)
[2016-05-05 19:41:00,610] ERROR Task test-mysql-jdbc-autoincrement-source-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerSourceTask:362)
[2016-05-05 19:41:00,611] ERROR Task is being killed and will not recover until manually restarted: (org.apache.kafka.connect.runtime.WorkerSourceTask:)
org.apache.kafka.connect.errors.DataException: Failed to serialize Avro data:
	at io.confluent.connect.avro.AvroConverter.fromConnectData(AvroConverter.java:92)
	at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:142)
	at org.apache.kafka.connect.runtime.WorkerSourceTask.access$600(WorkerSourceTask.java:50)
	at org.apache.kafka.connect.runtime.WorkerSourceTask$WorkerSourceTaskThread.execute(WorkerSourceTask.java:356)
	at org.apache.kafka.connect.util.ShutdownableThread.run(ShutdownableThread.java:82)
Caused by: org.apache.kafka.common.errors.SerializationException: Error serializing Avro message
Caused by: java.net.ConnectException: Connection refused
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:579)
	at java.net.Socket.connect(Socket.java:528)
	at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
	at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
	at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
	at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
	at sun.net.www.http.HttpClient.New(HttpClient.java:308)
	at sun.net.www.http.HttpClient.New(HttpClient.java:326)
	at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:997)
	at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:933)
	at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:851)
	at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1092)
	at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:139)
	at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:174)
	at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:225)
	at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:217)
	at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:212)
	at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.registerAndGetId(CachedSchemaRegistryClient.java:57)
	at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:89)
	at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:50)
	at io.confluent.connect.avro.AvroConverter$Serializer.serialize(AvroConverter.java:120)
	at io.confluent.connect.avro.AvroConverter.fromConnectData(AvroConverter.java:90)
	at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:142)
	at org.apache.kafka.connect.runtime.WorkerSourceTask.access$600(WorkerSourceTask.java:50)
	at org.apache.kafka.connect.runtime.WorkerSourceTask$WorkerSourceTaskThread.execute(WorkerSourceTask.java:356)
	at org.apache.kafka.connect.util.ShutdownableThread.run(ShutdownableThread.java:82)
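The innermost `Caused by: java.net.ConnectException: Connection refused` sits under the `RestService.registerSchema` frames, i.e. the AvroConverter could not reach the Schema Registry over HTTP even though `jps` shows SchemaRegistryMain running. The registry URL the converter uses comes from `connect-avro-standalone.properties` (commonly `http://localhost:8081`; that host and port are an assumption, not shown in this log). A quick, hypothetical TCP reachability check before starting the connector can be sketched as:

```python
import socket


def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers ConnectionRefusedError, timeouts, and DNS failures.
        return False


if __name__ == "__main__":
    # Assumed Schema Registry address; adjust to match the
    # *.schema.registry.url values in connect-avro-standalone.properties.
    print("schema registry reachable:", can_connect("localhost", 8081))
```

If this reports unreachable, check `nohup.out` for Schema Registry startup errors and confirm the converter's configured URL matches the registry's actual listener.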