Confluent is throwing an error when using the Debezium Postgres source connector.


ch.ravi...@enterpi.com

Dec 17, 2018, 3:41:21 AM
to debezium
Hi all, 
          I am facing an issue with the Debezium PostgreSQL connector and the Confluent Community edition. The initial connection to the database via the Debezium connector works, but as soon as changes are made in the whitelisted database, the connection between Kafka Connect and the PostgreSQL database drops and the database becomes inaccessible; I have to restart the database manually. Below is the output I am getting. The same thing happens with the Docker images. Am I missing any configuration, or should I change some value? I don't know where to start, and I am unable to work out where the problem is. Below is the Debezium connector file.
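When the connection drops, one thing worth checking on the PostgreSQL side is whether the replication slot Debezium created is still present and active, since a stuck or orphaned slot can wedge the server. A diagnostic sketch (the slot name is "debezium" by default, but yours may differ):

```sql
-- List logical replication slots; Debezium creates one per connector
SELECT slot_name, plugin, active, restart_lsn
FROM pg_replication_slots;
```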

POSTGRESDEB-SOURCE.PROPERTIES:
        name = debezium connector
        connector.class = io.debezium.connector.postgresql.PostgresConnector
        tasks.max = 1
        key.converter = io.confluent.connect.avro.AvroConverter
        key.converter.schema.registry.url = http://localhost:8081
        value.converter = io.confluent.connect.avro.AvroConverter
        value.converter.schema.registry.url = http://localhost:8081
        database.tcpKeepAlive=true
        database.hostname = 192.168.1.15
        database.port = 5432
        database.user = debezium
        database.password = enterpi
        database.dbname = recon_profiles
        database.server.name = dbserver1
        topic.prefix = pgsql-
        pk.mode = record_key
        auto.create = true
        auto.evolve = true
        database.sslmode = disable
        snapshot.mode = never
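For reference, the connector also depends on server-side logical decoding being enabled. A minimal sketch of the postgresql.conf settings Debezium typically requires (the numeric values here are illustrative, not prescriptive):

```
# postgresql.conf — logical decoding prerequisites for Debezium
wal_level = logical
max_wal_senders = 4        # must be greater than 0
max_replication_slots = 4  # must be greater than 0
```

A restart of PostgreSQL is needed after changing these, so it may be worth confirming they are already in place before digging further.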
This is the output I am getting:
[2018-12-17 02:53:58,834] INFO Kafka Connect standalone worker initializing ... (org.apache.kafka.connect.cli.ConnectStandalone:67)
[2018-12-17 02:53:58,842] INFO WorkerInfo values:
        jvm.args = -Xms256M, -Xmx2G, -XX:+UseG1GC, -XX:MaxGCPauseMillis=20, -XX:InitiatingHeapOccupancyPercent=35, -XX:+ExplicitGCInvokesConcurrent, -Djava.awt.headless=true, -Dcom.sun.management.jmxremote, -Dcom.sun.management.jmxremote.authenticate=false, -Dcom.sun.management.jmxremote.ssl=false, -Dkafka.logs.dir=/root/confluent-5.0.1/bin/../logs, -Dlog4j.configuration=file:/root/confluent-5.0.1/bin/../etc/kafka/connect-log4j.properties
        jvm.spec = Oracle Corporation, OpenJDK 64-Bit Server VM, 1.8.0_191, 25.191-b12
        jvm.classpath = /root/debezium-debezium-connector-postgresql-0.8.1/lib/debezium-connector-postgres-0.8.1.Final.jar:/root/debezium-debezium-connector-postgresql-0.8.1/lib/debezium-core-0.8.1.Final.jar:/root/debezium-debezium-connector-postgresql-0.8.1/lib/postgresql-42.0.0.jar:/root/debezium-debezium-connector-postgresql-0.8.1/lib/protobuf-java-2.6.1.jar:/root/confluent-5.0.1/share/java/kafka/jackson-jaxrs-json-provider-2.9.7.jar:/root/confluent-5.0.1/share/java/kafka/connect-basic-auth-extension-2.0.1-cp1.jar:/root/confluent-5.0.1/share/java/kafka/kafka_2.11-2.0.1-cp1-test-sources.jar:/root/confluent-5.0.1/share/java/kafka/rocksdbjni-5.7.3.jar:/root/confluent-5.0.1/share/java/kafka/javax.inject-1.jar:/root/confluent-5.0.1/share/java/kafka/jersey-client-2.27.jar:/root/confluent-5.0.1/share/java/kafka/kafka_2.11-2.0.1-cp1-scaladoc.jar:/root/confluent-5.0.1/share/java/kafka/jetty-io-9.4.11.v20180605.jar:/root/confluent-5.0.1/share/java/kafka/connect-file-2.0.1-cp1.jar:/root/confluent-5.0.1/share/java/kafka/jackson-core-2.9.7.jar:/root/confluent-5.0.1/share/java/kafka/jetty-http-9.4.11.v20180605.jar:/root/confluent-5.0.1/share/java/kafka/zookeeper-3.4.13.jar:/root/confluent-5.0.1/share/java/kafka/jetty-client-9.4.11.v20180605.jar:/root/confluent-5.0.1/share/java/kafka/validation-api-1.1.0.Final.jar:/root/confluent-5.0.1/share/java/kafka/commons-logging-1.2.jar:/root/confluent-5.0.1/share/java/kafka/lz4-java-1.4.1.jar:/root/confluent-5.0.1/share/java/kafka/audience-annotations-0.5.0.jar:/root/confluent-5.0.1/share/java/kafka/kafka-streams-examples-2.0.1-cp1.jar:/root/confluent-5.0.1/share/java/kafka/jetty-util-9.4.11.v20180605.jar:/root/confluent-5.0.1/share/java/kafka/guava-20.0.jar:/root/confluent-5.0.1/share/java/kafka/support-metrics-client-5.0.1.jar:/root/confluent-5.0.1/share/java/kafka/kafka_2.11-2.0.1-cp1-sources.jar:/root/confluent-5.0.1/share/java/kafka/javax.servlet-api-3.1.0.jar:/root/confluent-5.0.1/share/java/kafka/commons-validator-1.4.1.jar:/root/
confluent-5.0.1/share/java/kafka/commons-collections-3.2.1.jar:/root/confluent-5.0.1/share/java/kafka/javassist-3.22.0-CR2.jar:/root/confluent-5.0.1/share/java/kafka/httpclient-4.5.2.jar:/root/confluent-5.0.1/share/java/kafka/metrics-core-2.2.0.jar:/root/confluent-5.0.1/share/java/kafka/hk2-utils-2.5.0-b42.jar:/root/confluent-5.0.1/share/java/kafka/avro-1.8.1.jar:/root/confluent-5.0.1/share/java/kafka/hk2-locator-2.5.0-b42.jar:/root/confluent-5.0.1/share/java/kafka/connect-runtime-2.0.1-cp1.jar:/root/confluent-5.0.1/share/java/kafka/kafka-streams-scala_2.11-2.0.1-cp1.jar:/root/confluent-5.0.1/share/java/kafka/jackson-databind-2.9.7.jar:/root/confluent-5.0.1/share/java/kafka/jackson-module-jaxb-annotations-2.9.7.jar:/root/confluent-5.0.1/share/java/kafka/log4j-1.2.17.jar:/root/confluent-5.0.1/share/java/kafka/connect-transforms-2.0.1-cp1.jar:/root/confluent-5.0.1/share/java/kafka/netty-3.10.6.Final.jar:/root/confluent-5.0.1/share/java/kafka/support-metrics-common-5.0.1.jar:/root/confluent-5.0.1/share/java/kafka/kafka-clients-2.0.1-cp1.jar:/root/confluent-5.0.1/share/java/kafka/commons-lang3-3.5.jar:/root/confluent-5.0.1/share/java/kafka/jetty-servlet-9.4.11.v20180605.jar:/root/confluent-5.0.1/share/java/kafka/scala-library-2.11.12.jar:/root/confluent-5.0.1/share/java/kafka/jetty-security-9.4.11.v20180605.jar:/root/confluent-5.0.1/share/java/kafka/jersey-container-servlet-core-2.27.jar:/root/confluent-5.0.1/share/java/kafka/hk2-api-2.5.0-b42.jar:/root/confluent-5.0.1/share/java/kafka/aopalliance-repackaged-2.5.0-b42.jar:/root/confluent-5.0.1/share/java/kafka/jetty-server-9.4.11.v20180605.jar:/root/confluent-5.0.1/share/java/kafka/connect-json-2.0.1-cp1.jar:/root/confluent-5.0.1/share/java/kafka/javax.annotation-api-1.2.jar:/root/confluent-5.0.1/share/java/kafka/jersey-container-servlet-2.27.jar:/root/confluent-5.0.1/share/java/kafka/jersey-common-2.27.jar:/root/confluent-5.0.1/share/java/kafka/jersey-media-jaxb-2.27.jar:/root/confluent-5.0.1/share/java/kafka/plexus-ut
ils-3.1.0.jar:/root/confluent-5.0.1/share/java/kafka/jackson-core-asl-1.9.13.jar:/root/confluent-5.0.1/share/java/kafka/slf4j-api-1.7.25.jar:/root/confluent-5.0.1/share/java/kafka/jersey-hk2-2.27.jar:/root/confluent-5.0.1/share/java/kafka/xz-1.5.jar:/root/confluent-5.0.1/share/java/kafka/jetty-servlets-9.4.11.v20180605.jar:/root/confluent-5.0.1/share/java/kafka/commons-compress-1.8.1.jar:/root/confluent-5.0.1/share/java/kafka/jline-0.9.94.jar:/root/confluent-5.0.1/share/java/kafka/jersey-server-2.27.jar:/root/confluent-5.0.1/share/java/kafka/javax.ws.rs-api-2.1.jar:/root/confluent-5.0.1/share/java/kafka/commons-codec-1.9.jar:/root/confluent-5.0.1/share/java/kafka/activation-1.1.1.jar:/root/confluent-5.0.1/share/java/kafka/commons-lang3-3.1.jar:/root/confluent-5.0.1/share/java/kafka/jaxb-api-2.3.0.jar:/root/confluent-5.0.1/share/java/kafka/httpcore-4.4.4.jar:/root/confluent-5.0.1/share/java/kafka/jopt-simple-5.0.4.jar:/root/confluent-5.0.1/share/java/kafka/slf4j-log4j12-1.7.25.jar:/root/confluent-5.0.1/share/java/kafka/paranamer-2.7.jar:/root/confluent-5.0.1/share/java/kafka/jackson-annotations-2.9.7.jar:/root/confluent-5.0.1/share/java/kafka/reflections-0.9.11.jar:/root/confluent-5.0.1/share/java/kafka/scala-logging_2.11-3.9.0.jar:/root/confluent-5.0.1/share/java/kafka/osgi-resource-locator-1.0.1.jar:/root/confluent-5.0.1/share/java/kafka/kafka_2.11-2.0.1-cp1-javadoc.jar:/root/confluent-5.0.1/share/java/kafka/kafka_2.11-2.0.1-cp1.jar:/root/confluent-5.0.1/share/java/kafka/javax.inject-2.5.0-b42.jar:/root/confluent-5.0.1/share/java/kafka/snappy-java-1.1.7.1.jar:/root/confluent-5.0.1/share/java/kafka/jackson-jaxrs-base-2.9.7.jar:/root/confluent-5.0.1/share/java/kafka/connect-api-2.0.1-cp1.jar:/root/confluent-5.0.1/share/java/kafka/httpmime-4.5.2.jar:/root/confluent-5.0.1/share/java/kafka/kafka_2.11-2.0.1-cp1-test.jar:/root/confluent-5.0.1/share/java/kafka/common-utils-5.0.1.jar:/root/confluent-5.0.1/share/java/kafka/jackson-mapper-asl-1.9.13.jar:/root/confluent-5.0.1/
share/java/kafka/jetty-continuation-9.4.11.v20180605.jar:/root/confluent-5.0.1/share/java/kafka/zkclient-0.10.jar:/root/confluent-5.0.1/share/java/kafka/kafka.jar:/root/confluent-5.0.1/share/java/kafka/maven-artifact-3.5.3.jar:/root/confluent-5.0.1/share/java/kafka/scala-reflect-2.11.12.jar:/root/confluent-5.0.1/share/java/kafka/kafka-streams-2.0.1-cp1.jar:/root/confluent-5.0.1/share/java/kafka/kafka-tools-2.0.1-cp1.jar:/root/confluent-5.0.1/share/java/kafka/commons-beanutils-1.8.3.jar:/root/confluent-5.0.1/share/java/kafka/argparse4j-0.7.0.jar:/root/confluent-5.0.1/share/java/kafka/commons-digester-1.8.1.jar:/root/confluent-5.0.1/share/java/kafka/kafka-log4j-appender-2.0.1-cp1.jar:/root/confluent-5.0.1/share/java/kafka/kafka-streams-test-utils-2.0.1-cp1.jar:/root/confluent-5.0.1/share/java/confluent-common/zookeeper-3.4.13.jar:/root/confluent-5.0.1/share/java/confluent-common/audience-annotations-0.5.0.jar:/root/confluent-5.0.1/share/java/confluent-common/build-tools-5.0.1.jar:/root/confluent-5.0.1/share/java/confluent-common/common-metrics-5.0.1.jar:/root/confluent-5.0.1/share/java/confluent-common/netty-3.10.6.Final.jar:/root/confluent-5.0.1/share/java/confluent-common/slf4j-api-1.7.25.jar:/root/confluent-5.0.1/share/java/confluent-common/jline-0.9.94.jar:/root/confluent-5.0.1/share/java/confluent-common/common-utils-5.0.1.jar:/root/confluent-5.0.1/share/java/confluent-common/common-config-5.0.1.jar:/root/confluent-5.0.1/share/java/confluent-common/zkclient-0.10.jar:/root/confluent-5.0.1/share/java/kafka-serde-tools/kafka-json-serializer-5.0.1.jar:/root/confluent-5.0.1/share/java/kafka-serde-tools/kafka-avro-serializer-5.0.1.jar:/root/confluent-5.0.1/share/java/kafka-serde-tools/jackson-core-2.9.6.jar:/root/confluent-5.0.1/share/java/kafka-serde-tools/avro-1.8.1.jar:/root/confluent-5.0.1/share/java/kafka-serde-tools/kafka-streams-avro-serde-5.0.1.jar:/root/confluent-5.0.1/share/java/kafka-serde-tools/jackson-core-asl-1.9.13.jar:/root/confluent-5.0.1/share/java/ka
fka-serde-tools/xz-1.5.jar:/root/confluent-5.0.1/share/java/kafka-serde-tools/commons-compress-1.8.1.jar:/root/confluent-5.0.1/share/java/kafka-serde-tools/kafka-schema-registry-client-5.0.1.jar:/root/confluent-5.0.1/share/java/kafka-serde-tools/paranamer-2.7.jar:/root/confluent-5.0.1/share/java/kafka-serde-tools/jackson-databind-2.9.6.jar:/root/confluent-5.0.1/share/java/kafka-serde-tools/jackson-annotations-2.9.6.jar:/root/confluent-5.0.1/share/java/kafka-serde-tools/jackson-mapper-asl-1.9.13.jar:/root/confluent-5.0.1/share/java/kafka-serde-tools/snappy-java-1.1.1.3.jar:/root/confluent-5.0.1/share/java/kafka-serde-tools/kafka-connect-avro-converter-5.0.1.jar:/root/confluent-5.0.1/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.9.7.jar:/root/confluent-5.0.1/bin/../share/java/kafka/connect-basic-auth-extension-2.0.1-cp1.jar:/root/confluent-5.0.1/bin/../share/java/kafka/kafka_2.11-2.0.1-cp1-test-sources.jar:/root/confluent-5.0.1/bin/../share/java/kafka/rocksdbjni-5.7.3.jar:/root/confluent-5.0.1/bin/../share/java/kafka/javax.inject-1.jar:/root/confluent-5.0.1/bin/../share/java/kafka/jersey-client-2.27.jar:/root/confluent-5.0.1/bin/../share/java/kafka/kafka_2.11-2.0.1-cp1-scaladoc.jar:/root/confluent-5.0.1/bin/../share/java/kafka/jetty-io-9.4.11.v20180605.jar:/root/confluent-5.0.1/bin/../share/java/kafka/connect-file-2.0.1-cp1.jar:/root/confluent-5.0.1/bin/../share/java/kafka/jackson-core-2.9.7.jar:/root/confluent-5.0.1/bin/../share/java/kafka/jetty-http-9.4.11.v20180605.jar:/root/confluent-5.0.1/bin/../share/java/kafka/zookeeper-3.4.13.jar:/root/confluent-5.0.1/bin/../share/java/kafka/jetty-client-9.4.11.v20180605.jar:/root/confluent-5.0.1/bin/../share/java/kafka/validation-api-1.1.0.Final.jar:/root/confluent-5.0.1/bin/../share/java/kafka/commons-logging-1.2.jar:/root/confluent-5.0.1/bin/../share/java/kafka/lz4-java-1.4.1.jar:/root/confluent-5.0.1/bin/../share/java/kafka/audience-annotations-0.5.0.jar:/root/confluent-5.0.1/bin/../share/java/kafka/kafka-streams-e
xamples-2.0.1-cp1.jar:/root/confluent-5.0.1/bin/../share/java/kafka/jetty-util-9.4.11.v20180605.jar:/root/confluent-5.0.1/bin/../share/java/kafka/guava-20.0.jar:/root/confluent-5.0.1/bin/../share/java/kafka/support-metrics-client-5.0.1.jar:/root/confluent-5.0.1/bin/../share/java/kafka/kafka_2.11-2.0.1-cp1-sources.jar:/root/confluent-5.0.1/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/root/confluent-5.0.1/bin/../share/java/kafka/commons-validator-1.4.1.jar:/root/confluent-5.0.1/bin/../share/java/kafka/commons-collections-3.2.1.jar:/root/confluent-5.0.1/bin/../share/java/kafka/javassist-3.22.0-CR2.jar:/root/confluent-5.0.1/bin/../share/java/kafka/httpclient-4.5.2.jar:/root/confluent-5.0.1/bin/../share/java/kafka/metrics-core-2.2.0.jar:/root/confluent-5.0.1/bin/../share/java/kafka/hk2-utils-2.5.0-b42.jar:/root/confluent-5.0.1/bin/../share/java/kafka/avro-1.8.1.jar:/root/confluent-5.0.1/bin/../share/java/kafka/hk2-locator-2.5.0-b42.jar:/root/confluent-5.0.1/bin/../share/java/kafka/connect-runtime-2.0.1-cp1.jar:/root/confluent-5.0.1/bin/../share/java/kafka/kafka-streams-scala_2.11-2.0.1-cp1.jar:/root/confluent-5.0.1/bin/../share/java/kafka/jackson-databind-2.9.7.jar:/root/confluent-5.0.1/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.9.7.jar:/root/confluent-5.0.1/bin/../share/java/kafka/log4j-1.2.17.jar:/root/confluent-5.0.1/bin/../share/java/kafka/connect-transforms-2.0.1-cp1.jar:/root/confluent-5.0.1/bin/../share/java/kafka/netty-3.10.6.Final.jar:/root/confluent-5.0.1/bin/../share/java/kafka/support-metrics-common-5.0.1.jar:/root/confluent-5.0.1/bin/../share/java/kafka/kafka-clients-2.0.1-cp1.jar:/root/confluent-5.0.1/bin/../share/java/kafka/commons-lang3-3.5.jar:/root/confluent-5.0.1/bin/../share/java/kafka/jetty-servlet-9.4.11.v20180605.jar:/root/confluent-5.0.1/bin/../share/java/kafka/scala-library-2.11.12.jar:/root/confluent-5.0.1/bin/../share/java/kafka/jetty-security-9.4.11.v20180605.jar:/root/confluent-5.0.1/bin/../share/java/kafka/jersey-conta
iner-servlet-core-2.27.jar:/root/confluent-5.0.1/bin/../share/java/kafka/hk2-api-2.5.0-b42.jar:/root/confluent-5.0.1/bin/../share/java/kafka/aopalliance-repackaged-2.5.0-b42.jar:/root/confluent-5.0.1/bin/../share/java/kafka/jetty-server-9.4.11.v20180605.jar:/root/confluent-5.0.1/bin/../share/java/kafka/connect-json-2.0.1-cp1.jar:/root/confluent-5.0.1/bin/../share/java/kafka/javax.annotation-api-1.2.jar:/root/confluent-5.0.1/bin/../share/java/kafka/jersey-container-servlet-2.27.jar:/root/confluent-5.0.1/bin/../share/java/kafka/jersey-common-2.27.jar:/root/confluent-5.0.1/bin/../share/java/kafka/jersey-media-jaxb-2.27.jar:/root/confluent-5.0.1/bin/../share/java/kafka/plexus-utils-3.1.0.jar:/root/confluent-5.0.1/bin/../share/java/kafka/jackson-core-asl-1.9.13.jar:/root/confluent-5.0.1/bin/../share/java/kafka/slf4j-api-1.7.25.jar:/root/confluent-5.0.1/bin/../share/java/kafka/jersey-hk2-2.27.jar:/root/confluent-5.0.1/bin/../share/java/kafka/xz-1.5.jar:/root/confluent-5.0.1/bin/../share/java/kafka/jetty-servlets-9.4.11.v20180605.jar:/root/confluent-5.0.1/bin/../share/java/kafka/commons-compress-1.8.1.jar:/root/confluent-5.0.1/bin/../share/java/kafka/jline-0.9.94.jar:/root/confluent-5.0.1/bin/../share/java/kafka/jersey-server-2.27.jar:/root/confluent-5.0.1/bin/../share/java/kafka/javax.ws.rs-api-2.1.jar:/root/confluent-5.0.1/bin/../share/java/kafka/commons-codec-1.9.jar:/root/confluent-5.0.1/bin/../share/java/kafka/activation-1.1.1.jar:/root/confluent-5.0.1/bin/../share/java/kafka/commons-lang3-3.1.jar:/root/confluent-5.0.1/bin/../share/java/kafka/jaxb-api-2.3.0.jar:/root/confluent-5.0.1/bin/../share/java/kafka/httpcore-4.4.4.jar:/root/confluent-5.0.1/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/root/confluent-5.0.1/bin/../share/java/kafka/slf4j-log4j12-1.7.25.jar:/root/confluent-5.0.1/bin/../share/java/kafka/paranamer-2.7.jar:/root/confluent-5.0.1/bin/../share/java/kafka/jackson-annotations-2.9.7.jar:/root/confluent-5.0.1/bin/../share/java/kafka/reflections-0.9.11.jar:/
root/confluent-5.0.1/bin/../share/java/kafka/scala-logging_2.11-3.9.0.jar:/root/confluent-5.0.1/bin/../share/java/kafka/osgi-resource-locator-1.0.1.jar:/root/confluent-5.0.1/bin/../share/java/kafka/kafka_2.11-2.0.1-cp1-javadoc.jar:/root/confluent-5.0.1/bin/../share/java/kafka/kafka_2.11-2.0.1-cp1.jar:/root/confluent-5.0.1/bin/../share/java/kafka/javax.inject-2.5.0-b42.jar:/root/confluent-5.0.1/bin/../share/java/kafka/snappy-java-1.1.7.1.jar:/root/confluent-5.0.1/bin/../share/java/kafka/jackson-jaxrs-base-2.9.7.jar:/root/confluent-5.0.1/bin/../share/java/kafka/connect-api-2.0.1-cp1.jar:/root/confluent-5.0.1/bin/../share/java/kafka/httpmime-4.5.2.jar:/root/confluent-5.0.1/bin/../share/java/kafka/kafka_2.11-2.0.1-cp1-test.jar:/root/confluent-5.0.1/bin/../share/java/kafka/common-utils-5.0.1.jar:/root/confluent-5.0.1/bin/../share/java/kafka/jackson-mapper-asl-1.9.13.jar:/root/confluent-5.0.1/bin/../share/java/kafka/jetty-continuation-9.4.11.v20180605.jar:/root/confluent-5.0.1/bin/../share/java/kafka/zkclient-0.10.jar:/root/confluent-5.0.1/bin/../share/java/kafka/kafka.jar:/root/confluent-5.0.1/bin/../share/java/kafka/maven-artifact-3.5.3.jar:/root/confluent-5.0.1/bin/../share/java/kafka/scala-reflect-2.11.12.jar:/root/confluent-5.0.1/bin/../share/java/kafka/kafka-streams-2.0.1-cp1.jar:/root/confluent-5.0.1/bin/../share/java/kafka/kafka-tools-2.0.1-cp1.jar:/root/confluent-5.0.1/bin/../share/java/kafka/commons-beanutils-1.8.3.jar:/root/confluent-5.0.1/bin/../share/java/kafka/argparse4j-0.7.0.jar:/root/confluent-5.0.1/bin/../share/java/kafka/commons-digester-1.8.1.jar:/root/confluent-5.0.1/bin/../share/java/kafka/kafka-log4j-appender-2.0.1-cp1.jar:/root/confluent-5.0.1/bin/../share/java/kafka/kafka-streams-test-utils-2.0.1-cp1.jar:/root/confluent-5.0.1/bin/../share/java/confluent-support-metrics/*:/usr/share/java/confluent-support-metrics/*
        os.spec = Linux, amd64, 3.10.0-957.1.3.el7.x86_64
        os.vcpus = 6
 (org.apache.kafka.connect.runtime.WorkerInfo:71)
[2018-12-17 02:53:58,842] INFO Scanning for plugin classes. This might take a moment ... (org.apache.kafka.connect.cli.ConnectStandalone:76)
[2018-12-17 02:54:00,615] INFO Registered loader: sun.misc.Launcher$AppClassLoader@764c12b6 (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:245)
[2018-12-17 02:54:00,615] INFO Added plugin 'io.debezium.connector.postgresql.PostgresConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,615] INFO Added plugin 'org.apache.kafka.connect.tools.MockSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,615] INFO Added plugin 'org.apache.kafka.connect.file.FileStreamSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,616] INFO Added plugin 'org.apache.kafka.connect.file.FileStreamSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,616] INFO Added plugin 'org.apache.kafka.connect.tools.MockConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,616] INFO Added plugin 'org.apache.kafka.connect.tools.SchemaSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,616] INFO Added plugin 'org.apache.kafka.connect.tools.MockSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,616] INFO Added plugin 'org.apache.kafka.connect.tools.VerifiableSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,616] INFO Added plugin 'org.apache.kafka.connect.tools.VerifiableSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,616] INFO Added plugin 'org.apache.kafka.connect.converters.DoubleConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,616] INFO Added plugin 'org.apache.kafka.connect.converters.ShortConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,616] INFO Added plugin 'org.apache.kafka.connect.converters.ByteArrayConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,617] INFO Added plugin 'org.apache.kafka.connect.json.JsonConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,617] INFO Added plugin 'org.apache.kafka.connect.converters.FloatConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,617] INFO Added plugin 'org.apache.kafka.connect.converters.IntegerConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,617] INFO Added plugin 'io.confluent.connect.avro.AvroConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,617] INFO Added plugin 'org.apache.kafka.connect.converters.LongConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,617] INFO Added plugin 'org.apache.kafka.connect.storage.StringConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,617] INFO Added plugin 'org.apache.kafka.connect.storage.SimpleHeaderConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,617] INFO Added plugin 'org.apache.kafka.connect.transforms.Flatten$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,617] INFO Added plugin 'org.apache.kafka.connect.transforms.ReplaceField$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,618] INFO Added plugin 'org.apache.kafka.connect.transforms.InsertField$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,618] INFO Added plugin 'org.apache.kafka.connect.transforms.MaskField$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,618] INFO Added plugin 'org.apache.kafka.connect.transforms.RegexRouter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,618] INFO Added plugin 'io.debezium.transforms.ByLogicalTableRouter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,618] INFO Added plugin 'org.apache.kafka.connect.transforms.SetSchemaMetadata$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,618] INFO Added plugin 'io.debezium.transforms.UnwrapFromEnvelope' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,618] INFO Added plugin 'org.apache.kafka.connect.transforms.HoistField$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,618] INFO Added plugin 'org.apache.kafka.connect.transforms.TimestampConverter$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,618] INFO Added plugin 'org.apache.kafka.connect.transforms.MaskField$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,618] INFO Added plugin 'org.apache.kafka.connect.transforms.TimestampConverter$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,618] INFO Added plugin 'org.apache.kafka.connect.transforms.ReplaceField$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,618] INFO Added plugin 'org.apache.kafka.connect.transforms.HoistField$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,618] INFO Added plugin 'org.apache.kafka.connect.transforms.ValueToKey' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,618] INFO Added plugin 'org.apache.kafka.connect.transforms.SetSchemaMetadata$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,618] INFO Added plugin 'org.apache.kafka.connect.transforms.Flatten$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,618] INFO Added plugin 'org.apache.kafka.connect.transforms.Cast$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,619] INFO Added plugin 'org.apache.kafka.connect.transforms.InsertField$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,619] INFO Added plugin 'org.apache.kafka.connect.transforms.ExtractField$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,619] INFO Added plugin 'org.apache.kafka.connect.transforms.TimestampRouter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,619] INFO Added plugin 'org.apache.kafka.connect.transforms.ExtractField$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,619] INFO Added plugin 'org.apache.kafka.connect.transforms.Cast$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,619] INFO Added plugin 'org.apache.kafka.common.config.provider.FileConfigProvider' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,619] INFO Added plugin 'org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:174)
[2018-12-17 02:54:00,623] INFO Added aliases 'PostgresConnector' and 'Postgres' to plugin 'io.debezium.connector.postgresql.PostgresConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:396)
[2018-12-17 02:54:00,623] INFO Added aliases 'FileStreamSinkConnector' and 'FileStreamSink' to plugin 'org.apache.kafka.connect.file.FileStreamSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:396)
[2018-12-17 02:54:00,623] INFO Added aliases 'FileStreamSourceConnector' and 'FileStreamSource' to plugin 'org.apache.kafka.connect.file.FileStreamSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:396)
[2018-12-17 02:54:00,623] INFO Added aliases 'MockConnector' and 'Mock' to plugin 'org.apache.kafka.connect.tools.MockConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:396)
[2018-12-17 02:54:00,623] INFO Added aliases 'MockSinkConnector' and 'MockSink' to plugin 'org.apache.kafka.connect.tools.MockSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:396)
[2018-12-17 02:54:00,623] INFO Added aliases 'MockSourceConnector' and 'MockSource' to plugin 'org.apache.kafka.connect.tools.MockSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:396)
[2018-12-17 02:54:00,623] INFO Added aliases 'SchemaSourceConnector' and 'SchemaSource' to plugin 'org.apache.kafka.connect.tools.SchemaSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:396)
[2018-12-17 02:54:00,623] INFO Added aliases 'VerifiableSinkConnector' and 'VerifiableSink' to plugin 'org.apache.kafka.connect.tools.VerifiableSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:396)
[2018-12-17 02:54:00,624] INFO Added aliases 'VerifiableSourceConnector' and 'VerifiableSource' to plugin 'org.apache.kafka.connect.tools.VerifiableSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:396)
[2018-12-17 02:54:00,624] INFO Added aliases 'AvroConverter' and 'Avro' to plugin 'io.confluent.connect.avro.AvroConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:396)
[2018-12-17 02:54:00,624] INFO Added aliases 'ByteArrayConverter' and 'ByteArray' to plugin 'org.apache.kafka.connect.converters.ByteArrayConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:396)
[2018-12-17 02:54:00,624] INFO Added aliases 'DoubleConverter' and 'Double' to plugin 'org.apache.kafka.connect.converters.DoubleConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:396)
[2018-12-17 02:54:00,624] INFO Added aliases 'FloatConverter' and 'Float' to plugin 'org.apache.kafka.connect.converters.FloatConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:396)
[2018-12-17 02:54:00,624] INFO Added aliases 'IntegerConverter' and 'Integer' to plugin 'org.apache.kafka.connect.converters.IntegerConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:396)
[2018-12-17 02:54:00,624] INFO Added aliases 'LongConverter' and 'Long' to plugin 'org.apache.kafka.connect.converters.LongConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:396)
[2018-12-17 02:54:00,624] INFO Added aliases 'ShortConverter' and 'Short' to plugin 'org.apache.kafka.connect.converters.ShortConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:396)
[2018-12-17 02:54:00,624] INFO Added aliases 'JsonConverter' and 'Json' to plugin 'org.apache.kafka.connect.json.JsonConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:396)
[2018-12-17 02:54:00,624] INFO Added aliases 'StringConverter' and 'String' to plugin 'org.apache.kafka.connect.storage.StringConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:396)
[2018-12-17 02:54:00,624] INFO Added aliases 'ByteArrayConverter' and 'ByteArray' to plugin 'org.apache.kafka.connect.converters.ByteArrayConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:396)
[2018-12-17 02:54:00,625] INFO Added aliases 'DoubleConverter' and 'Double' to plugin 'org.apache.kafka.connect.converters.DoubleConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:396)
[2018-12-17 02:54:00,625] INFO Added aliases 'FloatConverter' and 'Float' to plugin 'org.apache.kafka.connect.converters.FloatConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:396)
[2018-12-17 02:54:00,625] INFO Added aliases 'IntegerConverter' and 'Integer' to plugin 'org.apache.kafka.connect.converters.IntegerConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:396)
[2018-12-17 02:54:00,625] INFO Added aliases 'LongConverter' and 'Long' to plugin 'org.apache.kafka.connect.converters.LongConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:396)
[2018-12-17 02:54:00,625] INFO Added aliases 'ShortConverter' and 'Short' to plugin 'org.apache.kafka.connect.converters.ShortConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:396)
[2018-12-17 02:54:00,625] INFO Added aliases 'JsonConverter' and 'Json' to plugin 'org.apache.kafka.connect.json.JsonConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:396)
[2018-12-17 02:54:00,625] INFO Added alias 'SimpleHeaderConverter' to plugin 'org.apache.kafka.connect.storage.SimpleHeaderConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:393)
[2018-12-17 02:54:00,625] INFO Added aliases 'StringConverter' and 'String' to plugin 'org.apache.kafka.connect.storage.StringConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:396)
[2018-12-17 02:54:00,625] INFO Added alias 'ByLogicalTableRouter' to plugin 'io.debezium.transforms.ByLogicalTableRouter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:393)
[2018-12-17 02:54:00,625] INFO Added alias 'UnwrapFromEnvelope' to plugin 'io.debezium.transforms.UnwrapFromEnvelope' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:393)
[2018-12-17 02:54:00,626] INFO Added alias 'RegexRouter' to plugin 'org.apache.kafka.connect.transforms.RegexRouter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:393)
[2018-12-17 02:54:00,626] INFO Added alias 'TimestampRouter' to plugin 'org.apache.kafka.connect.transforms.TimestampRouter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:393)
[2018-12-17 02:54:00,626] INFO Added alias 'ValueToKey' to plugin 'org.apache.kafka.connect.transforms.ValueToKey' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:393)
[2018-12-17 02:54:00,626] INFO Added alias 'BasicAuthSecurityRestExtension' to plugin 'org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:393)
[2018-12-17 02:54:00,636] INFO StandaloneConfig values:
        access.control.allow.methods =
        access.control.allow.origin =
        bootstrap.servers = [192.168.1.15:9092]
        config.providers = []
        header.converter = class org.apache.kafka.connect.storage.SimpleHeaderConverter
        internal.key.converter = class org.apache.kafka.connect.json.JsonConverter
        internal.value.converter = class org.apache.kafka.connect.json.JsonConverter
        key.converter = class org.apache.kafka.connect.json.JsonConverter
        listeners = null
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        offset.flush.interval.ms = 10000
        offset.flush.timeout.ms = 5000
        offset.storage.file.filename = /tmp/connect.offsets
        plugin.path = [/user/share/java/]
        rest.advertised.host.name = null
        rest.advertised.listener = null
        rest.advertised.port = null
        rest.extension.classes = []
        rest.host.name = null
        rest.port = 8083
        ssl.client.auth = none
        value.converter = class org.apache.kafka.connect.json.JsonConverter
 (org.apache.kafka.connect.runtime.standalone.StandaloneConfig:279)
[2018-12-17 02:54:00,637] INFO Creating Kafka admin client (org.apache.kafka.connect.util.ConnectUtils:43)
[2018-12-17 02:54:00,640] INFO AdminClientConfig values:
        bootstrap.servers = [192.168.1.15:9092]
        client.id =
        connections.max.idle.ms = 300000
        metadata.max.age.ms = 300000
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        receive.buffer.bytes = 65536
        reconnect.backoff.max.ms = 1000
        reconnect.backoff.ms = 50
        request.timeout.ms = 120000
        retries = 5
        retry.backoff.ms = 100
        sasl.client.callback.handler.class = null
        sasl.jaas.config = null
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.min.time.before.relogin = 60000
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.ticket.renew.window.factor = 0.8
        sasl.login.callback.handler.class = null
        sasl.login.class = null
        sasl.login.refresh.buffer.seconds = 300
        sasl.login.refresh.min.period.seconds = 60
        sasl.login.refresh.window.factor = 0.8
        sasl.login.refresh.window.jitter = 0.05
        sasl.mechanism = GSSAPI
        security.protocol = PLAINTEXT
        send.buffer.bytes = 131072
        ssl.cipher.suites = null
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        ssl.endpoint.identification.algorithm = https
        ssl.key.password = null
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.location = null
        ssl.keystore.password = null
        ssl.keystore.type = JKS
        ssl.protocol = TLS
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.location = null
        ssl.truststore.password = null
        ssl.truststore.type = JKS
 (org.apache.kafka.clients.admin.AdminClientConfig:279)
[2018-12-17 02:54:00,664] WARN The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2018-12-17 02:54:00,664] WARN The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2018-12-17 02:54:00,664] WARN The configuration 'offset.storage.file.filename' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2018-12-17 02:54:00,664] WARN The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2018-12-17 02:54:00,665] WARN The configuration 'plugin.path' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2018-12-17 02:54:00,665] WARN The configuration 'value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2018-12-17 02:54:00,665] WARN The configuration 'key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2018-12-17 02:54:00,665] INFO Kafka version : 2.0.1-cp1 (org.apache.kafka.common.utils.AppInfoParser:109)
[2018-12-17 02:54:00,665] INFO Kafka commitId : 3d167ab3fdad2e73 (org.apache.kafka.common.utils.AppInfoParser:110)
[2018-12-17 02:54:00,732] INFO Kafka cluster ID: kBKaggfRRKazPcH3hcCq2g (org.apache.kafka.connect.util.ConnectUtils:59)
[2018-12-17 02:54:00,743] INFO Logging initialized @2124ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log:193)
[2018-12-17 02:54:00,801] INFO Added connector for http://:8083 (org.apache.kafka.connect.runtime.rest.RestServer:119)
[2018-12-17 02:54:00,816] INFO Advertised URI: http://127.0.0.1:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:267)
[2018-12-17 02:54:00,821] INFO Kafka version : 2.0.1-cp1 (org.apache.kafka.common.utils.AppInfoParser:109)
[2018-12-17 02:54:00,822] INFO Kafka commitId : 3d167ab3fdad2e73 (org.apache.kafka.common.utils.AppInfoParser:110)
[2018-12-17 02:54:00,899] INFO JsonConverterConfig values:
        converter.type = key
        schemas.cache.size = 1000
        schemas.enable = false
 (org.apache.kafka.connect.json.JsonConverterConfig:279)
[2018-12-17 02:54:00,900] INFO JsonConverterConfig values:
        converter.type = value
        schemas.cache.size = 1000
        schemas.enable = false
 (org.apache.kafka.connect.json.JsonConverterConfig:279)
[2018-12-17 02:54:00,904] INFO Kafka Connect standalone worker initialization took 2069ms (org.apache.kafka.connect.cli.ConnectStandalone:92)
[2018-12-17 02:54:00,904] INFO Kafka Connect starting (org.apache.kafka.connect.runtime.Connect:49)
[2018-12-17 02:54:00,904] INFO Herder starting (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:88)
[2018-12-17 02:54:00,904] INFO Worker starting (org.apache.kafka.connect.runtime.Worker:172)
[2018-12-17 02:54:00,905] INFO Starting FileOffsetBackingStore with file /tmp/connect.offsets (org.apache.kafka.connect.storage.FileOffsetBackingStore:59)
[2018-12-17 02:54:00,908] INFO Worker started (org.apache.kafka.connect.runtime.Worker:177)
[2018-12-17 02:54:00,908] INFO Herder started (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:90)
[2018-12-17 02:54:00,908] INFO Starting REST server (org.apache.kafka.connect.runtime.rest.RestServer:163)
[2018-12-17 02:54:00,964] INFO jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 1.8.0_191-b12 (org.eclipse.jetty.server.Server:374)
[2018-12-17 02:54:00,985] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session:365)
[2018-12-17 02:54:00,985] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session:370)
[2018-12-17 02:54:00,986] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session:149)
Dec 17, 2018 2:54:01 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.RootResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.RootResource will be ignored.
Dec 17, 2018 2:54:01 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource will be ignored.
Dec 17, 2018 2:54:01 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource will be ignored.
Dec 17, 2018 2:54:01 AM org.glassfish.jersey.internal.Errors logErrors
WARNING: The following warnings have been detected: WARNING: The (sub)resource method listConnectors in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method createConnector in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method listConnectorPlugins in org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource contains empty path annotation.
WARNING: The (sub)resource method serverInfo in org.apache.kafka.connect.runtime.rest.resources.RootResource contains empty path annotation.

[2018-12-17 02:54:01,303] INFO Started o.e.j.s.ServletContextHandler@7d5508e0{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:851)
[2018-12-17 02:54:01,308] INFO Started http_8083@7eb6b6b6{HTTP/1.1,[http/1.1]}{0.0.0.0:8083} (org.eclipse.jetty.server.AbstractConnector:289)
[2018-12-17 02:54:01,308] INFO Started @2689ms (org.eclipse.jetty.server.Server:411)
[2018-12-17 02:54:01,308] INFO Advertised URI: http://127.0.0.1:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:267)
[2018-12-17 02:54:01,308] INFO REST server listening at http://127.0.0.1:8083/, advertising URL http://127.0.0.1:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:217)
[2018-12-17 02:54:01,308] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:55)
[2018-12-17 02:54:01,328] WARN The connection password is empty (io.debezium.connector.postgresql.PostgresConnector:89)
[2018-12-17 02:54:01,438] INFO Successfully tested connection for jdbc:postgresql://192.168.1.15:5432/recon_profiles with user 'debezium' (io.debezium.connector.postgresql.PostgresConnector:102)
[2018-12-17 02:54:01,441] INFO ConnectorConfig values:
        config.action.reload = RESTART
        connector.class = io.debezium.connector.postgresql.PostgresConnector
        errors.log.enable = false
        errors.log.include.messages = false
        errors.retry.delay.max.ms = 60000
        errors.retry.timeout = 0
        errors.tolerance = none
        header.converter = null
        key.converter = class io.confluent.connect.avro.AvroConverter
        name = debezium connector
        tasks.max = 1
        transforms = []
        value.converter = class io.confluent.connect.avro.AvroConverter
 (org.apache.kafka.connect.runtime.ConnectorConfig:279)
[2018-12-17 02:54:01,442] INFO EnrichedConnectorConfig values:
        config.action.reload = RESTART
        connector.class = io.debezium.connector.postgresql.PostgresConnector
        errors.log.enable = false
        errors.log.include.messages = false
        errors.retry.delay.max.ms = 60000
        errors.retry.timeout = 0
        errors.tolerance = none
        header.converter = null
        key.converter = class io.confluent.connect.avro.AvroConverter
        name = debezium connector
        tasks.max = 1
        transforms = []
        value.converter = class io.confluent.connect.avro.AvroConverter
 (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:279)
[2018-12-17 02:54:01,442] INFO Creating connector debezium connector of type io.debezium.connector.postgresql.PostgresConnector (org.apache.kafka.connect.runtime.Worker:235)
[2018-12-17 02:54:01,444] INFO Instantiated connector debezium connector with version 0.8.1.Final of type class io.debezium.connector.postgresql.PostgresConnector (org.apache.kafka.connect.runtime.Worker:238)
[2018-12-17 02:54:01,445] INFO Finished creating connector debezium connector (org.apache.kafka.connect.runtime.Worker:257)
[2018-12-17 02:54:01,445] INFO SourceConnectorConfig values:
        config.action.reload = RESTART
        connector.class = io.debezium.connector.postgresql.PostgresConnector
        errors.log.enable = false
        errors.log.include.messages = false
        errors.retry.delay.max.ms = 60000
        errors.retry.timeout = 0
        errors.tolerance = none
        header.converter = null
        key.converter = class io.confluent.connect.avro.AvroConverter
        name = debezium connector
        tasks.max = 1
        transforms = []
        value.converter = class io.confluent.connect.avro.AvroConverter
 (org.apache.kafka.connect.runtime.SourceConnectorConfig:279)
[2018-12-17 02:54:01,445] INFO EnrichedConnectorConfig values:
        config.action.reload = RESTART
        connector.class = io.debezium.connector.postgresql.PostgresConnector
        errors.log.enable = false
        errors.log.include.messages = false
        errors.retry.delay.max.ms = 60000
        errors.retry.timeout = 0
        errors.tolerance = none
        header.converter = null
        key.converter = class io.confluent.connect.avro.AvroConverter
        name = debezium connector
        tasks.max = 1
        transforms = []
        value.converter = class io.confluent.connect.avro.AvroConverter
 (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:279)
[2018-12-17 02:54:01,447] INFO Creating task debezium connector-0 (org.apache.kafka.connect.runtime.Worker:396)
[2018-12-17 02:54:01,447] INFO ConnectorConfig values:
        config.action.reload = RESTART
        connector.class = io.debezium.connector.postgresql.PostgresConnector
        errors.log.enable = false
        errors.log.include.messages = false
        errors.retry.delay.max.ms = 60000
        errors.retry.timeout = 0
        errors.tolerance = none
        header.converter = null
        key.converter = class io.confluent.connect.avro.AvroConverter
        name = debezium connector
        tasks.max = 1
        transforms = []
        value.converter = class io.confluent.connect.avro.AvroConverter
 (org.apache.kafka.connect.runtime.ConnectorConfig:279)
[2018-12-17 02:54:01,447] INFO EnrichedConnectorConfig values:
        config.action.reload = RESTART
        connector.class = io.debezium.connector.postgresql.PostgresConnector
        errors.log.enable = false
        errors.log.include.messages = false
        errors.retry.delay.max.ms = 60000
        errors.retry.timeout = 0
        errors.tolerance = none
        header.converter = null
        key.converter = class io.confluent.connect.avro.AvroConverter
        name = debezium connector
        tasks.max = 1
        transforms = []
        value.converter = class io.confluent.connect.avro.AvroConverter
 (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:279)
[2018-12-17 02:54:01,449] INFO TaskConfig values:
        task.class = class io.debezium.connector.postgresql.PostgresConnectorTask
 (org.apache.kafka.connect.runtime.TaskConfig:279)
[2018-12-17 02:54:01,449] INFO Instantiated task debezium connector-0 with version 0.8.1.Final of type io.debezium.connector.postgresql.PostgresConnectorTask (org.apache.kafka.connect.runtime.Worker:411)
[2018-12-17 02:54:01,453] INFO AvroConverterConfig values:
        schema.registry.url = [http://localhost:8081]
        basic.auth.user.info = [hidden]
        auto.register.schemas = true
        max.schemas.per.subject = 1000
        basic.auth.credentials.source = URL
        schema.registry.basic.auth.user.info = [hidden]
        value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
        key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
 (io.confluent.connect.avro.AvroConverterConfig:179)
[2018-12-17 02:54:01,460] INFO KafkaAvroSerializerConfig values:
        schema.registry.url = [http://localhost:8081]
        basic.auth.user.info = [hidden]
        auto.register.schemas = true
        max.schemas.per.subject = 1000
        basic.auth.credentials.source = URL
        schema.registry.basic.auth.user.info = [hidden]
        value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
        key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
 (io.confluent.kafka.serializers.KafkaAvroSerializerConfig:179)
[2018-12-17 02:54:01,464] INFO KafkaAvroDeserializerConfig values:
        schema.registry.url = [http://localhost:8081]
        basic.auth.user.info = [hidden]
        auto.register.schemas = true
        max.schemas.per.subject = 1000
        basic.auth.credentials.source = URL
        schema.registry.basic.auth.user.info = [hidden]
        specific.avro.reader = false
        value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
        key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
 (io.confluent.kafka.serializers.KafkaAvroDeserializerConfig:179)
[2018-12-17 02:54:01,583] INFO AvroDataConfig values:
        schemas.cache.config = 1000
        enhanced.avro.schema.support = false
        connect.meta.data = true
 (io.confluent.connect.avro.AvroDataConfig:179)
[2018-12-17 02:54:01,584] INFO AvroConverterConfig values:
        schema.registry.url = [http://localhost:8081]
        basic.auth.user.info = [hidden]
        auto.register.schemas = true
        max.schemas.per.subject = 1000
        basic.auth.credentials.source = URL
        schema.registry.basic.auth.user.info = [hidden]
        value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
        key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
 (io.confluent.connect.avro.AvroConverterConfig:179)
[2018-12-17 02:54:01,584] INFO KafkaAvroSerializerConfig values:
        schema.registry.url = [http://localhost:8081]
        basic.auth.user.info = [hidden]
        auto.register.schemas = true
        max.schemas.per.subject = 1000
        basic.auth.credentials.source = URL
        schema.registry.basic.auth.user.info = [hidden]
        value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
        key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
 (io.confluent.kafka.serializers.KafkaAvroSerializerConfig:179)
[2018-12-17 02:54:01,584] INFO KafkaAvroDeserializerConfig values:
        schema.registry.url = [http://localhost:8081]
        basic.auth.user.info = [hidden]
        auto.register.schemas = true
        max.schemas.per.subject = 1000
        basic.auth.credentials.source = URL
        schema.registry.basic.auth.user.info = [hidden]
        specific.avro.reader = false
        value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
        key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
 (io.confluent.kafka.serializers.KafkaAvroDeserializerConfig:179)
[2018-12-17 02:54:01,584] INFO AvroDataConfig values:
        schemas.cache.config = 1000
        enhanced.avro.schema.support = false
        connect.meta.data = true
 (io.confluent.connect.avro.AvroDataConfig:179)
[2018-12-17 02:54:01,584] INFO Set up the key converter class io.confluent.connect.avro.AvroConverter for task debezium connector-0 using the connector config (org.apache.kafka.connect.runtime.Worker:436)
[2018-12-17 02:54:01,584] INFO Set up the value converter class io.confluent.connect.avro.AvroConverter for task debezium connector-0 using the connector config (org.apache.kafka.connect.runtime.Worker:442)
[2018-12-17 02:54:01,585] INFO Set up the header converter class org.apache.kafka.connect.storage.SimpleHeaderConverter for task debezium connector-0 using the worker config (org.apache.kafka.connect.runtime.Worker:446)
[2018-12-17 02:54:01,592] INFO ProducerConfig values:
        acks = all
        batch.size = 16384
        bootstrap.servers = [192.168.1.15:9092]
        buffer.memory = 33554432
        client.id =
        compression.type = none
        confluent.batch.expiry.ms = 30000
        connections.max.idle.ms = 540000
        enable.idempotence = false
        interceptor.classes = []
        key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
        linger.ms = 0
        max.block.ms = 9223372036854775807
        max.in.flight.requests.per.connection = 1
        max.request.size = 1048576
        metadata.max.age.ms = 300000
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
        receive.buffer.bytes = 32768
        reconnect.backoff.max.ms = 1000
        reconnect.backoff.ms = 50
        retries = 2147483647
        retry.backoff.ms = 100
        sasl.client.callback.handler.class = null
        sasl.jaas.config = null
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.min.time.before.relogin = 60000
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.ticket.renew.window.factor = 0.8
        sasl.login.callback.handler.class = null
        sasl.login.class = null
        sasl.login.refresh.buffer.seconds = 300
        sasl.login.refresh.min.period.seconds = 60
        sasl.login.refresh.window.factor = 0.8
        sasl.login.refresh.window.jitter = 0.05
        sasl.mechanism = GSSAPI
        security.protocol = PLAINTEXT
        send.buffer.bytes = 131072
        ssl.cipher.suites = null
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        ssl.endpoint.identification.algorithm = https
        ssl.key.password = null
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.location = null
        ssl.keystore.password = null
        ssl.keystore.type = JKS
        ssl.protocol = TLS
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.location = null
        ssl.truststore.password = null
        ssl.truststore.type = JKS
        transaction.timeout.ms = 60000
        transactional.id = null
        value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
 (org.apache.kafka.clients.producer.ProducerConfig:279)
[2018-12-17 02:54:01,605] INFO Kafka version : 2.0.1-cp1 (org.apache.kafka.common.utils.AppInfoParser:109)
[2018-12-17 02:54:01,605] INFO Kafka commitId : 3d167ab3fdad2e73 (org.apache.kafka.common.utils.AppInfoParser:110)
[2018-12-17 02:54:01,610] INFO Created connector debezium connector (org.apache.kafka.connect.cli.ConnectStandalone:104)
[2018-12-17 02:54:01,611] INFO Starting PostgresConnectorTask with configuration: (io.debezium.connector.common.BaseSourceTask:40)
[2018-12-17 02:54:01,612] INFO    connector.class = io.debezium.connector.postgresql.PostgresConnector (io.debezium.connector.common.BaseSourceTask:42)
[2018-12-17 02:54:01,612] INFO    database.user = debezium (io.debezium.connector.common.BaseSourceTask:42)
[2018-12-17 02:54:01,612] INFO    database.dbname = recon_profiles (io.debezium.connector.common.BaseSourceTask:42)
[2018-12-17 02:54:01,612] INFO    tasks.max = 1 (io.debezium.connector.common.BaseSourceTask:42)
[2018-12-17 02:54:01,612] INFO    database.server.name = dbserver1 (io.debezium.connector.common.BaseSourceTask:42)
[2018-12-17 02:54:01,612] INFO    database.port = 5432 (io.debezium.connector.common.BaseSourceTask:42)
[2018-12-17 02:54:01,612] INFO    value.converter.schema.registry.url = http://localhost:8081 (io.debezium.connector.common.BaseSourceTask:42)
[2018-12-17 02:54:01,612] INFO    topic.prefix = pgsql- (io.debezium.connector.common.BaseSourceTask:42)
[2018-12-17 02:54:01,612] INFO    database.sslmode = disable (io.debezium.connector.common.BaseSourceTask:42)
[2018-12-17 02:54:01,612] INFO    auto.evolve = true (io.debezium.connector.common.BaseSourceTask:42)
[2018-12-17 02:54:01,612] INFO    task.class = io.debezium.connector.postgresql.PostgresConnectorTask (io.debezium.connector.common.BaseSourceTask:42)
[2018-12-17 02:54:01,612] INFO    database.hostname = 192.168.1.15 (io.debezium.connector.common.BaseSourceTask:42)
[2018-12-17 02:54:01,612] INFO    database.password = ******** (io.debezium.connector.common.BaseSourceTask:42)
[2018-12-17 02:54:01,612] INFO    name = debezium connector (io.debezium.connector.common.BaseSourceTask:42)
[2018-12-17 02:54:01,612] INFO    auto.create = true (io.debezium.connector.common.BaseSourceTask:42)
[2018-12-17 02:54:01,613] INFO    value.converter = io.confluent.connect.avro.AvroConverter (io.debezium.connector.common.BaseSourceTask:42)
[2018-12-17 02:54:01,613] INFO    key.converter = io.confluent.connect.avro.AvroConverter (io.debezium.connector.common.BaseSourceTask:42)
[2018-12-17 02:54:01,613] INFO    key.converter.schema.registry.url = http://localhost:8081 (io.debezium.connector.common.BaseSourceTask:42)
[2018-12-17 02:54:01,613] INFO    pk.mode = record_key (io.debezium.connector.common.BaseSourceTask:42)
[2018-12-17 02:54:01,613] INFO    snapshot.mode = never (io.debezium.connector.common.BaseSourceTask:42)
[2018-12-17 02:54:01,613] INFO    database.tcpKeepAlive = true (io.debezium.connector.common.BaseSourceTask:42)
[2018-12-17 02:54:01,741] INFO user 'debezium' connected to database 'recon_profiles' on PostgreSQL 10.6 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit with roles:
        role 'pg_read_all_settings' [superuser: false, replication: false, inherit: true, create role: false, create db: false, can log in: false]
        role 'pg_stat_scan_tables' [superuser: false, replication: false, inherit: true, create role: false, create db: false, can log in: false]
        role 'debezium' [superuser: true, replication: true, inherit: true, create role: false, create db: false, can log in: true]
        role 'pg_monitor' [superuser: false, replication: false, inherit: true, create role: false, create db: false, can log in: false]
        role 'postgres' [superuser: true, replication: true, inherit: true, create role: true, create db: true, can log in: true]
        role 'pg_read_all_stats' [superuser: false, replication: false, inherit: true, create role: false, create db: false, can log in: false]
        role 'pg_signal_backend' [superuser: false, replication: false, inherit: true, create role: false, create db: false, can log in: false]
        role 'kong' [superuser: false, replication: false, inherit: true, create role: false, create db: false, can log in: true] (io.debezium.connector.postgresql.PostgresConnectorTask:76)
[2018-12-17 02:54:01,742] INFO Found previous offset source_info[server='dbserver1', lsn=0/28E9B10, txId=1304, useconds=1544776465164000, snapshot=false] (io.debezium.connector.postgresql.PostgresConnectorTask:90)
[2018-12-17 02:54:01,742] INFO Previous snapshot has completed successfully, streaming logical changes from last known position (io.debezium.connector.postgresql.PostgresConnectorTask:105)
[2018-12-17 02:54:01,743] INFO Requested thread factory for connector PostgresConnector, id = dbserver1 named = records-stream-producer (io.debezium.util.Threads:231)
[2018-12-17 02:54:02,002] INFO REPLICA IDENTITY for 'community.users_clients' is 'DEFAULT'; UPDATE and DELETE events will contain previous values only for PK columns (io.debezium.connector.postgresql.PostgresSchema:97)
[2018-12-17 02:54:02,002] INFO REPLICA IDENTITY for 'community.departments' is 'DEFAULT'; UPDATE and DELETE events will contain previous values only for PK columns (io.debezium.connector.postgresql.PostgresSchema:97)
[2018-12-17 02:54:02,003] INFO REPLICA IDENTITY for 'community.certifications' is 'DEFAULT'; UPDATE and DELETE events will contain previous values only for PK columns (io.debezium.connector.postgresql.PostgresSchema:97)
[2018-12-17 02:54:02,003] INFO REPLICA IDENTITY for 'community.organization_types' is 'DEFAULT'; UPDATE and DELETE events will contain previous values only for PK columns (io.debezium.connector.postgresql.PostgresSchema:97)
[2018-12-17 02:54:02,003] INFO REPLICA IDENTITY for 'community.organization_addresses' is 'DEFAULT'; UPDATE and DELETE events will contain previous values only for PK columns (io.debezium.connector.postgresql.PostgresSchema:97)
[2018-12-17 02:54:02,003] INFO REPLICA IDENTITY for 'community.contacts' is 'DEFAULT'; UPDATE and DELETE events will contain previous values only for PK columns (io.debezium.connector.postgresql.PostgresSchema:97)
[2018-12-17 02:54:02,004] INFO REPLICA IDENTITY for 'community.organization_capabilities' is 'DEFAULT'; UPDATE and DELETE events will contain previous values only for PK columns (io.debezium.connector.postgresql.PostgresSchema:97)
[2018-12-17 02:54:02,004] INFO REPLICA IDENTITY for 'community.country' is 'DEFAULT'; UPDATE and DELETE events will contain previous values only for PK columns (io.debezium.connector.postgresql.PostgresSchema:97)
[2018-12-17 02:54:02,004] INFO REPLICA IDENTITY for 'community.email_notification_users' is 'DEFAULT'; UPDATE and DELETE events will contain previous values only for PK columns (io.debezium.connector.postgresql.PostgresSchema:97)
[2018-12-17 02:54:02,004] INFO REPLICA IDENTITY for 'community.addresses' is 'DEFAULT'; UPDATE and DELETE events will contain previous values only for PK columns (io.debezium.connector.postgresql.PostgresSchema:97)
[2018-12-17 02:54:02,005] INFO REPLICA IDENTITY for 'community.contact_organizations' is 'DEFAULT'; UPDATE and DELETE events will contain previous values only for PK columns (io.debezium.connector.postgresql.PostgresSchema:97)
[2018-12-17 02:54:02,005] INFO REPLICA IDENTITY for 'community.designations' is 'DEFAULT'; UPDATE and DELETE events will contain previous values only for PK columns (io.debezium.connector.postgresql.PostgresSchema:97)
[2018-12-17 02:54:02,005] INFO REPLICA IDENTITY for 'clients.organization_attributes' is 'DEFAULT'; UPDATE and DELETE events will contain previous values only for PK columns (io.debezium.connector.postgresql.PostgresSchema:97)
[2018-12-17 02:54:02,005] INFO REPLICA IDENTITY for 'community.contact_types' is 'DEFAULT'; UPDATE and DELETE events will contain previous values only for PK columns (io.debezium.connector.postgresql.PostgresSchema:97)
[2018-12-17 02:54:02,005] INFO REPLICA IDENTITY for 'clients.organizations_internal_invoice' is 'DEFAULT'; UPDATE and DELETE events will contain previous values only for PK columns (io.debezium.connector.postgresql.PostgresSchema:97)
[2018-12-17 02:54:02,005] INFO REPLICA IDENTITY for 'community.organizations' is 'DEFAULT'; UPDATE and DELETE events will contain previous values only for PK columns (io.debezium.connector.postgresql.PostgresSchema:97)
[2018-12-17 02:54:02,006] INFO REPLICA IDENTITY for 'community.organizations_certifications' is 'DEFAULT'; UPDATE and DELETE events will contain previous values only for PK columns (io.debezium.connector.postgresql.PostgresSchema:97)
[2018-12-17 02:54:02,006] INFO REPLICA IDENTITY for 'community.state_province' is 'DEFAULT'; UPDATE and DELETE events will contain previous values only for PK columns (io.debezium.connector.postgresql.PostgresSchema:97)
[2018-12-17 02:54:02,006] INFO REPLICA IDENTITY for 'community.organization_organization_types' is 'DEFAULT'; UPDATE and DELETE events will contain previous values only for PK columns (io.debezium.connector.postgresql.PostgresSchema:97)
[2018-12-17 02:54:02,006] INFO REPLICA IDENTITY for 'community.contact_profiles' is 'DEFAULT'; UPDATE and DELETE events will contain previous values only for PK columns (io.debezium.connector.postgresql.PostgresSchema:97)
[2018-12-17 02:54:02,006] INFO REPLICA IDENTITY for 'community.address_types' is 'DEFAULT'; UPDATE and DELETE events will contain previous values only for PK columns (io.debezium.connector.postgresql.PostgresSchema:97)
[2018-12-17 02:54:02,007] INFO REPLICA IDENTITY for 'community.email_notifications' is 'DEFAULT'; UPDATE and DELETE events will contain previous values only for PK columns (io.debezium.connector.postgresql.PostgresSchema:97)
[2018-12-17 02:54:02,019] INFO Creating thread debezium-postgresconnector-dbserver1-records-stream-producer (io.debezium.util.Threads:247)
[2018-12-17 02:54:02,019] INFO WorkerSourceTask{id=debezium connector-0} Source task finished initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:199)
[2018-12-17 02:54:11,610] INFO WorkerSourceTask{id=debezium connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:397)
[2018-12-17 02:54:11,610] INFO WorkerSourceTask{id=debezium connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:414)
[2018-12-17 02:54:32,042] INFO WorkerSourceTask{id=debezium connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:397)
[2018-12-17 02:54:32,042] INFO WorkerSourceTask{id=debezium connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:414)
[2018-12-17 02:54:52,064] INFO WorkerSourceTask{id=debezium connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:397)
[2018-12-17 02:54:52,064] INFO WorkerSourceTask{id=debezium connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:414)
[2018-12-17 02:55:22,096] INFO WorkerSourceTask{id=debezium connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:397)
[2018-12-17 02:55:22,096] INFO WorkerSourceTask{id=debezium connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:414)
[2018-12-17 02:55:52,128] INFO WorkerSourceTask{id=debezium connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:397)
[2018-12-17 02:55:52,129] INFO WorkerSourceTask{id=debezium connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:414)
[2018-12-17 02:56:22,154] INFO WorkerSourceTask{id=debezium connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:397)
[2018-12-17 02:56:22,155] INFO WorkerSourceTask{id=debezium connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:414)
[2018-12-17 02:56:52,187] INFO WorkerSourceTask{id=debezium connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:397)
[2018-12-17 02:56:52,187] INFO WorkerSourceTask{id=debezium connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:414)
[2018-12-17 02:57:02,198] INFO WorkerSourceTask{id=debezium connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:397)
[2018-12-17 02:57:02,198] INFO WorkerSourceTask{id=debezium connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:414)
[2018-12-17 02:57:22,219] INFO WorkerSourceTask{id=debezium connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:397)
[2018-12-17 02:57:22,220] INFO WorkerSourceTask{id=debezium connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:414)
[2018-12-17 02:57:32,221] INFO WorkerSourceTask{id=debezium connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:397)
[2018-12-17 02:57:32,221] INFO WorkerSourceTask{id=debezium connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:414)
[2018-12-17 02:58:02,253] INFO WorkerSourceTask{id=debezium connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:397)
[2018-12-17 02:58:02,253] INFO WorkerSourceTask{id=debezium connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:414)
[2018-12-17 02:58:08,245] WARN Closing replication stream due to db connection IO exception... (io.debezium.connector.postgresql.RecordsStreamProducer:131)
[2018-12-17 02:58:08,316] INFO WorkerSourceTask{id=debezium connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:397)
[2018-12-17 02:58:08,316] INFO WorkerSourceTask{id=debezium connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:414)
[2018-12-17 02:58:08,317] ERROR WorkerSourceTask{id=debezium connector-0} Exception thrown while calling task.commit() (org.apache.kafka.connect.runtime.WorkerSourceTask:508)
org.apache.kafka.connect.errors.ConnectException: org.postgresql.util.PSQLException: Database connection failed when writing to copy
        at io.debezium.connector.postgresql.RecordsStreamProducer.commit(RecordsStreamProducer.java:158)
        at io.debezium.connector.postgresql.PostgresConnectorTask.commit(PostgresConnectorTask.java:140)
        at org.apache.kafka.connect.runtime.WorkerSourceTask.commitSourceTask(WorkerSourceTask.java:506)
        at org.apache.kafka.connect.runtime.WorkerSourceTask.commitOffsets(WorkerSourceTask.java:447)
        at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:238)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.postgresql.util.PSQLException: Database connection failed when writing to copy
        at org.postgresql.core.v3.QueryExecutorImpl.flushCopy(QueryExecutorImpl.java:942)
        at org.postgresql.core.v3.CopyDualImpl.flushCopy(CopyDualImpl.java:23)
        at org.postgresql.core.v3.replication.V3PGReplicationStream.updateStatusInternal(V3PGReplicationStream.java:176)
        at org.postgresql.core.v3.replication.V3PGReplicationStream.forceUpdateStatus(V3PGReplicationStream.java:99)
        at io.debezium.connector.postgresql.connection.PostgresReplicationConnection$1.doFlushLsn(PostgresReplicationConnection.java:246)
        at io.debezium.connector.postgresql.connection.PostgresReplicationConnection$1.flushLsn(PostgresReplicationConnection.java:239)
        at io.debezium.connector.postgresql.RecordsStreamProducer.commit(RecordsStreamProducer.java:153)
        ... 11 more
Caused by: java.net.SocketException: Broken pipe (Write failed)
        at java.net.SocketOutputStream.socketWrite0(Native Method)
        at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
        at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
        at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
        at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
        at org.postgresql.core.PGStream.flush(PGStream.java:553)
        at org.postgresql.core.v3.QueryExecutorImpl.flushCopy(QueryExecutorImpl.java:939)
        ... 17 more
[2018-12-17 02:58:08,318] ERROR WorkerSourceTask{id=debezium connector-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:177)
org.apache.kafka.connect.errors.ConnectException: An exception ocurred in the change event producer. This connector will be stopped.
        at io.debezium.connector.base.ChangeEventQueue.throwProducerFailureIfPresent(ChangeEventQueue.java:168)
        at io.debezium.connector.base.ChangeEventQueue.poll(ChangeEventQueue.java:149)
        at io.debezium.connector.postgresql.PostgresConnectorTask.poll(PostgresConnectorTask.java:146)
        at org.apache.kafka.connect.runtime.WorkerSourceTask.poll(WorkerSourceTask.java:244)
        at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:220)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.postgresql.util.PSQLException: Database connection failed when reading from copy
        at org.postgresql.core.v3.QueryExecutorImpl.readFromCopy(QueryExecutorImpl.java:964)
        at org.postgresql.core.v3.CopyDualImpl.readFromCopy(CopyDualImpl.java:41)
        at org.postgresql.core.v3.replication.V3PGReplicationStream.receiveNextData(V3PGReplicationStream.java:145)
        at org.postgresql.core.v3.replication.V3PGReplicationStream.readInternal(V3PGReplicationStream.java:114)
        at org.postgresql.core.v3.replication.V3PGReplicationStream.read(V3PGReplicationStream.java:60)
        at io.debezium.connector.postgresql.connection.PostgresReplicationConnection$1.read(PostgresReplicationConnection.java:198)
        at io.debezium.connector.postgresql.RecordsStreamProducer.streamChanges(RecordsStreamProducer.java:126)
        at io.debezium.connector.postgresql.RecordsStreamProducer.lambda$start$1(RecordsStreamProducer.java:112)
        ... 5 more
Caused by: java.io.EOFException
        at org.postgresql.core.PGStream.receiveChar(PGStream.java:284)
        at org.postgresql.core.v3.QueryExecutorImpl.processCopyResults(QueryExecutorImpl.java:1006)
        at org.postgresql.core.v3.QueryExecutorImpl.readFromCopy(QueryExecutorImpl.java:962)
        ... 12 more
[2018-12-17 02:58:08,318] ERROR WorkerSourceTask{id=debezium connector-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:178)
[2018-12-17 02:58:08,319] INFO [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 30000 ms. (org.apache.kafka.clients.producer.KafkaProducer:1103)
[2018-12-17 02:58:18,246] INFO WorkerSourceTask{id=debezium connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:397)
[2018-12-17 02:58:18,246] INFO WorkerSourceTask{id=debezium connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:414)
[2018-12-17 02:58:28,247] INFO WorkerSourceTask{id=debezium connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:397)
[2018-12-17 02:58:28,247] INFO WorkerSourceTask{id=debezium connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:414)

Gunnar Morling

unread,
Dec 18, 2018, 7:28:47 AM12/18/18
to debezium
Hmm, hard to say without more information. Does this mean the connector gets stuck as soon as you alter a record in any of the captured tables?

I have to manually restart the database


This doesn't sound good for sure; it makes me wonder whether something more general is broken in your set-up, as a connector shouldn't have the "power" to bring down the DB. Any chance you could provide steps for reproducing this issue?

Ravichandra cheeti

unread,
Dec 18, 2018, 10:36:07 AM12/18/18
to debe...@googlegroups.com
Hi Gunnar Morling, 
                                 To reproduce this problem, Just add a external postgresql-10 database to the debezium docker images. And install decoderbufs plugin in the external postgres database. I tried with every version of docker images got the same issue. Now i moved to confluent source installation still facing the same problem. If say i have to manually restart the database(sometimes i have to do it ) sometimes it wont go down. I Tried changing the plugin from decoderbufs to wal2json still the same issue. 

--
You received this message because you are subscribed to the Google Groups "debezium" group.
To unsubscribe from this group and stop receiving emails from it, send an email to debezium+u...@googlegroups.com.
To post to this group, send email to debe...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/debezium/03dd68d3-21bf-452d-900f-0c2d0335a000%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


--
Regards
Ch. Ravichandra
System Admin
EnterPi Software Solutions Private Limited

Gunnar Morling

unread,
Dec 18, 2018, 4:24:17 PM12/18/18
to debezium
> Just add a external postgresql-10 database to the debezium docker images

Can you share the steps you are doing for this? Which image specifically are you using as a base? Have you checked out the Debezium images for Postgres?
...

Ravichandra cheeti

unread,
Dec 19, 2018, 4:31:10 AM12/19/18
to debe...@googlegroups.com
Hi Gunnar,
                 My PostgreSQL database host OS is CentOS, and I tried Ubuntu as well. Below are the steps I followed to install decoderbufs and wal2json for PostgreSQL, and how I used the Docker images.

Postgresql-10:


  1. In order for Postgres to work with Debezium, we need to enable logical replication and install a logical decoding plugin, i.e. decoderbufs or wal2json.

  2. To install the decoderbufs plugin and its required packages, use the script below. Create decoderbufs.sh (e.g. nano decoderbufs.sh) and paste the following commands:

#!/bin/bash
set -e

apt-get update
apt-get install -f -y --no-install-recommends \
    software-properties-common \
    build-essential \
    pkg-config \
    git \
    postgresql-10 \
    postgresql-server-dev-10 \
    libproj-dev

apt-get install -f -y --no-install-recommends \
    postgresql-10-postgis-2.4 \
    postgresql-10-postgis-2.4-scripts \
    postgis

apt-get clean
apt-get update
apt-get install -f -y --no-install-recommends liblwgeom-dev

add-apt-repository "deb http://ftp.debian.org/debian testing main contrib"
apt-get update
apt-get install -f -y --no-install-recommends libprotobuf-c-dev=1.3.*
rm -rf /var/lib/apt/lists/*

# Compile the decoderbufs plugin from source and install it
# (PLUGIN_VERSION must be set to the branch matching your Debezium version)
git clone https://github.com/debezium/postgres-decoderbufs -b $PLUGIN_VERSION --single-branch
cd postgres-decoderbufs
make && make install
cd ..
rm -rf postgres-decoderbufs

# Compile the wal2json plugin from source and install it
# (WAL2JSON_COMMIT_ID must be set to the commit to build)
git clone https://github.com/eulerto/wal2json -b master --single-branch
cd wal2json
git checkout $WAL2JSON_COMMIT_ID
make && make install
cd ..
rm -rf wal2json


Save & exit

3. chmod u+x decoderbufs.sh to make it executable

4. ./decoderbufs.sh to execute it
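To confirm the plugins actually landed in the server's library directory before restarting, a quick check (assuming pg_config for PostgreSQL 10 is on the PATH; file names are the defaults produced by the two makefiles) is:

```shell
# Locate the server's extension library directory and check for the plugins
ls "$(pg_config --pkglibdir)"/decoderbufs.so
ls "$(pg_config --pkglibdir)"/wal2json.so
```

If either file is missing, the corresponding make install step above did not succeed.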

5. Next, install the postgresql-10-devel package in order to enable logical replication

6. In order for PostgreSQL to use logical replication, make changes in the following files:

    nano postgresql.conf

# LOGGING
log_min_error_statement = fatal

# CONNECTION
listen_addresses = '*'

# MODULES
shared_preload_libraries = 'decoderbufs'

# REPLICATION
wal_level = logical             # minimal, archive, hot_standby, or logical (change requires restart)
max_wal_senders = 1             # max number of walsender processes (change requires restart)
wal_keep_segments = 4          # in logfile segments, 16MB each; 0 disables
#wal_sender_timeout = 60s       # in milliseconds; 0 disables
#max_replication_slots = 1       # max number of replication slots (change requires restart)
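Since the replication stream in the log above dies with a broken pipe, it may also be worth ruling out an idle-connection timeout between Connect and Postgres. A sketch of additional postgresql.conf settings to try (these values are illustrative suggestions, not part of the original setup):

```
# REPLICATION / NETWORK (diagnostic suggestions)
wal_sender_timeout = 0          # 0 disables the server-side timeout on walsender processes
tcp_keepalives_idle = 60        # seconds of idle time before sending TCP keepalives
tcp_keepalives_interval = 10    # seconds between keepalive probes
tcp_keepalives_count = 6        # probes lost before the connection is considered dead
```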


nano pg_hba.conf

local   replication     <youruser>                              trust
host    replication     <youruser>      127.0.0.1/32            trust
host    replication     <youruser>      ::1/128                 trust
host    replication     all             <docker-network-cidr>   trust


7. After installing and configuring the above, restart the PostgreSQL server
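After the restart, it can help to verify that logical decoding works independently of Debezium. For example (a sketch assuming a superuser psql session against the target database; slot and database names are placeholders):

```shell
# Confirm WAL level and that the server accepts logical replication slots
psql -U <youruser> -d recon_profiles -c "SHOW wal_level;"
psql -U <youruser> -d recon_profiles -c "SELECT pg_create_logical_replication_slot('test_slot', 'decoderbufs');"
psql -U <youruser> -d recon_profiles -c "SELECT slot_name, plugin, active FROM pg_replication_slots;"
psql -U <youruser> -d recon_profiles -c "SELECT pg_drop_replication_slot('test_slot');"
```

If creating the throwaway slot already crashes or hangs the server, the problem is in the plugin/server setup rather than in the connector.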




Installation to Debezium dockers:


  1. sudo apt-get install docker.io to install Docker on Ubuntu

  2. In order to run the Debezium containers, use the commands below


START ZOOKEEPER DOCKER CONTAINER

docker run -d --name zookeeper -p 2181:2181 -p 2888:2888 -p 3888:3888 debezium/zookeeper:0.7

START KAFKA DOCKER CONTAINER

docker run -d --name kafka -p 9092:9092 --link zookeeper:zookeeper debezium/kafka:0.7



START KAFKA CONNECT DOCKER CONTAINER

docker run -it --name connect -p 8083:8083 -e GROUP_ID=1 -e CONFIG_STORAGE_TOPIC=my_connect_configs -e OFFSET_STORAGE_TOPIC=my_connect_offsets --link zookeeper:zookeeper --link kafka:kafka   --add-host=<hostname>:<postgresql-database-ip> debezium/connect:0.9


USING KAFKA CONNECT REST API

In a new terminal:


curl -H "Accept:application/json" localhost:8083/

       OUTPUT:{"version":"1.0.1","commit":"cb8625948210849f"}

       curl -H "Accept:application/json" localhost:8083/connectors/

       OUTPUT:[]

POST the request below to make the PostgreSQL connector active:

   curl -X POST -H "Accept:application/json" -H "Content-Type:application/json" localhost:8083/connectors/ -d '{
     "name": "any name-connector",
     "config": {
       "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
       "tasks.max": "1",
       "database.hostname": "dbip",
       "database.port": "5432",
       "database.user": "dbuser",
       "database.password": "dbpassword",
       "database.dbname": "userdatabase",
       "database.server.name": "dbserver1",
       "database.whitelist": "userdatabase",
       "database.history.kafka.bootstrap.servers": "kafka:9092",
       "database.history.kafka.topic": "schema-changes.recon"
     }
   }'
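When a task dies as in the log above, the Connect REST API shows the failure state and stack trace without digging through stdout; for example (substitute your connector name):

```shell
# List registered connectors, then inspect the status of one of them;
# a failed task shows "state": "FAILED" along with its trace
curl -H "Accept:application/json" localhost:8083/connectors/
curl -H "Accept:application/json" localhost:8083/connectors/<connector-name>/status
```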


After posting the above curl, you should see topics for your tables being created automatically in the Kafka Connect terminal.

           This also creates a Debezium replication slot in PostgreSQL.


If you want to monitor changes on a topic, use the command below (the topic name format is <database server name>.<schema name>.<table name>):

docker run -it --name watcher --rm  --link zookeeper:zookeeper debezium/kafka:0.9 watch-topic -a -k dbserver1.<schemaname>.<tablename>
  
