JDBC sink connector throwing error


sri krishna alla

Sep 1, 2016, 12:49:23
to Confluent Platform
I am setting up a JDBC sink connector using MySQL as the database to write to from the topic. I am getting the following error:

[2016-09-01 11:33:07,323] ERROR Task jdbc-sink-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerSinkTask:401)
org.apache.kafka.connect.errors.ConnectException: No fields found using key and value schemas for table: monitor-alert
           at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extract(FieldsMetadata.java:181)
           at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extract(FieldsMetadata.java:57)
           at io.confluent.connect.jdbc.sink.BufferedRecords.add(BufferedRecords.java:64)
           at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:59)
           at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:66)
           at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:381)
           at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:227)
           at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:170)
           at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:142)
           at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
           at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
           at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
           at java.util.concurrent.FutureTask.run(FutureTask.java:266)
           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
           at java.lang.Thread.run(Thread.java:745)

This is the configuration I am using to set up the connector in Kafka Connect:

curl -X POST -H "Content-Type: application/json" --data '{
  "name": "jdbc-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "tasks.max": "1",
    "topics": "monitor-alert",
    "connection.url": "jdbc:mysql://127.0.0.1:3306/MONITORING_ALERTS?user=root&password=root",
    "auto.create": "true",
    "insert.mode": "insert",
    "batch.size": "1"
  }
}' http://localhost:8083/connectors

The connect-distributed properties are as follows:
bootstrap.servers=localhost:9093
group.id=alert
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.topic=connect-offsets
offset.flush.interval.ms=10000
config.storage.topic=connect-configs
status.storage.topic=status-storage
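
As the replies below work out, with value.converter.schemas.enable=false the JsonConverter hands the sink schemaless data, so the JDBC sink's FieldsMetadata has no schema from which to derive table columns. A minimal sketch of the value-converter settings the sink needs, assuming the JSON messages carry an inline schema envelope:

value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=true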

Could someone please take a look and let me know what could be causing this error? BTW, I am using the latest branch of kafka-connect-jdbc and Kafka 0.10.0.1.

Thanks,
Sri

Vijay Arumugam

Dec 7, 2016, 12:20:22
to Confluent Platform
Hi,
I am getting the same issue. Were you able to solve it? Can you point me to what needs to be done to correct this?

Maria Abramiuc

Mar 3, 2017, 10:26:55
to Confluent Platform
Hi,

This issue still exists. Is it possible to use the JDBC Sink Connector with JsonConverter and schemas.enable=false?

It seems that io.confluent.connect.jdbc.sink.metadata.FieldsMetadata must receive a schema; is this true?
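
For reference, the inline-schema envelope that the JsonConverter expects when schemas.enable=true looks roughly like this; the single string field "name" and its value are illustrative only:

{
  "schema": {
    "type": "struct",
    "optional": false,
    "fields": [
      { "field": "name", "type": "string", "optional": false }
    ]
  },
  "payload": {
    "name": "example-value"
  }
}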
 
After setting schemas.enable=true and adding the schema to the JSON message, I was able to write objects to the tables, but I'm still getting an error:


ERROR Task test-sink-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerSinkTask:449)
org.apache.kafka.connect.errors.ConnectException: Update count (-6) did not sum up to total number of records inserted (3)
    at io.confluent.connect.jdbc.sink.BufferedRecords.flush(BufferedRecords.java:105)
    at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:65)
    at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:66)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:429)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:250)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:179)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:148)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:139)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:182)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

 
Did I misconfigure something?

Thank you,
Maria Abramiuc

Gwen Shapira

Mar 3, 2017, 21:12:32
to confluent...@googlegroups.com
This error is really strange. It basically says:
"We think we inserted 3 rows into the database, but when we asked the JDBC driver, it told us it inserted -6 rows."

Can you share your connector configuration and which JDBC driver you used? Can you check how many rows were actually inserted into the database?

Thanks,
Gwen




--
Gwen Shapira
Product Manager | Confluent
650.450.2760 @gwenshap
Follow us: Twitter | blog

Maria Abramiuc

Mar 6, 2017, 03:03:33
to Confluent Platform
Hi,
 
I tested again:

select * from temp_test23   -- one row inserted

Error:
 Task test-sink-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerSinkTask:449)
org.apache.kafka.connect.errors.ConnectException: Update count (-2) did not sum up to total number of records inserted (1)

my config:

sink config:

name=test-sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1

# The topics to consume from - required for sink connectors like this one
topics=TEMP_TEST23
fields.whitelist=name
table.whitelist=TEMP_TEST23
auto.create=true
pk.mode=none
insert.mode=insert
pk.fields=none

worker config (connect-standalone properties):


bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
# The converters specify the format of data in Kafka and how to translate it into Connect data.
# Every Connect user will need to configure these based on the format they want their data in
# when loaded from or stored into Kafka


# The internal converter used for offsets and config data is configurable and must be specified,
# but most users will always want to use the built-in default. Offset and config data is never
# visible outside of Connect in this format.
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false

# Local storage file for offset data
offset.storage.file.filename=/tmp/connect.offsets

Oracle driver: ojdbc6

Manifest-Version: 1.0
Ant-Version: Apache Ant 1.6.5
Created-By: 1.5.0_30-b03 (Sun Microsystems Inc.)
Implementation-Vendor: Oracle Corporation
Implementation-Title: JDBC
Implementation-Version: 11.2.0.3.0
Repository-Id: JAVAVM_11.2.0.3.0_LINUX_110823
Specification-Vendor: Sun Microsystems Inc.
Specification-Title: JDBC
Specification-Version: 4.0
Main-Class: oracle.jdbc.OracleDriver
sealed: true

Maria Abramiuc

Mar 6, 2017, 09:42:38
to Confluent Platform
It seems that BufferedRecords does this:

for (int updateCount : preparedStatement.executeBatch()) {
    totalUpdateCount += updateCount;
}

And preparedStatement.executeBatch() returns -2 (Statement.SUCCESS_NO_INFO) when a command was processed successfully but the number of rows affected is unknown.

That detail of the JDBC spec is why I'm getting the negative numbers.
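
A minimal sketch of how that summing loop could treat SUCCESS_NO_INFO as "succeeded, count unknown" instead of failing the count check; this is an illustration only, not the connector's actual code, and the class and method names are invented:

import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

public class BatchCountCheck {

    // Sum the per-statement results of executeBatch(), skipping
    // SUCCESS_NO_INFO (-2), which some drivers (e.g. Oracle's ojdbc)
    // return for every statement in a batch.
    public static void verify(PreparedStatement statement, int expectedRecords) throws SQLException {
        int totalUpdateCount = 0;
        boolean countUnknown = false;
        for (int updateCount : statement.executeBatch()) {
            if (updateCount == Statement.SUCCESS_NO_INFO) {
                countUnknown = true; // processed successfully, rows affected unknown
            } else {
                totalUpdateCount += updateCount;
            }
        }
        // Only enforce the equality check when every result was a real count.
        if (!countUnknown && totalUpdateCount != expectedRecords) {
            throw new SQLException("Update count (" + totalUpdateCount
                    + ") did not sum up to total number of records inserted (" + expectedRecords + ")");
        }
    }
}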

Gwen Shapira

Mar 7, 2017, 00:59:56
to confluent...@googlegroups.com
Ouch! This is a bug: JDBC allows returning a value of -2 when executing a batch insert/update, which means the operation was successful but the number of rows affected is unknown (http://stackoverflow.com/questions/19022175/executebatch-method-return-array-of-value-2-in-java).

I think the right thing to do is to skip the check in that case rather than fail.

I can't think of a good workaround, though... I can point you to the location in the code with the check if you want to fix it yourself. Or we can open a JIRA issue and see if someone else can fix it. (Or, if you are a Confluent support customer, you can escalate to our support team...)

Gwen




Gwen Shapira

Mar 7, 2017, 01:01:32
to confluent...@googlegroups.com
Ah, I see you found the guilty check. Do you want to submit a PR with the solution?


Maria Abramiuc

Mar 7, 2017, 09:01:38
to Confluent Platform
Hi,

Yes, I submitted a PR: https://github.com/confluentinc/kafka-connect-jdbc/pull/195

Thank you,
Maria Abramiuc

Nishant Verma

May 3, 2017, 23:38:11
to Confluent Platform