kafka-connect-jdbc-sink


Eric Berryman

Nov 17, 2016, 3:31:23 PM
to Confluent Platform
Hello!

I'm getting the following error on a kafka-connect-jdbc sink:
Caused by: com.mysql.jdbc.MysqlDataTruncation: Data truncation: Incorrect datetime value: '3197-09-13 18:00:00' for column 'recordinterval' at row 1

This is a valid timestamp.  Is there a way to get this to work?
I'm currently using the JDBC source and sink connectors (for a read-only database on another network).

Thank you!
Eric

Shikhar Bhushan

Nov 17, 2016, 4:10:14 PM
to Confluent Platform
Hi Eric,

It would be great if you could provide some more detail:
- what's the type of the field in the schema (or in the source db column, if the data is coming via the JDBC Source)
- what's the type of the field in the destination table, if you are manually creating the table rather than using `auto.create=true`
- the valid timestamp value that you expect to propagate

Best,

Shikhar


Eric Berryman

Nov 17, 2016, 4:25:05 PM
to Confluent Platform

The column 'recordinterval' is a DATETIME datatype in both the source and the sink.
The value '3197-09-13 18:00:00' is what I use for the "end of time".

Eric Berryman

Nov 17, 2016, 4:39:13 PM
to Confluent Platform
I think this is a problem with the MySQL JDBC driver and not the Kafka connector.  I'm guessing the driver version and the MySQL version are mismatched.
I'll write back when I find something.

Thank you for the help!

Eric Berryman

Nov 18, 2016, 10:34:42 AM
to Confluent Platform

I switched everything to MariaDB (and the MariaDB JDBC driver jar) and still get this message.  It seems to really not like the '3197-09-13 18:00:00' datetime, although I don't understand why.  I have the exact same database version (MariaDB 10.1.19) and schema at source and sink.

Caused by: java.sql.BatchUpdateException: (conn:75) Incorrect datetime value: '3197-09-13 18:00:00' for column 'recordinterval' at row 1
Query is: INSERT INTO `cts-olog-jdbc-bitemporal_log`(`id`,`log_id`,`recordinterval`,`recordinterval_0`,`validityinterval`,`validityinterval_0`,`entry_id`) VALUES(?,?,?,?,?,?,?), parameters [10,10,'3197-09-13 18:00:00.0','2016-11-18 10:28:13.0','3197-09-13 18:00:00.0','2016-11-18 10:28:13.0',10]
        at org.mariadb.jdbc.MariaDbServerPreparedStatement.executeBatch(MariaDbServerPreparedStatement.java:281)
        at io.confluent.connect.jdbc.sink.BufferedRecords.flush(BufferedRecords.java:98)
        at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:65)
        at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:66)
        ... 11 more
Caused by: java.sql.SQLDataException: (conn:75) Incorrect datetime value: '3197-09-13 18:00:00' for column 'recordinterval' at row 1
Query is: INSERT INTO `cts-olog-jdbc-bitemporal_log`(`id`,`log_id`,`recordinterval`,`recordinterval_0`,`validityinterval`,`validityinterval_0`,`entry_id`) VALUES(?,?,?,?,?,?,?), parameters [10,10,'3197-09-13 18:00:00.0','2016-11-18 10:28:13.0','3197-09-13 18:00:00.0','2016-11-18 10:28:13.0',10]
        at org.mariadb.jdbc.internal.util.ExceptionMapper.get(ExceptionMapper.java:124)
        at org.mariadb.jdbc.internal.util.ExceptionMapper.getException(ExceptionMapper.java:101)
        at org.mariadb.jdbc.internal.util.ExceptionMapper.throwAndLogException(ExceptionMapper.java:77)
        at org.mariadb.jdbc.MariaDbStatement.executeQueryEpilog(MariaDbStatement.java:224)
        at org.mariadb.jdbc.MariaDbServerPreparedStatement.executeBatch(MariaDbServerPreparedStatement.java:272)
        ... 14 more

Eric Berryman

Nov 18, 2016, 10:43:58 AM
to Confluent Platform

Okay, this might be a 32/64-bit problem.

Eric Berryman

Nov 18, 2016, 10:56:24 AM
to Confluent Platform
Both are now 64-bit; same error.
I just don't understand how the value is valid in one and not the other.

Shikhar Bhushan

Nov 18, 2016, 11:16:14 AM
to Confluent Platform
Have you tried the INSERT manually?
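For example, replaying the logged statement by hand, with the values copied from the parameters in the stack trace above:

    INSERT INTO `cts-olog-jdbc-bitemporal_log`
      (`id`,`log_id`,`recordinterval`,`recordinterval_0`,
       `validityinterval`,`validityinterval_0`,`entry_id`)
    VALUES (10,10,'3197-09-13 18:00:00.0','2016-11-18 10:28:13.0',
            '3197-09-13 18:00:00.0','2016-11-18 10:28:13.0',10);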

Eric Berryman

Nov 18, 2016, 11:24:38 AM
to Confluent Platform
I found the issue.
The source column is a DATETIME.
The sink auto-creates a new table, and the new table makes the column a TIMESTAMP.
TIMESTAMP can't represent dates past 2038.
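For illustration, the range mismatch can be reproduced in plain SQL against a scratch MariaDB/MySQL database (these demo tables are mine, not the connector's):

    -- TIMESTAMP only covers '1970-01-01 00:00:01' UTC through
    -- '2038-01-19 03:14:07' UTC, so the far-future value is rejected
    -- (in strict SQL mode; otherwise it is truncated with a warning):
    CREATE TABLE ts_demo (recordinterval TIMESTAMP);
    INSERT INTO ts_demo VALUES ('3197-09-13 18:00:00');
    -- ERROR 1292 (22007): Incorrect datetime value: '3197-09-13 18:00:00'
    -- for column 'recordinterval' at row 1

    -- DATETIME covers '1000-01-01 00:00:00' through '9999-12-31 23:59:59',
    -- so the same value is accepted:
    CREATE TABLE dt_demo (recordinterval DATETIME);
    INSERT INTO dt_demo VALUES ('3197-09-13 18:00:00');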

Is it possible to change the name of the tables that Connect inserts into?  It seems the default is the topic name.
(A pointer to the docs would be fine.)

Thank you so much!

Eric Berryman

Nov 18, 2016, 11:35:33 AM
to Confluent Platform
I see the `table.name.format` setting, but it seems to only allow adding a prefix or suffix around the topic name (e.g. copy_${topic}) for the table.

Eric Berryman

Nov 18, 2016, 11:37:52 AM
to Confluent Platform
I should be clear, I'm looking to map:
topic name:  cts-olog-jdbc-bitemporal
table name: bitemporal

Eric Berryman

Nov 18, 2016, 11:49:51 AM
to Confluent Platform
Does this mean I need to make a separate Connect sink for each table, so I can hard-code the target table name and set the pk.mode for that table?

Shikhar Bhushan

Nov 18, 2016, 2:04:54 PM
to confluent...@googlegroups.com
The ${topic} placeholder can occur anywhere, e.g. `prefix_${topic}_suffix`
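For example, in the sink connector properties (the prefix and suffix here are illustrative):

    # topic "cts-olog-jdbc-bitemporal" -> table "copy_cts-olog-jdbc-bitemporal_v2"
    table.name.format=copy_${topic}_v2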


Shikhar Bhushan

Nov 18, 2016, 2:07:55 PM
to confluent...@googlegroups.com
Ah yes, you can't munge the table name from 'cts-olog-jdbc-bitemporal' to 'bitemporal' without setting the `table.name.format` to be a static string `bitemporal`, which means that connector instance will only be able to target that table. In the next release we may have lightweight transformations which allow for more flexibility e.g. regex substitution.
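For this case, a sketch of such a dedicated connector instance as a standalone properties file (the connector name, connection URL, and key settings are illustrative):

    name=jdbc-sink-bitemporal
    connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
    topics=cts-olog-jdbc-bitemporal
    connection.url=jdbc:mariadb://localhost:3306/olog
    # static string: this instance can only target the one table
    table.name.format=bitemporal
    # per-table key handling, as asked about above
    pk.mode=record_value
    pk.fields=id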


Shikhar Bhushan

Nov 18, 2016, 2:09:41 PM
to confluent...@googlegroups.com
I also discovered this mismatch yesterday, and in the next release the connector will use DATETIME rather than TIMESTAMP (https://github.com/confluentinc/kafka-connect-jdbc/pull/168)

You could manually alter the type of that column. The connector never attempts to change column types.
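For example, against the auto-created table from the stack trace above (a sketch; carry over any NULL/DEFAULT attributes your schema needs):

    ALTER TABLE `cts-olog-jdbc-bitemporal_log`
      MODIFY `recordinterval` DATETIME,
      MODIFY `recordinterval_0` DATETIME,
      MODIFY `validityinterval` DATETIME,
      MODIFY `validityinterval_0` DATETIME;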


Eric Berryman

Nov 18, 2016, 2:49:24 PM
to Confluent Platform
Thank you very much!  (And the regex would be very helpful, thank you!)