Hi,
I'm 'playing around' with Debezium & Oracle. Thanks to the tutorial by
Chris Cranford, it wasn't that hard to set everything up.
So I've created another user and a table like this:
CREATE TABLE source_app.cog_codegroup
(
cog_rowid NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY,
-- payload
cog_name VARCHAR(50),
cog_multi_value CHAR(1),
-- metadata of ETL-framework
etl_jobId INT DEFAULT 10000 NOT NULL,
PRIMARY KEY (cog_rowid)
);
ALTER TABLE source_app.cog_codegroup ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
However, the NUMBER columns (and the INT) arrive in the JSON as strings like "COG_ROWID":"Cw==", no matter which consumer I use, be it Kafka's kafka-console-consumer.sh or PySpark. It's obviously not a hex code, and the value seems to be case-sensitive.
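For what it's worth, the value does decode cleanly as Base64. If I treat the decoded bytes as a big-endian two's-complement unscaled integer (just a guess on my part, modeled on how Kafka Connect's Decimal logical type serializes values), "Cw==" comes out as 11, which would be a plausible row id. A small sketch:

```python
import base64
from decimal import Decimal

def decode_connect_decimal(b64_value: str, scale: int = 0) -> Decimal:
    """Decode a Base64 string as the big-endian two's-complement bytes
    of an unscaled integer, then apply the given scale.
    Assumption: this matches Kafka Connect's Decimal logical type."""
    raw = base64.b64decode(b64_value)
    unscaled = int.from_bytes(raw, byteorder="big", signed=True)
    return Decimal(unscaled).scaleb(-scale)

# The value observed for COG_ROWID:
print(decode_connect_decimal("Cw=="))  # → 11
```

If that guess is right, the payload isn't garbled at all; it's just a binary decimal representation that the plain JSON consumer shows as Base64.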
I've also tried setting decimal.handling.mode to precise, but according to the documentation that's the default anyway, so I wasn't feeling too lucky there.
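For reference, this is the relevant excerpt of my connector configuration (property name and values as listed in the Debezium documentation; everything else omitted):

```
# "precise" is the documented default for the Oracle connector;
# the other documented values are "double" and "string"
decimal.handling.mode=precise
```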
In another test I removed the IDENTITY clause and declared an explicit precision, NUMBER(20, 0), but that didn't help either.
What am I doing wrong?