You received this message because you are subscribed to the Google Groups "Confluent Platform" group.
Here are a couple of other use cases where custom headers are needed and should not be tied to the message content.
1) If you use flume-ng as a source and Kafka as a sink, you need a way to pass metadata along with messages (agent host, environment [prd/qa], etc.). Flume-ng has a notion of headers, and these have to be mapped (in an interceptor) to a Kafka message wrapper that supports headers. If the Confluent Platform supported optional custom headers, you could ship a generic flume-ng interceptor that formats messages for Kafka in a way that is compatible with the Confluent Platform.
2) This change data capture project for MySQL uses Avro with a schema id, similar to the Confluent Platform, but needs additional metadata such as the mutation type. Again, if we had a flexible message envelope that allowed custom headers, a single format could accommodate this use case as well.
https://github.com/mardambey/mypipe#kafka-message-format
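The envelope idea behind both use cases can be sketched as a thin wrapper that carries a string-to-string header map alongside an opaque payload. This is a hypothetical illustration (the `wrap`/`unwrap` names and the JSON-with-hex encoding are mine, not anything Confluent Platform or mypipe actually defines); a real implementation would more likely use Avro or another binary format:

```python
import json

def wrap(headers, payload):
    # Envelope with a string-to-string header map plus an opaque payload.
    # Payload bytes are hex-encoded only so this sketch can use JSON;
    # a null payload is preserved as null.
    return json.dumps({
        "headers": headers,
        "payload": payload.hex() if payload is not None else None,
    }).encode("utf-8")

def unwrap(envelope_bytes):
    # Recover (headers, payload) from an envelope produced by wrap().
    obj = json.loads(envelope_bytes.decode("utf-8"))
    raw = obj["payload"]
    return obj["headers"], (bytes.fromhex(raw) if raw is not None else None)
```

With something like this, the flume-ng interceptor from use case 1 would only need to fill the header map (e.g. `{"agent-host": ..., "environment": ...}`) and pass the serialized record through as the payload.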
I created a GitHub issue in case it helps keep track of this thread: https://github.com/confluentinc/schema-registry/issues/153
Cheers,
Roger
Resurrecting this old thread.
A number of people have pointed out that this can be implemented using the current Avro schema, either by adding headers to the object itself, or by implementing an "envelope" that contains the headers, with the original object becoming a payload within the envelope.
There is one instance where this does not work: when using delete tombstones in a log compacted topic. Since delete tombstones are created by writing a message with a null payload, there is no place to add headers of any sort.
You could instead invent an envelope that represents a delete. For example, an envelope with headers but with a null payload could be used to represent a delete, and an application layer can interpret that as a delete. But Kafka will not interpret that as a delete tombstone, and so the delete envelope will stay around forever. If that topic has a large number of deletes, then the topic could get very large. It's possible to work around this with some sort of cleanup job, but there are some race conditions there.
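The distinction between a real tombstone and an application-level "delete envelope" can be made concrete. Assuming a JSON envelope of the hypothetical shape `{"headers": ..., "payload": ...}` (an illustration, not an actual Confluent format), the difference is whether Kafka itself sees a null value:

```python
import json

def is_delete(message_value):
    # A real Kafka tombstone: the message value itself is null, so log
    # compaction can eventually drop the key -- but there is nowhere to
    # attach headers.
    if message_value is None:
        return True
    # An application-level delete envelope: headers are present but the
    # wrapped payload is null. Kafka sees a non-null value, so compaction
    # will NOT treat this as a tombstone; the record stays in the log
    # until some external cleanup job removes it.
    envelope = json.loads(message_value.decode("utf-8"))
    return envelope.get("payload") is None
```

Only the application layer can recognize the second kind of delete, which is exactly why the compacted topic keeps growing.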
This also seems related to https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!topic/confluent-platform/XQTjNJd-TrU. If there were a standard way to add headers, it would also be possible to have a topic with mixed schemas (which is required to guarantee ordering across schema types). Instead of "schema id in message + fixed schema type" per topic, you could have "schema id in message + schema type in a header". Downstream consumers of that topic could filter on the "type" header to receive only certain types of messages, or trigger off the "type" header to take different actions depending on the message type.
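That consumer-side behavior could look something like the following sketch, where a "type" header selects a handler and unknown types are filtered out (the `dispatch` function and handler names are hypothetical):

```python
def dispatch(headers, payload, handlers):
    # Route a message based on its "type" header. Messages whose type has
    # no registered handler are filtered out (None is returned), which is
    # how a consumer of a mixed-schema topic would skip types it does not
    # care about.
    handler = handlers.get(headers.get("type"))
    return handler(payload) if handler is not None else None
```

A consumer could then register one handler per message type and process a single ordered topic carrying several schemas.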
James,

Could you explain a bit why a header is needed during a delete? Isn't it enough for a deletion to just specify a key? It seems that a customized header is only needed when you have inserts and updates.
On Jun 11, 2015, at 11:55 AM, Jun Rao <j...@confluent.io> wrote:

> James, could you explain a bit why a header is needed during a delete? Isn't it enough for a deletion to just specify a key? It seems that a customized header is only needed when you have inserts and updates.

Jun,

Here are some use cases that I'm considering (considering, but not yet implemented).

I'm working on change data capture for MySQL -> Kafka (similar to BottledWater). MySQL change events are of type insert/update/delete, and each has some common metadata associated with it that I (may) want to replicate to Kafka. For example, here is a MySQL write for primary key "userId = 1":

--------------- message start -----------------
BinlogEventStartPosition: 8288
BinlogFilename: mariadb-bin.000008
BinlogTransactionStartPosition: 8173
BinlogTimestamp: 1231231213
{"name":"James","userId":1}
--------------- message end ------------------

Those headers could be useful for monitoring progress. They could be used to figure out where to restart replication after a crash, instead of using a separate mechanism to checkpoint my source offsets. They could also be used to implement exactly-once semantics, using the techniques described in http://ben.kirw.in/2014/11/28/kafka-patterns/.

If I were replicating deletes, I would want those headers for the exact same reasons.
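The crash-recovery use case above can be sketched in a few lines: given the progress headers carried on the last message successfully written to Kafka, derive the binlog coordinates to resume replication from. The header names follow the example message; they are illustrative, not a specification:

```python
def restart_position(last_headers):
    # Derive the binlog coordinates to resume replication from, using
    # the replication-progress headers carried on the last message.
    # BinlogEventStartPosition arrives as a string header value, so it
    # is parsed back into an integer byte offset.
    return (last_headers["BinlogFilename"],
            int(last_headers["BinlogEventStartPosition"]))
```

After a crash, the replicator would read the tail of the Kafka topic, extract these headers from the last record, and restart the MySQL binlog stream at that position, with no separate checkpoint store needed.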