Kafka-Rest examples


John Omernik

May 21, 2015, 9:18:39 AM
to confluent...@googlegroups.com
I am working to understand the platform better, and am a bit disappointed with some of the Kafka REST API examples provided.  Basically, as someone new to this, I feel the examples don't make it clear what is "what". This is what I mean. In the API docs example (below), from what I can tell the schema is:

{\"name\":\"int\",\"type\": \"int\"}"
The "name" of the field is "int" This is utterly confusing to someone trying to understand and grok what is going on.  In the blog example, things are bit better, but still have challenges: 

  "value_schema": "{\"type\": \"record\", \"name\": \"User\", \"fields\": [{\"name\": \"name\", \"type\": \"string\"}]}",
      "records": [
        {"value": {"name": "testUser"}},
        {"value": {"name": "testUser2"}}

However, the name here is "name", which once again is not helpful.

Additionally, I am at this point unclear on the concept of a "value_schema" vs. a "key_schema": when should I use each, or does it matter? I am also confused about how to format requests where I have all my data ready to go and my schema registered, and I am stumped on how to proceed with multiple records. In this case, I am using the Bro IDS connection log as a test case; the schema I registered is below.  At this point, my questions are:
1. Is this a key_schema or a value_schema? Is there documentation to help me understand that (I am a noob in this space)?
2. How do I format requests with multiple fields? (The examples, as I have said, are confusing to me, as I am not sure which parts are key or value names vs. keywords required for my records, and most seem to have only one value in the schema.)
3. Can I have a schema like this and not include a value?  Bro has the concept of "unset" fields: the difference between, say, a useragent header not existing in a request vs. it being set to a blank string ("").  Do I pass a "NULL" if a field is unset? How does Avro handle that? Do I just not include it?
4. If there is a page with some other examples, I would really appreciate it.  I am not trying to be negative, just pointing out my confusion as a "new" user to this space while trying to become a more experienced one :)
Thank you!

John
Bro ID Schema:
{
    "schema": "{\"type\": \"record\", \"name\": \"bro_default_conn\", 
    	\"fields\": [{\"name\": \"ts\", \"type\": \"double\", \"doc\": \"Default Bro Schema parse for ts\"},
		{\"name\": \"uid\", \"type\": \"string\", \"doc\": \"Default Bro Schema parse for uid\"},
		{\"name\": \"id_orig_h\", \"type\": \"string\", \"doc\": \"Default Bro Schema parse for id.orig_h\"},
		{\"name\": \"id_orig_p\", \"type\": \"int\", \"doc\": \"Default Bro Schema parse for id.orig_p\"},
		{\"name\": \"id_resp_h\", \"type\": \"string\", \"doc\": \"Default Bro Schema parse for id.resp_h\"},
		{\"name\": \"id_resp_p\", \"type\": \"int\", \"doc\": \"Default Bro Schema parse for id.resp_p\"},
		{\"name\": \"proto\", \"type\": \"string\", \"doc\": \"Default Bro Schema parse for proto\"},
		{\"name\": \"service\", \"type\": \"string\", \"doc\": \"Default Bro Schema parse for service\"},
		{\"name\": \"duration\", \"type\": \"string\", \"doc\": \"Default Bro Schema parse for duration\"},
		{\"name\": \"orig_bytes\", \"type\": \"long\", \"doc\": \"Default Bro Schema parse for orig_bytes\"},
		{\"name\": \"resp_bytes\", \"type\": \"long\", \"doc\": \"Default Bro Schema parse for resp_bytes\"},
		{\"name\": \"conn_state\", \"type\": \"string\", \"doc\": \"Default Bro Schema parse for conn_state\"},
		{\"name\": \"local_orig\", \"type\": \"boolean\", \"doc\": \"Default Bro Schema parse for local_orig\"},
		{\"name\": \"missed_bytes\", \"type\": \"long\", \"doc\": \"Default Bro Schema parse for missed_bytes\"},
		{\"name\": \"history\", \"type\": \"string\", \"doc\": \"Default Bro Schema parse for history\"},
		{\"name\": \"orig_pkts\", \"type\": \"long\", \"doc\": \"Default Bro Schema parse for orig_pkts\"},
		{\"name\": \"orig_ip_bytes\", \"type\": \"long\", \"doc\": \"Default Bro Schema parse for orig_ip_bytes\"},
		{\"name\": \"resp_pkts\", \"type\": \"long\", \"doc\": \"Default Bro Schema parse for resp_pkts\"},
		{\"name\": \"resp_ip_bytes\", \"type\": \"long\", \"doc\": \"Default Bro Schema parse for resp_ip_bytes\"},
		{\"name\": \"tunnel_parents\", \"type\": \"string\", \"doc\": \"Default Bro Schema parse for tunnel_parents\"}]}"
}

POST /topics/test HTTP/1.1
Host: kafkaproxy.example.com
Content-Type: application/vnd.kafka.avro.v1+json
Accept: application/vnd.kafka.v1+json, application/vnd.kafka+json, application/json

{
  "value_schema": "{\"name\":\"int\",\"type\": \"int\"}"
  "records": [
    {
      "value": 12
    },
    {
      "value": 24,
      "partition": 1
    }
  ]
}

Example from Blog (http://blog.confluent.io/2015/03/25/a-comprehensive-open-source-rest-proxy-for-kafka/)

curl -i -X POST -H "Content-Type: application/vnd.kafka.avro.v1+json" \
    --data '{
      "value_schema": "{\"type\": \"record\", \"name\": \"User\", \"fields\": [{\"name\": \"name\", \"type\": \"string\"}]}",
      "records": [
        {"value": {"name": "testUser"}},
        {"value": {"name": "testUser2"}}
      ]
    }' \
    http://localhost:8082/topics/avrotest

Ewen Cheslack-Postava

May 21, 2015, 1:37:20 PM
to confluent...@googlegroups.com
Responses inline.

On Thu, May 21, 2015 at 6:18 AM, John Omernik <jo...@omernik.com> wrote:
I am working to understand the platform better, and am a bit disappointed with some of the Kafka REST API examples provided.  Basically, as someone new to this, I feel the examples don't make it clear what is "what". This is what I mean. In the API docs example (below), from what I can tell the schema is:

{\"name\":\"int\",\"type\": \"int\"}"
The "name" of the field is "int" This is utterly confusing to someone trying to understand and grok what is going on.  In the blog example, things are bit better, but still have challenges: 

  "value_schema": "{\"type\": \"record\", \"name\": \"User\", \"fields\": [{\"name\": \"name\", \"type\": \"string\"}]}",
      "records": [
        {"value": {"name": "testUser"}},
        {"value": {"name": "testUser2"}}

However, the name here is "name", which once again is not helpful.
In this case the field is intended to hold a name (or username), so making the name of the field "name" makes sense. But I agree that's a bit confusing; making it "username" instead might be clearer. I've updated the blog post with that change.
 

Additionally, I am at this point unclear on the concept of a "value_schema" vs. a "key_schema": when should I use each, or does it matter? I am also confused about how to format requests where I have all my data ready to go and my schema registered, and I am stumped on how to proceed with multiple records. In this case, I am using the Bro IDS connection log as a test case; the schema I registered is below.  At this point, my questions are:
1. Is this a key_schema or a value_schema? Is there documentation to help me understand that (I am a noob in this space)?
Messages in Kafka can have keys and values. The value holds the content of the message, so your messages should always have values (and therefore you'll always want to specify a value_schema).

Keys are optional. If you provide a key, it'll be used to determine which partition to put the message in. This is useful for semantically organizing your data. For example, if each event is associated with a user, you might use their user ID as the key so all events for a given user are stored in the same partition. Then a downstream consumer is guaranteed to see all the events for that user, in order, in the same partition (and in a consumer group, it is guaranteed that a single consumer instance will see all events for that user). If you want to organize your events by some key, then you'll also need to include the key schema. Often the key schema will be very simple, usually just a primitive type like int or string.

For the schema you gave below, it would be the value_schema. I'm not sure if you'll have a key_schema since you didn't mention how you want to organize the data.
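
For example, if you did decide to key each connection record by its uid (purely a hypothetical choice on my part), the produce request would carry both schemas. Here's a sketch, trimmed down to a single value field; note that the key schema for a primitive type is just the JSON-encoded Avro type:

{
  "key_schema": "\"string\"",
  "value_schema": "{\"type\": \"record\", \"name\": \"bro_default_conn\", \"fields\": [{\"name\": \"ts\", \"type\": \"double\"}]}",
  "records": [
    {"key": "CJEy5h20lBY6V5QAI3", "value": {"ts": 1431637257.883239}}
  ]
}

Every record with the same key then lands in the same partition.
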
2. How do I format requests with multiple fields? (The examples, as I have said, are confusing to me, as I am not sure which parts are key or value names vs. keywords required for my records, and most seem to have only one value in the schema.)
It's probably easiest to explain by breaking down how you'd construct the request. At the highest level you'll have the produce request, which should contain your value_schema and a list of records:

{
  "value_schema": "{\"type\": \"record\", \"name\": \"bro_default_conn\","{\"name\":\"int\",\"type\": \"int\"}" .... }",
  "records": [ RECORDS ]
}

RECORDS should then be a list of objects representing each record. The structure of each of these should look like this:

{
  "key": KEY,
  "value": VALUE,
  "partition": PARTITION
}

You *can* specify all those fields, but key and partition are not required, so each one could be as simple as:

{
  "value": VALUE
}

KEY and VALUE should be JSON-encoded Avro (http://avro.apache.org/docs/1.7.7/spec.html#json_encoding) embedded directly. For primitive types like int and string, this means you just put those values in directly. For a record type like your example, VALUE would look something like this:

{
  "uid": "...",
  "id_orig_h" : "...",
  "id_orig_p": "..."
  ... rest of the fields here ...
}
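
Putting the pieces together, a complete request for your schema, trimmed down to two fields with made-up values, might look like this (a sketch, not tested):

{
  "value_schema": "{\"type\": \"record\", \"name\": \"bro_default_conn\", \"fields\": [{\"name\": \"ts\", \"type\": \"double\"}, {\"name\": \"uid\", \"type\": \"string\"}]}",
  "records": [
    {"value": {"ts": 1431637257.883239, "uid": "CJEy5h20lBY6V5QAI3"}},
    {"value": {"ts": 1431637258.101425, "uid": "CXWv6p3arKYeMETxOg"}}
  ]
}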

3. Can I have a schema like this and not include a value?  Bro has the concept of "unset" fields: the difference between, say, a useragent header not existing in a request vs. it being set to a blank string ("").  Do I pass a "NULL" if a field is unset? How does Avro handle that? Do I just not include it?
In Avro, null is a type. So if you need to be able to omit the field, you would define a union schema of null and the other type; e.g., the field definition would look like this:

{ "name": "useragent", "type": ["null", "string"], "doc": "..." }

See http://avro.apache.org/docs/1.7.7/spec.html#Unions for more info about unions in Avro. Note that this affects how you have to encode the JSON (as mentioned in the previously linked section).
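
For example, with a field declared as ["null", "string"], a set value is written under its branch name, while an unset value is a bare JSON null (the useragent value here is made up):

{"useragent": {"string": "Mozilla/5.0"}}

{"useragent": null}
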
4. If there is a page with some other examples, I would really appreciate it.  I am not trying to be negative, just pointing out my confusion as a "new" user to this space while trying to become a more experienced one :)
I think this is confusing because if you're new to the entire stack, you're simultaneously trying to understand at least three things:

1. How to use Avro to represent your data.
2. How Kafka works (keys and values in messages, how partitioning works, etc.)
3. The REST Proxy and how it uses/interacts with the previous two.

For 1, we provide a basic introduction (http://confluent.io/docs/current/avro.html), but starting with the Avro introduction & getting-started tutorials (e.g. http://avro.apache.org/docs/current/gettingstartedjava.html) is going to be the easiest approach since they focus solely on Avro. For 2, reading through all of the "Getting Started" section of the Kafka docs (http://kafka.apache.org/documentation.html#gettingStarted) is probably best. Once you're familiar with 1 & 2, the REST Proxy docs will be less confusing since it'll be easier to map the examples to Kafka.

-Ewen


John Omernik

May 21, 2015, 2:13:49 PM
to confluent...@googlegroups.com
This is a fantastic response. Thank you so much for taking the time to clearly explain it. 

John


John Omernik

May 21, 2015, 4:53:39 PM
to confluent...@googlegroups.com
Are there any tools available to help troubleshoot badly formatted Kafka REST requests? Below is the request I sent and the 422 error.  I am guessing it's because my JSON has double quotes around data that should be an int or long, and thus the error, but I have no way to confirm that.  Before I write a tool to check that per field type in my script, I wanted to know if there were any more verbose error messages I might be able to use.  Also, IF that is the error, what advantage do I get by strict typing of my data vs. just using string or bytes for everything? I know that data purists will hate me, but figuring this out can be a challenge to say the least, and I am curious if there is any discussion on using looser types vs. stricter types.

Error:

HTTP/1.1 422
Content-Length: 31
Content-Type: application/json
Server: Jetty(8.1.16.v20140903)

{"error_code":422,"message":""}



JSON sent:

{
    "value_schema_id": 101,
    "records": [
        {
            "ts": "1431637257.883239",
            "uid": "CJEy5h20lBY6V5QAI3",
            "id_orig_h": "192.168.225.103",
            "id_orig_p": "2148",
            "id_resp_h": "192.168.100.40",
            "id_resp_p": "20050",
            "proto": "tcp",
            "service": "NULL",
            "duration": "NULL",
            "orig_bytes": "NULL",
            "resp_bytes": "NULL",
            "conn_state": "S0",
            "local_orig": "NULL",
            "missed_bytes": "0",
            "history": "S",
            "orig_pkts": "1",
            "orig_ip_bytes": "48",
            "resp_pkts": "0",
            "resp_ip_bytes": "0",
            "tunnel_parents": "NULL"
        }
    ]
}

Ewen Cheslack-Postava

May 21, 2015, 5:09:02 PM
to confluent...@googlegroups.com
Hmm, usually we try to pass along any error information we can, but it looks like we're not in this case. I filed https://github.com/confluentinc/kafka-rest/issues/81 to address that issue.

Looking at your request and comparing to the schema you gave earlier, I think you have some type mismatches. For example, missed_bytes is listed as a long in the schema, but you've encoded it as a string.
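
I.e., since the schema declares it a long, it needs to be a bare JSON number:

"missed_bytes": 0

and the same goes for the other int/long fields (and local_orig, which is a boolean).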

Another way of debugging this is to just try to decode the payload yourself with Avro. Avro provides some tools to do basic translation of data, and should give you a more informative message if it fails. Here's a blog post covering some of the basics of these tools: http://www.michael-noll.com/blog/2013/03/17/reading-and-writing-avro-files-from-the-command-line/. You probably want the fromjson tool, which reads JSON and writes binary.
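
For example, something along these lines (file names are hypothetical):

java -jar avro-tools-1.7.7.jar fromjson --schema-file brocon.avsc record.json > record.avro

If the JSON doesn't match the schema, the tool throws an exception describing the mismatch, which is more helpful than an empty 422.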

-Ewen


John Omernik

May 22, 2015, 8:22:51 AM
to confluent...@googlegroups.com
So I am still struggling with things even after I think I fixed my typing issues (those items are below).  One thing that may be helpful as part of the Confluent platform (and I would imagine would be a fairly simple tool to create) would be to take the jars from the michael-noll.com blog and wrap them in a web front end.  This would allow you to integrate the testing with the Schema Registry.  What I mean by that is you could have a space for a user to provide their schema, and then it can be checked against the Schema Registry: does it return an id as intended? It could be pretty-printed, and if there are errors, possibly highlight them (that would be a bit more complex).  Then you could take either a schema or a schema id (because it's Schema Registry aware) and, if it's an id, have the tool grab the schema, then take the JSON-encoded example and "test it": see how the formatting is and provide verbose errors.  At the very least, it may reduce wonky posts like mine here, but it could also be a great way in an enterprise to give users tools to get these right.  I may try a PoC using a Python Flask thing, but I would imagine that on your side, you'd want to use a Jetty server like you have done for the API and the registry for consistency (Java scares me).

As to what I have now: I have the following schema (using a null for every field, and the default is null), and I am trying to post the one record and still getting the blank 422 error.  I have the actual request off the wire first, then the schema fields and the "prettied" data below.

POST /topics/brocon HTTP/1.1
Host: kafka-rest.marathon.mesos:8192
Content-Length: 622
User-Agent: python-requests/2.7.0 CPython/2.7.3 Linux/3.13.0-30-generic
Connection: keep-alive
Accept: application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json
Accept-Encoding: gzip, deflate

{"value_schema_id": 105, "records": [{"ts": {"string": "1431637257.883239"}, "uid": {"string": "CJEy5h20lBY6V5QAI3"}, "id_orig_h": {"string": "192.168.225.103"}, "id_orig_p": {"int": 2148}, "id_resp_h": {"string": "192.168.2.2"}, "id_resp_p": {"int": 20050}, "proto": {"string": "tcp"}, "service": "null", "duration": "null", "orig_bytes": "null", "resp_bytes": "null", "conn_state": {"string": "S0"}, "local_orig": "null", "missed_bytes": {"long": 0}, "history": {"string": "S"}, "orig_pkts": {"long": 1}, "orig_ip_bytes": {"long": 48}, "resp_pkts": {"long": 0}, "resp_ip_bytes": {"long": 0}, "tunnel_parents": "null"}]}



Schema ID returned: 105

{
    "fields": [
        {"default": "null", "doc": "Default Bro Schema parse for ts", "type": ["null", "string"], "name": "ts"},
        {"default": "null", "doc": "Default Bro Schema parse for uid", "type": ["null", "string"], "name": "uid"},
        {"default": "null", "doc": "Default Bro Schema parse for id.orig_h", "type": ["null", "string"], "name": "id_orig_h"},
        {"default": "null", "doc": "Default Bro Schema parse for id.orig_p", "type": ["null", "int"], "name": "id_orig_p"},
        {"default": "null", "doc": "Default Bro Schema parse for id.resp_h", "type": ["null", "string"], "name": "id_resp_h"},
        {"default": "null", "doc": "Default Bro Schema parse for id.resp_p", "type": ["null", "int"], "name": "id_resp_p"},
        {"default": "null", "doc": "Default Bro Schema parse for proto", "type": ["null", "string"], "name": "proto"},
        {"default": "null", "doc": "Default Bro Schema parse for service", "type": ["null", "string"], "name": "service"},
        {"default": "null", "doc": "Default Bro Schema parse for duration", "type": ["null", "string"], "name": "duration"},
        {"default": "null", "doc": "Default Bro Schema parse for orig_bytes", "type": ["null", "long"], "name": "orig_bytes"},
        {"default": "null", "doc": "Default Bro Schema parse for resp_bytes", "type": ["null", "long"], "name": "resp_bytes"},
        {"default": "null", "doc": "Default Bro Schema parse for conn_state", "type": ["null", "string"], "name": "conn_state"},
        {"default": "null", "doc": "Default Bro Schema parse for local_orig", "type": ["null", "boolean"], "name": "local_orig"},
        {"default": "null", "doc": "Default Bro Schema parse for missed_bytes", "type": ["null", "long"], "name": "missed_bytes"},
        {"default": "null", "doc": "Default Bro Schema parse for history", "type": ["null", "string"], "name": "history"},
        {"default": "null", "doc": "Default Bro Schema parse for orig_pkts", "type": ["null", "long"], "name": "orig_pkts"},
        {"default": "null", "doc": "Default Bro Schema parse for orig_ip_bytes", "type": ["null", "long"], "name": "orig_ip_bytes"},
        {"default": "null", "doc": "Default Bro Schema parse for resp_pkts", "type": ["null", "long"], "name": "resp_pkts"},
        {"default": "null", "doc": "Default Bro Schema parse for resp_ip_bytes", "type": ["null", "long"], "name": "resp_ip_bytes"},
        {"default": "null", "doc": "Default Bro Schema parse for tunnel_parents", "type": ["null", "string"], "name": "tunnel_parents"}
    ],
    "type": "record",
    "name": "brocon"
}


Data attempted to post:


{
    "value_schema_id": 105,
    "records": [
        {
            "ts": {"string": "1431637257.883239"},
            "uid": {"string": "CJEy5h20lBY6V5QAI3"},
            "id_orig_h": {"string": "192.168.225.103"},
            "id_orig_p": {"int": 2148},
            "id_resp_h": {"string": "192.168.2.2"},
            "id_resp_p": {"int": 20050},
            "proto": {"string": "tcp"},
            "service": "null",
            "duration": "null",
            "orig_bytes": "null",
            "resp_bytes": "null",
            "conn_state": {"string": "S0"},
            "local_orig": "null",
            "missed_bytes": {"long": 0},
            "history": {"string": "S"},
            "orig_pkts": {"long": 1},
            "orig_ip_bytes": {"long": 48},
            "resp_pkts": {"long": 0},
            "resp_ip_bytes": {"long": 0},
            "tunnel_parents": "null"
        }
    ]
}


Ewen Cheslack-Postava

May 22, 2015, 12:09:13 PM
to confluent...@googlegroups.com
Your data has fields like

"service": "null",

but nulls are encoded in JSON directly as null, i.e.

"service": null,

It looks like you have the same problem with your default values:

"default": "null",

which for the fields that are type ["null", "string"] is actually making the default value the string "null", not null.
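
To spell it out with one of your fields, the corrected field definition and the two possible encodings of a value would look like this ("dns" is just a made-up example value; note the union branch name when the field is set):

{"name": "service", "type": ["null", "string"], "default": null}

"service": null
"service": {"string": "dns"}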



--
Thanks,
Ewen

John Omernik

May 22, 2015, 3:13:08 PM
to confluent...@googlegroups.com
OK, got it. In the schema, I made the "type": ["null", "string"] and the "default": null (the null in the type should still be quoted, per https://avro.apache.org/docs/1.7.7/spec.html#Data+Serialization).
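
So each field definition now looks like this (taking ts as the example):

{"name": "ts", "type": ["null", "string"], "default": null, "doc": "Default Bro Schema parse for ts"}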

Then I have the record looking like this, and I still get the same 422 error with no message:

{
    "value_schema_id": 121,

Schema:

{
    "fields": [
        {
            "default": null,
    "name": "brocon1"
}

...

John Omernik

May 26, 2015, 10:12:27 AM
to confluent...@googlegroups.com
I cleared all the previous messages to save space for folks.  I was able to create an Avro file with avro-tools using the schema I am trying to use and the data I am trying to post, but I am still getting a 422 error back from Kafka REST.  Where can I look next for troubleshooting?

Thanks!

John

Jun Rao

May 26, 2015, 3:58:03 PM
to confluent...@googlegroups.com
John,

Your latest format looks correct. Could you try simplifying the schema to see which field is causing the problem?
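
For instance, a request stripped down to a single field might look something like this (a sketch based on your earlier schema and data):

{"value_schema": "{\"type\": \"record\", \"name\": \"brocon\", \"fields\": [{\"name\": \"ts\", \"type\": [\"null\", \"string\"], \"default\": null}]}", "records": [{"value": {"ts": {"string": "1431637257.883239"}}}]}

If that succeeds, add fields back a few at a time until it fails.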

Thanks,

Jun


John Omernik

May 26, 2015, 5:24:07 PM
to confluent...@googlegroups.com
I am looking to build the 2.0-SNAPSHOT of kafka-rest right now, hoping for more robust error messages.  I "could" go through and simplify the schema, but there is a stubborn part of me that feels that would be a workaround for not getting clearer errors. (Now I am working through issues in getting the 2.0-SNAPSHOT compiled... :)

Note: since tone doesn't come through well in text, I mean this positively. I.e., it's my own stubbornness I am working through; I don't want to break down my data/schema, because it works in avro-tools and I SHOULD be able to figure this out :) I'll keep the list posted :)

