Secure Kafka - SASL_PLAINTEXT error when producing messages.


Thibaud Chardonnens

Apr 20, 2017, 11:48:28 AM
to Confluent Platform
Hi!

I am using a Kafka cluster (0.10.2.0) with multiple listeners (PLAINTEXT, SSL and SASL_PLAINTEXT)

Here is the relevant part of a broker config:
############################# Server Basics #############################
inter.broker.protocol.version=0.10.2
log.message.format.version=0.8.2

############################# Socket Server Settings #############################
listeners=PLAINTEXT://<<IP>>:9092,SSL://<<IP>>:9093,SASL_PLAINTEXT://<<IP>>:9095
advertised.listeners=PLAINTEXT://<<FQDN>>:9092,SSL://<<FQDN>>:9093,SASL_PLAINTEXT://<<FQDN>>:9095
num.network.threads=8
num.io.threads=8
auto.create.topics.enable=true
delete.topic.enable=true
auto.leader.rebalance.enable=true
fetch.purgatory.purge.interval.requests=100
producer.purgatory.purge.interval.requests=100
queued.max.requests=16

############################# SSL Config #############################
ssl.client.auth=required
ssl.keystore.location=/path/to/keystore.jks
ssl.keystore.password=****
ssl.key.password=****
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=****

############################# SASL config ###########################
sasl.enabled.mechanisms=GSSAPI
sasl.mechanism.inter.broker.protocol=GSSAPI
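
For reference, the broker's JAAS entry follows the standard layout from the Kafka SASL/Kerberos docs, passed with -Djava.security.auth.login.config (keytab path and principal redacted here, like the other placeholders):

KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/path/to/kafka.keytab"
    principal="kafka/<<FQDN>>@MYREALM.COM";
};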

I have a simple Java producer (0.10.2.0) that writes to a test topic. It works perfectly when using the PLAINTEXT or SSL endpoints, but fails over SASL_PLAINTEXT.

Relevant part of the producer config:
bootstrap.servers=<<FQDN>>:9095
acks=all
retries=0

security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI

I pass my JAAS config file to the JVM with -Djava.security.auth.login.config=/path/to/jaas_file.conf
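
The JAAS file contains the standard client entry using the ticket cache, which matches the successful Kerberos login in the debug output below:

KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useTicketCache=true;
};

The service=kafka in the SaslClientAuthenticator line below also matches the primary of the broker principal, as far as I can tell.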

I enabled the DEBUG log level:
...
2017-04-20 17:29:07 DEBUG Metadata:244 - Updated cluster metadata version 1 to Cluster(id = null, nodes = [<<FQDN>>:9095 (id: -1 rack: null)], partitions = [])
2017-04-20 17:29:08 INFO  AbstractLogin:59 - Successfully logged in.
2017-04-20 17:29:08 DEBUG KerberosLogin:137 - [Principal=myp...@MYREALM.COM]: It is a Kerberos ticket
2017-04-20 17:29:08 INFO  KerberosLogin:145 - [Principal=myp...@MYREALM.COM]: TGT refresh thread started.
2017-04-20 17:29:08 DEBUG KerberosLogin:338 - Found TGT with client principal 'myp...@MYREALM.COM' and server principal 'krbtgt/MYREA...@MYREALM.COM'.
2017-04-20 17:29:08 INFO  KerberosLogin:321 - [Principal=myp...@MYREALM.COM]: TGT valid starting at: Thu Apr 20 17:29:08 CEST 2017
2017-04-20 17:29:08 INFO  KerberosLogin:322 - [Principal=myp...@MYREALM.COM]: TGT expires: Fri Apr 21 17:29:08 CEST 2017
2017-04-20 17:29:08 INFO  KerberosLogin:199 - [Principal=myp...@MYREALM.COM]: TGT refresh sleeping until: Fri Apr 21 12:51:58 CEST 2017
...
2017-04-20 17:29:08 DEBUG Sender:121 - Starting Kafka producer I/O thread.
2017-04-20 17:29:08 INFO  AppInfoParser:83 - Kafka version : 0.10.2.0
2017-04-20 17:29:08 INFO  AppInfoParser:84 - Kafka commitId : 576d93a8dc0cf421
2017-04-20 17:29:08 DEBUG KafkaProducer:336 - Kafka producer started
2017-04-20 17:29:08 DEBUG NetworkClient:767 - Initialize connection to node -1 for sending metadata request
2017-04-20 17:29:08 DEBUG NetworkClient:627 - Initiating connection to node -1 at <<FQDN>>:9095.
2017-04-20 17:29:08 DEBUG SaslClientAuthenticator:207 - Set SASL client state to SEND_HANDSHAKE_REQUEST
2017-04-20 17:29:08 DEBUG SaslClientAuthenticator:132 - Creating SaslClient: client=myp...@MYREALM.COM;service=kafka;serviceHostname=<<FQDN>>;mechs=[GSSAPI]
2017-04-20 17:29:08 DEBUG Metrics:335 - Added sensor with name node--1.bytes-sent
2017-04-20 17:29:08 DEBUG Metrics:335 - Added sensor with name node--1.bytes-received
2017-04-20 17:29:08 DEBUG Metrics:335 - Added sensor with name node--1.latency
2017-04-20 17:29:08 DEBUG Selector:339 - Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1
2017-04-20 17:29:08 DEBUG SaslClientAuthenticator:207 - Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE
2017-04-20 17:29:08 DEBUG NetworkClient:590 - Completed connection to node -1.  Fetching API versions.
2017-04-20 17:29:08 DEBUG SaslClientAuthenticator:207 - Set SASL client state to INITIAL
2017-04-20 17:29:08 DEBUG SaslClientAuthenticator:207 - Set SASL client state to INTERMEDIATE
2017-04-20 17:29:08 DEBUG Selector:375 - Connection with <<FQDN>>/<<IP>> disconnected
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:83)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.receiveResponseOrToken(SaslClientAuthenticator.java:242)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.authenticate(SaslClientAuthenticator.java:185)
at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:71)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:350)
at org.apache.kafka.common.network.Selector.poll(Selector.java:303)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:349)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:225)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:126)
at java.lang.Thread.run(Thread.java:745)
2017-04-20 17:29:08 DEBUG NetworkClient:570 - Node -1 disconnected.
2017-04-20 17:29:08 DEBUG NetworkClient:680 - Give up sending metadata request since no node is available

 The EOFException comes from here (NetworkReceive.java:83):
    // Need a method to read from ReadableByteChannel because BlockingChannel requires read with timeout
    // See: http://stackoverflow.com/questions/2866557/timeout-for-socketchannel-doesnt-work
    // This can go away after we get rid of BlockingChannel
    @Deprecated
    public long readFromReadableChannel(ReadableByteChannel channel) throws IOException {
        int read = 0;
        if (size.hasRemaining()) {
            int bytesRead = channel.read(size);
            if (bytesRead < 0)
                throw new EOFException();

Any idea what could be the root cause of the issue and how to solve the problem?

Gajendra Subramanyam

Sep 2, 2017, 1:14:15 AM
to Confluent Platform
From the stack trace, it looks like the SASL client is receiving an EOF while waiting for the COMPLETE/FAILED response from the SASL server on the Kafka broker.
Can you enable the debug logs on the broker and see why authentication is failing?
I assume the JAAS config file that you have passed to Kafka is valid.
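To get more Kerberos detail on either side, the standard JVM debug flag should help, e.g. via the environment variable the Kafka scripts honor:

export KAFKA_OPTS="-Dsun.security.krb5.debug=true"

And on the broker, raising the Kafka loggers to DEBUG in log4j.properties usually shows which step of the GSSAPI exchange is rejected before the socket is closed:

log4j.logger.org.apache.kafka=DEBUG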