Encountering exception when shutting down the HDFS Sink connector

Mangesh B

Jul 14, 2016, 8:51:10 AM
to Confluent Platform
The following exception is thrown when I shut down the HDFS sink connector worker process with Ctrl-C. I am using Kafka Connect version 2.0.1.

[2016-07-14 08:36:50,882] ERROR java.nio.channels.ClosedChannelException (io.confluent.connect.hdfs.TopicPartitionWriter:294)
org.apache.kafka.connect.errors.ConnectException: java.nio.channels.ClosedChannelException
        at io.confluent.connect.hdfs.TopicPartitionWriter.getWriter(TopicPartitionWriter.java:424)
        at io.confluent.connect.hdfs.TopicPartitionWriter.writeRecord(TopicPartitionWriter.java:488)
        at io.confluent.connect.hdfs.TopicPartitionWriter.write(TopicPartitionWriter.java:264)
        at io.confluent.connect.hdfs.DataWriter.write(DataWriter.java:234)
        at io.confluent.connect.hdfs.HdfsSinkTask.put(HdfsSinkTask.java:91)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:280)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:176)
        at org.apache.kafka.connect.runtime.WorkerSinkTaskThread.iteration(WorkerSinkTaskThread.java:90)
        at org.apache.kafka.connect.runtime.WorkerSinkTaskThread.execute(WorkerSinkTaskThread.java:58)
        at org.apache.kafka.connect.util.ShutdownableThread.run(ShutdownableThread.java:82)
Caused by: java.nio.channels.ClosedChannelException
        at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:1622)
        at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:104)
        at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
        at java.io.DataOutputStream.write(DataOutputStream.java:107)
        at org.apache.avro.file.DataFileWriter$BufferedFileOutputStream$PositionFilter.write(DataFileWriter.java:446)
        at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
        at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
        at org.apache.avro.io.BufferedBinaryEncoder$OutputStreamSink.innerFlush(BufferedBinaryEncoder.java:220)
        at org.apache.avro.io.BufferedBinaryEncoder.flush(BufferedBinaryEncoder.java:85)
        at org.apache.avro.file.DataFileWriter.create(DataFileWriter.java:154)
        at io.confluent.connect.hdfs.avro.AvroRecordWriterProvider.getRecordWriter(AvroRecordWriterProvider.java:99)
        at io.confluent.connect.hdfs.TopicPartitionWriter.getWriter(TopicPartitionWriter.java:416)
        ... 9 more
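
For reference, here is a minimal standalone sketch (my own illustration, not the connector's code) of the failure mode the trace shows: once the HDFS client is closed, any further write on a still-open output stream fails with ClosedChannelException. The hdfs://localhost:8020 address and the file path are placeholders.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ClosedChannelRepro {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(
                URI.create("hdfs://localhost:8020"), new Configuration());

        // Open a file for writing, as the Avro record writer does internally.
        FSDataOutputStream out = fs.create(new Path("/tmp/closed-channel-repro"));
        out.writeBytes("first write succeeds\n");

        // Closing the FileSystem tears down the underlying DFS client and its
        // open streams, which is effectively what an abrupt shutdown does...
        fs.close();

        // ...so the next write fails in DFSOutputStream.checkClosed() with
        // java.nio.channels.ClosedChannelException, matching the trace above.
        out.writeBytes("second write throws\n");
    }
}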

Has anyone encountered this exception?

Thanks
Mangesh

Dustin Cote

Jul 14, 2016, 10:33:24 AM
to confluent...@googlegroups.com
Hi Mangesh,

You'll often see this type of exception when an HDFS client is shut down while it still has a file open. Did you find any missing data, or anything functionally wrong, after you shut down the connector? Did you perhaps try to shut down the connector multiple times and interrupt the clean shutdown process?
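
For what it's worth, a clean shutdown boils down to flushing and closing any open Avro writer before the process exits. Here is a minimal sketch of that pattern (illustrative only, not the connector's actual shutdown code; the writer object is assumed to already exist). Interrupting this kind of cleanup mid-flush is exactly what leaves the channel closed under an in-flight write:

import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericRecord;

public final class WriterCleanup {
    // Registers a JVM shutdown hook that closes an open Avro writer.
    // DataFileWriter.close() flushes any buffered blocks and closes the
    // wrapped HDFS output stream; if the process dies before (or during)
    // this, later writes hit a closed channel like the trace above.
    public static void install(final DataFileWriter<GenericRecord> writer) {
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                try {
                    writer.close();
                } catch (Exception e) {
                    System.err.println("Clean close failed: " + e);
                }
            }
        });
    }
}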

-Dustin

Nivethika Mahasivam

Nov 17, 2016, 5:33:57 AM
to Confluent Platform
Dear Dustin,
I get this exception quite often while testing my HDFS connector.
As you mentioned, I do shut down and interrupt the connector multiple times.
What is the actual root cause of this exception?
How can I fix this and keep the workflow running continuously?
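
One way to avoid interrupting the clean shutdown during tests is to stop the connector through the Kafka Connect REST API rather than killing the worker repeatedly. A minimal sketch, assuming a worker listening on the default port 8083 and a connector registered under the (hypothetical) name hdfs-sink:

import java.net.HttpURLConnection;
import java.net.URL;

public class StopConnector {
    public static void main(String[] args) throws Exception {
        // DELETE /connectors/{name} asks the worker to stop the connector's
        // tasks gracefully, letting them commit offsets and close HDFS files.
        URL url = new URL("http://localhost:8083/connectors/hdfs-sink");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("DELETE");
        // 204 No Content indicates the connector was removed cleanly.
        System.out.println("HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}

The worker itself keeps running, so the connector can be re-created with the same configuration for the next test run.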