writer.builder.class.0=gobblin.writer.HiveWritableHdfsDataWriterBuilder
writer.writable.class.0=org.apache.hadoop.hive.ql.io.orc.OrcSerde
writer.output.format.class.0=org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat
But the MR job errors out with the following exception while writing data records:
java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Failed to create writer
at gobblin.writer.PartitionedDataWriter.write(PartitionedDataWriter.java:110)
serde.serializer.type=ORC
serde.deserializer.type=org.apache.hadoop.hive.serde2.OpenCSVSerde
serde.deserializer.input.format.type=org.apache.hadoop.mapred.TextInputFormat
serde.deserializer.output.format.type=org.apache.hadoop.mapred.TextOutputFormat
If the converter successfully converts the record, the HiveWritableHdfsDataWriter should be able to write it.
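For reference, this is a minimal sketch of the full set of properties as I understand they should fit together. The converter.classes line is my assumption that the records are first run through Gobblin's HiveSerDeConverter (which reads the serde.* keys), and the .0 suffixes assume fork branch 0:

    # Assumed: deserialize CSV input, re-serialize as ORC via HiveSerDeConverter
    converter.classes=gobblin.converter.serde.HiveSerDeConverter
    serde.serializer.type=ORC
    serde.deserializer.type=org.apache.hadoop.hive.serde2.OpenCSVSerde
    serde.deserializer.input.format.type=org.apache.hadoop.mapred.TextInputFormat
    serde.deserializer.output.format.type=org.apache.hadoop.mapred.TextOutputFormat

    # Writer for branch 0, emitting ORC files
    writer.builder.class.0=gobblin.writer.HiveWritableHdfsDataWriterBuilder
    writer.writable.class.0=org.apache.hadoop.hive.ql.io.orc.OrcSerde
    writer.output.format.class.0=org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat

If anyone can confirm whether writer.writable.class.0 should point at the serde class or at the Writable type the serde produces, that would help narrow down why the writer fails to be created.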