Loading data from Avro to a SQL Database fails with error message unable to sink into output identifier: 'unknown'

Knows Not Much

May 25, 2016, 6:50:28 PM
to cascading-user
I have a simple Avro file with this schema:

{
    "type":"record", 
    "name": "AutoGeneratedSchema",
    "doc":"Sqoop import of QueryResult",
    "fields":[
        {"name":"adCampaignId","type":["null","long"],"default":null,"columnName":"adCampaignId","sqlType":"-5"},
        {"name":"adCampaignContextId","type":["null","long"],"default":null,"columnName":"adCampaignContextId","sqlType":"-5"},
        {"name":"context","type":["null","string"],"default":null,"columnName":"context","sqlType":"12"},
        {"name":"created","type":["null","long"],"default":null,"columnName":"created","sqlType":"93"}
        ],
    "tableName":"QueryResult"
}

I have a table with this definition:

CREATE TABLE Foo (
A INT NOT NULL,
B INT NOT NULL,
C VARCHAR(85) NOT NULL,
D TIMESTAMP NOT NULL
);

When I run the Cascading job to insert the data, it fails with this error message:

Error: cascading.tuple.TupleException: unable to sink into output identifier: 'unknown'
	at cascading.tuple.TupleEntrySchemeCollector.collect(TupleEntrySchemeCollector.java:160)
	at cascading.tuple.TupleEntryCollector.safeCollect(TupleEntryCollector.java:145)
	at cascading.tuple.TupleEntryCollector.add(TupleEntryCollector.java:95)
	at cascading.tuple.TupleEntrySchemeCollector.add(TupleEntrySchemeCollector.java:134)
	at cascading.flow.stream.SinkStage.receive(SinkStage.java:90)
	at cascading.flow.stream.SinkStage.receive(SinkStage.java:37)
	at cascading.flow.stream.FunctionEachStage$1.collect(FunctionEachStage.java:80)
	at cascading.tuple.TupleEntryCollector.safeCollect(TupleEntryCollector.java:145)
	at cascading.tuple.TupleEntryCollector.add(TupleEntryCollector.java:133)
	at cascading.operation.Insert.operate(Insert.java:64)
	at cascading.flow.stream.FunctionEachStage.receive(FunctionEachStage.java:99)
	at cascading.flow.stream.FunctionEachStage.receive(FunctionEachStage.java:39)
	at cascading.flow.stream.SourceStage.map(SourceStage.java:102)
	at cascading.flow.stream.SourceStage.run(SourceStage.java:58)
	at cascading.flow.hadoop.FlowMapper.run(FlowMapper.java:130)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.io.IOException: unable to add batch statement
	at com.twitter.maple.jdbc.db.DBOutputFormat$DBRecordWriter.write(DBOutputFormat.java:185)
	at com.twitter.maple.jdbc.db.DBOutputFormat$DBRecordWriter.write(DBOutputFormat.java:59)
	at com.twitter.maple.jdbc.JDBCTapCollector.collect(JDBCTapCollector.java:116)
	at com.twitter.maple.jdbc.JDBCScheme.sink(JDBCScheme.java:673)
	at cascading.tuple.TupleEntrySchemeCollector.collect(TupleEntrySchemeCollector.java:153)
	... 21 more
Caused by: java.sql.SQLNonTransientException: [Vertica][JDBC](11500) Given type does not match given object: 4.
	at com.vertica.exceptions.ExceptionConverter.toSQLException(Unknown Source)
	at com.vertica.jdbc.common.SPreparedStatement.setObject(Unknown Source)
	at com.twitter.maple.jdbc.TupleRecord.write(TupleRecord.java:42)
	at com.twitter.maple.jdbc.db.DBOutputFormat$DBRecordWriter.write(DBOutputFormat.java:176)
	... 25 more
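If it helps, my reading of the innermost cause is that the Vertica driver rejects parameter 4, which would be column D (the TIMESTAMP column): the Avro "created" field arrives as a raw long (epoch millis, despite sqlType 93 in the Sqoop schema), and setObject with a Long does not match a TIMESTAMP column. A minimal sketch of the conversion I suspect is missing, with a made-up value (the class name and number are mine, not from the job):

```java
import java.sql.Timestamp;

public class CreatedToTimestamp {
    // The Avro "created" field is delivered as a raw Long in the tuple,
    // but the sink column D is declared TIMESTAMP. Converting the epoch
    // millis to java.sql.Timestamp before the JDBC sink should give the
    // driver an object whose type matches the column.
    public static Timestamp toTimestamp(long epochMillis) {
        return new Timestamp(epochMillis);
    }

    public static void main(String[] args) {
        long created = 1464216628000L; // hypothetical epoch-millis value
        Timestamp ts = toTimestamp(created);
        System.out.println(ts.getTime() == created); // prints "true"
    }
}
```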


Any idea what could be going wrong with the job?

Knows Not Much

May 25, 2016, 10:06:50 PM
to cascading-user