int to bigint conversion on insert


tod davis
Sep 6, 2017, 11:24:41 AM
to pyodbc




Hey all!


I'm using pyodbc to insert into a Kudu table via Impala SQL.

 

Given this table: 

 

CREATE TABLE genomics.pipeline_status (
  id INT NOT NULL ENCODING AUTO_ENCODING COMPRESSION DEFAULT_COMPRESSION,
  experiment_id INT NOT NULL ENCODING AUTO_ENCODING COMPRESSION DEFAULT_COMPRESSION,
  PRIMARY KEY (id)
)
PARTITION BY HASH (id) PARTITIONS 3
STORED AS KUDU
TBLPROPERTIES ('kudu.master_addresses'='<server>');


and this pyodbc call:

PYODBC CALL**************

try:
    # Kerberos ticket already acquired
    conn = pyodbc.connect('DSN=IMPALA_DEV', autocommit=True)
    #print str(connection)
    with conn.cursor() as cur:
        cur.execute(
            "insert into genomics.pipeline_status (id, experiment_id) values (?,?)",
            (int(id), int(experiment_id)))
except pyodbc.Error as e:
    logger.critical('getting sync rundb results FAILED: ' + str(e) + '\n' + logPath)
finally:
    logger.info('end inserting rows')


I get this error:

ERROR**************

getting sync rundb results FAILED: ('HY000', "[HY000] [Cloudera][ImpalaODBC] (110) Error while executing a query in Impala: [HY000] : AnalysisException: Possible loss of precision for target table 'genomics.pipeline_status'.\nExpression 'cast(5 as bigint)' (type: BIGINT) would need to be cast to INT for column 'id'\n (110) (SQLExecDirectW)")
/usr/local/scripts/genomics/logs/syncRunDbResults.py-2017_09_05_17_10_15.log



If I convert the INT columns in the table to BIGINT, it works perfectly. Is pyodbc making some inference here and binding my Python int parameters as BIGINT, even though the target columns are INT?

Is it possible to control that? I'd really rather not switch to BIGINT; the values will never come close to needing it.
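
One workaround I'm considering is casting the parameters explicitly in the SQL text, so the server narrows whatever the driver binds back down to INT itself. This is an untested sketch against the table above; whether the Cloudera ODBC driver accepts parameter markers inside CAST is my assumption:

WORKAROUND SKETCH**************

# Hypothetical: wrap each parameter marker in an explicit CAST so the
# BIGINT that pyodbc appears to bind is narrowed to INT on the Impala side.
cur.execute(
    "insert into genomics.pipeline_status (id, experiment_id) "
    "values (cast(? as int), cast(? as int))",
    (int(id), int(experiment_id)))

If that works, it would let me keep the INT columns without touching the table definition.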

Thanks, TD