Hi,
thanks for the reply, really appreciated.
Using the BashOperator, as below, it works:
task1 = BashOperator(
    task_id='sqoop_prova_1',
    bash_command='HADOOP_USER_NAME=hdfs /opt/hops/sqoop/bin/sqoop import --connect \'jdbc:mysql://MY_URL:3306/MY_DB\' --username MY_USR --password MY_PSW --table MY_TBL --driver com.mysql.jdbc.Driver --target-dir /Projects/TestP/sqoop-import/MY_TBL -m 1',
    dag=dag)
Using the HopsworksSqoopOperator, as you showed me, it does not work:
CONNECTION_ID = "hopsworks_jdbc"
PROJECT_NAME = "TestP"
task1 = HopsworksSqoopOperator(
    task_id='sqoop_prova_4',
    dag=dag,
    conn_id=CONNECTION_ID,
    project_name=PROJECT_NAME,
    table='MY_TBL',
    target_dir='/Projects/TestP/sqoop-import/MY_TBL',
    verbose=False,
    cmd_type='import',
    driver="com.mysql.jdbc.Driver",
    file_type='text')
I get this error in the Hadoop log:
2019-12-18 15:54:24,134 WARN io.hops.transaction.handler.RequestHandler: SET_REPLICATION TX Failed. TX Time: 2 ms, RetryCount: 0, TX Stats -- Setup: 0ms, AcquireLocks: 2ms, InMemoryProcessing: -1ms, CommitTime: -1ms. Locks: INodeLock {paths=[/Projects/TestP/Resources/.mrStaging/TestP__meb10000/.staging/job_1576656122889_0005/job.split], lockType=WRITE_ON_TARGET_AND_PARENT }. java.lang.IllegalArgumentException
java.lang.IllegalArgumentException
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:77)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.getStorageTypeDeltas(FSDirectory.java:578)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:485)
at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.unprotectedSetReplication(FSDirAttrOp.java:602)
at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp$4.performTask(FSDirAttrOp.java:260)
at io.hops.transaction.handler.TransactionalRequestHandler.execute(TransactionalRequestHandler.java:100)
at io.hops.transaction.handler.HopsTransactionalRequestHandler.execute(HopsTransactionalRequestHandler.java:50)
at io.hops.transaction.handler.RequestHandler.handle(RequestHandler.java:68)
at io.hops.transaction.handler.RequestHandler.handle(RequestHandler.java:63)
at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setReplication(FSDirAttrOp.java:272)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setReplication(FSNamesystem.java:1442)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setReplication(NameNodeRpcServer.java:560)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setReplication(ClientNamenodeProtocolServerSideTranslatorPB.java:463)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:996)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:850)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:793)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1929)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2786)
Thanks a lot,
Antony