The Salesforce source plugin can fail with the exception "Closing already closed Job not Allowed"
WARN [main:o.a.h.m.YarnChild@187] - Exception running child : java.io.IOException: [AsyncApiException exceptionCode='InvalidJobState'
exceptionMessage='Closing already closed Job not allowed'
]
at io.cdap.plugin.salesforce.plugin.source.batch.SalesforceBulkRecordReader.close(SalesforceBulkRecordReader.java:140)
at io.cdap.plugin.salesforce.plugin.source.batch.SalesforceRecordReaderWrapper.close(SalesforceRecordReaderWrapper.java:81)
at io.cdap.cdap.internal.app.runtime.batch.dataset.input.DelegatingRecordReader.close(DelegatingRecordReader.java:50)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.close(MapTask.java:529)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:797)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:177)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:171)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at io.cdap.cdap.internal.app.runtime.batch.distributed.MapReduceContainerLauncher.launch(MapReduceContainerLauncher.java:114)
at org.apache.hadoop.mapred.YarnChild.main(Unknown Source)
Caused by: [AsyncApiException exceptionCode='InvalidJobState'
exceptionMessage='Closing already closed Job not allowed'
]
at com.sforce.async.BulkConnection.parseAndThrowException(BulkConnection.java:182)
at com.sforce.async.BulkConnection.createOrUpdateJob(BulkConnection.java:166)
at com.sforce.async.BulkConnection.updateJob(BulkConnection.java:838)
at com.sforce.async.BulkConnection.updateJob(BulkConnection.java:832)
at io.cdap.plugin.salesforce.SalesforceBulkUtil.closeJob(SalesforceBulkUtil.java:86)
at io.cdap.plugin.salesforce.plugin.source.batch.SalesforceBulkRecordReader.close(SalesforceBulkRecordReader.java:138)
... 16 more
In my testing with the SF.com batch source 1.3.10 plugin, I was able to see these “warnings” when PK Chunking was enabled, but it did not cause the pipeline to fail. It just logged warnings for approximately n-1 chunks.
[~accountid:5f8f3173376a57006afac258] Yes that’s right. The pipeline succeeds and you only see this error in the logs. However, the logic to close the connection to Salesforce isn’t correct and that needs to be fixed.
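The `InvalidJobState` warning in the stack trace above comes from a record reader calling `BulkConnection.updateJob` to close a Bulk API job that another chunk's reader already closed. A minimal sketch of one way to fix the close logic, using an idempotent close guard (class and method names here are illustrative only, not the actual plugin code):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: a close guard shared by the readers of one bulk job.
// With PK chunking, several readers reference the same Salesforce job, so
// close() must be a no-op after the first successful call.
class BulkJobCloser {
    private final AtomicBoolean closed = new AtomicBoolean(false);

    // Returns true only on the first invocation; subsequent calls skip the
    // remote close so Salesforce never sees a second "close job" request.
    boolean close() {
        if (!closed.compareAndSet(false, true)) {
            return false; // already closed; avoid the InvalidJobState error
        }
        // ... here the real reader would call BulkConnection.closeJob(jobId) ...
        return true;
    }
}
```

The `AtomicBoolean` makes the guard safe even if readers close concurrently; only one caller wins the compare-and-set and performs the remote call.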
Completely agreed. I just wanted to note that the warnings reported as a result of this bug do *not* cause pipelines to fail. Some users have been reluctant to upgrade to plugin patch 1.3.10 because they thought this unfixed bug would cause their pipelines to fail. 1.3.10 is a stable release and should be used; until this bug is fixed, there will just be extra warnings in the log when PK Chunking is used.
Aaron Pestel, Sneha wrote this release note. Please let us know if you have feedback/changes: “Fixed an issue in the Salesforce plugin where Salesforce sessions were not closed properly when PK Chunking was enabled.”
[~accountid:5f8f3173376a57006afac258] Sneha wrote this release note. Please let us know if you have feedback/changes: “Fixed an issue in the Salesforce batch source plugin where Salesforce sessions were not closed properly when PK Chunking was enabled.”