Globus Transfers stopped


Dylan McReynolds

Apr 28, 2022, 12:17:10 AM
to Discuss
Hello,

I have a Globus application transferring data to a data center using the Python API. Today the transfers suddenly started failing with the following message:

(409, 'ClientError.Conflict.TooManyPendingJobs', 'This user has too many pending jobs!', 'OE4CDw78W')

When I print out the tasks with the following code, there are only 10 and they report having succeeded:

for task in transfer_client.task_list():
    print(task)

Does anyone have any idea what might have gotten wedged, or how to clear it?

Thanks!

Hutchison, Matt (NIH/NCI) [C]

Apr 28, 2022, 8:01:54 AM
to Dylan McReynolds, Discuss
You're likely running into the default result limit, so you're not seeing the complete task list.


______
Parameters
num_results (int or none) – The number of tasks to fetch from the service. May be set to None to request the maximum allowable number of results. [Default: 10]
______
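To illustrate the behavior described above: with the default `num_results` of 10, older pending tasks can fall outside the returned window even though they still count against the pending-job limit. The sketch below uses a stand-in class (`FakeTransferClient` is hypothetical, not part of the SDK) so the example is self-contained; with a real `globus_sdk.TransferClient`, the equivalent call would be `transfer_client.task_list(num_results=None)` per the documentation quoted above.

```python
# Sketch of the truncation pitfall, assuming task_list() honors a
# num_results parameter with a default of 10 (as the SDK docs state).
# FakeTransferClient is a stand-in so this runs without globus_sdk.

class FakeTransferClient:
    """Hypothetical stand-in for globus_sdk.TransferClient."""

    def __init__(self, tasks):
        self._tasks = tasks

    def task_list(self, num_results=10):
        # None means "fetch the maximum allowable number of results"
        if num_results is None:
            return iter(self._tasks)
        return iter(self._tasks[:num_results])


# 10 recent SUCCEEDED tasks followed by 5 older, still-pending ones
tasks = [{"status": "SUCCEEDED"}] * 10 + [{"status": "ACTIVE"}] * 5
client = FakeTransferClient(tasks)

default_view = list(client.task_list())                # first 10 only
full_view = list(client.task_list(num_results=None))   # everything

print(len(default_view))  # the 5 pending tasks are hidden
print(len(full_view))     # includes the pending tasks
```

With the default, the list looks like 10 succeeded tasks, exactly the symptom in the original post, while the pending jobs that trigger `TooManyPendingJobs` sit outside the returned window.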

Matt Hutchison (contractor)

Software Developer

Frederick National Laboratory



From: dis...@globus.org <dis...@globus.org> on behalf of Dylan McReynolds <dmcre...@lbl.gov>
Sent: Thursday, April 28, 2022 12:17 AM
To: Discuss <dis...@globus.org>
Subject: [EXTERNAL] [Globus Discuss] Globus Transfers stopped


Dylan McReynolds

Apr 28, 2022, 9:30:29 AM
to Discuss, Hutchison, Matt (NIH/NCI) [C], Dylan McReynolds
Matt,

You're right. I was reading the wrong version of the docs (https://globus-sdk-python.readthedocs.io/en/stable/services/transfer.html#globus_sdk.TransferClient.task_list) and passing "limit" instead of "num_results".

Thanks,
Dylan