Hi folks,
I'm in the midst of a migration from an old storage system to a new one. Globus isn't my primary data mover, but to help the process along I'm using it to copy from an S3 bucket (on a local-ish Ceph cluster) that holds a copy of the data, over to the new storage system.
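For reference, this is roughly how I'm driving these copies with the Python globus-sdk; the client ID, collection UUIDs, and paths below are placeholders, not my real values:

import globus_sdk

# Placeholders -- substitute real values.
CLIENT_ID = "MY-NATIVE-APP-CLIENT-UUID"
SRC_COLLECTION = "S3-SOURCE-COLLECTION-UUID"
DST_COLLECTION = "NEW-STORAGE-COLLECTION-UUID"

# One-off native-app login to get a Transfer token.
auth = globus_sdk.NativeAppAuthClient(CLIENT_ID)
auth.oauth2_start_flow(requested_scopes=globus_sdk.TransferClient.scopes.all)
print("Log in at:", auth.oauth2_get_authorize_url())
tokens = auth.oauth2_exchange_code_for_tokens(input("Auth code: "))
transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token)
)

# Recursive copy of one bucket path to the new storage system.
tdata = globus_sdk.TransferData(tc, SRC_COLLECTION, DST_COLLECTION)
tdata.add_item("/my-bucket/some/prefix/", "/dest/some/prefix/", recursive=True)
task = tc.submit_transfer(tdata)
print("Submitted task:", task["task_id"])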
In any case, I've been seeing "a resource or processing limit was exceeded" errors on some of the transfers, with the details as follows:
{
  "error": {
    "details": "A sub directory is too large to scan"
  }
}

I'm guessing there's probably nothing I can do about this, but I'm curious which limit I'm hitting. I also wonder whether it has to do with the source being a bucket rather than a POSIX share.
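In case it's useful for reproducing, the per-task error events can be pulled with the SDK roughly like this (assuming `tc` is the authenticated TransferClient from above; the task ID is a placeholder):

# List events for the failed task and show only the errors.
for event in tc.task_event_list("FAILED-TASK-UUID"):
    if event["is_error"]:
        print(event["time"], event["code"], event["details"])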
Thanks,
Ken