Rclone: the download quota for this file has been exceeded


Ellyn Brener
Jan 20, 2024, 5:41:40 AM
to bachcoagraphas

My gdrive size is about 70TB, and it's mounted locally as well as on a remote server (I don't have the best upload speeds). I use separate API credentials and applications for the local vs remote mount (but both still connect to the same account's storage). I don't have a ton of people accessing my Plex server; the most is about 5-6 concurrent streams, and it's usually around 1-2. Is this just a GDrive bug/problem? I've been running the same commands/config for at least a year without trouble.
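For reference, this is roughly what I mean by separate API credentials per mount. The remote names and IDs below are placeholders rather than my real config; each machine has its own OAuth client, but both remotes authorize against the same Google account:

    # rclone.conf on the local machine
    [gdrive-local]
    type = drive
    client_id = <local-oauth-client-id>.apps.googleusercontent.com
    client_secret = <local-client-secret>
    scope = drive

    # rclone.conf on the remote server (same account, different OAuth client)
    [gdrive-remote]
    type = drive
    client_id = <remote-oauth-client-id>.apps.googleusercontent.com
    client_secret = <remote-client-secret>
    scope = drive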

This is essentially the way my system works with my local cache and mergerfs. I guess the rclone version should say something about how long I've been running this setup. I'll try it out, though, as it could help with older content that's suddenly streamed more frequently. I'll try the updated rclone and see if there are any changes.
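Concretely, the layout is along these lines; the paths, policies and free-space threshold are illustrative rather than my exact command. New writes land on the local disk, the rclone mount is read-mostly, and mergerfs overlays the two:

    # overlay the local cache disk and the rclone mount into one library path
    mergerfs /mnt/local:/mnt/gdrive /mnt/media \
      -o use_ino,cache.files=partial,dropcacheonclose=true,category.create=ff,minfreespace=50G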


I am going to go back to my old rclone config with --vfs-cache-mode writes and see if the behavior changes. I use Sonarr/Radarr/Bazarr and Plex. Plex is not configured to do anything crazy, and none of my app config changed when I upgraded to 1.53. I had done some testing with the beta in a test environment and had not experienced this problem, but I can't be any more specific than that.
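In other words, something along these lines (the mount point and remote name are placeholders for my actual ones):

    # remount the crypt remote with the old cache mode
    rclone mount gcrypt: /mnt/gdrive \
      --vfs-cache-mode writes \
      --dir-cache-time 72h \
      --log-level INFO --log-file /var/log/rclone.log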

I stream quite a bit from my gdrive rclone mount and I have not seen any 403s in my logs. I also have backup jobs from my gdrive mount to Jottacloud, and they don't seem to be affected either. I've been on 1.53 since it was released.

I realized I had set up my rclone config incorrectly for working with Plex. I had it set up as drive1 -> crypt1, and according to this guide it should be drive -> cache -> crypt. So I created a new config on the same gdrive account, drive2 -> cache -> crypt2, and then tried to copy from crypt1 to crypt2. It seemed to work for a day or so before I started receiving the error above. I waited the 24 hours and more for the error to go away, but it still occurs. I can upload through a web browser to GDrive directly and it works. I receive the error when I try to upload to either crypt1 or crypt2 directly from my local machine.
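For anyone picturing the layering, a stripped-down version of that drive -> cache -> crypt chain would look something like this. The remote names, passwords and cache tuning values are placeholders, and the cache backend has since been deprecated in newer rclone releases, so treat it as illustrative only:

    [drive2]
    type = drive
    client_id = <oauth-client-id>
    client_secret = <oauth-client-secret>
    scope = drive

    [gcache]
    type = cache
    remote = drive2:media
    chunk_size = 10M
    info_age = 1d
    chunk_total_size = 10G

    [crypt2]
    type = crypt
    remote = gcache:
    filename_encryption = standard
    password = <obscured-password>
    password2 = <obscured-salt>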

It's been 4-5 days now that I haven't been able to play anything on Plex via my gdrive. When I try to access the files from anywhere else, it says "Download quota exceeded for this file, please try again at a later time" for every file I try to download. I can upload fine, but as soon as I try to download anything, it gives that error.

Can somebody please explain to me what is happening? Is this some kind of ban? I was hoping it would go away in 24 hours, but it has been too long now. I'm using one of those free team drives that became popular a while back, with encryption, and I didn't have any problems until about a week ago.

Can you tell me what is going on with my account, please?
This all started when I uploaded 2 files through rclone mount & copy at the same time and then canceled it.
Ever since then, for the last 2 days, the account apparently has no storage at all.

I've had this working for quite a while, and I had originally followed Space Invader One's tutorial. Plex is set not to auto-scan the library at all. I have also recently been using Jellyfin more often, even though it had been running alongside Plex for a bit. Jellyfin is also set not to scan any libraries automatically. As far as I can tell, no one is abusing the servers and no one else has access to the drive.

I am confident the issue I'm having is the same as the forum topic found here. However, that topic is closed and I cannot comment.
Basically, doing a sync on a union gdrive where the first drive is full causes the sync to fail rather than move on to the next available disk. Nick Craig-Wood (don't want to ping) suggested the possibility of implementing a minfreespace option before moving to the next drive.
I believe this option would solve my problem, but I wondered whether it has been implemented or whether someone has found a way around this yet. I searched the global tags for both "free" and "min" and didn't find one matching this.

Because there are often multiple files being uploaded at once, this would need rclone to keep track of free space that has been committed but not reported yet. The free space measurements also often lag, just to add further complications...
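For what it's worth, more recent rclone releases document a min_free_space option on the union backend, used by the lfs/eplfs create policies (fill the upstream with the least free space until it drops below the threshold, then move on). I haven't confirmed it covers this exact scenario, but as a sketch, with placeholder remote names and threshold:

    [gunion]
    type = union
    upstreams = gdrive1:media gdrive2:media
    create_policy = lfs
    min_free_space = 100Gi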

As you can see in the log, in server-side transfers the "download quota exceeded" condition is reported as an API quota exceeded error, and the entire transfer becomes a bottleneck because the download is attempted continuously.

I think it was because I left out the back story.
I tried to transfer from shared drives used by dozens of people to my own drives, and some of those users' files have hit their per-file download quota (I never exceeded my own quota). I haven't even uploaded 100GB to my drive in the last few days, and other people's files are being transferred normally.

I'm not asking you to force files that can't be transferred to be sent.
As far as I know, when a "download quota exceeded" message appears, the file is automatically skipped. However, in server-to-server transfers an API error appears instead of the download quota error, so the file is not excluded, which slows down the overall transfer.
I don't know why I get an API error instead of the "download quota exceeded" message, but is this an unfixable problem?
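For context, this is roughly the shape of the transfer I'm running. Recent rclone versions also document a --drive-stop-on-download-limit flag that makes download-quota errors fatal instead of endlessly retried; I'm not sure whether it applies in the server-side case, so treat it as something to test. Remote names are placeholders:

    # server-side copy between two Drive remotes
    rclone copy sharedrive:stuff mydrive:stuff \
      --drive-server-side-across-configs \
      --drive-stop-on-download-limit \
      --transfers 4 --progress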

So it is known that Google Drive has quota limits that are mostly undocumented. Currently I'm trying to encrypt my several hundred TBs of unencrypted files into a crypt backend, and I think this is a good chance to try to document the quota limits: I've read a lot of posts here about them, but the information is very fragmented.
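The migration itself is just a copy from the plain remote into the crypt remote. The widely reported (but not officially documented) 750GB/day upload cap works out to roughly 8-9 MB/s sustained, so I'm throttling to stay under it; remote names are placeholders:

    # copy unencrypted files into the crypt remote, throttled under the daily upload cap
    rclone copy gdrive:plain gcrypt: \
      --bwlimit 8.5M \
      --transfers 4 --checkers 8 \
      --progress --log-file crypt-migration.log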

I really think something has changed on Google's side. I had been using node-gdrive-fuse for almost 2 years with absolutely no issues. I then started getting bans and first figured that, since this code is so old, it must be the problem. I promptly switched to rclone with a cached drive and all. Got banned again. I even started getting paranoid that my Google Drive account was somehow compromised and people were downloading without my knowledge, so I changed all API keys/clients/secrets and changed my password. Still got banned. I know for a fact that I am WAY under the query quotas shown in the API console. And I think from the time I got unbanned to being re-banned, a total of 3-4 movies were played in Plex. *shrugs*

Would you mind sharing which app type you set up through the Box dev console and the settings you used, as well as your rclone conf? I've been banging my head against a wall for 3-4 days and keep running into problems (see here and here).

A myDriveHierarchyDepthLimitExceeded error occurs when the limit for the number of nested folder levels has been exceeded. A user's My Drive can't contain more than 100 levels of nested folders. For more information, see Folder-depth limit.

This error occurs when the limit for a folder's number of children (folders, files, and shortcuts) has been exceeded. There's a 500,000 item limit for folders, files, and shortcuts directly in a folder. Items nested in subfolders don't count against this 500,000 item limit. For more information on Drive folder limits, refer to Folder limits in Google Drive.

This error occurs when the per-user limit has been reached. This might be a limit from the Google Cloud console or a limit from the Drive backend. The following JSON sample is a representation of this error:
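(A typical userRateLimitExceeded response body looks like the following; the field values here are representative rather than copied from a live response.)

    {
      "error": {
        "errors": [
          {
            "domain": "usageLimits",
            "reason": "userRateLimitExceeded",
            "message": "User Rate Limit Exceeded"
          }
        ],
        "code": 403,
        "message": "User Rate Limit Exceeded"
      }
    }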

Is there a way to tell through the Dropbox web interface, either from the regular content browser or the admin console, how much of your upload API quota has been consumed? If not, is there any other way? The reason is that we'd like to use Duplicati backup but received the warning you refer to above.

Quota Error: This error is related to the Google Analytics Reporting API.
User Rate Limit Exceeded: This indicates that you have exceeded the limit of API calls allowed per user in a specific time frame. Typically, this rate limit is set to 100 requests per 100 seconds.
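If you're hitting this kind of per-user rate limit from rclone, one common mitigation is to pace the API calls. A rough sketch, with numbers chosen only to stay under 100 requests per 100 seconds rather than taken from any official recommendation:

    # limit transactions per second so API calls stay under the per-user quota
    rclone sync src: dst: --tpslimit 1 --tpslimit-burst 10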

Accounts are given home directory quota limits in order to share this very expensive resource. Should you find your initial quota too small, you should consider other shared disks like /common/users, /filer/tmp and /freespace/local below. If there are justifiable reasons, you can request additional space (please give an estimate of how much more you need, how long you believe you will need it and a short justification of what you need it for).

Some large projects may choose to split their resources into multiple subprojects. These subprojects will have identifiers appended to the main project ID. For example, the rse subgroup of the z19 project would have the ID z19-rse. If the main project has allocated storage quotas to the subproject, the directories for this storage will be found at, for example: /home/z19/z19-rse/auser

Your Linux home directory will generally not be changed when you are made a member of a subproject, so you must change directories manually (or change the ownership of files) to make use of this different storage quota allocation.

The easiest way of transferring data to/from ARCHER2 is to use one of the standard programs based on the SSH protocol such as scp, sftp or rsync. These all use the same underlying mechanism (SSH) as you normally use to log in to ARCHER2. So, once the command has been executed via the command line, you will be prompted for your password for the specified account on the remote machine (ARCHER2 in this case).

Note the use of the -P flag to allow partial transfer -- the same command could be used to restart the transfer after a loss of connection. The -e flag allows specification of the ssh command - we have used this to add the location of the identity file. The -c option specifies the cipher to be used as aes128-ctr, which has been found to increase performance. Unfortunately the shortcut is not correctly expanded, so we have specified the full path. We move our research archive to our project work directory on ARCHER2.
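Putting those flags together, the full command looks something like the following; the username, key path and destination directory are placeholders for your own values:

    # resumable rsync to ARCHER2 over ssh, specifying the identity file and cipher
    rsync -Pv -e "ssh -c aes128-ctr -i ~/.ssh/id_rsa_archer2" \
      research-archive.tar.gz \
      auser@login.archer2.ac.uk:/work/z19/z19/auser/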

Rclone doesn't automatically download any files; I don't know why you have this conception. The only way rclone downloads files is if you have the VFS cache activated and modify files on the mount: rclone will download those files to your local computer for caching purposes, and after they are uploaded again the local copies will be deleted.
