

Eliora Shopbell

Jun 10, 2024, 8:53:08 PM
to vilriagcounaf

I'm trying to copy 450k files from one S3-like service to another, but rclone's memory consumption is getting it killed.
After about 10 minutes it is already consuming over 30 GB of RAM, yet it has checked only about 60k files. My VPS has around 60 GB of RAM.

I saw the response above and tried it as well. It goes much slower, but with constant memory usage for about an hour; then memory starts to rise until the process gets killed. The rise seems to begin when it starts copying from a folder with about 40k files.

I found many posts on this forum complaining about this problem when dealing with many millions of files. That could indeed become a problem for my use case eventually, but is it expected to be a problem even for half a million files?

I'm not sure of the exact reason for the memory usage, but if the source and destination are both S3, you should consider using `--checksum` (e.g. `rclone copy src: dst: --checksum`).
When both remotes are S3-compatible this is the recommended flag for maximum efficiency, since it compares stored hashes instead of modification times (which on S3 can require an extra request per object).

Sorry, I couldn't find the explanation.
I guess it does this to reduce remote requests, especially when doing a sync. But even then, it seems to me it could be done with minimal, constant memory usage. I'll probably try to implement it in Python with boto3 later; I'm already using that library in my app anyway...
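If you do end up rolling your own with boto3, the key to constant memory is to stream the paginator rather than materialize the whole listing. A minimal sketch; the page-dict shape matches `ListObjectsV2` responses, but `copy_one` and the commented bucket name are placeholders of my own:

```python
from typing import Callable, Iterable, Iterator

def iter_keys(pages: Iterable[dict]) -> Iterator[str]:
    """Lazily yield object keys from list pages, holding one page at a time."""
    for page in pages:
        for obj in page.get("Contents", []):
            yield obj["Key"]

def copy_all(pages: Iterable[dict], copy_one: Callable[[str], None]) -> int:
    """Copy each listed object; memory use is bounded by a single list page."""
    count = 0
    for key in iter_keys(pages):
        copy_one(key)
        count += 1
    return count

# With boto3, the pages would come from a paginator, e.g.:
#   s3 = boto3.client("s3")
#   pages = s3.get_paginator("list_objects_v2").paginate(Bucket="src-bucket")
# and copy_one could call s3.copy(...) for each key.
```

Because nothing is accumulated, memory stays flat regardless of how many objects the bucket holds.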

Say rclone is checking the folder with 208,000 PDF files: it will need to collect an entire listing from both remotes before comparing them. Call it 430,000 objects in total, to keep the calculation easy (and perhaps only half of that if you haven't transferred anything yet).
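For what it's worth, S3-compatible `ListObjectsV2` responses return keys in lexicographic (UTF-8 binary) order for general-purpose buckets, so in principle the two listings could be compared as streams, merge-join style, with constant memory. A sketch of that idea (my own illustration, not how rclone actually works):

```python
from typing import Iterable, Iterator, Tuple

def diff_sorted(src: Iterable[str], dst: Iterable[str]) -> Iterator[Tuple[str, str]]:
    """Merge-compare two lexicographically sorted key streams in O(1) memory.

    Yields ('missing', key) for keys present only in src (need copying)
    and ('extra', key) for keys present only in dst.
    """
    src_it, dst_it = iter(src), iter(dst)
    s = next(src_it, None)
    d = next(dst_it, None)
    while s is not None or d is not None:
        if d is None or (s is not None and s < d):
            yield ("missing", s)
            s = next(src_it, None)
        elif s is None or d < s:
            yield ("extra", d)
            d = next(dst_it, None)
        else:  # same key on both sides; advance both streams
            s = next(src_it, None)
            d = next(dst_it, None)
```

Only the current key from each side is held in memory, no matter how large the listings are.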

Did you try any previous versions of rclone? It might be worth trying some older versions to see if they have the same problem - this will tell us whether it is a problem with a specific version of the SDK.

Hi! Sorry for the delay! I'm still waiting for confirmation from their support, but it seems IDrive E2 has a bug...
When listing files from that folder with over 80k files, it keeps listing the same files forever. That's why `rclone size` also doesn't work. The same problem happens with boto3, so I really think it's a problem in their API implementation, not in rclone.
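Since S3-compatible listings come back in sorted key order, a broken-pagination server like this can be detected with O(1) memory: every key should be strictly greater than the previous one. A sketch (a hypothetical helper of mine, fed with `ListObjectsV2`-shaped pages):

```python
from typing import Iterable, Iterator

def guarded_keys(pages: Iterable[dict]) -> Iterator[str]:
    """Yield keys from paginated list responses, raising if the server
    re-serves a key (the out-of-order key is a broken-pagination symptom)."""
    last = None  # remember only the previous key: O(1) memory
    for page in pages:
        for obj in page.get("Contents", []):
            key = obj["Key"]
            if last is not None and key <= last:
                raise RuntimeError(f"pagination loop detected at key {key!r}")
            last = key
            yield key
```

Run against the misbehaving folder, this would fail fast instead of looping forever, which also makes a tidy reproduction to send to their support.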

When you have a known-good ruleset, you save it with `nft list ruleset`. After that you can compare the stored ruleset against the current working ruleset every time the agent is called. No need to interpret anything.
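Concretely, the save-and-compare step is just a text diff. A sketch (the file paths are my own choice; the `nft` commands are shown as comments since they need root and nftables installed):

```shell
# On the firewall host you would run:
#   nft list ruleset > /etc/nftables.baseline          # save known-good ruleset once
#   nft list ruleset | diff -u /etc/nftables.baseline -  # compare on each agent run
# The comparison itself is an ordinary text diff, demonstrated on two snapshots:
printf 'table inet filter {\n}\n' > /tmp/baseline.nft
printf 'table inet filter {\n}\n' > /tmp/current.nft
diff -u /tmp/baseline.nft /tmp/current.nft && echo "ruleset unchanged"
```

A non-zero exit from `diff` means the live ruleset drifted from the baseline, which is exactly the signal the check needs.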

First off, thank you to everyone who responded and offered suggestions. I am VERY new to Checkmk. I think I have things working properly. I also forgot to state that I am using Checkmk Raw. I first navigated to Setup > Services > Service monitoring rules > Systemd Services Summary, where I configured the rule like so:

I'm looking for help or possibly suggestions. I'm working in Data Services on an ETL that loads one of our custom products. One of the files being imported in the ETL contains a description column. This description will be either 'Paint', 'Paint Option' or 'Paint Color'. I currently have four different options I am extracting, all with the same naming scheme across Paint, Trim and Board, so there are now 12 descriptions in total. Once I drag my column over in my schema, the next step I do is a reverse pivot on it. Now the problem is that in the pivot I can create the following:

This is all being dumped into a SQL table... so doing it this way creates 12 columns instead of just the 3 I want. So here is my question: in the ETL, where I first import the description column, what I would like to do is this:

So when it reaches my pivot the values are just "Paint", "Board" and "Trim", making my SQL table have only 3 populated columns instead of 12 columns full of NULLs. Now, I have tried match_simple under the Mapping tab... and this WORKS! Unfortunately it only lets me specify ONE description.
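If the built-in mapping stays too limited, the normalization itself is trivial to express as a function. A sketch in Python (Data Services would use its own mapping syntax; the family names are taken from the post, and the prefix rule is my assumption about how the variants are named):

```python
def normalize_desc(desc: str) -> str:
    """Collapse description variants down to their product family,
    e.g. 'Paint', 'Paint Option' and 'Paint Color' all become 'Paint'."""
    for family in ("Paint", "Trim", "Board"):
        if desc.startswith(family):
            return family
    raise ValueError(f"unexpected description: {desc!r}")
```

Applying this before the pivot means the pivot only ever sees the three family values, so the target table ends up with three columns.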

TRAINING. Trainees stay in a beautiful, brand-new, specially built house accessible for any and all disabilities. I stayed in a room that had a hospital style bed, plenty of space for my wheelchair, and a room attached to mine for easy access to my caregivers.

On Monday, we were introduced to the trainers, and we went over some basics of being a Service Dog handler. I and the three other members of my training class were chomping at the bit to meet our new companions, but we had to wait until the afternoon. The suspense was killing us!

The day we met our pups was a very special day. The trainers brought all four dogs into the room, and it seemed as though the dogs knew who their handlers were. Oliver walked right over to me, like he knew me. It was breathtaking. From then on, it was six hours a day of learning commands, grooming tips, and the theories behind training; going out to public places; and some amazing extracurricular playtime.

Most NEADS dogs go through the Prison PUP Program, where they are trained by inmates at various prisons in MA and RI. I had the chance to meet the guys who trained Oliver at the Massachusetts Correctional Institution in Concord. After going through the security checks, we met in a common room with 15 or 20 inmates and correctional officers, all of whom were part of the program. I think the two inmates who were the primary trainers for Oliver were as nervous as I was, but you could really tell how much love they had for the program, the process, and the dogs. It was gratifying for them to see that Oliver was going to a good home with a person who really needed him.

NEADS has certain requirements for its Service Dog teams. For one year, I took Oliver to the vet once a month to have him weighed and checked out, as NEADS wants your dog to be healthy for as many years as possible. The dogs are to be treated as athletes, so you need to stick to very strict weight limits and exercise regimens, and really stay on top of any issues they may have. You are also responsible for taking an ADI recertification test after 1 year and then every 5 years after that.

I have personally used RunCloud and I am happy with it, especially with their support, which is really fast most of the time. But feature-wise Ploi seems to be far ahead, and I really wish RunCloud had a lot of their stuff, like the load balancer, the backup system, DB servers, etc.

You buy a dedicated server, attach it to ServerPilot, and you get your server up in minutes; it will be kept updated without any intervention. You get the right stack, updated to the latest software and dependency versions, patched, without "any" security hole. Early on I even tried to hack the server myself, with no luck; I had to create a vulnerability of my own just to get in. You really do get all the needed architecture, like app isolation and more, with security in mind.

I actually have a ServerPilot account from when they had their free tier, quite a few years ago, and it's still running my first pet project with ProcessWire, getting 100k-200k sessions a month on a $5 Vultr VPS.

Hard to give a comparison because I've not used the other services, but we switched to Cloudways about 18 months ago and they've worked well for us (much faster than the previous hosting company we used).

You get to pick server providers with no need for separate accounts with them, so all the billing is in the same place. The control panel does pretty much everything we need, and you can let clients have access to their server if you trust them.

Just had a situation where one of RunCloud's utilities came in handy: I was able to block a flood of requests probing for WordPress files using their per-application Firewall/ModSecurity settings. As someone who isn't really knowledgeable about htaccess or firewall settings, it was super handy for saving face with the client.

We've switched most of our servers to a setup of Hetzner VPS managed via Ploi. I haven't found many differences between Cloudways, ServerPilot, Moss, Ploi, etc. if all you're doing is hosting simple Laravel or ProcessWire sites.

I originally wanted to go with Ploi, but unfortunately the features I needed just weren't ready at the time, so I have stuck with RunCloud. I'm also invested in using the RunCloud API to automate the setup of some projects, so there's that too.

Basic site layout and then various things like pulling in meta tags / CSP etc, generating a nonce for inline scripts, password protecting the site whilst it's in debug mode. CSS gets pulled in either via AIOM or directly if the site is in debug.
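On the nonce point: a per-request CSP nonce is cheap to generate. The site in question is ProcessWire/PHP, so this Python sketch is only illustrative of the idea (the directive syntax is standard Content-Security-Policy; the function names are mine):

```python
import secrets

def csp_nonce() -> str:
    """Generate a fresh, unguessable base64url nonce for one request."""
    return secrets.token_urlsafe(16)

def csp_script_src(nonce: str) -> str:
    """Build a CSP script-src value that permits inline scripts
    carrying this nonce (<script nonce="...">)."""
    return f"script-src 'self' 'nonce-{nonce}'"
```

The same nonce value must appear both in the response header and in each inline `<script nonce="...">` tag, and a new one should be generated per request.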

For CSS I generally have half a dozen files (definitions / grid / nav / typography and so on) that I've built up over the years, but they are mostly based on bits of other libraries, e.g. we pretty much use the Bootstrap grid.

Looks like cron jobs aren't cloned, which is a shame, because I have a cron job I normally use to dump database backups (which are easier to roll back than the default Cloudways backups), but that's easy enough to add.

I have used ServerPilot back when they had a free tier and was very happy with it. For my local development I used Vagrant and was able to install ServerPilot in my Vagrant machine, so my development and production environments were 100% identical. That's a big plus!

It's working great; we're not missing much in terms of features. I think Ploi did at some point offer Apache + Nginx but scrapped the option. We're running ProcessWire on Nginx now. It requires some extra work here and there but is running smoothly overall.
