Processor: 2 vCPUs @ 2.3GHz
Memory: 7GB
Disk space (processing): 50GB at a minimum for AtoM’s core stack; additional storage will be required to support any substantial number of digital objects.
Consider this the absolute minimum - in general, the more memory and CPU you can provide, the better.
If you want a setup that allows you to tune each server properly to your needs, I recommend considering a 2-site installation - this is what we offer to our larger on-premises clients and hosted enterprise users. In addition to giving you better control over how each server is tuned, it also offers better security and privacy for any potentially sensitive client information. See this slide deck for an overview:
Note that since those slides were created, our team has consolidated a number of enhancements across client repos into one improved, updated version of the replication script, which is now available here:
Additionally, on Slide 7 of the deck linked above you will see a generalized deployment diagram showing how we tend to set up these AtoM installations. As the diagram shows, our team will often use a separate server for the Elasticsearch index and MySQL database, which is then shared (with different access permissions) between the public and internal AtoM installations.
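As a rough illustration of that shared-services layout: each AtoM instance can point its search configuration at the shared host instead of localhost. Exact file paths and keys vary by AtoM version, and the hostname below is a placeholder - treat this as a sketch, not our exact production config:

```yaml
# apps/qubit/config/search.yml (sketch - keys may differ by AtoM version)
all:
  server:
    # Point both the public and internal AtoM instances at the shared
    # Elasticsearch host; restrict which instance can reach it (and with
    # what privileges) via firewall rules and per-host credentials.
    host: search-db.internal   # placeholder hostname
    port: 9200
```

The MySQL connection is handled the same way - each instance's database DSN points at the shared database host, with separate accounts so the public site gets read-mostly privileges.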
Such an approach allows you to tune the internal read/write site for editing - more CPU and RAM in particular. The public site can be tuned for fast reading, and can use an additional caching engine (our team uses Varnish, for example) to greatly increase the response time for public end-users.
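To give a sense of what that caching layer looks like, here is a minimal Varnish VCL sketch, assuming a Varnish 6.x setup with the public AtoM web server as the backend. The backend address, TTL, and cookie check are illustrative assumptions, not our exact production config:

```
# /etc/varnish/default.vcl (sketch - values are placeholders)
vcl 4.1;

backend atom_public {
    .host = "127.0.0.1";   # public AtoM web server (placeholder)
    .port = "8080";
}

sub vcl_recv {
    # Bypass the cache for session traffic; the session cookie name
    # may differ in your installation.
    if (req.http.Cookie ~ "symfony") {
        return (pass);
    }
}

sub vcl_backend_response {
    # Cache anonymous public pages briefly to absorb read traffic.
    set beresp.ttl = 5m;
}
```

Even a short TTL like this can dramatically reduce load on the public site, since most anonymous requests hit the same pages.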
Does this help, or do you have more specific questions? If so, please share them and I will see if our team can add any further suggestions.
Cheers,