Technical requirements for a big AtoM


Carlos Moreno

Feb 26, 2024, 4:33:48 AM
to AtoM Users
I'm going to install AtoM for a fairly large archive (more than 500,000 files) on an on-prem server.

The images will be on a separate server, and I will access them through public URLs or shared resource paths.

I would like your opinion on the likely hardware requirements, and also whether you would recommend an all-in-one setup, that is, a single server hosting AtoM, MySQL, Nginx and Elasticsearch.




Dan Gillean

Feb 26, 2024, 9:39:27 AM
to ica-ato...@googlegroups.com
Hi Carlos, 

Here's what we recommend for basic production sites, per the documentation:

Processor: 2 vCPUs @ 2.3GHz
Memory: 7GB
Disk space (processing): 50GB at a minimum for AtoM's core stack; additional storage will be required to support any substantial number of digital objects.

Consider this the absolute minimum - in general, the more memory and CPU you can provide, the better. 

If you want a setup that will allow you to properly tune each server to your needs, I recommend considering a 2-site installation - this is what we offer to our larger on-prem clients and hosted enterprise users. In addition to allowing better control over how you tune each server, it also offers better security and privacy for any potentially sensitive client information. See this slide deck for an overview: 
Note that since those slides were created, our team has consolidated a number of different enhancements across client repos into one improved, updated version of the replication script, which is now available here: 
Additionally, on Slide 7 of the deck linked above, you will see a generalized deployment diagram, showing how we tend to set up these AtoM installations. This shows that our team will often use a separate server for the Elasticsearch index and MySQL database, which is then shared (with different access permissions) between the public and internal AtoM installations. 
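To make that a bit more concrete, here is a minimal sketch of what pointing both AtoM instances at shared database and search servers could look like. This is only an illustration: the hostnames (db.internal, es.internal), credentials and ports are placeholders, and the file paths assume a stock AtoM 2.x layout, so adjust everything to your own environment and version.

    # Excerpt from config/config.php - Propel DSN pointing at the shared MySQL server
    'dsn'      => 'mysql:dbname=atom;host=db.internal;port=3306',
    'username' => 'atom',
    'password' => 'CHANGE_ME',

    # Excerpt from apps/qubit/config/search.yml - shared Elasticsearch server
    all:
      server:
        host: es.internal
        port: 9200

In that kind of setup, the public instance would typically connect with a more restricted MySQL account than the internal read/write instance, which is what the "different access permissions" above refers to.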

Such an approach allows you to tune the internal read/write site for editing - more CPU and RAM in particular. The public site can be tuned for fast reading, and can use an additional caching engine (our team uses Varnish, for example) to greatly improve response times for public end-users. 
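As a rough illustration of the Varnish idea, a very small VCL sketch for the public site might look like the following. To be clear, this is not Artefactual's actual configuration: the backend address, the session cookie name and the TTL are assumptions, and you would still need to work out cache invalidation for your own site.

    vcl 4.0;

    # Assumes the public AtoM site's Nginx listens locally on port 8080
    backend atom_public {
        .host = "127.0.0.1";
        .port = "8080";
    }

    sub vcl_recv {
        # Pass session/authenticated traffic straight to the backend
        if (req.http.Cookie ~ "symfony") {
            return (pass);
        }
    }

    sub vcl_backend_response {
        # Cache anonymous responses briefly; tune to how often content changes
        set beresp.ttl = 5m;
    }

The benefit comes from anonymous public traffic being served from the cache instead of hitting PHP, MySQL and Elasticsearch, while the internal editing site is left uncached.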

Does this help, or do you have more specific questions? If so, please share them and I will see if our team can add any further suggestions. 

Cheers, 

Dan Gillean, MAS, MLIS
AtoM Program Manager
Artefactual Systems, Inc.
604-527-2056
@accesstomemory
he / him



Carlos Moreno

Feb 26, 2024, 10:01:00 AM
to ica-ato...@googlegroups.com
Thanks, Dan, for this complete guide.

We will take this architecture into consideration to make better use of our resources.

Until now I had normally set up Dev and Prod instances.

In any case, with the architecture you propose we would also keep development instances.

Thanks, Dan.



--
Carlos Moreno