Hi Josh,
It's clear that you've looked through the documentation and the forum for solutions before posting - thank you! A couple of further suggestions to try:
First, don't forget that the atom.conf PHP pool file you created during the installation process also has execution limit values in it. These may be overriding the global php.ini values, and our default config in the installation instructions sets a 64MB upload limit, so that might fit with what you're seeing. If you've followed our recommended instructions, this is typically found at /etc/php/7.2/fpm/pool.d/atom.conf - see:
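For reference, the pool file from our recommended install sets values along these lines - treat this as a sketch of the documented defaults, and confirm the exact values against your own copy of the file:

```ini
; /etc/php/7.2/fpm/pool.d/atom.conf (excerpt - documented defaults,
; shown here for illustration; your file may differ)
php_admin_value[upload_max_filesize] = 64M
php_admin_value[post_max_size]       = 72M
php_admin_value[memory_limit]        = 512M
php_admin_value[max_execution_time]  = 120
```

Note that post_max_size needs to be at least as large as upload_max_filesize, since the whole POST body (file plus form fields) must fit within it.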
Remember to clear the cache and restart PHP-FPM if you change this file.
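Assuming a default installation (AtoM root at /usr/share/nginx/atom and the PHP 7.2 packages from our install docs - adjust the paths and service name if yours differ), that would look something like:

```shell
# Clear the Symfony application cache, run from the AtoM root directory
cd /usr/share/nginx/atom
php symfony cc

# Restart PHP-FPM so the new pool values take effect
sudo systemctl restart php7.2-fpm
```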
Additionally, don't forget there is an Admin setting that can globally limit file uploads, as well as a per-repository setting - it's worth making sure both of these are set to -1 (unlimited). See:
There's also a small chance that you've edited the wrong php.ini file on your system - there can be more than one! Our docs here have some suggestions on how to find them:
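As a quick check, PHP can tell you which ini files it is actually loading. Bear in mind the CLI and FPM can load different php.ini files, so checking both is worthwhile (the FPM binary name below assumes the PHP 7.2 packages - adjust to match your install):

```shell
# Show which php.ini the CLI is using, plus any additional .ini files scanned
php --ini

# The FPM process may load a different php.ini than the CLI;
# -i prints phpinfo output, which includes the loaded config file path
php-fpm7.2 -i | grep "Loaded Configuration File"
```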
Finally: while I'm hopeful that the issue is the PHP pool file, please be aware that you may still run into the browser timeout limits for a 72MB file upload via the user interface. When you upload something through the user interface, everything must happen via the browser synchronously - that is, on demand in real time. Most browsers have a built-in default timeout limit of about 1 minute - so anything that takes longer to upload will time out and fail.
If you want to upload large files, this is currently best accomplished via the command line. You could use the digital object load task instructions in our documentation, which let you use a simple 2-column CSV and a directory of files placed on the server to upload digital objects to existing descriptions. See:
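As a rough sketch of what that looks like in practice (the CSV filename, file paths, and ID value below are hypothetical examples - check the documentation linked above for the exact column names your AtoM version expects):

```shell
# A 2-column CSV mapping files already on the server to existing
# descriptions, then loaded via the CLI task, run from the AtoM root:
cat objects.csv
# information_object_id,filename
# 442,/var/uploads/map-of-vancouver.tif

php symfony digitalobject:load objects.csv
```

Because this runs on the server rather than through the browser, it is not subject to the browser timeout limits mentioned above.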
In this previous forum thread, I have also outlined how you might use the two digital object-related columns in the description CSV import template to add digital objects. See:
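In brief, the relevant columns are digitalObjectPath (for a file path on the server) and digitalObjectURI (for a publicly accessible URL) - use one or the other per row, not both. A hypothetical fragment of such a CSV (the titles, paths, and URL here are made up for illustration; double-check the column names against the template for your AtoM version):

```csv
legacyId,title,digitalObjectPath,digitalObjectURI
101,Example fonds photograph,uploads/photo-001.jpg,
102,Example web capture,,https://example.com/capture.pdf
```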
In the future, we could do further development so that uploads through the user interface are performed as jobs, using the job scheduler already included in AtoM to handle long-running tasks. This would allow the upload to be executed asynchronously in the background, managed by the job scheduler - that way, you would not experience timeout errors when trying to upload large files to AtoM. There are also some good libraries that support large file uploads by hashing and chunking them, which would work in concert with the job scheduler - I'm currently quite partial to uppy, and would love to see support for it added to AtoM in a future release. However, this development has not yet been sponsored for inclusion in AtoM.
If this functionality sounds important to you and your institution might be interested in sponsoring it for inclusion in a future release, please feel free to contact me off-list, and Artefactual could prepare a development estimate for you.
Hopefully this will help!
Cheers,