Now, there ARE some actual limits in place on the size of digital objects that can be uploaded to AtoM. All of these can be configured and changed. I have described them in our docs, as well as in several previous user forum posts (including where and how to change them). See for example:
Some forum threads that sum it up too:
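To give a sense of what's involved (a sketch only - I'm assuming a typical nginx + PHP-FPM install here, and the exact paths and values will vary with your distribution, PHP version, and AtoM release), the upload-size limits usually live in php.ini and in the nginx server block:

```
# Illustrative values only - check the docs/forum posts above for the
# settings recommended for your AtoM version.

# PHP limits (in the php.ini used by the PHP-FPM pool serving AtoM):
#   upload_max_filesize = 256M
#   post_max_size       = 256M
grep -E 'upload_max_filesize|post_max_size' /etc/php/7.4/fpm/php.ini

# Nginx request body limit, set in the AtoM server block:
#   client_max_body_size 256M;

# Restart both services after changing either file:
sudo systemctl restart php7.4-fpm nginx
```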
While those can all be changed, perhaps the biggest limit currently for large files is the web browser itself. Most web browsers have a built-in timeout of about 1 minute, so that long-running requests don't continue unchecked, consume all available client resources, and crash the browser. In practice, this means that if you try to upload a very large PDF to a description through AtoM's web-based user interface, the upload may time out - interrupting the process partway and potentially leaving incomplete data (i.e. data corruption) in the database, which can lead to serious problems later.
I would love to see us add support for asynchronous uploads that can hash and chunk large files in the background when users upload large content via the user interface, but this would require significant time, analysis, and development effort, and is not currently slated for an upcoming release.
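Just to illustrate the concept (this is purely hypothetical - no such endpoint exists in AtoM today, and the chunk size and URL below are invented), "hash and chunk" would mean something like splitting the file client-side, checksumming each piece so the server can verify it, and retrying or resuming individual chunks rather than sending one giant request:

```
# Purely conceptual sketch - AtoM has no chunked-upload endpoint.
FILE=large-scan.pdf

# Split the file into 8 MB pieces and checksum each one, so the receiving
# end could verify integrity and resume from the last good chunk.
split -b 8M "$FILE" chunk_
sha256sum chunk_* > manifest.sha256

# Send each piece as its own small request; a real implementation would
# retry failed pieces and reassemble the file server-side.
for c in chunk_*; do
  curl -F "chunk=@$c" https://example.org/hypothetical-chunked-upload
done
```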
In the meantime, we have the ability to upload large files via the command line, or as part of a CSV import. The forum links above include further links that describe both methods in greater detail - use these if you need to add very large PDFs to AtoM. And if, somehow, the MEDIUMTEXT change in 2.8 is not enough to index all the content in your large PDFs, a local developer could also use the related commit as a guide and change the relevant database field from MEDIUMTEXT to LONGTEXT.
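For reference, the command-line loader is run from AtoM's root directory (I'm assuming the standard install location below) against a small CSV that pairs each file with its target description - double-check the expected column names in the docs for your release:

```
# Run from AtoM's root directory; the CSV pairs files with descriptions.
# Check the documentation for the exact columns your release expects.
cd /usr/share/nginx/atom
php symfony digitalobject:load /path/to/load.csv
```

A regular CSV description import (php symfony csv:import) can also attach files by local path via the digitalObjectPath column. And if you do end up needing the LONGTEXT change, the ALTER itself is standard MySQL - though the table and column below are placeholders, so take the real names from the related commit:

```
# Placeholder table/column names - take the actual ones from the commit.
mysql -u atom-user -p atom \
  -e "ALTER TABLE the_relevant_table MODIFY the_relevant_column LONGTEXT;"
```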
Cheers,
@accesstomemory