Hola Natalia,
The "Upload Digital Objects" option that you are using runs synchronously (i.e. in real time, on demand) via the web browser. Most web browsers have a built-in timeout limit of about 1 minute, so that long-running processes do not run forever and consume all client resources (such as memory and CPU). This means it is not really a bug, but a limitation of the feature as it is currently implemented.
Generally in AtoM, for large or long-running processes we recommend using the command line. We do have the job scheduler for handling long-running processes via the user interface asynchronously (i.e. behind the scenes, handled by the job scheduler on the backend, and not via the browser). However, at this time we do not have a method of uploading digital objects via the user interface using the job scheduler.
Adding this support is a more complex development process, as it requires the ability to split up the bits of a digital object, add a hash to each one so they can be uploaded in chunks, and then reassemble them on the other side. We would love to be able to add this support in the future, but it will require analysis, testing, and development to do so.
For now, before proceeding I recommend that you perform a check to make sure that this process failing mid-import has not left any data corruption behind. Data corruption typically occurs when long-running processes are interrupted or time out before completing, leaving partial database rows that can cause unexpected issues. Fortunately, we have a SQL query in the Troubleshooting documentation that can help check for common sources of data corruption, along with recommendations for resolving the most common forms.
These steps will require command-line access, and in fact direct access to the MySQL database. If you are not the system administrator responsible for installing and maintaining your AtoM instance, pass this message on to the person who is.
First, you will want to make sure that you have backed up your data, just in case! We strongly recommend this for all users before accessing MySQL directly. See:
We have general instructions on how to access the MySQL command-prompt here:
The example SQL query to check for data corruption, as well as some recommendations on how to fix common corruption causes, is here:
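As a rough illustration of what that command-line access looks like: the user, password, and database name below are placeholders (in AtoM 2.x your actual credentials are stored in the installation's configuration files), so substitute your own values before running anything for real.

```shell
# Illustrative only: "atom-user" and "atom" are placeholder values, not
# your real credentials. Look up the actual database user, password, and
# database name in your AtoM installation's configuration before running.
DB_USER="atom-user"
DB_NAME="atom"

# Back up the database before touching it directly (run for real with
# your own credentials):
#   mysqldump -u "$DB_USER" -p "$DB_NAME" > atom-backup.sql

# Then open the MySQL command prompt itself:
#   mysql -u "$DB_USER" -p "$DB_NAME"

# Print the command shape as a reminder of what you would run:
echo "mysql -u $DB_USER -p $DB_NAME"
```

Once you are at the MySQL prompt, you can paste in the corruption-check query from the Troubleshooting documentation linked above.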
Going forward, you have two main options for avoiding this issue:
If you intend to continue using the user interface and the "Import digital objects" option, then I suggest that you do your digital object imports in much smaller batches. I cannot give you an exact number, as it will depend on the speed of your internet connection, the size of your digital objects, and more. In general, I usually tell people during trainings not to upload more than 10-15 JPGs at a time this way, so I am surprised at the number that DID successfully upload for you. It will be up to you to determine how many to try in a batch, but keep in mind that a) there is no reason you cannot repeat the procedure more than once, and b) if it DOES fail, there is a chance that you are leaving data corruption behind, which, as described above, can cause a number of issues and requires a good deal of effort to correct.
The second option for importing many digital objects is to do so via the command-line interface. There are a couple different ways of doing this, depending on what you need.
If you already have created the item level descriptions for each digital object and you just want to import the files and attach them, you can use the digital object load task:
The task requires a 2-column CSV: one column takes the path to your digital objects, and the other takes something unique to associate with the target description. Of the options the task supports, I recommend using the slug, as it is guaranteed to be unique and does not require command-line access to look up. Place the digital objects in a folder, and put that folder on your AtoM server - for example, create a directory called import-objects in the root AtoM installation directory. The file path in each CSV row will then point to the target object - for example, if you had a JPG called image1.jpg in an import-objects directory under AtoM's usual root directory, you would add something like the following as the file path in the digital object import CSV:
- /usr/share/nginx/atom/import-objects/image1.jpg
Continue like this for all target descriptions.
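To sketch what that looks like end to end: the following builds an example load CSV and shows where the task invocation fits. The column header names, slugs, and file paths here are illustrative assumptions - check the digital object load task documentation for the exact headers your AtoM version expects.

```shell
# Build an example 2-column CSV for the digital object load task.
# The headers ("filename" and "slug") and the slugs/paths below are
# illustrative -- confirm the exact column names in the documentation.
cat > load-objects.csv <<'EOF'
filename,slug
/usr/share/nginx/atom/import-objects/image1.jpg,example-item-1
/usr/share/nginx/atom/import-objects/image2.jpg,example-item-2
EOF

# Sanity-check the file before running the task:
head -n 3 load-objects.csv

# Then, from the root of your AtoM installation, you would run the
# load task with something like (illustrative invocation):
#   php symfony digitalobject:load load-objects.csv
```

Each row attaches one digital object to the existing description whose slug matches, so the CSV only needs as many rows as you have files to load.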
If you want to create the item-level descriptions AND attach digital objects at the same time, you can prepare an archival description CSV import - in fact, this will give you more control over the descriptive metadata than the "Import Digital Objects" option would. For this, you will also need to create a directory of digital objects, place it on the AtoM server somewhere, and add the path to each object to the CSV - this time in the digitalObjectPath column of the archival description CSV template. You can add whatever metadata you want to the rest of each description row. The important part is that all these rows share one thing in common: you add the slug of the target parent description to the qubitParentSlug column, and you do NOT use the parentId column (you can't use both columns in a single row). This ensures that all the new item-level descriptions attach as children of the target description with the matching slug, and the digitalObjectPath value ensures that the relevant digital object is fetched and attached to each new item-level description after it is created. You can read more about CSV imports in the user manual, here:
Note that, so long as you place the digital object directory on your server first, you could actually run this import via the user interface, since the job scheduler will handle the CSV import and ensure it does not time out. Or you could run it via the command-line:
- Import archival descriptions:
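As a minimal sketch of this second approach: the CSV below creates two item-level descriptions under one parent, attaching a digital object to each. The parent slug, titles, and file paths are illustrative assumptions; the column names follow the archival description CSV template described above.

```shell
# Build a minimal archival description import CSV that creates
# item-level descriptions AND attaches digital objects. The slug
# "my-target-fonds" and the file paths are placeholders -- substitute
# your own parent description's slug and object locations. Note that
# every row uses qubitParentSlug and leaves parentId out entirely.
cat > description-import.csv <<'EOF'
legacyId,title,levelOfDescription,qubitParentSlug,digitalObjectPath
1,First image,Item,my-target-fonds,/usr/share/nginx/atom/import-objects/image1.jpg
2,Second image,Item,my-target-fonds,/usr/share/nginx/atom/import-objects/image2.jpg
EOF

# Sanity-check the file before importing:
head -n 3 description-import.csv

# From the root of your AtoM installation, the command-line import
# would then be something like (illustrative invocation):
#   php symfony csv:import description-import.csv
```

Because every row points at the same qubitParentSlug, all the new items land as children of that one target description, each with its digital object attached.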
Good luck, I hope all of this helps!
Cheers,