Hi Jarad,
At this time, we don't have support for physical storage imports via the user interface. However, the archival description CSV templates do include physical storage columns, which gives you a few options. The first is the obvious use: creating new containers, or linking to existing containers, while importing new descriptions. However, it sounds like you already have the descriptions, so let's talk about the second option.
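For reference, a minimal row using those storage columns might look like the fragment below. The legacyId and container values here are made up; the 3 physicalObject* column names follow the archival description CSV template, so double-check them against the template shipped with your AtoM version:

```csv
legacyId,title,physicalObjectName,physicalObjectLocation,physicalObjectType
471,Example fonds,Box 1,Shelf 4,Box
```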
You can use AtoM's description import CSV to update existing descriptions - however, there are challenges to doing so when roundtripping in a single system, particularly if you can't run the import from the command line. Here's why:
AtoM's update import was designed rather rapidly on a restricted budget, and its intended use case at the time was exporting from one system and importing updates into a different system. This is not how most people want to use the feature, and we haven't yet been able to overhaul its design to better accommodate the most requested use case - roundtripping descriptions in a single system.
With update imports, you cannot remove or replace existing storage linked to a description, but you can append new data - either linking to existing containers (by entering the container's name, location, and type EXACTLY in the 3 CSV fields), or by creating new ones on import. The BEST way to go about this would be:
- Use search and browse to try and narrow the results as much as possible to the subset of records you want to update
- Add these target records to the clipboard. Repeat these 2 steps until all the descriptions you want to update are on the clipboard
- Export them
- Open the exported CSV in a spreadsheet application like LibreOffice Calc - BEWARE of MS Excel, which might change your default character encoding and/or line ending characters on a save if you're not careful!
- Add the container data to the 3 storage columns for each row
- Perform a match-and-update import, with the "skip unmatched" option selected
- Using the CLI, you can use the --roundtrip option. This option ignores all other match criteria, and ONLY matches on the objectID - which is unique per record in AtoM, and happens to be what we populate the legacyId column with on export, so this method works very well with exporting, updating, and reimporting a single CSV (i.e. roundtripping)
- If you must use the UI, this matching option is not currently supported, so AtoM falls back to attempting exact matches on title, identifier, and repository. In both cases - but especially this one - you can see why using the "skip unmatched" option is important: otherwise, by default, unmatched rows will import as new records, potentially creating duplicates. With the option selected, if no match is found then the whole row is skipped.
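As a rough illustration of the spreadsheet step above - filling in the 3 storage columns of an exported CSV before re-importing - here's a minimal Python sketch. The column names (legacyId, physicalObjectName, physicalObjectLocation, physicalObjectType) follow the archival description CSV template, and the add_storage helper and its inputs are hypothetical, not part of AtoM; adapt everything to your actual export.

```python
import csv
import io

def add_storage(csv_text, storage_by_legacy_id):
    """Append container data to an exported description CSV.

    storage_by_legacy_id maps a legacyId string to a dict with
    'name', 'location', and 'type' keys for the container.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    rows = list(reader)
    for row in rows:
        storage = storage_by_legacy_id.get(row["legacyId"])
        if storage:
            # To LINK to an existing container, these 3 values must match
            # the container's name, location, and type EXACTLY; otherwise
            # AtoM will create a new container on import.
            row["physicalObjectName"] = storage["name"]
            row["physicalObjectLocation"] = storage["location"]
            row["physicalObjectType"] = storage["type"]
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()
```

Note that this only fills in the storage columns - the match-and-update behaviour (matching on legacyId, skipping unmatched rows, etc.) still happens on the AtoM side at import time.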
You're not missing anything, AFAIK. When we had a number of reports about timeouts in AtoM, several background processes were moved to the job scheduler so that not everything needed to happen in the browser. However, these processes were not applied SELECTIVELY as they should have been, running only when needed - instead, a number of them run in all cases, even when they shouldn't be required. The same thing happens if you update the Places field (or any field, really) on an authority record - even though this field is never shown on related descriptions, the index still updates all relations (unnecessarily).
Hope this helps!