Hi Katherine,
The history of the CSV update import is a long and varied one. I will give you the short answer first:
If you are using the command line and you are NOT changing the legacyId values in the CSV after exporting it, then the easiest approach is the --roundtrip option on the CSV import task. This option bypasses all of the other, more complex matching criteria and looks only for an exact match on the objectID value (which is what AtoM puts in the legacyId column on export). When roundtripping updates within the same system, this is the most reliable way to have the imports succeed.
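As a rough sketch, an update import with roundtrip matching looks something like the following. The file path is a placeholder, and the exact task name and update flag may vary by AtoM version, so please double-check against the documentation for your release:

```shell
# Run from the root of the AtoM installation.
# --update puts the task in update mode; --roundtrip restricts matching
# to an exact objectID match on the legacyId column, as described above.
php symfony csv:import /path/to/updated-descriptions.csv \
    --update="match-and-update" \
    --roundtrip
```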
It's still worth knowing which fields can and can't be updated via import - be sure to review the following:
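Since roundtrip matching depends entirely on the legacyId values staying untouched, a quick pre-flight comparison between the exported CSV and your edited copy can catch accidental changes before you import. This is just an illustrative helper (the function name and paths are mine, not part of AtoM):

```python
import csv

def legacy_ids_unchanged(exported_path, edited_path):
    """Return True if the legacyId column is identical (same values,
    same order) in the exported CSV and the edited copy."""
    def ids(path):
        with open(path, newline="", encoding="utf-8") as f:
            return [row.get("legacyId", "") for row in csv.DictReader(f)]
    return ids(exported_path) == ids(edited_path)
```

If this returns False, the edited file has added, removed, reordered, or altered legacyId values, and roundtrip matching on objectID will no longer line up row for row.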
For more information on the --roundtrip option and other command-line task details, see:
For the longer answer, with some history on how the update import was originally designed and why matching is difficult, please see the following older forum threads:
Finally, while not directly related to updates, here are a couple of tips to make sure your CSVs are properly prepared and will import without errors:
First, we have the following slide deck which helps summarize the key points for preparing archival description CSV files for import:
Additionally, in 2.7 we now have CSV validation, which can check for common issues in CSV import files and is supported through both the user interface and the command line. See:
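On the command line, running the validation task before the import looks roughly like this. Again, the file path is a placeholder and the task name should be confirmed against the 2.7 documentation:

```shell
# Run from the AtoM root to scan a CSV for common problems
# (encoding, line endings, column issues, etc.) before importing.
php symfony csv:check-import /path/to/updated-descriptions.csv
```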
Hope this helps!