Query about bulk editing of descriptions in AtoM


Johanna Guerrero

Apr 10, 2026, 1:09:05 PM
to AtoM Users

Dear all,

Greetings. I am working on a process of normalizing and reorganizing archival descriptions in AtoM (administrator profile), and I would like to ask about best practices for making bulk changes to the record hierarchy.

Specifically, I need to move and/or delete a considerable set of items that are currently outside their correct archival structure, reassigning them to an existing series. I have tried to do this via CSV import using the "match-and-update" option, working with fields such as legacyId and qubitParentSlug. However, I have observed inconsistent behaviour, such as:

  • Duplicate records being created instead of updated
  • Partial matches depending on the import configuration
  • Cases where the parent change is not applied even though the record is recognized

Additionally, when moving records manually through the interface, I noticed that the system internally uses parameters such as objectId and parentId, which makes me wonder whether the standard CSV workflow is the most appropriate method for this type of operation.

In this context, I would like to ask:

  1. Is there a recommended method for performing bulk moves of descriptions within the hierarchy (e.g., using CSV, CLI, or scripts)?
  2. Is the qubitParentSlug field the correct mechanism for relocating descriptions in all cases, or are there known limitations?
  3. Is it possible to use parentId in CSV import processes, or is this type of operation restricted to internal system processes?
  4. Do you recommend any specific workflow to avoid duplicate records in bulk update processes?

Thank you in advance for your guidance; this process is critical for maintaining the integrity of our archival structure.

I look forward to your comments.

Kind regards.

Dan Gillean

Apr 13, 2026, 9:57:11 AM
to ica-ato...@googlegroups.com
Hi Johanna, 

Question 1: 
Is there a recommended method for performing bulk moves of descriptions within the hierarchy (e.g., using CSV, CLI, or scripts)?


If I recall correctly, you have two main options for moving records to a different parent. 

The first method is to move them manually using the Move module in the user interface. Moving any record also moves its attached descendants - for example, moving a Series to a new parent would also move the files and items of the series. See: 


If you want to update records via CSV, I recommend exporting the target records first, and then re-importing them using the command-line task with the --roundtrip option instead. I believe that this would be the best way to move records via a CSV update import. 

Long story short, the options currently supported in the user interface for importing updates rely on some rather complex and poorly designed matching logic, making it hard to use and limiting the fields you can update. Instead, the command-line --roundtrip option is much more reliable. See the CLI task for CSV import, including details on the --roundtrip option, here: 
So, your import command would look something like: 
  • sudo -u www-data php symfony csv:import --update="match-and-update" --roundtrip --skip-unmatched /path/to/your/import.csv
You will also need to repopulate the search index, clear the application cache, and restart PHP-FPM after the import completes. If you are on version 2.10 this would look like: 
  • sudo -u www-data php symfony cc
  • sudo systemctl restart php8.3-fpm
  • sudo -u www-data php symfony search:populate

Currently the --roundtrip option is only supported via the command-line - I believe that the AtoM Maintainers still hope to add support for it to the user interface in the future, but currently if you are trying to update existing data in your own system via CSV import, this will be your best bet for success. It is helpful to understand a bit about the history of AtoM's CSV update import development and why it currently works the way it does - for a longer overview with some history on how the update import was originally designed, and why matching is hard to do etc, please see the following older forum threads:
I would also recommend using the --skip-unmatched option. This way, if for some reason no match is found for your record, the task should skip the CSV row instead of creating a new duplicate top-level description. 

Finally, please note that exporting FIRST is an important prerequisite for the --roundtrip option to succeed. On export, AtoM will add the internal object ID value from the database to the CSV's legacyId column. The --roundtrip import task option then uses this value to search for an exact match against the internal database IDs.
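To make the export-then-edit step concrete, here is a minimal Python sketch of preparing a roundtrip update CSV. The column names (legacyId, parentId, qubitParentSlug) come from AtoM's description CSV template; the file contents, IDs, and slugs below are invented for illustration, and this is an assumption-laden sketch rather than an official AtoM workflow:

```python
import csv
import io

def reparent_rows(exported_csv_text, moves):
    """Given the text of an AtoM roundtrip export and a map of
    legacyId -> new parent slug, set qubitParentSlug and blank out
    parentId for the rows being moved. Returns the edited CSV text."""
    reader = csv.DictReader(io.StringIO(exported_csv_text))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if row["legacyId"] in moves:
            row["qubitParentSlug"] = moves[row["legacyId"]]
            row["parentId"] = ""  # avoid conflicts: let the slug win
        writer.writerow(row)
    return out.getvalue()

# Hypothetical example: move the item whose database ID is 1234 under
# the series whose slug is "correspondence-series".
exported = (
    "legacyId,parentId,qubitParentSlug,title\n"
    "1234,567,old-parent-slug,Letter to the director\n"
    "1235,567,old-parent-slug,Annual report\n"
)
edited = reparent_rows(exported, {"1234": "correspondence-series"})
```

The edited file would then be imported with the csv:import command shown above.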

Question 2: 
Is the qubitParentSlug field the correct mechanism for relocating descriptions in all cases, or are there known limitations?

Yes, if this works, then this would be the correct column to update in the CSV template to move descriptions via a CSV update import. For any row you update, I recommend deleting the export value in the parentId column. Technically, if both parentId and qubitParentSlug are populated, AtoM is meant to prefer the slug value - but better to be safe and avoid conflicts. 

The limitations I can think of are: 

1) As described above, the target descriptions must be exported first so that the legacyId column is properly populated with the database object ID values, which will then be used for matching on the update re-import. 
2) The target description slug value you add to the qubitParentSlug column must already exist - you cannot create new stub parent records via this method.  
3) If you are changing any of the description metadata as part of moving parents, keep in mind that not all fields can be updated via import. See: https://www.accesstomemory.org/docs/latest/user-manual/import-export/csv-import/#fields-that-will-support-update-imports


Question 3: 
Is it possible to use parentId in CSV import processes, or is this type of operation restricted to internal system processes?

The parentId column is typically used only to manage parent-child relations with NEW descriptions in a CSV - meaning that both the parent and the child rows are new descriptions present in the same CSV. In that case, they should both have unique legacyId values. The child row must appear in a row below the target parent record, and it should use the parent's legacyId value as its parentId value. See: 
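As a quick sanity check of that ordering rule, here is a hedged Python sketch that verifies every parentId in a CSV of new descriptions refers to a legacyId that appears in an earlier row (the column names come from AtoM's CSV template; the sample rows are invented):

```python
import csv
import io

def check_parent_order(csv_text):
    """Return a list of (legacyId, parentId) pairs for any row whose
    parentId does not match the legacyId of a row ABOVE it."""
    seen = set()
    problems = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        parent = row.get("parentId", "")
        if parent and parent not in seen:
            problems.append((row["legacyId"], parent))
        seen.add(row["legacyId"])
    return problems

# Valid: the parent (fonds) row precedes the child (series) row.
good = (
    "legacyId,parentId,title\n"
    "F1,,Example fonds\n"
    "S1,F1,Example series\n"
)
# Invalid: the child cites a parent that only appears later.
bad = (
    "legacyId,parentId,title\n"
    "S1,F1,Example series\n"
    "F1,,Example fonds\n"
)
```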

Question 4: 
Do you recommend any specific workflow to avoid duplicate records in bulk update processes?

Using the --skip-unmatched option should avoid creating duplicates when no match is found. However, this means that you cannot include new descriptions in your CSV mixed with the rows you intend to match and update (because they would be skipped when no match is found on the legacyId value you assign them). So for example, if you want to move one of the rows to a new parent description that doesn't exist in AtoM yet, I recommend you create the target parent description in AtoM manually via the user interface first, so you can then add its slug value to the target child row in the CSV. 
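One way to follow that advice in practice is to split a mixed CSV into two files before importing: rows whose legacyId values came from a prior roundtrip export (imported with --update and --skip-unmatched), and genuinely new rows (imported normally). A hedged Python sketch, with invented sample data:

```python
import csv
import io

def split_updates_and_new(csv_text, known_ids):
    """Partition CSV rows by whether the row's legacyId appears in
    known_ids (the database IDs from a prior roundtrip export).
    Import the two resulting sets separately."""
    updates, new_rows = [], []
    for row in csv.DictReader(io.StringIO(csv_text)):
        (updates if row["legacyId"] in known_ids else new_rows).append(row)
    return updates, new_rows

# Hypothetical data: 1234 already exists in AtoM; NEW-1 does not.
text = (
    "legacyId,qubitParentSlug,title\n"
    "1234,correspondence-series,Letter to the director\n"
    "NEW-1,correspondence-series,Newly described item\n"
)
updates, new_rows = split_updates_and_new(text, {"1234"})
```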

Good luck! 

Dan Gillean, MAS, MLIS
Business & User Experience Analyst
Artefactual Systems, Inc.
604-527-2056
he / him



Johanna Guerrero

Apr 13, 2026, 10:21:48 AM
to ica-ato...@googlegroups.com

Dear Dan,

Thank you sincerely for your time and the level of detail in your reply. It has been a great help in better understanding how bulk updates work in AtoM and the limitations of importing through the interface. Your guidance has clarified several critical points we had been running into in our tests.

Thank you again for your support and the clarity of your explanations.

Kind regards,

Johanna.

