Accession record migration into ArchivesSpace

Jeremy Floyd

Aug 4, 2022, 1:36:50 PM
to ArchivesSpace
Hi all,
I'm just starting a project of importing accession records into ArchivesSpace. I have just under 6,000 records, which I can most easily pull out and normalize in a CSV file. I saw the accession import template on GitHub and have begun to crosswalk toward it. But in googling others' experiences with it, I saw that some fields may be handled as events in ArchivesSpace, which the template cannot accommodate, and that the API is the best way to get that data ingested. I can handle creating JSON for that type of data, and I have access to someone to help me with the API. Is this still the best way to handle it? Are there other pitfalls to be aware of?
Does ingest via the CSV template create new agent records for creator, donor, etc.? Is there any way to link to existing agents, or do I have to merge agents once they've been created?
One last thing: the legacy accessions database was also used to store information about donors which may be sensitive and private. We are one repository among many using the ArchivesSpace instance; will that data be visible to all editors on the back end (the way I can see agents across repositories)? There are some concerns about moving from a localized system to a shared cloud environment, and I'm not really sure how best to handle those questions.

Any guidance, or recent experience going through ingest of accessions would be helpful.

Chatnik, Corinne

Aug 5, 2022, 9:33:25 AM
Hi Jeremy,

I did exactly the workflow you are suggesting.
  • Imported the accessions first via the CSV import template.
  • Got all the accession IDs after import and matched them to the events I wanted to link them to.
    • I did this by querying the database for all the accession IDs and their identifiers, putting them in a tab-delimited text file, and then writing a quick little script that was basically: if this accession identifier, print this ID (while adding the prefix /repositories/2/accessions/).
  • Created a CSV file with all the event data mapped to the API fields:
    • this included the link to the accession record.
  • Followed these instructions to transform the event CSV to JSON in OpenRefine and run the API query:
    • I had to modify the JSON transformation a bit to fit my fields, and I think I removed a comma between each record in the JSON file to make bash happy.
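In case it helps, here is a minimal sketch of the matching step and the event payload in Python. The column layout, field names, and enum values ("implementer", "source") are placeholders, not necessarily what your data or your instance's controlled value lists will use:

```python
import csv
import io

def build_uri_map(tsv_text, repo_id=2):
    """Map legacy accession identifier -> ArchivesSpace accession URI.

    Assumes two tab-delimited columns per row: the ArchivesSpace id,
    then the accession identifier (placeholder layout)."""
    rows = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    return {identifier: f"/repositories/{repo_id}/accessions/{asid}"
            for asid, identifier in rows}

def event_payload(event_type, begin_date, accession_uri, agent_uri):
    """Shape one event record for POSTing to /repositories/:repo_id/events.

    Check the event schema and controlled value lists on your own
    instance -- the exact required fields vary by version."""
    return {
        "jsonmodel_type": "event",
        "event_type": event_type,
        "date": {"jsonmodel_type": "date", "date_type": "single",
                 "label": "event", "begin": begin_date},
        "linked_agents": [{"ref": agent_uri, "role": "implementer"}],
        "linked_records": [{"ref": accession_uri, "role": "source"}],
    }

# Example: two rows as they might come out of the database query
uri_map = build_uri_map("101\t2019-042\n102\t2019-043")
payload = event_payload("agreement_signed", "2019-05-01",
                        uri_map["2019-042"], "/agents/people/7")
```

The dictionary lookup replaces the "if this identifier, print this ID" script: one pass over the events CSV can then pull the right accession URI for each row.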
I hope this helps a little. I'm happy to answer any questions!

Corinne Chatnik
Digital Collections and Preservation Librarian
Schaffer Library, Union College
