I'm just starting a project of importing accession records into ArchivesSpace. I have just under 6,000 records to deal with, which I can most easily pull out and normalize in a CSV file. I saw the accession import template on GitHub and have begun to crosswalk toward that. But in googling others' experiences with it, I saw that some fields are handled as events in ArchivesSpace and cannot be accommodated by the template, and that the API is the best way to get that data ingested. I can handle creating JSON for that type of data, and I have access to someone who can help me with the API. Is this still the best way to handle it? Are there other pitfalls to be aware of?
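For what it's worth, here is a rough sketch of the kind of JSON I have in mind for the event records, posted through the API. The hostname, repository id, and record URIs are placeholders, and the field names are my reading of the public event schema, so they should be checked against the /schemas endpoint on an actual instance before any bulk load:

```python
import json
import urllib.request

def build_event(event_type, date, accession_uri, agent_uri):
    """Assemble a minimal ArchivesSpace event JSON record linking an
    accession and an agent. Field names follow my reading of the public
    event schema; verify against your instance before bulk loading."""
    return {
        "jsonmodel_type": "event",
        "event_type": event_type,  # e.g. "acknowledgement_sent"
        "date": {
            "jsonmodel_type": "date",
            "date_type": "single",
            "label": "event",
            "begin": date,
        },
        # The agent who carried out the event (e.g. a staff member).
        "linked_agents": [{"ref": agent_uri, "role": "implementer"}],
        # The accession the event is about.
        "linked_records": [{"ref": accession_uri, "role": "source"}],
    }

def post_event(host, session_token, repo_id, record):
    """POST one event record, passing the session token in the
    X-ArchivesSpace-Session header (hostname/repo_id are placeholders)."""
    req = urllib.request.Request(
        f"{host}/repositories/{repo_id}/events",
        data=json.dumps(record).encode("utf-8"),
        headers={"X-ArchivesSpace-Session": session_token},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The idea would be to loop over the rows of the normalized CSV that carry event-type data and call `post_event` once per row, after authenticating once to get a session token.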
Does ingest via the CSV template create new agent records for creator, donor, etc.? Is there any way to link to existing agents, or do I have to merge agents once they've been created?
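If merging turns out to be unavoidable, I'm wondering whether it would be cleaner to look up existing agents through the search API first and only create the ones that are missing. Something like the following is what I have in mind, assuming a hostname and session token; the query syntax is my guess at how the /search endpoint filters person agents:

```python
import json
import urllib.parse
import urllib.request

def agent_search_url(host, name, page=1):
    """Build a query against the ArchivesSpace /search endpoint, limited
    to person agents (the 'type[]' filter is my assumption about the
    public search API)."""
    params = urllib.parse.urlencode({
        "q": f'title:"{name}"',
        "page": page,
        "type[]": "agent_person",
    })
    return f"{host}/search?{params}"

def find_agent_ref(host, session_token, name):
    """Return the URI of the first matching agent, or None if no match,
    so the caller can decide whether to create a new agent record."""
    req = urllib.request.Request(
        agent_search_url(host, name),
        headers={"X-ArchivesSpace-Session": session_token},
    )
    with urllib.request.urlopen(req) as resp:
        results = json.load(resp)["results"]
    return results[0]["uri"] if results else None
```

That way the crosswalk could carry the existing agent URI instead of a bare name, if the template or the API supports linking by URI.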
Last thing: the legacy accessions database was also used to store information about donors that may be sensitive and private. We are one repository among many using the ArchivesSpace instance. Will that data be visible to all editors on the back end (the way I can see agents across repositories)? There are some concerns about moving from a localized system to a shared cloud environment, and I'm not really sure how best to handle those questions.
Any guidance or recent experience with ingesting accessions would be helpful.