Following my post "Importing Nesstar DDI & data into Dataverse 4x", Marina (webdev) has been testing some tools and has this question:
We have been exploring different API functions to import our
datasets into Dataverse.
The GET api/batch/migrate endpoint looked promising ("Import single
or multiple datasets that are in the local filesystem"), but there does not
seem to be any documentation describing the expected structure of the
directory to import.
This calls the Java method BatchImport.getImport(), which eventually
calls BatchServiceBean.processFilePath().
What is the expected import directory structure?
[Is each subdirectory expected to be named after a dataverse, with dataset
files inside it? Or something different?]
What file formats do these functions expect in order to import the
files correctly? .xml? .txt? etc.
Will a directory's .sav file be ingested as a data file?
We have tried different permutations and keep getting NullPointerExceptions,
so we would like to know how to use this batch API call.
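For concreteness, here is roughly how we have been invoking the endpoint. This is only a sketch: the query parameter names (`path`, `key`) and the base URL are our guesses, not documented values, so check the BatchImport resource class in your Dataverse build for the real ones.

```python
from urllib.parse import urlencode

# Assumed base URL of a local Dataverse installation (adjust host/port).
BASE = "http://localhost:8080/api/batch/migrate"

def build_migrate_url(import_dir: str, api_key: str) -> str:
    """Assemble the GET URL for a batch import of a local directory.

    The parameter names "path" and "key" are assumptions on our part;
    the endpoint's actual signature is undocumented.
    """
    return BASE + "?" + urlencode({"path": import_dir, "key": api_key})

url = build_migrate_url("/data/import-root", "xxxx-xxxx")
print(url)

# To actually fire the request (a running server is required):
# from urllib.request import urlopen
# with urlopen(url) as resp:
#     print(resp.read().decode())
```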
example (is this correct?):
- dataset 1
  - extra document 1
  - extra document 2
- dataset 2
- dataset n
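The example layout above can be materialized like this. The file names inside each dataset directory (codebook.xml, data.sav, extra document 1.txt) are our guesses at what the importer might want, since the expected contents are undocumented:

```python
from pathlib import Path
import tempfile

# One subdirectory per dataset under an import root; names are guesses.
root = Path(tempfile.mkdtemp()) / "import-root"
for n in (1, 2):
    ds = root / f"dataset {n}"
    ds.mkdir(parents=True)
    (ds / "codebook.xml").touch()   # a DDI codebook, perhaps?
    (ds / "data.sav").touch()       # a data file we hope gets ingested

# Supplementary files alongside the dataset metadata.
(root / "dataset 1" / "extra document 1.txt").touch()
(root / "dataset 1" / "extra document 2.txt").touch()

# Show the resulting tree, relative to the import root.
print(sorted(p.relative_to(root).as_posix() for p in root.rglob("*")))
```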