The tool is fail-fast, so any failures will cause the entire import to halt. You can poll the status Web Script
to determine whether an import failed or not - the JSON version of this Web Script is specifically designed to be easy to consume by automated tooling (e.g. the scripts that initiate your scheduled imports). For Unix-style shell scripting, I especially like httpie for this kind of thing, but the tool itself is agnostic - use whatever you prefer.
When an import fails, the status Web Script will include the exception that caused the failure, so you shouldn't have to read log files to figure out which file was problematic.
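As a rough illustration, a polling loop for the JSON status might look like the sketch below. The URL, the `processingState` field name, and the state values are all assumptions here - check your installation's actual endpoint and payload before relying on them.

```shell
#!/bin/sh
# Hypothetical status URL -- adjust host, port, and path to match your installation.
STATUS_URL="http://localhost:8080/alfresco/service/bulk/import/status.json"

# Extract a top-level string field from JSON on stdin.
# The field name passed in ($1) is illustrative -- inspect your tool's real payload.
json_field() {
  sed -n "s/.*\"$1\"[[:space:]]*:[[:space:]]*\"\([^\"]*\)\".*/\1/p"
}

# Poll until the import reaches a terminal state, then print that state.
# The state strings ("In progress", etc.) are placeholders, not the tool's actual values.
poll_import() {
  while :; do
    state=$(curl -fsS "$STATUS_URL" | json_field processingState)
    case "$state" in
      "Never run"|"In progress") sleep 10 ;;  # still working -- wait and re-poll
      *) printf '%s\n' "$state"; return ;;    # terminal: e.g. Succeeded / Failed
    esac
  done
}
```

A scheduled-import script could call `poll_import` after kicking off a run and branch on the printed state.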
Because files are loaded in batches, any files in the same batch as the failing file will also be rolled back (even though they may have been written to the repository correctly). Rather than trying to determine which files were in that batch (which the tool doesn't track, and can't easily report even via the log files), you're better off either:
- fixing whatever the root cause issue is
- pulling the offending file out of the source content set
and then retrying the exact same import, with the "replace existing file" option disabled (turned off). The reason for this is that the tool is designed to be efficiently re-runnable, and will quickly skip over the files that were already successfully imported, then pick back up at the first file that failed to import (or was successful, but got rolled back as part of a failing batch).
So in short, rather than trying to track errors at an individual file level, it's better to simply poll the status Web Script to determine when an import has completed and whether it succeeded or failed. If it failed, use the exception information to identify the root cause, and once that root cause issue is corrected (or the offending content removed from the source content set), re-run the exact same import with that same source content set, ensuring that "replace existing files" is disabled (turned off).
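The retry workflow above could be sketched roughly as follows. The endpoint URLs, credentials, and form-field names (`sourceDirectory`, `targetPath`, `replaceExisting`, `exception`) are all hypothetical - substitute whatever your installation actually exposes.

```shell
#!/bin/sh
# Hypothetical endpoints -- adjust to your installation.
INITIATE_URL="http://localhost:8080/alfresco/service/bulk/import/initiate"

# Re-run the exact same import with replace-existing off. Already-imported
# files are skipped quickly, so the tool resumes at the first file that
# failed (or succeeded but was rolled back with its batch).
# Parameter names here are illustrative, not the tool's documented API.
retry_import() {
  curl -fsS -u admin:admin \
       --data-urlencode "sourceDirectory=$1" \
       --data-urlencode "targetPath=$2" \
       --data-urlencode "replaceExisting=false" \
       "$INITIATE_URL"
}

# Pull the reported exception out of a failed run's status JSON (on stdin),
# so you don't have to dig through log files. The "exception" field name
# is an assumption -- check the actual status payload.
failure_reason() {
  sed -n 's/.*"exception"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p'
}
```

On a failed run you'd print `failure_reason`, fix the root cause (or pull the offending file), then call `retry_import` with the same source directory.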