If running Synchronize in batch mode exceeds the available memory,
reducing the batch size isn't likely to help, since at that point it
is the information about the local files that is most likely chewing
up all the memory. As of version 0.8.1 I've minimized the per-file
memory usage about as far as I can: only the filename path string and
the target object key name string are stored. Even so, this can still
add up to a lot of bytes for large numbers of files.
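Since only the path and key strings are kept per file, you can get a
rough lower bound on the memory Synchronize will need by totalling the
lengths of your local file paths. A sketch (the sample directory and
the doubling-for-UTF-16 rule of thumb are my own assumptions, not
anything JetS3t reports):

```shell
#!/bin/sh
# Rough gauge of the path-string data Synchronize must keep in memory.
# A small sample tree stands in for your real upload directory; Java
# stores strings as UTF-16, so the real footprint is at least roughly
# double this figure, plus per-object overhead.
DIR=$(mktemp -d)
touch "$DIR/a.txt" "$DIR/b.txt"

# Total bytes of every file path under the directory
BYTES=$(find "$DIR" -type f | wc -c | tr -d ' ')
echo "path-string bytes: $BYTES"

rm -rf "$DIR"
```

Point the `find` at your actual upload root to size your own tree.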
If tracking your local files requires more memory than you are able to
give the Synchronize app, you will need to "batch" the uploads
manually by uploading smaller subsets of files across multiple command
invocations.
Changing the "upload.transformed-files-batch-size" option won't have
any effect if you are not encrypting or gzipping files during upload.
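For reference, that option lives in the synchronize.properties file.
A sketch of the relevant line (the value shown is arbitrary, and the
comment simply restates the point above about when it applies):

```
# synchronize.properties (fragment)
# Only consulted when files are transformed during upload,
# i.e. when gzip compression or encryption is enabled.
upload.transformed-files-batch-size=100
```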
Cheers,
James
To batch manually, for example:

synchronize.sh UP target-bucket/DirA /path/to/files/DirA
synchronize.sh UP target-bucket/DirB /path/to/files/DirB
etc.
Alternatively, you can use normal file wildcards, but be careful
to avoid deleting files in S3 that don't match the wildcard:
synchronize.sh UP target-bucket /path/to/files/A* --nodelete
synchronize.sh UP target-bucket /path/to/files/B* /path/to/files/C* --nodelete
etc.
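The per-directory approach can also be scripted, so each invocation
only has to track one subdirectory's files. A minimal sketch, assuming
synchronize.sh is on your PATH and that each local subdirectory maps
to a matching path in the bucket (it prints the commands as a dry run;
remove the echo to perform the uploads):

```shell
#!/bin/sh
# Dry run: print one Synchronize invocation per top-level subdirectory.
# A sample tree stands in for the real upload root; point BASE at yours.
BASE=$(mktemp -d)
mkdir "$BASE/DirA" "$BASE/DirB"

for dir in "$BASE"/*/; do
    name=$(basename "$dir")
    # Remove "echo" to actually run the upload for this subdirectory
    echo synchronize.sh UP "target-bucket/$name" "$dir"
done

rm -rf "$BASE"
```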