Hello Guys,
we are using Gerrit v3.0.9 with S3 as the storage backend for LFS objects.
lfs.config:
[storage]
backend = s3
[s3]
region = ap-south-1
bucket = gerrit-lfs-test
storageClass = REDUCED_REDUNDANCY
expirationSeconds = 60
disableSslVerify = false
We have been using this setup for about a year, until we ran into an issue pushing a 10 GB file with LFS.
AWS documents that a single PUT operation is limited to 5 GB, and that objects larger than that must be uploaded with the multipart upload API.
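To illustrate what we mean by multipart upload, here is a minimal sketch (our own test code, not the plugin's) using the AWS SDK for Java's TransferManager, which switches to the multipart API above a configurable threshold. The bucket, object key, and file name are placeholders:

import java.io.File;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;

public class MultipartUploadSketch {
  public static void main(String[] args) throws Exception {
    // Client for the same region as our bucket.
    AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withRegion("ap-south-1")
        .build();

    // TransferManager splits uploads above the threshold into parts, so a
    // 10 GB file becomes CreateMultipartUpload / UploadPart /
    // CompleteMultipartUpload calls instead of one 5 GB-limited PUT.
    TransferManager tm = TransferManagerBuilder.standard()
        .withS3Client(s3)
        .withMultipartUploadThreshold(100L * 1024 * 1024) // 100 MB
        .build();

    Upload upload = tm.upload("gerrit-lfs-test", "lfs/objects/<oid>",
        new File("big-object.bin")); // placeholder key and file
    upload.waitForCompletion();
    tm.shutdownNow();
  }
}

Uploading this way works fine in our tests, so the 5 GB ceiling seems to apply only to the single-PUT path. Our guess is that the plugin hands the git-lfs client a presigned URL for a single PUT (which would match the expirationSeconds setting above), but we could not confirm this from the docs.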
1. Does the LFS plugin use multipart upload? If yes, what changes do we need to make to our lfs.config to enable it?
2. If not, what other solutions would you suggest so that we can keep S3 as the backend for Git LFS and upload arbitrarily large objects?
Regards
Shad