Cannot get it to work at all; B2 or S3 broken towards BB?


Martin Jensen

Nov 3, 2020, 2:28:49 PM11/3/20
to dedupfilesystem-sdfs-user-discuss
Hi guys - I really need some help.

I've tried the S3 guide and the B2 guide, and I cannot get it to mount at all.

Using master keys or application keys doesn't change the outcome. Using a numeric bucket ID doesn't work either.

I get a "java.io.IOException: com.amazonaws.services.s3.model.AmazonS3Exception: (Service: Amazon S3; Status Code: 400; Error Code: 400; Request ID: 405[...]a; S3 Extended Request ID: adX.....rA=), S3 Extended Request ID: adX[....]brA="

And I consistently get an 'invalid accountId' error in B2 mode.

I know they are correct, because I tried the bucket ID/app key/access key in FileZilla Pro, which confirms they work in both S3 mode and B2 mode.

However, I simply cannot get any of the guides to work, and it seems that other people have given up. Could anyone please test with a fresh set of credentials that the guides (for Linux) actually work to this day?

I've used CEPH, Swift, S3 at various vendors, B2, and Tardigrade, but I've never tried anything as difficult, or as far removed from the examples, as this. And it bugs me, because I really want to use and like this product, but apparently something has broken somewhere?

I've tried on Ubuntu 18.04 and 20.04 on different boxes, to no avail.

I'm using the EU endpoint, don't know if that matters; I tried the US ones, and had exactly the same experience.

Any ideas on what I could try?

Thank you for your time!

Kind regards,
Martin



Martin Jensen

Nov 3, 2020, 2:46:35 PM11/3/20
to dedupfilesystem-sdfs-user-discuss
Hi all,

I'm going to pull back the above. I've now gotten it to work in 1 of 8 scenarios: it has to be master key, US only, B2 only, for it to work with Backblaze's offerings. As I can get 20-40 Mbps per thread to the US, and easily 900+ Mbps per thread to the EU datacenter, I'd very much prefer the latter. The S3 wish is based on the assumption that S3 is more mature in SDFS than the B2 protocol.

Kind regards,
Martin

Martin Jensen

Nov 3, 2020, 3:30:14 PM11/3/20
to dedupfilesystem-sdfs-user-discuss
Well, I'm getting approximately 1 file (at ~8 kB) per 20-22 seconds into the mounted store, so a Linux kernel source tree would take about 18 days to copy into the volume. I take it that is not expected performance. The log does not have any entries after the mount.
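For reference, here's the back-of-envelope arithmetic behind that 18-day figure (the kernel file count of ~75,000 is my assumption, not measured):

```python
# Rough copy-time estimate: one ~8 kB file every 20-22 s into the mounted volume.
seconds_per_file = 21      # midpoint of the observed 20-22 s per file
kernel_files = 75_000      # assumed file count of a Linux kernel source tree

total_seconds = seconds_per_file * kernel_files
days = total_seconds / 86_400   # seconds per day

print(round(days, 1))      # prints 18.2
```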

Using B2 and 64 threads, I can push at 800 Mbps into another bucket in the same B2 US datacenter using the b2 CLI utility, with the same files.