Backblaze B2 not supported


Bud Bennett

Mar 28, 2018, 4:00:55 PM
to dedupfilesystem-sdfs-user-discuss
Hello,

I'm not sure what I’m doing wrong, but here’s how I understand things:

In my Backblaze B2 Cloud account settings, I see an Account ID and an Application Key, and I believe these are the two credentials I need to configure this appliance to work with the Backblaze B2 service, but I could be wrong.

 

I create a Cloud Storage Target with the following settings:

 

Cloud Storage Target Name: BackBlazeB2-01
Cloud Storage Provider: Backblaze (B2)
Disable Checking Authentication: UNCHECKED
Access Key: (I believe this is my Account ID)
Secret Key: (I believe this is my Application Key)
Connection Timeout: 15 seconds
Number of Write Threads: 24
Download/Upload Speed: Unlimited
Block Size: 50 MB
Encrypt Data: UNCHECKED

However, this produces an error, “Could not save Cloud Storage Target Configuration” with no other clues as to what was incorrect.

 

If I check “Disable Checking Authentication” with the same settings, it saves the configuration, which makes me think the credentials I’m supplying could be invalid.
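One way to rule the credentials in or out, independent of the appliance, is to call Backblaze's b2_authorize_account endpoint directly with the same Account ID / Application Key pair. A minimal sketch (ACCOUNT_ID and APPLICATION_KEY are placeholders for your own values):

curl -s -u "ACCOUNT_ID:APPLICATION_KEY" https://api.backblazeb2.com/b2api/v1/b2_authorize_account

A JSON response containing an authorizationToken means the key pair itself is fine and the problem is on the appliance side; a 401 means the credentials really are the issue.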

 

The logs produce this when I configure the provider (the Access Key and Secret Key have been redacted: digits removed and ellipses added):

 

2018-03-27 16:55:08,710 [sdfs] [controllers.CloudServers] [160] [play-thread-1]  - {"type":"backblaze","subtype":"backblaze","bucketLocation":"us-west-2","hostName":"","disableDNSBucket":false,"disableCheckAuth":true,"useAim":false,"encryptData":false,"accessKey":"…","secretKey":"…","maxThreads":24,"readSpeed":0,"writeSpeed":0,"archiveInDays":0,"connectionTimeoutMS":20000,"blockSizeMB":50,"proxyHost":"","proxyPort":0,"proxyUser":"","proxyPassword":"","proxyDomain":"","name":"BackBlazeB2","id":"","simpleS3":false,"usebasicsigner":false,"usev4signer":false,"simpleMD":false,"acceleratedAWS":false,"iaInDays":0}

 

If I attempt to add the cloud volume, it never mounts, no errors are produced, and the logs are cryptic. All I get is a red indicator. I would be happy to supply any information requested.


I filed a ticket with Backblaze about the issue, and here is the response I received:

---


Christopher, Mar 28, 12:21 PDT:

Hello Bud, 

To our knowledge, the Open DeDupe software/service has not been designed to work with B2. However, for definitive information, I'd suggest reaching out to their support team directly. If they do not currently support B2 as a storage service, submitting a feature request with their team may also be worthwhile.

If you have any further questions, please do not hesitate to write back.

Regards,
 Christopher

https://www.backblaze.com/blog/technical-support-engineer-christopher/
The Backblaze Team

Can anyone advise on what actions I need to take to configure this appliance to work properly with Backblaze B2 storage?


Thanks.



Bud Bennett

Mar 28, 2018, 4:06:50 PM
to dedupfilesystem-sdfs-user-discuss
Error in /var/log/sdfs/sdfs.log:

2018-03-27 16:52:07,516 [sdfs] [controllers.CloudServers] [160] [play-thread-4]  - {"type":"backblaze","subtype":"backblaze","bucketLocation":"us-west-2","hostName":"","disableDNSBucket":false,"disableCheckAuth":false,"useAim":false,"encryptData":false,"accessKey":"...","secretKey":"...","maxThreads":24,"readSpeed":0,"writeSpeed":0,"archiveInDays":0,"connectionTimeoutMS":20000,"blockSizeMB":50,"proxyHost":"","proxyPort":0,"proxyUser":"","proxyPassword":"","proxyDomain":"","name":"BackblazeB2-01","id":"","simpleS3":false,"usebasicsigner":false,"usev4signer":false,"simpleMD":false,"acceleratedAWS":false,"iaInDays":0}
2018-03-27 16:52:07,523 [play] [play.Logger] [604] [play-thread-4]  -

@77ddb679e
Internal Server Error (500) for request POST /cloudserver/?_dc=1522191127462

Execution exception
NoClassDefFoundError occured : org/jclouds/ContextBuilder

play.exceptions.JavaExecutionException: org/jclouds/ContextBuilder
        at play.mvc.ActionInvoker.invoke(ActionInvoker.java:231)
        at Invocation.HTTP Request(Play!)
Caused by: java.lang.NoClassDefFoundError: org/jclouds/ContextBuilder
        at org.opendedup.sdfs.filestore.cloud.BatchJCloudChunkStore.checkAccess(BatchJCloudChunkStore.java:1537)
        at controllers.CloudServers.auth(CloudServers.java:84)
        at controllers.CloudServers.add(CloudServers.java:175)
        at play.mvc.ActionInvoker.invokeWithContinuation(ActionInvoker.java:544)
        at play.mvc.ActionInvoker.invoke(ActionInvoker.java:494)
        at play.mvc.ActionInvoker.invokeControllerMethod(ActionInvoker.java:489)
        at play.mvc.ActionInvoker.invokeControllerMethod(ActionInvoker.java:458)
        at play.mvc.ActionInvoker.invoke(ActionInvoker.java:162)
        ... 1 more

Error in <BucketName>-volume-cfg-xml.log:
2018-03-27 16:13:54,971 [sdfs] [org.opendedup.sdfs.servers.HashChunkService] [73] [main]  - Unable to initiate ChunkStore
java.io.IOException: java.util.NoSuchElementException: key [b2] not in the list of providers or apis: {providers=[aws-s3], apis=[s3]}
        at org.opendedup.sdfs.filestore.cloud.BatchJCloudChunkStore.init(BatchJCloudChunkStore.java:548)
        at org.opendedup.sdfs.servers.HashChunkService.<init>(HashChunkService.java:71)
        at org.opendedup.sdfs.servers.HCServiceProxy.init(HCServiceProxy.java:201)
        at org.opendedup.sdfs.servers.SDFSService.start(SDFSService.java:78)
        at fuse.SDFS.MountSDFS.setup(MountSDFS.java:209)
        at fuse.SDFS.MountSDFS.init(MountSDFS.java:248)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
Caused by: java.util.NoSuchElementException: key [b2] not in the list of providers or apis: {providers=[aws-s3], apis=[s3]}
        at org.jclouds.ContextBuilder.newBuilder(ContextBuilder.java:175)
        at org.opendedup.sdfs.filestore.cloud.BatchJCloudChunkStore.init(BatchJCloudChunkStore.java:460)
        ... 10 more
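For what it's worth, both errors are classpath problems. The web UI's NoClassDefFoundError means the Play application can't see org.jclouds.ContextBuilder at all, while the mount-time NoSuchElementException is what ContextBuilder.newBuilder("b2") throws when jclouds is present but only the aws-s3/s3 provider jars are on the classpath, so the B2 provider can't be found. A quick way to see which jclouds jars the appliance actually ships (a sketch; /usr/share/sdfs/lib is an assumption about where the SDFS jars live, adjust to your install):

ls /usr/share/sdfs/lib | grep -i jclouds
ls /usr/share/sdfs/lib | grep -iE 'b2|backblaze'

If the second command returns nothing, the B2 provider simply isn't bundled with this build.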


Sam Silverberg

Mar 28, 2018, 5:40:07 PM
to dedupfilesystem-...@googlegroups.com
Hi Bud - I will share with you my config so you can try it out.  I will send it over to you directly later today.


Bud Bennett

Mar 29, 2018, 5:30:10 PM
to dedupfilesystem-sdfs-user-discuss

On Wednesday, 28 March 2018 15:40:07 UTC-6, Sam Silverberg wrote:
Hi Bud - I will share with you my config so you can try it out.  I will send it over to you directly later today.

Any luck? 

Sam Silverberg

Mar 30, 2018, 12:19:28 PM
to dedupfilesystem-...@googlegroups.com
Hi Bud,

We see the issue you are having and will provide a fix. It will be updated tomorrow, and I will provide you with all the steps for setup.


Sam Silverberg

Apr 3, 2018, 1:12:17 PM
to dedupfilesystem-...@googlegroups.com
Bud - 

Please download http://www.opendedup.org/downloads/SDFS-3.6.0.14-Setup.exe

On Windows, run:

mksdfs --volume-name=<volume-name> --volume-capacity=<capacity> --backblaze-enabled --cloud-access-key=<access-key> --cloud-secret-key=<secret-key> --cloud-bucket-name=<bucket name>

e.g.

mksdfs --volume-name=b4 --volume-capacity=4TB --backblaze-enabled --cloud-access-key=bbba989eebf7 --cloud-secret-key=777777ff8dc9b8c35750364bcfd33bc0de68f8de7d1 --cloud-bucket-name=sdfsb4


On Fri, Mar 30, 2018 at 9:19 AM, Sam Silverberg <sam.sil...@gmail.com> wrote:
Hi Bud,

We see the issue you are having and will provide a fix. It will be updated tomorrow and I will provide you all the steps for setup.
On Thu, Mar 29, 2018 at 2:30 PM Bud Bennett <qua...@gmail.com> wrote:

On Wednesday, 28 March 2018 15:40:07 UTC-6, Sam Silverberg wrote:
Hi Bud - I will share with you my config so you can try it out.  I will send it over to you directly later today.

Any luck? 


Bud Bennett

Apr 10, 2018, 11:15:25 AM
to dedupfilesystem-sdfs-user-discuss
I'm trying to configure the Linux appliance via the web interface, which doesn't look like it has Backblaze support. At least, the open source version of it doesn't.
Fair enough, I can understand that the open source version would be crippled in comparison to the commercial version. Most vendors do that anyway.

I do appreciate your help, but this doesn't really resolve the issue. If the open source Linux appliance isn't supposed to have Backblaze support, I understand. Please just say so.

It's looking to me like the open source version of the Linux appliance is not an option for us at this time. I'm in contact with a reseller; we'll look at evaluating the commercial version of the appliance instead.

Thanks for your help.


Sam Silverberg

Apr 10, 2018, 1:16:29 PM
to dedupfilesystem-...@googlegroups.com
Hi Bud - I would work with a reseller on this. It is supported from the command line but I would work through them to get it configured.

Sam


Bud Bennett

Apr 11, 2018, 11:53:38 AM
to dedupfilesystem-sdfs-user-discuss
First of all, I apologize for my terse response earlier. That was just my frustration coming through. Your post got me wondering: since the appliance is just an Ubuntu system, why not RTFM a bit and go through the Linux configuration guide? Here's what I did:

mkfs.sdfs --volume-name BB-Private001 --volume-capacity 499GB --backblaze-enabled --cloud-access-key=xxxx --cloud-secret-key=xxxx --cloud-bucket-name=Private001
Volume [BB-Private001] created with a capacity of [499GB]
check [/etc/sdfs/BB-Private001-volume-cfg.xml] for configuration details if you need to change anything

Then in /etc/sdfs, I saw a file BB-Private001-volume-cfg.xml created. (I won't post this unless requested - it contains encryption keys, passwords, etc.)

I created the directory structure as per what I saw in the XML file. (I modified /opt to /srv because that's where the storage is located.)
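For reference, the prep amounted to creating the paths referenced in the generated XML before mounting. Roughly (a sketch using my /srv layout, with subdirectory names taken from the config file):

mkdir -p /srv/sdfs/volumes/BB-Private001/ddb
mkdir -p /srv/sdfs/volumes/BB-Private001/files
mkdir -p /srv/sdfs/volumes/BB-Private001/chunkstore/chunks
mkdir -p /srv/sdfs/logs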

Then I ran the following command with the resultant output:
mount.sdfs BB-Private001 /srv/sdfs/volumes/BB-Private001/
Running Program SDFS Version 3.5.4.0
reading config file = /etc/sdfs/BB-Private001-volume-cfg.xml

Unable to initiate ChunkStore
java.io.IOException: java.util.NoSuchElementException: key [b2] not in the list of providers or apis: {providers=[aws-s3], apis=[s3]}
        at org.opendedup.sdfs.filestore.cloud.BatchJCloudChunkStore.init(BatchJCloudChunkStore.java:548)
        at org.opendedup.sdfs.servers.HashChunkService.<init>(HashChunkService.java:71)
        at org.opendedup.sdfs.servers.HCServiceProxy.init(HCServiceProxy.java:201)
        at org.opendedup.sdfs.servers.SDFSService.start(SDFSService.java:78)
        at fuse.SDFS.MountSDFS.setup(MountSDFS.java:209)
        at fuse.SDFS.MountSDFS.init(MountSDFS.java:248)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
Caused by: java.util.NoSuchElementException: key [b2] not in the list of providers or apis: {providers=[aws-s3], apis=[s3]}
        at org.jclouds.ContextBuilder.newBuilder(ContextBuilder.java:175)
        at org.opendedup.sdfs.filestore.cloud.BatchJCloudChunkStore.init(BatchJCloudChunkStore.java:460)
        ... 10 more
Service exit with a return value of 255

I just thought I'd mention that I didn't really see anything show up in the web interface; no cloud storage targets are listed.

From what I can see, the version of SDFS in the appliance is outdated, so I downloaded the Ubuntu Linux 64-bit sdfs package and upgraded it from 3.5.4.0 to 3.6.0.14 using dpkg -i sdfs_3.6.0.14_amd64.deb

Trying the same mount.sdfs command I did earlier, I now got this error instead:
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000004c6000000, 12784238592, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 12784238592 bytes for committing reserved memory.
# An error report file with more information is saved as:
# //hs_err_pid1138.log

Well, I only built the appliance with 4 GB of RAM. Hmm.. OK, Well, I guess I need to bump it up to 16 GB of RAM then. Alright. No problem.
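(As an aside, a workaround that comes up later in this thread, per https://github.com/opendedup/sdfs/issues/18, is to set the MEM environment variable before mounting so mount.sdfs asks for a smaller heap instead of the default; a minimal sketch, with 1024 being the value used later in this thread:)

export MEM=1024
mount.sdfs BB-Private001 /srv/sdfs/volumes/BB-Private001/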
Tried the mount command again after shutting down, expanding the RAM and starting it back up again.
Well, no luck. The only thing that changed was the version reported in the error message:

mount.sdfs BB-Private001 /srv/sdfs/volumes/BB-Private001/
Running Program SDFS Version 3.6.0.14 build date 2018-04-03 17:21
reading config file = /etc/sdfs/BB-Private001-volume-cfg.xml

Unable to initiate ChunkStore
java.io.IOException: java.util.NoSuchElementException: key [b2] not in the list of providers or apis: {providers=[aws-s3], apis=[s3]}
        at org.opendedup.sdfs.filestore.cloud.BatchJCloudChunkStore.init(BatchJCloudChunkStore.java:610)
        at org.opendedup.sdfs.servers.HashChunkService.<init>(HashChunkService.java:66)
        at org.opendedup.sdfs.servers.HCServiceProxy.init(HCServiceProxy.java:154)
        at org.opendedup.sdfs.servers.SDFSService.start(SDFSService.java:86)
        at fuse.SDFS.MountSDFS.setup(MountSDFS.java:214)
        at fuse.SDFS.MountSDFS.init(MountSDFS.java:253)

        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
Caused by: java.util.NoSuchElementException: key [b2] not in the list of providers or apis: {providers=[aws-s3], apis=[s3]}
        at org.jclouds.ContextBuilder.newBuilder(ContextBuilder.java:175)
        at org.opendedup.sdfs.filestore.cloud.BatchJCloudChunkStore.init(BatchJCloudChunkStore.java:522)
        ... 10 more
Service exit with a return value of 255

----

One interesting thing to note: in the GUI, I see multiple lines of "No cloud storage targets found" under the Cloud Storage Targets folder.
It looks like somehow the GUI isn't properly reading the contents of /etc/sdfs. At least I _think_ that's where the configuration is stored.
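A quick check of what's actually on disk (assuming the GUI enumerates volume configs from /etc/sdfs, which is where mkfs.sdfs writes them):

ls -l /etc/sdfs/*-volume-cfg.xml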

Just as an FYI, I have been in contact with a reseller, and the call went well and had a very positive feel to it. I'm waiting on pricing as well, but in the meantime I can't leave well enough alone, so I thought I'd at least post here to say how far I got.

Realistically, I realize I'm probably doing lots of things I shouldn't be doing within the appliance, since it looks like it's supposed to be primarily managed with the GUI, but it can't hurt anything and it's not in production yet.

Sam, I appreciate your help. Thanks for the quick responses.

Sam Silverberg

Apr 11, 2018, 2:20:21 PM
to dedupfilesystem-...@googlegroups.com
Hi Bud - B2 is currently only available in the Windows open source package. We are creating a Linux package for this as well, which will be released on Sunday.

Sam

Bud Bennett

Apr 13, 2018, 4:06:09 PM
to dedupfilesystem-sdfs-user-discuss

On Wednesday, 11 April 2018 12:20:21 UTC-6, Sam Silverberg wrote:
Hi Bud - B2 is currently only available in the Windows open source package. We are creating a Linux package for this as well, which will be released on Sunday.

OK, I'll give it a shot again next week and I'll try upgrading the SDFS component of the appliance with that new version to see if that helps anything.

Thanks for the heads up Sam!

Sam Silverberg

Apr 18, 2018, 6:32:58 PM
to dedupfilesystem-...@googlegroups.com
Hi Bud - You can use http://www.opendedup.org/downloads/sdfs_3.7.0.0_amd64.deb to get access to Backblaze.
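In case it helps, the in-place upgrade on the appliance is just a package install, the same dpkg step used earlier in this thread (a sketch, run as root on the appliance):

wget http://www.opendedup.org/downloads/sdfs_3.7.0.0_amd64.deb
dpkg -i sdfs_3.7.0.0_amd64.deb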



Bud Bennett

Apr 24, 2018, 5:22:44 PM
to dedupfilesystem-sdfs-user-discuss
Thanks for the update. Within the appliance, I ran dpkg -i sdfs_3.7.0.0_amd64.deb and it looked like it upgraded from 3.5.4 to 3.7.0.0.
However, the errors when I try to add the storage through the web interface are the same.

So, I thought I'd try just manually running:
mkfs.sdfs --volume-name=BBVol01 --volume-capacity=1TB --backblaze-enabled --cloud-access-key=... --cloud-secret-key=... --cloud-bucket-name=Private001
Attempting to create SDFS volume ...
Volume [BBVol01] created with a capacity of [1TB]
check [/etc/sdfs/BBVol01-volume-cfg.xml] for configuration details if you need to change anything

OK, so far so good. Now I see the volume showing up in the web interface, but it refuses to mount.

When I look in the web interface, I just see a red (!).

If I try to mount it via the command line, here's what I get:

root@TLS-svr-DEDUP01:/etc/sdfs# export MEM=1024
(Per https://github.com/opendedup/sdfs/issues/18)

root@TLS-svr-DEDUP01:/etc/sdfs# mount.sdfs BBVol01 /srv
Running Program SDFS Version 3.7.0.0 build date 2018-04-18 21:48
reading config file = /etc/sdfs/BBVol01-volume-cfg.xml
[Fatal Error] BBVol01-volume-cfg.xml:1:1: Content is not allowed in prolog.
org.xml.sax.SAXParseException; systemId: file:/etc/sdfs/BBVol01-volume-cfg.xml; lineNumber: 1; columnNumber: 1; Content is not allowed in prolog.
        at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:257)
        at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:339)
        at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:205)
        at org.opendedup.sdfs.Config.parseSDFSConfigFile(Config.java:242)
        at org.opendedup.sdfs.servers.SDFSService.start(SDFSService.java:66)

        at fuse.SDFS.MountSDFS.setup(MountSDFS.java:214)
        at fuse.SDFS.MountSDFS.init(MountSDFS.java:253)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
Exiting because org.xml.sax.SAXParseException; systemId: file:/etc/sdfs/BBVol01-volume-cfg.xml; lineNumber: 1; columnNumber: 1; Content is not allowed in prolog.

Service exit with a return value of 255
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000006c6000000, 4194304000, 0) failed; error='Cannot allocate memory' (errno=12)

#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 4194304000 bytes for committing reserved memory.

# An error report file with more information is saved as:
# //hs_err_pid14395.log

Also, FYI: I created the /srv mount point manually so I could store 1 TB of local storage.

Here's what /etc/sdfs/BBVol01-volume-cfg.xml looks like:

?xml version="1.0" encoding="UTF-8" standalone="no"?>
<subsystem-config version="3.7.0.0">
<locations dedup-db-store="/srv/sdfs/volumes/BBVol01/ddb" io-log="/srv/sdfs/volumes/BBVol01/ioperf.log"/>
<io chunk-size="256" claim-hash-schedule="0 0 12 ? * SUN" dedup-files="true" hash-type="VARIABLE_MD5" log-level="1" max-file-inactive="900" max-file-write-buffers="1" max-open-files="128" max-variable-segment-size="32" meta-file-cache="512" read-ahead="true" safe-close="true" safe-sync="false" variable-window-size="48" volume-type="standard" write-threads="8"/>
<permissions default-file="0644" default-folder="0755" default-group="0" default-owner="0"/>
<volume capacity="1TB" closed-gracefully="true" cluster-block-copies="2" cluster-id="sdfscluster" cluster-rack-aware="false" cluster-response-timeout="256000" compress-metadata="false" current-size="0" maximum-percentage-full="0.95" name="BBVol01" path="/srv/sdfs/volumes/BBVol01/files" perf-mon-file="/srv/sdfs//logs/volume-BBVol01-perf.json" read-timeout-seconds="-1" serial-number="2639267260289480266" use-dse-capacity="true" use-dse-size="true" use-perf-mon="false" write-timeout-seconds="-1"/>
<sdfscli enable="true" enable-auth="false" listen-address="localhost" password="39c0bd70a7e810908523a0a9795e7f1f5a4788c97160edc56d64292523d4352c" port="6442" salt="GkGKeb" use-ssl="true"/>
<local-chunkstore allocation-size="1099511627776" average-chunk-size="8192" chunk-store="/srv/sdfs/volumes/BBVol01/chunkstore/chunks" chunkstore-class="org.opendedup.sdfs.filestore.BatchFileChunkStore" cluster-config="/etc/sdfs/jgroups.cfg.xml" cluster-dse-password="admin" cluster-id="sdfscluster" compress="true" disable-auto-gc="false" encrypt="false" encryption-iv="..." encryption-key="..." fpp=".001" gc-class="org.opendedup.sdfs.filestore.gc.PFullGC" hash-db-store="/srv/sdfs/volumes/BBVol01/chunkstore/hdb-2639267260289480266" hashdb-class="org.opendedup.collections.RocksDBMap" io-threads="8" low-memory="false" max-repl-batch-sz="128">
<extended-config allow-sync="false" block-size="30 MB" delete-unclaimed="true" io-threads="16" local-cache-size="10GB" map-cache-size="100" service-type="b2" sync-check-schedule="4 59 23 * * ?" sync-files="true" upload-thread-sleep-time="10000"/>
<file-store access-key="..." bucket-name="CortexPrivate001" chunkstore-class="org.opendedup.sdfs.filestore.cloud.BatchJCloudChunkStore" enabled="true" secret-key="..."/>
</local-chunkstore>
</subsystem-config>

Bud Bennett

Apr 24, 2018, 5:28:52 PM
to dedupfilesystem-sdfs-user-discuss
So, then I tried to remove the volume through the web interface; that worked.

But when I try to add it through the web interface, I get a cryptic "Could not save Cloud Storage Configuration".

The sdfs.log file contains an error very similar to the one I posted at the start of this thread.

So, I think I'm going to have to give up on the appliance until a new version comes out or something.

Sam Silverberg

Apr 24, 2018, 5:43:16 PM
to dedupfilesystem-...@googlegroups.com
Hi Bud - The "<" was missing at the beginning of the config. Try this:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>

<subsystem-config version="3.7.0.0">
<locations dedup-db-store="/srv/sdfs/volumes/BBVol01/ddb" io-log="/srv/sdfs/volumes/BBVol01/ioperf.log"/>
<io chunk-size="256" claim-hash-schedule="0 0 12 ? * SUN" dedup-files="true" hash-type="VARIABLE_MD5" log-level="1" max-file-inactive="900" max-file-write-buffers="1" max-open-files="128" max-variable-segment-size="32" meta-file-cache="512" read-ahead="true" safe-close="true" safe-sync="false" variable-window-size="48" volume-type="standard" write-threads="8"/>
<permissions default-file="0644" default-folder="0755" default-group="0" default-owner="0"/>
<volume capacity="1TB" closed-gracefully="true" cluster-block-copies="2" cluster-id="sdfscluster" cluster-rack-aware="false" cluster-response-timeout="256000" compress-metadata="false" current-size="0" maximum-percentage-full="0.95" name="BBVol01" path="/srv/sdfs/volumes/BBVol01/files" perf-mon-file="/srv/sdfs//logs/volume-BBVol01-perf.json" read-timeout-seconds="-1" serial-number="2639267260289480266" use-dse-capacity="true" use-dse-size="true" use-perf-mon="false" write-timeout-seconds="-1"/>
<sdfscli enable="true" enable-auth="false" listen-address="localhost" password="39c0bd70a7e810908523a0a9795e7f1f5a4788c97160edc56d64292523d4352c" port="6442" salt="GkGKeb" use-ssl="true"/>
<local-chunkstore allocation-size="1099511627776" average-chunk-size="8192" chunk-store="/srv/sdfs/volumes/BBVol01/chunkstore/chunks" chunkstore-class="org.opendedup.sdfs.filestore.BatchFileChunkStore" cluster-config="/etc/sdfs/jgroups.cfg.xml" cluster-dse-password="admin" cluster-id="sdfscluster" compress="true" disable-auto-gc="false" encrypt="false" encryption-iv="..." encryption-key="..." fpp=".001" gc-class="org.opendedup.sdfs.filestore.gc.PFullGC" hash-db-store="/srv/sdfs/volumes/BBVol01/chunkstore/hdb-2639267260289480266" hashdb-class="org.opendedup.collections.RocksDBMap" io-threads="8" low-memory="false" max-repl-batch-sz="128">
<extended-config allow-sync="false" block-size="30 MB" delete-unclaimed="true" io-threads="16" local-cache-size="10GB" map-cache-size="100" service-type="b2" sync-check-schedule="4 59 23 * * ?" sync-files="true" upload-thread-sleep-time="10000"/>
<file-store access-key="..." bucket-name="CortexPrivate001" chunkstore-class="org.opendedup.sdfs.filestore.cloud.BatchJCloudChunkStore" enabled="true" secret-key="..."/>
</local-chunkstore>
</subsystem-config>
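Alternatively, rather than pasting the whole file back, the missing character can be restored in place (a sketch, using the config path from earlier in this thread):

sed -i '1s/^?xml/<?xml/' /etc/sdfs/BBVol01-volume-cfg.xml

After that, the file should start with a proper <?xml declaration and the "Content is not allowed in prolog" parse error should go away.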