Release 3.10


Sam Silverberg

Jan 4, 2020, 12:38:33 PM
to dedupfilesystem-sdfs-user-discuss
I am proud to announce the release of SDFS 3.10 with numerous bug fixes and feature updates.

SDFS Version 3.10 is a year in the making and includes fixes for many of the issues raised in this forum. It also adds a backlog feature that lets uploads to cloud storage complete in the background after data is staged to the local SDFS cache. To enable it, add --backlog-size to the mkfs.sdfs command with the size of the backlog you would like to keep locally; there must be enough space on the local volume to hold the backlog while it uploads. Setting --backlog-size to -1 makes the backlog unbounded, while setting it to 0 disables it.

e.g.

mkfs.sdfs --volume-name=pool0 --volume-capacity=100TB --backlog-size=1TB  --aws-enabled true --cloud-access-key <access-key> --cloud-secret-key <secret-key> --cloud-bucket-name <unique bucket name>
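To keep an unbounded backlog instead, the same command shape applies (illustrative values):

mkfs.sdfs --volume-name=pool0 --volume-capacity=100TB --backlog-size=-1 --aws-enabled true --cloud-access-key <access-key> --cloud-secret-key <secret-key> --cloud-bucket-name <unique bucket name>

and --backlog-size=0 turns the backlog off entirely.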

Rinat Camal

Jan 4, 2020, 5:13:28 PM
to dedupfilesystem-sdfs-user-discuss
Hi!
I've already tested the new version on a fresh test VM.

Unfortunately, the issue https://github.com/opendedup/sdfs/issues/88#issuecomment-518648313 still exists in the new version.

Scott Ruckh

Jan 30, 2020, 6:55:06 PM
to dedupfilesystem-sdfs-user-discuss

On Saturday, January 4, 2020 at 10:38:33 AM UTC-7, Sam Silverberg wrote:
I am proud to announce the release of SDFS 3.10 with numerous bug fixes and feature updates.

What is the correct way to upgrade an existing volume (v3.7.8) to the new 3.10.8? Just updating the binaries through RPM and trying to mount the volume with the old configuration produces the following output:

<JAVA_HOME>/lib/ext exists, extensions mechanism no longer supported; Use -classpath instead.
.Cannot create Java VM
Service exit with a return value of 1

This is the version of java running on the server:  java-1.8.0-openjdk-1.8.0.242.b08-0
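For reference, the upgrade attempt was roughly the following; the package filename and mount point here are placeholders, not the exact names on my system:

rpm -Uvh SDFS-3.10.8-*.rpm
mount.sdfs pool0 /media/pool0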

Louis van Dyk

May 19, 2020, 9:06:01 PM
to dedupfilesystem-sdfs-user-discuss
Scott, did you ever get this fixed? I have the same problem with the Java error.

Louis van Dyk

May 19, 2020, 9:18:37 PM
to dedupfilesystem-sdfs-user-discuss
Sharing my fix:

rmdir /usr/share/sdfs/bin/jre/lib/ext
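Some background on why this works, as far as I can tell: Java 9 and later removed the extensions mechanism and refuse to start if a <JAVA_HOME>/lib/ext directory is present, which is exactly the error above. Note that rmdir only succeeds on an empty directory; if the bundled JRE's ext directory has anything in it, moving it aside should work too (path assumed from the default RPM install location):

mv /usr/share/sdfs/bin/jre/lib/ext /usr/share/sdfs/bin/jre/lib/ext.bak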

Scott Ruckh

May 19, 2020, 9:23:15 PM
to dedupfilesystem-...@googlegroups.com
Thank you. I will try it out with the new version. I had gone back to 3.7.8 since I never got a response.


Scott Ruckh

May 20, 2020, 7:20:42 PM
to dedupfilesystem-...@googlegroups.com
Well, it got rid of the error, but the actual volume is SUPER SLOW. The volume mounts and I am able to see files and such, but just creating (touching) a file takes about 5 seconds. I was using the SDFS volume for backups, but it is not really workable even though the volume did mount. Maybe I will try recreating the volume from scratch. Since I first started using SDFS, my cloud storage provider has added native support for the AWS S3 API, so it is no longer critical that SDFS work in my environment.
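For anyone who wants to quantify the lag, this is roughly the kind of check I mean (the mount point here is a placeholder):

time touch /media/pool0/testfile

The reported real time hovers around 5 seconds per file.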

Louis van Dyk

May 21, 2020, 11:53:31 AM
to dedupfilesystem-...@googlegroups.com
Let me know if a) dropping back down to 3.7.8 helps or b) recreating the volume helps. It would be interesting to know.

Thanks for the feedback.

Scott Ruckh

May 21, 2020, 5:33:31 PM
to dedupfilesystem-...@googlegroups.com
When creating a new SDFS volume from scratch, this is what is sent to standard out (the filesystem does mount):

WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.google.inject.internal.cglib.core.$ReflectUtils$2 (file:/usr/share/sdfs/lib/sdfs.jar) to method java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain)
WARNING: Please consider reporting this to the maintainers of com.google.inject.internal.cglib.core.$ReflectUtils$2
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
multiplier=32 size=8
mem=268443456 memperDB=33555432 bufferSize=1073741824 bufferSizePerDB=134217728
Loading Existing Hash Tables |))))))))))))))))))))))))))))))))))))))))))))))))))| 100%
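From what I can tell, those WARNING lines are the generic illegal-reflective-access warnings that Java 9 and later print for older libraries, so I am assuming they are harmless. To confirm which JVM the bundled launcher is actually using (path assumed from the lib/ext fix earlier in this thread):

/usr/share/sdfs/bin/jre/bin/java -version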

Scott Ruckh

May 21, 2020, 6:53:33 PM
to dedupfilesystem-...@googlegroups.com
I don't have any official performance numbers, but the volume still feels laggy after recreating it from scratch and using the newly created XML file (which produces the warning messages posted earlier). It also seems to bring the computer to a crawl whenever it is actively processing I/O.

My backup states that 1.18 GB of data was written. The Backblaze backend says 621 MB is currently stored for the same backup. The local cache on disk was 2.9 GB while the volume was mounted and is 1.8 GB now that it is unmounted. That seems like quite a bit of overhead for the local cache.
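Rough arithmetic on those numbers: 621 MB stored against 1.18 GB written works out to roughly a 47% reduction from deduplication/compression, while the 1.8 GB unmounted cache is about 1.5x the data written.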