Can't mount a large volume

Dan Yasny

Apr 21, 2014, 18:23:54
to dedupfilesystem-...@googlegroups.com
[root@ems-backup ~]# mkfs.sdfs /dev/mapper/vg_dedupe-lv_dedupe --volume-name=pool0 --volume-capacity=36.38Tb --hash-type=VARIABLE_MURMUR3
Attempting to create SDFS volume ...
Volume [pool0] created with a capacity of [36.38Tb]
check [/etc/sdfs/pool0-volume-cfg.xml] for configuration details if you need to change anything
[root@ems-backup ~]# mkdir /opt/dedupe
[root@ems-backup ~]# mount.sdfs pool0 /opt/dedupe/
Running Program SDFS Version 2.0
reading config file = /etc/sdfs/pool0-volume-cfg.xml
Loading Hashes |)))))))))))))))))))))))))))))))))]                | 67% java.lang.OutOfMemoryError: Java heap space
        at java.util.BitSet.initWords(BitSet.java:164)
        at java.util.BitSet.<init>(BitSet.java:159)
        at org.opendedup.collections.FileByteArrayLongMap.setUp(FileByteArrayLongMap.java:224)
        at org.opendedup.collections.FileBasedCSMap.setUp(FileBasedCSMap.java:164)
        at org.opendedup.collections.FileBasedCSMap.init(FileBasedCSMap.java:57)
        at org.opendedup.sdfs.filestore.HashStore.connectDB(HashStore.java:157)
        at org.opendedup.sdfs.filestore.HashStore.<init>(HashStore.java:77)
        at org.opendedup.sdfs.servers.HashChunkService.<init>(HashChunkService.java:61)
        at org.opendedup.sdfs.servers.HCServiceProxy.init(HCServiceProxy.java:107)
        at org.opendedup.sdfs.servers.SDFSService.start(SDFSService.java:51)
        at fuse.SDFS.MountSDFS.main(MountSDFS.java:146)
Exiting because java.lang.OutOfMemoryError: Java heap space


I think this is pretty much self-explanatory. I've got 64 GB of RAM in this system. CentOS 6.5, updated today.

Sam Silverberg

Apr 21, 2014, 18:25:53
to dedupfilesystem-...@googlegroups.com
Edit the mount.sdfs script and change the JVM heap options to -Xms6g and -Xmx6g.
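
For illustration, the change inside the script looks like this (the "before" sizes shown are an assumption; the exact java invocation varies by SDFS version):

# before (illustrative defaults):
#   java ... -Xms2g -Xmx2g ... fuse.SDFS.MountSDFS ...
# after:
#   java ... -Xms6g -Xmx6g ... fuse.SDFS.MountSDFS ...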



Sam Silverberg

Apr 21, 2014, 18:28:30
to dedupfilesystem-...@googlegroups.com
You may also want to try the enhanced hash map by adding --chunk-store-hashdb class=org.opendedup.collections.MaxFileBasedCSMap to your mkfs.sdfs command. It's experimental but will run better with larger volumes.

Sam Silverberg

Apr 21, 2014, 18:29:26
to dedupfilesystem-...@googlegroups.com
Correction, the option should be:

--chunk-store-hashdb-class=org.opendedup.collections.MaxFileBasedCSMap

Dan Yasny

Apr 21, 2014, 18:29:58
to dedupfilesystem-...@googlegroups.com
Thanks,

I'm completely new to this, so I'll need some hand-holding:

1. Where is mount.sdfs located?
2. Can I alter the volume after I've already formatted it, or do I have to reformat it?

Thanks



Sam Silverberg

Apr 21, 2014, 18:31:07
to dedupfilesystem-...@googlegroups.com
You will need to reformat it.

/usr/share/sdfs/mount.sdfs is the file you will want to change.
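
A quick way to make that edit (a sketch; run the grep first and check the matched line, since the flag layout can differ between versions):

grep -n 'Xm[sx]' /usr/share/sdfs/mount.sdfs
sed -i 's/-Xms[0-9]*[gGmM]/-Xms6g/; s/-Xmx[0-9]*[gGmM]/-Xmx6g/' /usr/share/sdfs/mount.sdfs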


Dan Yasny

Apr 21, 2014, 18:46:11
to dedupfilesystem-...@googlegroups.com
Thanks.
How do I remove the pool? 

[root@ems-backup ~]# mkfs.sdfs /dev/mapper/vg_dedupe-lv_dedupe --volume-name=pool0 --volume-capacity=36.38Tb --hash-type=VARIABLE_MURMUR3 --chunk-store-hashdb-class=org.opendedup.collections.MaxFileBasedCSMap
Attempting to create SDFS volume ...
ERROR : Unable to create volume because java.io.IOException: Volume [pool0] already exists
java.io.IOException: Volume [pool0] already exists
        at org.opendedup.sdfs.VolumeConfigWriter.parseCmdLine(VolumeConfigWriter.java:415)
        at org.opendedup.sdfs.VolumeConfigWriter.main(VolumeConfigWriter.java:1004)


Tried this:
[root@ems-backup ~]# sdfscli  --partition-rm pool0
0 [main] INFO org.apache.commons.httpclient.HttpMethodDirector  - I/O exception (java.net.ConnectException) caught when processing request: Connection refused
2 [main] INFO org.apache.commons.httpclient.HttpMethodDirector  - Retrying request
8 [main] INFO org.apache.commons.httpclient.HttpMethodDirector  - I/O exception (java.net.ConnectException) caught when processing request: Connection refused
8 [main] INFO org.apache.commons.httpclient.HttpMethodDirector  - Retrying request
14 [main] INFO org.apache.commons.httpclient.HttpMethodDirector  - I/O exception (java.net.ConnectException) caught when processing request: Connection refused
14 [main] INFO org.apache.commons.httpclient.HttpMethodDirector  - Retrying request
java.io.IOException: java.net.ConnectException: Connection refused
        at org.opendedup.sdfs.mgmt.cli.MgmtServerConnection.getResponse(MgmtServerConnection.java:63)
        at org.opendedup.sdfs.mgmt.cli.ProcessBlockDeviceRm.runCmd(ProcessBlockDeviceRm.java:16)
        at org.opendedup.sdfs.mgmt.cli.SDFSCmdline.parseCmdLine(SDFSCmdline.java:191)
        at org.opendedup.sdfs.mgmt.cli.SDFSCmdline.main(SDFSCmdline.java:440)
Caused by: java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:579)
        at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:618)
        at sun.security.ssl.SSLSocketImpl.<init>(SSLSocketImpl.java:451)
        at sun.security.ssl.SSLSocketFactoryImpl.createSocket(SSLSocketFactoryImpl.java:140)
        at org.apache.commons.httpclient.contrib.ssl.EasySSLProtocolSocketFactory.createSocket(EasySSLProtocolSocketFactory.java:188)
        at org.apache.commons.httpclient.HttpConnection.open(HttpConnection.java:707)
        at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:387)
        at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
        at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
        at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
        at org.opendedup.sdfs.mgmt.cli.MgmtServerConnection.getResponse(MgmtServerConnection.java:51)
        ... 3 more

Sam Silverberg

Apr 21, 2014, 18:50:56
to dedupfilesystem-...@googlegroups.com
Dan,

SDFS is a virtual filesystem that lives on top of an existing filesystem. All of the SDFS data will live in /opt/sdfs. You must format /dev/mapper/vg_dedupe-lv_dedupe as ext4 or xfs and then mount it at /opt/sdfs. (That is also why sdfscli failed with "Connection refused": it talks to the management interface of a running, mounted volume.) Then delete the config from /etc/sdfs/pool0-volume-cfg.xml. Then create your volume:


mkfs.sdfs --volume-name=pool0 --volume-capacity=36.38Tb --hash-type=VARIABLE_MURMUR3 --chunk-store-hashdb-class=org.opendedup.collections.MaxFileBasedCSMap
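
In order, the cleanup steps before that command look roughly like this (a sketch, assuming xfs; ext4 works the same way within its size limits):

mkfs.xfs /dev/mapper/vg_dedupe-lv_dedupe
mkdir -p /opt/sdfs
mount /dev/mapper/vg_dedupe-lv_dedupe /opt/sdfs
rm /etc/sdfs/pool0-volume-cfg.xml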


Hope that helps.

Dan Yasny

Apr 21, 2014, 19:02:13
to dedupfilesystem-...@googlegroups.com
Thanks, I guess I didn't realise it had to be on an existing FS first. 

I'll also have to cut down the size of the volume; 40 TB is too large for ext4, and I don't want to use the experimental 64-bit support yet. Will the settings be the same for three 12 TB volumes?

In fact, if you could point me to some information about how SDFS works and how to calculate these settings, I'd be very grateful.

Dan

Sam Silverberg

Apr 21, 2014, 19:20:43
to dedupfilesystem-...@googlegroups.com
Dan,

I believe you can use XFS for volumes larger than 16 TB. The settings will work fine for 12 TB volumes as well. You could change the -Xms and -Xmx to 4g instead of 6g for these smaller volumes.
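
As a rough sketch of creating the three smaller pools (the per-pool base directories here are an assumption; if each pool should keep its data on its own underlying filesystem, point each one at a separate --base-path, and check mkfs.sdfs --help to confirm your version supports that option):

for i in 0 1 2; do
  mkfs.sdfs --volume-name=pool$i --volume-capacity=12Tb \
    --hash-type=VARIABLE_MURMUR3 \
    --chunk-store-hashdb-class=org.opendedup.collections.MaxFileBasedCSMap \
    --base-path=/opt/sdfs/pool$i
done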

Dan Yasny

Apr 22, 2014, 01:52:27
to dedupfilesystem-...@googlegroups.com
Thanks Sam,

I am aware of the advantages of XFS, but my dataset will contain large dd-generated dumps of LV snapshots (AFAIK XFS is supposed to be better at handling large numbers of small files), and since I need to keep three monthly backups, having three separate partitions for those monthlies is actually quite alright. So unless there's an advantage to running a single SDFS partition (and that's obviously something you know better than I do), I'm quite happy using three smaller ones.

Dan