I have the same problem. I modified /etc/security/limits.conf as per
the quick start and rebooted.
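For reference, the entries I mean look roughly like this (illustrative; the quick start may specify a particular user instead of *, but the values match my ulimit output below):

# /etc/security/limits.conf -- illustrative entries
*    soft    nofile    65535
*    hard    nofile    65535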
It looks like the limit changes have taken effect. Here is the output
of ulimit:
pmah@rufus:~$ ulimit -Hn
65535
pmah@rufus:~$ ulimit -Sn
65535
pmah@rufus:~$ lsof | wc -l
730
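(For what it's worth, nofile is a per-process limit, so counting the descriptors held by the SDFS java process itself is probably more telling than a system-wide lsof. Something like the following, where <pid> stands in for whatever PID pgrep reports:)

pmah@rufus:~$ pgrep -fl java                # find the SDFS java process
pmah@rufus:~$ ls /proc/<pid>/fd | wc -l     # descriptors held by that process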
Here is a sample of the errors I'm getting from the sdfs volume log.
2011-10-12 20:48:30,299 [Thread-11] WARN sdfs - unable to write file metadata for [/mnt/data01/sdfs/files/backups/zbot/20111012-162356/pics/pics_2005/september/Picture 038.jpg]
java.io.FileNotFoundException: /mnt/data01/sdfs/files/backups/zbot/20111012-162356/pics/pics_2005/september/Picture 038.jpg (Too many open files)
    at java.io.FileOutputStream.open(Native Method)
    at java.io.FileOutputStream.<init>(Unknown Source)
    at java.io.FileOutputStream.<init>(Unknown Source)
    at org.opendedup.sdfs.io.MetaDataDedupFile.writeFile(MetaDataDedupFile.java:583)
    at org.opendedup.sdfs.io.MetaDataDedupFile.unmarshal(MetaDataDedupFile.java:610)
    at org.opendedup.sdfs.io.MetaDataDedupFile.sync(MetaDataDedupFile.java:912)
    at fuse.SDFS.SDFSFileSystem.mknod(SDFSFileSystem.java:381)
    at fuse.Filesystem3ToFuseFSAdapter.mknod(Filesystem3ToFuseFSAdapter.java:132)
2011-10-12 20:48:35,975 [Thread-297] WARN sdfs - unable to write file metadata for [/mnt/data01/sdfs/files/backups/zbot/20111012-162356/peter/work/fasttrack/spike_report/modules/DataModel/FileSystem/Export/LocalToLocalHandler.php.html]
java.io.FileNotFoundException: /mnt/data01/sdfs/files/backups/zbot/20111012-162356/peter/work/fasttrack/spike_report/modules/DataModel/FileSystem/Export/LocalToLocalHandler.php.html (Too many open files)
    at java.io.FileOutputStream.open(Native Method)
    at java.io.FileOutputStream.<init>(Unknown Source)
    at java.io.FileOutputStream.<init>(Unknown Source)
    at org.opendedup.sdfs.io.MetaDataDedupFile.writeFile(MetaDataDedupFile.java:583)
    at org.opendedup.sdfs.io.MetaDataDedupFile.unmarshal(MetaDataDedupFile.java:610)
    at org.opendedup.sdfs.filestore.MetaFileStore$1.onEviction(MetaFileStore.java:46)
    at org.opendedup.sdfs.filestore.MetaFileStore$1.onEviction(MetaFileStore.java:1)
    at com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap.notifyListener(ConcurrentLinkedHashMap.java:567)
    at com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap.afterCompletion(ConcurrentLinkedHashMap.java:350)
    at com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap.put(ConcurrentLinkedHashMap.java:778)
    at com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap.put(ConcurrentLinkedHashMap.java:750)
    at org.opendedup.sdfs.filestore.MetaFileStore.cacheMF(MetaFileStore.java:66)
    at org.opendedup.sdfs.filestore.MetaFileStore.getMF(MetaFileStore.java:101)
    at fuse.SDFS.SDFSFileSystem.mknod(SDFSFileSystem.java:380)
    at fuse.Filesystem3ToFuseFSAdapter.mknod(Filesystem3ToFuseFSAdapter.java:132)
2011-10-12 20:48:35,976 [Thread-297] WARN sdfs - unable to write file metadata for [/mnt/data01/sdfs/files/backups/zbot/20111012-162356/pics/pics_2005/september/Picture 071.jpg]
java.io.FileNotFoundException: /mnt/data01/sdfs/files/backups/zbot/20111012-162356/pics/pics_2005/september/Picture 071.jpg (Too many open files)
    at java.io.FileOutputStream.open(Native Method)
    at java.io.FileOutputStream.<init>(Unknown Source)
    at java.io.FileOutputStream.<init>(Unknown Source)
    at org.opendedup.sdfs.io.MetaDataDedupFile.writeFile(MetaDataDedupFile.java:583)
    at org.opendedup.sdfs.io.MetaDataDedupFile.unmarshal(MetaDataDedupFile.java:610)
    at org.opendedup.sdfs.io.MetaDataDedupFile.sync(MetaDataDedupFile.java:912)
    at fuse.SDFS.SDFSFileSystem.mknod(SDFSFileSystem.java:381)
    at fuse.Filesystem3ToFuseFSAdapter.mknod(Filesystem3ToFuseFSAdapter.java:132)
I never had this problem with 1.0.1, but it started after I upgraded to 1.0.6; the same problem occurred with 1.0.7 and persists with 1.1.0.
I basically just copy (via scp) about 50GB of files nightly to an sdfs
volume.
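In case the exact workload matters, the copy is roughly of this shape (user, host, and source path are illustrative; the destination matches the paths in the log above):

scp -rp backupuser@sourcehost:/data /mnt/data01/sdfs/files/backups/zbot/$(date +%Y%m%d-%H%M%S)/  # user/host/source path illustrative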
Any suggestions?
Thanks.
Peter