If I have a 1000GB volume, when does it become full? When 1000GB has
been written to it (and therefore df shows it as full), or only once
1000GB of unique chunks has been written?
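For what it's worth, my mental model of the "unique chunks" behaviour is something like the sketch below. This is purely illustrative, not SDFS's actual implementation - the fixed chunk size and SHA-256 hashing are assumptions:

```python
# Illustrative sketch of deduplicated accounting (not SDFS code):
# logical bytes grow on every write, but physical storage grows
# only when a chunk's hash has not been seen before.
import hashlib

CHUNK_SIZE = 4096  # assumed fixed chunk size for this illustration

class DedupStore:
    def __init__(self):
        self.chunks = {}        # hash -> chunk data (physical storage)
        self.logical_bytes = 0  # what df on the volume would report

    def write(self, data: bytes):
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            self.logical_bytes += len(chunk)
            h = hashlib.sha256(chunk).hexdigest()
            if h not in self.chunks:  # only unique chunks consume space
                self.chunks[h] = chunk

    def physical_bytes(self):
        return sum(len(c) for c in self.chunks.values())

store = DedupStore()
store.write(b"A" * CHUNK_SIZE * 3)  # three identical chunks
store.write(b"A" * CHUNK_SIZE)      # the same chunk yet again
print(store.logical_bytes)    # 16384 - four chunks' worth written
print(store.physical_bytes()) # 4096  - but only one unique chunk stored
```

If the volume fills on unique chunks rather than logical bytes, a store like this could accept far more than its nominal capacity of highly redundant data.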
I assumed it was the latter, and tried to rsync a large amount of data
onto the volume. The rsync died about 8 hours in:
rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]:
Broken pipe (32)
rsync: write failed on
"/store/backups/backup/esx06/orac-slave-new/orac-slave-new-2011-09-02_13-19-01/oral-slave-new-flat.vmdk":
Software caused connection abort (103)
rsync: stat "/store/backups/backup/esx06/orac-slave-new/orac-slave-new-2011-09-02_13-19-01/.oral-slave-new-flat.vmdk.JCgfIi"
failed: Transport endpoint is not connected (107)
rsync: connection unexpectedly closed (3121 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at
io.c(601) [sender=3.0.7]
This is what was shown in the console where the mount command was running:
/sbin/mount.sdfs: line 4: 6367 Killed
/usr/share/sdfs/jre1.7.0/bin/java
-Djava.library.path=/usr/share/sdfs/bin/
-Dorg.apache.commons.logging.Log=fuse.logging.FuseLog
-Dfuse.logging.level=INFO -Xmx12g -Xms2g -server -XX:+UseG1GC
-XX:+UseCompressedOops -classpath
/usr/share/sdfs/lib/jacksum.jar:/usr/share/sdfs/lib/trove-3.0.0a3.jar:/usr/share/sdfs/lib/slf4j-api-1.5.10.jar:/usr/share/sdfs/lib/slf4j-log4j12-1.5.10.jar:/usr/share/sdfs/lib/quartz-1.8.3.jar:/usr/share/sdfs/lib/commons-collections-3.2.1.jar:/usr/share/sdfs/lib/log4j-1.2.15.jar:/usr/share/sdfs/lib/jdbm.jar:/usr/share/sdfs/lib/concurrentlinkedhashmap-lru-1.2.jar:/usr/share/sdfs/lib/bcprov-jdk16-143.jar:~/java_api/sdfs-bin/lib/commons-codec-1.3.jar:/usr/share/sdfs/lib/commons-httpclient-3.1.jar:/usr/share/sdfs/lib/commons-logging-1.1.1.jar:/usr/share/sdfs/lib/commons-codec-1.3.jar:/usr/share/sdfs/lib/java-xmlbuilder-1.jar:/usr/share/sdfs/lib/jets3t-0.7.4.jar:/usr/share/sdfs/lib/commons-cli-1.2.jar:/usr/share/sdfs/lib/simple-4.1.21.jar:/usr/share/sdfs/lib/jdokan.jar:/usr/share/sdfs/lib/commons-io-1.4.jar:/usr/share/sdfs/lib/sdfs.jar
fuse.SDFS.MountSDFS $*
and in kern.log, I can see it was killed by the OOM killer again:
Sep 8 00:52:49 thq-vmstore01 kernel: [221022.054324] Out of memory:
Kill process 6367 (java) score 1000 or sacrifice child
Sep 8 00:52:49 thq-vmstore01 kernel: [221022.054473] Killed process
6367 (java) total-vm:20096772kB, anon-rss:9707900kB, file-rss:0kB
The first attempt to remount resulted in:
root@thq-vmstore01:/usr/share/sdfs# mount.sdfs -v backup2 -m /store/backups
Running SDFS Version 1.0.9
reading config file = /etc/sdfs/backup2-volume-cfg.xml
Loading #######################################################################################################################################################
Running Consistancy Check on DSE, this may take a while
Scanning DSE Finished
Succesfully Ran Consistance Check for [0] records, recovered [0]
09:36:56.211 main INFO [fuse.FuseMount]: Mounted filesystem
fuse: bad mount point `/store/backups': Transport endpoint is not connected
09:36:56.235 main INFO [fuse.FuseMount]: Filesystem is unmounted
The second attempt:
root@thq-vmstore01:/usr/share/sdfs# mount.sdfs -v backup2 -m /store/backups
Running SDFS Version 1.0.9
reading config file = /etc/sdfs/backup2-volume-cfg.xml
Loading #######################################################################################################################################################
09:50:15.522 main INFO [fuse.FuseMount]: Mounted filesystem
fuse: fork: Cannot allocate memory
09:50:15.756 main INFO [fuse.FuseMount]: Filesystem is unmounted
Attached is the XML config file. Let me know if I can do anything
else to help!
Thanks,
On 7 September 2011 18:16, Ian P. Christian <poo...@pookey.co.uk> wrote:
> I stupidly trashed the data after posting, and re-created it again.
--
Blog: http://pookey.co.uk/blog
Follow me on twitter: http://twitter.com/ipchristian
Thanks very much!
However, what does this mean exactly? What are the effects of this change?
Thanks again