Appliance 1.1.5 & sdfs 1.1.8


cnu cnu

Aug 28, 2012, 2:14:16 PM
to dedupfilesystem-...@googlegroups.com
Hello, I deployed the Opendedup appliance version 1.1.5 and upgraded SDFS to version 1.1.8.

I configured a volume with the web interface and connected it to our ESX test farm over NFS. (I will try iSCSI later; thanks for the feature.)

Now I have some problems:

1.) I deployed two Linux machines on the SDFS volume, but something strange is happening with the space usage.
Both machines were installed with the same parameters (packages, filesystems and so on), yet the first one shows 4.3 GB used and the second one only 1.3 GB, checked with VMware vSphere.


So I checked it directly on the SDFS appliance:

root@sdfsnas:/media/cfsdfs01_vol01/nfs# du -sh *
4.3G    cfvirt01_LNX-Standard
1.3G    cfvirt02_LNX-Standard
root@sdfsnas:/media/cfsdfs01_vol01/nfs#


Both machines should use 4.3 GB.
I tried the same thing on a normal Linux share, and there both show 4.3 GB.
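
In case it helps to narrow this down, my next idea is to check whether the smaller VM is simply stored as sparse files on the SDFS mount (this assumes GNU du/ls are available on the appliance; the commands are just a sketch):

# allocated blocks vs. logical file size; a large gap points to sparse/thin files
du -sh cfvirt01_LNX-Standard cfvirt02_LNX-Standard
du -sh --apparent-size cfvirt01_LNX-Standard cfvirt02_LNX-Standard
# per-file view: the first column is the allocated size, the size column the logical size
ls -lsh cfvirt02_LNX-Standard/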

2.) Next I went to the /opt/sdfs/volumes/ directory and checked the used space.

root@sdfsnas:/opt/sdfs/volumes/cfsdfs01_vol01# du -sh *
11G     chunkstore
113M    ddb
88K     files
root@sdfsnas:/opt/sdfs/volumes/cfsdfs01_vol01#


SDFS needs 11 GB for the same two machines? I think something is wrong there.
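
I am not sure whether SDFS preallocates the chunk store, so before assuming space is wasted I would compare allocated and logical size there as well (again a rough sketch, assuming GNU coreutils on the appliance):

# how much of the 11G is really written versus allocated-but-sparse
du -sh chunkstore
du -sh --apparent-size chunkstore
# allocated vs. logical size of the individual chunk files
ls -lsh chunkstore/chunks | head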

3.) When I try to move both machines to another datastore, after some time I get:
Volume Log:
2012-08-28 19:46:20,562 [Thread-127] INFO sdfs  - trying to read again
2012-08-28 19:46:39,385 [Thread-12] INFO sdfs  - trying to read again
2012-08-28 19:51:12,296 [Thread-61] INFO sdfs  - trying to read again
2012-08-28 19:51:12,336 [Thread-61] INFO sdfs  - trying to read again
2012-08-28 19:51:14,767 [Thread-125] INFO sdfs  - trying to read again
2012-08-28 19:52:24,191 [Thread-63] INFO sdfs  - trying to read again



SDFS LOG:
2012-08-28 15:49:24,493 [main] INFO play  - Listening for HTTPS on port 443 ...
2012-08-28 19:55:51,830 [New I/O  worker #7] ERROR play  -
java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:218)
        at sun.nio.ch.IOUtil.read(IOUtil.java:186)
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
        at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:66)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:372)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:246)
        at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:38)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:722)
2012-08-28 19:56:31,707 [New I/O  worker #7] ERROR play  -
java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:218)
        at sun.nio.ch.IOUtil.read(IOUtil.java:186)
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
        at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:66)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:372)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:246)
        at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:38)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:722)
2012-08-28 19:57:01,779 [New I/O  worker #6] ERROR play  -


and so on ....

Volume Configuration:
Mount Point :  /etc/sdfs/cfsdfs01_vol01-volume-cfg.xml
Volume Version :  1.1.8
Meta File Path :  /opt/sdfs/volumes/cfsdfs01_vol01/files
File Map Store :  /opt/sdfs/volumes/cfsdfs01_vol01/ddb
Dedupe Hash DB Store :  /opt/sdfs/volumes/cfsdfs01_vol01/chunkstore/hdb
Unique Data Store :  /opt/sdfs/volumes/cfsdfs01_vol01/chunkstore/chunks
Dedupe Block Size :  4096
Volume Used :  257698621440
Volume Current Size :  257698621440
Volume Capacity :  536870912000
File Safe Sync :  false
File Safe Close :  false
Bytes Read :  35868135698
Unique Bytes Written :  5923753984
Duplicate Bytes Written :  12234948608
Dedupe Store Size :  107374182400
AWS (S3) Storage Target :  false
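
If I read those counters right, the numbers relevant to question 2 work out roughly to:

Unique Bytes Written     5923753984  ≈ 5.5 GiB
Duplicate Bytes Written 12234948608  ≈ 11.4 GiB
Dedupe ratio = (unique + duplicate) / unique ≈ 18158702592 / 5923753984 ≈ 3.1x

So only about 5.5 GiB of unique data should actually be stored, which makes the 11 GB chunkstore directory look even stranger to me (unless it is preallocated, see question 2).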




I hope someone can help me.
br, cnu




cnu80

Aug 28, 2012, 4:26:17 PM
to dedupfilesystem-...@googlegroups.com
Hello, I tried again to clone a machine. When I use Unix commands like cp, it works. But when I clone a machine directly on an ESX host with vmkfstools, SDFS logs "2012-08-28 22:08:44,245 [Thread-157] INFO sdfs  - trying to read again" errors.

With vmkfstools you can clone VMDK files faster, and I think the vSphere Client uses the same mechanism. Does anyone know what vmkfstools does under the hood?
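
For reference, the clone I am running on the ESX host looks roughly like this (datastore and target names are only examples from my setup):

# thin clone of a VMDK directly on the ESX host; paths/names are placeholders
vmkfstools -i /vmfs/volumes/<sdfs-datastore>/cfvirt01_LNX-Standard/cfvirt01_LNX-Standard.vmdk \
              /vmfs/volumes/<sdfs-datastore>/cfvirt03_LNX-Standard/cfvirt03_LNX-Standard.vmdk -d thin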

thanks, br cnu

cnu80

Aug 29, 2012, 8:13:31 AM
to dedupfilesystem-...@googlegroups.com
Hello, I think I found the problem!

For my Opendedup tests I used ESX version 5.0.xxxx. ESX 5 has new features for hardware-accelerated I/O (VAAI).

But these features do not work with the Linux NFS or iSCSI daemons. I found the following link:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1033665

The problem is the SCSI command WRITE_SAME, discussed in this thread: http://www.spinics.net/lists/target-devel/msg00993.html
Linux SCSI/block does not support this SCSI command.

This is also the reason why the copy command works directly on the appliance while VMware vSphere has the problems.
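
If I read the KB article right, the hardware-accelerated data mover can also be switched off per host to confirm the diagnosis (ESXi 5.0 advanced settings; this is only my reading of the KB, not something I have verified yet):

# disable the VAAI data-mover primitives on the ESXi host (see KB 1033665)
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0
# check the current values
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove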

So far I have only tested this with iSCSI; I will run the test with NFS again.

br, cnu