FATAL sdfs - Couldn't get I/O for the connection to the host


Blacki

Feb 12, 2013, 5:52:17 PM
to dedupfilesystem-...@googlegroups.com
Hi,

I have installed opendedup 1.2.1 with fuse 2.9 on Ubuntu 12.10. mkfs.sdfs executes without any problems, but mount.sdfs fails with the following error:

@opendedup:/usr/share/sdfs$ sudo ./mount.sdfs -v sdfs-vol01 -m /media/opendedup
Running SDFS Version 1.2.1
reading config file = /etc/sdfs/sdfs-vol01-volume-cfg.xml
Loading Hashes |))))))))))))))))))))))))))))))))))))))))))))))))))| 100% 


java.io.IOException: unable to connect to server
at org.opendedup.sdfs.Config.parserRoutingFile(Config.java:491)
at org.opendedup.sdfs.servers.SDFSService.start(SDFSService.java:45)
at fuse.SDFS.MountSDFS.main(MountSDFS.java:148)
Exiting because java.io.IOException: unable to connect to server

The log file provides the following information:

2013-02-12 22:34:53,783 [main] FATAL sdfs  - Couldn't get I/O for the connection to the host
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at java.net.Socket.<init>(Socket.java:425)
at java.net.Socket.<init>(Socket.java:208)
at org.opendedup.sdfs.network.HashClient.openConnection(HashClient.java:103)
at org.opendedup.sdfs.network.HashClient.<init>(HashClient.java:52)
at org.opendedup.sdfs.network.HashClientPool.makeObject(HashClientPool.java:106)
at org.opendedup.sdfs.network.HashClientPool.populatePool(HashClientPool.java:32)
at org.opendedup.sdfs.network.HashClientPool.<init>(HashClientPool.java:25)
at org.opendedup.sdfs.Config.parserRoutingFile(Config.java:483)
at org.opendedup.sdfs.servers.SDFSService.start(SDFSService.java:43)
at fuse.SDFS.MountSDFS.main(MountSDFS.java:148)
2013-02-12 22:34:53,785 [main] FATAL sdfs  - unable to open connection
java.io.IOException: Couldn't get I/O for the connection to the host 127.0.0.12222
at org.opendedup.sdfs.network.HashClient.openConnection(HashClient.java:134)
at org.opendedup.sdfs.network.HashClient.<init>(HashClient.java:52)
at org.opendedup.sdfs.network.HashClientPool.makeObject(HashClientPool.java:106)
at org.opendedup.sdfs.network.HashClientPool.populatePool(HashClientPool.java:32)
at org.opendedup.sdfs.network.HashClientPool.<init>(HashClientPool.java:25)
at org.opendedup.sdfs.Config.parserRoutingFile(Config.java:483)
at org.opendedup.sdfs.servers.SDFSService.start(SDFSService.java:43)
at fuse.SDFS.MountSDFS.main(MountSDFS.java:148)
2013-02-12 22:34:53,785 [main] ERROR sdfs  - Unable to get object out of pool 
java.io.IOException: unable to open connection
at org.opendedup.sdfs.network.HashClient.<init>(HashClient.java:55)
at org.opendedup.sdfs.network.HashClientPool.makeObject(HashClientPool.java:106)
at org.opendedup.sdfs.network.HashClientPool.populatePool(HashClientPool.java:32)
at org.opendedup.sdfs.network.HashClientPool.<init>(HashClientPool.java:25)
at org.opendedup.sdfs.Config.parserRoutingFile(Config.java:483)
at org.opendedup.sdfs.servers.SDFSService.start(SDFSService.java:43)
at fuse.SDFS.MountSDFS.main(MountSDFS.java:148)
2013-02-12 22:34:53,785 [main] WARN sdfs  - unable to connect to server server1
java.io.IOException: java.io.IOException: unable to open connection
at org.opendedup.sdfs.network.HashClientPool.populatePool(HashClientPool.java:36)
at org.opendedup.sdfs.network.HashClientPool.<init>(HashClientPool.java:25)
at org.opendedup.sdfs.Config.parserRoutingFile(Config.java:483)
at org.opendedup.sdfs.servers.SDFSService.start(SDFSService.java:43)
at fuse.SDFS.MountSDFS.main(MountSDFS.java:148)

SELinux is disabled.
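For what it's worth, the oddly formatted host `127.0.0.12222` in the log is `127.0.0.1` with port `2222` run together. A quick diagnostic sketch (bash-only, using its `/dev/tcp` feature; not part of SDFS) to confirm whether anything is actually listening there:

```shell
# Probe 127.0.0.1:2222, the address the mount is trying to reach.
# /dev/tcp/<host>/<port> is a bash redirection feature, not a real file.
if (exec 3<>/dev/tcp/127.0.0.1/2222) 2>/dev/null; then
  echo "port 2222 open"
else
  echo "port 2222 refused"
fi
```

If this prints "port 2222 refused", no DSE/hash server is accepting connections locally on that port, which matches the ConnectException above.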

Has anyone seen this issue before and could point me in the right direction on how to solve it?

Thanks.

Sam Silverberg

Feb 12, 2013, 6:32:03 PM
to dedupfilesystem-...@googlegroups.com
Can you provide the config for the volume? This looks like it's trying to connect to a remote DSE, but I need the config to validate that.


--
You received this message because you are subscribed to the Google Groups "dedupfilesystem-sdfs-user-discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to dedupfilesystem-sdfs-u...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
 
 

Blacki

Feb 13, 2013, 3:33:07 AM
to dedupfilesystem-...@googlegroups.com
@opendedup:/etc/sdfs$ more sdfs-vol01-volume-cfg.xml 
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<subsystem-config version="1.2.1">
<locations dedup-db-store="/opt/sdfs/volumes/sdfs-vol01/ddb" io-log="/opt/sdfs/volumes/sdfs-vol01/ioperf.log"/>
<io chunk-size="128" claim-hash-schedule="0 59 23 * * ?" dedup-files="true" file-read-cache="5" hash-type="murmur3_128" log-level="1" max-file-inactive="900" max-file-write-buffers="1" max-open-files="1024" meta-file-cache="1024" multi-read-timeout="1000" safe-close="true" safe-sync="false" system-read-cache="1000" write-threads="12"/>
<permissions default-file="0644" default-folder="0755" default-group="0" default-owner="0"/>
<volume capacity="10GB" closed-gracefully="true" current-size="0" maximum-percentage-full="0.95" name="sdfs-vol01" path="/opt/sdfs/volumes/sdfs-vol01/files" perf-mon-file="/opt/sdfs//logs/volume-sdfs-vol01-perf.json" use-dse-capacity="true" use-dse-size="true" use-perf-mon="false"/>
<launch-params class-path="/usr/share/sdfs/lib/snappy-java.jar:/usr/share/sdfs/lib/activation-1.1.jar:/usr/share/sdfs/lib/antlr-2.7.4.jar:/usr/share/sdfs/lib/apache-mime4j-0.6.jar:/usr/share/sdfs/lib/bcprov-jdk16-143.jar:/usr/share/sdfs/lib/chardet-1.0.jar:/usr/share/sdfs/lib/commons-cli-1.2.jar:/usr/share/sdfs/lib/commons-codec-1.3.jar:/usr/share/sdfs/lib/commons-collections-3.2.1.jar:/usr/share/sdfs/lib/commons-digester-1.8.1.jar:/usr/share/sdfs/lib/commons-httpclient-3.1.jar:/usr/share/sdfs/lib/commons-io-1.4.jar:/usr/share/sdfs/lib/commons-lang3-3.1.jar:/usr/share/sdfs/lib/commons-logging-1.0.4.jar:/usr/share/sdfs/lib/commons-logging-1.1.1.jar:/usr/share/sdfs/lib/commons-pool-1.5.5.jar:/usr/share/sdfs/lib/concurrent-1.3.4.jar:/usr/share/sdfs/lib/concurrentlinkedhashmap-lru-1.3.jar:/usr/share/sdfs/lib/cpdetector_1.0.8.jar:/usr/share/sdfs/lib/dom4j-1.6.1.jar:/usr/share/sdfs/lib/httpclient-4.1.1.jar:/usr/share/sdfs/lib/httpcore-4.1.jar:/usr/share/sdfs/lib/httpcore-nio-4.1.jar:/usr/share/sdfs/lib/httpmime-4.0.3.jar:/usr/share/sdfs/lib/jackson-core-asl-1.8.3.jar:/usr/share/sdfs/lib/jackson-jaxrs-1.8.3.jar:/usr/share/sdfs/lib/jackson-mapper-asl-1.8.3.jar:/usr/share/sdfs/lib/jackson-xc-1.8.3.jar:/usr/share/sdfs/lib/jacksum.jar:/usr/share/sdfs/lib/jargs-1.0.jar:/usr/share/sdfs/lib/javax.inject-1.jar:/usr/share/sdfs/lib/java-xmlbuilder-1.jar:/usr/share/sdfs/lib/jaxb-impl-2.2.3-1.jar:/usr/share/sdfs/lib/jaxen-1.1.1.jar:/usr/share/sdfs/lib/jcs-1.3.jar:/usr/share/sdfs/lib/jdbm.jar:/usr/share/sdfs/lib/jdokan.jar:/usr/share/sdfs/lib/jersey-client-1.10-b02.jar:/usr/share/sdfs/lib/jersey-core-1.10-b02.jar:/usr/share/sdfs/lib/jersey-json-1.10-b02.jar:/usr/share/sdfs/lib/jets3t-0.7.4.jar:/usr/share/sdfs/lib/jets3t-0.8.1.jar:/usr/share/sdfs/lib/log4j-1.2.15.jar:/usr/share/sdfs/lib/mail-1.4.jar:/usr/share/sdfs/lib/microsoft-windowsazure-api-0.2.2.jar:/usr/share/sdfs/lib/quartz-1.8.3.jar:/usr/share/sdfs/lib/sdfs.jar:/usr/share/sdfs/lib/simple-4.1.21.jar:/usr/share/sdfs/lib/slf4j-api-1.5.10.jar:/usr/share/sdfs/lib/slf4j-log4j12-1.5.10.jar:/usr/share/sdfs/lib/stax-api-1.0.1.jar:/usr/share/sdfs/lib/trove-3.0.0a3.jar:/usr/share/sdfs/lib/truezip-samples-7.3.2-jar-with-dependencies.jar:/usr/share/sdfs/lib/uuid-3.1.jar" java-options="-Djava.library.path=/usr/share/sdfs/bin/ -Dorg.apache.commons.logging.Log=fuse.logging.FuseLog -Dfuse.logging.level=INFO -server -XX:+UseG1GC -Xmx1228m -Xmn228m" java-path="/usr/share/sdfs/jre1.7.0/bin/java"/>
<sdfscli enable="true" enable-auth="false" listen-address="localhost" password="2f9da03d7b46d8bbf958992e639bcfbc907516222ba646f4c81bc21a4a5c4053" port="6442" salt="e3CRZj"/>
<local-chunkstore allocation-size="10737418240" chunk-gc-schedule="0 0 0/4 * * ?" chunk-store="/opt/sdfs/volumes/sdfs-vol01/chunkstore/chunks" chunk-store-dirty-timeout="1000" chunk-store-read-cache="5" chunkstore-class="org.opendedup.sdfs.filestore.FileChunkStore" compress="true" enabled="false" encrypt="true" encryption-key="ACvemVkuP=Bpk9PV6N" eviction-age="6" gc-class="org.opendedup.sdfs.filestore.gc.PFullGC" hash-db-store="/opt/sdfs/volumes/sdfs-vol01/chunkstore/hdb" hashdb-class="org.opendedup.collections.CSByteArrayLongMap" max-repl-batch-sz="128" pre-allocate="false" read-ahead-pages="1">
<network enable="false" hostname="0.0.0.0" port="2222" upstream-enabled="false" upstream-host="" upstream-host-port="2222" upstream-password="admin" use-ssl="true" use-udp="false"/>
<aws aws-access-key="" aws-bucket-name="" aws-secret-key="" compress="true" enabled="true"/>
</local-chunkstore>
</subsystem-config>
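A side note on the config above, hedged as a reading rather than a confirmed diagnosis: `<local-chunkstore ... enabled="false">` together with `<network enable="false" ... port="2222"/>` suggests the local chunk store is off and the volume expects to reach a DSE (hash server), and port 2222 matches the refused connection in the log, consistent with Sam's suspicion. A sketch of how to spot those two attributes quickly (the sample input reproduces the relevant lines of the posted config):

```shell
# Extract the attributes that decide whether mount.sdfs dials out to a DSE.
# Sample input below is copied from the posted config; adapt the grep patterns
# to the real file (/etc/sdfs/sdfs-vol01-volume-cfg.xml) as needed.
cat > /tmp/sdfs-cfg-snippet.xml <<'EOF'
<local-chunkstore enabled="false">
<network enable="false" hostname="0.0.0.0" port="2222" upstream-enabled="false"/>
</local-chunkstore>
EOF
grep -o 'local-chunkstore enabled="[a-z]*"' /tmp/sdfs-cfg-snippet.xml
grep -o 'port="[0-9]*"' /tmp/sdfs-cfg-snippet.xml
```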

mishra....@gmail.com

Mar 11, 2013, 2:58:56 AM
to dedupfilesystem-...@googlegroups.com
Hi Blacki, Sam Silverberg,

Any update on this issue?
I am also facing the same problem.

[root@localhost ~]# startDSEService.sh /etc/sdfs/hashserver-config.xml
java.lang.NullPointerException
        at org.opendedup.sdfs.notification.SDFSEvent.loadHashDBEvent(SDFSEvent.java:184)
        at org.opendedup.collections.CSByteArrayLongMap.<init>(CSByteArrayLongMap.java:61)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
        at java.lang.Class.newInstance0(Class.java:372)
        at java.lang.Class.newInstance(Class.java:325)
        at org.opendedup.sdfs.filestore.HashStore.connectDB(HashStore.java:169)
        at org.opendedup.sdfs.filestore.HashStore.<init>(HashStore.java:82)
        at org.opendedup.sdfs.servers.HashChunkService.<init>(HashChunkService.java:65)
        at org.opendedup.sdfs.servers.HCServiceProxy.<clinit>(HCServiceProxy.java:51)
        at org.opendedup.sdfs.network.NetworkHCServer.init(NetworkHCServer.java:69)
        at org.opendedup.sdfs.network.NetworkHCServer.main(NetworkHCServer.java:62)
#### Shutting down StorageHub ####
#### Shutting Down Network Service ####
#### Shutting down HashStore ####