new documentation, and an issue with connecting to the right IP address


ben.s.f...@gmail.com

unread,
Aug 12, 2014, 3:24:18 PM8/12/14
to lum...@googlegroups.com
Hey all,

Just wanted to give a shout-out for the new documentation on the GitHub page: it's fantastic! Thanks so much for adding more detail, better instructions, etc.

I followed the instructions closely and am now trying to run Lumify on my virtual machine (through a bridged network adapter, of course). I'm stuck at the step that initializes Accumulo, though. When I run

sudo -u accumulo /usr/lib/accumulo/bin/accumulo init --instance-name lumify --password password

I get a few exceptions. First there's a warning that the address changed from 192.168.1.219:8020 to 127.0.0.1:8020 (the loopback address you'd expect to see if you weren't using a bridged adapter in the virtual machine). Then, after I removed the localhost entry from my /etc/hosts file and left only 192.168.1.219 as my IP address, I still got a connection exception. That's all I know at this point, but any help would be much appreciated.

Thanks so much!

Ben Friedman

justin...@gmail.com

unread,
Aug 13, 2014, 9:24:10 AM8/13/14
to lum...@googlegroups.com
Try adding this to your accumulo-site.xml:

  <property>
    <name>instance.dfs.uri</name>
    <value>hdfs://192.168.1.219:8020</value>
  </property>
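
You can also sanity-check that HDFS is actually reachable on that address before re-running the Accumulo init (assuming the hadoop client is on your PATH):

    hadoop fs -ls hdfs://192.168.1.219:8020/
    # if this also fails with "Connection refused", the problem is the namenode itself, not Accumulo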

ben.s.f...@gmail.com

unread,
Aug 13, 2014, 12:17:39 PM8/13/14
to lum...@googlegroups.com
Thanks for the reply, Justin. Unfortunately that property is already there, and the problem persists. Should I try rebooting my VirtualBox VM?

justin...@gmail.com

unread,
Aug 14, 2014, 10:16:08 AM8/14/14
to lum...@googlegroups.com
Can you provide the logs? Maybe it's not the error I'm thinking of.

ben.s.f...@gmail.com

unread,
Aug 14, 2014, 2:29:03 PM8/14/14
to lum...@googlegroups.com
[root@localhost Jovianite]# sudo -u accumulo /usr/lib/accumulo/bin/accumulo init --instance-name lumify --password password
2014-08-14 14:26:34,881 [util.Initialize] INFO : Hadoop Filesystem is hdfs://192.168.1.219:8020
2014-08-14 14:26:34,884 [util.Initialize] INFO : Accumulo data dir is /accumulo
2014-08-14 14:26:34,884 [util.Initialize] INFO : Zookeeper server is localhost:2181
2014-08-14 14:26:34,885 [util.Initialize] INFO : Checking if Zookeeper is available. If this hangs, then you need to make sure zookeeper is running


Warning!!! Your instance secret is still set to the default, this is not secure. We highly recommend you change it.


You can change the instance secret in accumulo by using:
   bin/accumulo org.apache.accumulo.server.util.ChangeSecret oldPassword newPassword.
You will also need to edit your secret in your configuration file by adding the property instance.secret to your conf/accumulo-site.xml. Without this accumulo will not operate correctly
2014-08-14 14:26:35,632 [ipc.Client] WARN : Address change detected. Old: localhost/192.168.1.219:8020 New: localhost/127.0.0.1:8020
2014-08-14 14:26:35,640 [util.Initialize] FATAL: java.io.IOException: Failed to check if filesystem already initialized
java.io.IOException: Failed to check if filesystem already initialized
    at org.apache.accumulo.server.util.Initialize.checkInit(Initialize.java:178)
    at org.apache.accumulo.server.util.Initialize.doInit(Initialize.java:185)
    at org.apache.accumulo.server.util.Initialize.main(Initialize.java:545)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.accumulo.start.Main$1.run(Main.java:103)
    at java.lang.Thread.run(Thread.java:662)
Caused by: java.net.ConnectException: Call From localhost/127.0.0.1 to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:782)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:729)
    at org.apache.hadoop.ipc.Client.call(Client.java:1242)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
    at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
    at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:629)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1545)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:820)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1380)
    at org.apache.accumulo.server.util.Initialize.isInitialized(Initialize.java:512)
    at org.apache.accumulo.server.util.Initialize.checkInit(Initialize.java:163)
    ... 8 more
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:207)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:528)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:492)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:510)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:604)
    at org.apache.hadoop.ipc.Client$Connection.access$2100(Client.java:252)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1291)
    at org.apache.hadoop.ipc.Client.call(Client.java:1209)
    ... 23 more
Thread "init" died java.lang.reflect.InvocationTargetException
java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.accumulo.start.Main$1.run(Main.java:103)
    at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.RuntimeException: java.io.IOException: Failed to check if filesystem already initialized
    at org.apache.accumulo.server.util.Initialize.main(Initialize.java:549)
    ... 6 more
Caused by: java.io.IOException: Failed to check if filesystem already initialized
    at org.apache.accumulo.server.util.Initialize.checkInit(Initialize.java:178)
    at org.apache.accumulo.server.util.Initialize.doInit(Initialize.java:185)
    at org.apache.accumulo.server.util.Initialize.main(Initialize.java:545)
    ... 6 more
Caused by: java.net.ConnectException: Call From localhost/127.0.0.1 to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:782)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:729)
    at org.apache.hadoop.ipc.Client.call(Client.java:1242)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
    at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
    at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:629)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1545)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:820)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1380)
    at org.apache.accumulo.server.util.Initialize.isInitialized(Initialize.java:512)
    at org.apache.accumulo.server.util.Initialize.checkInit(Initialize.java:163)
    ... 8 more
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:207)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:528)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:492)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:510)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:604)
    at org.apache.hadoop.ipc.Client$Connection.access$2100(Client.java:252)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1291)
    at org.apache.hadoop.ipc.Client.call(Client.java:1209)
    ... 23 more

Jeff Kunkle

unread,
Aug 18, 2014, 1:53:34 PM8/18/14
to lum...@googlegroups.com

ben.s.f...@gmail.com

unread,
Aug 19, 2014, 11:16:44 AM8/19/14
to lum...@googlegroups.com
That's the problem: I can't seem to initialize it. I thought the command

sudo -u accumulo /usr/lib/accumulo/bin/accumulo init --instance-name lumify --password password

would initialize Accumulo, but instead I get a warning that an address change was detected and then a fatal exception.

Is there a missing step in the instructions that kept me from properly initializing Accumulo? I noticed that $ACCUMULO_HOME isn't defined in accumulo-env.sh, and there was no instruction in the new installation guide to set it. I also noticed that the masters and slaves files each contain a single line that says "localhost", with nothing pointing at my IP address. I changed "localhost" to my IP address in both files, and the same issue persisted.

I'm fairly sure it's an IP address issue, even though I followed the instructions closely. Could it be a problem with running in a virtual machine (even though I'm using the bridged adapter and my VirtualBox VM got a real IP address)?
Thanks,
Ben

Jeff Kunkle

unread,
Aug 19, 2014, 1:41:39 PM8/19/14
to lum...@googlegroups.com
I've had problems in the past with VirtualBox when switching networks. The only reliable remedy I found was to restart the VM after switching networks.

David Singley

unread,
Aug 19, 2014, 1:49:42 PM8/19/14
to lum...@googlegroups.com
If you used the CentOS 6.4 setup instructions (https://github.com/lumifyio/lumify/blob/master/docs/setup-centos-6.4.md), they configured Accumulo and the other services to use the IP address that the eth0 network interface had at that time.** If your VM is now on a different network and has a different IP address, you will likely need to update those configuration values.

**This has proven to be the most reliable way to ensure that everything is bound to the correct interface and to allow development outside the VM to communicate with services inside the VM.
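
If you're not sure which files still reference the old address, something like the following should turn them up (the config locations below are assumptions based on the default install paths from that guide, so point it at wherever your accumulo-site.xml and core-site.xml actually live):

    # list every config line that still mentions a 192.168.x.x address
    grep -rn '192\.168\.' /usr/lib/accumulo/conf /usr/lib/hadoop/etc/hadoop 2>/dev/null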

ben.s.f...@gmail.com

unread,
Aug 20, 2014, 12:24:19 PM8/20/14
to lum...@googlegroups.com
Thanks guys, but I checked my eth0 interface with "ifconfig" and it still shows 192.168.1.232. I still get the messages "Hadoop Filesystem is hdfs://192.168.1.232:8020", "Accumulo data dir is /accumulo", and "Zookeeper server is localhost:2181", followed by the warning "Address change detected. Old: localhost.localdomain/192.168.1.232:8020 New: localhost.localdomain/127.0.0.1:8020".

Then comes the exception, "Failed to check if filesystem already initialized".

So even though I haven't changed networks and my VM has the same IP address it had when I ran the setup, I still hit this brick wall in the middle of initializing Accumulo.

Any other suggestions? The stack trace continues with "ConnectException: Call From localhost.localdomain/127.0.0.1 to localhost.localdomain:8020 failed on connection exception: Connection refused".
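
Happy to run any other diagnostics, for example checking what the hostname resolves to and whether anything is actually listening on port 8020:

    getent hosts localhost.localdomain
    # should print 192.168.1.232, not 127.0.0.1
    netstat -tlnp | grep 8020
    # shows whether the namenode is listening, and on which address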

Best,

Ben

mohit kaushik

unread,
Aug 25, 2014, 6:19:36 AM8/25/14
to lum...@googlegroups.com
I have faced the same exception. Check your hosts file, make it look like this, and reinstall Hadoop:

127.0.0.1          localhost
192.168.0.121      (your host name)
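
After changing the hosts file you can verify that the hostname resolves to the real interface and not to loopback, for example:

    getent hosts $(hostname)
    # should print your 192.168.x.x address; if it prints 127.0.0.1, Hadoop will bind to loopback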

ben.s.f...@gmail.com

unread,
Aug 25, 2014, 12:25:59 PM8/25/14
to lum...@googlegroups.com
Thanks Mohit, my /etc/hosts file looks like this:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.232 localhost.localdomain

Is this correct? When I type "hostname" into the terminal, I get "localhost.localdomain". That IP address is in fact my virtual box's IP.

So now I need to uninstall Hadoop and re-install it?

mohit kaushik

unread,
Aug 26, 2014, 12:09:22 AM8/26/14
to lum...@googlegroups.com
The second line is for IPv6, so remove it. Your hosts file should then be exactly:

127.0.0.1          localhost
192.168.1.232      localhost.localdomain

Then uninstall Hadoop completely, including the temp dirs, and reinstall it from scratch. I am quite sure it will solve your problem.
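
A rough sketch of what I mean by a clean reinstall (the exact paths depend on how you installed Hadoop, so treat these as placeholders):

    # stop HDFS first, then wipe the old namenode/datanode state
    rm -rf /tmp/hadoop-*    # or whatever hadoop.tmp.dir / dfs.name.dir point to in your configs
    hdfs namenode -format   # reformat the namenode after reinstalling
    # then restart HDFS and re-run the accumulo init command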

ben.s.f...@gmail.com

unread,
Aug 26, 2014, 1:10:17 PM8/26/14
to lum...@googlegroups.com
It worked! THANK YOU!!!!!

ben.s.f...@gmail.com

unread,
Aug 26, 2014, 4:55:14 PM8/26/14
to lum...@googlegroups.com
Now I'm getting stuck on Bower errors. I see you have been through this already, Mohit. I followed the instructions for fixing the bower.json file, but now I hit a failure in the "ecev:buildCytoscape" task. It says "cannot open file 'src/namespace.js' for reading". Any tips on this little sticking point? Basically, the build with

mvn package -P web-war -pl web/war -am

doesn't work.

mohit kaushik

unread,
Aug 29, 2014, 12:49:41 AM8/29/14
to lum...@googlegroups.com

Hi Ben,

I think you may be using old code; please download the latest. About this error: some of the packages are problematic when they install, so you can install them individually from the webapp dir:

bower install cytoscape#(version as given in bower.json)
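
In case it helps, that command has to be run from the directory that contains bower.json (the exact path below is a guess, so use wherever yours lives):

    cd web/war/src/main/webapp                        # hypothetical path; use the directory with bower.json
    bower install "cytoscape#<version from bower.json>"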

But the better approach is to download the latest code, so you don't have to do anything; everything is fixed there.

Good Luck

drajesh...@gmail.com

unread,
Jun 21, 2017, 8:36:49 AM6/21/17
to Lumify
I did the same steps, but it is showing the wrong IP.

[hduser@namnod1 logs]$ cat /etc/hosts
127.0.0.1       localhost
192.168.0.145   namnod1
192.168.0.146   namnod2
192.168.0.147   namnod3


17/06/21 18:02:10 WARN ipc.Client: Address change detected. Old: raccluster/158.69.143.107:8020 New: raccluster/158.69.145.48:8020
17/06/21 18:02:10 INFO ipc.Client: Retrying connect to server: raccluster/158.69.145.48:8020. Already tried 0 time(s); maxRetries=45
17/06/21 18:02:30 INFO ipc.Client: Retrying connect to server: raccluster/158.69.145.48:8020. Already tried 1 time(s); maxRetries=45
17/06/21 18:02:50 INFO ipc.Client: Retrying connect to server: raccluster/158.69.145.48:8020. Already tried 2 time(s); maxRetries=45
17/06/21 18:03:10 INFO ipc.Client: Retrying connect to server: raccluster/158.69.145.48:8020. Already tried 3 time(s); maxRetries=45
17/06/21 18:03:30 INFO ipc.Client: Retrying connect to server: raccluster/158.69.145.48:8020. Already tried 4 time(s); maxRetries=45
17/06/21 18:03:50 INFO ipc.Client: Retrying connect to server: raccluster/158.69.145.48:8020. Already tried 5 time(s); maxRetries=45
17/06/21 18:04:10 WARN ipc.Client: Address change detected. Old: raccluster/158.69.145.48:8020 New: raccluster/149.202.120.39:8020
17/06/21 18:04:10 INFO ipc.Client: Retrying connect to server: raccluster/149.202.120.39:8020. Already tried 0 time(s); maxRetries=45
17/06/21 18:04:30 INFO ipc.Client: Retrying connect to server: raccluster/149.202.120.39:8020. Already tried 1 time(s); maxRetries=45
17/06/21 18:04:50 WARN ipc.Client: Address change detected. Old: raccluster/149.202.120.39:8020 New: raccluster/158.69.143.107:8020
17/06/21 18:04:50 INFO ipc.Client: Retrying connect to server: raccluster/158.69.143.107:8020. Already tried 0 time(s); maxRetries=45
17/06/21 18:05:10 INFO ipc.Client: Retrying connect to server: raccluster/158.69.143.107:8020. Already tried 1 time(s); maxRetries=45

Thanks & Regards,
Rajesh