HBase on Alluxio not working


kiss.k...@gmail.com

Jun 14, 2016, 10:41:58 PM
to Alluxio Users

hi,all

I am trying to run hbase-0.98.16.1-hadoop2 on Alluxio 1.0.1, but I get an error: the HBase master cannot start!

2016-06-15 10:30:11,294 INFO  [master:master1:60000] logger.type: getFileStatus(alluxio://master1:19998/hbase/hbase.id): HDFS Path: hdfs://master1:9000/hbase/hbase.id Alluxio Path: alluxio://master1:19998/hbase/hbase.id
2016-06-15 10:30:11,295 INFO  [master:master1:60000] logger.type: getFileStatus(alluxio://master1:19998/hbase/hbase.id): HDFS Path: hdfs://master1:9000/hbase/hbase.id Alluxio Path: alluxio://master1:19998/hbase/hbase.id
2016-06-15 10:30:11,295 INFO  [master:master1:60000] logger.type: getFileStatus(alluxio://master1:19998/hbase/hbase.id): HDFS Path: hdfs://master1:9000/hbase/hbase.id Alluxio Path: alluxio://master1:19998/hbase/hbase.id
2016-06-15 10:30:11,296 INFO  [master:master1:60000] logger.type: open(alluxio://master1:19998/hbase/hbase.id, 131072)
2016-06-15 10:30:11,300 INFO  [master:master1:60000] logger.type: Connecting to remote worker @ slave1/10.1.3.119:29998
2016-06-15 10:30:11,304 INFO  [master:master1:60000] logger.type: Connecting to remote worker @ slave2/10.1.3.102:29998
2016-06-15 10:30:11,306 INFO  [master:master1:60000] logger.type: Connected to remote machine slave1/10.1.3.119:29999
2016-06-15 10:30:11,314 INFO  [master:master1:60000] logger.type: Data 117440512 from remote machine slave1/10.1.3.119:29999 received
2016-06-15 10:30:11,315 INFO  [master:master1:60000] logger.type: Connecting to remote worker @ slave1/10.1.3.119:29998
2016-06-15 10:30:11,319 INFO  [master:master1:60000] logger.type: Connected to remote machine slave2/10.1.3.102:29999
2016-06-15 10:30:11,338 INFO  [master:master1:60000] logger.type: status: SUCCESS from remote machine slave2/10.1.3.102:29999 received
2016-06-15 10:30:11,341 INFO  [master:master1:60000] logger.type: Connecting to remote worker @ slave2/10.1.3.102:29998
2016-06-15 10:30:11,386 INFO  [master:master1:60000] logger.type: getFileStatus(alluxio://master1:19998/hbase/data/hbase/meta/1588230740): HDFS Path: hdfs://master1:9000/hbase/data/hbase/meta/1588230740 Alluxio Path: alluxio://master1:19998/hbase/data/hbase/meta/1588230740
2016-06-15 10:30:11,394 INFO  [master:master1:60000] logger.type: listStatus(alluxio://master1:19998/hbase/data/hbase/meta/.tabledesc): HDFS Path: hdfs://master1:9000/hbase/data/hbase/meta/.tabledesc
2016-06-15 10:30:11,459 INFO  [master:master1:60000] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2016-06-15 10:30:11,672 INFO  [master:master1:60000] logger.type: listStatus(alluxio://master1:19998/hbase/data/hbase/meta/.tabledesc): HDFS Path: hdfs://master1:9000/hbase/data/hbase/meta/.tabledesc
2016-06-15 10:30:11,674 DEBUG [master:master1:60000] util.FSTableDescriptors: Current tableInfoPath = alluxio://master1:19998/hbase/data/hbase/meta/.tabledesc/.tableinfo.0000000002
2016-06-15 10:30:11,675 INFO  [master:master1:60000] logger.type: getFileStatus(alluxio://master1:19998/hbase/data/hbase/meta/.tabledesc/.tableinfo.0000000002): HDFS Path: hdfs://master1:9000/hbase/data/hbase/meta/.tabledesc/.tableinfo.0000000002 Alluxio Path: alluxio://master1:19998/hbase/data/hbase/meta/.tabledesc/.tableinfo.0000000002
2016-06-15 10:30:11,683 INFO  [master:master1:60000] logger.type: open(alluxio://master1:19998/hbase/data/hbase/meta/.tabledesc/.tableinfo.0000000002, 131072)
2016-06-15 10:30:11,700 INFO  [master:master1:60000] logger.type: Connecting to remote worker @ slave1/10.1.3.119:29998
2016-06-15 10:30:11,739 INFO  [master:master1:60000] logger.type: Connected to remote machine slave1/10.1.3.119:29999
2016-06-15 10:30:11,747 INFO  [master:master1:60000] logger.type: Data 167772160 from remote machine slave1/10.1.3.119:29999 received
2016-06-15 10:30:11,757 INFO  [master:master1:60000] logger.type: Connecting to remote worker @ slave1/10.1.3.119:29998
2016-06-15 10:30:11,779 DEBUG [master:master1:60000] util.FSTableDescriptors: TableInfo already exists.. Skipping creation
2016-06-15 10:30:11,779 INFO  [master:master1:60000] logger.type: getFileStatus(alluxio://master1:19998/hbase/.tmp): HDFS Path: hdfs://master1:9000/hbase/.tmp Alluxio Path: alluxio://master1:19998/hbase/.tmp
2016-06-15 10:30:11,784 INFO  [master:master1:60000] logger.type: listStatus(alluxio://master1:19998/hbase/.tmp/data): HDFS Path: hdfs://master1:9000/hbase/.tmp/data
2016-06-15 10:30:11,811 FATAL [master:master1:60000] master.HMaster: Unhandled exception. Starting shutdown.
java.io.IOException: alluxio.exception.InvalidPathException: Path /hbase/.tmp/data does not exist
at alluxio.hadoop.AbstractFileSystem.listStatus(AbstractFileSystem.java:467)
at alluxio.hadoop.FileSystem.listStatus(FileSystem.java:25)
at org.apache.hadoop.fs.Globber.listStatus(Globber.java:69)
at org.apache.hadoop.fs.Globber.glob(Globber.java:217)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1655)
at org.apache.hadoop.hbase.util.FSUtils.getTableDirs(FSUtils.java:1309)
at org.apache.hadoop.hbase.master.MasterFileSystem.checkTempDir(MasterFileSystem.java:532)
at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:157)
at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:130)
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:881)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:684)
at java.lang.Thread.run(Thread.java:745)
Caused by: alluxio.exception.InvalidPathException: Path /hbase/.tmp/data does not exist
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at alluxio.exception.AlluxioException.from(AlluxioException.java:72)
at alluxio.AbstractClient.retryRPC(AbstractClient.java:324)
at alluxio.client.file.FileSystemMasterClient.listStatus(FileSystemMasterClient.java:271)
at alluxio.client.file.BaseFileSystem.listStatus(BaseFileSystem.java:188)
at alluxio.client.file.BaseFileSystem.listStatus(BaseFileSystem.java:179)
at alluxio.hadoop.AbstractFileSystem.listStatus(AbstractFileSystem.java:465)
... 11 more
2016-06-15 10:30:11,813 INFO  [master:master1:60000] master.HMaster: Aborting
2016-06-15 10:30:11,813 DEBUG [master:master1:60000] master.HMaster: Stopping service threads
2016-06-15 10:30:11,813 INFO  [master:master1:60000] ipc.RpcServer: Stopping server on 60000


It is true that the hbase folder on HDFS contains nothing, because this is the first time running HBase. So what should I do next?

Haoyuan Li

Jun 15, 2016, 1:17:25 AM
to kiss.k...@gmail.com, Alluxio Users
Kevin,

Would you be able to run Alluxio 1.1.0?

Haoyuan


kiss.k...@gmail.com

Jun 15, 2016, 2:53:19 AM
to Alluxio Users, kiss.k...@gmail.com
When I try Alluxio 1.1.0, the workers can't start!

2016-06-15 14:43:30,760    INFO MASTER  Connecting to slave3 as root...
Pseudo-terminal will not be allocated because stdin is not a terminal.
Pseudo-terminal will not be allocated because stdin is not a terminal.
Permission denied, please try again.
which: no java in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)
dirname: missing operand
Try 'dirname --help' for more information.
Formatting Alluxio Worker @ slave3
/home/dcos/alluxio-1.1.0/bin/alluxio: line 198: /../bin/java: No such file or directory
Permission denied, please try again.
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Connection closed by 10.1.3.119
2016-06-15 14:46:02,883    INFO WORKERS  Connecting to slave1 as root...
2016-06-15 14:46:02,889    INFO WORKERS  Connecting to slave2 as root...
Pseudo-terminal will not be allocated because stdin is not a terminal.
Pseudo-terminal will not be allocated because stdin is not a terminal.
2016-06-15 14:46:02,898    INFO WORKERS  Connecting to slave3 as root...
Pseudo-terminal will not be allocated because stdin is not a terminal.
Permission denied, please try again.
which: no java in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)
dirname: missing operand
Try 'dirname --help' for more information.
Killed 0 processes on slave3
Permission denied, please try again.
which: no java in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)
dirname: missing operand
Try 'dirname --help' for more information.
Killed 0 processes on slave2
Permission denied, please try again.
which: no java in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)
dirname: missing operand
Try 'dirname --help' for more information.
Killed 0 processes on slave1
2016-06-15 14:46:43,499    INFO MASTER  Connecting to slave1 as root...
Pseudo-terminal will not be allocated because stdin is not a terminal.
2016-06-15 14:46:43,514    INFO MASTER  Connecting to slave2 as root...
Pseudo-terminal will not be allocated because stdin is not a terminal.
2016-06-15 14:46:43,536    INFO MASTER  Connecting to slave3 as root...
Pseudo-terminal will not be allocated because stdin is not a terminal.
which: no java in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)
dirname: missing operand
Try 'dirname --help' for more information.
sudo: sorry, you must have a tty to run sudo
Mount failed, not starting worker
Permission denied, please try again.
which: no java in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)
dirname: missing operand
Try 'dirname --help' for more information.
sudo: sorry, you must have a tty to run sudo
Mount failed, not starting worker
Permission denied, please try again.
which: no java in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)
dirname: missing operand
Try 'dirname --help' for more information.
sudo: sorry, you must have a tty to run sudo
Mount failed, not starting worker

I have exported JAVA_HOME=/usr/lib/jdk in alluxio-env.sh, and the JDK version is 1.8.
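The "which: no java" lines suggest the remote, non-interactive SSH sessions never see that variable. A sketch of what may be needed (assuming the install layout from the logs above): conf/alluxio-env.sh must exist with JAVA_HOME set on every node, because non-interactive SSH sessions do not source the usual shell profiles.

```shell
# conf/alluxio-env.sh -- must be present on every node (master and all
# slaves); cluster scripts launched over non-interactive SSH do not
# source ~/.bash_profile, so JAVA_HOME has to be set here.
export JAVA_HOME=/usr/lib/jdk
```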


On Wednesday, June 15, 2016 at 1:17:25 PM UTC+8, Haoyuan Li wrote:

kiss.k...@gmail.com

Jun 15, 2016, 4:53:49 AM
to Alluxio Users, kiss.k...@gmail.com
I have tried Alluxio 1.1.0 and ran HBase on HDFS successfully. Then I changed hbase-site.xml:

<property>
    <name>fs.alluxio.impl</name>
    <value>alluxio.hadoop.FileSystem</value>
</property>

<property>
    <name>hbase.rootdir</name>
    <value>alluxio://master1:19998/hbase</value>
</property>

But I still get the same error.


On Wednesday, June 15, 2016 at 1:17:25 PM UTC+8, Haoyuan Li wrote:

kiss.k...@gmail.com

Jun 15, 2016, 5:23:10 AM
to Alluxio Users, kiss.k...@gmail.com
After I created the folder on HDFS manually, I could start the HMaster, but the HRegionServer got an error:

java.io.IOException: cannot get log writer
at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createWriter(HLogFactory.java:212)
at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createWALWriter(HLogFactory.java:192)
at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:630)
at org.apache.hadoop.hbase.regionserver.wal.FSHLog.rollWriter(FSHLog.java:556)
at org.apache.hadoop.hbase.regionserver.wal.FSHLog.rollWriter(FSHLog.java:513)
at org.apache.hadoop.hbase.regionserver.wal.FSHLog.<init>(FSHLog.java:423)
at org.apache.hadoop.hbase.regionserver.wal.FSHLog.<init>(FSHLog.java:339)
at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createHLog(HLogFactory.java:58)
at org.apache.hadoop.hbase.regionserver.HRegionServer.instantiateHLog(HRegionServer.java:1634)
at org.apache.hadoop.hbase.regionserver.HRegionServer.setupWALAndReplication(HRegionServer.java:1613)
at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1357)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:904)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: No available Alluxio worker found
at alluxio.client.block.AlluxioBlockStore.getOutStream(AlluxioBlockStore.java:167)
at alluxio.client.file.FileOutStream.getNextBlock(FileOutStream.java:315)
at alluxio.client.file.FileOutStream.write(FileOutStream.java:280)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.init(ProtobufLogWriter.java:87)
at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createWriter(HLogFactory.java:209)
... 12 more


But I can see the Alluxio workers on the web UI.

Gene Pang

Jun 15, 2016, 9:04:29 AM
to Alluxio Users, kiss.k...@gmail.com
The Alluxio error says "Caused by: java.lang.RuntimeException: No available Alluxio worker found". Could you check the Alluxio worker logs to see why it is not starting?

Thanks,
Gene

kiss.k...@gmail.com

Jun 16, 2016, 3:17:35 AM
to Alluxio Users, kiss.k...@gmail.com
I ran the Alluxio tests and some tests failed. I found it was because I had started HDFS as a normal user but had to start Alluxio as root; after I changed the permissions, all tests passed.
Then I started HBase on Alluxio. The first time everything succeeded, but I could not create a table; the error is:

java.io.IOException: alluxio.exception.FileDoesNotExistException: Path /hbase/.tmp/data does not exist

Then I created the folder /hbase/.tmp/data on HDFS, and changed alluxio.user.file.writetype.default=CACHE_THROUGH.

Now restarting HBase gives this error:

2016-06-16 15:03:10,813 ERROR [master:master:60000] logger.type: Internal error processing remove
alluxio.org.apache.thrift.TApplicationException: Internal error processing remove
at alluxio.org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
at alluxio.org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79)
at alluxio.thrift.FileSystemMasterClientService$Client.recv_remove(FileSystemMasterClientService.java:588)
at alluxio.thrift.FileSystemMasterClientService$Client.remove(FileSystemMasterClientService.java:574)
at alluxio.client.file.FileSystemMasterClient$4.call(FileSystemMasterClient.java:152)
at alluxio.client.file.FileSystemMasterClient$4.call(FileSystemMasterClient.java:149)
at alluxio.AbstractClient.retryRPC(AbstractClient.java:327)
at alluxio.client.file.FileSystemMasterClient.delete(FileSystemMasterClient.java:149)
at alluxio.client.file.BaseFileSystem.delete(BaseFileSystem.java:116)
at alluxio.hadoop.AbstractFileSystem.delete(AbstractFileSystem.java:223)
at alluxio.hadoop.FileSystem.delete(FileSystem.java:25)
at org.apache.hadoop.hbase.master.MasterFileSystem.checkTempDir(MasterFileSystem.java:537)
at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:157)
at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:130)
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:881)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:684)
at java.lang.Thread.run(Thread.java:745)
2016-06-16 15:03:10,813 ERROR [master:master:60000] master.HMaster: Unhandled exception. Starting shutdown.
java.io.IOException: Failed after 32 retries.
at alluxio.AbstractClient.retryRPC(AbstractClient.java:337)
at alluxio.client.file.FileSystemMasterClient.delete(FileSystemMasterClient.java:149)
at alluxio.client.file.BaseFileSystem.delete(BaseFileSystem.java:116)
at alluxio.hadoop.AbstractFileSystem.delete(AbstractFileSystem.java:223)
at alluxio.hadoop.FileSystem.delete(FileSystem.java:25)
at org.apache.hadoop.hbase.master.MasterFileSystem.checkTempDir(MasterFileSystem.java:537)
at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:157)
at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:130)
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:881)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:684)
at java.lang.Thread.run(Thread.java:745)

I would like to know: does Alluxio really support HBase, or is it just planned?


On Wednesday, June 15, 2016 at 9:04:29 PM UTC+8, Gene Pang wrote:

Gene Pang

Jun 17, 2016, 3:27:24 PM
to Alluxio Users, kiss.k...@gmail.com
Hi,

If you start Alluxio as a non-root user, with: bin/alluxio-start.sh all SudoMount

do you still run into the same issue?
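For reference, such a restart might look like the following sketch (the install path is taken from the logs earlier in this thread; `bin/alluxio format` erases Alluxio's metadata, so it is only safe on a fresh cluster):

```shell
cd /home/dcos/alluxio-1.1.0
bin/alluxio format                  # re-initialize master journal and worker storage (destructive)
bin/alluxio-start.sh all SudoMount  # start master and workers; sudo is used only for the RAM disk mount
```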

Thanks,
Gene

Gene Pang

Jun 21, 2016, 10:35:05 AM
to Alluxio Users, kiss.k...@gmail.com
Hi,

Were you able to resolve your issue?

Thanks,
Gene

Gene Pang

Jun 28, 2016, 9:54:25 AM
to Alluxio Users, kiss.k...@gmail.com
Hi,

Was your issue resolved?

Thanks,
Gene

kevin

Jun 29, 2016, 2:17:24 AM
to Gene Pang, alluxi...@googlegroups.com
Part of this issue was resolved.
Please see [ https://github.com/Alluxio/alluxio/pull/3563 ]; after this PR is merged, you can do CRUD operations on tables, with Snappy compression also supported.
But when I try to load about 1 GB of TPC-DS data into HBase through Phoenix, the region server crashes without any error. The same test works well on HDFS.



kiss.k...@gmail.com

Jul 1, 2016, 4:54:14 AM
to Alluxio Users, gene...@gmail.com
Alluxio supports HBase well, but does not support Phoenix well.

On Wednesday, June 29, 2016 at 2:17:24 PM UTC+8, kevin wrote:

darion.yaphet

Jul 1, 2016, 8:52:50 AM
to alluxi...@googlegroups.com
Hi,

I'm trying to run HBase on Alluxio. Is there a document describing how to configure it? Thanks.




Gene Pang

Jul 1, 2016, 12:08:59 PM
to Alluxio Users
Hi,

Thanks for confirming that HBase now works on Alluxio!

Could you describe what you did and what you had to change in order for this to work? It would be very helpful to the community!

Thanks,
Gene

kiss.k...@gmail.com

Jul 3, 2016, 10:19:35 PM
to Alluxio Users
For the test environment, I used hbase-0.98.16.1 rebuilt with hadoop 2.7.1, and alluxio-1.1.1-SNAPSHOT rebuilt with hadoop 2.7.0.
Finally I found a small conflict between HBase and Alluxio in how a file-not-found condition is handled: HBase catches java.io.FileNotFoundException and then tries to create the missing file (these files are HBase metadata); that exception is thrown by the under filesystem, such as HDFS.
But when Alluxio, acting as the filesystem for HBase, hits a file-not-found condition, it throws a plain java.io.IOException instead, so HBase cannot handle it.
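The mismatch can be sketched in plain Java (an illustrative stand-in, not HBase or Alluxio source): a caller that catches FileNotFoundException recovers only when the filesystem actually throws that subclass; a plain IOException carrying the same message sails past the catch block.

```java
import java.io.FileNotFoundException;
import java.io.IOException;

public class FnfeContractDemo {
    // Hypothetical stand-in for a filesystem call on a missing path.
    static void listStatus(boolean throwsFnfe) throws IOException {
        if (throwsFnfe) {
            // HDFS-style behavior: a specific subclass of IOException.
            throw new FileNotFoundException("Path /hbase/.tmp/data does not exist");
        }
        // Pre-fix Alluxio-style behavior: the same condition as a plain IOException.
        throw new IOException("InvalidPathException: Path /hbase/.tmp/data does not exist");
    }

    // Mimics the recovery pattern described above: catch the specific
    // exception and create the missing metadata file.
    static String access(boolean throwsFnfe) {
        try {
            listStatus(throwsFnfe);
            return "found";
        } catch (FileNotFoundException e) {
            return "recovered: create the file"; // recovery path taken
        } catch (IOException e) {
            return "unhandled: " + e.getMessage(); // error propagates instead
        }
    }

    public static void main(String[] args) {
        System.out.println(access(true));   // recovered: create the file
        System.out.println(access(false));  // unhandled: InvalidPathException: ...
    }
}
```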



On Saturday, July 2, 2016 at 12:08:59 AM UTC+8, Gene Pang wrote:

Gene Pang

Jul 7, 2016, 10:49:42 AM
to Alluxio Users
Thanks for the information!

So, after the PR was merged to master, you could run HBase on top of Alluxio? What configuration changes were necessary?

Thanks,
Gene

kiss.k...@gmail.com

Jul 7, 2016, 9:37:54 PM
to Alluxio Users
It is like running Spark on top of Alluxio.
In hbase-site.xml:
<property>
    <name>fs.alluxio.impl</name>
    <value>alluxio.hadoop.FileSystem</value>
</property>

<property>
    <name>hbase.rootdir</name>
    <value>alluxio://master:19998/hbase</value>
</property>


<property> <!-- not necessary-->
    <name>alluxio.user.file.writetype.default</name>
    <value>CACHE_THROUGH</value>
</property>

in hbase-env.sh
 export HBASE_CLASSPATH=/home/dcos/alluxio-branch-1.1/core/client/target/alluxio-core-client-1.1.1-SNAPSHOT-jar-with-dependencies.jar:$HBASE_CLASSPATH

But what I really want is to run HBase with SQL support (via Phoenix) on top of Alluxio, and that fails: when a table has more than two million rows, running a count(1) statement through Phoenix crashes the HBase region server, while the same count run directly on HBase succeeds.


On Thursday, July 7, 2016 at 10:49:42 PM UTC+8, Gene Pang wrote:

Gene Pang

Jul 8, 2016, 10:07:47 AM
to Alluxio Users
Thanks for the information and details!

As for Phoenix, please create a new post about Phoenix and provide additional details on the issues running on Alluxio.

Thank you,
Gene