root@c63e6c937cba:/# pio status
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/pio/pio-0.11.0/lib/spark/pio-data-hdfs-assembly-0.11.0-incubating.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/pio/pio-0.11.0/lib/pio-assembly-0.11.0-incubating.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
[INFO] [Management$] Inspecting PredictionIO...
[INFO] [Management$] PredictionIO 0.11.0-incubating is installed at /opt/pio/pio-0.11.0
[INFO] [Management$] Inspecting Apache Spark...
[INFO] [Management$] Apache Spark is installed at /usr/local/spark
[INFO] [Management$] Apache Spark 1.6.3 detected (meets minimum requirement of 1.3.0)
[INFO] [Management$] Inspecting storage backend connections...
[INFO] [Storage$] Verifying Meta Data Backend (Source: ELASTICSEARCH)...
[INFO] [Storage$] Verifying Model Data Backend (Source: HDFS)...
[ERROR] [Storage$] Error initializing storage client for source HDFS
[ERROR] [Management$] Unable to connect to all storage backends successfully.
The following shows the error message from the storage backend.
Data source HDFS was not properly initialized. (org.apache.predictionio.data.storage.StorageClientException)
Dumping configuration of initialized storage backend sources. Please make sure they are correct.
Source Name: ELASTICSEARCH; Type: elasticsearch; Configuration: HOME -> /usr/local/elasticsearch, HOSTS -> c63e6c937cba, PORTS -> 9300, CLUSTERNAME -> pio, TYPE -> elasticsearch
Source Name: HDFS; Type: (error); Configuration: (error)
...
2017-06-07 12:57:17,147 INFO org.apache.predictionio.tools.commands.Management$ [main] - Creating Event Server at 0.0.0.0:7070
2017-06-07 12:57:19,051 WARN org.apache.hadoop.hbase.util.DynamicClassLoader [main] - Failed to identify the fs of dir hdfs://c63e6c937cba:9000/hbase/lib, ignored
java.io.IOException: No FileSystem for scheme: hdfs
......
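A note on what this error means (my explanation, not from the thread): when Hadoop's `FileSystem.getFileSystemClass` resolves a URI scheme, it first checks the configuration key `fs.<scheme>.impl` and only then falls back to implementations registered via `ServiceLoader` entries in `META-INF/services`; "No FileSystem for scheme: hdfs" means neither source produced a class for `hdfs`. A trivial shell sketch of the config key Hadoop derives from a scheme:

```shell
# Sketch: the config key Hadoop checks for a URI scheme before falling
# back to ServiceLoader-based FileSystem discovery.
scheme_to_impl_key() {
  printf 'fs.%s.impl\n' "$1"
}

scheme_to_impl_key hdfs   # -> fs.hdfs.impl
scheme_to_impl_key file   # -> fs.file.impl
```

This is why an explicit `fs.hdfs.impl` entry in core-site.xml (shown later in this thread) can work around service-file registrations lost in a merged assembly jar, but only if the named class is actually on the classpath.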
2017-06-07 13:01:18,874 INFO org.apache.predictionio.data.storage.Storage$ [main] - Verifying Model Data Backend (Source: HDFS)...
2017-06-07 13:01:19,298 ERROR org.apache.predictionio.data.storage.Storage$ [main] - Error initializing storage client for source HDFS
java.io.IOException: No FileSystem for scheme: hdfs
        at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2586)
......
2017-06-07 13:12:46,937 INFO org.apache.predictionio.tools.commands.Management$ [main] - Creating Event Server at 0.0.0.0:7070
2017-06-07 13:12:49,494 ERROR org.apache.predictionio.data.storage.hbase.StorageClient [main] - Failed to connect to HBase. Please check if HBase is running properly.
2017-06-07 13:12:49,494 ERROR org.apache.predictionio.data.storage.Storage$ [main] - Error initializing storage client for source HBASE
java.io.IOException: java.lang.reflect.InvocationTargetException
...
Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.hdfs.DistributedFileSystem not found
...
...
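The `ClassNotFoundException` above means the hadoop-hdfs jar, which provides `DistributedFileSystem`, is genuinely absent from the runtime classpath rather than merely unregistered. One quick check is to list the assembly jar's contents; `has_class` below is a hypothetical helper (not part of PredictionIO or Hadoop), and the jar path is simply the one from the logs above.

```shell
# Hypothetical helper (not part of PredictionIO): report whether an
# archive contains a given .class entry, using unzip's listing.
has_class() {
  # $1 = path to jar, $2 = class entry path inside the jar
  unzip -l "$1" 2>/dev/null | grep -q "$2"
}

# Example, using the jar path from the logs above (adjust to your install):
# has_class /opt/pio/pio-0.11.0/lib/pio-assembly-0.11.0-incubating.jar \
#     org/apache/hadoop/hdfs/DistributedFileSystem.class \
#   && echo "hadoop-hdfs classes bundled" \
#   || echo "hadoop-hdfs classes missing"
```

If the class is missing from every jar on the classpath, configuration changes alone cannot fix it; the jar has to be included in the build, which is what the PR quoted at the end of this thread does.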
2017-06-07 13:14:36,482 INFO org.apache.predictionio.data.storage.Storage$ [main] - Verifying Meta Data Backend (Source: ELASTICSEARCH)...
2017-06-07 13:14:38,490 INFO org.apache.predictionio.data.storage.Storage$ [main] - Verifying Model Data Backend (Source: HDFS)...
2017-06-07 13:14:38,859 ERROR org.apache.predictionio.data.storage.Storage$ [main] - Error initializing storage client for source HDFS
java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.hdfs.DistributedFileSystem not found
...

core-site.xml:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://c63e6c937cba:9000/</value>
  </property>
  <property>
    <name>fs.file.impl</name>
    <value>org.apache.hadoop.fs.LocalFileSystem</value>
  </property>
  <property>
    <name>fs.hdfs.impl</name>
    <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
  </property>
</configuration>

hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.data.dir</name>
    <value>file:///usr/local/hadoop/dfs/name/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>file:///usr/local/hadoop/dfs/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>

hbase-site.xml:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://c63e6c937cba:9000/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>hdfs://c63e6c937cba:9000/zookeeper</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>

pio-env.sh:

#!/usr/bin/env bash
# Safe config that will work if you expand your cluster later
SPARK_HOME=/usr/local/spark
ES_CONF_DIR=/usr/local/elasticsearch/config
HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
HBASE_CONF_DIR=/usr/local/hbase/conf

# Filesystem paths that PredictionIO uses as block storage.
PIO_FS_BASEDIR=$HOME/.pio_store
PIO_FS_ENGINESDIR=$PIO_FS_BASEDIR/engines
PIO_FS_TMPDIR=$PIO_FS_BASEDIR/tmp

PIO_STORAGE_REPOSITORIES_METADATA_NAME=pio_meta
PIO_STORAGE_REPOSITORIES_METADATA_SOURCE=ELASTICSEARCH

PIO_STORAGE_REPOSITORIES_EVENTDATA_NAME=pio_event
PIO_STORAGE_REPOSITORIES_EVENTDATA_SOURCE=HBASE

PIO_STORAGE_REPOSITORIES_MODELDATA_NAME=pio_model
PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=HDFS
# PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=LOCALFS

# Elasticsearch example
PIO_STORAGE_SOURCES_ELASTICSEARCH_TYPE=elasticsearch
PIO_STORAGE_SOURCES_ELASTICSEARCH_HOME=/usr/local/elasticsearch
# the next line should match the cluster.name in elasticsearch.yml
PIO_STORAGE_SOURCES_ELASTICSEARCH_CLUSTERNAME=pio
# For single-host Elasticsearch; may add hosts and ports later
PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS=c63e6c937cba
PIO_STORAGE_SOURCES_ELASTICSEARCH_PORTS=9300

# Dummy models are stored here, so use HDFS in case you later want to
# expand the Event and Prediction Servers
PIO_STORAGE_SOURCES_HDFS_TYPE=hdfs
PIO_STORAGE_SOURCES_HDFS_PATH=hdfs://c63e6c937cba:9000/models

# localfs storage, because hdfs won't work
# PIO_STORAGE_SOURCES_LOCALFS_TYPE=localfs
# PIO_STORAGE_SOURCES_LOCALFS_PATH=${PIO_FS_BASEDIR}/models

# HBase source config
PIO_STORAGE_SOURCES_HBASE_TYPE=hbase
PIO_STORAGE_SOURCES_HBASE_HOME=/usr/local/hbase
# HBase single-master config
PIO_STORAGE_SOURCES_HBASE_HOSTS=c63e6c937cba
PIO_STORAGE_SOURCES_HBASE_PORTS=0

Sorry for the confusion over support. PIO has many components, and the Docker container you are using is of unknown origin (to me, anyway); it seems to have misconfigured something. Please be sure to tell the author, or create a PR, so it can be fixed for other users; it's one way to pay for free software.
On Jun 7, 2017, at 8:55 AM, Alexey Nikulov <hed...@gmail.com> wrote:
Sorry, I saw similar topics here and didn't think of another place to ask. Anyway, I solved it by editing /storage/hdfs/build.sbt before running ./make-distribution.sh, based on information found here and the corresponding PR.
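For readers hitting the same issue, the change involved is along these lines. This is a hypothetical sketch, not the exact diff: the authoritative fix is the PR quoted at the end of this thread, and the hadoop-hdfs version shown is a placeholder that must match the Hadoop version your PredictionIO build targets.

```
// storage/hdfs/build.sbt -- hypothetical addition; use the hadoop-hdfs
// version matching the Hadoop version PredictionIO is built against
libraryDependencies += "org.apache.hadoop" % "hadoop-hdfs" % "2.6.5"
```

After editing, rebuild the distribution with ./make-distribution.sh so the jar is bundled into the assembly.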
On Wednesday, June 7, 2017 at 6:29:32 PM UTC+3, pat wrote:
This group is for support of ActionML projects like the Universal Recommender. Please direct PIO questions to the Apache PIO mailing list.
GitHub user shimamoto opened a pull request:
https://github.com/apache/incubator-predictionio/pull/389
PIO-91 Fixed hadoop-hdfs artifact missing error
I made a mistake when I reviewed dependencies.
Basically, the problem is due to the unavailability of the hadoop-hdfs jars.

You can merge this pull request into a Git repository by running:
$ git pull https://github.com/shimamoto/incubator-predictionio pio-91_hadoop-hdfs-missing
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/incubator-predictionio/pull/389.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #389
commit f60af2c4dd5d4656ccab189b5590a9dc451ffb27
Author: shimamoto
Date: 2017-06-05T11:55:45Z

PIO-91 Fixed hadoop-hdfs artifact missing error.