Hi Ankita,
I'm on CDH3, with security turned off, and tried to reproduce the problem with
exactly the same external metastore setting. Instead, Beeswax went ahead and
created the directory:
10/12/29 10:19:51 INFO beeswax.Server: Created /user/hive/warehouse-hue with
world-writable permissions.
10/12/29 10:19:51 INFO beeswax.Server: Starting beeswaxd at port 8002
A few quick things to check:
(1) Is the Hadoop on the Hue node configured correctly? Does `hadoop fs ...`
work?
(2) Can you get the corresponding error message from the NN log to see
why it couldn't create the directory?
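For example, something along these lines (the NameNode log path is just the usual
packaged-install location; adjust it to wherever your NN actually logs):

    # On the Hue node, as the user Hue runs as:
    hadoop fs -ls /user/hive
    hadoop fs -mkdir /tmp/hue-smoke-test && hadoop fs -rmr /tmp/hue-smoke-test

    # On the NameNode, look for the failed mkdir of the warehouse dir:
    grep -i 'warehouse-hue' /var/log/hadoop/*namenode*.log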
Cheers,
--
bc Wong
Cloudera Software Engineer
It's the same. Hue runs BeeswaxServer, which is essentially a Hive
client. So it should have an identical setup.
> Also, I am not able to do hadoop fs -ls from my Hue machine as this
> machine is not part of the Hadoop cluster. I think this is not a
> requirement, as I am able to browse my Hadoop filesystem from the Hue file
> browser correctly. Am I missing something?
Hue requires a properly configured Hadoop client locally. The file
upload runs Hadoop, job submission requires Hadoop, and BeeswaxServer
requires Hadoop.
In Hue, the left-hand side of the application bar should show a little
red exclamation mark. Click on it and it'll show you the common
misconfigurations.
So it seems that Hue starts and stays up, just that you're missing
most of the apps. Right? Can you take a look at your /etc/hue/hue.ini
and make sure the "hadoop_home" is set correctly?
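For example (assuming the stock hue.ini layout):

    grep -n hadoop_home /etc/hue/hue.ini
    # It should point at the same Hadoop install that `hadoop fs` uses,
    # e.g. hadoop_home=/usr/lib/hadoop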
What you mentioned below, esp. "Supervisor shutting down" would
suggest that Hue can't even start. If so, you'd need to scan the other
log files in /var/log/hue for errors. (Error reporting is badly done
in 1.0.x. In 1.1.0, there is a central error.log for all errors.)
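For example, something like this (log file names vary a little between versions):

    # Look for errors across all the Hue logs
    grep -il 'error\|exception\|traceback' /var/log/hue/*.log
    tail -n 100 /var/log/hue/supervisor.log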
Anki,
You're absolutely right. I probably did not set up my Hive conf correctly when I
said I couldn't reproduce your problem. I filed
https://issues.cloudera.org/browse/HUE-393 for this problem. The patch in the
jira should work for you.
Hi Matt,
Can you try `patch -p1 ...'?
Hi Matt,
What's your Hue version? Could you send me your
apps/beeswax/src/beeswax/db_utils.py file?
I see. The patch wouldn't apply directly on 1.1. (I was a bit mistaken
about what's in which version.) On your version, the patch should go
in around L305 rather than L296. It's a small change. Do you feel
comfortable hand-editing your db_utils.py?
Hi Matt,
Is hive.metastore.local set to false in your
/etc/hive/conf/hive-site.xml? Could you please attach that file?
Cheers,
bc
> I have verified that hive from cli connects to my derby server and works as
> expected.
>
> I also attached my current db_utils.py file as a sanity check.
>
> Thanks again for all of your help!
> -M@
>
> On Fri, Feb 11, 2011 at 8:34 AM, bc Wong <bcwa...@cloudera.com> wrote:
>>
>> On Fri, Feb 11, 2011 at 7:20 AM, Matt Tanquary <matt.t...@gmail.com>
>> wrote:
>> > Thanks!
>> >
>> > Now I get this error when trying to start hive from hue: An error
>> > occurred:
>> > 'module' object has no attribute 'METASTORE_CONN_TIMEOUT'
>>
>> Hi Matt,
>>
>> Your original file has this for the code block in question:
>>
>> client = thrift_util.get_client(ThriftHiveMetastore.Client,
>>                                 conf.BEESWAX_META_SERVER_HOST.get(),
>>                                 conf.BEESWAX_META_SERVER_PORT.get(),
>>                                 service_name="Hive Metadata (Hive UI) Server",
>>                                 timeout_seconds=METASTORE_THRIFT_TIMEOUT)
>>
>> Your new file should modify two arguments:
>>
>> client = thrift_util.get_client(ThriftHiveMetastore.Client,
>>                                 host,
>>                                 port,
>>                                 service_name="Hive Metadata (Hive UI) Server",
>>                                 timeout_seconds=METASTORE_THRIFT_TIMEOUT)
>>
>> Cheers,
>> bc
[cc-ing hue-users@. Please reply-all to keep a record for future users.]
Where is your external metastore? You were using the Hive CLI
earlier, which connected to the external metastore. So your CLI
session cannot be using /etc/hive/conf/hive-site.xml. You should
point hive_conf_dir in `hue-beeswax.ini' to wherever the real
configuration is.
The hive-site.xml you attached is misconfigured. It doesn't have
hive.metastore.uris.
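For reference, a hive-site.xml that points at a true remote metastore would carry
something along these lines (the host and port here are only placeholders):

    <property>
      <name>hive.metastore.local</name>
      <value>false</value>
    </property>
    <property>
      <name>hive.metastore.uris</name>
      <value>thrift://your-metastore-host:9083</value>
    </property>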
Matt,
Please include hue-users@ in the reply. This is useful to help
other people troubleshoot.
The Hive terminology might be confusing you:
http://wiki.apache.org/hadoop/Hive/AdminManual/MetastoreAdmin
What you have is not necessarily an external (remote) metastore.
A remote metastore is a Hive metastore server daemon. That daemon
proxies all metastore requests, and the clients don't directly
talk to the metastore DB. In your case, it sounds like your
clients are connecting to the DB directly, just that the DB is on
a remote machine. That is still classified as an internal
metastore.
To go into Hive best practices a bit: you shouldn't use Derby
for a shared metastore. You'll run into concurrency problems
(and definitely performance ones). I'd recommend MySQL.
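If you do switch, the MySQL side is roughly this (database name, user and password
below are placeholders; grant from whichever hosts your Hive clients run on):

    mysql> CREATE DATABASE metastore;
    mysql> CREATE USER 'hive'@'%' IDENTIFIED BY 'secret';
    mysql> GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'%';
    mysql> FLUSH PRIVILEGES;

and then point javax.jdo.option.ConnectionURL in hive-site.xml at
jdbc:mysql://<db-host>/metastore.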
Exception communicating with Hive Metastore Server at localhost:8003: timed out
I installed the MySQL server on my master node, where I installed Cloudera Manager as well.
I created a new database in MySQL named 'metastore', and my hive-site.xml is as follows:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://n1.example.com/metastore</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>q*****</value>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>false</value>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>true</value>
</property>
Can anybody help me solve this error?
<?xml version="1.0" encoding="UTF-8"?>
<!--Autogenerated by Cloudera CM on 2013-02-19T14:25:59.866Z-->
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost:3306/metastore_db?useUnicode=true&characterEncoding=UTF-8</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/beeswax/warehouse</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hue</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value></value>
</property>
</configuration>
And also, hbase-conf/hbase-site.xml under the configuration files on the beeswax_server (n1) looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<!--Autogenerated by Cloudera CM on 2013-02-19T14:25:59.524Z-->
<configuration>
<property><name>hbase.rootdir</name><value>hdfs://n1.example.com:8020/hbase</value></property>
<property><name>hbase.client.write.buffer</name><value>2097152</value></property>
<property><name>hbase.client.pause</name><value>1000</value></property>
<property><name>hbase.client.retries.number</name><value>10</value></property>
<property><name>hbase.client.scanner.caching</name><value>1</value></property>
<property><name>hbase.client.keyvalue.maxsize</name><value>10485760</value></property>
<property><name>hbase.security.authentication</name><value>simple</value></property>
<property><name>zookeeper.session.timeout</name><value>60000</value></property>
<property><name>zookeeper.znode.parent</name><value>/hbase</value></property>
<property><name>zookeeper.znode.rootserver</name><value>root-region-server</value></property>
<property><name>hbase.zookeeper.quorum</name><value>n1.example.com</value></property>
<property><name>hbase.zookeeper.property.clientPort</name><value>2181</value></property>
</configuration>
I installed the MySQL server on the same node (n1), then built a new database named 'metastore', following the configuration from Cloudera's website. The metastore database exists under var/log/mysql/metastore with all the metadata tables (I created them with the command SOURCE /usr/lib/hive/scripts/metastore/upgrade/mysql/hive-schema-0.9.0.mysql.sql;).
While creating the metastore database, I configured the database user like so:
CREATE USER hive@localhost IDENTIFIED BY 'xxxxxxx' ;
And also, my hbase-conf/hdfs-site.xml under the Beeswax server in CM:
<?xml version="1.0" encoding="UTF-8"?>
<!--Autogenerated by Cloudera CM on 2013-02-19T14:25:59.521Z-->
<configuration>
<property><name>dfs.https.port</name><value>50470</value></property>
<property><name>dfs.namenode.http-address</name><value>n1.example.com:50070</value></property>
<property><name>dfs.replication</name><value>3</value></property>
<property><name>dfs.blocksize</name><value>134217728</value></property>
<property><name>dfs.client.use.datanode.hostname</name><value>false</value></property>
<property><name>fs.permissions.umask-mode</name><value>022</value></property>
<property><name>dfs.datanode.hdfs-blocks-metadata.enabled</name><value>true</value></property>
</configuration>
I'm trying to solve this problem but have not found the cause yet...
Thank you for your attention.
Configuration options for the Hive UI (Beeswax):
hive_conf_dir | /var/run/cloudera-scm-agent/process/191-hue-HUE_SERVER/hive-conf | Hive configuration directory, where hive-site.xml is located. Default: /var/run/cloudera-scm-agent/process/191-hue-HUE_SERVER/hive-conf
share_saved_queries | True | Share saved queries with all users. If set to false, saved queries are only visible to the owner and administrators. Default: True
metastore_conn_timeout | 10 | Timeout in seconds for Thrift calls to the Hive metastore. This timeout should take into account that the metastore may be talking to an external database. Default: 10
beeswax_server_port | 8002 | Port on which the Beeswax Thrift server runs. Default: 8002
beeswax_running_query_lifetime | 604800000 | Time in seconds during which Beeswax keeps queries cached. Default: 604800000
hive_home_dir | /usr/lib/hive | Path to the root of the Hive installation; falls back to the environment variable if not set. Default: /usr/lib/hive
browse_partitioned_table_limit | 250 | Apply a LIMIT clause when browsing a partitioned table. A positive value is used as the LIMIT; if 0 or negative, no limit is applied. Default: 250
beeswax_server_heapsize | 53 | Maximum Java heap size (in megabytes) used by the Beeswax server. Note that setting HADOOP_HEAPSIZE in $HADOOP_CONF_DIR/hadoop-env.sh may override this setting. Default: 1000
beeswax_server_conn_timeout | 120 | Timeout in seconds for Thrift calls to the Beeswax service. Default: 120
beeswax_meta_server_port | 8003 | Port on which the internal metastore daemon runs. Only used when hive.metastore.local is true. Default: 8003
beeswax_meta_server_only | None | Disable Beeswax as the query server. This is used when Beeswax is just used for talking to the metastore and Hue is using another query server. Just fill in an unused port. Default: None
local_examples_data_dir | /usr/share/hue/apps/beeswax/src/beeswax/../../data | Path on the local filesystem that contains the Beeswax examples. Default: /usr/share/hue/apps/beeswax/src/beeswax/../../data
Driver returned: 1. Errors:
Hive history file=/tmp/hue/hive_job_log_hue_201302200154_1490517290.txt
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Job Submission failed with exception 'org.apache.hadoop.security.AccessControlException(Permission denied: user=admin, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:186)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:135)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4547)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4518)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:2880)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2844)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2823)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:639)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:417)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44096)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MapRedTask
I am very much a rookie with HDFS. I tried to look at the HDFS files from one node (the master node). With "hadoop fs -ls" I'm getting this message: ls: `.': No such file or directory
And also, in the /var/lib/hive/metastore directory, where I pointed my MySQL metastore directory, there is nothing. Is that normal?
But the default Hue Beeswax directory /var/lib/hive/hue_beeswax_metastore/metastore_db has log, seg0, tmp, dblck, dbex, service.properties and other Derby database files.
And where should that directory be, i.e. under which directory: "bin", "dfs", or "usr"?
I thought it should be in HDFS, but that command does not work.
Under the HBase Web UI I can see my tables; it looks like this:
http://n1.example.com:60010/master-status
Master: n1.example.com:60000
Attributes
Attribute Name Value Description
HBase Version 0.92.1-cdh4.1.3, rUnknown HBase version and revision
HBase Compiled Sat Jan 26 17:11:38 PST 2013, jenkins When HBase version was compiled and by whom
Hadoop Version 2.0.0-cdh4.1.3, rdbc7a60f9a798ef63afb7f5b723dc9c02d5321e1 Hadoop version and revision
Hadoop Compiled Sat Jan 26 16:46:14 PST 2013, jenkins When Hadoop version was compiled and by whom
HBase Root Directory hdfs://n1.example.com:8020/hbase Location of HBase home directory
HBase Cluster ID eab1a97c-2d1b-4d7d-8315-dcaf1c151f8d Unique identifier generated for each HBase cluster
Load average 1 Average number of regions per regionserver. Naive computation.
Zookeeper Quorum n1.example.com:2181 Addresses of all registered ZK servers. For more, see zk dump.
Coprocessors [] Coprocessors currently loaded loaded by the master
HMaster Start Time Tue Feb 19 15:25:28 CET 2013 Date stamp of when this HMaster was started
HMaster Active Time Tue Feb 19 15:25:28 CET 2013 Date stamp of when this HMaster became active
Tasks
No tasks currently running on this node.
Tables
Catalog Table Description
-ROOT- The -ROOT- table holds references to all .META. regions.
.META. The .META. table holds references to all User Table regions
2 table(s) in set. [Details]
User Table Description
cars {NAME => 'cars', FAMILIES => [{NAME => 'vi', MIN_VERSIONS => '0'}]}
test {NAME => 'test', FAMILIES => [{NAME => 'cf1', MIN_VERSIONS => '0'}]}
Region Servers
ServerName Start time Load
n1.example.com,60020,1361283928017 Tue Feb 19 15:25:28 CET 2013 requestsPerSecond=0, numberOfOnlineRegions=1, usedHeapMB=33, maxHeapMB=65
n2.example.com,60020,1361284069894 Tue Feb 19 15:27:49 CET 2013 requestsPerSecond=0, numberOfOnlineRegions=1, usedHeapMB=27, maxHeapMB=185
n3.example.com,60020,1361284067501 Tue Feb 19 15:27:47 CET 2013 requestsPerSecond=0, numberOfOnlineRegions=0, usedHeapMB=30, maxHeapMB=185
n4.example.com,60020,1361284298009 Tue Feb 19 15:31:38 CET 2013 requestsPerSecond=0, numberOfOnlineRegions=2, usedHeapMB=27, maxHeapMB=185
Total: servers: 4 requestsPerSecond=0, numberOfOnlineRegions=4
Load is requests per second and count of regions loaded
Dead Region Servers
Regions in Transition
No regions in transition.
Thank you for your attention.
Onur
I tried a few things now and they worked:
[root@n1 ~]# hadoop fs -ls /hbase
Found 10 items
drwxr-xr-x - hbase hbase 0 2013-02-08 04:11 /hbase/-ROOT-
drwxr-xr-x - hbase hbase 0 2013-02-08 04:11 /hbase/.META.
drwxr-xr-x - hbase hbase 0 2013-02-08 04:12 /hbase/.corrupt
drwxr-xr-x - hbase hbase 0 2013-02-19 15:27 /hbase/.logs
drwxr-xr-x - hbase hbase 0 2013-02-19 15:28 /hbase/.oldlogs
drwxr-xr-x - hbase hbase 0 2013-02-16 22:20 /hbase/cars
-rw-r--r-- 3 hbase hbase 38 2013-02-08 04:11 /hbase/hbase.id
-rw-r--r-- 3 hbase hbase 3 2013-02-08 04:11 /hbase/hbase.version
drwxr-xr-x - hbase hbase 0 2013-02-19 15:26 /hbase/splitlog
drwxr-xr-x - hbase hbase 0 2013-02-16 22:10 /hbase/test
[root@n1 ~]# hadoop fs -ls /hbase/cars
Found 3 items
-rw-r--r-- 3 hbase hbase 509 2013-02-16 22:20 /hbase/cars/.tableinfo.0000000001
drwxr-xr-x - hbase hbase 0 2013-02-16 22:20 /hbase/cars/.tmp
drwxr-xr-x - hbase hbase 0 2013-02-18 04:39 /hbase/cars/7c91bdc9437420e2896525114c0a0499
[root@n1 ~]# hadoop fs -ls /
Found 3 items
drwxr-xr-x - hbase hbase 0 2013-02-16 22:20 /hbase
drwxrwxrwt - hdfs hdfs 0 2013-02-20 10:39 /tmp
drwxr-xr-x - hdfs supergroup 0 2013-02-08 04:14 /user
[root@n1 ~]# hadoop fs -ls /user/beeswax/warehouse
Found 2 items
drwxr-xr-x - hue hdfs 0 2013-02-20 10:38 /user/beeswax/warehouse/sample_07
drwxr-xr-x - hue hdfs 0 2013-02-20 10:39 /user/beeswax/warehouse/sample_08
[root@n1 ~]# hadoop fs -ls /user/beeswax/warehouse/sample_07
Found 1 items
-rw-r--r-- 3 hue hdfs 46055 2013-02-20 10:39 /user/beeswax/warehouse/sample_07/sample_07.csv
Now I can see on Hadoop all the tables that I created, but I could not figure out these errors:
Driver returned: 1. Errors: Hive history file=/tmp/hue/hive_job_log_hue_201302191549_48687276.txt
FAILED: Error in metadata: MetaException(message:Got exception: org.apache.hadoop.security.AccessControlException Permission denied: user=admin, access=WRITE, inode="/user/beeswax/warehouse":hue:hive:drwxrwxr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:186)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:135)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4547)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4518)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:2880)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2844)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2823)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:639)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:417)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44096)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
)
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
Thanks,
Onur
I created a new user 'hdfs' in the Hue GUI. Then I erased the Hue example tables in Hue, and after that I decided to change the permission settings on the command line. When I tried to run this command in the shell:
sudo -u hdfs chmod 1777 /user/beeswax/warehouse
I get this error:
chmod: cannot access /user/beeswax/warehouse : No such file or directory
I also tried: hadoop fs -ls /user/beeswax/warehouse, and I get nothing.
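[Note: the plain chmod above runs against the local filesystem, which is why it
reports 'No such file or directory' for an HDFS path; the HDFS form of the same
command would be roughly:

    sudo -u hdfs hadoop fs -chmod 1777 /user/beeswax/warehouse
]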
After I created the new user 'hdfs', I tried to build the Hue example tables again. Although I had erased those tables, I am getting this error: 'There was an error processing your request: Beeswax examples already installed.'
Thank you for your attention.
Onur
13/02/26 03:28:15 INFO exec.HiveHistory: Hive history file=/tmp/hue/hive_job_log_hue_201302260328_1211673658.txt 13/02/26 03:28:15 INFO ql.Driver: <PERFLOG method=compile> 13/02/26 03:28:15 INFO parse.ParseDriver: Parsing command: SELECT s07.description, s07.total_emp, s08.total_emp, s07.salary FROM sample_07 s07 JOIN sample_08 s08 ON ( s07.code = s08.code ) WHERE ( s07.total_emp > s08.total_emp AND s07.salary > 100000 ) SORT BY s07.salary DESC 13/02/26 03:28:15 INFO parse.ParseDriver: Parse Completed 13/02/26 03:28:15 INFO parse.SemanticAnalyzer: Starting Semantic Analysis 13/02/26 03:28:15 INFO parse.SemanticAnalyzer: Completed phase 1 of Semantic Analysis 13/02/26 03:28:15 INFO parse.SemanticAnalyzer: Get metadata for source tables 13/02/26 03:28:15 INFO metastore.HiveMetaStore: 6: get_table : db=default tbl=sample_07 13/02/26 03:28:15 INFO HiveMetaStore.audit: ugi=hdfs ip=unknown-ip-addr cmd=get_table : db=default tbl=sample_07 13/02/26 03:28:15 INFO hive.log: DDL: struct sample_07 { string code, string description, i32 total_emp, i32 salary} 13/02/26 03:28:15 INFO metastore.HiveMetaStore: 6: get_table : db=default tbl=sample_08 13/02/26 03:28:15 INFO HiveMetaStore.audit: ugi=hdfs ip=unknown-ip-addr cmd=get_table : db=default tbl=sample_08 13/02/26 03:28:15 INFO hive.log: DDL: struct sample_08 { string code, string description, i32 total_emp, i32 salary} 13/02/26 03:28:15 INFO parse.SemanticAnalyzer: Get metadata for subqueries 13/02/26 03:28:15 INFO parse.SemanticAnalyzer: Get metadata for destination tables 13/02/26 03:28:15 INFO parse.SemanticAnalyzer: Completed getting MetaData in Semantic Analysis 13/02/26 03:28:15 INFO hive.log: DDL: struct sample_07 { string code, string description, i32 total_emp, i32 salary} 13/02/26 03:28:15 INFO hive.log: DDL: struct sample_08 { string code, string description, i32 total_emp, i32 salary} 13/02/26 03:28:15 INFO ppd.OpProcFactory: Processing for FS(12) 13/02/26 03:28:15 INFO ppd.OpProcFactory: Processing for OP(11) 13/02/26 03:28:15 INFO ppd.OpProcFactory: Processing for RS(10) 13/02/26 03:28:15 INFO ppd.OpProcFactory: Processing for SEL(9) 13/02/26 03:28:15 INFO ppd.OpProcFactory: Processing for FIL(8) 13/02/26 03:28:15 INFO ppd.OpProcFactory: Pushdown Predicates of FIL For Alias : s07 13/02/26 03:28:15 INFO ppd.OpProcFactory: (_col3 > 100000) 13/02/26 03:28:15 INFO ppd.OpProcFactory: Processing for JOIN(7) 13/02/26 03:28:15 INFO ppd.OpProcFactory: Pushdown Predicates of JOIN For Alias : s07 13/02/26 03:28:15 INFO ppd.OpProcFactory: (VALUE._col3 > 100000) 13/02/26 03:28:15 INFO ppd.OpProcFactory: Processing for RS(5) 13/02/26 03:28:15 INFO ppd.OpProcFactory: Pushdown Predicates of RS For Alias : s07 13/02/26 03:28:15 INFO ppd.OpProcFactory: (salary > 100000) 13/02/26 03:28:15 INFO ppd.OpProcFactory: Processing for TS(3) 13/02/26 03:28:15 INFO ppd.OpProcFactory: Pushdown Predicates of TS For Alias : s07 13/02/26 03:28:15 INFO ppd.OpProcFactory: (salary > 100000) 13/02/26 03:28:15 INFO ppd.OpProcFactory: Processing for RS(6) 13/02/26 03:28:15 INFO ppd.OpProcFactory: Processing for TS(4) 13/02/26 03:28:15 INFO hive.log: DDL: struct sample_07 { string code, string description, i32 total_emp, i32 salary} 13/02/26 03:28:15 INFO hive.log: DDL: struct sample_07 { string code, string description, i32 total_emp, i32 salary} 13/02/26 03:28:15 INFO hive.log: DDL: struct sample_07 { string code, string description, i32 total_emp, i32 salary} 13/02/26 03:28:15 INFO hive.log: DDL: struct sample_08 { string code, string description, i32 total_emp, i32 
salary} 13/02/26 03:28:15 INFO hive.log: DDL: struct sample_08 { string code, string description, i32 total_emp, i32 salary} 13/02/26 03:28:15 INFO hive.log: DDL: struct sample_08 { string code, string description, i32 total_emp, i32 salary} 13/02/26 03:28:15 INFO physical.MetadataOnlyOptimizer: Looking for table scans where optimization is applicable 13/02/26 03:28:15 INFO physical.MetadataOnlyOptimizer: Found 0 metadata only table scans 13/02/26 03:28:15 INFO physical.MetadataOnlyOptimizer: Looking for table scans where optimization is applicable 13/02/26 03:28:15 INFO physical.MetadataOnlyOptimizer: Found 0 metadata only table scans 13/02/26 03:28:15 INFO parse.SemanticAnalyzer: Completed plan generation 13/02/26 03:28:15 INFO ql.Driver: Semantic Analysis Completed 13/02/26 03:28:15 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:description, type:string, comment:null), FieldSchema(name:total_emp, type:int, comment:null), FieldSchema(name:total_emp, type:int, comment:null), FieldSchema(name:salary, type:int, comment:null)], properties:null) 13/02/26 03:28:15 INFO ql.Driver: </PERFLOG method=compile start=1361878095433 end=1361878095784 duration=351> Hive history file=/tmp/hue/hive_job_log_hue_201302260328_755410322.txt 13/02/26 03:28:15 INFO exec.HiveHistory: Hive history file=/tmp/hue/hive_job_log_hue_201302260328_755410322.txt 13/02/26 03:28:15 INFO ql.Driver: <PERFLOG method=Driver.execute> 13/02/26 03:28:15 INFO ql.Driver: Starting command: SELECT s07.description, s07.total_emp, s08.total_emp, s07.salary FROM sample_07 s07 JOIN sample_08 s08 ON ( s07.code = s08.code ) WHERE ( s07.total_emp > s08.total_emp AND s07.salary > 100000 ) SORT BY s07.salary DESC Total MapReduce jobs = 2 13/02/26 03:28:15 INFO ql.Driver: Total MapReduce jobs = 2 13/02/26 03:28:15 INFO ql.Driver: </PERFLOG method=TimeToSubmit end=1361878095885> Launching Job 1 out of 2 13/02/26 03:28:15 INFO ql.Driver: Launching Job 1 out of 2 13/02/26 03:28:16 INFO exec.Utilities: Cache Content Summary for hdfs://n1.example.com:8020/user/beeswax/warehouse/sample_08 length: 46069 file count: 1 directory count: 1 13/02/26 03:28:16 INFO exec.Utilities: Cache Content Summary for hdfs://n1.example.com:8020/user/beeswax/warehouse/sample_07 length: 46055 file count: 1 directory count: 1 13/02/26 03:28:16 INFO exec.ExecDriver: BytesPerReducer=1000000000 maxReducers=999 totalInputFileSize=92124 Number of reduce tasks not specified. Estimated from input data size: 1 13/02/26 03:28:16 INFO exec.Task: Number of reduce tasks not specified. 
Estimated from input data size: 1 In order to change the average load for a reducer (in bytes): 13/02/26 03:28:16 INFO exec.Task: In order to change the average load for a reducer (in bytes): set hive.exec.reducers.bytes.per.reducer=<number> 13/02/26 03:28:16 INFO exec.Task: set hive.exec.reducers.bytes.per.reducer=<number> In order to limit the maximum number of reducers: 13/02/26 03:28:16 INFO exec.Task: In order to limit the maximum number of reducers: set hive.exec.reducers.max=<number> 13/02/26 03:28:16 INFO exec.Task: set hive.exec.reducers.max=<number> In order to set a constant number of reducers: 13/02/26 03:28:16 INFO exec.Task: In order to set a constant number of reducers: set mapred.reduce.tasks=<number> 13/02/26 03:28:16 INFO exec.Task: set mapred.reduce.tasks=<number> 13/02/26 03:28:16 INFO exec.ExecDriver: Using org.apache.hadoop.hive.ql.io.CombineHiveInputFormat 13/02/26 03:28:16 INFO exec.ExecDriver: adding libjars: file:///usr/lib/hive/lib/hive-builtins-0.9.0-cdh4.1.3.jar 13/02/26 03:28:16 INFO exec.ExecDriver: Processing alias s07 13/02/26 03:28:16 INFO exec.ExecDriver: Adding input file hdfs://n1.example.com:8020/user/beeswax/warehouse/sample_07 13/02/26 03:28:16 INFO exec.Utilities: Content Summary hdfs://n1.example.com:8020/user/beeswax/warehouse/sample_07length: 46055 num files: 1 num directories: 1 13/02/26 03:28:16 INFO exec.ExecDriver: Processing alias s08 13/02/26 03:28:16 INFO exec.ExecDriver: Adding input file hdfs://n1.example.com:8020/user/beeswax/warehouse/sample_08 13/02/26 03:28:16 INFO exec.Utilities: Content Summary hdfs://n1.example.com:8020/user/beeswax/warehouse/sample_08length: 46069 num files: 1 num directories: 1 13/02/26 03:28:17 INFO exec.ExecDriver: Making Temp Directory: hdfs://n1.example.com:8020/tmp/hive-beeswax-hdfs/hive_2013-02-26_03-28-15_434_7027909062045907076/-mr-10002 13/02/26 03:28:21 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same. 13/02/26 03:28:22 INFO io.CombineHiveInputFormat: CombineHiveInputSplit creating pool for hdfs://n1.example.com:8020/user/beeswax/warehouse/sample_07; using filter path hdfs://n1.example.com:8020/user/beeswax/warehouse/sample_07 13/02/26 03:28:22 INFO io.CombineHiveInputFormat: CombineHiveInputSplit creating pool for hdfs://n1.example.com:8020/user/beeswax/warehouse/sample_08; using filter path hdfs://n1.example.com:8020/user/beeswax/warehouse/sample_08 13/02/26 03:28:22 INFO mapred.FileInputFormat: Total input paths to process : 2 13/02/26 03:28:22 INFO io.CombineHiveInputFormat: number of splits 2 Starting Job = job_201302210135_0001, Tracking URL = http://n1.example.com:50030/jobdetails.jsp?jobid=job_201302210135_0001 13/02/26 03:28:32 INFO exec.Task: Starting Job = job_201302210135_0001, Tracking URL = http://n1.example.com:50030/jobdetails.jsp?jobid=job_201302210135_0001 Kill Command = /usr/lib/hadoop/bin/hadoop job -Dmapred.job.tracker=n1.example.com:8021 -kill job_201302210135_0001 13/02/26 03:28:32 INFO exec.Task: Kill Command = /usr/lib/hadoop/bin/hadoop job -Dmapred.job.tracker=n1.example.com:8021 -kill job_201302210135_0001 Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 1 13/02/26 03:28:49 INFO exec.Task: Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 1 13/02/26 03:28:49 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. 
Use org.apache.hadoop.mapreduce.TaskCounter instead 2013-02-26 03:28:49,177 Stage-1 map = 0%, reduce = 0% 13/02/26 03:28:49 INFO exec.Task: 2013-02-26 03:28:49,177 Stage-1 map = 0%, reduce = 0% 13/02/26 03:29:49 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead 2013-02-26 03:29:49,548 Stage-1 map = 0%, reduce = 0% 13/02/26 03:29:49 INFO exec.Task: 2013-02-26 03:29:49,548 Stage-1 map = 0%, reduce = 0% 13/02/26 03:30:37 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead 2013-02-26 03:30:37,517 Stage-1 map = 100%, reduce = 100% 13/02/26 03:30:37 INFO exec.Task: 2013-02-26 03:30:37,517 Stage-1 map = 100%, reduce = 100% 13/02/26 03:30:37 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead Ended Job = job_201302210135_0001 with errors 13/02/26 03:30:37 ERROR exec.Task: Ended Job = job_201302210135_0001 with errors FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask 13/02/26 03:30:37 ERROR ql.Driver: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask 13/02/26 03:30:37 INFO ql.Driver: </PERFLOG method=Driver.execute start=1361878095875 end=1361878237752 duration=141877> MapReduce Jobs Launched: 13/02/26 03:30:37 INFO ql.Driver: MapReduce Jobs Launched: Job 0: Map: 2 Reduce: 1 HDFS Read: 0 HDFS Write: 0 FAIL 13/02/26 03:30:37 INFO ql.Driver: Job 0: Map: 2 Reduce: 1 HDFS Read: 0 HDFS Write: 0 FAIL Total MapReduce CPU Time Spent: 0 msec 13/02/26 03:30:37 INFO ql.Driver: Total MapReduce CPU Time Spent: 0 msec 13/02/26 03:30:38 ERROR beeswax.BeeswaxServiceImpl: Exception while processing query BeeswaxException(message:Driver returned: 2. Errors: Hive history file=/tmp/hue/hive_job_log_hue_201302260328_755410322.txt Total MapReduce jobs = 2 Launching Job 1 out of 2 Number of reduce tasks not specified. 
Estimated from input data size: 1 In order to change the average load for a reducer (in bytes): set hive.exec.reducers.bytes.per.reducer=<number> In order to limit the maximum number of reducers: set hive.exec.reducers.max=<number> In order to set a constant number of reducers: set mapred.reduce.tasks=<number> Starting Job = job_201302210135_0001, Tracking URL = http://n1.example.com:50030/jobdetails.jsp?jobid=job_201302210135_0001 Kill Command = /usr/lib/hadoop/bin/hadoop job -Dmapred.job.tracker=n1.example.com:8021 -kill job_201302210135_0001 Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 1 2013-02-26 03:28:49,177 Stage-1 map = 0%, reduce = 0% 2013-02-26 03:29:49,548 Stage-1 map = 0%, reduce = 0% 2013-02-26 03:30:37,517 Stage-1 map = 100%, reduce = 100% Ended Job = job_201302210135_0001 with errors FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask MapReduce Jobs Launched: Job 0: Map: 2 Reduce: 1 HDFS Read: 0 HDFS Write: 0 FAIL Total MapReduce CPU Time Spent: 0 msec , log_context:e7199542-f187-458c-b3dd-887560485a81, handle:QueryHandle(id:e7199542-f187-458c-b3dd-887560485a81, log_context:e7199542-f187-458c-b3dd-887560485a81), SQLState: ) at com.cloudera.beeswax.BeeswaxServiceImpl$RunningQueryState.execute(BeeswaxServiceImpl.java:319) at com.cloudera.beeswax.BeeswaxServiceImpl$RunningQueryState$1$1.run(BeeswaxServiceImpl.java:577) at com.cloudera.beeswax.BeeswaxServiceImpl$RunningQueryState$1$1.run(BeeswaxServiceImpl.java:566) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:337) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1312) at com.cloudera.beeswax.BeeswaxServiceImpl$RunningQueryState$1.run(BeeswaxServiceImpl.java:566) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) 13/02/26 03:30:39 ERROR security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:SIMPLE) cause:BeeswaxException(message:Driver returned: 2. Errors: Hive history file=/tmp/hue/hive_job_log_hue_201302260328_755410322.txt Total MapReduce jobs = 2 Launching Job 1 out of 2 Number of reduce tasks not specified. 
Estimated from input data size: 1 In order to change the average load for a reducer (in bytes): set hive.exec.reducers.bytes.per.reducer=<number> In order to limit the maximum number of reducers: set hive.exec.reducers.max=<number> In order to set a constant number of reducers: set mapred.reduce.tasks=<number> Starting Job = job_201302210135_0001, Tracking URL = http://n1.example.com:50030/jobdetails.jsp?jobid=job_201302210135_0001 Kill Command = /usr/lib/hadoop/bin/hadoop job -Dmapred.job.tracker=n1.example.com:8021 -kill job_201302210135_0001 Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 1 2013-02-26 03:28:49,177 Stage-1 map = 0%, reduce = 0% 2013-02-26 03:29:49,548 Stage-1 map = 0%, reduce = 0% 2013-02-26 03:30:37,517 Stage-1 map = 100%, reduce = 100% Ended Job = job_201302210135_0001 with errors FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask MapReduce Jobs Launched: Job 0: Map: 2 Reduce: 1 HDFS Read: 0 HDFS Write: 0 FAIL Total MapReduce CPU Time Spent: 0 msec , log_context:e7199542-f187-458c-b3dd-887560485a81, handle:QueryHandle(id:e7199542-f187-458c-b3dd-887560485a81, log_context:e7199542-f187-458c-b3dd-887560485a81), SQLState: ) 13/02/26 03:30:39 ERROR beeswax.BeeswaxServiceImpl: Caught BeeswaxException BeeswaxException(message:Driver returned: 2. Errors: Hive history file=/tmp/hue/hive_job_log_hue_201302260328_755410322.txt Total MapReduce jobs = 2 Launching Job 1 out of 2 Number of reduce tasks not specified. Estimated from input data size: 1 In order to change the average load for a reducer (in bytes): set hive.exec.reducers.bytes.per.reducer=<number> In order to limit the maximum number of reducers: set hive.exec.reducers.max=<number> In order to set a constant number of reducers: set mapred.reduce.tasks=<number> Starting Job = job_201302210135_0001, Tracking URL = http://n1.example.com:50030/jobdetails.jsp?jobid=job_201302210135_0001 Kill Command = /usr/lib/hadoop/bin/hadoop job -Dmapred.job.tracker=n1.example.com:8021 -kill job_201302210135_0001 Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 1 2013-02-26 03:28:49,177 Stage-1 map = 0%, reduce = 0% 2013-02-26 03:29:49,548 Stage-1 map = 0%, reduce = 0% 2013-02-26 03:30:37,517 Stage-1 map = 100%, reduce = 100% Ended Job = job_201302210135_0001 with errors FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask MapReduce Jobs Launched: Job 0: Map: 2 Reduce: 1 HDFS Read: 0 HDFS Write: 0 FAIL Total MapReduce CPU Time Spent: 0 msec , log_context:e7199542-f187-458c-b3dd-887560485a81, handle:QueryHandle(id:e7199542-f187-458c-b3dd-887560485a81, log_context:e7199542-f187-458c-b3dd-887560485a81), SQLState: ) at com.cloudera.beeswax.BeeswaxServiceImpl$RunningQueryState.execute(BeeswaxServiceImpl.java:319) at com.cloudera.beeswax.BeeswaxServiceImpl$RunningQueryState$1$1.run(BeeswaxServiceImpl.java:577) at com.cloudera.beeswax.BeeswaxServiceImpl$RunningQueryState$1$1.run(BeeswaxServiceImpl.java:566) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:337) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1312) at com.cloudera.beeswax.BeeswaxServiceImpl$RunningQueryState$1.run(BeeswaxServiceImpl.java:566) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662)
Duration | 0:02:12 |
Ended | 02/26/13 03:30:36 |
ID | job_201302210135_0001 |
User | hdfs |
Mapred Input Dir | hdfs://n1.example.com:8020/user/beeswax/warehouse/sample_07 hdfs://n1.example.com:8020/user/beeswax/warehouse/sample_08 |
Mapred Input Format Class | org.apache.hadoop.hive.ql.io.CombineHiveInputFormat |
Mapred Mapper Class | org.apache.hadoop.hive.ql.exec.ExecMapper |
Mapred Output Format Class | org.apache.hadoop.hive.ql.io.HiveOutputFormatImpl |
Mapred Reducer Class | org.apache.hadoop.hive.ql.exec.ExecReducer |
Maps | 0 of 2 |
Reduces | 0 of 1 |
Started | 02/26/13 03:28:24 |
Status | FAILED |
Starting Job = job_201302210135_0001, Tracking URL = http://n1.example.com:50030/jobdetails.jsp?jobid=job_201302210135_0001
Starting Job = job_201302210135_0002, Tracking URL = http://n1.example.com:50030/jobdetails.jsp?jobid=job_201302210135_0002 Kill Command = /usr/lib/hadoop/bin/hadoop job -Dmapred.job.tracker=n1.example.com:8021 -kill job_201302210135_0002 Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 1 2013-02-27 10:56:24,583 Stage-1 map = 0%, reduce = 0% 2013-02-27 10:57:24,870 Stage-1 map = 0%, reduce = 0% 2013-02-27 10:57:50,623 Stage-1 map = 100%, reduce = 100% Ended Job = job_201302210135_0002 with errors FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask MapReduce Jobs Launched: Job 0: Map: 2 Reduce: 1 HDFS Read: 0 HDFS Write: 0 FAIL Total MapReduce CPU Time Spent: 0 msec , log_context:f8067762-4935-472e-9437-8f550b65e54a, handle:QueryHandle(id:f8067762-4935-472e-9437-8f550b65e54a, log_context:f8067762-4935-472e-9437-8f550b65e54a), SQLState: ) at com.cloudera.beeswax.BeeswaxServiceImpl$RunningQueryState.execute(BeeswaxServiceImpl.java:319) at com.cloudera.beeswax.BeeswaxServiceImpl$RunningQueryState$1$1.run(BeeswaxServiceImpl.java:577) at com.cloudera.beeswax.BeeswaxServiceImpl$RunningQueryState$1$1.run(BeeswaxServiceImpl.java:566) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:337) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1312) at com.cloudera.beeswax.BeeswaxServiceImpl$RunningQueryState$1.run(BeeswaxServiceImpl.java:566) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662)
Driver returned: 2. Errors: Hive history file=/tmp/hue/hive_job_log_hue_201302271055_1265331267.txt Total MapReduce jobs = 2 Launching Job 1 out of 2 Number of reduce tasks not specified. Estimated from input data size: 1 In order to change the average load for a reducer (in bytes): set hive.exec.reducers.bytes.per.reducer=<number> In order to limit the maximum number of reducers: set hive.exec.reducers.max=<number> In order to set a constant number of reducers: set mapred.reduce.tasks=<number> Starting Job = job_201302210135_0002, Tracking URL = http://n1.example.com:50030/jobdetails.jsp?jobid=job_201302210135_0002 Kill Command = /usr/lib/hadoop/bin/hadoop job -Dmapred.job.tracker=n1.example.com:8021 -kill job_201302210135_0002 Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 1 2013-02-27 10:56:24,583 Stage-1 map = 0%, reduce = 0% 2013-02-27 10:57:24,870 Stage-1 map = 0%, reduce = 0% 2013-02-27 10:57:50,623 Stage-1 map = 100%, reduce = 100% Ended Job = job_201302210135_0002 with errors FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask MapReduce Jobs Launched: Job 0: Map: 2 Reduce: 1 HDFS Read: 0 HDFS Write: 0 FAIL Total MapReduce CPU Time Spent: 0 msec
Attempt ID | Progress | State | Task Tracker | Start Time | End Time | Output Size | Phase | Shuffle Finish | Sort Finish | Map Finish |
---|---|---|---|---|---|---|---|---|---|---|
000000_0 | 100% | failed | tracker_n3.example.com:localhost/127.0.0.1:36555 | 02/27/13 11:00:06 | 02/27/13 11:00:19 | -1 | CLEANUP | |||
000000_1 | 100% | failed | tracker_n2.example.com:localhost/127.0.0.1:46543 | 02/27/13 11:00:20 | 02/27/13 11:00:32 | -1 | CLEANUP | |||
000000_2 | 100% | failed | tracker_n4.example.com:localhost/127.0.0.1:41410 | 02/27/13 11:04:10 | 02/27/13 11:04:19 | -1 | CLEANUP | |||
000000_3 | 0% | killed | tracker_n1.example.com:localhost/127.0.0.1:59170 | 02/27/13 10:57:37 | 02/27/13 10:57:46 | -1 | MAP |
Kind | % Complete | Num Tasks | Pending | Running | Complete | Killed | Failed/Killed Task Attempts | |
---|---|---|---|---|---|---|---|---|
map | 100.00% | 2 | 0 | 0 | 0 | 2 | 7 / 1 | |
reduce | 100.00% | 1 | 0 | 0 | 0 | 1 | 0 / 0 |
Counter | Map | Reduce | Total | |
---|---|---|---|---|
Job Counters | Failed map tasks | 0 | 0 | 1 |
Launched map tasks | 0 | 0 | 8 | |
Data-local map tasks | 0 | 0 | 2 | |
Rack-local map tasks | 0 | 0 | 6 | |
Total time spent by all maps in occupied slots (ms) | 0 | 0 | 58,457 | |
Total time spent by all reduces in occupied slots (ms) | 0 | 0 | 0 | |
Total time spent by all maps waiting after reserving slots (ms) | 0 | 0 | 0 | |
Total time spent by all reduces waiting after reserving slots (ms) | 0 | 0 | 0 |
Task Id | Start Time | Finish Time | Error |
---|---|---|---|
task_201302210135_0002_m_000000 | 27/02 20:00:06 | 27/02 20:00:19 (13sec) | Error: Java heap space |
task_201302210135_0002_m_000000 | 27/02 20:00:20 | 27/02 20:00:32 (12sec) | Error: Java heap space |
task_201302210135_0002_m_000000 | 27/02 20:04:10 | 27/02 20:04:19 (8sec) | Error: Java heap space |
task_201302210135_0002_m_000001 | 27/02 20:03:45 | 27/02 20:03:57 (12sec) | Error: Java heap space |
task_201302210135_0002_m_000001 | 27/02 20:00:20 | 27/02 20:00:31 (11sec) | Error: Java heap space |
task_201302210135_0002_m_000001 | 27/02 19:56:55 | 27/02 19:57:36 (41sec) | Error: Java heap space |
task_201302210135_0002_m_000001 | 27/02 20:01:17 | 27/02 20:01:26 (8sec) | Error: Java heap space |
attempt_201302210135_0002_m_000000_0 | 27/02 20:00:06 | 27/02 20:00:19 (13sec) | n3.example.com | Error: Java heap space | Last 4KB Last 8KB All | 0 |
attempt_201302210135_0002_m_000000_1 | 27/02 20:00:20 | 27/02 20:00:32 (12sec) | n2.example.com | Error: Java heap space | Last 4KB Last 8KB All | 0 |
attempt_201302210135_0002_m_000000_2 | 27/02 20:04:10 | 27/02 20:04:19 (8sec) | n4.example.com | Error: Java heap space | Last 4KB Last 8KB All | 0 |
attempt_201302210135_0002_m_000000_3 | 27/02 19:57:37 | 27/02 19:57:46 (9sec) | n1.example.com | Last 4KB Last 8KB All | 0 |
n1.example.com | 192.168.0.241 | /default | CDH4 | Good | 9.51s ago | 2
n2.example.com | 192.168.0.242 | /default | CDH4 | Good | 1.90s ago | 2
n3.example.com | 192.168.0.243 | /default | CDH4 | Good | 3.33s ago | 2
n4.example.com | 192.168.0.246 | /default | CDH4 | Good | 6.52s ago | 2
2013-02-27 19:57:41,710 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead 2013-02-27 19:57:43,371 INFO org.apache.hadoop.mapred.TaskRunner: Creating symlink: /mapred/local/taskTracker/distcache/1804431985416186481_538532255_486741148/n1.example.com/tmp/hive-beeswax-hdfs/hive_2013-02-27_10-55-58_866_1132759753206554877/-mr-10004/eee2a09a-a215-43c3-aab5-1de316044d27 <- /mapred/local/taskTracker/hdfs/jobcache/job_201302210135_0002/attempt_201302210135_0002_m_000000_3/work/HIVE_PLANeee2a09a-a215-43c3-aab5-1de316044d27 2013-02-27 19:57:43,619 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /mapred/local/taskTracker/hdfs/jobcache/job_201302210135_0002/jars/.job.jar.crc <- /mapred/local/taskTracker/hdfs/jobcache/job_201302210135_0002/attempt_201302210135_0002_m_000000_3/work/.job.jar.crc 2013-02-27 19:57:43,622 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /mapred/local/taskTracker/hdfs/jobcache/job_201302210135_0002/jars/job.jar <- /mapred/local/taskTracker/hdfs/jobcache/job_201302210135_0002/attempt_201302210135_0002_m_000000_3/work/job.jar 2013-02-27 19:57:43,700 WARN org.apache.hadoop.conf.Configuration: session.id is deprecated. Instead, use dfs.metrics.session-id 2013-02-27 19:57:43,719 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=MAP, sessionId= 2013-02-27 19:57:46,132 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0 2013-02-27 19:57:46,242 INFO org.apache.hadoop.mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@101a0ae6 2013-02-27 19:57:46,564 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1 2013-02-27 19:57:46,568 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException: Failed on local exception: java.io.IOException; Host Details : local host is: "n1.example.com/192.168.0.241"; destination host is: "n1.example.com":8020;
Could you please try to run the Hadoop example? https://ccp.cloudera.com/display/CDH4DOC/Installing+CDH4+on+a+Single+Linux+Node+in+Pseudo-distributed+Mode#InstallingCDH4onaSingleLinuxNodeinPseudo-distributedMode-RunninganexampleapplicationwithMRv1
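For example, something like this (the jar path is the usual CDH4 MRv1 location;
adjust it if your install differs):

    # Run as a user that already has a home directory in HDFS:
    sudo -u hdfs hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar pi 2 100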
What does it say when you click on the 'Last 4KB' of a failed task with the heap space error?
2013-02-27 19:57:46,568 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException: Failed on local exception: java.io.IOException; Host Details : local host is: "n1.example.com/192.168.0.241"; destination host is: "n1.example.com":8020;
Attempt | Task | Machine | State | Error | Logs |
---|---|---|---|---|---|
attempt_201302210135_0003_m_000000_0 | task_201302210135_0003_m_000000 | n3.example.com | FAILED |
attempt_201302210135_0003_m_000000_1 | task_201302210135_0003_m_000000 | n1.example.com | FAILED |
attempt_201302210135_0003_m_000000_2 | task_201302210135_0003_m_000000 | n2.example.com | FAILED |
attempt_201302210135_0003_m_000000_3 |
2013-02-28 23:01:24,052 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead 2013-02-28 23:01:24,908 INFO org.apache.hadoop.mapred.TaskRunner: Creating symlink: /mapred/local/taskTracker/distcache/5651432844641255173_-1826301297_584014340/n1.example.com/tmp/hive-beeswax-hdfs/hive_2013-02-28_13-57-19_791_3663977264499258484/-mr-10003/fb822c07-e95c-413f-b2b5-b6edb08f63c6 <- /mapred/local/taskTracker/hdfs/jobcache/job_201302210135_0003/attempt_201302210135_0003_m_000000_0/work/HIVE_PLANfb822c07-e95c-413f-b2b5-b6edb08f63c6 2013-02-28 23:01:24,918 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /mapred/local/taskTracker/hdfs/jobcache/job_201302210135_0003/jars/.job.jar.crc <- /mapred/local/taskTracker/hdfs/jobcache/job_201302210135_0003/attempt_201302210135_0003_m_000000_0/work/.job.jar.crc 2013-02-28 23:01:24,921 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /mapred/local/taskTracker/hdfs/jobcache/job_201302210135_0003/jars/job.jar <- /mapred/local/taskTracker/hdfs/jobcache/job_201302210135_0003/attempt_201302210135_0003_m_000000_0/work/job.jar 2013-02-28 23:01:24,974 WARN org.apache.hadoop.conf.Configuration: session.id is deprecated. Instead, use dfs.metrics.session-id 2013-02-28 23:01:24,975 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=MAP, sessionId= 2013-02-28 23:01:25,622 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0 2013-02-28 23:01:25,635 INFO org.apache.hadoop.mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@3e152f4 2013-02-28 23:01:26,156 WARN org.apache.hadoop.hive.conf.HiveConf: hive-site.xml not found on CLASSPATH 2013-02-28 23:01:26,436 INFO org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader: Processing file hdfs://n1.example.com:8020/user/beeswax/warehouse/sample_07/sample_07.csv 2013-02-28 23:01:26,436 WARN mapreduce.Counters: Counter name MAP_INPUT_BYTES is deprecated. Use FileInputFormatCounters as group name and BYTES_READ as counter name instead 2013-02-28 23:01:26,441 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 1 2013-02-28 23:01:26,452 INFO org.apache.hadoop.mapred.MapTask: io.sort.mb = 50 2013-02-28 23:01:26,629 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1 2013-02-28 23:01:26,634 FATAL org.apache.hadoop.mapred.Child: Error running child : java.lang.OutOfMemoryError: Java heap space at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:797) at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:385) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:327) at org.apache.hadoop.mapred.Child$4.run(Child.java:268) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332) at org.apache.hadoop.mapred.Child.main(Child.java:262)
second one;
Task Logs: 'attempt_201302210135_0003_m_000000_1'
stdout logs
stderr logs
syslog logs2013-02-28 22:57:53,287 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead 2013-02-28 22:57:54,499 INFO org.apache.hadoop.mapred.TaskRunner: Creating symlink: /mapred/local/taskTracker/distcache/4869933896674749018_-1826301297_584014340/n1.example.com/tmp/hive-beeswax-hdfs/hive_2013-02-28_13-57-19_791_3663977264499258484/-mr-10003/fb822c07-e95c-413f-b2b5-b6edb08f63c6 <- /mapred/local/taskTracker/hdfs/jobcache/job_201302210135_0003/attempt_201302210135_0003_m_000000_1/work/HIVE_PLANfb822c07-e95c-413f-b2b5-b6edb08f63c6 2013-02-28 22:57:54,506 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /mapred/local/taskTracker/hdfs/jobcache/job_201302210135_0003/jars/.job.jar.crc <- /mapred/local/taskTracker/hdfs/jobcache/job_201302210135_0003/attempt_201302210135_0003_m_000000_1/work/.job.jar.crc 2013-02-28 22:57:54,508 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /mapred/local/taskTracker/hdfs/jobcache/job_201302210135_0003/jars/job.jar <- /mapred/local/taskTracker/hdfs/jobcache/job_201302210135_0003/attempt_201302210135_0003_m_000000_1/work/job.jar 2013-02-28 22:57:54,558 WARN org.apache.hadoop.conf.Configuration: session.id is deprecated. Instead, use dfs.metrics.session-id 2013-02-28 22:57:54,560 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=MAP, sessionId= 2013-02-28 22:57:55,155 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0 2013-02-28 22:57:55,170 INFO org.apache.hadoop.mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@2326a29c 2013-02-28 22:57:55,881 WARN org.apache.hadoop.hive.conf.HiveConf: hive-site.xml not found on CLASSPATH 2013-02-28 22:57:56,469 INFO org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader: Processing file hdfs://n1.example.com:8020/user/beeswax/warehouse/sample_07/sample_07.csv 2013-02-28 22:57:56,469 WARN mapreduce.Counters: Counter name MAP_INPUT_BYTES is deprecated. Use FileInputFormatCounters as group name and BYTES_READ as counter name instead 2013-02-28 22:57:56,475 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 1 2013-02-28 22:57:56,499 INFO org.apache.hadoop.mapred.MapTask: io.sort.mb = 50 2013-02-28 22:57:56,676 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1 2013-02-28 22:57:56,680 FATAL org.apache.hadoop.mapred.Child: Error running child : java.lang.OutOfMemoryError: Java heap space at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:797) at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:385) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:327) at org.apache.hadoop.mapred.Child$4.run(Child.java:268) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332) at org.apache.hadoop.mapred.Child.main(Child.java:262)third one;Task Logs: 'attempt_201302210135_0003_m_000000_2'
stdout logs
stderr logs
syslog logs
2013-02-28 23:01:43,350 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2013-02-28 23:01:44,477 INFO org.apache.hadoop.mapred.TaskRunner: Creating symlink: /mapred/local/taskTracker/distcache/-6064914283392164676_-1826301297_584014340/n1.example.com/tmp/hive-beeswax-hdfs/hive_2013-02-28_13-57-19_791_3663977264499258484/-mr-10003/fb822c07-e95c-413f-b2b5-b6edb08f63c6 <- /mapred/local/taskTracker/hdfs/jobcache/job_201302210135_0003/attempt_201302210135_0003_m_000000_2/work/HIVE_PLANfb822c07-e95c-413f-b2b5-b6edb08f63c6
2013-02-28 23:01:44,510 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /mapred/local/taskTracker/hdfs/jobcache/job_201302210135_0003/jars/.job.jar.crc <- /mapred/local/taskTracker/hdfs/jobcache/job_201302210135_0003/attempt_201302210135_0003_m_000000_2/work/.job.jar.crc
2013-02-28 23:01:44,528 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /mapred/local/taskTracker/hdfs/jobcache/job_201302210135_0003/jars/job.jar <- /mapred/local/taskTracker/hdfs/jobcache/job_201302210135_0003/attempt_201302210135_0003_m_000000_2/work/job.jar
2013-02-28 23:01:44,659 WARN org.apache.hadoop.conf.Configuration: session.id is deprecated. Instead, use dfs.metrics.session-id
2013-02-28 23:01:44,661 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=MAP, sessionId=
2013-02-28 23:01:45,147 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0
2013-02-28 23:01:45,169 INFO org.apache.hadoop.mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@7d15d06c
2013-02-28 23:01:45,535 WARN org.apache.hadoop.hive.conf.HiveConf: hive-site.xml not found on CLASSPATH
2013-02-28 23:01:45,842 INFO org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader: Processing file hdfs://n1.example.com:8020/user/beeswax/warehouse/sample_07/sample_07.csv
2013-02-28 23:01:45,843 WARN mapreduce.Counters: Counter name MAP_INPUT_BYTES is deprecated. Use FileInputFormatCounters as group name and BYTES_READ as counter name instead
2013-02-28 23:01:45,850 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 1
2013-02-28 23:01:45,866 INFO org.apache.hadoop.mapred.MapTask: io.sort.mb = 50
2013-02-28 23:01:46,034 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
2013-02-28 23:01:46,037 FATAL org.apache.hadoop.mapred.Child: Error running child : java.lang.OutOfMemoryError: Java heap space
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:797)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:385)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:327)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
    at org.apache.hadoop.mapred.Child.main(Child.java:262)
and the last one:
Task Logs: 'attempt_201302210135_0003_m_000000_3'
stdout logs
stderr logs
syslog logs
2013-02-28 23:05:29,916 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2013-02-28 23:05:30,877 INFO org.apache.hadoop.mapred.TaskRunner: Creating symlink: /mapred/local/taskTracker/distcache/2824718158320928215_-1826301297_584014340/n1.example.com/tmp/hive-beeswax-hdfs/hive_2013-02-28_13-57-19_791_3663977264499258484/-mr-10003/fb822c07-e95c-413f-b2b5-b6edb08f63c6 <- /mapred/local/taskTracker/hdfs/jobcache/job_201302210135_0003/attempt_201302210135_0003_m_000000_3/work/HIVE_PLANfb822c07-e95c-413f-b2b5-b6edb08f63c6
2013-02-28 23:05:30,920 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /mapred/local/taskTracker/hdfs/jobcache/job_201302210135_0003/jars/.job.jar.crc <- /mapred/local/taskTracker/hdfs/jobcache/job_201302210135_0003/attempt_201302210135_0003_m_000000_3/work/.job.jar.crc
2013-02-28 23:05:30,924 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /mapred/local/taskTracker/hdfs/jobcache/job_201302210135_0003/jars/job.jar <- /mapred/local/taskTracker/hdfs/jobcache/job_201302210135_0003/attempt_201302210135_0003_m_000000_3/work/job.jar
2013-02-28 23:05:30,989 WARN org.apache.hadoop.conf.Configuration: session.id is deprecated. Instead, use dfs.metrics.session-id
2013-02-28 23:05:30,990 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=MAP, sessionId=
2013-02-28 23:05:31,447 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0
2013-02-28 23:05:31,463 INFO org.apache.hadoop.mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@67071c84
2013-02-28 23:05:31,819 WARN org.apache.hadoop.hive.conf.HiveConf: hive-site.xml not found on CLASSPATH
2013-02-28 23:05:32,099 INFO org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader: Processing file hdfs://n1.example.com:8020/user/beeswax/warehouse/sample_07/sample_07.csv
2013-02-28 23:05:32,100 WARN mapreduce.Counters: Counter name MAP_INPUT_BYTES is deprecated. Use FileInputFormatCounters as group name and BYTES_READ as counter name instead
2013-02-28 23:05:32,103 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 1
2013-02-28 23:05:32,111 INFO org.apache.hadoop.mapred.MapTask: io.sort.mb = 50
2013-02-28 23:05:32,282 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
2013-02-28 23:05:32,286 FATAL org.apache.hadoop.mapred.Child: Error running child : java.lang.OutOfMemoryError: Java heap space
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:797)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:385)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:327)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
    at org.apache.hadoop.mapred.Child.main(Child.java:262)

Also, I followed the tutorial from the Cloudera website that you recommended
(https://ccp.cloudera.com/display/CDH4DOC/Installing+CDH4+on+a+Single+Linux+Node+in+Pseudo-distributed+Mode#InstallingCDH4onaSingleLinuxNodeinPseudo-distributedMode-RunninganexampleapplicationwithMRv1). As far as I understood, I created a user 'onur', but I could not create a new directory (input) as the user onur. I guess I have to learn more about Hadoop commands; if you know of a hands-on tutorial for Hadoop and MapReduce, I would appreciate it.
Thank you for your attention.
Onur Turna
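(Regarding the directory question above: that step of the tutorial usually boils down to a sequence like the one below. This is only a sketch; the username and paths are taken from this thread, so adapt them as needed.)

# Create an HDFS home directory for the new local user (run as the HDFS superuser),
# then create the "input" directory and load some files as that user.
# Username and paths are illustrative.
sudo -u hdfs hadoop fs -mkdir /user/onur
sudo -u hdfs hadoop fs -chown onur /user/onur
sudo -u onur hadoop fs -mkdir input
sudo -u onur hadoop fs -put /etc/hadoop/conf/*.xml input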
...
http://stackoverflow.com/questions/8464048/out-of-memory-error-in-hadoop
...
ID | Name | Status | User | Maps | Reduces | Queue | Priority | Duration | Date
---|------|--------|------|------|---------|-------|----------|----------|-----
201303020304_0003 | SELECT sample_07.description, sample_...DESC(Stage-1) | failed | hdfs | | | default | normal | 46s | 03/01/13 18:20:09
201303020304_0002 | grep-sort | succeeded | hdfs | | | default | normal | 18s | 03/01/13 18:15:08
201303020304_0001 | grep-search | succeeded | hdfs | | | default | normal | 54s | 03/01/13 18:14:12
Attempt | Task | Machine | State | Error | Logs
--------|------|---------|-------|-------|-----
attempt_201303020304_0003_m_000000_0 | task_201303020304_0003_m_000000 | n3.example.com | FAILED | Error: Java heap space | Last 4KB Last 8KB All
attempt_201303020304_0003_m_000000_1 | task_201303020304_0003_m_000000 | n1.example.com | FAILED | Error: Java heap space | Last 4KB Last 8KB All
attempt_201303020304_0003_m_000000_2 | task_201303020304_0003_m_000000 | n2.example.com | FAILED | Error: Java heap space | Last 4KB Last 8KB All
attempt_201303020304_0003_m_000000_3 | task_201303020304_0003_m_000000 | n4.example.com | | |
2013-03-02 03:24:02,074 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2013-03-02 03:24:02,861 INFO org.apache.hadoop.mapred.TaskRunner: Creating symlink: /mapred/local/taskTracker/distcache/1153535521351037204_943032454_686175782/n1.example.com/tmp/hive-beeswax-hdfs/hive_2013-03-01_18-20-01_992_5419204033953410960/-mr-10003/fd47ce6a-5113-4f45-88e2-90c469b71edf <- /mapred/local/taskTracker/hdfs/jobcache/job_201303020304_0003/attempt_201303020304_0003_m_000000_0/work/HIVE_PLANfd47ce6a-5113-4f45-88e2-90c469b71edf
2013-03-02 03:24:02,870 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /mapred/local/taskTracker/hdfs/jobcache/job_201303020304_0003/jars/.job.jar.crc <- /mapred/local/taskTracker/hdfs/jobcache/job_201303020304_0003/attempt_201303020304_0003_m_000000_0/work/.job.jar.crc
2013-03-02 03:24:02,872 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /mapred/local/taskTracker/hdfs/jobcache/job_201303020304_0003/jars/job.jar <- /mapred/local/taskTracker/hdfs/jobcache/job_201303020304_0003/attempt_201303020304_0003_m_000000_0/work/job.jar
2013-03-02 03:24:03,010 WARN org.apache.hadoop.conf.Configuration: session.id is deprecated. Instead, use dfs.metrics.session-id
2013-03-02 03:24:03,011 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=MAP, sessionId=
2013-03-02 03:24:03,382 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0
2013-03-02 03:24:03,393 INFO org.apache.hadoop.mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@3d7dc1cb
2013-03-02 03:24:03,692 WARN org.apache.hadoop.hive.conf.HiveConf: hive-site.xml not found on CLASSPATH
2013-03-02 03:24:03,955 INFO org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader: Processing file hdfs://n1.example.com:8020/user/beeswax/warehouse/sample_07/sample_07.csv
2013-03-02 03:24:03,955 WARN mapreduce.Counters: Counter name MAP_INPUT_BYTES is deprecated. Use FileInputFormatCounters as group name and BYTES_READ as counter name instead
2013-03-02 03:24:03,959 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 1
2013-03-02 03:24:03,968 INFO org.apache.hadoop.mapred.MapTask: io.sort.mb = 50
2013-03-02 03:24:04,113 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
2013-03-02 03:24:04,117 FATAL org.apache.hadoop.mapred.Child: Error running child : java.lang.OutOfMemoryError: Java heap space
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:797)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:385)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:327)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
    at org.apache.hadoop.mapred.Child.main(Child.java:262)
...
...
...
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m</value>
</property>
Or in Beeswax 'SETTINGS' on the left:
key:
mapred.child.java.opts
value:
-Xmx1024m
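(For a one-off query, the same override can also be applied per session with Hive's SET command instead of changing cluster-wide config. This is only a sketch; the OOM in the logs above is thrown in MapOutputBuffer.<init>, which allocates the io.sort.mb buffer (50 MB here), so the default child heap was most likely smaller than that.)

-- Raise the map/reduce child JVM heap for this Hive/Beeswax session only,
-- then re-run the failing SELECT against sample_07.
SET mapred.child.java.opts=-Xmx1024m;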
Romain
...
<label style="font-s
...
<table style="max-width:100%;border-collapse:collapse;background-c
I had assigned 1024 MiB as the Java child heap size both for MapReduce (through Cloudera's GUI) and, manually through Hue's config files, for Hue. I also changed the maximum Java heap memory for Hadoop in /etc/hadoop/hadoop-env.sh to "-Xmx1024m", and now everything works fine.
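(For reference, the hadoop-env.sh change described above typically looks something like the lines below; the exact variable depends on which processes it should affect, so treat this as an illustration rather than the exact edit that was made here.)

# /etc/hadoop/hadoop-env.sh (illustrative)
export HADOOP_HEAPSIZE=1024              # heap, in MB, for Hadoop daemons started with this environment
export HADOOP_CLIENT_OPTS="-Xmx1024m"    # heap for client-side commands such as `hadoop fs ...`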
Apart from these, I also checked the conf files in /etc/security/limits.d, which look like this:
[root@n1 limits.d]# ls
90-nproc.conf hbase.nofiles.conf impala.conf mapreduce.conf
cloudera-scm.conf hdfs.conf mapred.conf yarn.conf
There are many conf files; which one should I check, and how should I configure it?
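(For context: every file under /etc/security/limits.d uses the standard pam_limits syntax shown below. The values here are only illustrative and are not necessarily what CDH ships in those files.)

# <domain>   <type>   <item>    <value>
hdfs         -        nofile    32768     # max open file descriptors for the hdfs user
hdfs         -        nproc     65536     # max processes/threads for the hdfs user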
I actually want to start importing my tables, which are in .csv format, into HBase. I have also read some presentations on Hive/HBase integration; that is why I set up a MySQL server on one of my cluster nodes for Hive's metastore. I thought I would get a view of my HBase tables in the Beeswax GUI, but as far as I have learned that is not an option at the moment. Given that, how can somebody else (other than whoever built these tables) query HBase tables through Beeswax when those tables are not visible there? I also want to be sure whether SQL-like queries against HBase tables are possible through Beeswax at all.
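(As an illustration of what is being asked here, and assuming the standard Hive HBase storage handler: an existing HBase table can be registered in the Hive metastore as an external table, which makes it visible and queryable in Beeswax for any user with access. The table and column names below are made up for the example.)

-- Illustrative only: expose the HBase table "mytable" (column family cf1) to Hive/Beeswax.
CREATE EXTERNAL TABLE hbase_mytable (rowkey STRING, val STRING)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
TBLPROPERTIES ("hbase.table.name" = "mytable");

-- Once registered, it can be queried like any other Hive table:
SELECT rowkey, val FROM hbase_mytable LIMIT 10;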
Thank you very much for your attention,
Onur Turna
To allow Hive scripts to use HBase, add the following statements to the top of each script: