rm: Failed to move to trash: hdfs://dlxa101:8020/hypertable. Consider using -skipTrash option


Shinobi_Jack

Dec 18, 2013, 4:24:45 AM12/18/13
to hyperta...@googlegroups.com
What is the cause of the following errors?

[cloudil@dlxa101 master_38050]$ sudo -u hdfs hadoop fs -rm -r -f /hypertable
13/12/18 17:02:48 WARN fs.TrashPolicyDefault: Can't create trash directory: hdfs://dlxa101:8020/user/hdfs/.Trash/Current
rm: Failed to move to trash: hdfs://dlxa101:8020/hypertable. Consider using -skipTrash option
[cloudil@dlxa101 master_38050]$ sudo -u hdfs hadoop fs -rm -skipTrash -r -f /hypertable
rm: Cannot delete /hypertable. Name node is in safe mode.

I found that CDH's HDFS itself appears to be running normally.
What does "rm: Cannot delete /hypertable. Name node is in safe mode" mean?
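For reference, the NameNode's safe-mode state can be queried and, if necessary, exited with `hdfs dfsadmin`. A minimal sketch, assuming a CDH-style install where HDFS admin commands are run as the `hdfs` user:

```shell
# Check whether the NameNode is currently in safe mode
sudo -u hdfs hdfs dfsadmin -safemode get

# Block until the NameNode leaves safe mode on its own
sudo -u hdfs hdfs dfsadmin -safemode wait

# Force the NameNode out of safe mode (use with care: the
# DataNode block reports may still be incomplete)
sudo -u hdfs hdfs dfsadmin -safemode leave
```

`-safemode leave` only makes sense once you understand why the NameNode has not left safe mode by itself.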

By the way, when running `cap stop`, the following errors appeared:
*** [err :: dlxa101] /usr/local/lib/ruby/gems/1.9.1/gems/thin-1.4.1/lib/thin/daemonizing.rb:142:in `kill': No such process (Errno::ESRCH)
*** [err :: dlxa101] from /usr/local/lib/ruby/gems/1.9.1/gems/thin-1.4.1/lib/thin/daemonizing.rb:142:in `force_kill'
*** [err :: dlxa101] from /usr/local/lib/ruby/gems/1.9.1/gems/thin-1.4.1/lib/thin/daemonizing.rb:136:in `rescue in send_signal'
*** [err :: dlxa101] from /usr/local/lib/ruby/gems/1.9.1/gems/thin-1.4.1/lib/thin/daemonizing.rb:120:in `send_signal'
*** [err :: dlxa101] from /usr/local/lib/ruby/gems/1.9.1/gems/thin-1.4.1/lib/thin/daemonizing.rb:109:in `kill'
*** [err :: dlxa101] from /usr/local/lib/ruby/gems/1.9.1/gems/thin-1.4.1/lib/thin/controllers/controller.rb:93:in `block in stop'
*** [err :: dlxa101] from /usr/local/lib/ruby/gems/1.9.1/gems/thin-1.4.1/lib/thin/controllers/controller.rb:134:in `tail_log'
*** [err :: dlxa101] from /usr/local/lib/ruby/gems/1.9.1/gems/thin-1.4.1/lib/thin/controllers/controller.rb:92:in `stop'
*** [err :: dlxa101] from /usr/local/lib/ruby/gems/1.9.1/gems/thin-1.4.1/lib/thin/runner.rb:185:in `run_command'
*** [err :: dlxa101] from /usr/local/lib/ruby/gems/1.9.1/gems/thin-1.4.1/lib/thin/runner.rb:151:in `run!'
*** [err :: dlxa101] from /usr/local/lib/ruby/gems/1.9.1/gems/thin-1.4.1/bin/thin:6:in `<top (required)>'
*** [err :: dlxa101] from /usr/local/bin/thin:23:in `load'
*** [err :: dlxa101] from /usr/local/bin/thin:23:in `<main>'
 ** [out :: dlxa101] Sending QUIT signal to process 5385 ...
 ** [out :: dlxa101] process not found!
 ** [out :: dlxa101] Sending KILL signal to process 5385 ...
    command finished in 6223ms 

All of the above happened after a power cut followed by an immediate power-on.

Because `cap stop` could not stop everything cleanly, I ran ./stop-servers.sh on all cluster machines, and that worked.

I ran "sudo -u hdfs hadoop fs -rm -r -f /hypertable" because I wanted a clean environment before restarting the cluster with "cap start".

Any advice would be appreciated.

Shinobi_Jack

Dec 18, 2013, 4:53:47 AM12/18/13
to hyperta...@googlegroups.com
I know the NameNode enters safe mode when the Hadoop cluster starts; while in safe mode, the file system cannot be modified or deleted. Once the blocks reported by the DataNodes have been validated, the NameNode transitions to normal operation.
But now I find it stays in safe mode the whole time.
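A NameNode that stays in safe mode indefinitely usually means the fraction of reported blocks never reaches the safe-mode threshold, for example because a power cut left some blocks missing or corrupt. One way to check, sketched here under the same assumption that admin commands run as the `hdfs` user:

```shell
# List any files with corrupt or missing blocks; after a power cut these
# can keep the reported-block ratio below the safe-mode threshold forever
sudo -u hdfs hdfs fsck / -list-corruptfileblocks

# Summary of overall filesystem health (total, missing, corrupt blocks)
sudo -u hdfs hdfs fsck /
```

If `fsck` reports missing blocks that cannot be recovered, the NameNode will never reach the default threshold on its own.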

On Wednesday, December 18, 2013 at 5:24:45 PM UTC+8, Shinobi_Jack wrote:

Shinobi_Jack

Dec 18, 2013, 10:10:09 PM12/18/13
to hyperta...@googlegroups.com
All the errors mentioned above have been fixed, simply by modifying dfs.safemode.threshold.pct.
The steps (see also the attached screenshots) were:
1. Modify dfs.safemode.threshold.pct.
2. Deploy the configuration and restart the HDFS service.
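For anyone not using Cloudera Manager, the equivalent change is a single property in hdfs-site.xml. The value 0.90 below is an illustrative assumption, not the value from the screenshots:

```xml
<!-- hdfs-site.xml: fraction of blocks that must be reported by
     DataNodes before the NameNode leaves safe mode (default 0.999f).
     Lowering it lets the NameNode exit safe mode even when a few
     blocks were lost, e.g. after a power cut. -->
<property>
  <name>dfs.safemode.threshold.pct</name>
  <value>0.90</value>
</property>
```

After editing, the NameNode must be restarted for the new threshold to take effect.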

On Wednesday, December 18, 2013 at 5:53:47 PM UTC+8, Shinobi_Jack wrote:
Attachments: modify.jpg, deploy&&restart.jpg