```
Caused by: alluxio.exception.UnexpectedAlluxioException: java.lang.RuntimeException: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[10.88.131.233:50010,DS-8d64dc48-f819-4231-b6fa-0d9fa7ce03b1,DISK], DatanodeInfoWithStorage[10.88.131.234:50010,DS-0dda4e57-fd6e-4b35-9368-1eb8d753bf8c,DISK]], original=[DatanodeInfoWithStorage[10.88.131.233:50010,DS-8d64dc48-f819-4231-b6fa-0d9fa7ce03b1,DISK], DatanodeInfoWithStorage[10.88.131.234:50010,DS-0dda4e57-fd6e-4b35-9368-1eb8d753bf8c,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
	at alluxio.exception.AlluxioException.fromThrift(AlluxioException.java:99)
	at alluxio.AbstractClient.retryRPC(AbstractClient.java:329)
	at alluxio.client.file.FileSystemMasterClient.createDirectory(FileSystemMasterClient.java:92)
	at alluxio.client.file.BaseFileSystem.createDirectory(BaseFileSystem.java:79)
	at alluxio.hadoop.AbstractFileSystem.mkdirs(AbstractFileSystem.java:494)
```
May I ask what this exception means and what might be causing it?

Thanks.
Antonio.
--
You received this message because you are subscribed to the Google Groups "Alluxio Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to alluxio-users+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Hi Bin Fan, thanks for your reply.

Here is the worker.log. The ERROR `Non-super user cannot change owner` is a new issue; I think it is independent of the first ERROR. My cluster has thirty nodes, and all of them are running well, so I suspect the real problem is not the DEFAULT setting of 'dfs.client.block.write.replace-datanode-on-failure.policy'.

Best regards
Hi, Bin Fan:

Thank you for the suggestion. I have solved the permission issue. The main issue is this:

```
2017-03-20 02:33:00,219 ERROR logger.type (RpcUtils.java:call) - Unexpected error running rpc
java.lang.RuntimeException: alluxio.exception.UnexpectedAlluxioException: java.lang.RuntimeException: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[10.116.50.230:50010, 10.116.78.14:50010], original=[10.116.50.230:50010, 10.116.78.14:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at com.google.common.base.Throwables.propagate(Throwables.java:160)
	at alluxio.AbstractClient.retryRPC(AbstractClient.java:321)
	at alluxio.worker.block.BlockMasterClient.commitBlock(BlockMasterClient.java:90)
	at alluxio.worker.block.DefaultBlockWorker.commitBlock(DefaultBlockWorker.java:276)
	at alluxio.worker.block.BlockWorkerClientServiceHandler$2.call(BlockWorkerClientServiceHandler.java:99)
	at alluxio.worker.block.BlockWorkerClientServiceHandler$2.call(BlockWorkerClientServiceHandler.java:96)
	at alluxio.RpcUtils.call(RpcUtils.java:62)
	at alluxio.worker.block.BlockWorkerClientServiceHandler.cacheBlock(BlockWorkerClientServiceHandler.java:96)
	at alluxio.thrift.BlockWorkerClientService$Processor$cacheBlock.getResult(BlockWorkerClientService.java:824)
	at alluxio.thrift.BlockWorkerClientService$Processor$cacheBlock.getResult(BlockWorkerClientService.java:808)
	at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
	at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
	at org.apache.thrift.TMultiplexedProcessor.process(TMultiplexedProcessor.java:123)
	at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Caused by: alluxio.exception.UnexpectedAlluxioException: java.lang.RuntimeException: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[10.116.50.230:50010, 10.116.78.14:50010], original=[10.116.50.230:50010, 10.116.78.14:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at sun.reflect.GeneratedConstructorAccessor28.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at alluxio.exception.AlluxioException.fromThrift(AlluxioException.java:92)
	... 16 more
```

I don't know why I got this error. As I said in my last email, my cluster has thirty nodes and HDFS is running well, so I don't think a shortage of good datanodes is the real cause of this issue.

Best regards
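For anyone hitting the same write-pipeline error: the exception itself points at the HDFS client property `dfs.client.block.write.replace-datanode-on-failure.policy`. A common workaround (a sketch only, not a confirmed fix for this particular cluster) is to relax that policy in the `hdfs-site.xml` visible to the HDFS client, which in this setup would be the Alluxio workers writing to the under-filesystem:

```xml
<!-- Sketch: relax the HDFS client's failed-datanode replacement behavior.
     These are standard HDFS client properties; whether NEVER is acceptable
     depends on your durability requirements, since the write continues on
     the remaining pipeline without restoring the replica count. -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <!-- DEFAULT attempts to replace a failed datanode and fails the write if
       none is available; NEVER keeps writing to the surviving datanodes. -->
  <value>NEVER</value>
</property>
```

That said, if the cluster really has thirty healthy datanodes, a replacement should normally be found, so it is also worth checking datanode logs and network reachability from the Alluxio workers before changing the policy.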