I have configured the write type as "ASYNC_THROUGH". When I use the "copyFromLocal" command to upload a 10GB file to Alluxio, even after waiting for several hours no data has been persisted into HDFS. What causes this issue?
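For reference, a minimal sketch of the steps involved (the local path is illustrative, and the exact persistence-state labels may vary by Alluxio version):

```shell
# Upload a large file into Alluxio with the configured ASYNC_THROUGH
# write type (the local path here is just an example).
./bin/alluxio fs copyFromLocal /data/10G.txt /linecount/10G.txt

# The listing shows each file's persistence state; with ASYNC_THROUGH it
# should eventually move from NOT_PERSISTED / TO_BE_PERSISTED to
# PERSISTED once a worker finishes writing the file to HDFS.
./bin/alluxio fs ls /linecount
```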
--
You received this message because you are subscribed to the Google Groups "Alluxio Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to alluxio-users+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
2016-10-13 10:00:42,071 INFO logger.type (LeaderInquireClient.java:getMasterAddress) - Master addresses: [10.8.12.16:19998, 10.8.12.17:19998]
2016-10-13 10:00:42,072 INFO logger.type (LeaderInquireClient.java:getMasterAddress) - The leader master: 10.8.12.16:19998
2016-10-13 10:00:43,911 INFO logger.type (LeaderInquireClient.java:getMasterAddress) - Master addresses: [10.8.12.16:19998, 10.8.12.17:19998]
2016-10-13 10:00:43,912 INFO logger.type (LeaderInquireClient.java:getMasterAddress) - The leader master: 10.8.12.16:19998
2016-10-13 10:00:43,998 INFO logger.type (LeaderInquireClient.java:getMasterAddress) - Master addresses: [10.8.12.16:19998, 10.8.12.17:19998]
2016-10-13 10:00:43,998 INFO logger.type (LeaderInquireClient.java:getMasterAddress) - The leader master: 10.8.12.16:19998
2016-10-13 10:02:06,408 ERROR logger.type (DefaultAsyncPersistHandler.java:getWorkerStoringFile) - Not all the blocks of file /linecount/1G.txt stored on the same worker
2016-10-13 10:02:06,408 ERROR logger.type (DefaultAsyncPersistHandler.java:scheduleAsyncPersistence) - No worker found to schedule async persistence for file /linecount/1G.txt
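These two ERROR lines point at the likely cause: in this Alluxio version, the async persist handler only schedules persistence on a worker that holds every block of the file, so a large file whose 64MB blocks end up spread across workers is never scheduled. One workaround sketch, assuming the Alluxio 1.x property and class names, is to pin writes to a single worker via the write location policy in alluxio-site.properties, and to run copyFromLocal from a host co-located with a worker that has room for the whole file:

```properties
# Workaround sketch (Alluxio 1.x names assumed): keep all blocks of a
# written file on the local worker, so a single worker holds the
# complete file and async persistence can be scheduled.
# LocalFirstPolicy is the usual default; it only helps if the client
# runs on a worker node with enough free capacity for the whole file.
alluxio.user.file.write.location.policy.class=alluxio.client.file.policy.LocalFirstPolicy
```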
2016-10-13 10:04:40,707 INFO logger.type (BlockMasterSync.java:run) - Block 1952767279104 removed at session -4
2016-10-13 10:05:10,710 INFO logger.type (BlockMasterSync.java:run) - Block 1953086046208 removed at session -4
2016-10-13 10:05:10,710 INFO logger.type (BlockMasterSync.java:run) - Block 1953169932288 removed at session -4
2016-10-13 10:05:20,712 INFO logger.type (BlockMasterSync.java:run) - Block 1953186709504 removed at session -4
2016-10-13 10:05:20,712 INFO logger.type (BlockMasterSync.java:run) - Block 1953270595584 removed at session -4
2016-10-13 10:05:50,715 INFO logger.type (BlockMasterSync.java:run) - Block 1953505476608 removed at session -4
2016-10-13 10:06:10,716 INFO logger.type (BlockMasterSync.java:run) - Block 1953790689280 removed at session -4
ERROR type: java.net.SocketTimeoutException: Read timed out
alluxio.org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
2016-10-13 15:08:44,825 ERROR logger.type (BlockDataServerHandler.java:handleBlockWriteRequest) - Error writing remote block : Temp blockId 201326592 is not available, because it already exists
alluxio.exception.BlockAlreadyExistsException: Temp blockId 201326592 is not available, because it already exists
at alluxio.worker.block.TieredBlockStore.checkTempBlockIdAvailable(TieredBlockStore.java:390)
at alluxio.worker.block.TieredBlockStore.createBlockMetaInternal(TieredBlockStore.java:521)
at alluxio.worker.block.TieredBlockStore.createBlockMeta(TieredBlockStore.java:185)
at alluxio.worker.block.DefaultBlockWorker.createBlockRemote(DefaultBlockWorker.java:299)
at alluxio.worker.netty.BlockDataServerHandler.handleBlockWriteRequest(BlockDataServerHandler.java:147)
at alluxio.worker.netty.DataServerHandler.channelRead0(DataServerHandler.java:73)
at alluxio.worker.netty.DataServerHandler.channelRead0(DataServerHandler.java:42)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:831)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:322)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:254)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:745)
alluxio-env.sh:
ALLUXIO_MASTER_HOSTNAME=${ALLUXIO_MASTER_HOSTNAME:-"10.8.12.16"}
ALLUXIO_WORKER_MEMORY_SIZE=${ALLUXIO_WORKER_MEMORY_SIZE:-"100GB"}
ALLUXIO_RAM_FOLDER=${ALLUXIO_RAM_FOLDER:-"/home/appadmin/ramdisk"}
ALLUXIO_UNDERFS_ADDRESS=${ALLUXIO_UNDERFS_ADDRESS:-"hdfs://ns/alluxio/data"}
export ALLUXIO_MASTER_HOSTNAME=10.8.12.16
alluxio-site.properties:
alluxio.underfs.hdfs.configuration=/home/appadmin/hadoop-2.7.2/etc/hadoop/core-site.xml
alluxio.zookeeper.enabled=true
alluxio.zookeeper.address=10.8.12.16:2181,10.8.12.17:2181,10.8.12.18:2181
alluxio.master.journal.folder=hdfs://ns/alluxio/journal
alluxio.security.authentication.socket.timeout.ms=3000000
alluxio.worker.block.heartbeat.timeout.ms=300000
alluxio.keyvalue.enabled=true
alluxio.network.thrift.frame.size.bytes.max=64MB
alluxio.user.network.netty.timeout.ms=30000
alluxio.worker.session.timeout.ms=300000
alluxio.user.file.writetype.default=ASYNC_THROUGH
alluxio.user.network.netty.timeout.ms=300000
alluxio.user.block.size.bytes.default=64MB
alluxio.keyvalue.partition.size.bytes.max=64MB
alluxio.user.block.remote.read.buffer.size.bytes=64MB
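One thing stands out in the properties above: alluxio.user.network.netty.timeout.ms is defined twice (30000, then 300000). In a Java properties file the last definition wins, so the effective value is already 300000 ms; the duplicate can be collapsed to a single line:

```properties
# Keep only one definition; 300000 ms (5 min) is the value the last
# occurrence above already makes effective.
alluxio.user.network.netty.timeout.ms=300000
```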
Hi Kaiming,
Can you paste the logs from the master and workers?
On Wed, Oct 12, 2016 at 1:23 AM, Kaiming Wan <wan...@gmail.com> wrote:
The configuration takes effect when I upload a smaller file, such as 512MB. But even when I upload a file as small as 1GB, it doesn't work.
On Tuesday, October 11, 2016 at 7:09:55 PM UTC+8, Kaiming Wan wrote:
I have configured the write type as "ASYNC_THROUGH". When I use the "copyFromLocal" command to upload a 10GB file to Alluxio, even after waiting for several hours no data has been persisted into HDFS. What causes this issue?
2016-10-14 15:40:10,236 INFO logger.type (FileUtils.java:createStorageDirPath) - Folder /home/appadmin/ramdisk/alluxioworker/.tmp_blocks/870 was created!
2016-10-14 15:40:10,393 INFO logger.type (BlockMasterSync.java:run) - Block 93667196928 removed at session -4
2016-10-14 15:37:25,052 ERROR logger.type (DefaultAsyncPersistHandler.java:getWorkerStoringFile) - Not all the blocks of file /linecount/512MB.txt stored on the same worker