[HBase] dfs.client.read.shortcircuit = true > WARN : java.io.FileNotFoundException Permission denied

[HBase] dfs.client.read.shortcircuit = true > WARN : java.io.FileNotFoundException Permission denied Damien Hardy 10/31/12 5:15 AM
Hello,

I tried to run HBase with short-circuit reads enabled:
hdfs-site.xml : dfs.block.local-path-access.user = hbase
hbase-site.xml : dfs.client.read.shortcircuit = true
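Spelled out as XML stanzas (assuming plain CDH4-style config files; adjust if you deploy via CM), that is:

```xml
<!-- hdfs-site.xml on the DataNodes -->
<property>
  <name>dfs.block.local-path-access.user</name>
  <value>hbase</value>
</property>

<!-- hbase-site.xml on the RegionServers -->
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
```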

But running YCSB tests I get some WARNs on the regionservers (cf. log
extracts below). The file exists, seems to be created at the same time as
the WARNs, and everybody can read it (HBase runs as the hbase unix user,
using CDH4.1 defaults):

[root@cdh4worker03.sf.XXXXXX.local:~]# ls -l /data/disk5/hadoop/data/current/BP-1003122217-10.0.70.4-1351266276544/current/finalized/blk_-1028036047643592434
-rw-r--r-- 1 hdfs hdfs 8643521 Oct 31 10:30 /data/disk5/hadoop/data/current/BP-1003122217-10.0.70.4-1351266276544/current/finalized/blk_-1028036047643592434

Does that mean "I can't read the file directly, so I ask for it via the datanode"?
Is this the normal behavior?
If yes, why do we get a WARN?
Did I miss something?

[...]
2012-10-31 10:30:02,690 WARN org.apache.hadoop.hdfs.DFSClient: BlockReaderLocal: Removing BP-1003122217-10.0.70.4-1351266276544:blk_-1028036047643592434_3175 from cache because local file /data/disk5/hadoop/data/current/BP-1003122217-10.0.70.4-1351266276544/current/finalized/blk_-1028036047643592434 could not be opened.
2012-10-31 10:30:02,691 WARN org.apache.hadoop.hdfs.DFSClient: Failed to connect to /10.0.70.5:50010 for block, add to deadNodes and continue. java.io.FileNotFoundException: /data/disk5/hadoop/data/current/BP-1003122217-10.0.70.4-1351266276544/current/finalized/blk_-1028036047643592434 (Permission denied)
java.io.FileNotFoundException: /data/disk5/hadoop/data/current/BP-1003122217-10.0.70.4-1351266276544/current/finalized/blk_-1028036047643592434 (Permission denied)
        at java.io.FileInputStream.open(Native Method)
        at java.io.FileInputStream.<init>(FileInputStream.java:120)
        at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:184)
        at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:766)
        at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888)
        at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455)
        at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689)
        at java.io.DataInputStream.readFully(DataInputStream.java:178)
        at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:295)
        at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:387)
        at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:405)
        at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1026)
        at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:485)
        at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:566)
        at org.apache.hadoop.hbase.regionserver.Store.validateStoreFile(Store.java:1370)
        at org.apache.hadoop.hbase.regionserver.Store.completeCompaction(Store.java:1408)
        at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:777)
        at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1039)
        at org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.run(CompactionRequest.java:239)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
2012-10-31 10:30:02,692 INFO org.apache.hadoop.hdfs.DFSClient: Successfully connected to /10.0.70.2:50010 for block -1028036047643592434
2012-10-31 10:30:02,696 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hdfscluster/hbase/usertable/a41bc00873c3746530db207aa781fad5/.tmp/8a2441f316274a25b48fc8eaa740bb4b to hdfs://hdfscluster/hbase/usertable/a41bc00873c3746530db207aa781fad5/f1/8a2441f316274a25b48fc8eaa740bb4b
2012-10-31 10:30:02,712 WARN org.apache.hadoop.hdfs.DFSClient: BlockReaderLocal: Removing BP-1003122217-10.0.70.4-1351266276544:blk_-1028036047643592434_3175 from cache because local file /data/disk5/hadoop/data/current/BP-1003122217-10.0.70.4-1351266276544/current/finalized/blk_-1028036047643592434 could not be opened.
2012-10-31 10:30:02,712 WARN org.apache.hadoop.hdfs.DFSClient: Failed to connect to /10.0.70.5:50010 for block, add to deadNodes and continue. java.io.FileNotFoundException: /data/disk5/hadoop/data/current/BP-1003122217-10.0.70.4-1351266276544/current/finalized/blk_-1028036047643592434 (Permission denied)
java.io.FileNotFoundException: /data/disk5/hadoop/data/current/BP-1003122217-10.0.70.4-1351266276544/current/finalized/blk_-1028036047643592434 (Permission denied)
        at java.io.FileInputStream.open(Native Method)
        at java.io.FileInputStream.<init>(FileInputStream.java:120)
        at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:184)
        at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:766)
        at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888)
        at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455)
        at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689)
        at java.io.DataInputStream.readFully(DataInputStream.java:178)
        at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:295)
        at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:387)
        at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:405)
        at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1026)
        at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:485)
        at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:566)
        at org.apache.hadoop.hbase.regionserver.Store.completeCompaction(Store.java:1421)
        at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:777)
        at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1039)
[...]

Cheers,

--
Damien
Re: [HBase] dfs.client.read.shortcircuit = true > WARN : java.io.FileNotFoundException Permission denied Harsh J 10/31/12 5:29 AM
Hi Damien,

DNs additionally enforce permission-level security on their block files,
which you need to "expose" by changing dfs.datanode.data.dir.perm to 755
instead of the default 700. This will permit local clients to read the
block files directly.

(Set it and restart the DNs.)
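In XML form, that would be something like this (a sketch for a plain hdfs-site.xml; the property belongs on the DataNode side):

```xml
<!-- hdfs-site.xml on every DataNode -->
<property>
  <name>dfs.datanode.data.dir.perm</name>
  <value>755</value>
</property>
```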
--
Harsh J
Re: [HBase] dfs.client.read.shortcircuit = true > WARN : java.io.FileNotFoundException Permission denied Damien Hardy 10/31/12 6:30 AM
Thank you Harsh,

No more WARNs in my logs during the same tests.

Cheers,

--
Damien

2012/10/31 Harsh J <ha...@cloudera.com>
Re: [HBase] dfs.client.read.shortcircuit = true > WARN : java.io.FileNotFoundException Permission denied Damien Hardy 10/31/12 8:06 AM
Hmm, some new WARNs appeared afterwards ... does this need a major compaction to recreate the dirs with the right permissions?

Cheers,

--
Damien

2012/10/31 Damien Hardy <damienh...@gmail.com>

Re: [HBase] dfs.client.read.shortcircuit = true > WARN : java.io.FileNotFoundException Permission denied Harsh J 10/31/12 9:02 AM
Hi,

Short-circuited reads at the HDFS layer should really have nothing to do
with HBase's compactions.

Are these "new WARNs" the same as before?

Note that you should place those settings only in the config of the client
that is allowed to do short-circuit reads (which in HBase's case is the
HBase service config alone; if you use CM, this is the HBase Service
Safety Valve). If you place them in all client configs, then non-allowed
users may begin trying to access local blocks and generate unnecessary
noise all over the cluster (in MR jobs, etc.), since the DN disallows
them from doing local reads.
--
Harsh J
Re: [HBase] dfs.client.read.shortcircuit = true > WARN : java.io.FileNotFoundException Permission denied Damien Hardy 11/2/12 3:48 AM
Hi Harsh,

I found out what my problem was ... hdfs:hdfs 770 on the parent dir, managed by puppet /o\
Now set to 755, it should be possible to read the subdirectories and files.
My bad ...
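For anyone hitting the same thing: "other" users need the execute (traverse) bit on every parent directory down to the block files, so 770 on a parent locks them out even when the files themselves are world-readable. A quick local illustration (paths here are throwaway, not real DN directories):

```shell
# 'other' needs the x (traverse) bit on each parent dir to reach files
# below it; 770 blocks that, 755 allows it.
d=$(mktemp -d)
mkdir -p "$d/parent/finalized"
touch "$d/parent/finalized/blk_0001"

chmod 770 "$d/parent"            # like hdfs:hdfs 770 - others locked out
stat -c '%a' "$d/parent"         # -> 770

chmod 755 "$d/parent"            # others may now descend and read
stat -c '%a' "$d/parent"         # -> 755

rm -rf "$d"
```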

Cheers,

--
Damien

2012/10/31 Harsh J <ha...@cloudera.com>
Re: [HBase] dfs.client.read.shortcircuit = true > WARN : java.io.FileNotFoundException Permission denied Harsh J 11/2/12 5:39 AM
Thanks Damien, good to know, and thanks for following up. By the way, the
DN auto-sets the permissions on its data directories upon startup, if it
is the owner (i.e. they are 'hdfs'-owned).
--
Harsh J