Every time I try to run TestDFSIO with the test file location specified via the alluxio:// scheme, I get the following error:
$ time yarn jar /opt/mapr/hadoop/hadoop-2.7.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.0-mapr-1602-tests.jar TestDFSIO -D test.build.data=alluxio://192.168.2.2:19998/alluxio/TestDFSIO -libjars ~/alluxio-demo/alluxio/./core/client/target/alluxio-core-client-1.3.0-SNAPSHOT-jar-with-dependencies.jar,/home/mapr/alluxio-demo/alluxio/./core/client/target/alluxio-core-client-1.3.0-SNAPSHOT-tests.jar -write -nrFiles 10 -size 1GB
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/mapr/lib/slf4j-log4j12-1.7.12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/mapr/alluxio-demo/alluxio/core/client/target/alluxio-core-client-1.3.0-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/08/16 06:31:10 INFO fs.TestDFSIO: TestDFSIO.1.8
16/08/16 06:31:10 INFO fs.TestDFSIO: nrFiles = 10
16/08/16 06:31:10 INFO fs.TestDFSIO: nrBytes (MB) = 1024.0
16/08/16 06:31:10 INFO fs.TestDFSIO: bufferSize = 1000000
16/08/16 06:31:10 INFO fs.TestDFSIO: creating control file: 1073741824 bytes, 10 files
2016-08-16 06:31:10,3044 ERROR Cidcache fs/client/fileclient/cc/cidcache.cc:1611 Thread: 78867 MoveToNextCldb: No CLDB entries, cannot run, sleeping 5 seconds!
2016-08-16 06:31:15,3047 ERROR Client fs/client/fileclient/cc/client.cc:1104 Thread: 78867 Failed to initialize client for cluster
192.168.2.2:19998, error Connection reset by peer(104)
java.io.IOException: Could not create FileClient
at com.mapr.fs.MapRFileSystem.lookupClient(MapRFileSystem.java:609)
at com.mapr.fs.MapRFileSystem.lookupClient(MapRFileSystem.java:670)
at com.mapr.fs.MapRFileSystem.delete(MapRFileSystem.java:1135)
at org.apache.hadoop.fs.TestDFSIO.createControlFile(TestDFSIO.java:300)
at org.apache.hadoop.fs.TestDFSIO.run(TestDFSIO.java:973)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.TestDFSIO.main(TestDFSIO.java:870)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:130)
at org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:138)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
The same configuration works fine with TeraSort and other jobs. The same TestDFSIO command also succeeds if I use maprfs:// or hdfs:// instead of alluxio:// on the command line.
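One thing I notice in the stack trace is that com.mapr.fs.MapRFileSystem is the class trying to service the alluxio:// path, which makes me suspect the alluxio scheme is not mapped to the Alluxio Hadoop client in my configuration, so the default (MapR) filesystem ends up handling the URI. As a sketch of what I believe the core-site.xml entry should look like (class name taken from the Alluxio 1.x documentation; I have not verified it against this MapR build):

```xml
<!-- Registers the alluxio:// scheme with Hadoop's FileSystem loader.
     Without this mapping, the cluster's default filesystem implementation
     is asked to handle alluxio:// paths. Class name per Alluxio 1.x docs;
     unverified assumption for this MapR setup. -->
<property>
  <name>fs.alluxio.impl</name>
  <value>alluxio.hadoop.FileSystem</value>
</property>
```

Is this mapping (or something equivalent) required here, or should the -libjars client jar alone be enough for TestDFSIO to resolve the scheme?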