I looked into the YARN logs a bit more, and it seems that whether I use the FQDN, the short name, or localhost doesn't matter: everything resolves to the default rack.
Below are the logs from the YARN application.
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/usr/hdp/current/hadoop-client/conf"}
export MAX_APP_ATTEMPTS="5"
export JAVA_HOME=${JAVA_HOME:-"/usr/lib/jvm/java-8-openjdk-amd64"}
export APP_SUBMIT_TIME_ENV="1479139727769"
export NM_HOST="10.0.0.6"
export LOGNAME="sshadmin"
export JVM_PID="$$"
export PWD="/mnt/resource/hadoop/yarn/local/usercache/sshadmin/appcache/application_1476888496011_0177/container_1476888496011_0177_01_000001"
export LOCAL_DIRS="/mnt/resource/hadoop/yarn/local/usercache/sshadmin/appcache/application_1476888496011_0177"
export APPLICATION_WEB_PROXY_BASE="/proxy/application_1476888496011_0177"
export NM_HTTP_PORT="30060"
export LOG_DIRS="/mnt/resource/hadoop/yarn/log/application_1476888496011_0177/container_1476888496011_0177_01_000001"
export NM_AUX_SERVICE_mapreduce_shuffle="AAA0+gAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
"
export NM_PORT="30050"
export USER="sshadmin"
export HADOOP_YARN_HOME=${HADOOP_YARN_HOME:-"/usr/hdp/current/hadoop-yarn-nodemanager"}
export CLASSPATH="$HADOOP_CONF_DIR:/usr/hdp/current/hadoop-client/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-hdfs-client/*:/usr/hdp/current/hadoop-hdfs-client/lib/*:/usr/hdp/current/hadoop-yarn-client/*:/usr/hdp/current/hadoop-yarn-client/lib/*:$PWD/*"
export ALLUXIO_HOME="$PWD"
export HADOOP_TOKEN_FILE_LOCATION="/mnt/resource/hadoop/yarn/local/usercache/sshadmin/appcache/application_1476888496011_0177/container_1476888496011_0177_01_000001/container_tokens"
export NM_AUX_SERVICE_spark_shuffle=""
export LOCAL_USER_DIRS="/mnt/resource/hadoop/yarn/local/usercache/sshadmin/"
export HOME="/home/"
export NM_AUX_SERVICE_spark2_shuffle=""
export CONTAINER_ID="container_1476888496011_0177_01_000001"
export MALLOC_ARENA_MAX="4"
ln -sf "/mnt/resource/hadoop/yarn/local/filecache/20/alluxio.jar" "alluxio.jar"
hadoop_shell_errorcode=$?
if [ $hadoop_shell_errorcode -ne 0 ]
then
exit $hadoop_shell_errorcode
fi
ln -sf "/mnt/resource/hadoop/yarn/local/filecache/21/alluxio.tar.gz" "alluxio.tar.gz"
hadoop_shell_errorcode=$?
if [ $hadoop_shell_errorcode -ne 0 ]
then
exit $hadoop_shell_errorcode
fi
ln -sf "/mnt/resource/hadoop/yarn/local/filecache/22/alluxio-yarn-setup.sh" "alluxio-yarn-setup.sh"
hadoop_shell_errorcode=$?
if [ $hadoop_shell_errorcode -ne 0 ]
then
exit $hadoop_shell_errorcode
fi
# Creating copy of launch script
cp "launch_container.sh" "/mnt/resource/hadoop/yarn/log/application_1476888496011_0177/container_1476888496011_0177_01_000001/launch_container.sh"
chmod 640 "/mnt/resource/hadoop/yarn/log/application_1476888496011_0177/container_1476888496011_0177_01_000001/launch_container.sh"
# Determining directory contents
echo "ls -l:" 1>"/mnt/resource/hadoop/yarn/log/application_1476888496011_0177/container_1476888496011_0177_01_000001/directory.info"
ls -l 1>>"/mnt/resource/hadoop/yarn/log/application_1476888496011_0177/container_1476888496011_0177_01_000001/directory.info"
echo "find -L . -maxdepth 5 -ls:" 1>>"/mnt/resource/hadoop/yarn/log/application_1476888496011_0177/container_1476888496011_0177_01_000001/directory.info"
find -L . -maxdepth 5 -ls 1>>"/mnt/resource/hadoop/yarn/log/application_1476888496011_0177/container_1476888496011_0177_01_000001/directory.info"
echo "broken symlinks(find -L . -maxdepth 5 -type l -ls):" 1>>"/mnt/resource/hadoop/yarn/log/application_1476888496011_0177/container_1476888496011_0177_01_000001/directory.info"
find -L . -maxdepth 5 -type l -ls 1>>"/mnt/resource/hadoop/yarn/log/application_1476888496011_0177/container_1476888496011_0177_01_000001/directory.info"
exec /bin/bash -c "./alluxio-yarn-setup.sh application-master -num_workers 2 -master_address localhost -resource_path wasbs address 1>/mnt/resource/hadoop/yarn/log/application_1476888496011_0177/container_1476888496011_0177_01_000001/stdout 2>/mnt/resource/hadoop/yarn/log/application_1476888496011_0177/container_1476888496011_0177_01_000001/stderr "
hadoop_shell_errorcode=$?
if [ $hadoop_shell_errorcode -ne 0 ]
then
exit $hadoop_shell_errorcode
fi
End of LogType:launch_container.sh
LogType:stderr
Log Upload Time:Tue Nov 15 14:08:22 +0000 2016
LogLength:740
Log Contents:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/mnt/resource/hadoop/yarn/local/usercache/sshadmin/appcache/application_1476888496011_0177/container_1476888496011_0177_01_000001/assembly/target/alluxio-assemblies-1.3.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.5.1.0-56/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/mnt/resource/hadoop/yarn/local/filecache/20/alluxio.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
End of LogType:stderr
LogType:stdout
Log Upload Time:Tue Nov 15 14:08:22 +0000 2016
LogLength:1075
Log Contents:
Launching Application Master
2016-11-14 16:08:53,706 INFO type (ApplicationMaster.java:main) - Starting Application Master with args [-num_workers, 2, -master_address, localhost, -resource_path, wasbs address]
2016-11-14 16:08:54,291 INFO ContainerManagementProtocolProxy (ContainerManagementProtocolProxy.java:<init>) - yarn.client.max-cached-nodemanagers-proxies : 0
2016-11-14 16:08:55,180 INFO type (ApplicationMaster.java:start) - ApplicationMaster registered
2016-11-14 16:08:55,183 INFO type (ContainerAllocator.java:requestContainers) - Requesting 1 master containers
2016-11-14 16:08:55,188 INFO type (ContainerAllocator.java:requestContainers) - Making 1 resource request(s) for Alluxio masters with cpu 1 memory 1024MB on hosts [localhost]
2016-11-14 16:08:55,229 INFO RackResolver (RackResolver.java:coreResolve) - Resolved localhost to /default-rack
End of LogType:stdout
The other runs were pretty similar; each one ended up resolving to the default rack.
That still gives me no clue as to where the service hangs, though.
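For context on the `/default-rack` lines: as far as I understand, Hadoop's RackResolver falls back to `/default-rack` whenever no topology mapping is configured (i.e. `net.topology.script.file.name` is unset in core-site.xml), so these messages may just mean no rack script is in place rather than a resolution failure. A minimal topology script would look something like the sketch below; the host-to-rack mapping here is entirely hypothetical.

```shell
#!/bin/bash
# Hypothetical topology script for net.topology.script.file.name.
# Hadoop invokes it with one or more host names or IPs as arguments
# and expects one rack name per line on stdout.
resolve_rack() {
  case "$1" in
    10.0.0.6) echo "/rack-1" ;;       # hypothetical mapping
    10.0.0.7) echo "/rack-2" ;;       # hypothetical mapping
    *)        echo "/default-rack" ;; # fallback for unknown hosts
  esac
}

for host in "$@"; do
  resolve_rack "$host"
done
```

If that is the cause here, though, it would only explain the rack messages, not the hang itself.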