Not sure if you have already tried these before; just in case you haven't:
I am assuming you have not configured your Hive to use an external RDBMS like MySQL for the metastore. If that's the case, go to the directory from which you start $ hive. Check for a folder called metastore_db (it contains all the table definitions) and delete it (this deletes all the metadata for Hive, so you are left with no tables -- logically, not physically; the data files are still in HDFS).
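A minimal sketch of that step (assuming the default embedded Derby metastore; HIVE_LAUNCH_DIR is a placeholder for whatever directory you actually launched hive from):

```shell
# Derby creates metastore_db in the working directory hive was started from.
# HIVE_LAUNCH_DIR is an assumption -- point it at that directory.
HIVE_LAUNCH_DIR="${HIVE_LAUNCH_DIR:-$PWD}"

if [ -d "$HIVE_LAUNCH_DIR/metastore_db" ]; then
    # Deletes ALL Hive metadata; the table data itself stays in HDFS.
    rm -rf "$HIVE_LAUNCH_DIR/metastore_db"
    echo "metastore_db removed"
else
    echo "no metastore_db in $HIVE_LAUNCH_DIR"
fi
```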
- Now try doing $ show tables; it should not return anything. If it errors out or still lists tables, something is wrong with your Hive / Hadoop installation.
- Try re-installing Hive. If that doesn't work either, then:
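You can also run that check non-interactively (a sketch; `hive -e` executes a single statement and exits, and the guard is just so the snippet degrades gracefully on machines without hive on the PATH):

```shell
# Guard: only invoke hive where it is actually installed.
if command -v hive >/dev/null 2>&1; then
    # On a freshly wiped metastore this should print nothing at all.
    hive -e 'SHOW TABLES;'
    hive_status="ran"
else
    hive_status="hive not on PATH"
fi
echo "$hive_status"
```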
To check whether your Hadoop installation is okay:
- Run one of the example programs on your Hadoop cluster, say TestDFSIO, TeraSort, etc.
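For instance (a sketch: the examples jar name varies by Hadoop version, so the wildcard and the /tmp output paths here are assumptions, and the guard keeps it from failing on a box without hadoop):

```shell
# Guarded so it only fires where a hadoop install is actually present.
HADOOP_HOME="${HADOOP_HOME:-/usr/lib/hadoop}"
if command -v hadoop >/dev/null 2>&1; then
    # teragen writes 10M 100-byte rows (~1 GB); terasort then sorts them.
    # Together they exercise both HDFS and MapReduce end to end.
    hadoop jar "$HADOOP_HOME"/hadoop-examples-*.jar teragen 10000000 /tmp/teragen-out
    hadoop jar "$HADOOP_HOME"/hadoop-examples-*.jar terasort /tmp/teragen-out /tmp/terasort-out
    example_status="submitted"
else
    example_status="hadoop not on PATH"
fi
echo "$example_status"
```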
- If they fail with some weird ClassNotFoundException, try the steps below.
- You said earlier that you have those compression codecs specified in your core-site.xml. Check whether you have the actual jars for them.
-- For lzo, check your HADOOP_HOME/lib for a *lzo*.jar file.
-- I haven't used snappy, but I guess the process is the same.
-- Also check your HADOOP_HOME/lib/native/Linux-<arch>/ for the corresponding native *.so* files. For lzo it is typically "libgplcompression.so".
- If you don't have the jars / *.so files, then remove those entries from core-site.xml.
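The two checks above can be sketched as a small shell function (assumes a tarball-style layout under $HADOOP_HOME; the jar name pattern is an assumption, since it differs between hadoop-lzo builds):

```shell
# Report whether the lzo codec jar and its JNI native library are present.
check_codec_files() {
    hadoop_home="$1"
    # Codec implementation jar, e.g. hadoop-lzo-*.jar
    ls "$hadoop_home"/lib/*lzo*.jar >/dev/null 2>&1 \
        && echo "lzo jar: found" || echo "lzo jar: MISSING"
    # Native glue library the lzo codec loads via JNI
    ls "$hadoop_home"/lib/native/Linux-*/libgplcompression.so* >/dev/null 2>&1 \
        && echo "lzo native: found" || echo "lzo native: MISSING"
}
check_codec_files "${HADOOP_HOME:-/usr/lib/hadoop}"
```

If either line says MISSING, that matches the ClassNotFoundException (for the jar) or a native-loader error (for the .so).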
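For reference, the kind of entry to remove looks roughly like this (a sketch of a typical core-site.xml codec property; your exact class list may differ):

```xml
<!-- If the lzo jar/native files are absent, drop the Lzo* classes from this list. -->
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec</value>
</property>
```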
- If things still don't work, check for any other changes you have made in your config. Revert to the default settings if that's an option.