Apache Hadoop by itself is not an ideal Big Data framework for real-time analytics, and this is where HBase comes in: it adds the ability to query data in real time. Support for random reads and writes is another requirement that makes HBase a natural complement to Apache Hadoop. (Similar access patterns can also be achieved by storing the required data in another NoSQL database.) HBase provides a rich set of APIs for pushing data into it and pulling data out of it.
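To make the push/pull access patterns concrete, here is a minimal in-memory sketch of the shape of those operations: keyed writes (puts), random reads by row key (gets), and range reads over sorted keys (scans). The table name, row keys, and `family:qualifier` column names are invented for illustration; this is a toy model, not the real HBase client API.

```python
class ToyTable:
    """Tiny stand-in for an HBase table: rows keyed by a sorted row key,
    each row holding a sparse map of 'family:qualifier' -> value cells."""

    def __init__(self, name):
        self.name = name
        self.rows = {}  # row key -> {column -> value}

    def put(self, row_key, columns):
        """Push data: upsert one row's cells (a write in HBase is a Put)."""
        self.rows.setdefault(row_key, {}).update(columns)

    def get(self, row_key):
        """Pull data: random read of a single row by its key."""
        return self.rows.get(row_key, {})

    def scan(self, start, stop):
        """Pull data: range read over row keys in sorted order,
        the way HBase scans walk lexicographically ordered keys."""
        for key in sorted(self.rows):
            if start <= key < stop:
                yield key, self.rows[key]

users = ToyTable("users")
users.put("user#001", {"cf:name": "Ada", "cf:city": "London"})
users.put("user#002", {"cf:name": "Lin"})

print(users.get("user#001")["cf:name"])                  # random read -> Ada
print([k for k, _ in users.scan("user#000", "user#999")])  # range read
```

The real Java client follows the same three-verb shape (Put, Get, Scan), which is why HBase pairs well with MapReduce jobs that bulk-load results and applications that then read them back by key.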
HBase integrates well with Apache Hadoop MapReduce jobs for bulk operations such as analytics and indexing. A common pattern is to keep Hadoop (HDFS) as the repository for all static data and use HBase as the store for data that changes in real time after processing. Consider using HBase in your organization or in your use cases when you need the following:
* When the volume of data involved is huge
* When full ACID guarantees are not mandatory (HBase guarantees atomicity only at the row level)
* When the data model schema is sparse
* When your application needs to scale gracefully
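The sparse-schema point above can be illustrated with a short sketch: in HBase's data model each row stores only the cells it actually has, so rows in the same table can carry completely different columns with no NULL padding. The row keys and column names below are made up for the example.

```python
# row key -> {"family:qualifier" -> value}; only populated cells exist.
table = {}

def put(row, column, value):
    """Store a single cell; absent cells simply do not exist."""
    table.setdefault(row, {})[column] = value

put("event#1", "meta:type", "click")
put("event#1", "meta:url", "/home")
put("event#2", "meta:type", "purchase")
put("event#2", "order:total", "19.99")  # column that event#1 never has

# Four cells are stored, not a NULL-padded 2-rows x 3-columns grid.
cells = sum(len(cols) for cols in table.values())
print(cells)                              # 4
print("order:total" in table["event#1"])  # False: the cell is simply absent
```

This is why wide tables with hundreds of potential columns, most empty for any given row, fit HBase far better than a fixed relational schema.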