Sorry for the late reply; it's actually rather difficult for me to explain.
But here goes.
I have polygons and points in my data. I use geohashes at precision level 1 all the way down to 10 to help me get data for both polygons and points.
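For anyone unfamiliar with geohashes, the key property everything below relies on is that a lower-precision geohash is always a prefix of the higher-precision hashes inside it. A minimal pure-Python encoder (the standard bit-interleaving algorithm, not my production code) shows this:

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat, lon, precision):
    """Standard geohash encoding: interleave longitude and latitude bits,
    then pack every 5 bits into one base-32 character."""
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    chars, bits, nbits, even = [], 0, 0, True  # even steps refine longitude
    while len(chars) < precision:
        if even:
            mid = (lon_lo + lon_hi) / 2
            bit = lon >= mid
            if bit:
                lon_lo = mid
            else:
                lon_hi = mid
        else:
            mid = (lat_lo + lat_hi) / 2
            bit = lat >= mid
            if bit:
                lat_lo = mid
            else:
                lat_hi = mid
        bits = (bits << 1) | bit
        nbits += 1
        even = not even
        if nbits == 5:
            chars.append(BASE32[bits])
            bits = nbits = 0
    return "".join(chars)

# The precision-3 hash is a prefix of the precision-5 hash for the same point:
print(geohash_encode(42.6, -5.6, 5))  # -> ezs42 (the classic example cell)
print(geohash_encode(42.6, -5.6, 3))  # -> ezs
```

That prefix property is what makes moving between precision levels a pure string operation against the index.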
For points, I simply associate the highest-precision geohash with each point and use my own geohash solver to figure out which geohashes need to be queried based on my viewport or virtual range. I have a lot of options for how I store geographic information, so I'll just explain what I'm doing for a client right now.
My client has lots of polygons that need to be presented on a map. I use GDAL to separate the polygons into lots of different geohashes. First I use precision 1 geohashes and create a document for each precision 1 geohash that the polygon touches... then I do the same for precision 2, and so on, until I have reached my target precision. I can also, completely optionally, store extra information along with each document that helps represent a single polygon.
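Roughly, the per-precision documents come out like this. This is only a sketch with made-up names: docs_for_polygon and the field names (polygon_id, polygon_geohash) are illustrative, and covered_hashes stands in for the target-precision cells produced by the GDAL splitting step. The lower-precision coverage falls out by prefix truncation:

```python
def docs_for_polygon(polygon_id, covered_hashes, target_precision):
    """Build one document per (precision, geohash) pair the polygon touches.

    covered_hashes: the target-precision geohashes the polygon overlaps
    (from the GDAL splitting step). Because a geohash at precision p is a
    prefix of all its higher-precision children, the coverage at every
    lower precision is just the set of distinct prefixes.
    """
    docs = []
    for p in range(1, target_precision + 1):
        for h in sorted({full[:p] for full in covered_hashes}):
            docs.append({
                "polygon_id": polygon_id,
                "precision": p,
                "polygon_geohash": h,
                # optional extra payload (stats, styling, ...) can go here
            })
    return docs

# Two adjacent precision-5 cells collapse into single cells at precisions 1-4:
for doc in docs_for_polygon("parcel-123", {"ezs41", "ezs42"}, 5):
    print(doc)
```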
For instance, we are tracking some statistics for each high-precision geohash that a polygon touches, so one of the documents will contain statistical information that users of the data can change. When that happens, we also update the statistics in the lower-precision documents. This way we are pre-aggregating information, and it is immediately accessible at whatever zoom level we're at.
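The roll-up is just the prefix trick again: a change at the highest-precision cell is applied to every shorter prefix as well. A toy version, with a plain dict standing in for the document collection (bump_stat and the field names are made up for illustration):

```python
def bump_stat(store, full_hash, key, delta):
    """Apply a stat change at the highest-precision cell and roll the same
    delta up into every lower-precision ancestor (i.e. every prefix).

    store: stand-in for the document collection, mapping geohash -> stats.
    """
    for p in range(1, len(full_hash) + 1):
        stats = store.setdefault(full_hash[:p], {})
        stats[key] = stats.get(key, 0) + delta

store = {}
bump_stat(store, "ezs42", "views", 3)
bump_stat(store, "ezs42", "views", 2)
print(store["ezs42"]["views"], store["e"]["views"])  # both aggregates agree
```

In a real database this would be one update per precision level, but the idea is the same: writes fan out up the pyramid so reads at any zoom level are a single lookup.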
In order to find documents that match a specific viewport or range, I have my own options as well. I use the same fast algorithm to create a list of the most appropriate geohashes based on user or site parameters. I determine the precision from the Google Maps zoom level and then find all geohashes that are contained in or touch the viewport. It's incredibly fast to solve, and it gives me direct index access to my data. I simply need to query for "polygon_geohash in ['hash', ...]".
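My actual solver isn't shown here, but a simple grid-sampling version conveys the idea: step across the viewport at one-cell spacing and collect the distinct hashes you land in. The names (cover_viewport, _samples) are illustrative, and the encoder is the standard geohash algorithm, included so the snippet runs on its own:

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat, lon, precision):
    # Standard geohash bit interleaving (longitude first).
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    chars, bits, nbits, even = [], 0, 0, True
    while len(chars) < precision:
        if even:
            mid = (lon_lo + lon_hi) / 2
            bit = lon >= mid
            lon_lo, lon_hi = (mid, lon_hi) if bit else (lon_lo, mid)
        else:
            mid = (lat_lo + lat_hi) / 2
            bit = lat >= mid
            lat_lo, lat_hi = (mid, lat_hi) if bit else (lat_lo, mid)
        bits = (bits << 1) | bit
        nbits += 1
        even = not even
        if nbits == 5:
            chars.append(BASE32[bits])
            bits = nbits = 0
    return "".join(chars)

def _samples(lo, hi, step):
    # Sample points spaced one cell apart, with both endpoints included,
    # so every cell the range intersects gets at least one sample.
    vals, v = [], lo
    while v < hi:
        vals.append(v)
        v += step
    vals.append(hi)
    return vals

def cover_viewport(lat_min, lat_max, lon_min, lon_max, precision):
    """Return every geohash at `precision` that the viewport touches."""
    lon_bits = (5 * precision + 1) // 2   # longitude gets the extra bit
    lat_bits = 5 * precision // 2
    dlat = 180.0 / (1 << lat_bits)        # cell height in degrees
    dlon = 360.0 / (1 << lon_bits)        # cell width in degrees
    return sorted({
        geohash_encode(lat, lon, precision)
        for lat in _samples(lat_min, lat_max, dlat)
        for lon in _samples(lon_min, lon_max, dlon)
    })

# A viewport that fits inside one precision-5 cell returns just that cell:
print(cover_viewport(42.60, 42.61, -5.61, -5.60, 5))  # -> ['ezs42']
```

The resulting list plugs straight into the `polygon_geohash in [...]` query above.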
I can also optionally query one geohash at a time, working outward in a concentric pattern. This can sometimes speed up applications that need a quick redraw as users move around.
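The concentric ordering itself is trivial once you think of the candidate geohashes as cells in a grid: sort them by ring distance from the center cell and issue one query per cell in that order. A sketch, with my own names and (row, col) grid indices standing in for the cells at one precision:

```python
def concentric_order(cells, center):
    """Order grid cells by how many rings they are from the center cell,
    so the cells under the user's eye are fetched and drawn first.

    cells:  list of (row, col) grid indices at a single precision
    center: (row, col) of the cell containing the viewport center
    """
    return sorted(cells, key=lambda c: max(abs(c[0] - center[0]),
                                           abs(c[1] - center[1])))

# The center cell comes first; the surrounding ring follows.
print(concentric_order([(0, 0), (2, 2), (1, 1), (0, 2)], (1, 1)))
```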
I chose this technique before sharding of geographic data was available. What I ended up with was an ultimately more shardable solution that I could tune myself.
So, the point: you don't need to use geospatial indexes. However, they are wicked fast for point data. I find that storing geospatial information using my own geohash and quadtree 'user-oriented' indexing allows for a lot of flexibility.
And it's easy enough to say that being able to rebuild pyramids of aggregated statistics for your data from the highest precision is amazingly useful. It completely removes the need to do larger queries later on.
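That rebuild is the same prefix trick run in bulk: fold every highest-precision document into all of its prefixes. Again a toy sketch over plain dicts, with rebuild_pyramid as an illustrative name:

```python
def rebuild_pyramid(leaf_stats):
    """Rebuild every lower-precision aggregate from the highest-precision
    documents alone.

    leaf_stats: {full_precision_geohash: {stat_name: value}}
    Returns the full pyramid: {geohash_at_any_precision: {stat_name: total}}.
    """
    pyramid = {}
    for full_hash, stats in leaf_stats.items():
        for p in range(1, len(full_hash) + 1):
            node = pyramid.setdefault(full_hash[:p], {})
            for key, value in stats.items():
                node[key] = node.get(key, 0) + value
    return pyramid

# Two leaves sharing the "ezs" prefix sum together at that level and above:
pyramid = rebuild_pyramid({"ezs42": {"views": 2}, "ezs7x": {"views": 3}})
print(pyramid["ezs"]["views"])  # -> 5
```

Because the leaves hold the ground truth, you can drop and recompute the whole pyramid any time the aggregation logic changes, without re-querying the raw geometry.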
- Shane