To the best of my knowledge, there is no current plan to introduce adaptive
or auto tuning indexing to MongoDB.
I would be very interested in any discussion on the subject that might come
out of this post.
My reading on the subject has mostly covered column stores rather than
document stores; I've not yet seen adaptive merging or database cracking
applied on a per-document basis. One hurdle may be the lack of a strict
schema: database cracking 'cracks' the physical data structures based on
each query's predicates, but there is no guarantee that the data structure
will be consistent from document to document. I have yet to do any
experiments on this theory, however.
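To make the idea concrete, here is a toy sketch of database cracking over a single numeric field (plain Python; the class and method names are invented for illustration and have nothing to do with MongoDB internals). Each range query partitions the stored values around its bounds, so the data becomes progressively more ordered as a side effect of querying:

```python
class CrackedColumn:
    """Toy database cracking over one numeric field (illustration only)."""

    def __init__(self, values):
        self.values = list(values)  # physically reorganised by queries

    def _partition(self, lo, hi, pivot):
        """In-place partition of values[lo:hi]: keys < pivot come first.

        Returns the split point between the two halves."""
        i = lo
        for j in range(lo, hi):
            if self.values[j] < pivot:
                self.values[i], self.values[j] = self.values[j], self.values[i]
                i += 1
        return i

    def range_query(self, low, high):
        """Answer low <= v < high, cracking the array as a side effect."""
        split = self._partition(0, len(self.values), low)   # crack at low
        end = self._partition(split, len(self.values), high)  # crack at high
        return list(self.values[split:end])
```

A real cracker would also keep a 'cracker index' recording the pieces created by earlier queries, so each new query only partitions the piece its bounds fall into; this sketch re-scans from scratch for brevity.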
From what I have read, adaptive merging costs more per query but takes
fewer queries to converge on an optimal index, while database cracking is
cheaper per query but takes longer to converge. In both schemes the initial
queries are as expensive as full table scans (for fewer or more queries,
depending on the scheme), so you would need to measure that cost against
the time it takes to build a full index up front.
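For contrast, here is an equally simplified sketch of adaptive merging over the same kind of field (again, the names are invented and this is not any real engine's API). The first pass does the heavier work of sorting the data into runs; each query then moves only the key range it touches out of the runs and into a final, fully sorted index:

```python
import bisect

class AdaptiveMergeIndex:
    """Toy adaptive merging over one numeric field (illustration only)."""

    def __init__(self, values, chunk=4):
        # Initial pass: cut the data into chunks and sort each one
        # (the "runs"). This is the expensive first step.
        self.runs = [sorted(values[i:i + chunk])
                     for i in range(0, len(values), chunk)]
        self.final = []  # fully merged, indexed keys

    def range_query(self, low, high):
        """Answer low <= v < high, merging the touched range as a side effect."""
        for run in self.runs:
            lo = bisect.bisect_left(run, low)
            hi = bisect.bisect_left(run, high)
            moved = run[lo:hi]
            del run[lo:hi]           # keys leave the runs...
            for v in moved:
                bisect.insort(self.final, v)  # ...and join the final index
        lo = bisect.bisect_left(self.final, low)
        hi = bisect.bisect_left(self.final, high)
        return self.final[lo:hi]
```

Repeating a query over the same range becomes cheap, because those keys now live only in the final sorted index and the runs no longer contain them.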
Using a genetic algorithm to create an index as a side effect of query
execution sounds interesting. Again, I would be concerned that the
structure could vary from document to document, which could lead to
differing allele values and make comparisons between generations harder.
You could overcome that hurdle by inserting only documents with the same
structure, or you may find that this is not a concern after some
experimentation.
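As a rough illustration of the genetic-algorithm idea under that fixed-structure assumption, here is a sketch that evolves a set of fields for a compound index against a made-up workload. The field names, workload, and fitness function are all invented for the example; fixing the candidate field list is what gives every genome the same allele positions:

```python
import random

FIELDS = ["user_id", "created_at", "status", "score"]  # assumed fixed schema
# Each workload entry is the set of fields a query filters on.
WORKLOAD = [{"user_id", "status"}, {"user_id"}, {"created_at", "status"}]

def fitness(genome):
    """Reward covering workload queries; lightly penalise wide indexes."""
    indexed = {f for f, bit in zip(FIELDS, genome) if bit}
    covered = sum(1 for q in WORKLOAD if q <= indexed)
    return covered - 0.1 * len(indexed)

def evolve(generations=50, pop_size=20, seed=0):
    rng = random.Random(seed)
    # Genome = bitmask over FIELDS: 1 means the field is in the index.
    pop = [[rng.randint(0, 1) for _ in FIELDS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(FIELDS))  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:               # point mutation
                i = rng.randrange(len(FIELDS))
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

In a real system the fitness signal would come from observed query costs rather than a hand-written workload, and the evaluation would have to run incrementally alongside query execution.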
Hope this helps,