(These figures include the time for the Ruby code to process and
aggregate the raw data into the required structure.)
Initially I tried a denormalized schema on traditional RDBMSs (MySQL,
PostgreSQL and MS SQL Server); the best performance I got was 14
seconds for a single query (MySQL with the MyISAM engine).
Then I tried Amazon SimpleDB and got a 1.5-second response time, but
ran up against the 255-values-per-entity limit. Working around that
pushed the real time up to 4 seconds.
Next up, I had a go with Tokyo Tyrant using the B-tree format, which
was a bit clunky but responded in ~1 second.
To be honest, I'd have gone with that, but I happened across a blog
post which said that Tokyo Cabinet/Tyrant was great in terms of disk
space requirements, but that MongoDB was faster. So I decided one last
benchmarking exercise was worth my while. That was a good decision.
MongoDB handled the query in 0.4 seconds. And I haven't even indexed
the field I was searching on, so I can only assume it will get faster
still. The Ruby code is basically doing a map-reduce on the returned
data, so I'll have to investigate passing that over to Mongo.
Those results in table form:
MySQL/MyISAM:     14   seconds
Amazon SimpleDB:   4   seconds
Tokyo Tyrant:      1   second
MongoDB:           0.4 seconds
Add to that the excellent Ruby API, the replication/failover support,
and the built-in sharding, and what the MongoDB devs have achieved
here is nothing short of stunning. So congratulations on your
technical prowess, and thank you for your generosity. Expect a
donation when my wallet recovers from Xmas.
Best regards,
Mark Rendle
> You received this message because you are subscribed to the Google Groups "mongodb-user" group.
> To post to this group, send email to mongod...@googlegroups.com.
> To unsubscribe from this group, send email to mongodb-user...@googlegroups.com.
> For more options, visit this group at http://groups.google.com/group/mongodb-user?hl=en.