OK, so I have an ad-network background and can probably speak to these
points. (And I'm talking about experience with > 500M records/day.)
The MySQL method (#1) will suck for a variety of reasons. The biggest
issue you're going to face is that MySQL doesn't really scale
horizontally. You can't put 500M records on 10 servers and then get a
reasonable roll-up. What you end up having to do is bulk inserts and
roll-ups on each server. You then pass those roll-ups on to another
server, which then summarizes the data. (Sound familiar?)
Of course, once you've designed this whole process of "bulk insert ->
summarize -> export -> bulk insert -> summarize", you come to this
great realization... you've built a map-reduce engine on top of MySQL
(yay!).
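To make that concrete, here's a rough sketch of the pattern in Python. Everything here is a stand-in: the `events` and `rollups` tables, the column names, and the MySQLdb driver are whatever your stack actually uses.

```python
# A minimal sketch of "roll up per server, then re-summarize centrally".
# Assumes hypothetical events(client_id, campaign_id, clicks) and
# rollups(client_id, campaign_id, clicks) tables already exist.
import MySQLdb

ROLLUP_SQL = """
    SELECT client_id, campaign_id, SUM(clicks) AS clicks
    FROM events
    GROUP BY client_id, campaign_id
"""

def rollup(host):
    """The 'map' step: summarize one server's raw events locally."""
    conn = MySQLdb.connect(host=host, db="stats")
    cur = conn.cursor()
    cur.execute(ROLLUP_SQL)
    rows = cur.fetchall()
    conn.close()
    return rows

def summarize(shard_hosts, summary_host):
    """The 'reduce' step: bulk-insert each server's roll-up into a
    central server and sum the partial sums there."""
    conn = MySQLdb.connect(host=summary_host, db="stats")
    cur = conn.cursor()
    for host in shard_hosts:
        cur.executemany(
            "INSERT INTO rollups (client_id, campaign_id, clicks) "
            "VALUES (%s, %s, %s)",
            rollup(host),
        )
    conn.commit()
    # The final summary is just another GROUP BY over the partial sums.
    cur.execute("""
        SELECT client_id, campaign_id, SUM(clicks)
        FROM rollups
        GROUP BY client_id, campaign_id
    """)
    return cur.fetchall()
```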
So now you get to options #2 & #3. Hadoop will definitely work for this
problem. I know of working installations that not only do stats, but
also do targeting using Hadoop. So this is indeed a known usage
for Hadoop. Of course, if you want to query the result, you'll have to
dump those output files somewhere (typically into an RDBMS).
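For flavor, here's what a minimal stats job looks like under Hadoop Streaming, with the mapper and reducer in one Python script selected by the first argument. The tab-delimited log layout (campaign id in column 0, click count in column 3) is hypothetical. You'd launch it with something like `hadoop jar hadoop-streaming.jar -input logs/ -output daily/ -mapper 'stats.py map' -reducer 'stats.py reduce' -file stats.py`.

```python
#!/usr/bin/env python
# Sketch of a Hadoop Streaming stats job: mapper and reducer in one
# script, chosen by the first command-line argument.
import sys

def mapper():
    # Emit "campaign_id <tab> clicks" for every raw log line on stdin.
    for line in sys.stdin:
        fields = line.rstrip("\n").split("\t")
        print("%s\t%s" % (fields[0], fields[3]))

def reducer():
    # Streaming sorts mapper output by key, so equal keys arrive
    # adjacent: sum each run and emit one total per campaign.
    current, total = None, 0
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t")
        if key != current:
            if current is not None:
                print("%s\t%d" % (current, total))
            current, total = key, 0
        total += int(value)
    if current is not None:
        print("%s\t%d" % (current, total))

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```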
Where does Mongo fall in this?
That's a little more complex. Well, with the release of auto-sharding,
it's definitely looking like MongoDB can handle this type of
functionality. The fact that map-reduce is "single-threaded" can be
limiting in terms of performance (or you can just use more, smaller
servers). However, the big win that Mongo gives you is data
flexibility.
You say that you have 50 fields, but are they always present?
(probably not)
Do they change? (probably with every client)
Do you query against all of them all of the time? (probably not)
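To make that concrete, a quick sketch (modern pymongo; the client and field names are invented) of two differently-shaped records landing in the same collection with zero schema work:

```python
# Sketch: records with different field sets from two clients going
# into the same collection, no schema migration required.
from pymongo import MongoClient

events = MongoClient()["stats"]["events"]

events.insert_one({"client": "acme", "campaign": 7, "clicks": 3,
                   "geo": "US", "referrer": "example.com"})
events.insert_one({"client": "globex", "campaign": 2, "clicks": 1,
                   "device": "mobile"})  # different fields, same collection

# Query only the fields you care about; documents missing a field
# simply don't match.
print(events.count_documents({"geo": "US"}))
```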
So the benefit Mongo has is that it can accept all of your data, run
your map-reduces, and the output lands in a collection that you can
query some more (holy smokes!). Or you can mongoexport the reduced
data to CSV and quickly drop it into MySQL for the finishing touches
on your reports. Mongo doesn't need a "data dictionary" or all kinds
of metadata to parse your input files.
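Here's a sketch of that flow with pymongo and server-side JavaScript. The collection, field, and output names are made up, and on today's servers you'd likely reach for the aggregation pipeline instead, but map-reduce was the tool at the time:

```python
# Sketch: run a map-reduce whose output is itself a queryable
# collection (hypothetical "events" collection with campaign/clicks).
from pymongo import MongoClient
from bson.code import Code

db = MongoClient()["stats"]

mapper = Code("function () { emit(this.campaign, this.clicks || 0); }")
reducer = Code("""function (key, values) {
    var total = 0;
    values.forEach(function (v) { total += v; });
    return total;
}""")

db.command("mapReduce", "events", map=mapper, reduce=reducer,
           out="clicks_by_campaign")

# The output is just another collection, so keep querying it:
for doc in db["clicks_by_campaign"].find().sort("value", -1).limit(10):
    print(doc["_id"], doc["value"])
```

From there, something like `mongoexport --db stats --collection clicks_by_campaign --type=csv --fields _id,value` gets you the CSV mentioned above.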
Oh, and it's relatively easy to configure.
Picking MongoDB over other similar tools is really about learning
curve and ramp-up time. MongoDB is relatively simple compared to many
of the other tools out there (like Hadoop). Give me sudo, ssh, and 10
boxes and we can have an auto-partitioned cluster available in
probably 30 minutes. Mongo can clock in at tens of thousands of
writes/sec (especially across shards, even on "commodity hardware").
So you're talking about an hour or two to insert 50M records and start
running sample roll-ups.
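For the skeptical: the "auto-partitioned cluster" part boils down to a handful of admin commands once mongos, the config servers, and the shard mongods are up. The hostnames and shard key below are invented for illustration:

```python
# Sketch: turning a set of running mongod hosts into a sharded cluster
# via admin commands issued through mongos.
from pymongo import MongoClient

admin = MongoClient("mongos-host:27017").admin  # connect to mongos

for host in ["shard1.example.com:27018", "shard2.example.com:27018"]:
    admin.command("addShard", host)

admin.command("enableSharding", "stats")
# Pick a shard key that spreads writes across shards.
admin.command("shardCollection", "stats.events",
              key={"client": 1, "campaign": 1})
```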
So for the cost of about a day, you can actually test if MongoDB is
right for you. That means a lot to me :)
On Aug 29, 10:02 am, Eliot Horowitz <eliothorow...@gmail.com> wrote:
> Yes - there are people with many billion documents.
>
> On Fri, Aug 27, 2010 at 4:08 PM, Bill de hÓra <b...@dehora.net> wrote:
>
> > On Fri, 2010-08-27 at 12:15 -0400, Michael Dirolf wrote:
> > > On Fri, Aug 27, 2010 at 12:13 PM, Ankur <ankursethi...@googlemail.com> wrote:
> > > > Sounds like you also have to deal with the storage layer as a separate
> > > > problem if you are going to have billions of records. You probably
> > > > want something that can scale simply. Mongo may not be an ideal fit
> > > > because of your constantly increasing data. You may want something
> > > > where you can just add nodes to increase storage very simply without
> > > > manually partitioning your data. I believe you have to manually
> > > > partition with Mongo.
>
> > > Version 1.6 of MongoDB adds production-ready support for auto-sharding
> > > (read: auto-partitioning). So no, you don't need to manually partition,
> > > and adding new shards is easy.
>
> > Tested for the said billions of documents?
>
> > Bill