Meteor Application Performance with >100k users in a single app?


Ben Hammond

Aug 10, 2014, 8:12:50 PM
to meteo...@googlegroups.com
Considering Meteor for a redesign of a project at work. Need some practical performance numbers for scaling out to ~100k simultaneous users. Anyone working on this scale? If so, what is your infrastructure setup? How many webheads and mongoheads?

Serkan Durusoy

Aug 11, 2014, 7:30:33 AM
to meteo...@googlegroups.com
Hi Ben,

This thread - https://groups.google.com/forum/#!topic/meteor-talk/Y547Hh2z39Y - might be interesting for you.

Cheers,
Serkan

Ben Hammond

Aug 11, 2014, 11:34:04 AM
to meteo...@googlegroups.com
I'll check it out, thanks!




Abigail Watson

Aug 11, 2014, 1:16:02 PM
to meteo...@googlegroups.com
We push bursts of thousands of users through our Express server, which writes to our Mongo server, whose data we then display through our Meteor app.  We're not doing 100k yet, but we have a contract with a company that's likely to put that kind of load through our system, so we're actively designing and testing for it.

So far, setting up our Mongo cluster has turned out to require no fewer than 11 servers for a fully sharded configuration.  Each shard is a replica set of 3 servers, and on top of the shards you need 3 config servers and at least 2 mongos routing boxes; with the 2-shard minimum that makes sharding meaningful, that's 6 + 3 + 2 = 11.  So database scaling starts at 11 boxes, and increments in threes as you add shards: 14 boxes, 17, 20, 23, etc.
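For reference, the wiring looks something like this from a mongos shell, with made-up hostnames and database/collection names standing in for real ones:

  // Run against a mongos router; each shard is a 3-member replica set.
  sh.addShard("rs0/mongo-rs0-a.example.com:27017,mongo-rs0-b.example.com:27017,mongo-rs0-c.example.com:27017");
  sh.addShard("rs1/mongo-rs1-a.example.com:27017,mongo-rs1-b.example.com:27017,mongo-rs1-c.example.com:27017");
  sh.enableSharding("appdb");
  // Pick a shard key with enough cardinality to spread writes evenly.
  sh.shardCollection("appdb.events", { userId: 1 });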

Within each replica set member, the on-disk data files grow up to a 2GB maximum apiece (the chunks the balancer migrates between shards default to 64MB).  Each box can have as much vertical storage as you can squeeze into it, but 3x replication means two thirds of your raw disk goes to redundant copies. RAID5 striping this is not.  So, if you purchase 6 servers with 1TB each, you'll wind up with a 2TB sharded database across 2 replica sets. You'll just need to look at your current usage patterns, and do the math.

If each user only generates 1KB of configuration data, you can probably put 100k users on a single replica set of 3 boxes.  If they're generating 1KB of data per visit, however, you're probably going to need the 11-box configuration.  If they're generating 1MB of data per visit, you're probably going to need more than 11 boxes.  And that's just for the database layer.
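The napkin math itself is the easy part; the guesses you feed it are what matter.  Something like this in a mongo shell, with every number a placeholder for your own usage patterns:

  // Back-of-the-envelope storage estimate; every figure is a guess.
  var users = 100 * 1000;
  var bytesPerVisit = 1024;                  // 1KB written per visit
  var visitsPerUserPerMonth = 30;
  var rawPerYear = users * bytesPerVisit * visitsPerUserPerMonth * 12;
  var onDisk = rawPerYear * 3;               // 3 copies per replica set
  print((onDisk / Math.pow(1024, 3)).toFixed(1) + " GB/year on disk");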

If you have an existing Mongo installation, beware backwards-compatibility issues around the ObjectId timestamp.
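One concrete gotcha, if it's the one you hit: Meteor generates random string _ids by default instead of ObjectIds, so anything in a legacy app that leans on the creation time embedded in an ObjectId deserves a second look:

  // Mongo shell: the first 4 bytes of an ObjectId encode creation time.
  var legacyId = ObjectId("53e8d2a10000000000000000");  // made-up id
  legacyId.getTimestamp();  // ISODate recovered from those 4 bytes
  // Meteor's default string _ids carry no timestamp at all; passing
  // idGeneration: 'MONGO' when defining a collection restores ObjectIds.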

Once your horizontal database layer is in place, you'll need to horizontally scale the application layer.  Oplog tailing needs to be turned on for that, of course, and you'll also need to figure out what to do with sticky sessions. There's experimental redis support available now, in case you want to replace your load balancers and sticky sessions with an in-memory redis store, and people are starting to look at how to configure Meteor apps to read off secondaries.  Also, a DDP proxy tier is on the roadmap, and people are upvoting nginx support.  The redis and nginx servers serve much the same function as the database config servers and routers.
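For the oplog piece, each Meteor webhead gets pointed at the replica set's local database alongside the app database, something like this (hostnames hypothetical, and note that oplog tailing wants a direct replica set connection, not a mongos):

  export MONGO_URL="mongodb://mongo-rs0-a.example.com,mongo-rs0-b.example.com/appdb?replicaSet=rs0"
  export MONGO_OPLOG_URL="mongodb://mongo-rs0-a.example.com,mongo-rs0-b.example.com/local?replicaSet=rs0"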

So, between a redis box to coordinate sessions and an nginx proxy tier, you have at least 3 more servers.  And then you can start thinking about the actual user volume each webhead can handle.  There are lots of stories of people handling 20k to 40k users per box with RESTful Express apps on Node.  But Meteor uses websockets, streams, and DDP, where every connected client holds a socket open and the server keeps per-client state.  So, being conservative, that's what?  5k or 10k users per box?

So, 11+ mongoheads, 1 redis box, 2 nginx boxes, and 10 to 20 webheads?  You're talking 24 to 34 boxes, the better part of a rack, for 100k users.  Of course, you can squeeze that down, scale it up, or distribute it sideways, depending on what premiums you're willing to pay for miniaturization, heating/cooling, storage area network, virtualization, etc.
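Same napkin, totaled up, with conservative guesses throughout:

  // Rough box count for ~100k simultaneous users.
  var mongo = 11, redis = 1, nginx = 2;
  var webheads = Math.ceil(100000 / 5000);              // 20 at 5k users/box
  print(mongo + redis + nginx + webheads + " boxes total");  // "34 boxes total"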

tl;dr - It's similar math to most other big installations, but with the possibility of getting upwards of 40k users per box rather than 5k.  More importantly, all the pieces are in place to grow horizontally to 200k, 500k, or more users.

Ben Hammond

Aug 11, 2014, 2:16:50 PM
to meteo...@googlegroups.com
Fantastic. Thank you for the write-up, Abigail; this corresponds to my napkin-math assumptions quite nicely.

11 mongoheads and 15 webheads per 100k users with load balancing is going to be my target when I approach my infrastructure department.


--

Abigail Watson

Aug 11, 2014, 2:35:11 PM
to meteo...@googlegroups.com
Glad it was of help.  Don't forget to requisition an aquarium when you contact your infrastructure dept.  ;)


Ben Hammond

Aug 11, 2014, 4:06:36 PM
to meteo...@googlegroups.com
I've always wanted a statue in my honor... Maybe something that breathes fire.

