Ibo -
[Firebase founder here]
The Firebase Database service is built fairly differently from most web services, since it uses long-lived, stateful network connections (usually websockets) instead of normal HTTP traffic. This precludes the use of typical load-balancing techniques; instead, we terminate each websocket on a single “shard” on our side.
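To make that concrete from the client’s side, here’s a rough sketch (TypeScript-flavored, against our JS SDK; the app URL and data are just placeholders): the SDK opens a single persistent websocket the first time you attach a listener or write data, and every later read and write from that client is multiplexed over the same connection rather than sent as separate HTTP requests.

    // Assumes the Firebase JS SDK script is loaded; the URL below is a placeholder.
    declare const Firebase: any; // the SDK provides this global at runtime

    const ref = new Firebase("https://your-app.firebaseio.com/messages");

    // Attaching a listener opens one long-lived websocket, terminated on a
    // single shard on our side; it stays open for the life of the client.
    ref.on("value", (snapshot: any) => {
      console.log("current data:", snapshot.val());
    });

    // Writes reuse that same open connection instead of issuing new HTTP requests.
    ref.push({ text: "hello" });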
This has a couple of implications.
First, it means that if the shard your app is assigned to goes down, your app’s backend goes down. We have the ability to move connections between shards (and we do this regularly to balance load across our service), but unfortunately that currently requires manual intervention by our ops team, which means that if we lose a server, you’ll see a short period of downtime until we re-balance traffic. We’re working to make that failover automatic, though it’ll be a while still.
Second, it does mean that scaling beyond some (generally quite large) limits requires manual work. We power some very large sites and apps, including Twitch.tv and apps from CBS, and the vast majority of our customers never hit any of our scaling limits. For customers that do run into them, we shard those customers across multiple app instances, which does (today) mean some manual work on both sides.

One note you may find helpful: we generally scale quite well to very large numbers of connected devices (100k+ simultaneous connections), but we are more limited in I/O throughput. So if you’re building an app that you expect will write a lot of data, I’d suggest you ping us beforehand so we can discuss the specifics and make sure we’re prepared.
I admit that the wording on the website doesn’t reflect reality very well right now. While we do scale, it’s not quite as automatic today as we’d like. We’ll get the website fixed.
I hope that helps you understand our capabilities and limitations better -
-Andrew