Complete corpus search: The Search API is focused on result-set quality and does not guarantee that all matching tweets are returned; search results are increasingly filtered and reordered for relevance. Complete results are available only on the Streaming API.
Lower latency results: From tweet creation to delivery on the API, latency is usually under a second.
Predictable rate limits: Streaming is built on well-defined elevated access roles, which eliminates the need for client-side rate-limit-avoidance heuristics.
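To illustrate the heuristics that REST polling forces on clients, here is a minimal sketch of one common pattern: exponential backoff after consecutive "too many requests" (HTTP 429) responses. The function name and parameters are hypothetical, not part of any Twitter API; a long-lived streaming connection with a defined access role makes this kind of guesswork unnecessary.

```python
# Hypothetical sketch of client-side rate-limit avoidance for a REST poller:
# exponential backoff, capped, after consecutive HTTP 429 responses.

def backoff_delays(failures, base=1.0, cap=60.0):
    """Delays (seconds) a polling client waits after each consecutive
    rate-limit response: base * 2^n, capped at `cap`."""
    return [min(cap, base * 2 ** n) for n in range(failures)]

# After enough consecutive 429s, the client is pinned at the cap and is
# effectively guessing at the server's real limits.
delays = backoff_delays(8)
```

The streaming model replaces this guesswork with a known, role-based capacity, so the client never has to infer limits from error responses.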
Higher peak capacity: During a peak event, when tweet volume spikes, the Streaming API is less likely to fall behind or begin aggressive rate limiting. The risk that a client's peak load triggers emergency blacklisting is also reduced.
More consistent results: Hosting a continuously updated REST API on a large cluster inevitably introduces temporal skew in results due to internal propagation delays. Long-lived streaming connections largely eliminate this issue.
More efficient: Bandwidth and processing are not wasted on identical results, and repetitive and long-tail queries are handled more efficiently by the Streaming architecture.