Persistent connections for notifications certainly allay my fears
around latency and throughput for well-behaved hubs.
I don't see any real cost savings, other than a minor bandwidth
reduction on the fan-out. We'd incur significant development cost to
support this alternative approach, as we've already invested in a
solution that suits our needs. We'd probably want to build, or modify
and integrate, a client that fanned out to multiple hubs, defended
against slow and malicious hubs, and provided all the operational
features that we require. We'd also need to maintain per-hub state:
which hubs to connect to, authentication credentials, queue depths,
latency statistics, and so forth. That is roughly as much
infrastructure as the Streaming API itself.
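
To make that concrete, here's a rough Python sketch of the per-hub
bookkeeping I have in mind. Every name here is invented for
illustration; it isn't taken from any existing client:

    import queue
    import time
    from dataclasses import dataclass, field

    @dataclass
    class HubState:
        url: str                      # hub endpoint we push to
        credentials: tuple            # auth material for this hub (form assumed)
        max_queue_depth: int = 10000  # bound the buffer so a slow hub can't eat memory
        outbox: queue.Queue = field(default_factory=queue.Queue)
        latencies: list = field(default_factory=list)

        def enqueue(self, update):
            # Defense against slow hubs: drop rather than buffer without bound.
            if self.outbox.qsize() >= self.max_queue_depth:
                return False  # caller can mark the hub unhealthy and disconnect
            self.outbox.put(update)
            return True

        def record_latency(self, started):
            # Per-hub delivery latency, for the operational statistics above.
            self.latencies.append(time.monotonic() - started)

Each hub would also want its own delivery loop so one misbehaving hub
can't stall the rest, plus reconnection, credential storage, and
monitoring around all of it.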
A major use of the Streaming API is filtered streams. I suspect that
this would be difficult, if not impossible, to support in
PubSubHubbub.
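
For contrast, here's a minimal sketch of what consuming a filtered
stream looks like from the client side, assuming the statuses/filter
endpoint, basic auth, and the third-party requests library; treat the
URL and parameters as illustrative rather than definitive:

    import json
    import requests  # third-party: pip install requests

    STREAM_URL = "https://stream.twitter.com/1/statuses/filter.json"  # assumed path

    def follow_keywords(keywords, user, password):
        # The server evaluates the predicate; the client only sees matches.
        resp = requests.post(
            STREAM_URL,
            data={"track": ",".join(keywords)},
            auth=(user, password),
            stream=True,  # hold the persistent connection open
            timeout=90,   # detect a stalled connection
        )
        resp.raise_for_status()
        for line in resp.iter_lines():
            if line:  # skip keep-alive newlines
                yield json.loads(line)

A hub, by contrast, fans out whole topic feeds; evaluating an
arbitrary per-subscriber predicate like this would have to happen
inside the hub.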
The argument about standards isn't particularly compelling. The
protocol may be documented, but it isn't a ubiquitous, must-have
standard.
The Streaming API is probably the service that is easiest for us to
replicate for availability during a DDoS.
Technically, someone could build a service to consume from the
Streaming API and push into PubSubHubbub. This would be against the
EULA, though.
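
Purely to illustrate the shape of such a bridge (not something we'd
condone), here's a sketch: the hub and topic URLs are placeholders,
append_to_feed is a helper I'm inventing, and the hub.mode=publish
ping follows the PubSubHubbub publisher notification:

    import requests  # third-party: pip install requests

    HUB = "https://hub.example.com/"                # placeholder hub endpoint
    TOPIC = "https://bridge.example.com/feed.atom"  # placeholder topic feed URL

    def publish_ping():
        # PubSubHubbub publisher notification: tell the hub the topic changed;
        # the hub then re-fetches the feed and fans it out to subscribers.
        resp = requests.post(HUB, data={"hub.mode": "publish", "hub.url": TOPIC})
        resp.raise_for_status()

    def bridge(statuses, append_to_feed):
        # append_to_feed: assumed helper that writes a status into the Atom
        # document served at TOPIC.
        for status in statuses:
            append_to_feed(status)
            publish_ping()

Which is exactly the point: once statuses flow through a hub like
this, the controls below disappear.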
The elephant in the room is control of the data, at any volume. This
isn't about 100% vs. 1%; rather, Twitter doesn't allow re-syndication.
With the Streaming API, we can require various implicit and explicit
licensing terms for various levels of access, disable malicious
actors, and so forth. By pushing out to a hub, we'd cede control over
these issues.
On Aug 9, 9:35 am, Jesse Stay <jesses...@gmail.com> wrote: