Thanks for the communication - this is good. Just curious - with entire businesses being put out of place, and rumors that the Russian Gov't may be behind such attacks, is Twitter communicating with Homeland Security about this? To me this seems like a matter of national security even more than a Twitter issue. The US economy is being attacked because of this. Not to sound too radical - I'm just genuinely curious when the Government is going to get involved. (And thank you for doing what you can - I'm sure I speak for all when I say we feel your pain.)
In curl this can be accomplished with the --location flag. See the man
page for more details.
On Fri, Aug 7, 2009 at 3:27 PM, Justin Hart<onyx...@gmail.com> wrote:
> Does this affect POST as well as GET? The issue here is the way
> clients handle 30x after POST. Most clients (now by convention) do
> not respect the RFC (http://www.w3.org/Protocols/rfc2616/rfc2616-
> sec10.html#sec10.3) and will send a GET after POST always. Some
> clients will respect the method, but not re-post any data. We need to
> be sure we are all expecting the right things.
You are correct that lots of clients now redirect with a GET instead
of a POST; even curl does this, which is a bummer.
The best thing to do in that case is to catch the response code
(without following the redirect automatically) and manually re-attempt
the POST with the new location.
We know it's a pain, but that's one way around the POST->GET problem.
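A rough sketch of that manual re-POST, using Python's urllib; the handler and function names here are my own illustration, not anything from the Twitter API, and the endpoint is whatever URL you're posting to:

```python
import urllib.error
import urllib.request
from urllib.parse import urljoin

class RaiseOnRedirect(urllib.request.HTTPRedirectHandler):
    """Turn every 30x into an HTTPError we can inspect ourselves,
    instead of letting urllib downgrade the POST to a GET."""
    def http_error_302(self, req, fp, code, msg, headers):
        raise urllib.error.HTTPError(req.full_url, code, msg, headers, fp)
    http_error_301 = http_error_303 = http_error_307 = http_error_302

def post_following_redirects(url, data, max_hops=5):
    """POST, and on a redirect manually re-POST the same body to the
    new Location instead of following it as a GET."""
    opener = urllib.request.build_opener(RaiseOnRedirect)
    for _ in range(max_hops):
        req = urllib.request.Request(url, data=data)  # data => POST
        try:
            return opener.open(req)
        except urllib.error.HTTPError as err:
            if err.code in (301, 302, 303, 307) and "Location" in err.headers:
                # Re-attempt the POST at the new location ourselves.
                url = urljoin(url, err.headers["Location"])
                continue
            raise
    raise RuntimeError("too many redirects")
```

The `max_hops` cap is just there so a misconfigured server can't bounce you around forever.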
> I wanted to send everyone an update to let you know what has been
> happening, the known issues, some suggestions on how to resolve them
> and some idea of how to move forward.
This was really appreciated. When the dust clears, maybe one more
suggestion? An API to check on the network status? I think
infrastructure attacks aren't going away anytime soon. We've got a
diversity of applications here, some of which can chew up the bandwidth
pretty well and some of those just don't make sense to run if other
users can't get on-line. Instead of answering, "Is it OK to restart my
cron jobs?" the cron jobs could shut themselves down for increments of
so many hours.
PS - Of course it could be misused, but I think the benefit is net.
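To make the idea concrete, here's a rough sketch of how a cron job could shut itself down in increments of so many hours. The status API being asked for doesn't exist, so `fetch_status` below is a made-up stand-in for it:

```python
import time

def fetch_status():
    """Stand-in for the proposed (hypothetical) network-status API;
    a real version would query a Twitter endpoint."""
    return {"ok": False, "retry_after_hours": 2}

def should_run(state, fetch=fetch_status, now=None):
    """Return True if the cron job may run; otherwise record a
    self-imposed blackout of so many hours in `state`."""
    now = time.time() if now is None else now
    if state.get("sleep_until", 0) > now:
        return False  # still inside the blackout we imposed on ourselves
    status = fetch()
    if status["ok"]:
        return True
    # Shut ourselves down for the increment the API suggests.
    state["sleep_until"] = now + status["retry_after_hours"] * 3600
    return False
```

The point is that the back-off decision moves out of a human answering "is it OK to restart my cron jobs?" and into the job itself.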
When the server is in blackout, simply report to the user that Twitter
is still having ongoing issues, with a note and a link to the Twitter
status page. That keeps them informed as to what is going on without
flooding your inbox or forcing you to respond to every user's email.
If the message in the UI saves you from having to read through and
answer 10 emails, it is worth the work to implement. It's all about
keeping the user informed, just like we like to be informed by Twitter
about what the status is.
Another tip is to make sure you set a reasonable timeout for all your
Twitter requests; otherwise people get stuck waiting for your page to
load and think it's broken. Better to time out and, again, report that
Twitter is still recovering from the DDoS attack and direct them to
the Twitter status page.
While this may seem like unnecessary work if Twitter is going to get
back to normal, it makes your app more robust to issues in the future,
keeps your users better informed as to what is happening (without
contacting you), and also helps reduce the number of requests Twitter
has to ignore while they recover.
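Both tips together might look something like this sketch, again using Python's urllib; the function name and the status-page URL are my own placeholders, not anything official:

```python
import socket
import urllib.error
import urllib.request

# Assumed status-page URL; substitute whatever Twitter's actual status page is.
STATUS_PAGE = "http://status.twitter.com"

def fetch_with_notice(url, timeout=5):
    """Fetch a Twitter resource with a short timeout; on any failure,
    return a user-facing notice instead of letting the page hang."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return {"ok": True, "body": resp.read()}
    except (urllib.error.URLError, socket.timeout):
        return {"ok": False,
                "notice": "Twitter is still recovering from the DDoS attack; "
                          "see " + STATUS_PAGE + " for updates."}
```

The UI then shows `notice` when `ok` is false, which is the "keep the user informed without flooding your inbox" part.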
On that note, it seems our servers are being disabled significantly
less frequently over the past few hours. Here's hoping it lasts. Not
sure if it was the back-off or if Twitter has just managed to cope
better with the traffic issues. Doesn't really matter to me which.
Oh, and thanks for the update Chad, it really is nice just to hear
something, even if it's not great news.