Please let us know if you foresee any ghastly issues with this change.
It won't go into production until early next week at the soonest.
--
Alex Payne
http://twitter.com/al3x
No. The AT&T network has multiple IP exit points, and the iPhone does not use
a proxy.
--
------------------------------------ personal: http://www.cameronkaiser.com/ --
Cameron Kaiser * Floodgap Systems * www.floodgap.com * cka...@floodgap.com
-- The only thing to fear is fearlessness -- R. E. M. -------------------------
We are building a web-based twitter reader. In particular, that means
we are proxying all requests from the user (which is a win for you,
because we're caching things like friendship relationships, tweets
read by multiple users...). The calls we make that currently don't
require authentication (e.g. getting lists of who someone follows,
getting status on the "@foo" references in a tweet) are made on behalf
of multiple twitter users. We're going to blow past 100 requests per
hour without even trying.
So yes... I have ghastly issues with the change.
This isn't even just a matter of our exceeding the limits. Writing
code to work inside of them is as much of an issue. For instance, we
are caching tweets. When we fetched the tweet, we got the URL for the
user's image. But that gets stale. So we need to periodically query so
that when the user looks at old tweets, they don't get broken icons.
We can be (and are) smart about updating the info when we see a new tweet
from the user, but that doesn't handle all the cases. Previously
getting that info was a "free" call. You're making it have a cost. So
now we need to figure out how to juggle those calls and spread them
out so we don't exceed the limit. It's a nice barrier-to-entry for
other developers, but frankly, I'd rather work on features. These
constant changes are making it very difficult to develop applications,
let alone plan ahead.
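The staleness problem described above could be sketched like this (illustrative Python only; `fetch`, the field names, and the TTL are assumptions, not Twitter's actual API): cache each user's profile data with a time-to-live, refresh it for free whenever a proxied tweet carries it, and spend an API call only when an entry has gone stale.

```python
import time

PROFILE_TTL = 6 * 3600  # assumed budget: refresh icons at most every 6 hours

class ProfileCache:
    def __init__(self, fetch, clock=time.time):
        self.fetch = fetch    # fetch(screen_name) -> profile dict (costs 1 API call)
        self.clock = clock
        self.entries = {}     # screen_name -> (profile, fetched_at)

    def note_tweet(self, tweet):
        # A tweet we proxied anyway includes fresh profile data: cache it at no cost.
        user = tweet["user"]
        self.entries[user["screen_name"]] = (user, self.clock())

    def get(self, screen_name):
        entry = self.entries.get(screen_name)
        if entry and self.clock() - entry[1] < PROFILE_TTL:
            return entry[0]   # still fresh, no API call spent
        profile = self.fetch(screen_name)
        self.entries[screen_name] = (profile, self.clock())
        return profile
```

Even so, users who stop tweeting still need the occasional paid refresh, which is exactly the budgeting headache described above.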
I also agree with others who point out that proxies and NATs are going
to cause problems for you. Although I actually suspect the major ones
won't be with ISPs, but with companies with multiple Twitter users.
One suggestion. It won't help us very much, but it might help some of
the other cases: make the limit per-user as well as per-IP. Which is to
say, give a user 100 authenticated calls (the current set), plus 100
didn't-have-to-be-authenticated-but-are calls (the new set). Odds are that
most clients are making all on-behalf-of-a-user calls authenticated
anyway--even if they don't have to be. When that is the case, you
don't really care about the IP address.
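That suggestion could be sketched as follows (an illustrative fixed-window counter, not Twitter's actual accounting): authenticated calls draw from a per-user bucket regardless of IP, and only unauthenticated calls count against the source IP.

```python
import time
from collections import defaultdict

WINDOW = 3600   # one-hour windows, as in the announced policy
LIMIT = 100

class DualQuota:
    def __init__(self, clock=time.time):
        self.clock = clock
        self.counts = defaultdict(int)   # (kind, key, window) -> request count

    def allow(self, ip, user=None):
        window = int(self.clock()) // WINDOW
        # Authenticated calls use the per-user bucket and ignore the IP,
        # so many users behind one NAT don't collide.
        key = ("user", user, window) if user else ("ip", ip, window)
        if self.counts[key] >= LIMIT:
            return False
        self.counts[key] += 1
        return True
```

Under this scheme a client that authenticates everything never touches the shared per-IP bucket at all.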
Kee Hinckley
CEO/CTO Somewhere Inc.
Somewhere: http://www.somewhere.com/
TechnoSocial: http://xrl.us/bh35i
I'm not sure which upsets me more: that people are so unwilling to
accept responsibility for their own actions, or that they are so eager
to regulate those of everybody else.
Evan
--
Evan Weaver
If you can't do it, that's fine, we all just have to work something out via
par...@twitter.com as you suggested before.
Thanks for the heads up.
E.B.
Can't you get that user data inline from the tweets?
Evan
--
Evan Weaver
Have you actually confirmed that you get more than 1200 tweets an hour
by doing that? Last month Alex confirmed[1] that they cache the public
timeline API response every 60 seconds, so it shouldn't be possible to
get more than 1200 an hour (60 refreshes of a 20-status response), and
hitting it 3000 times is a massive waste of your users' resources as
well as Twitter's.
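A quick simulation of that arithmetic (assuming, per the thread, a 60-second cache and 20 statuses per response): polling 3000 times an hour can see no more unique tweets than polling once per cache refresh.

```python
CACHE_SECONDS = 60           # Alex: the public timeline is cached for 60s
STATUSES_PER_RESPONSE = 20   # a public_timeline response carries 20 statuses

def unique_tweets_per_hour(polls_per_hour):
    """Upper bound on distinct tweets seen by polling the cached endpoint."""
    seen = set()
    for i in range(polls_per_hour):
        t = i * 3600 // polls_per_hour   # second at which this poll fires
        refresh = t // CACHE_SECONDS     # which cached copy this poll receives
        # Every poll within one cache window sees the same 20 statuses.
        seen.update((refresh, s) for s in range(STATUSES_PER_RESPONSE))
    return len(seen)
```

Both 60 and 3000 polls an hour top out at the same 1200-tweet ceiling; the extra 2940 requests buy nothing.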
Alex/Evan: Any word on when the Jabber feed will be open to all?
-Stut
However, please note that the public timeline does not have all tweets
even when uncached.
Sorry for the disconnect. I definitely understand your frustration.
Evan
--
Evan Weaver
No, don't get discouraged. That's why we're having this conversation
before and not after.
However, I'm confused as to whether you guys think that you're getting
a full tweet stream or not.
Evan
--
Evan Weaver
I am going to talk to Evan Williams about this and get back to you.
Evan
--
Evan Weaver
We will allow some exceptions, but this way at least we will know
about them, instead of getting hammered by anyone who wants to do 7000
requests an hour without warning.
There's been a lot of discussion around providing an alternative to
password-based authentication. Please search the group for "OAuth".
It's coming later this year.
--
Alex Payne
http://twitter.com/al3x
As we hope you've noticed, the site has been feeling much snappier for
both web and API requests over the past several days, despite the
increased rate limit. In our continued effort to keep things fast and
prevent abuse we're planning on introducing rate-limiting by IP for
unauthenticated API requests. We'll allow 100 unauthenticated
requests per IP per hour, just as we currently do with authenticated
requests.
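As a rough sketch, the announced policy amounts to a fixed-window counter per source IP (illustrative only; Twitter's real implementation and window boundaries may differ).

```python
from collections import defaultdict

def make_limiter(limit=100, window=3600):
    """Allow `limit` unauthenticated requests per IP per `window` seconds."""
    counts = defaultdict(int)   # (ip, window index) -> requests seen
    def allow(ip, now):
        key = (ip, int(now) // window)
        if counts[key] >= limit:
            return False        # over the hourly budget: reject with an error
        counts[key] += 1
        return True
    return allow
```

This is the same shape as the existing authenticated limit, just keyed on the caller's IP instead of the account.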
Postel's Law: "Be conservative in what you do, be liberal in what you
accept from others."
If the data "breaks" your app, then your application is broken: it's a
defect in your app, and you should fix it.
--
Dossy Shiobara | do...@panoptic.com | http://dossy.org/
Panoptic Computer Network | http://panoptic.com/
"He realized the fastest way to change is to laugh at your own
folly -- then you can let go and quickly move on." (p. 70)
+1.
1. No timeline has been decided yet.
2. We're working on a better, clearer process for API consumer rate
exceptions. Because it requires coordination with business
development, it might be another week or two at least.
Evan
--
Evan Weaver
Be *really* sure the error refers to unauthenticated requests. That's
exactly the message you get if you exceed the 100/hr limit on
*authenticated* requests. I suspect you're mistaken.
- Ed Finkler
On Jul 17, 2008, at 8:18 AM, jstrellner <jstre...@urltrends.com>
wrote:
--
Alex Payne
http://twitter.com/al3x
That's pure speculation. Leave it.
--
Ed Finkler
Hanlon's razor: "Never attribute to malice that which can be adequately
explained by stupidity."
:-)
We'll be announcing our partnership with Gnip today or early next
week, so that's another option for keeping up to date with the
activities of a large set of Twitter users. They expect to be able to
relay our XMPP feed in the near future as well, though that will
require an agreement signed by Twitter, Gnip, and the developer
consuming the feed.
Evan and I are engineers at Twitter; our job is to support the
technology, and by extension to support this community. While Twitter
has a fairly open internal conversation about our business direction,
at the end of the day we're not the ones making those decisions.
Clearly, there are some applications that directly compete with
Summize, now Twitter Search. The entire ecosystem of applications
that do "datamining, trends analysis, following conversations" and so
forth is not doomed, however.
When Apple and Intel form a partnership, they don't do so via
conversations on public forums. If you're trying to build a business
atop the Twitter API, the appropriate course of action is to talk to
us privately about an agreement between our two businesses so we can
provide you with the support and quality of service that your
application requires.
We'll be writing more on this topic at the Twitter Technology Blog.
--
Alex Payne
http://twitter.com/al3x
Not wanting to beat a dead horse, but I just took a look at the Gnip
API. It looks like that is just a feed of the Activity events and not
a full feed. The developer will still have to go back to the Twitter
API to get the full body of the tweet. This would still run up against
the API rate limit, whether authenticated or unauthenticated, correct?
-steve