
Public API Rate Limiting


Matthew Hughes

Oct 25, 2017, 1:58:47 AM
to Accelo Developers
Hi All,

We are in the process of introducing rate limiting to our public API. The idea is to keep the quality of service provided by the API consistently high while still allowing reasonable levels of use. We are currently aiming to set the limit at 5000 requests per hour per deployment, excluding requests sent to the "/oauth2/" endpoints.

For the first stage of this implementation we will not be enforcing the rate limit; we will simply monitor usage and note any deployments that exceed it. Before we enforce the limit we will contact those deployments to see whether their usage can be reduced, or whether our limit needs adjustment. We have added the "X-RateLimit-Limit", "X-RateLimit-Remaining", and "X-RateLimit-Reset" headers to help you track your usage. More information can be found in the API documentation.
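For example, a client can check these headers after each response and back off once the remaining allowance hits zero. A rough sketch in Python (the deployment URL and token are placeholders, and "X-RateLimit-Reset" is assumed here to be a Unix timestamp; see the documentation for the exact format):

import time

import requests

def get_with_rate_limit(session, url, **kwargs):
    # Issue a GET; if the allowance is used up, sleep until the window resets.
    response = session.get(url, **kwargs)
    remaining = int(response.headers.get("X-RateLimit-Remaining", "1"))
    reset_at = int(response.headers.get("X-RateLimit-Reset", "0"))
    if remaining <= 0:
        time.sleep(max(reset_at - time.time(), 0))
    return response

# Placeholder deployment URL and token -- substitute your own.
session = requests.Session()
session.headers["Authorization"] = "Bearer <access_token>"
companies = get_with_rate_limit(session, "https://example.api.accelo.com/api/v0/companies")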

bsu...@noojee.com.au

Oct 26, 2017, 12:38:08 AM
to Accelo Developers
5000 an hour is going to be a real problem.

I'm in the process of developing an app, and during the debug/test cycle I'm regularly making large numbers of API calls.

To work, the application has to cache a large chunk of Accelo data, as the Accelo REST endpoints are too slow to use without caching.

Each restart of my app makes 1-2K API calls!

Geoff McQueen

Oct 28, 2017, 10:46:26 PM
to accel...@googlegroups.com
Unfortunately, we need to make sure the service experience is great for all of our users, and this means rate limiting. Making 2K API calls on each restart isn't something we will be accommodating.




--
Geoff McQueen
Founder & CEO



bsu...@noojee.com.au

Oct 29, 2017, 6:27:12 PM
to Accelo Developers
Then maybe I'm doing something wrong, but I've just reviewed my code and it doesn't look like I'm doing anything particularly aggressive.

Firstly, I should correct my numbers. I've just pulled some hard data and the system makes about 600 calls on startup. I think this number is lower than my initial 2K estimate partly due to the addition of the new 'and'/'or' clauses.

When using the API I'm seeing typical response times of 300ms, with some queries blowing out to 1 second.

To put this in context, that means displaying just 10 tickets on screen is going to take 3-10 seconds (assuming you don't need to pull any activity/company/contact/staff data, which is unlikely).

As a result, we came to the conclusion that we need to heavily cache the Accelo API results.

The system I'm building is designed to allow me to do some quick reviews of contracts and take some specific actions.

The opening screen is designed to show a summary of all active contracts and the total work done MTD and last month.

We have 60 contracts in our system, and it takes the above-mentioned 600 calls to accumulate just this data.

Once this initial burst of calls is complete, my system makes very few calls to the Accelo API thanks to the heavy caching.

There are two problems here:
1) During the development cycle you can see how a few restarts will quickly eat into a 5K hourly budget.

2) Whilst we are heavily caching the results, the cache has a 'time to live' of 10 minutes to ensure that the data is relatively fresh. We can probably play with the cache 'time to live', but with a number of other screens we are building, I'm concerned that we may get precariously close to the 5K limit in normal production use.

So here is my suggestion, which will probably hurt my development cycles even further but will probably be fairer for everyone:

Don't make it a limit 'per hour' but 'per second'.

The problem with your 5K limit is twofold:

1) It won't protect your system. It would be quite easy to put a client together that attempts to make the whole 5K calls within a single second, which would most likely crash your system.

2) The hard limit will stop client applications from working altogether for the rest of the hour.

Instead, set a limit of, say, 100 calls per second.

This does two things:
1) It stops any single user making enormous numbers of calls at the start of a period and bringing your system to its knees.
2) It allows any client application to pace itself and continue to operate rather than coming to a grinding halt.
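To make the pacing idea concrete, here is a rough client-side sketch of what I mean (not Accelo-specific, purely illustrative):

import threading
import time

class Pacer:
    # Client-side pacing: never exceed max_per_second calls, but never stop entirely.
    def __init__(self, max_per_second):
        self.interval = 1.0 / max_per_second
        self.lock = threading.Lock()
        self.next_slot = time.monotonic()

    def wait(self):
        with self.lock:
            slot = max(self.next_slot, time.monotonic())
            self.next_slot = slot + self.interval
        # Sleep outside the lock so other threads can reserve their own slots.
        time.sleep(max(slot - time.monotonic(), 0))

pacer = Pacer(max_per_second=100)
# Calling pacer.wait() before each API request keeps the client under the cap
# while letting it keep working instead of being locked out for an hour.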

My two cents' worth.

Brett



bsu...@noojee.com.au

Oct 29, 2017, 7:13:25 PM
to Accelo Developers
I should point out that the 300ms timings assume that I have a warmed-up HTTPS connection.

If I have to start by establishing an HTTPS connection, then the transaction times are in the order of 2 seconds.
All my above-mentioned timings re-use an established HTTPS connection.

The implication is that making single one-off API calls is expensive for both the client and the Accelo servers.
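For anyone else measuring this, a minimal example of what I mean by re-using a connection (the endpoint and token are placeholders):

import requests

# One Session keeps the underlying HTTPS connection alive between calls,
# so only the first request pays the roughly 2 second handshake cost.
session = requests.Session()
session.headers["Authorization"] = "Bearer <access_token>"

for ticket_id in (100, 101, 102):  # placeholder ids
    response = session.get(f"https://example.api.accelo.com/api/v0/issues/{ticket_id}")
    response.raise_for_status()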

Kurt Wagner

Oct 29, 2017, 11:13:59 PM
to accel...@googlegroups.com
Hi Brett,

You make some good points.

The rate limiting is done at the application level to encourage the best use of the API; it will not prevent someone from flooding it. If someone's use case exceeds the limit, then knowing why, and improving the experience where feasible, is in our best interests. For example, a missing filter, field or endpoint may be forcing you to paginate through entire collections of objects to locate what you require. That may be something we can improve.

The team did discuss implementing the 5000/hour as a basic token bucket refilled per minute: you may accumulate a maximum of 5000 tokens and are allocated 5000/60 tokens per minute, with each token allowing one request. This avoids a deployment being locked out for up to an hour, since in the next minute it can make a few more requests (and there is less need for the client to implement its own throttling based on remaining requests). However, initially we decided to go with the simplest approach on our end: a per-hour limit. If this negatively impacts a lot of users we will reconsider.
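In rough pseudo-code, the bucket we discussed would behave something like this (the numbers come from the proposal above; any real implementation on our side may differ):

import time

class TokenBucket:
    # Sketch of the discussed scheme: up to 5000 stored tokens,
    # refilled at 5000/60 tokens per minute, one token per request.
    def __init__(self, capacity=5000, refill_per_minute=5000 / 60):
        self.capacity = capacity
        self.tokens = capacity  # starts full, so an idle hour restores the full 5000
        self.refill_per_second = refill_per_minute / 60.0
        self.last_refill = time.monotonic()

    def allow_request(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.refill_per_second)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # the caller only waits for the next refill, not a full hour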

Above the application level we will be more aggressive in blocking what we deem harmful or malicious activity. For example, a burst of 5000 requests is not something we will tolerate, and we will block it before it hits the application.

Right now it's about collecting information, so we really appreciate your feedback. We'll reach out to anyone exceeding the limit to learn from them before locking things down, and we may revise what's planned.





--
Kurt Wagner
Full-Stack Engineer


bsu...@noojee.com.au

Oct 30, 2017, 12:34:23 AM
to Accelo Developers
Kurt,
thanks for the response.

This is always a tricky one. From my perspective, I'm trying to balance being a good citizen against getting the job done.

I've just changed my dev environment so I'm making fewer requests, but the proposed rate limits still give me some angst.

If you read my current use case, I'm doing what I would regard as a fairly simple set of actions, yet the result is quite a lot of API requests.

If I rolled this out across the organisation, along with some additional functionality, then I don't think it would take much to break through the 5K limit.

When I look at your 5000/60 tokens idea, if I've read it correctly, this essentially equates to 83 requests per minute. That would essentially make my application unusable, as it would take 600 / 83 = roughly 7 minutes to load the first page of my app.

I understand the need to control request limits, but if the API is to be usable it will need to support building reasonable apps and the queries that come with them.

Brett


Kurt Wagner

Nov 2, 2017, 7:11:23 AM
to accel...@googlegroups.com
Hi Brett,

The bucket approach would not leave your described situation unusable unless you smashed through the 5000 in the first minute. Each minute you would accumulate roughly 83 more requests in your bucket, up to a maximum of 5000; i.e., after an hour with no requests you would have the full 5000 again. If we did implement this, it would be to throttle, not to lock anyone out.

You mention loading 10 tickets potentially taking 3-10 seconds due to response times of 300ms to 1000ms. What is the reason for making individual calls per ticket rather than using a single list query, and are you making the requests synchronously, as this would suggest? Is there a limitation of the list endpoints that prevents retrieving data in batches?
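For instance, rather than one request per ticket, a single list call covering the whole screen should do it in one round trip. Purely illustrative (the endpoint path and parameter names below are placeholders; the documentation has the exact filtering, paging and field-selection options):

import requests

session = requests.Session()
session.headers["Authorization"] = "Bearer <access_token>"  # placeholder

# One list request instead of ten individual ticket requests.
response = session.get(
    "https://example.api.accelo.com/api/v0/issues",
    params={"_limit": 10, "_fields": "id,title,company,status"},  # placeholder parameters
)
response.raise_for_status()
tickets = response.json()  # the exact response envelope is described in the docs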

Feel free to email me directly if you would like to share a more detailed description of what you're doing, and we can see whether requests can be combined for better results.

Regards,
Kurt






 


bsu...@noojee.com.au

Nov 3, 2017, 3:25:16 AM
to Accelo Developers
Kurt,

The application does actually pull back much of the data using broad filters (e.g. all open contracts) rather than individual id look-ups.
We also do a lot of parallel processing: if we have a list of requests we typically spawn 8 threads.

There are, however, plenty of times when it's not practical to use broad filters without a lot of extra work and logic,
e.g. get a list of open contracts, but then I need to fetch the company and contact for each of those contracts.

The issue is that you can only spend so much time parallelising (is that even a word?) queries before development slows to a crawl.
This is one of my issues right now: I've spent the best part of two weeks building a library that caches results so I don't have to do performance tuning on every piece of code.
The trade-off is that I probably end up doing more individual queries.
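Roughly, the pattern the library implements looks like this (simplified; the endpoint paths and field names are placeholders):

import time
from concurrent.futures import ThreadPoolExecutor

import requests

CACHE_TTL = 10 * 60  # seconds -- the 10 minute 'time to live' mentioned above
_cache = {}          # url -> (fetched_at, body)

session = requests.Session()
session.headers["Authorization"] = "Bearer <access_token>"  # placeholder

def cached_get(url):
    # Return a cached response if it is younger than the TTL, otherwise refetch.
    hit = _cache.get(url)
    if hit and time.time() - hit[0] < CACHE_TTL:
        return hit[1]
    body = session.get(url).json()
    _cache[url] = (time.time(), body)
    return body

def company_for_contract(contract):
    # Placeholder path: one follow-up call per contract to resolve its company.
    return cached_get(f"https://example.api.accelo.com/api/v0/companies/{contract['company_id']}")

def load_companies(contracts):
    # The 8 threads mentioned above: resolve the per-contract lookups in parallel.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(company_for_contract, contracts))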

As to the rate limiting, I missed the bit about starting with a bucket of 5K calls. That certainly makes the proposal more feasible for a production system.
I still have some concerns during development, particularly as I need to test with reasonably large data sets to ensure the caching works under load.

An alternate dev system might help, as would general speed improvements to the API.

As to your mention of combining queries, is there a limit on the size of the JSON filter? e.g. what's the limit on the number of arguments to the _or operator?
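To be concrete about what I mean, the filter I would be sending is along these lines (schematic only; I'm not sure of the exact request shape, hence the question):

# Schematic: many ids OR-ed together in a single filter, which is where
# I would start worrying about a size limit. The exact syntax is whatever
# the filtering documentation specifies.
company_ids = list(range(1, 501))  # e.g. 500 ids collected from a contract listing

filter_body = {
    "_filters": {
        "_or": {
            "id": company_ids,
        }
    }
}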
