Spending Limits Going Away :(


Joshua Smith

Aug 25, 2020, 12:03:33 PM
to Google App Engine
Once again last night, my wallet was saved when a runaway bot chewed up my site’s whole daily spending limit. I got an email from a user, set up a firewall rule, and goosed my budget to get things going again.

I’m very concerned about Google’s decision to remove this feature. Offering a cloud service that bills by usage without having a way to limit the spend shifts an unreasonable amount of risk onto the subscriber.

I’ve set up budget alerts, as suggested, but I’m concerned that:

- What if my bill shoots up really fast? How quickly is this alert going to go out?

- What if I am away from the computer (remember when we used to be able to leave our houses? good times… good times…)?

I run this particular site as a not-for-profit social good. (It’s a site that small town governments use to post their meetings.) I make no money on it.

I’d be perfectly happy to handle this with self-set quotas on something other than dollars. For example, in my case the budget-buster is always “Cloud Datastore Read Operations.” If I could set a cap on that one thing, it’d give me the protection I need.

-Joshua

Alexis (Google Cloud Platform Support)

Aug 25, 2020, 3:45:12 PM
to Google App Engine
Hello Joshua,

I'll help as best I can. I've added a question below, along with some answers.

Questions:
- Which feature did Google remove? What was it called, and where was it shown in the GUI? I'm asking so we can consider filing a feature request, or find out why it was removed.

To answer your question: the alerts should be prompt unless there is an outage or some other exceptional circumstance. Keep in mind, however, that we have no control over delays in public communication channels such as SMS, email, etc. Double-checking in the GUI will tell you whether an alert has already been sent. Delays can also happen when an alert has multiple conditions and one of them hasn't been met yet. See the full article here[1] for more details about possible latency.

If you're away from the computer, you have notification options here[2], called "channels".

Joshua Smith

Aug 25, 2020, 4:01:26 PM
to Google App Engine
Attaching an email from Google, which should answer your first question.

Your answer to my first question is a bit vague (“prompt”) but I’ll accept it.

The answer to the second question misses the point. I can get the notification while I’m away from home, but I can’t do much of anything about it. I’d be going to the web browser on my phone and trying to do what I can from there, but… yikes.

Anyway, here’s the email from Google...

Configure alternative cost management method(s) for App Engine projects by July 24, 2021.

Hello,

In December 2019, we removed the ability to create new spending limits in App Engine. We’ve since rolled out new tools to help you manage costs in App Engine, and will be removing all spending limits on July 24, 2021. Review our new set of cost management tools and choose the mechanism that best suits your needs.

What do I need to know?

While App Engine has evolved, the spending limit functionality has not. We’re replacing this feature because it doesn’t cover App Engine costs related to newer capabilities, or services like App Engine Flex or Cloud Build. The new Google Cloud cost management tools will help you control costs associated with a broad range of resources, including App Engine.

What do I need to do?

If you don’t need a cost management mechanism for App Engine, no action is required.

To manage App Engine-related costs, implement any of the following mechanisms:

Learn more about these cost-management mechanisms by reviewing Managing App Engine costs.

Your affected projects are below:

How can I get help?

If you have any questions or require assistance, please reply to this email to contact Google Cloud Support.

Thanks for choosing App Engine.



--
You received this message because you are subscribed to the Google Groups "Google App Engine" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-appengi...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-appengine/f77ae31b-61bd-4251-bb5d-4809076b70fan%40googlegroups.com.

Luca de Alfaro

Aug 25, 2020, 6:46:38 PM
to google-a...@googlegroups.com
I concur with the worry. Is there any _technical_ reason why doing away with a spending limit is a good idea? Can we get an instance limit instead?
This is suddenly making standard non-scalable systems on AWS look better than App Engine!


Luca de Alfaro

Aug 25, 2020, 6:51:49 PM
to google-a...@googlegroups.com
Yes; at least we can hard-limit the number of active instances: see https://cloud.google.com/appengine/docs/standard/python3/config/appref
So if every active instance has a limited rate of use of backend services (like Datastore), and no services are accessible except via an App Engine instance (e.g., no direct GCS bandwidth), we can in practice bound spending that way.
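That cap goes in app.yaml under automatic_scaling; a minimal sketch for the standard environment (the values here are illustrative, not recommendations):

```yaml
# app.yaml (standard environment): hard ceiling on autoscaling
automatic_scaling:
  max_instances: 10       # the autoscaler will never run more than 10
  max_idle_instances: 2   # limit how many idle instances are kept warm
```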

Luca

yananc

Aug 26, 2020, 4:47:44 PM
to Google App Engine

Hello Joshua,

The budget alert is triggered once your costs rise above the threshold you specify. However, as Alexis has explained, various factors can affect how quickly you receive it.

As with the email you shared, the documentation[1] provides details on how to manage App Engine costs. Specifically, besides mechanisms such as setting the maximum number of instances, you can combine Budget Alerts, Pub/Sub, and Cloud Functions to disable your app automatically when your costs exceed the threshold you specify. The documentation also provides implementation steps with sample code.
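The decision step of that pattern can be sketched as follows. This is not the official sample, just an illustration: the Cloud Function receives a base64-encoded JSON budget notification over Pub/Sub (the costAmount/budgetAmount fields follow the documented budget notification schema), and the actual disable call (App Engine Admin API apps.patch with servingStatus=USER_DISABLED) is deliberately left out.

```python
import base64
import json

def should_disable(pubsub_message):
    """Return True when a Cloud Billing budget notification reports
    costs above the budget. `pubsub_message` is the dict a Cloud
    Function receives; its "data" field is base64-encoded JSON."""
    payload = json.loads(base64.b64decode(pubsub_message["data"]).decode("utf-8"))
    # costAmount / budgetAmount come from the documented budget
    # notification schema; all other fields are ignored here.
    return payload["costAmount"] > payload["budgetAmount"]
```

In a real handler you would follow a True result with the App Engine Admin API call to change the app's serving status; the documentation[1] walks through that step.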

Back to the issue of "Cloud Datastore Read Operations" being too high: a possible solution is to leverage caching to avoid excessive operations. You may find more information in this topic[2].

Hope this helps.

[1]: https://cloud.google.com/appengine/docs/managing-costs

[2]: https://stackoverflow.com/questions/12939376/google-app-engine-too-many-datastore-read-operations

Linus Larsen

Aug 27, 2020, 4:52:47 AM
to Google App Engine
True story

One day in December some years ago, customers called in complaining that our service wasn't responding. A quick look at the console showed we had gone over our spending limit. Why? Digging deeper, we saw we had 1000+ instances running (standard, Java, autoscaling, and we hadn't specified maximum instances). After some troubleshooting, with help from Google support, we found the reason: we had hit the ceiling of a socket_connect quota. For each request our service receives, we send a Pub/Sub message; the Pub/Sub library we used couldn't batch-send messages, so every message sent was literally a socket_connect.

After hitting the socket_connect quota, calls to Pub/Sub were timing out, and the timeout value was too high (we were using the library default), so the autoscaler kept starting new instances in an endless loop.

When this happened, everything froze, and we couldn't kill instances faster than the scheduler spun up new ones; a catch-22 situation.

To get it fixed, we had to ask Google to bump the socket_connect quota, then raise our spending limit to get everything working again.

In a situation like this, the only thing keeping us from bankruptcy was the spending limit. The irony: the suggestion to use "Budget Alerts, Pub/Sub, and Cloud Functions when your costs exceed the threshold" won't work if you're not allowed to call Pub/Sub because of a socket_connect quota.

We probably stumbled on an edge case, a combination of old libraries/APIs and hidden quota limits (I can't even find the socket_connect quota on the console quota pages anymore). But my point is that the spending limit is the last outpost we have for actually capping cost, and it's sad to see it go away, along with the rest of the dismantling of what used to be a great service.
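For what it's worth, the batching our library lacked can be sketched in pure Python (illustrative only; modern google-cloud-pubsub publisher clients batch for you). The idea: buffer messages and hand them to one publish call, so N messages cost one connection instead of N socket_connects:

```python
class Batcher:
    """Buffer messages and flush them in one call once `max_messages`
    accumulate, so per-message connection overhead (the socket_connect
    cost in the story above) is paid once per batch, not once per message."""

    def __init__(self, publish_many, max_messages=100):
        self.publish_many = publish_many  # callable taking a list of messages
        self.max_messages = max_messages
        self._buf = []

    def add(self, msg):
        self._buf.append(msg)
        if len(self._buf) >= self.max_messages:
            self.flush()

    def flush(self):
        if self._buf:
            batch, self._buf = self._buf, []
            self.publish_many(batch)  # one connection for the whole batch
```

A real implementation would also flush on a latency deadline so small batches don't linger in the buffer.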

Joshua Smith

Aug 27, 2020, 11:32:26 AM
to 'Mary (Google Cloud Support)' via Google App Engine
On this particular topic of Cloud Datastore Read Operations being the cost driver, unfortunately caching doesn’t help my case.

This site has about 60,000 individual meetings listed, and I like having *useful* *well-behaved* crawlers find all those meetings so people can use search engines to find meetings that happened in various towns. But what keeps happening is some new useless/poorly-behaved crawler will decide to read all 60,000 of those meetings as fast as it possibly can. (Ignoring the crawl-delay.) Caching in that situation would just add *more* cost, since there are no repeat hits from a crawler.

I periodically look at my access logs, find peaks, look at the requests, and figure out what new bot needs to be added to my robots.txt disallow list. And sometimes I have to add a firewall rule because that bot isn’t obeying robots.txt at all.
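For example, a new disallow entry looks like this (bot name hypothetical; the firewall rule is the backstop for bots that ignore robots.txt entirely):

```
# robots.txt: block a misbehaving crawler outright
User-agent: GreedyBot
Disallow: /

# Ask everyone else to slow down (not all crawlers honor Crawl-delay)
User-agent: *
Crawl-delay: 10
```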

-Joshua

yananc

Aug 28, 2020, 10:27:38 AM
to Google App Engine

Hello Joshua,

Thank you for your response. I agree that caching won't be of much help if repeat hits are quite limited. Besides the aforementioned recommendations, I suggest you open a tech support ticket with GCP Support so we can dig deeper into your projects for possible improvements.

@Linus, thank you for sharing your experience. In such a scenario, setting the maximum number of instances for autoscaling could help greatly; it is part of the recommended ways to control App Engine-related costs.

Vitaly Bogomolov

Sep 2, 2020, 11:18:59 AM
to Google App Engine
Removing the simple and straightforward "daily spending limit" option is very cruel to beginners just starting to learn the platform. The starter guide should carry a separate warning:

In order not to go bankrupt when using GCP, you first need to carefully study the documentation section https://cloud.google.com/cost-management.

My story.

An ndb model had a lot of indexed fields (since the property default is indexed=True). The cost of writing one entity to such a model was very high.

The main program sent a message to the queue when a fairly rare event occurred. The queue handler for that message updated the corresponding record in the model; then an exception occurred due to a bug in the code, and the handler exited with an error.

The platform initiated several processing retries, which also failed. Each attempt wrote to the model.

The main program, not finding the processing results, added a new message to the queue, triggering a new series of attempts as described above. And so on.

The daily budget was depleted within a few minutes, and the program was stopped by the "daily spending limit" fuse.

In the current reality, I would have received a bill for several thousand dollars even if I had reacted to the situation within 24 hours.

Yes, I later changed the model, leaving indexed=True only on the couple of fields where it was really necessary, and changed the logic of both the main program and the queue message handler.

But all this happened later.
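To make the amplification concrete, here is a rough back-of-the-envelope model. The multipliers are illustrative assumptions, not current pricing: under the legacy per-operation Datastore billing, one put cost roughly one operation for the entity plus extra operations per indexed property (the built-in index entries), so trimming indexed properties cuts write cost almost linearly:

```python
def entity_write_ops(indexed_props, ops_per_index_entry=2):
    """Rough estimate of write operations for one entity put under the
    legacy per-operation pricing: 1 op for the entity plus
    `ops_per_index_entry` ops per indexed property. The constants are
    illustrative assumptions, not a pricing reference."""
    return 1 + ops_per_index_entry * indexed_props

# With the ndb default (every property indexed), a 20-property model
# costs far more per put than the same model with 2 indexed fields:
all_indexed = entity_write_ops(20)  # 41 ops per put
two_indexed = entity_write_ops(2)   # 5 ops per put
```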

On Tuesday, August 25, 2020 at 8:03:33 PM UTC+4, Joshua Smith wrote:

Alexis (Google Cloud Platform Support)

Sep 2, 2020, 3:19:57 PM
to Google App Engine

Hello everyone,

To summarize this conversation, it is possible to set proactive limits by:
- Setting a maximum number of instances[1] for an app
- Disabling your app programmatically[2] based on resource consumption
- Capping API quotas[3] to prevent excessive requests

If I understood properly, Joshua's situation is about caching; in that case, we are talking about resource limits related to CPU cycles, or perhaps the number of requests. I suspect that whichever resource spiked in this situation can probably be handled by the second option above, or the third if it is API consumption. But it all depends on how fast this happened, and I agree that a hard-cap feature would remove the delay.

In terms of instantaneous hard caps (with no delay), a feature request[4] can be filed. However, I think it would be worth clarifying how such a cap could help without impacting scalability. Please see below.

When submitting a feature request:
- Try to clarify how the limit should differentiate a legitimate CPU cycle from an illegitimate one. Would you want the cap on all your instances? Would you mind if a false alarm stopped all your services? (The current answer to this is the max-instances setting in my first point above: scaling is a horizontal concept, and capping it doesn't stop things completely since it's a delta quantity. If some of the other issues mentioned were due to excessive instances or API requests, please refer to the options above.)

I hope this message consolidates the research and saves time for anyone new reading this post. Thank you in advance.

Jukka Hautakorpi

Sep 2, 2020, 4:39:40 PM
to Google App Engine
Is there a simple example of how to disable an app after a budget alert, say for a PHP App Engine application?

Olu

Sep 15, 2020, 10:22:10 AM
to Google App Engine
Hello, Jukka

I hope the steps described in the documentation[1][2] for disabling your App Engine application when costs exceed your set threshold are clear, as I cannot find an example at this time that shows them.

If you think the documentation could be improved, perhaps by adding such an example, you can submit a documentation feature request using the issue link[3]. Our documentation team would be glad to review the possibility; however, such requests have no ETA.

Thank you. 
