You received this message because you are subscribed to the Google Groups "Google App Engine" group.
To post to this group, send email to google-a...@googlegroups.com.
To unsubscribe from this group, send email to google-appengi...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/google-appengine?hl=en.
My current provider offers me unlimited bandwidth for $14.95 a month.
How much is your unlimited plan?
To unofficially answer your question, I presume that one of the new
scheduler options will be "do not exceed N instances" which, if you
set N=1, will prevent you from going over free instance quota. Bursty
traffic will sit in the pending queue.
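As a toy model of that speculated "do not exceed N instances" option (names like MAX_INSTANCES and instance_worker are my own invention, not an announced API): requests beyond the cap simply wait in a pending queue instead of spinning up additional billable instances.

```python
import threading
import time
from queue import Queue

MAX_INSTANCES = 1          # N=1: never exceed the free instance quota
pending = Queue()          # bursty traffic waits here
results = []

def instance_worker():
    # Each "instance" pulls requests off the pending queue one at a time.
    while True:
        req = pending.get()
        if req is None:
            break
        time.sleep(0.001)  # simulated request-handling latency
        results.append("served %d" % req)
        pending.task_done()

workers = [threading.Thread(target=instance_worker) for _ in range(MAX_INSTANCES)]
for w in workers:
    w.start()

for i in range(5):         # a burst of five requests arrives at once
    pending.put(i)
pending.join()             # the burst drains through the single instance

for _ in workers:          # shut the workers down
    pending.put(None)
for w in workers:
    w.join()
```

With N=1 the burst is served sequentially; nothing fails, it just waits, which is the trade-off the scheduler option would let you choose.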
I'm looking forward to seeing the scheduler improvements in the
coming weeks. Overall this looks promising; I'm actually pretty happy
to see that the API calls 'with checks' will be included and just
billed by time consumed. Is the 4x cost relative to master-slave
(M/S) or high-replication (HR) apps? If it is relative to M/S then it
is not that bad; HR is totally worth it. ;)
Thanks for listening to and addressing the community's feedback.
Hi Greg,

Can you raise the on-demand frontend instance free quota to 25
instance-hours per day? Small apps have very low traffic on average,
but sometimes (maybe for a few minutes) they need more than one
instance to handle burst traffic, so they end up with OverQuotaErrors
at the end of the day.
1) What are the expected limits on concurrency for Python 2.7
instances? Assuming the request handlers / threads are just waiting
for RPCs to finish (say, on the urlfetch service), how many are
allowed per process? This is probably still TBD, but a ballpark
figure would be helpful.
2) How will keys-only queries be charged?
3) What controls are in place to make sure that the instances do not
get stuck on a bad / slow host? I have experienced very different
response times from a noop WSGI app hosted on GAE, and given that the
costs will now be tied very directly to latency, how can you make us
comfortable with the fact that this latency is volatile and often
completely out of our control? (Or remove the volatility.)
4) Can we have some assurance that the hosts are not oversold and the
CPU / Memory quota is actually guaranteed? Volatility in response
times (as measured by the GAE dashboard itself) suggests that
different hosts are under a different load and sometimes the
instance's process has to wait to get to run on a CPU. (When a no-op
app sometimes runs in 10ms and sometimes in 300ms+, that doesn't look
like guaranteed CPU to me).
5) Can we configure the scheduler to shut instances down in less than
15 minutes (and not get charged for that idle time)? If not, please
justify this limitation.
6) Will we have a way to explicitly shut down an instance from the
instance itself? (Without returning an error, basically to suggest to
scheduler that "this is the last request I want to handle")
7) Will the pricing become stable after this change? How can you
assure us that it will?
8) Is there any intention to adjust the prices in a year or two to
account for falling hardware prices?
P.S. I also wanted to mention, to the people asking if the GIL will
be removed: of course it will stay. There's no need to remove it, so
please don't make random requests; learn what the GIL is and why it's
there. I would bet that the concurrency will be via regular Python
threads (not multiprocessing), but the app itself will not be allowed
to spawn or control those threads.
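A quick illustration of why threads still give useful concurrency under the GIL: CPython releases the GIL during blocking I/O, so handlers waiting on RPCs overlap. This is a toy sketch; time.sleep and the fake_rpc name stand in for a real RPC wait such as urlfetch.

```python
import threading
import time

def fake_rpc():
    # Stand-in for a blocking RPC (e.g. a ~200 ms urlfetch call).
    # sleep() releases the GIL, just as real blocking I/O does.
    time.sleep(0.2)

start = time.time()
threads = [threading.Thread(target=fake_rpc) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start
# Ten 200 ms "RPC waits" overlap, so elapsed is roughly 0.2 s, not 2 s.
```

CPU-bound work would not overlap this way, which is why the GIL matters far less for I/O-heavy request handlers than people assume.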
It was a joke… My Sprint phone has unlimited text, talk, and data. It
would be fairer to say unmetered… Obviously I can't download 100
terabytes of data a month (the phone won't go that fast), nor can I
talk 2,000 hours a month, because I can only have one conversation at
a time.
2011/5/18 Luís Marques <luism...@gmail.com>:
Welcome to the real world.
Did you know that when you buy gasoline on a day that is 10 below
zero, you get 30% more energy per gallon (and dollar) than when you
buy on a 114 degree day? Do they charge less for gasoline on hot
days?
When my best friend, an 85-pound skinny little Asian girl, comes to
visit with no checked bags, she flies as 110 pounds of cargo, but
they charge her as much as the 250-pound guy with two 55-pound
checked bags and a 25-pound carry-on, even though she is a third of
his weight.
The buffet at Circus Circus is all you can eat, but I don't get a
discount if a family of six sumo wrestlers is in line ahead of me and
eats all of the prime rib before I can get to it.
Oh, and the only variability I have seen is not tied to nodes or
instances; it was tied to whether I got good rolls of the dice on my
caching and datastore lookups. But I could probably write some test
code to see if different instances vary in performance. I would just
have to test with several app IDs so that I could tell whether I had
the same instances. (Also, the variability is probably lessened by my
having 100+ instances across my apps at almost all times.)
My test case is this: there's an app that is just a noop wsgi app
(webob.Response('pong')). It is not under any load. There's a
different GAE app that urlfetches it once a minute on a cron schedule.
I am looking at the logs of the first app and the graphs on the
dashboard, and the time / request and even cpu / request vary wildly.
Remember, this is an app that does absolutely nothing.
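For reference, a dependency-free equivalent of that no-op app (the original used webob.Response('pong'); this plain-WSGI version behaves the same for the purposes of the test), exercised once with a synthetic request:

```python
from wsgiref.util import setup_testing_defaults

def application(environ, start_response):
    # No-op handler: every request just returns "pong".
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'pong']

# Drive it once with a synthetic WSGI request, no server needed.
environ = {}
setup_testing_defaults(environ)
captured = {}

def start_response(status, headers):
    captured['status'] = status

body = b''.join(application(environ, start_response))
```

Any latency beyond what this trivial handler costs is overhead from the platform, which is the point of the measurement described above.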
Such variability isn't perfect but is OK if I only pay for the CPU
used -- if at some point the host running the app is too busy to
actually run my code, I'm fine with a new instance serving the
concurrent requests. But when I have to pay for this, it's not OK at
all.
Frankly, I'm not too happy that every request takes ~15ms even when
luck is on my side, because I know the app itself doesn't take even
that long to execute, but that's not the issue here.
Anyway, your comparisons are wrong.
$9/mo accumulates when you have multiple apps that aren't being
monetized. I have one app that is a failed commercial venture which
never took off, so I run it as a free service to a small group of
users. I could pay $9/mo for it, or I could let it stop working for
the handful of users towards the end of the day. I'm just going to
let it hit the quota limits.
This isn't quite right. Yes, there is a real-world cost to latency -
but due to a poor design decision by GAE's engineers, that cost is at
least two orders of magnitude higher than it is on any other platform.
Google has sheltered us not only from the cost of their
single-threaded Python system, but also from the knowledge that the
system is as inefficient as it is.
I built a GAE/Python backend for a client whose "internet bill" ends
up being ~$10/mo. I'm very concerned that this bill is about to go to
$100-$200/mo (even with the price halved) for a system that would
still be $10/mo if I had written it in Java like all my other GAE
apps. Hell, this guy's app could even run on a simple LAMP stack - it
peaks around 4 hits per second.
If my client's bill spikes like this, I'm going to look like a total
asshole. And I really shouldn't - if I had known the technology
limitations ahead of time, I could have worked around it (or simply
not used appengine for that project). I don't mind that GAE's price
reflects actual cost of services as long as 1) I know what that is
ahead of time and 2) the cost is reasonable - and a single-threaded
service layer is simply not an adequate use of precious memory in this
day and age.