I’m not worried about the minimum price so much as the 8 cents per instance hour. Rackspace Cloud gives you a significantly more powerful instance for 11 cents an hour, and most of my CPU hours were billed by the API, not by the instance. So I’m not sure how my model just changed.
All of my apps that were running at $40 a month use at least 3 instances 24x7, and most are well above that. From what I can tell, I’m going from $1.40 a day to 60 * 24 * $0.05 = $72 a day, plus 98 cents for bandwidth, so about $73 a day. That’s a change from $45 a month to roughly $2,100 a month.
If this is the case, I’m out. We had expected things to go up 25-50%, not 500%.
--
You received this message because you are subscribed to the Google Groups "Google App Engine" group.
To post to this group, send email to google-a...@googlegroups.com.
To unsubscribe from this group, send email to google-appengi...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/google-appengine?hl=en.
I’m going to hold you to that… I know where you work, and I’m not above sitting in front of the door every day with a sign that says “Gregory D'alesandre promised to only increase my price 2.5x, not 70x”, and people will say “Really? He’d be happy knowing his cost went up 2.5x? That man is a loon.”
Though, likely not being from Canada or Michigan, they wouldn’t use the word loon.
-----Original Message-----
From: google-a...@googlegroups.com
[mailto:google-a...@googlegroups.com] On Behalf Of Sylvain
Sent: Tuesday, May 10, 2011 3:25 PM
To: Google App Engine
Subject: [google-appengine] Re: Is App Engine suddenly becoming more expensive???
This could be fixed by restoring the 2000 email quota for emails sent
to admins of the app, and bumping up the price of the first X emails
sent to non-admins, with a bulk discount for further usage.
> $9 = cost of a dream host account...
Dreamhost gives you storage, bandwidth, memory, etc. for $9. On App Engine, $9 buys you the opportunity to be charged further for actual usage. It's a regressive tax on startup projects, which, considering the effort made to offer a completely free tier, is self-defeating.
Go is currently single-threaded, but this too is something that could change over time.
Greg
In the future if Google's scheduler is not optimal, you will be
charged for it. What is the incentive to get it right? How will you
know it is right?
Here's another perverse incentive: under the current scheme memory
usage is used as an input to the scheduler as well as latency and
start-up time. Apps which use less memory can have more instances at
the same resource cost to Google. My incentive is to optimise memory
usage. Under the upcoming scheme you are charged per-instance, so
there is zero incentive to optimise memory usage.
For once, it now seems that Java has an advantage, as Python (and Go) users do not have the option of using multi-threading to reduce the number of instances. So although Python (and Go) will potentially use fewer resources and have a lower footprint, will they now get charged more due to a limitation in the App Engine runtime?
Also, although goroutines and channels are present, when a Go app runs on App Engine only one thread is run in a given instance. That is, all goroutines run in a single operating system thread, so there is no CPU parallelism available for a given client request. We expect this restriction will be lifted at some point.
Even though $9/month is pretty insignificant to many of us because:
1. It is a fixed cost
2. It is small
We still should not gloss over it. Like Stephen said, this is really just a tax for the privilege of using the Blobstore and other billing-enabled services.
Having said that, I reckon that this is Google's attempt to streamline their business package (which they are doing away with) by saying that anyone who needs higher-level services will pay a per-app or per-account cost. In that light, it becomes more palatable (as it simplifies the offering), and I for one am okay with that.
I think the bigger concern is the per-instance cost. This is especially troubling for:
1. folks that depend on the Always On feature in Java;
2. folks on the Python or new Go runtime that don't yet have concurrent request support. More instances will be spun up dynamically, with a consequent cost to them that is expected to be significant.
Hopefully, the new scheduler will iron out a lot of these unknowns so we're back to being happy App Engine users. Right now, it seems we're the only ones in the whole Google ecosystem unhappy with some of the recent announcements.
Actually, scratch that; I'm very happy about two things:
- Go language runtime support (the geek in me is just thrilled)
- Moving away from Preview status (I was always concerned about the life of GAE beyond the promised 3 years of support post-EOL. This removes my fears.)
Hi Stephen,
I am totally with you on it. I actually alluded to this earlier, when I said:
For once, it now seems that Java has an advantage, as Python (and Go) users do not have the option of using multi-threading to reduce the number of instances. So although Python (and Go) will potentially use fewer resources and have a lower footprint, will they now get charged more due to a limitation in the App Engine runtime?
In building my app, I spent a lot of time optimizing resource usage, et al. With this model, all that seems to be for naught. Somewhere in the docs today, there's a note to the effect that "apps on the Java runtime will be charged more for using more resources".
I don't understand how this will be ironed out, which is why I termed this all unknowns. From what Greg says, it sounds like there's still some stuff to be ironed out on Google's end, and we'd only really know how things shake out once we see how the new scheduler works. I'm holding my breath...
P.S. Google has been pretty fair and upfront with App Engine; I haven't had a reason to distrust them yet. I'm hoping that their promotion of App Engine from Preview will maintain this fairness.
I just checked the new proposed pricing here...
http://www.google.com/enterprise/appengine/appengine_pricing.html
I'm confused why all the items below "Channel API" in the API Pricing
models have check marks instead of a price per unit. What does that
mean?
And when they say, "Frontend Instances", does that include instances
handling task queues and crons?
Thanks!
Albert
On May 11, 8:24 am, Ugorji <ugo...@gmail.com> wrote:
> It's actually stated in the blog: http://blog.golang.org/2011/05/go-and-google-app-engine.html
>
> Also, although goroutines and channels are present, when a Go app runs on
> App Engine only one thread is run in a given instance. That is, *all
> goroutines run in a single operating system thread, so there is no CPU
> parallelism* available for a given client request. We expect this
> restriction will be lifted at some point.
>
> So you can still use goroutines, channels, etc. - but we're back to something
> like the days of green threads in Java, where the runtime multiplexes them on
> a single thread (which is fine). However, we don't get concurrent web requests
> on the same instance (which is not fine). Consequently, right now, the Java
> runtime seems to have a pretty significant advantage over the others (even
> over Go, which has concurrency as one of its major advantages). And with
> instance pricing, it seems like it directly affects cost.
Well, Always On is pretty important for low-traffic apps. They usually
do not use up the free quota, and quite often they are not even close. Having
to pay for Always On + an app fee + instance hours would be a huge
difference. And all of that just to make a smooth start for new
requests...
BTW, good to see Google App Engine being expanded. It's great to have a service
like this; it really takes away most of the stuff that isn't actual
development :)
Yeah, I may need to rewrite my stuff to run Java instead of Python. 100k
people show up in one hour to watch a Survivor clip, and suddenly I need to
serve 10 requests a page times 27 people per second, so I have to serve 270
requests per second (this happens for me a lot). If each request takes
200ms (which is about the average), then each instance serves 5 requests per
second. I need 54 instances.
Previously, paying for CPU hours, it appears the instance did almost nothing: I
got billed for the API call, and the instance sat there idle, saying "Oh, I'm
just waiting for the write buffer to send this data I pulled from memcache."
And I would get a bill for "read from memcache" and "data out".
@Greg, can you provide more clarity on this Instance Hours thing?
App Engine was attractive because the pricing was for actual CPU consumed, as against paying for idle time. And the way GAE forces everything like deferred tasks, queues, cron jobs, etc. into new web requests made sense only as long as billing was CPU-based. Imagine someone doing things this way on an instance-based platform like EC2.
Can someone please clarify how the Instance based billing will impact apps based on single threaded python runtime?
The $9 upfront payment also sounds absurd, just for the ability to exceed the free quota. Can't that be made an SLA + SSL + whatever_extra_some_apps_want fee for those who want it?
I can understand reducing free quota limits and maybe increasing prices to achieve your 2x-4x increases. It's hard to accept tiered pricing and instance-hour based pricing.
I currently use the Always On feature, so does that mean I shall end up paying 3 * 24 * 30 * $0.05 = $108 a month?
Also, my application uses email heavily to notify users, and the free quota has just been reduced to 100 recipients.
Runtime overhead is counted against the instance memory limit. This will be higher for Java than for Python.
I wonder how this is going to work with super-heavy things like Django. Previously, having an idle instance around for a while was not a big deal. Now, with a more aggressive scheduler, it might be interesting.
Glad I use light frameworks and have super-low startup times.
Important concerns raised in the blog comments:
<Quoting @Deep>
"Due to customer feedback and to better service memory intensive applications, we will be eliminating CPU hours."
I can't imagine anyone actually requested this. That's corporate BS for "we are making this unpopular change but going to pretend customers requested it".
"Instead, our serving infrastructure will charge for the number of Instances running"
As companies age, they start looking for ways to make free money without actual work. (Think of the big banks.) Sad to see signs Google is going that way. If this move results in charging even for instances sitting idly (while we don't even have direct control over the # of instances!) that would be a pretty big change from "no evil". My app has light load and is set to multithreaded yet AE keeps spawning new instances for no reason. I refuse to pay for those.
As far as I can tell, yes, that is how it is. The second instance would spin up if the first one is busy serving some other request. They probably have a request pool, so requests would wait a little, and if none of the existing instances is ready to serve the request, then a new instance would spin up.
-Brandon
Dear Gregory D'alesandre
--
+1
IMHO, this is the single biggest risk to Google with this move. The developers that adopted this did so out of trust, even when the platform was in preview mode and some runtimes were experimental. The price was right, and Google guaranteed that they would not pull the plug on us without giving us three years' advance notice. With this, and the trust (mostly deserved), we went about happily enjoying all of this insane Google engineering now at our fingertips.
It feels like the rug was pulled out from under us, and the unknowns are killing us.
Google has always been very protective of their brand and their company's DNA (which is rooted in trust and engineering). Let's hope all this concern ends up being much ado about nothing.
On Wed, May 11, 2011 at 7:19 AM, Vinuth Madinur <vinuth....@gmail.com> wrote:
> Important concerns raised in the blog comments:
> <Quoting @Deep>
> "Due to customer feedback and to better service memory intensive applications, we will be eliminating CPU hours."
> I can't imagine anyone actually requested this. That's corporate BS for "we are making this unpopular change but going to pretend customers requested it".
Hi Vinuth, I can imagine how it sounds like corporate BS, but in reality, with the current CPU-only based model, we have a number of limitations that many potential customers were unhappy about. High-latency apps essentially hold on to lots of memory without any CPU usage; this means that we can't scale them, because they would just continue to gobble up memory unbounded. Under the new model any app can scale, but it will be paying for the memory as well as the CPU used. This opens App Engine up to a number of developers/applications that weren't able to use it before and wanted to.
From the blog post:
"It will also reaffirm our deprecation policy whereby we will
support deprecated versions of product APIs for 3 years, allowing
applications written to prior API specifications to continue to function."
Before this announcement we could be sure that App Engine would be
around for at least 3 years. After the announcement we can be sure it
will be around for at least... 3 years.
To be clear, prices will be higher, but I've seen people quoting numbers such as 70x higher, which should not be the case. Once we have the changes to the scheduler done, as well as the comparative bills, you'll be able to see how much it will actually cost. We could have waited until that point to talk about this, but we wanted to give as much advance notice as possible.
Instance hours are billed for the time instances are up for an app. This is one of the reasons we are changing our scheduler: to ensure we aren't creating instances that aren't needed, and that we are taking down instances once they are no longer needed. Does that help clarify?
1000 user threads, each doing 100 web page requests that issue a couple of backend datastore reads, with sessions enabled:
- without multithreading: 30 nodes, average 2.4 sec response time, no errors
- with multithreading: 8 nodes, average 1.2 sec response time, no errors
Summary: for my app (Java, Stripes, Slim3), multi-threading is twice as fast and requires about 1/4th as many instances.
--
Class configuration | Memory limit | CPU limit | Cost per hour per instance
---|---|---|---
B1 | 128MB | 600MHz | $0.08
B2 (default) | 256MB | 1.2GHz | $0.16
B4 | 512MB | 2.4GHz | $0.32
B8 | 1024MB | 4.8GHz | $0.64

Class configuration | Memory limit | CPU limit | Cost per hour per instance
---|---|---|---
B1 | 15MB | 600MHz | $0.02
B2 | 25MB | 1.2GHz | $0.04
B4 | 50MB | 2.4GHz | $0.08
B8 | 100MB | 4.8GHz | $0.16
Jeff