--
You received this message because you are subscribed to the Google Groups "Google App Engine" group.
To post to this group, send email to google-a...@googlegroups.com.
To unsubscribe from this group, send email to google-appengi...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/google-appengine?hl=en.
Has that actually changed? It's the same thing, just stated in a
slightly different way.
You are not always charged the full 15-minute 'fee'. In the example in the
FAQ, you are only charged 4 minutes for the intermediate gap, not a full 15
minutes.
--
Datastore APIs
Q: Which operations are being charged for?
A: There are 3 categories of Datastore operations:
- Write operations (Entity Put, Entity Delete, Index Write), each of these operations will cost $0.10 per 100k operations
- Read operations (Query, Entity Fetch), each of these operations will cost $0.07 per 100k operations
- Small operations (Key Fetch, Id Allocation), each of these operations will cost $0.01 per 100k operations
Q: Under the new scheme, is it more economical to do a keys-only query that fetches 1000 keys, and then do a get on the 500 of them that I need, or just do a regular (non keys-only) query for all 1000 directly?
A: The first is more economical. Fetching 1000 keys + fetching 500 entities = $0.0001 + 0.00035 = $0.00045; fetching 1000 entities = $0.0007.
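The FAQ's arithmetic above can be reproduced in a couple of lines. This is just a sketch; the constant and function names are mine, and only the dollar rates come from the answer:

```python
# Hedged sketch comparing the two query strategies from the FAQ answer,
# using the per-operation prices quoted in this thread.

SMALL_OP = 0.01 / 100_000   # key fetch: $0.01 per 100k operations
READ_OP = 0.07 / 100_000    # entity fetch/query: $0.07 per 100k operations

def keys_only_then_get(num_keys, num_entities_needed):
    """Cost of a keys-only query followed by a get of the needed subset."""
    return num_keys * SMALL_OP + num_entities_needed * READ_OP

def full_query(num_entities):
    """Cost of a regular (non keys-only) query returning every entity."""
    return num_entities * READ_OP

# 1000 keys + 500 entities comes to roughly $0.00045,
# while fetching all 1000 entities directly is roughly $0.0007.
cheap = keys_only_then_get(1000, 500)
direct = full_query(1000)
```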
--
A: Many customers have optimized for low CPU usage to keep bills low, but in turn are often using a large amount of memory (by having high latency applications).
--
Hi Greg,

Thanks for putting together the update. I have some questions, if you wouldn't mind answering them:

(1) Why no RAM charge?

The argument that a CPU charge was not reflective of an application's real resource usage is understandable, and the underlying reason for changing the model was that RAM usage was not taken into account under the old model. Why not then simply add a RAM charge to account for this, instead of flipping the ecosystem on its head and drastically changing the pricing model, as you have now done? This question has been asked many times since the new pricing model was released and it has yet to receive an official answer.
As a side note, and with regards to each instance being pre-allocated a fixed amount of RAM upon creation: wouldn't it be great if the geniuses at Google were able to come up with a way to use the unused part of an instance's RAM for other things (e.g. extra global Memcache storage), and then, when the instance requires more RAM, the system could just flush whatever is in the "unused" part of that instance's RAM and hand it back to the instance? That would make for some serious resource-usage optimisation!

(2) How do you propose we run MapReduce-type jobs under the new pricing model?

The new (time used + 15 minutes) time charge for an instance means that running a MapReduce-type job will come with a large cost overhead. For example, if such a job completes in 5 minutes and requires a number of new instances to be spun up and down (as the MapReduce philosophy dictates), 75% of the cost of this job would be in paying that extra 15-minute surcharge for each instance. So the cost of a 5-minute MapReduce job would be inflated 4x. If we assume the new instance prices are inflated 10x over other offerings, a 5-minute MapReduce job will now cost 40x what it would elsewhere.
If the solution is to not run any MapReduce type jobs, then the past three years have been extremely poor education by Google, as we have been told time and time again that we should embrace and utilise the power of being able to run multiple instances in parallel to complete work that would have otherwise taken a much longer time to complete in serial. Furthermore, it is a great shame that the brilliant work done by Google Engineers such as Mike Aizatskyi and Brett Slatkin in creating design patterns and libraries that allow for massively-parallel work to be done is completely voided with the new prices.
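The 75% and 4x figures in question (2) above are simple arithmetic, and can be checked with a small sketch. The function names here are mine, not anything from App Engine; the only inputs are the 15-minute surcharge and the job length from the example:

```python
# Sketch of the cost-overhead arithmetic for short-lived instances.
# Assumption from the thread: each instance is billed for its active
# time plus a 15-minute surcharge after it spins down.

TAIL_MINUTES = 15  # the extra charge per instance under the new model

def billed_minutes(active_minutes):
    """Minutes actually billed for one short-lived instance."""
    return active_minutes + TAIL_MINUTES

def overhead_factor(active_minutes):
    """Billed time divided by useful time."""
    return billed_minutes(active_minutes) / active_minutes

# A 5-minute MapReduce worker pays for 20 minutes: a 4x inflation,
# and the 15-minute tail accounts for 15/20 = 75% of the total bill.
inflation = overhead_factor(5)
tail_share = TAIL_MINUTES / billed_minutes(5)
```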
(3) How can you justify your instance price when compared with Amazon EC2?

On the surface, when comparing the instance costs of GAE and AWS, GAE appears quite competitive: $0.08/hour vs AWS's default Standard Small at $0.085/hour for on-demand instances. However, the RAM available for each instance type is dramatically different, GAE providing 128MB vs AWS's 1.7GB. Even AWS's $0.02/hour Micro instance trumps a GAE instance in terms of available memory, with each AWS Micro instance given 613MB of RAM.
If the answer to this question is something along the lines of "our prices are a true reflection of the overhead required to run a PaaS offering", then see the next question...

(4) Isn't gifting all apps $60/month a recipe for disaster?

If we assume that the new pricing model is a "fair" reflection of the costs of sustainably running GAE, then Google will effectively be gifting every single app $60/month under the new pricing model. With "the majority of current active apps" falling under the new free quotas, this means Google will be running at a huge loss of roughly $720/year on most apps. Doesn't this go totally against the point of making GAE sustainable by introducing these new prices in the first place?

The only way I can see this making any sense is if the new pricing model is indeed massively marked up relative to the real costs of running GAE, and $60/month is nowhere near what it actually costs Google to support these free apps. From an economic point of view, I see no other way to justify the apparent $60/month "gift" to every app, other than that the true cost is an extremely small fraction of that $60/month.
Greg, I'm sure most of us would appreciate a candid response to each of the above questions, and thanks in advance for your replies.
Hi Nick, I've answered your questions below as best I can...

On Sat, Jun 25, 2011 at 4:40 AM, Nickolas Daskalou <ni...@daskalou.com> wrote:
> (1) Why no RAM charge? Why not just simply add a RAM charge, instead of
> flipping the ecosystem on its head and drastically changing the pricing
> model, as you have now done?

We have in essence added a RAM charge by charging for Instances. By having an Instance up with an allocated amount of memory, you are essentially using that RAM; so, by charging for the Instance, we are charging you for the combination of the RAM and CPU. We considered splitting this charge out, so we would continue to charge CPU-hours and then also charge Instance-hours (which we could have called RAM-hours). This seemed both more confusing and no cheaper, so it didn't seem worthwhile. I know that there has been a lot of discussion about charging this way instead. In the end, whether you call it RAM-hours or Instance-hours, and whether or not you charge for CPU-hours on top of it, it would end up with the same result: applications are charged for the amount of RAM allocated and the amount of time it is allocated for. This means applications that want to save money will need to optimize around using fewer RAM-hours, which in essence means taking less time to get things done. But I might be misunderstanding the question, because if you feel a RAM charge is straightforward, I'm not sure I understand why you feel charging Instance-hours is "flipping the ecosystem on its head." I hope that helps give some clarity.
> A: Many customers have optimized for low CPU usage to keep bills low, but
> in turn are often using a large amount of memory (by having high latency
> applications).

How does that work? Can you share an example?

Also, instead of pushing the burden of supporting threading onto the developers, shouldn't GAE optimize resources behind the scenes? You're running many instances on every physical computer, so when one instance is idle, others should be using the CPU and other resources. If keeping an idle instance in memory costs too much, then adjust the pricing by adding a cost per idle second, or something along those lines.
My point is: GAE has a lot of limitations compared to other solutions, but it has one HUGE advantage: it's simple and scales automatically. That advantage is so big that it trumps all the other limitations (at least for me). Now that I have to manage my instances and rewrite my code to handle threading, I can't help but feel that I'm losing that one big advantage.
Interesting idea. Could you use backends for non-urgent tasks? Then the scheduler wouldn't need to be involved at all.
Actually, even getting per-version config would be great. At least
then we could use a separate version for low-priority backend
processing.
Robert
I guess the differences are as follows:

1. With instance-hours, the focus is not on optimizing RAM consumption at all, but on reducing latency (increasing RAM consumption, reducing costs) and controlling when instances come up and go down.
2. With RAM-hours pricing, idle instances would cost less, as CPU-hours are not charged.
3. The pricing would be linear and pay-for-what-you-use: amount of RAM for the time used, not a fixed instance type with some amount of RAM.

So, it is "flipping the ecosystem on its head".
Thanks, this clarifies much! Questions below:

On Thu, Jun 23, 2011 at 11:49 PM, Gregory D'alesandre <gr...@google.com> wrote:
Datastore APIs
Q: Which operations are being charged for?
A: There are 3 categories of Datastore operations:
- Write operations (Entity Put, Entity Delete, Index Write), each of these operations will cost $0.10 per 100k operations
- Read operations (Query, Entity Fetch), each of these operations will cost $0.07 per 100k operations
- Small operations (Key Fetch, Id Allocation), each of these operations will cost $0.01 per 100k operations
Q: Under the new scheme, is it more economical to do a keys-only query that fetches 1000 keys, and then do a get on the 500 of them that I need, or just do a regular (non keys-only) query for all 1000 directly?
A: The first is more economical. Fetching 1000 keys + fetching 500 entities = $0.0001 + $0.00035 = $0.00045; fetching 1000 entities = $0.0007.

This makes sense, and encourages more use of memcache to hold entities. One question that I've been wondering about for a while: presuming no caching, does this query-keys + batch-get approach produce higher latency than a simple query, and if so, by how much?
Also, is there any way we can get the transaction timestamp out on datastore writes? This would *dramatically* improve the robustness of code that tries to keep memcache in sync with the datastore during contention. I've spoken with Alfred and Max about this, but I don't know if it's a priority. This could potentially reduce datastore bills by orders of magnitude.
Thanks,
Jeff
> This makes sense, and encourages more use of memcache to hold entities.
> One question that I've been wondering about for a while: presuming no
> caching, does this query-keys + batch-get approach produce higher latency
> than a simple query, and if so, by how much?

A db.Get will force strong consistency in the High Replication Datastore, which will introduce higher latency depending on how many entity groups you are fetching from (see other threads about this problem). If you set the read_policy to EVENTUAL_CONSISTENCY (or you are still using the M/S Datastore) you will only pay the additional RPC latency. This is not taking into account the 1000-entities-vs-500-entities part of this question, which is hard to predict.
> Also, is there any way we can get the transaction timestamp out on
> datastore writes? This would *dramatically* improve the robustness of code
> that tries to keep memcache in sync with the datastore during contention.
> I've spoken with Alfred and Max about this, but I don't know if it's a
> priority. This could potentially reduce datastore bills by orders of magnitude.

:-) Ya, it will save you the gets to populate the cache.
On Sat, Jun 25, 2011 at 3:26 PM, Alfred Fuller <arfuller+...@google.com> wrote:
> A db.Get will force strong consistency in the High Replication Datastore,
> which will introduce higher latency depending on how many entity groups
> you are fetching from. If you set the read_policy to EVENTUAL_CONSISTENCY
> (or you are still using the M/S Datastore) you will only pay the
> additional RPC latency.

For fetching, let's say, 100 entities, what % of the cost of the fetch is just the RPC? Is this fairly insignificant compared to the cost of pulling 100 entities from the datastore?

Also, when you say RPC, I presume you mean the RPC from our appserver to whatever system manages datastore requests, right? Because there are likely a handful of requests from that system out to the individual tablet servers that hold the entity data, right?
> :-) Ya, it will save you the gets to populate the cache.

A couple of us in Objectify-land tried to come up with a way to keep the memcache synchronized with the datastore during contended writes, and we failed to come up with a good answer. I'm fairly certain that without access to the write timestamp, there's no way to guarantee synchronization.

Jeff
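The write-timestamp idea being discussed can be sketched without any App Engine APIs. This is a toy, self-contained illustration of why a commit timestamp would let a cache reject stale overwrites during contended writes; every class and method name here is invented for illustration, not a real GAE or memcache call (real memcache would do this with compare-and-set):

```python
# Toy model: if every datastore write returned its commit timestamp, a
# cache could refuse to overwrite a newer value with a staler one, even
# when two racing writers update the cache out of order.

class TimestampedCache:
    """A toy memcache keeping, per key, the newest (timestamp, value)."""

    def __init__(self):
        self._data = {}  # key -> (commit_timestamp, value)

    def put_if_newer(self, key, commit_timestamp, value):
        """Store value only if it carries a newer commit timestamp."""
        current = self._data.get(key)
        if current is None or commit_timestamp > current[0]:
            self._data[key] = (commit_timestamp, value)
            return True
        return False  # a fresher write already landed; drop the stale one

    def get(self, key):
        entry = self._data.get(key)
        return entry[1] if entry else None

cache = TimestampedCache()
# Two contending writers commit at t=5 and t=7, but their cache
# updates arrive in the opposite order:
cache.put_if_newer("entity-1", 7, "newer value")
stale_won = cache.put_if_newer("entity-1", 5, "older value")
# The stale update loses, so cache and datastore stay in sync.
```

Without the timestamp there is nothing to compare against, which is exactly the problem described above.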
--
Speaking of charges, how are we to interpret the new fields
CommitCost.requested_entity_puts and
CommitCost.requested_entity_deletes in, e.g., the PutResponse protocol
buffer, which were added in the latest SDK? Should we ignore the
existing IndexWrites, EntityWrites, etc.?
Here are some sysadmins who will run your AWS instances for you:
http://www.heroku.com/pricing/
The price is a flat $0.05/hour. On App Engine you need to give a week's
notice to get that price, and if you don't use it you lose it;
otherwise the charge is $0.08/hour. An App Engine front-end instance
has less than half the memory.
On this note, I'm very glad to see GAE coming out of preview.
I believe you are underestimating the amount of bad experience that users
who are locked into the platform will put up with before they switch.
I'm sure Google could have done some PR before this change and
prepped the users for it, but I'm also sure most users will
just bite the bullet and pay up. Sure, some will leave, but I bet most
will stick with GAE.
If you coded your application without any layer of abstraction, and
your code is highly optimized for running on GAE, it costs more to
move away from it than to sustain the increased fees until you can
monetize your application. Of course, if the application wasn't meant
to be used for business, that's different. If it's a hobby, you can
'afford' to move to another platform. But for a business that is
planning on monetizing, moving is just as expensive as staying.
--
Branko Vukelić
bra...@herdhound.com
Lead Developer
Herd Hound (tm) - Travel that doesn't bite
www.herdhound.com
And out here we have Google, where in three years of the product being
live, we are seeing a paradigm shift in pricing. And this shift has
come with a total disregard for the application developers' time, effort
and contribution to the platform. Existing developers with really
large applications are being forced (without an alternative option) down a
path that is unknown, while existing issues have not been resolved and
features asked for years back have not been delivered. Personally, I don't
even have the time and energy to understand all the new jargon that has
been thrown out there.
I'm holding two workshops for Google App Engine next month. I used to
look forward to encouraging developers to use GAE as a development
platform, but due to these new pricing changes, I'm not that excited
anymore. It's not because I don't like GAE anymore; I believe that the
GAE infrastructure offers a lot of value.
However, I really think that the pricing changes were
announced too early. The early announcement has caused a lot of
confusion over a long period of time, even until now. The most basic
question that needs to be answered is "How much will our apps cost to
run now?" Maybe sample apps and their corresponding prices would help
us visualize whether we should really worry about the upcoming changes.
Something like...
------
Sample App 1 (datastore intensive app)
Average Response Time: 200ms
Average # of users per day: 100K
Cost / day: ???
------
Sample App 2 (Compute intensive app)
Average Response Time: 800ms
Average # of users per day: 100K
Cost / day: ???
------
and so on...
------
That way, at least, I could get a kind of "official" idea of the costs of
running apps under the new pricing scheme.
Thanks and enjoy!
Albert
Well, that's the thing. It's the "What did you expect?" factor.
Did we seriously expect that this (otherwise great) platform will be
cheap forever? "Look, Google is giving away their own tech, so we can
build sites at ridiculously low price!" Not a chance. If you think
rationally about it, there was no chance it could have been that way
except maybe initially to attract developers and lock them into the
platform. Nobody lied to us here. They just never claimed it'd be
almost free forever.
You are aware that there was a preview of the business edition[1]? It
lists some of the features that people here were screaming for (SSL on
custom domains anyone?). Well, it will come to the 'normal' GAE, but
I'm guessing not before the normal GAE _becomes_ the business edition.
[1] http://code.google.com/appengine/business/
I believe that one was also listed on the business version page as
coming to all of GAE (probably excluding the completely free version).
Type                        | Count       | Cost/Operation (new pricing) | Amount (new pricing)
Datastore Entity Put Ops    | 612,295     | $0.000001                    | $0.61
Datastore Entity Delete Ops | 25,042      | $0.000001                    | $0.03
Datastore Index Write Ops   | 2,620,456   | $0.000001                    | $2.62
Datastore Query Ops         | 6,688,767   | $0.0000007                   | $4.68
Datastore Entity Fetch Ops  | 178,293,618 | $0.0000007                   | $124.81
Datastore Id Allocation Ops | 0           | $0.0000001                   | $0.00
Datastore Key Fetch Ops     | 35,712      | $0.0000001                   | $0.00
Total                       |             |                              | $132.75
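As a sanity check, the bottom line of the bill above can be recomputed from the per-operation rates quoted earlier in the thread ($0.10, $0.07 and $0.01 per 100k operations). A quick sketch; the counts come straight from the table:

```python
# Cross-checking the bill table against the quoted per-operation rates.
OPS = [
    ("Entity Put",    612_295,     0.10 / 100_000),
    ("Entity Delete", 25_042,      0.10 / 100_000),
    ("Index Write",   2_620_456,   0.10 / 100_000),
    ("Query",         6_688_767,   0.07 / 100_000),
    ("Entity Fetch",  178_293_618, 0.07 / 100_000),
    ("Id Allocation", 0,           0.01 / 100_000),
    ("Key Fetch",     35_712,      0.01 / 100_000),
]

total = sum(count * rate for _, count, rate in OPS)
print(f"${total:.2f}")  # $132.75, matching the table's bottom line
```

Note the entity fetches alone account for almost 95% of the bill, which is why this poster's costs are dominated by reads.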
I understand that you are upset that your appengine bill might go up
4X, but how do you jump from this to the conclusion that "Google
should support PHP"??
> Also, I am not able to understand the logic behind charging half for
> Python instances, since App Engine does not support threading as of now.
> I am a Python threading noob, but going by the offer that the App Engine
> team has thrown out, it seems threading can increase performance at
> most by 2x, which is the only way you can justify 1/2 price. I find it
> hard to believe.
I think you assume too much. I interpret this as a temporary salve to
keep Python developers from feeling like second-class citizens until
multithreaded Python is available.
Jeff
Nothing in that rant had anything to do with billing. If you want to
start a separate thread for "want php" go ahead, or better yet star
the issue. Or even better yet just use Quercus.
The obvious conclusion from these billing changes is not that GAE
needs developers, it's that GAE needs revenue.
>> I think you assume too much. I interpret this as a temporary salve to
>> keep Python developers from feeling like second-class citizens until
>> multithreaded Python is available.
>
> I am not assuming too much. It's simple math. Besides that, I have
> never seen a hosting company tell me that PHP version x now has
> support for some new feature, and if you implement it, it's good; else we
> are going to charge 4X for the server.
You assume that because Python instances are now priced 1/2 Java
instances, that this is a) permanent and b) reflective of the cost of
a Python instance. It may just be a temporary salve to placate Python
developers until multithreading is available.
I've been fairly critical of the billing changes on this list, but you
come across as a total wingnut. We have all (apparently) been living
on the largess of Google - which is nice of them, except they didn't
tell us about it. And the price signaling they were giving us was all
wrong. There's a fair bit of reason to be upset about the changes
(especially if you now need to redesign parts of your app), but you
can't argue that these changes aren't rational. What isn't rational
is how you are somehow trying to use this issue as a springboard for
PHP advocacy.
Jeff
This is the stupidest thing anyone has said to me in years.
Jeff
And more to come if you keep replying to him. :)
--
The plan that is in place will be very close to what we launch with, because when we looked at different pricing plans, our analysis of previous usage trends and billing led us to believe that the one we have announced was the most balanced in terms of being developer-friendly as well as sustainable. Unfortunately, we do understand that the changes will not work for some people. The most constructive discussion we can have right now is around how we can make this pricing work. What tools can we provide? What data do we not display? How should support work? And so forth. Throttler knobs, for instance, are an example of a feature where many of the requirements were sourced from constructive user feedback. Raising the priority of Python concurrency was another one.
[...snippage...]
I agree. On the other hand, it would be foolish if Google went forth
with this change without calculating this risk into the projected
profit. And I doubt they have made the mistake of omitting the risk
factor. So, while I wish they would reconsider it, and while Ikai's
and others' goodwill is greatly appreciated, I'm very skeptical as to
the possible reconsideration of this move.
The best we can hope for, I believe, is clarification (if not
simplification) of the calculations we have to make in order to determine
the final cost of running our apps ahead of time. Maybe an in-depth
video which thoroughly covers all the nooks and crannies would be nice.
I know I'm not being very constructive, but it's nevertheless my opinion.
Also, on a spike in traffic today I got 70 instances up and running for around 30 minutes (Java app without threading yet...). This gives around 70 instances * (30+15 minutes) = 3150 instance hours = $252.
Also, I think the 15 minutes is a floor. So it's just 30 minutes per instance.
BTW one thing to keep in mind when talking about datastore operations
is that this really isn't a fundamental change to the way things are
priced now. If you look closely at request costs you'll notice there
are already a lot of magic numbers (for example, an index write is
always 17 cpu_ms). The new pricing of datastore ops just makes the
magic numbers more explicit. And, it would seem, more expensive.
So... the question of whether to fetch one big entity vs lots of small
entities is something you have to deal with already. New datastore
pricing doesn't change this. To save $$ and wall-clock time, you
should denormalize your data structures and utilize memcache now.
Jeff
On Fri, Jul 1, 2011 at 10:16 AM, Sergey Schetinin <ser...@maluke.com> wrote:

> Also, I think the 15 minutes is a floor. So it's just 30 minutes per instance.
> On 29 June 2011 07:57, Ronoaldo Pereira <rono...@gmail.com> wrote:
>>
>> Also, on a spike in traffic today I got 70 instances up and running for
>> around 30 minutes (Java app without threading yet...). This gives around 70
>> instances * (30+15 minutes) = 3150 instance hours = $252.
>
> That's 3150 instance-minutes = 52.5 instance-hours = 4.2USD or 2.625USD at
> reserved pricing
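The quoted correction is easy to verify; here is the arithmetic as a quick sketch (the $0.08 and $0.05 per-instance-hour rates are the figures mentioned elsewhere in this thread):

```python
# Re-doing the quoted instance-hour arithmetic: 70 instances up for
# ~30 minutes, each billed for uptime plus the 15-minute surcharge.
instances = 70
billed_minutes_each = 30 + 15          # uptime + 15-minute surcharge

instance_minutes = instances * billed_minutes_each
instance_hours = instance_minutes / 60

on_demand = instance_hours * 0.08      # $/hour, regular price
reserved = instance_hours * 0.05       # $/hour, reserved price

# 3150 is instance-MINUTES, i.e. 52.5 instance-hours: about $4.20
# on demand or about $2.63 reserved, not $252.
```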
I think you have to have "room" for it to do something. Run your QPS up over 100 so that you can see it working.
I would have preferred a flat "we charge in increments of 15 minutes", i.e. once an instance is started, it will only be shut down after a multiple of 15 minutes, and charged for that uptime (whether actively handling requests or not).
Wasn't the point of GAE that you do not have to do that?
--
Love coffee? You might love Loveffee, too.
loveffee.appspot.com
On Wed, Jul 6, 2011 at 12:03 AM, zdravko <email.w...@gmail.com> wrote:

> Wasn't the point of GAE that you do not have to do that?
> Why can we not configure the types of load increases and drops under
> which it should be started and shut down?
--
100% agree; it is a complete shame that we do have to care now.
> But with the scheduler knobs and more configurability asked for, you'll
> anyway have to do that.
The only reason to care *at all* about this, is because you now *have
to* due to the new pricing.
Executive summary: What I want up front is a clear statement about what GAE is aimed at today, something definite that lets me plan. Not just marketing speak, but "this is what it is; we're aiming at this kind of use, not this, that or the other".