--
You received this message because you are subscribed to the Google Groups "Google App Engine" group.
To post to this group, send email to google-a...@googlegroups.com.
To unsubscribe from this group, send email to google-appengi...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/google-appengine?hl=en.
Has that actually changed? It's the same thing, just stated in a
slightly different way.
You are not always charged the 15-minute 'fee'. In the example in the
FAQ, only 4 minutes were charged for the intermediate gap, not a full 15
minutes.
--
Datastore APIs
Q: Which operations are charged for?
A: There are 3 categories of Datastore operations:
- Write operations (Entity Put, Entity Delete, Index Write), each of these operations will cost $0.10 per 100k operations
- Read operations (Query, Entity Fetch), each of these operations will cost $0.07 per 100k operations
- Small operations (Key Fetch, Id Allocation), each of these operations will cost $0.01 per 100k operations
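The rate table above can be sketched as a small cost calculator (an illustrative sketch, not an official tool; the category names and function are made up here, but the rates are the ones quoted above):

```python
# USD per 100k operations, per the rates quoted in this FAQ.
RATES_PER_100K = {
    "write": 0.10,  # entity put, entity delete, index write
    "read":  0.07,  # query, entity fetch
    "small": 0.01,  # key fetch, id allocation
}

def datastore_cost(category: str, num_ops: int) -> float:
    """Estimated cost in USD for num_ops operations of the given category."""
    return RATES_PER_100K[category] * num_ops / 100_000

print(datastore_cost("write", 1_000_000))  # 1M writes -> $1.00
```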
Q: Under the new scheme, is it more economical to do a keys-only query that fetches 1000 keys, and then do a get on the 500 of them that I need, or just do a regular (non keys-only) query for all 1000 directly?
A: The first is more economical. Fetching 1000 keys + fetching 500 entities = $0.0001 + 0.00035 = $0.00045; fetching 1000 entities = $0.0007.
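The comparison above can be verified with a few lines of arithmetic (a sketch using the per-operation rates quoted in this FAQ; variable names are illustrative):

```python
# Per-operation rates from the FAQ above (USD per operation).
SMALL_OP = 0.01 / 100_000  # key fetch (keys-only query results)
READ_OP  = 0.07 / 100_000  # entity fetch

# Option A: keys-only query for 1000 keys, then get the 500 entities needed.
option_a = 1000 * SMALL_OP + 500 * READ_OP  # 0.0001 + 0.00035 = 0.00045

# Option B: regular (non keys-only) query fetching all 1000 entities.
option_b = 1000 * READ_OP                   # 0.0007

assert option_a < option_b  # the keys-only approach is cheaper
```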
--
A: Many customers have optimized for low CPU usage to keep bills low, but in turn are often using a large amount of memory (by having high latency applications).
--
Hi Greg,

Thanks for putting together the update. I have some questions, if you wouldn't mind answering them:

(1) Why no RAM-charge?

The argument that a CPU-charge was not reflective of an application's real resource usage is understandable, as is the point that the underlying reason for changing the model was that RAM usage was not taken into account in the old model. Why not then simply add a RAM-charge to account for this, instead of flipping the ecosystem on its head and drastically changing the pricing model, as you have now done? This question has been asked many times since the new pricing model was released and it has yet to receive an official answer.
As a side note, and with regards to each instance being pre-allocated a fixed amount of RAM upon creation: Wouldn't it be great if the geniuses at Google were able to come up with a way to utilise the non-used part of an instance's RAM for other things (eg. for extra global Memcache storage), and then when the instance requires more RAM, the system could just flush whatever is in the "non-used" part of that instance's RAM and hand it back to the instance? That would make for some serious resource-usage-optimisation!

(2) How do you propose we run MapReduce type jobs under the new pricing model?

The new (time used + 15 minutes) time charge for an instance would mean that running a MapReduce type job will come with a large cost overhead. For example, if such a job completes in 5 minutes and requires a number of new instances to be spun up and down (as the MapReduce philosophy dictates), 75% of the cost of this job would be in paying that extra 15-minute surcharge for each instance. So the cost of a 5-minute MapReduce job would be inflated by 4x. If we assume the new instance prices are 10x those of other offerings, a 5-minute MapReduce job will now cost 40x what it would cost elsewhere.
If the solution is to not run any MapReduce type jobs, then the past three years have been extremely poor education by Google, as we have been told time and time again that we should embrace and utilise the power of being able to run multiple instances in parallel to complete work that would have otherwise taken a much longer time to complete in serial. Furthermore, it is a great shame that the brilliant work done by Google Engineers such as Mike Aizatskyi and Brett Slatkin in creating design patterns and libraries that allow for massively-parallel work to be done is completely voided with the new prices.
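The 75% and 4x figures in the example above follow directly from the assumed (time used + 15 minutes) billing rule; a quick sketch (all names illustrative):

```python
# Cost inflation under an assumed (time used + 15 minutes) billing rule,
# for a short MapReduce-style job, as described above.
job_minutes = 5
surcharge_minutes = 15

billed_minutes = job_minutes + surcharge_minutes  # 20 minutes billed per instance
overhead_share = surcharge_minutes / billed_minutes  # fraction of cost that is surcharge
inflation = billed_minutes / job_minutes             # billed time vs. actual work time

print(overhead_share, inflation)  # 0.75 4.0
```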
(3) How can you justify your instance price when compared with Amazon EC2?

On the surface, when comparing the instance costs of GAE and AWS, GAE appears quite competitive with its $0.08/hour vs AWS's default Standard Small at $0.085/hour for on-demand instances. However, the RAM available for each instance type is dramatically different: GAE provides 128MB RAM vs AWS's 1.7GB RAM. Even AWS's $0.02/hour Micro instance trumps a GAE instance in terms of available memory, with each AWS Micro instance given 613MB RAM.
If the answer to this question is something along the lines of "our prices are a true reflection of the overhead required to run a PaaS offering", then see the next question...

(4) Isn't gifting all apps $60/month a recipe for disaster?

If we assume that the new pricing model is a "fair" reflection of the costs associated with sustainably running GAE, then Google will effectively be gifting every single app $60/month under the new pricing model. With "the majority of current active apps" falling under the new free quotas, this means Google will be running at a huge loss of $700/year on most apps. Doesn't this go totally against the point of making GAE sustainable by introducing these new prices in the first place?

The only way I can see this making any sense is if the new pricing model is indeed massively marked up compared to the real costs of running GAE, and $60/month is nowhere near what it actually costs Google to support these free apps. From an economically viable point of view, I see no other way to justify the apparent $60/month "gift" to every app, other than that the true cost is an extremely small fraction of that $60/month.
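For what it's worth, the $60/month figure roughly matches one instance left running all month at the quoted $0.08/hour (a back-of-the-envelope sketch, not an official calculation):

```python
# Rough check of the "$60/month gift" figure: one always-on instance
# at the quoted $0.08/hour, assuming a 30-day month.
hours_per_month = 24 * 30
monthly = 0.08 * hours_per_month  # ~57.6, roughly the $60/month cited
yearly = monthly * 12             # ~691,  roughly the $700/year cited

print(round(monthly, 2), round(yearly))
```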
Greg, I'm sure most of us would appreciate a candid response to each of the above questions, and thanks in advance for your replies.
Hi Nick, I've answered your questions below as best I can...

On Sat, Jun 25, 2011 at 4:40 AM, Nickolas Daskalou <ni...@daskalou.com> wrote:
Hi Greg, Thanks for putting together the update. I have some questions, if you wouldn't mind answering them: (1) Why no RAM-charge? The argument that a CPU-charge was not reflective of an application's real resource usage is understandable, and that the underlying reason for changing the model was because RAM usage was not taken into account with the old model. Why not then just simply add a RAM-charge to account for this, instead of flipping the ecosystem on its head and drastically changing the pricing model, as you have now done? This question has been asked many times since the new pricing model was released and it is yet to receive an official answer.

We have in essence added a RAM charge by charging for Instances. By having an Instance up with an allocated amount of memory, you are essentially using that RAM. So, by charging for the Instance we are charging you for the combination of the RAM and CPU. We considered splitting this charge out so we would continue to charge CPU-hours and then also charge Instance-hours (which we could've called RAM-hours). This seemed both more confusing and no cheaper, so it didn't seem worthwhile. I know there has been a lot of discussion about charging this way instead. In the end, whether you call it RAM-hours or Instance-hours, and whether or not you charge for CPU-hours on top of it, it ends up with the same result: applications are charged for the amount of RAM allocated and the amount of time it is allocated for. This means applications that want to save money will need to optimize around using fewer RAM-hours, which in essence means taking less time to get things done. But I might be misunderstanding the question, because if you feel a RAM charge is straightforward, I'm not sure I understand why you feel charging Instance-hours is "flipping the ecosystem on its head." I hope that helps give some clarity.
A: Many customers have optimized for low CPU usage to keep bills low, but in turn are often using a large amount of memory (by having high latency applications).

How does that work? Can you share an example?

Also, instead of pushing the burden of supporting threading on to the developers, shouldn't GAE optimize resources behind the scenes? You're running many instances on every physical computer, so when one instance is idle, others should be using the CPU and other resources. If keeping an idle instance in memory costs too much, then adjust the pricing by adding a cost per idle second, or something along these lines.
My point is: GAE has a lot of limitations compared to other solutions, but it has one HUGE advantage: it's simple and scales automatically. That advantage is so big that it trumps all the other limitations (at least for me). Now that I have to manage my instances and rewrite my code to handle threading, I can't help but feel that I'm losing that one big advantage.
Interesting idea. Could you use backends for non-urgent tasks? Then the scheduler wouldn't need to be involved at all.
Actually, even getting per-version config would be great. At least
then we could use a separate version for low-priority backend
processing.
Robert