Boatload of Questions (which might turn into feature requests)


Drake

Jul 29, 2012, 3:43:31 PM7/29/12
to google-a...@googlegroups.com

Even with threading I don’t seem to be able to get an F8 backend to do more than 1.9 CPU seconds per second, which is the same number I get on an F4 backend.

Am I doing something wrong? Or is the Python GIL not capable of using all the CPU on this system? Or is an F8 CPU second twice as much as an F4’s, and this is as expected? If the latter, why is the app only 15% faster on an F8? (I know you can’t answer that without the code.)


If the GIL is the limiting factor, why not allow backends to serve more than one request at a time? And if the GIL is not the limit, why can I still only have one request per backend?
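The GIL question can be checked outside App Engine entirely. A minimal, self-contained sketch (illustrative names, no GAE APIs) showing that two threads doing pure-Python CPU work share one core under CPython’s GIL:

```python
# Sketch: CPU-bound work split across two threads under CPython's GIL.
# Both threads finish correctly, but wall-clock time is roughly the same
# as running them back to back, because only one thread can execute
# Python bytecode at a time.
import threading

def burn(n, out, idx):
    # Pure-Python arithmetic; holds the GIL the whole time it runs.
    total = 0
    for i in range(n):
        total += i * i
    out[idx] = total

N = 200000
results = [0, 0]
threads = [threading.Thread(target=burn, args=(N, results, i)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

expected = sum(i * i for i in range(N))
assert results == [expected, expected]
```

This is why a threaded Python instance tops out near one core’s worth of CPU seconds per second regardless of instance class, unless the work releases the GIL (C extensions, I/O waits).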


Why can’t I configure how many requests frontends handle? An F1 serving 8 requests is silly. An F4 handling 8 requests is often equally silly in the opposite direction.

Why can’t I configure how many requests a backend can handle? Or have a backend act like a frontend and handle 8?


Why can’t I specify a specific version of an app in Apps for Domains? It would be very useful to have user.somedomain.com be one version of an app and affiliates.somedomain.com be another, sharing a database and nothing else.


Public backends can run longer than 60s but only handle one request at a time, and they 503 if the request ahead of them in line takes more than 10s to complete. Why is this not configurable? Especially since dynamic public frontends seem to only spin up a new instance when there are 8 requests in the queue. This means you get a lot of 503s.


Deferred task queue buckets don’t have a mechanism for manually adding tokens. I’d like one. It would let me read the queue and decide how many backends to spin up, and whenever a task completed (anywhere from 1 to 500s later) I would put a token back in the bucket. (That’s not a question so much as a “does this make sense?”)
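The manual token bucket described above can be sketched as plain Python. This is a hypothetical dispatcher-side structure, not an actual task queue API — App Engine’s queue buckets are not writable like this:

```python
# Sketch of a manually refilled token bucket: the dispatcher only leases
# work while tokens remain, and each completed task returns its token,
# whether it took 1 second or 500.
import threading

class ManualTokenBucket(object):
    def __init__(self, capacity):
        self.capacity = capacity
        self.tokens = capacity
        self.lock = threading.Lock()

    def try_take(self):
        """Take one token if available; returns False when the bucket is empty."""
        with self.lock:
            if self.tokens > 0:
                self.tokens -= 1
                return True
            return False

    def put(self, n=1):
        """Called on task completion; never exceeds capacity."""
        with self.lock:
            self.tokens = min(self.capacity, self.tokens + n)

bucket = ManualTokenBucket(capacity=3)
started = [bucket.try_take() for _ in range(5)]  # only 3 of 5 succeed
bucket.put()                                     # one task finished
assert started.count(True) == 3
assert bucket.try_take() is True
```

The bucket depth, rather than a fixed rate, then becomes the knob that decides how many backends are worth spinning up.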

Memcache gets slow sometimes. Really slow. Slower than the datastore. Is there a “memcache health status” we can hit to identify when we should stop using it for a while? Master/Slave had a read-only state we could detect and change behavior on; I can’t find any such thing for HR.


Requests through the edge cache get edge cached. Why do outgoing requests not do the same? There seem to be a lot of people “scraping” content via App Engine. Why are those requests not edge cached, so that (a) it would be harder to build App Engine DoS attacks and (b) fetches would take less time?


Because I am threaded and use 200% more CPU seconds than real-time seconds, I suspect I’m a bad neighbor to share an instance with. Is this true? Am I clobbering my neighbor?


Do we always have to go through Apps for Domains? How much money do I have to send you to never have to talk to their tech support again? Similar question: naked domains. I want one. Building a URL shortener that has to have www. on the front is breaking my heart.


Brandon Wirtz
Stremor.com Chief Technology Officer
BlackWaterOps: President / Lead Mercenary


Work: 510-992-6548
Toll Free: 866-400-4536

IM: dra...@gmail.com (Google Talk)
Skype: drakegreene
YouTube: BlackWaterOpsDotCom

BlackWater Ops

Cloud On A String Mastermind Group


Marcel Manz

Jul 30, 2012, 6:15:50 AM7/30/12
to google-a...@googlegroups.com
Same question for task queue health status, or any kind of programmatic access to the last measured value (plus maybe a 5-minute average) for any service/type from the system status page: http://code.google.com/status/appengine

In my use case I would be highly interested in reading the current add-to-task-queue latency. Then, when App Engine has a hiccup, I wouldn’t have to wait on enqueuing tasks but could process them directly instead.

Certainly I could measure those latencies myself and store the information in memcache for subsequent requests, but I would prefer it if the latencies GAE already measures could be read via an API, such as an extended Capabilities API:

https://developers.google.com/appengine/docs/java/javadoc/com/google/appengine/api/capabilities/CapabilityState
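Until such an API exists, the self-measured fallback can be sketched with a rolling window. All names here are illustrative; the app would call `record()` with each observed task-add latency and consult `average()` before deciding whether to enqueue or process inline:

```python
# Sketch: a 5-minute moving average of self-measured API latencies,
# standing in for the "last measured value + 5 min average" the system
# status page shows but does not expose programmatically.
import collections
import time

class LatencyWindow(object):
    def __init__(self, window_seconds=300):
        self.window = window_seconds
        self.samples = collections.deque()  # (timestamp, millis) pairs

    def record(self, millis, now=None):
        """Add one latency sample and drop anything older than the window."""
        now = time.time() if now is None else now
        self.samples.append((now, millis))
        while self.samples and self.samples[0][0] < now - self.window:
            self.samples.popleft()

    def average(self):
        """Mean latency over the window, or None with no recent samples."""
        if not self.samples:
            return None
        return sum(ms for _, ms in self.samples) / len(self.samples)

w = LatencyWindow()
w.record(120.0, now=0)
w.record(80.0, now=10)
assert w.average() == 100.0
```

As Barry notes below the catch is scope: this only measures the corner of the system your own instances happen to live in.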

Barry Hunter

Jul 30, 2012, 10:59:57 AM7/30/12
to google-a...@googlegroups.com
On Mon, Jul 30, 2012 at 11:15 AM, Marcel Manz wrote:
>
> Certainly I could measure those latencies myself and store the information
> in memcache for subsequent requests,

I would think you would have to. App Engine is massively distributed,
operating out of multiple data centres.

So the latencies you see accessing APIs are only really going to apply
to your little corner. The (task queue) server you happen to hit could be
seeing very different access patterns to another one.

In fact, different instances of your app might even be experiencing
different latencies, because they may be separated into different
parts. Although it makes sense for App Engine to keep your instances
clustered so they can, for example, use a local memcache server.

So rather than memcache, use instance storage to store API health data?

--

It's one of the reasons you see so many people complaining about the
App Engine status site - it can be showing 'all fine' when someone's
particular app is massively degraded.

AppStatus would have to make sure it has instances running in every
corner of the App Engine datacenters, and possibly report
'best/worst/average' or something.

Marcel Manz

Jul 30, 2012, 11:28:51 AM7/30/12
to google-a...@googlegroups.com
It would be good to get an explanation from Google on how the system status is probed (please excuse me if this is explained somewhere; I haven't yet come across detailed info on this topic). E.g., is one and the same instance in just one data center probing such times, or are these aggregated measurements from several probes and data centers?

As for the task queue latency, my app has so far always seen very much the same delays as reported in the system status. I can hardly believe my app lives in the same part of the system as the probe providing measurement data to the system status all the time. So how come this matches so nicely (at least for task add)?

Kyle Finley

Aug 3, 2012, 3:33:37 AM8/3/12
to google-a...@googlegroups.com
Why can’t I configure how many requests frontends handle? An F1 serving 8 requests is silly. An F4 handling 8 requests is often equally silly in the opposite direction.

Good idea. I have created a feature request for it. Please star it:

Do we always have to go through Apps for Domains? How much money do I have to send you to never have to talk to their tech support again? Similar question: naked domains. I want one. Building a URL shortener that has to have www. on the front is breaking my heart.

Would the VIP SSL allow you to circumvent Apps for Domains? 

Kristopher Giesing

Aug 3, 2012, 3:48:55 AM8/3/12
to google-a...@googlegroups.com


On Sunday, July 29, 2012 12:43:31 PM UTC-7, Brandon Wirtz wrote:

Why can’t I configure how many requests a backend can handle? Or have a backend act like a frontend and handle 8?

The Java version has an experimental field in backends.xml called max-concurrent-requests. I would assume Python will get the same at some point.
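A sketch of what that looks like in backends.xml (the backend name, class, and values here are illustrative, and the field was marked experimental at the time):

```xml
<backends>
  <backend name="worker">
    <class>B4</class>
    <instances>2</instances>
    <!-- Experimental: let this backend serve requests concurrently -->
    <max-concurrent-requests>10</max-concurrent-requests>
  </backend>
</backends>
```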