--
You received this message because you are subscribed to the Google Groups "Google App Engine" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-appengi...@googlegroups.com.
To post to this group, send email to google-a...@googlegroups.com.
Visit this group at http://groups.google.com/group/google-appengine.
For more options, visit https://groups.google.com/d/optout.
Could this be related to this thread?
On Sunday, 25 May 2014 09:53:15 UTC+1, Robert King wrote:
Don't get me wrong - I absolutely love Cloud Endpoints - they speed up my development time and simplify my code significantly.

Having said that, I'd really like to see some clarification from Google. Are endpoints intended to be high performance? I haven't once seen it mentioned in any Google documentation that endpoints are low latency.

I've often waited 5-20 seconds for calls such as /_ah/api/discovery/v1/apis/archivedash/v1/rpc?fields=methods%2F*%2Fid&pp=0, even on apps with little traffic, tiny payloads, and no RPC calls. One of the new systems I'm building uses endpoints, but I'll have to switch away from them ASAP if I can't get some reassurance. I also don't have time to wait "a couple of months" to see if they get faster. I'd also be interested to know how efficient the Python / Go / Java / PHP endpoints are at encoding and decoding different-sized payloads with the JSON or protobuf protocols. (I'll probably have to generate these statistics myself and present some graphs, although I'm assuming Google has already performance-tested its own product?)

Cheers
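Gathering those latency statistics doesn't require much tooling. Below is a minimal sketch of one way to time repeated requests against an endpoint; the URL, request count, and function names are placeholders, not anything official:

```python
# Rough latency benchmark for a single HTTP endpoint.
# The target URL is hypothetical - point it at your own API method.
import time
import urllib.request
import statistics

def measure_latency(url, n=20):
    """Time n GET requests; return a list of wall-clock durations in ms."""
    timings = []
    for _ in range(n):
        start = time.time()
        try:
            urllib.request.urlopen(url, timeout=30).read()
        except Exception:
            continue  # skip failed requests; only completed ones are timed
        timings.append((time.time() - start) * 1000.0)
    return timings

def summarize(timings):
    """Reduce a list of millisecond timings to min/median/max."""
    return {
        "min_ms": min(timings),
        "median_ms": statistics.median(timings),
        "max_ms": max(timings),
    }
```

Run it a few times at different payload sizes and the median-versus-max spread makes the cold-start effect discussed in this thread visible immediately.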
On Sunday, 25 May 2014 08:29:48 UTC+12, Diego Duclos wrote:
I've done some (non-extensive) tests on Google App Engine, and my response times vary anywhere between 100ms and 5000ms when sending HTTP requests directly to a Cloud Endpoint. Regardless of the actual response time, the Google Cloud console always shows a processing time of around 50ms, which, while also somewhat long-ish, is much more reasonable.

For the 100ms requests, I can safely assume the other 50ms are just regular latency, but I have no idea where the Cloud Endpoint could be spending 4.5 seconds, and the logs show nothing useful at all.

Does anyone have some guidance for me regarding this? 5 seconds is unacceptably slow and makes them completely unusable.
Our infrastructure handles many, many APIs, and thus some parts are loaded only on demand. Currently we don't load an API everywhere when we see the first request (we should, and we are working on it). If you warm your API using ~50 requests, you should see fast responses from then on.
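The "~50 warmup requests" advice above can be scripted in a few lines. This is a sketch under the assumption that any lightweight GET against the API counts as a warmup hit; the URL and function name are hypothetical:

```python
# Sketch: warm an API by firing ~50 cheap requests in parallel,
# per the suggestion above. The target URL is a placeholder.
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def warm_api(url, count=50, workers=10):
    """Send `count` GET requests to `url`; return how many succeeded."""
    def one(_):
        try:
            urllib.request.urlopen(url, timeout=30).read()
            return True
        except Exception:
            return False  # a failed warmup hit still may have loaded code
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(one, range(count)))
    return sum(results)
```

Running the requests concurrently keeps the warmup under a few seconds instead of 50 sequential round trips.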
On Fri, Jun 6, 2014 at 3:14 PM, Jun Yang <jy...@google.com> wrote:
If you warm your API using ~50 requests, you should see fast responses from then on. [...]
Hi Jun, thanks for stepping in and handling some questions.

You noted in your post that the whole API isn't loaded; if we want to warm up the whole API service, should the ~50 warmup requests be split among multiple API endpoints or sent to a single endpoint?
Thanks.

------------------
Vinny P
Technology & Media Advisor
Chicago, IL

App Engine Code Samples: http://www.learntogoogleit.com
Hello Jun,

The endpoint is sitting at https://login-dot-psg-delta.appspot.com/_ah/api/version/v1/info
We are also having some performance problems with our Cloud Endpoints at the moment and will spend some time improving the response time for some endpoints.
Do you have something like Appstats planned for Cloud Endpoints, or is there a way to get a breakdown of what takes how long (datastore queries, memcache, other API calls, …)?

It seems that right now, if we want the advantages of Google Cloud Endpoints, we have to give up the performance visibility/insight that we get from 'good old' App Engine request handlers. Or am I missing something?
(PS: we're on the Python 2.7 runtime, if that matters.)
(Sorry for taking this thread a bit off-topic.)
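Until something Appstats-like exists for endpoints, one stopgap is to instrument the endpoint method yourself. Below is a hedged sketch of a "poor man's Appstats": a decorator that accumulates wall-clock time per labeled section, so the request log can show where the time went. All names here are made up for illustration, not an official API:

```python
# Sketch: accumulate per-section wall-clock timings inside a request
# handler, as a stand-in for the breakdown Appstats would provide.
import time
from collections import defaultdict
from functools import wraps

TIMINGS = defaultdict(list)  # label -> list of durations in ms

def timed(label):
    """Decorator that records how long each call to fn takes."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.time()
            try:
                return fn(*args, **kwargs)
            finally:
                TIMINGS[label].append((time.time() - start) * 1000.0)
        return wrapper
    return decorator

@timed("datastore_query")
def fetch_items():
    # Stand-in for a real datastore query.
    time.sleep(0.01)
    return ["item"]
```

Logging `TIMINGS` at the end of a request at least separates your own datastore/memcache time from the unexplained overhead in the endpoints layer.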
On Saturday, May 24, 2014 10:29:48 PM UTC+2, Diego Duclos wrote:
I've done some (non-extensive) tests on Google App Engine, and my response times vary anywhere between 100ms and 5000ms when sending HTTP requests directly to a Cloud Endpoint. [...]
Hi Jun,
Thanks so much for answering these questions - it's very helpful.
What does the additional frontend API layer do?
What's the performance impact?
On Sat, Nov 15, 2014 at 8:28 AM, J Jones <jona...@planetjones.co.uk> wrote:
Not sure if Google Endpoints is serious or not. My App Engine instance is warm and serving HTTP requests very quickly. However, my endpoints are ranging from 3 seconds to 9 seconds... the endpoint just returns a hardcoded object, so there's zero work happening there. What a joke.
Any news on this? It's almost a year since the last discussion here, and Google Cloud Endpoints is still very slow.

Regarding the suggestion to "hit the RPC 50x to warm it up": how do you know when? "Hit the RPC from time to time, and when it's slow, hit it 50x" doesn't sound like a good solution to me.
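One pragmatic answer to the "when" problem, assuming the warmup advice from Google earlier in the thread still applies, is to not wait for a cold hit at all and instead schedule periodic pings with App Engine cron. A sketch of a `cron.yaml` entry follows; the `/warmup-ping` handler path is hypothetical and would need to be a cheap handler you add yourself:

```yaml
# cron.yaml (sketch) - keep the API warm by pinging it on a schedule,
# rather than warming reactively after a slow request is observed.
cron:
- description: keep the endpoints API warm
  url: /warmup-ping
  schedule: every 5 minutes
```

This trades a small amount of steady background traffic for not having to detect coldness at request time.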