[google-appengine] Chat Time transcript for May 5, 2010

Jason (Google)

May 8, 2010, 7:25:28 PM
to Google App Engine
The high-level summary and complete transcript of the May 5th edition
of the IRC office hours is pasted below. Our next normally scheduled
office hour session (May 19th) is CANCELLED because of conflicts with
Google I/O -- if you happen to be coming to I/O, you can meet us in
person and have a face-to-face session. :)

IRC office hours will resume Wednesday, June 2, 7:00-8:00 p.m. PST in
the #appengine channel.

Note: On the first and third Wednesdays of every month, the App Engine
team signs into the #appengine IRC channel on irc.freenode.net for an
hour-long chat session. On the first Wednesday, we meet in the channel
from 7:00-8:00 p.m. PST (evening hours). On the third Wednesday, we're
available from 9:00-10:00 a.m. PST (morning hours). Please stop by!

- Jason



--SUMMARY-----------------------------------------------------------
- Q: When memcache set and delete operations are unavailable, does get
return stale data? A: No. In read-only maintenance periods, memcache
is disabled altogether. All gets fail (no value is returned), so stale
data is not returned. [7:02-7:03]

- Discussion on techniques for deleting large numbers of entities
[7:05-7:20, 7:24]

- Discussion on achieving many-to-many relationships using App
Engine's datastore [7:06, 7:10, 7:14-7:15, 7:18]

- Q: Why am I seeing a lot of warning logs saying "Request was aborted
after waiting too long to attempt to service your request"? A: As of
release 1.3.1, there is no longer a fixed upper limit on the number of
requests an app can service simultaneously -- the number of
application instances now scales with your load, but only if your
average request latency is under 1,000 ms. If you see this warning, it
usually indicates that your latency is too high; reducing your latency
to under 1,000 ms should make the warning go away. Note: this does not
apply to tasks, which can run for up to 30 seconds before being
terminated, so move your heavier processing to the background using
tasks. Also, take advantage of Appstats, which can help with
performance profiling. [7:15, 7:17, 7:20-7:21, 7:27]

- Discussion on RDBMS support and the benefits and restrictions of a
distributed datastore [7:24-7:30, 7:36-7:37, 7:39, 7:42-7:47]

- Discussion on storing a large (5000+) list of items with arbitrary
order using the datastore [7:33-7:35]

- Discussion on customizing and/or adding decorators to SDK methods
like get_or_insert, etc. [7:35-7:38]

- Discussion on accessing the local SQLite datastore from the 1.3.3
SDK using the App Engine Helper for Django utility [7:46-7:49, 7:52]

- Discussion on implementing counters that will remain operational
during a maintenance period when memcache and datastore writes are
disabled [7:54-7:56, 7:58-7:59]



--FULL TRANSCRIPT-----------------------------------------------------------
[7:00pm] apijason_google: Hi Everyone. It's the third Wednesday of
April, which means time for another IRC office hour session. I'll be
in here for the next 60 minutes to field your questions, and I'll try
to pull at least one other person in here before too long. Welcome!
(EDITOR'S NOTE: This chat took place on May 5th, not the third
Wednesday of April as indicated here.)
[7:02pm] mbw: I am curious about the behaviour of memcache when there
are .set/.delete failures (which we saw today)... if you can't set or
delete, does get still return data (potentially stale)?
[7:02pm] mbw: I didn't get a chance to check on that myself today.
[7:03pm] mbw: We found some bugs in our code today as a result of the
issues. We obviously are missing some unit test coverage.
[7:03pm] apijason_google: In read-only periods, you can't read from or
write to memcache, so it shouldn't return stale data, AFAIK.
[7:03pm] axiak: oh yey!
[7:03pm] Relaed: apijason_google: why do I get Deleting a composite
index failed: ApplicationError: 1
[7:04pm] mbw: apijason_google: that sounds great. Thanks.
[7:04pm] apijason_google: mbw: It's noted in the maintenance page too:
http://code.google.com/appengine/docs/java/howto/maintenance.html
[7:04pm] apijason_google: Relaed: What's your app ID?
[7:04pm] Relaed: apijason_google: clouyama
[7:04pm] mbw: I wasn't sure if memcache was in read-only mode or just
having intermittent issues.
[7:05pm] apijason_google: Relaed: I see that a couple of your indexes
are in the error state. Have you tried vacuuming?
[7:05pm] axiak: apijason_google: I'm having trouble with too much
data. I have entities that essentially expire after a certain time,
but it takes too much effort (cpu time, slowly iterating over each
element) to delete them all. Is there a better way to do bulk delete?
[7:05pm] Relaed: apijason_google: I did, and it failed saying "why do
I get Deleting a composite index failed: ApplicationError: 1"
[7:05pm] Relaed: apijason_google: *without "why do I get" part
[7:06pm] apijason_google: mbw: Yep, we were in a read-only period. Are
you subscribed to our downtime-notify announcement list? It's not very
noisy -- we just post when there are planned or unplanned downtimes --
so I encourage you to subscribe if you aren't already.
[7:06pm] mbw: axiak: for cleanup, we have a cron job fire late at
night and kick off task queues to do the work.
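(EDITOR'S NOTE: A minimal sketch of the cron-plus-task-queue cleanup mbw
describes. The URL paths are illustrative, not from the chat; the
schedule itself would live in cron.yaml, e.g. "schedule: every day 03:00"
pointing at /cron/cleanup.)

    # Python runtime, SDK 1.3.x style (taskqueue still lived under api.labs).
    from google.appengine.api.labs import taskqueue
    from google.appengine.ext import webapp
    from google.appengine.ext.webapp.util import run_wsgi_app

    class StartNightlyCleanup(webapp.RequestHandler):
        def get(self):
            # Cron only kicks off the work; background tasks do the deleting.
            taskqueue.add(url='/tasks/cleanup')

    application = webapp.WSGIApplication([('/cron/cleanup', StartNightlyCleanup)])

    def main():
        run_wsgi_app(application)

    if __name__ == '__main__':
        main()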
[7:06pm] axiak: hmm
[7:06pm] mbw: apijason_google: I must have missed it. But yes, I am
subscribed. Thanks again.
[7:06pm] ristoh: hey apijason_google: what's the recommended way
currently to do many-to-many relationships ( python ) for entities
which are all the same Kind?
[7:07pm] apijason_google: axiak: There's no real bulk delete
functionality. I suggest you look into mbw's option and do any
bulk operations using a series of background tasks, preferably in an
off-peak period.
[7:08pm] apijason_google: Relaed: I see. Can you file a new production
issue in the issue tracker? I'll get the datastore team to take a
look, and hopefully we can get it addressed soon.
[7:08pm] axiak: hmm, okay. since I can't really count, I have no idea
if my deleting task is even working; it is certainly not keeping pace
with my data collection scripts
[7:08pm] mbw: axiak: you could kick off a task, have it get some,
delete some and then if it has more work, kick off another task, etc,
up to some limit
[7:09pm] axiak: mbw: the problem is that I can't predict if I'll
exceed a deadline
[7:09pm] mbw: axiak: the tasks should do small amounts of work,
nothing crazy
[7:10pm] apijason_google: ristoh: I model many-to-many relationships
in GAE's datastore very similar to the way I model them in a
traditional relational database -- using an intermediary kind with two
reference properties. Of course, you can't query very effectively, but
if you have the key, it helps to find all related entities fairly
efficiently.
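(EDITOR'S NOTE: A sketch of the intermediary-kind approach Jason describes
for many-to-many relationships; the kind and property names are
illustrative.)

    from google.appengine.ext import db

    class Person(db.Model):
        name = db.StringProperty()

    class Group(db.Model):
        title = db.StringProperty()

    class Membership(db.Model):
        # One entity per (person, group) pair -- the intermediary kind.
        person = db.ReferenceProperty(Person, collection_name='person_memberships')
        group = db.ReferenceProperty(Group, collection_name='group_memberships')

    def groups_for(person):
        """All Groups a Person belongs to: one query plus one batch get by key."""
        memberships = Membership.all().filter('person =', person).fetch(1000)
        # get_value_for_datastore returns the stored key without dereferencing it.
        group_keys = [Membership.group.get_value_for_datastore(m) for m in memberships]
        return db.get(group_keys)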
[7:10pm] axiak: also, it seems that my task that is currently just
deleting is taking quite a bit of CPU power (about 1 second of CPU /
second)
[7:11pm] apijason_google: axiak: Are you bulk deleting?
[7:11pm] Wooble: axiak: or just use nickjohnson's bulk updater tool...
[7:11pm] axiak: apijason_google: yes, 10 entities at a time
[7:12pm] Wooble: axiak: are you doing a keys_only query? fetching the
whole entities is really slow and completely unnecessary...
[7:13pm] mbw: axiak: it can't keep up with data creation? Are you
using __key__ queries and just db.delete'ing by db.Key or are you
loading up a lot of data?
[7:13pm] mbw: Wooble:
[7:13pm] axiak: oh I'm not!
[7:13pm] axiak:
[7:14pm] axiak: so I would do: SELECT __key__ WHERE ... ?
[7:14pm] Wooble: yeah, if you insist on using GQL
[7:14pm] apijason_google: The actual delete operation will take the
same amount of CPU, but you'll save on the query for sure, since you
won't have to pull down all the data for the matching entities.
[7:14pm] mbw: SELECT __key__ from Kind Where... ya
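(EDITOR'S NOTE: A sketch of the keys-only batch delete discussed above:
fetch only __key__, delete a modest batch, and chain another task if more
work remains. The kind name, task URL, and batch size of 50 are
illustrative.)

    import datetime

    from google.appengine.api.labs import taskqueue
    from google.appengine.ext import db, webapp

    BATCH_SIZE = 50   # small batches, per the advice above; tune against timeouts

    class PurgeExpired(webapp.RequestHandler):
        def post(self):
            q = db.GqlQuery("SELECT __key__ FROM Sample WHERE expires < :1",
                            datetime.datetime.utcnow())
            keys = q.fetch(BATCH_SIZE)
            if keys:
                db.delete(keys)                    # delete by key; no entity data pulled
                taskqueue.add(url='/tasks/purge')  # re-enqueue for the next batch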
[7:14pm] ristoh: apijason_google: I was thinking of tracking the
many_to_many relationship by encoding the related keys onto the key of
'relation' entity, and then separately from that add a 3rd Kind to
track the relation from each entity to the relationship
[7:15pm] tensaix2j: 05-05 07:13PM 11.052 Request was aborted after
waiting too long to attempt to service your request. This may happen
sporadically when the App Engine serving cluster is under unexpectedly
high or uneven load. If you see this message frequently, please
contact the App Engine team.
[7:15pm] apijason_google: ristoh: Interesting. What does the third
kind get you? Meaning, why use three instead of two?
[7:15pm] axiak: what's a good number to bulk releat?
[7:16pm] axiak: bulk delete*
[7:16pm] tensaix2j: what happened to app engine?
[7:16pm] apijason_google: axiak: As many as you can. Eventually,
you'll hit the 30 second limit and that represents too many.
[7:16pm] Relaed: apijason_google: Thank you, I have filed the issue
report just now.
[7:16pm] axiak: apijason_google: I mean, if I do results = q.fetch(N);
db.delete(results)
[7:16pm] axiak: what's a good number of N
[7:16pm] axiak: does it matter much?
[7:17pm] Wooble: axiak: the last time I did it, I had good luck with
50 at a time and fairly frequent timeouts with 200. I didn't bother
to try to find an optimum middle.
[7:17pm] apijason_google: tensaix2j: That error is generally thrown
when your average live latency (incoming HTTP requests, not background
tasks) exceeds 1,000 ms or so. Is your ms/s graph higher than 1,000
ms?
[7:17pm] apijason_google: Relaed: Great, thanks. I'll take a look
tomorrow, and hopefully get it resolved very soon.
[7:17pm] tensaix2j: no
[7:17pm] Wooble: axiak: the number of affected indexes is probably
relevant to the best N to use, if I understand the delete operation
correctly.
[7:18pm] apijason_google: tensaix2j: What is your app ID?
[7:18pm] tensaix2j: bcideathnotes
[7:18pm] ristoh: apijason_google: I can use a relation entity, which
has a key_name with up to 5 concatenated keys, for counters and to
create the initial relationship with minimum writes, then I can use
that to create the 3rd Kind (with taskqueue) to be used for querying
the relations between entities
[7:18pm] tensaix2j: my ms/s is < 600
[7:18pm] axiak: Thanks guys, __key__ saved quite a bit of CPU time
[7:18pm] axiak: it screams now
[7:19pm] apijason_google: tensaix2j: That's not what I see. It looks
like your ms/request is steady at 10,000 ms right now.
[7:19pm] mbw: axiak: one thing you can do, put in a timeout limit of
like 15s for the delete (or lower) and if the delete times out you
won't deadline, and you can then re-queue the task
[7:20pm] mbw: apijason_google: on the topic of the concurrent request
limit. Do the task queues figure into that? Do they figure into the
ms/request graph?
[7:20pm] axiak: mbw: sorry, what is this timeout limit you speak of?
[7:21pm] tensaix2j: why is that.
[7:21pm] apijason_google: mbw: No, I don't believe so. They do factor
into your CPUs/s graph, but I believe ms/req only reflects live
traffic.
[7:21pm] apijason_google: tensaix2j: I don't know what your
application is doing. Are you making a lot of URL fetch calls?
[7:22pm] tensaix2j: no.
[7:22pm] tensaix2j: I don't have any http request at this moment, and
why is it still 10000 ms/r
[7:23pm] apijason_google: tensaix2j: I see a little bit of traffic.
Not a lot though.
[7:23pm] apijason_google: You can see your request logs to get a sense
of which handlers are being hit and how long they're taking to
execute.
[7:24pm] apijason_google: It looks like your getCurse handler is the
culprit.
[7:24pm] tensaix2j: it is weird, because it has been running for 2
weeks without problem
[7:24pm] mbw: axiak: I am not able to find the docs right now for
setting that up, I forget where I saw that
[7:24pm] tensaix2j: and it only happens this morning.
[7:24pm] pv__: Any plan to support RDBMS in future?
[7:25pm] mbw: tensaix2j: do you have more concurrent hits today or
anything?
[7:25pm] tensaix2j: nope.
[7:25pm] axiak: lol
[7:25pm] apijason_google: pv__: We can't comment on anything other
than the items on the public roadmap.
[7:25pm] apijason_google: pv__: And it's not on the public roadmap.
[7:25pm] Wesley_Google: pv__: what are your requirements?
[7:25pm] Wooble: "wordpress"
[7:26pm] mbw: hehe
[7:26pm] pv__: No, if you could give a clue with time frame, we could
be prepared
[7:26pm] pv__: I have a huge portal to build on appengine
[7:26pm] pv__: I'm tired of Big table -
[7:26pm] mbw: i would guess soonish to never
[7:26pm] Wooble: pv__: "it's not on the roadmap" is their nice way of
saying "we're never going to do that"
[7:26pm] pv__: designing the Database taking lots of time for me
[7:27pm] Wesley_Google: pv__: you can build a huge portal without
RDBMS... what's "tiring" about BT? you're not really even interfacing
with it
[7:27pm] mbw: Does anyone know where the docs are for the new rpc
stuff and setting up the timeout for a query/get ?
[7:27pm] apijason_google: tensaix2j: Strange. I can't get a more
accurate picture of what exactly is taking so long, but we did just
roll out Appstats which could help you. It basically gives you a view
of the performance of each handler and graphs what operations are
eating up the most time.
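(EDITOR'S NOTE: Enabling the Appstats profiler Jason mentions takes a few
lines in the Python runtime: an appengine_config.py that wraps every
webapp request in the recording middleware. The admin UI additionally
needs a handler mapping as described in the Appstats documentation.)

    # appengine_config.py -- a minimal Appstats setup sketch
    def webapp_add_wsgi_middleware(app):
        from google.appengine.ext.appstats import recording
        return recording.appstats_wsgi_middleware(app)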
[7:27pm] pv__: I'm breaking my head with it for about a year
[7:28pm] pv__: you are right I'm not willing to do it in the tough way
[7:28pm] apijason_google: Wooble: I usually qualify with "It's not on
the roadmap... yet."
[7:28pm] pv__: I couldn't even do the 'Hibernate' way of doing it
[7:28pm] tensaix2j: ok
[7:28pm] pv__: At least you could support Hibernate
[7:28pm] ristoh: is there a way to decorate an inherited method, such
as 'get_or_insert' in a class that inherits from db.Model?
[7:28pm] mbw: axiak: http://code.google.com/appengine/docs/python/datastore/functions.html#create_rpc
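(EDITOR'S NOTE: The timeout mbw is pointing at is the RPC deadline added
in SDK 1.3.1. A sketch, with an illustrative kind name: pass an rpc
object carrying a short deadline to fetch() so a slow datastore call
fails fast and the work can be re-queued instead of hitting the
30-second request deadline.)

    from google.appengine.ext import db
    from google.appengine.runtime import apiproxy_errors

    class Sample(db.Model):              # illustrative kind
        expires = db.DateTimeProperty()

    rpc = db.create_rpc(deadline=5, read_policy=db.EVENTUAL_CONSISTENCY)
    try:
        keys = Sample.all(keys_only=True).fetch(50, rpc=rpc)
    except (db.Timeout, apiproxy_errors.DeadlineExceededError):
        keys = []   # timed out; re-queue the task rather than risk the deadline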
[7:29pm] Wesley_Google: pv__: are you using Django?
[7:29pm] pv__: Pure Java. Pl.
[7:29pm] apijason_google: tensaix2j: If you notice anything out of
sorts, like certain queries taking a huge amount of time, let us know.
There haven't been any widespread reports of major performance drops
today though, at least as far as I'm aware.
[7:29pm] Wesley_Google: pv__: i guess if you said Hibernate, i
should've guessed not
[7:29pm] max-oizo: Hi Jason. Only one question about SSL in current
roadmap. Will it be supported on all platforms and browsers or not?
(for example without ie6/solaris/etc)?
[7:30pm] pv__: I started with python/django, switched to my known
language - java
[7:30pm] mbw: Wesley_Google: just curious, what was the thought if he
was using django?
[7:30pm] Wesley_Google: pv__: IIRC, isn't Hibernate even higher-level
than JDO or JPA? does it give you a SQL interface?
[7:30pm] apijason_google: pv__: Hibernate isn't supported on App
Engine because it doesn't work within the sandbox restrictions.
Several framework/tool vendors have built special versions to support
App Engine, but so far, this hasn't happened for Hibernate yet.
[7:31pm] apijason_google: max-oizo: That is our intention, yes.
Without going into specifics, it's a tough problem. But that's what
we're aiming for.
[7:31pm] axiak: Would db.EVENTUAL_CONSISTENCY help the speed of a
delete operation if I don't mind how consistent it is?
[7:32pm] mbw: axiak: looks like thats only useful on the query
[7:32pm] axiak: yeah that's what I thought
[7:32pm] pv__: at least is there any Java CMS on AE?
[7:32pm] max-oizo: 2Jason: Great news, thanks!
[7:33pm] pv__: I was thinking of Liferay once, but it sounds like it's
boiling the ocean
[7:33pm] apijason_google: pv__: Not sure, but we're always adding new
libraries and projects to our wikis. Let me get you the links, one
sec.
[7:33pm] tensaix2j: ms/r has dropped to 0 now, but still getting the
same error.
[7:33pm] MrSpandex: So I'd like to store a large list of items that
keeps an arbitrary order (think a playlist) - what's the best way to do
that? I'm thinking it would have to support >5k entries. I'm assuming
that an entity with 5000 children and an order property would cause
way too many puts when it was reordered, or is that ordering handled
at a lower level? I considered a linked list approach, but is that any
more efficient to GET than one big list property?
[7:33pm] mbw: MrSpandex: do you need to be able to query the list?
[7:33pm] Wesley_Google: mbw> if you use Django, you can use
django-nonrel, which is a fork of Django that lets you run it on top
of any non-relational DB, including App Engine -- it should run a pure
Django app on App Engine just fine without having to change your models
[7:34pm] MrSpandex: mbw: generally, no
[7:34pm] pv__: I know there is some effort on Python cms
[7:34pm] mbw: Wesley_Google: cool
[7:34pm] MrSpandex: mbw: but I'd like to update it when any of the
members change - maybe a song title changes in the playlist
[7:34pm] Wesley_Google: pv__: yes, i think there are more than 1 CMS
project but don't know what they are off the top of my head
[7:34pm] mbw: MrSpandex: Then you could always use a blob/text
property and pickle a list to that. Save your indexed properties to
help you find them.
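(EDITOR'S NOTE: A sketch of mbw's suggestion: keep the big ordered list
pickled in a single unindexed blob property, and keep only the fields
you query as real properties. Names are illustrative; watch the 1 MB
entity size limit for very large lists.)

    import pickle
    from google.appengine.ext import db

    class Playlist(db.Model):
        owner = db.StringProperty()    # real, indexed property used to find the playlist
        items = db.BlobProperty()      # the whole ordered list, pickled, unindexed

        def get_items(self):
            return pickle.loads(self.items) if self.items else []

        def set_items(self, item_list):
            self.items = db.Blob(pickle.dumps(item_list, 2))

    # Reordering is then a single put of one entity, not thousands of child puts:
    #   pl.set_items(new_order); pl.put()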
[7:35pm] apijason_google: pv__: Here are the pages I was referring to:
http://groups.google.com/group/google-appengine/web/google-app-engine-open-source-projects
and ://groups.google.com/group/google-appengine-java/web/will-it-play-in-app-engine.
[7:35pm] ristoh: Wesley_Google: is there a way to use decorators on
inherited models, such as .get_or_insert ? I'm trying to encode an
hourly timestamp into my key_names. Or do I have to rewrite my own
construction method that overrides get_or_insert and decorate that?
[7:35pm] apijason_google: Doh, imagine the second http.
[7:35pm] pv__: I did spend a fair amount of my time searching and
trying to convert Liferay -
[7:35pm] MrSpandex: mbw: I guess that makes sense - just doesn't seem
clever enough
[7:35pm] moraes: here's another: http://code.google.com/hosting/search?q=label:appengine
[7:36pm] mbw: ristoh: you should be able to override and then call the
base class get_or_insert. We have our own KeyName class that sets up
the key_name on its constructor (and other magic)
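(EDITOR'S NOTE: A sketch of the override mbw describes: a classmethod
that builds the hourly key_name ristoh wants and then delegates to the
stock, transactional db.Model.get_or_insert. The kind and naming scheme
are illustrative.)

    import datetime
    from google.appengine.ext import db

    class HourlyStat(db.Model):
        count = db.IntegerProperty(default=0)

        @classmethod
        def hourly_key_name(cls, label, when=None):
            when = when or datetime.datetime.utcnow()
            return '%s:%s' % (label, when.strftime('%Y%m%d%H'))

        @classmethod
        def get_or_insert_hourly(cls, label, **kwargs):
            return cls.get_or_insert(cls.hourly_key_name(label), **kwargs)

    # HourlyStat.get_or_insert_hourly('signups') returns (or creates) this hour's entity.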
[7:36pm] Wesley_Google: pv__> what does javax.jdo.Query/JDOQL *not*
give you that you feel you need a real RDBMS for?
[7:37pm] apijason_google: tensaix2j: Can you describe what your calls
are doing? Otherwise, try to deploy a new version with Appstats so you
can get a better window into the issue.
[7:37pm] pv__: Wesley: DB Design , Joins
[7:37pm] moraes: aw man. only today i discovered that 1.3.3 made
several blobstore functions protected.
[7:37pm] pv__: How do I do recursive queries?
[7:37pm] Wesley_Google: ristoh> App Engine only uses the 2.5.x
runtime, meaning only function/method decorators. class decorators
were added in 2.6, so that's not supported yet.
[7:38pm] tensaix2j: Nothing much, just retrieve some data from
datastore with limit and return to the requestor..
[7:38pm] Wesley_Google: ristoh> you can also choose to customize your
own keys instead of using what the system gives you by default... then
you can encode anything into it that you wish
[7:38pm] moraes: ristoh, implement your own get_or_insert. it is a
small one.
[7:39pm] mbw: The "Features on deck" on the roadmap is so tasty right
now... I can't wait.
[7:39pm] apijason_google: pv__: No one is going to argue that
designing for GAE's datastore is more convenient than for a RDBMS. But
the restrictions do exist for a reason, and if/when your app reaches
1,000+ QPS, you won't have the same issues that a MySQL developer
would have, for example. A lot of the scalable datastores that are
coming out, from Google and others, operate in a very similar manner
and have similar limitations.
[7:40pm] tensaix2j: and the size of data is the same when it was still
working
[7:40pm] Wesley_Google: pv__> yeah, what jason said.
[7:41pm] apijason_google: tensaix2j: That is very odd then. Can you do
me a favor and try re-deploying your app? We did have a short downtime
today -- what time did you notice your app start throwing the errors?
[7:42pm] mbw: Any of you googlers manning the sandboxes this year at
I/O? They look kinda small. (I will be at the WebFilings booth)
[7:42pm] pv__: When you say 1000+qps, do you mean, its okay to have
nested queries forth and back to the datastore?
[7:42pm] tensaix2j: an hour ago.
[7:42pm] Wesley_Google: mbw> that's great! we'll c u there!!
[7:42pm] apijason_google: mbw: Yep, I'll try to stop by.
[7:43pm] Wesley_Google: pv__> QPS means how many hits per second your
web app is fielding from the wild
[7:43pm] Wesley_Google: "queries per second"
[7:43pm] apijason_google: tensaix2j: The downtime ended about 5 hours
ago. Is your app developed in Python or Java?
[7:43pm] tensaix2j: is there any way to retrieve the source file from
the app engine?
[7:43pm] tensaix2j: Python
[7:43pm] mbw: tensaix2j: no, thank god
[7:44pm] Wesley_Google: pv__> what he's trying to say is that while
you'll have to work at sharding your MySQL DB 8-ways to support your
scaling, your friend the App Engine developer is out having a beer
[7:44pm] mbw: well, actually... doesn't appstats allow you to dig down
into calling code.. that's a bit of a workaround
[7:44pm] Wooble: well, not unless you planned for it in advance.
[7:44pm] pv__: that's exactly my question: if I'm making 100 queries
in place of 1 query in an RDBMS, is it well tuned in the Datastore?
[7:44pm] MrSpandex: Is there any reason App Engine couldn't inherit
from the Java Principal object so I could get more than just name from
the security context? It would be great to cast Principal to User
[7:44pm] apijason_google: tensaix2j: Let's continue this thread over
email. Can you ping me at apijason at google.com?
[7:45pm] tensaix2j: ok.
[7:45pm] pv__: Wesley, I agree, but starting trouble is huge with AE
[7:45pm] Wooble: pv__: that one query in the RDBMS is going to be very
slow and won't scale. the database just hides the complexity from
you.
[7:45pm] apijason_google: tensaix2j: The short answer is no, we can't
retrieve your source files.
[7:45pm] pv__: I could have completed at least 10 apps by now for the
time I spent with BT
[7:45pm] axiak: pv__: I also think you have to consider
denormalization much earlier with AE
[7:46pm] axiak: but with experience it becomes quick and natural to
denormalize so that you don't have to join often
[7:46pm] pv__: axiak: that's the big headache I'm in trouble with
[7:46pm] ristoh: anyone of you guys run appengine django helper with
the latest 1.3.3 sdk and sqlite? is there a preferred way to get
console access to my local sqlite datastore?
[7:46pm] axiak: pv__: it's different from RDBMS, but it's not really
harder
[7:47pm] apijason_google: pv__: In the distributed data world, you
have to work to denormalize. Throw the normal forms out the window.
That helps reduce your reliance on joins.
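(EDITOR'S NOTE: A small illustration of the denormalization being
discussed, with hypothetical kinds: copy the field you need at read
time onto the referencing entity so rendering a list is one query with
no extra gets. The trade-off is updating the copies if the source
value changes.)

    from google.appengine.ext import db

    class Author(db.Model):
        display_name = db.StringProperty()

    class Post(db.Model):
        author = db.ReferenceProperty(Author)
        author_name = db.StringProperty()   # deliberately duplicated from Author
        body = db.TextProperty()

    def create_post(author, body):
        # Copy the display name at write time; listing posts then needs no extra gets.
        return Post(author=author, author_name=author.display_name, body=body).put()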
[7:47pm] Wesley_Google: pv__> FWIW, you're not alone... the datastore
is the most complex component of App Engine, and it's the most
difficult to wrap your head around...
[7:47pm] mbw: ristoh: a local remote api console works pretty well
[7:47pm] Wesley_Google: ... esp. if you're coming from relational...
[7:47pm] mbw: at this point, I am a bit worried about going back to
RDBMS, I am so used to datastore now.
[7:47pm] johnlockwood: ristoh: I do
[7:47pm] apijason_google: tensaix2j: One more question: are you using
any frameworks? Django, webapp, etc.?
[7:47pm] Wooble: ristoh: I suspect accessing the actual underlying
sqlite database is going to be ugly.
[7:48pm] amalgameate: hi, is there anything bad about having cyclical
references so that you don't have to query for relations either way?
[7:48pm] amalgameate: *reference properties
[7:48pm] ristoh: mbw: after 1.3.3 I started using remote_api_shell to
localhost and customizing my shell with history etc. I guess I just
got spoiled enjoying things like running manage.py shell or manage.py
console with appengine helper for too long
[7:49pm] johnlockwood: Wooble, ristoh: the underlying sqlite db IS
ugly
[7:49pm] tensaix2j: webapp
[7:49pm] mbw: ristoh: you can always customize manage.py, add or
change the management commands it has available
[7:51pm] MrSpandex: are there any plans to move to datanucleus 2?
[7:52pm] apijason_google: tensaix2j: I sent out an internal email to
investigate further. Hopefully we can get this straightened out soon.
[7:52pm] ristoh: yeah, that + accessing over remote_api is probably
what I'll stick to doing, glad to hear the comments here so I will not
go spending too much time trying to interface with sqlite locally
[7:52pm] tensaix2j: ok, thanks a lot
[7:53pm] Wesley_Google: MrSpandex> not sure yet... it's not on the
roadmap right?
[7:53pm] ristoh: any suggestions on running scalable counters that
would survive an appengine scheduled outage? should I run my counter
creation through taskqueue?
[7:53pm] apijason_google: MrSpandex: I know Max occasionally upgrades
the version of DataNucleus between releases. I haven't heard anything
specifically about 2, but assuming it's compatible and performant,
there's a strong chance that it will be integrated eventually.
[7:54pm] ristoh: I went through counter class to sharded counters to
memcached counters, just to find out they'll track nothing during a
scheduled outage
[7:54pm] apijason_google: ristoh: That could work. Tasks are
automatically re-tried if they fail, so they should persist long
enough, past the downtime, and eventually succeed.
[7:54pm] Wesley_Google: ristoh> that's not a bad idea, however keep in
mind that taskqueues won't be able to write during a scheduled
downtime either... however, they will be rerun until they succeed.
[7:54pm] apijason_google: ristoh: Yeah, unfortunately we take memcache
down during maintenances.
[7:54pm] Wesley_Google: yeah, what jason said
[7:55pm] apijason_google: We're here for 5 more minutes, so get your
questions in.
[7:55pm] mbw: I'm not sure how a counter would work when things are
read only.. you would have to use something off site
[7:55pm] ristoh: and I can create as much taskqueue work as I want
(within the taskqueue and storage quota limits)?
[7:55pm] amalgameate: hi, why do queries from within template filters
take so long?
[7:56pm] apijason_google: mbw: Task queues are available during read-
only periods, and they automatically retry until they succeed.
[7:56pm] amalgameate: i'm doing a query that takes 100ms in a handler
and it's taking a couple of seconds when done from a template filter
[7:56pm] axiak: another question: if I'm trying to count entities and
it takes longer than the deadline, should I serialize the query and
use a task queue? is this the right way?
[7:56pm] apijason_google: ristoh: within the limits, yes. Currently,
billed apps are limited to 1,000,000 tasks a day, hopefully to
increase sooner rather than later.
[7:56pm] ristoh: so, I would just add my counter.incr( key_name )
tasks to the taskqueue and after the outage is over, they would
gradually be written?
[7:57pm] apijason_google: amalgameate: Interesting. I'm not sure why
off the top of my head. Is there any way to do the query in the
handler and pass the data to the template instead?
[7:58pm] Wesley_Google: mbw> it's not a bad idea to build an external
"to-do" list either... then apply those changes serially when the
system allows writes again, but creating tasks will work too, only
asynchronously whereas a serial log will ensure more ordering if
desired
[7:58pm] amalgameate: apijason_google: yea, that's what i'm doing now,
but it was a whole lot more convenient doing it with filters
[7:58pm] amalgameate: apijason_google: just wondering if there was
some inner workings that i wasn't aware of that forbid doing queries
from template filters
[7:58pm] apijason_google: ristoh: Yep. This works for counters well
since you don't care about when the task started. If you were updating
more sensitive data, you would have to write logic to not overwrite
newer values with older data, but counters are a good use case for
your proposed solution.
[7:58pm] Wesley_Google: ristoh> yes; axiak> yes
[7:59pm] apijason_google: Double answer!
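(EDITOR'S NOTE: A sketch of the counter approach discussed above, with
illustrative names and no sharding: the caller enqueues the increment
as a task, the task does a transactional increment, and if the
datastore is read-only the task simply fails and is retried until
writes return.)

    from google.appengine.api.labs import taskqueue
    from google.appengine.ext import db, webapp

    class Counter(db.Model):
        count = db.IntegerProperty(default=0)

    class IncrementCounter(webapp.RequestHandler):
        """Task handler: one transactional increment per task execution."""
        def post(self):
            name = self.request.get('name')
            def txn():
                counter = Counter.get_by_key_name(name) or Counter(key_name=name)
                counter.count += 1
                counter.put()
            db.run_in_transaction(txn)   # raises during read-only maintenance,
                                         # so the task retries automatically

    def incr(name):
        # Callers enqueue the increment instead of writing to the datastore directly.
        taskqueue.add(url='/tasks/incr-counter', params={'name': name})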
[7:59pm] mbw: amalgameate: I think it is best practice to get your
context complete before rendering your template. It makes it easier
to unit test for sure
[7:59pm] amalgameate: apijason_google: also, is there a good article
written on how the local SDK works? i want to understand why things
like loading css/js/images take so long on dev server
[7:59pm] Wesley_Google: axiak> another option is to maintain a count
so you never have to count
[7:59pm] apijason_google: amalgameate: I don't know of any. Wesley, do
you have any ideas?
[7:59pm] amalgameate: mbw: i see i see
[7:59pm] axiak: Wesley_Google: yeah but I never did that and now I
have a lot of entities to count...
[8:00pm] apijason_google: amalgameate: I don't know how in-depth the
books go on the local server, but it IS open source, so you can take a
look at what's going on.
[8:00pm] Wesley_Google: amalgameate> not sure of any documentation
like that for the SDK
[8:00pm] mbw: amalgameate: static files are just slow mostly because
it's single threaded and just serially loads files.. it's not able to
handle a bunch of requests at the same time like most sites
[8:00pm] Wesley_Google: most ppl just realize that it's dev and don't
care about loading time
[8:01pm] apijason_google: Alright guys, I have to take off. Thanks for
the great chat! The next one is in two weeks. Hmm, that's the first
day of I/O so we might actually cancel that one. Stay tuned to the
discussion group.
[8:01pm] amalgameate: aaah gotcha..ok thanks!
[8:01pm] mbw: wohooo! I/O cya there
