The first *really important* thing to understand about boto is that, under the hood, it uses httplib, which isn't thread-safe. So you have to do something like:
sdb_con = boto.connect_sdb(credentials, credentials, region=sdbregion)
sdb_ptr = sdb_con.get_domain(storename)
for each connection object you want to use - you cannot get away with sharing just a single sdb_con across threads...
>>> So my connection pooling class is created once (not per thread?) or uses classmethods?
Yes, it is neat to use a class to encapsulate all of this, but I'm sure there are other ways of doing it.
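One way to encapsulate the per-thread connection idea is with `threading.local`. Here is a minimal sketch; `PerThreadConnection` and its `factory` argument are illustrative names, and the demo uses a dummy factory so it runs without AWS - in real code the factory would be something like `lambda: boto.connect_sdb(credentials, credentials, region=sdbregion)`:

```python
import threading

class PerThreadConnection:
    """Hands each thread its own connection object (hypothetical helper)."""

    def __init__(self, factory):
        self._factory = factory            # e.g. a lambda calling boto.connect_sdb
        self._local = threading.local()    # per-thread storage

    def get(self):
        # Create the connection lazily, once per thread.
        if not hasattr(self._local, "conn"):
            self._local.conn = self._factory()
        return self._local.conn

# Demo with a dummy factory (object) so the sketch is runnable offline:
pool = PerThreadConnection(factory=object)

conns = {}
def worker(name):
    conns[name] = pool.get()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Each thread got its own distinct "connection" object.
assert len({id(c) for c in conns.values()}) == 3
```

Because the class is created once and handed to all threads, CherryPy's worker threads can share the one pool object while still each getting their own underlying connection.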
>>> and cherrypy kicks off a thread which will eventually call my page handling function
Yes, that's about it - pretty much as you describe.
I really believe it is worth making a tidy, formal arrangement for accessing AWS via boto, because boto does very little by way of retries, and you will want to catch all sorts of low-level exceptions, log them and deal with them. For example, SDB has real issues if you try to do a lot of writes in a short time. You pretty much *must* use batch_write, but even then there are throughput limits - and the way SDB responds to hitting them is to fail the request, so you will have to work out which exceptions can be retried, and use a back-off-and-retry algorithm to manage them. All the same, there are real limits to the performance you can get out of SDB (it isn't boto's fault).
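The back-off-and-retry idea can be sketched like this. `with_backoff` and `flaky` are illustrative names, and the demo raises a plain IOError as a stand-in; with boto you would wrap the call to `batch_put_attributes` and catch `boto.exception.SDBResponseError`, retrying only the error codes you have determined are transient:

```python
import random
import time

def with_backoff(operation, retries=5, base_delay=0.1, retryable=(Exception,)):
    """Run operation(), retrying retryable errors with exponential
    backoff plus jitter; re-raise once retries are exhausted."""
    for attempt in range(retries):
        try:
            return operation()
        except retryable:
            if attempt == retries - 1:
                raise
            # Sleep 2^attempt * base_delay, jittered so that many
            # threads don't all retry in lock-step.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))

# Demo: an operation that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("ServiceUnavailable")  # stand-in for a retryable SDB error
    return "ok"

result = with_backoff(flaky, base_delay=0.01, retryable=(IOError,))
assert result == "ok" and calls["n"] == 3
```

The important design point is that only errors you know to be transient go in `retryable`; anything else should fail loudly and get logged.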
If your usage is light though, SDB is extremely flexible.
If your usage is likely to be heavy in terms of writes or deletes (maybe more than, say, 50 writes/sec), then you should forget SDB and go to DynamoDB (that is what we are up to right now...)
Hope this helps