Hi,
Al's approach is good. Essentially, you have to adapt to the operator sending shorter batches, but note that this "negotiation" goes both ways: you should only ask for as many entries as you're prepared to receive!
For example, I've written a prototype log implementation here which "streams" entries, building the JSON response for "get-entries" dynamically as it reads entries from the database. With that implementation, if you asked for a million entries, you might get a million entries! That would mean holding a million entries (at least; there might be more than one copy) in memory on your side, which could cause you difficulties... :-)
So you request as many as you're prepared to receive, and the log will send as many as it's prepared to send (which might be 32, or a billion!), and everyone is happy.
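To make the "negotiation" concrete, here's a minimal sketch of a paging loop in Python. The names (`fetch_entries`, `BATCH`, the 32-entry cap) are illustrative, not from any real client; a real implementation would make an HTTPS GET to the log's get-entries endpoint instead of slicing a list:

```python
BATCH = 256  # as many entries as *we* are prepared to receive at once

def fetch_entries(log, start, end):
    # Stand-in for GET /ct/v1/get-entries?start=...&end=...
    # This fake log caps every response at 32 entries, as some logs do.
    return log[start:min(end + 1, start + 32)]

def fetch_all(log, tree_size):
    entries, start = [], 0
    while start < tree_size:
        # Never ask for more than we can hold in memory;
        # note the inclusive, zero-based "end" index.
        end = min(start + BATCH, tree_size) - 1
        batch = fetch_entries(log, start, end)
        if not batch:
            raise RuntimeError("log returned no entries")
        entries.extend(batch)
        start += len(batch)  # advance by what we actually got, not by BATCH
    return entries
```

The key line is `start += len(batch)`: the client adapts to whatever batch size the log chose, so both sides stay within their limits.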
I'd also like to draw attention to the "-1" in Al's "end=STH.tree_size-1": leaving it out is the most common incorrect request our log servers receive (an off-by-one error past the end of the tree)! ;-)
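In other words (the numbers here are just for illustration): get-entries indices are zero-based and inclusive, so for an STH with tree_size N the last valid entry index is N - 1.

```python
tree_size = 1000            # from the STH
start, end = 0, tree_size - 1  # correct: entry indices 0..999 exist
# end = tree_size would ask for entry 1000, which is past the end of the tree
```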
Kind regards,
Pierre