Oh, oops, I did have a question but I mis-sent it and it didn't go to councilroom-dev. Resending.
So I see two separate ways that cr code scrapes logs - one is through scrape.py, called from update-loop.py, and the second is through update.py, which starts a background task with an IsotropicScraper. What's the deal with that? Which one is used in production? Or are they both used? I don't see how they relate to each other.
Whichever one deals with S3 buckets, I won't be able to rewrite and test on my own, since I don't have access to the S3 instance... well, I'll figure out how to deal with that later I suppose. Writing the parsing code for now.
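My rough plan for working around the S3 issue: keep the parsing code decoupled from the fetch layer, so the parser only ever sees a file-like object and can be tested on local log files. Something like this sketch (the function name, log format, and all details here are hypothetical, not actual cr code):

```python
import io

def parse_game_log(stream):
    """Parse a game log from any file-like object.

    Hypothetical parser shape, not councilroom's actual code. Assumes
    toy lines like "PlayerName: 42 points" and returns a dict mapping
    player name -> score.
    """
    scores = {}
    for line in stream:
        line = line.strip()
        if not line or ":" not in line:
            continue
        name, _, rest = line.partition(":")
        words = rest.split()
        if len(words) >= 2 and words[0].isdigit() and words[1] == "points":
            scores[name.strip()] = int(words[0])
    return scores

# Because the parser only needs a file-like object, it can run against
# local files or in-memory strings; fetching from S3 would live in a
# thin wrapper elsewhere, which I can stub out.
sample = io.StringIO("Alice: 42 points\nBob: 37 points\n")
print(parse_game_log(sample))  # {'Alice': 42, 'Bob': 37}
```

That way the S3-dependent part stays small, and the bulk of the logic is testable without credentials.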