Technical issues


Arun Prasad

Mar 26, 2026, 2:06:19 AM
to ambuda-discuss
Setting aside all feature work, there are three big technical issues remaining:

1. Managing the current Flask + Celery setup, which consumes more and more memory until the site slows to a crawl and restarts. Options are:

(a) figure out why it's consuming so much memory (1.5 GB at peak) and tame the code,
(b) pay for a nicer server and avoid this problem entirely,
(c) re-architect the site to avoid these problems (more static assets, a switch to Go/Rust, etc.)
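For option (a), the first step is probably just measuring where the memory goes. Here's a rough sketch using only the standard library; `json` is a stand-in for whatever heavy dependency we'd actually profile (sqlalchemy, etc.), and `resource` is Unix-only:

```python
import resource
import tracemalloc

# Peak resident set size of this process so far.
# Note: ru_maxrss is KiB on Linux but bytes on macOS.
peak_rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(f"peak RSS: {peak_rss} (platform-dependent units)")

# Measure the heap cost of a single import. `json` is only a stand-in;
# in Ambuda we'd point this at sqlalchemy or another heavy dependency.
tracemalloc.start()
import json  # noqa: F401

current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(f"import cost: {current / 1024:.1f} KiB")
```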

2. Support blue/green deployments

Right now the site is on bare metal, and when I deploy, the site goes down. This is actually worse than it sounds because if I'm (e.g.) updating a template, the templates are updated before the code is. This isn't an atomic deploy, and it should be. Various PaaS solutions are available if there's no easy way to do this on commodity servers.
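For what it's worth, atomic deploys are doable on a commodity server without a PaaS: put each release in its own directory and flip a `current` symlink, since renaming one symlink over another is atomic on POSIX filesystems. A sketch — the function and paths are mine for illustration, not our actual deploy script:

```python
import os

def atomic_switch(releases_dir: str, new_release: str, current_link: str) -> None:
    """Atomically repoint `current_link` at `releases_dir/new_release`."""
    target = os.path.join(releases_dir, new_release)
    tmp = current_link + ".new"
    if os.path.lexists(tmp):
        os.remove(tmp)
    # Build the new symlink under a temporary name, then rename it over
    # the old link. os.replace is atomic within a single filesystem, so
    # readers see either the old release or the new one, never a mix.
    os.symlink(target, tmp)
    os.replace(tmp, current_link)
```

Because templates and code both live under the one symlinked release directory, they switch together; the web workers then just need a graceful restart (e.g. gunicorn's HUP signal) to pick up the new code.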

3. Figure out a scaling strategy

We use a single DigitalOcean droplet, which is fine for our current scale of roughly 1 request per second, but I have no clue how the site will do at 100x that. This might involve anything up to and including moving off of DO entirely.
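One way to sanity-check the 100x worry with arithmetic: by Little's law, average concurrency = arrival rate × mean service time. The 50 ms service time below is an assumption I'm inventing for illustration, but if it's in the right ballpark, 100 rps keeps only ~5 requests in flight on average — plausibly within reach of a single larger droplet:

```python
# Little's law: L = lambda * W, where L is the average number of
# requests in flight, lambda the arrival rate, and W the mean service time.

def needed_concurrency(rps: float, service_time_s: float) -> float:
    return rps * service_time_s

# Current load vs. a hypothetical 100x, assuming 50 ms per request.
print(needed_concurrency(1.0, 0.050))    # → 0.05
print(needed_concurrency(100.0, 0.050))  # → 5.0
```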

Arun

Bakul Shah

Mar 31, 2026, 1:15:11 AM
to Arun Prasad, ambuda-discuss
On Mar 25, 2026, at 11:06 PM, Arun Prasad <aru...@gmail.com> wrote:

> Setting aside all feature work, there are three big technical issues remaining:
>
> 1. Managing the current Flask + Celery setup, which consumes more and more memory until the site slows to a crawl and restarts. Options are:
>
> (a) figure out why it's consuming so much memory (1.5 GB at peak) and tame the code,
> (b) pay for a nicer server and avoid this problem entirely,
> (c) re-architect the site to avoid these problems (more static assets, a switch to Go/Rust, etc.)

Have you looked at Hugo? https://gohugo.io/documentation/
There are lots of themes for it + it is very fast if you have mostly static assets. Lots of organizations + people are using it.

2. Support blue/green deployments

> Right now the site is on bare metal, and when I deploy, the site goes down. This is actually worse than it sounds because if I'm (e.g.) updating a template, the templates are updated before the code is. This isn't an atomic deploy, and it should be. Various PaaS solutions are available if there's no easy way to do this on commodity servers.

With Hugo this should be less of an issue even if you have thousands of pages.


> 3. Figure out a scaling strategy
>
> We use a single DigitalOcean droplet, which is fine for our current scale of roughly 1 request per second, but I have no clue how the site will do at 100x that. This might involve anything up to and including moving off of DO entirely.

Wait, is the site on bare metal (physical h/w colocated somewhere) or a droplet? Anyway, 1 request/sec seems far too low to worry about.


> Arun


Arun Prasad

Mar 31, 2026, 2:21:02 PM
to ambuda-discuss
Thanks for the tip on Hugo. It will help partially but not completely. And my mistake, I meant a droplet, not bare metal.

Some more notes on the site architecture might make the problem clearer.

We currently use a Python backend (Flask) to handle all requests. Texts and dictionaries are mostly static content, but the proofing interface needs read-write support, and so do small library features like user settings and bookmarks. For redundancy, we run two Python web workers behind gunicorn. They share some memory thanks to copy-on-write, but I think they still hold redundant memory because each worker separately imports sqlalchemy and other libraries and keeps its own side data.
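If the workers aren't already sharing imports, gunicorn's `preload_app` setting loads the application once in the master before forking, so both workers share sqlalchemy's pages copy-on-write instead of importing it twice. A minimal `gunicorn.conf.py` sketch; the bind address and worker count here are illustrative:

```python
# gunicorn.conf.py (sketch)

# Load the app in the master before forking so workers share its
# memory pages copy-on-write instead of re-importing everything.
preload_app = True
workers = 2
bind = "127.0.0.1:8000"  # illustrative
```

A related trick is calling `gc.freeze()` after startup so the cyclic garbage collector stops touching (and thereby dirtying) those shared pages. The caveat with `preload_app` is that per-worker state such as database connections then needs explicit post-fork handling.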

Ambuda also handles a variety of async tasks, such as running OCR, splitting PDFs into separate images, and calling other APIs. These touch the database and likewise pull in sqlalchemy and other libraries.

In total, Ambuda runs around 5-6 Python processes, each of which pays the ~40 MB cost of the Python interpreter, plus the cost of Python imports and side data, plus the normal memory consumed while handling requests and async tasks. This is not efficient.

So while texts could be served as static files, this doesn't fix the core problem, which is the resource-heavy proofing setup that needs async workers to process PDFs, run OCR, tokenize, run reports, etc.

Some options I am mulling over:
- spend some time on removing or optimizing heavy imports and data.
- switch to an async Python framework, which means we can reduce the number of Celery processes we create by putting more async tasks in the web backend itself.
- move off of Python to something like Go. I like Rust much more than Go but I value quick iteration and am wary of Rust's compilation times.
- throw money at the problem and get a better droplet.
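On the async-framework option, here's the shape of what I mean: a minimal stdlib `asyncio` sketch of an in-process task queue, where I/O-bound jobs (e.g. waiting on an OCR API) run in the web process's event loop instead of a separate Celery worker. All names are illustrative, not Ambuda's real code, and CPU-heavy work like PDF splitting would still want a process pool:

```python
import asyncio

async def worker(queue: asyncio.Queue) -> None:
    """Drain jobs in the same process/event loop as the web server."""
    while True:
        job = await queue.get()
        if job is None:  # sentinel: shut down
            queue.task_done()
            break
        await job()
        queue.task_done()

async def main() -> list[str]:
    done: list[str] = []

    async def fake_ocr_page(n: int) -> None:
        await asyncio.sleep(0)  # stand-in for an awaitable API call
        done.append(f"page-{n}")

    queue: asyncio.Queue = asyncio.Queue()
    task = asyncio.create_task(worker(queue))
    for n in (1, 2, 3):
        await queue.put(lambda n=n: fake_ocr_page(n))
    await queue.put(None)   # signal shutdown after the real jobs
    await queue.join()      # wait until every queued job is processed
    await task
    return done

print(asyncio.run(main()))  # → ['page-1', 'page-2', 'page-3']
```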

Arun