That is one of the TODO items for a DSL I'm currently working on. The DSL absolutely isn't pure actor model or "pure" anything, but it very much inherits this problem.
The v0.3 of my language design conforms to the idea that a sent message (an invocation, in my DSL) will eventually be processed, but the send itself can fail with a scheduling_error, which is a kind of exception the sender should handle locally.
Basically, right now it's a runtime-level global queue, so no OOM, but one DoSing producer can still induce an avalanche of scheduling errors among all the other producers.
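Roughly the semantics I mean, sketched in Go with a bounded channel standing in for the runtime's global queue (names like errScheduling, globalQueue and send are just placeholders for this sketch, not my DSL's actual API):

```go
package main

import (
	"errors"
	"fmt"
)

// errScheduling stands in for the DSL's scheduling_error: the runtime
// could not accept the invocation right now, and the sender must react.
var errScheduling = errors.New("scheduling_error: global queue is full")

type invocation struct{ target, method string }

// globalQueue is a placeholder for the runtime-level bounded queue;
// the bound is what prevents OOM.
var globalQueue = make(chan invocation, 1024)

// send never blocks: either the invocation is queued (and will
// eventually be processed) or the sender gets a scheduling error back.
func send(inv invocation) error {
	select {
	case globalQueue <- inv:
		return nil // accepted; eventual processing is guaranteed
	default:
		return errScheduling // queue full, handle it locally
	}
}

func main() {
	if err := send(invocation{target: "logger", method: "write"}); err != nil {
		// local handling: retry, back off, drop, etc.
		fmt.Println("send failed:", err)
	}
}
```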
In my TODO I note that I need to look into small "send" queues for isolation in v0.4 of my spec.
It's something that needs serious attention, partly because Carl identified reducing latency along critical paths as one of the main tools for performance, and adding extra send queues increases latency (and I already have small "keep the event loop busy" queues), and partly because right now one actor can dominate the scheduling queue (which is really more of an in-memory scheduling graph DB, but that's not relevant for this discussion). So I'll likely need to add extra queue-like structures to my main queue-ish scheduling DAG.
While I don't have that option for my DSL, a pure language could, I think, be synchronous on a small send queue, which for the majority of sends would effectively be asynchronous.
My DSL has an escape hatch: while there are no return values, completion can be awaited, and I could extend that so scheduling is independently awaitable, but that would really bog down the whole fire-and-forget part of the language. Hence it's very much a TODO item that I'm ignoring until I have a running 0.3.
But as said, a pure actor language could create a hybrid synchronous/asynchronous model around short send queues, where sends are asynchronous only if there is room in the send queue.
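A minimal sketch of that hybrid, again in Go with made-up names: each actor gets a small bounded send queue, a send is asynchronous while the buffer has room, and it degenerates into a blocking (synchronous) send once the buffer is full, which gives you backpressure instead of an avalanche of errors.

```go
package main

import "fmt"

type message struct{ payload string }

// actor is a hypothetical stand-in: a short per-actor send queue plus a worker.
type actor struct {
	sendQueue chan message
}

func newActor(bound int) *actor {
	a := &actor{sendQueue: make(chan message, bound)}
	go func() {
		for m := range a.sendQueue {
			fmt.Println("processing", m.payload)
		}
	}()
	return a
}

// Send is asynchronous only while there is room in the send queue;
// once the queue is full it blocks the caller until space frees up,
// i.e. the model turns synchronous exactly when backpressure is needed.
func (a *actor) Send(m message) {
	select {
	case a.sendQueue <- m: // fast path: fire and forget
	default:
		a.sendQueue <- m // slow path: synchronous, caller waits
	}
}

func main() {
	a := newActor(4)
	for i := 0; i < 16; i++ {
		a.Send(message{payload: fmt.Sprintf("msg-%d", i)})
	}
}
```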