Building on Stephan's point that there isn't a single "best" way: in many formalisms this creates a real modeling dilemma, because you have to pick the "right" level of detail up front. In TLA, refinement lets us turn that dilemma into a modeling strategy.
In particular, approaches like (1), where processes directly read each other's state, are sometimes dismissed as "not really modeling communication". But in TLA, that abstraction is genuinely useful: it lets us start with a simple, high-level spec that captures the essential safety and liveness properties without committing early to a specific communication mechanism.
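To make that concrete, here is a minimal sketch in style (1) — module and variable names are mine, purely illustrative. The receiver's action simply mentions the sender's variable; no channel, queue, or message set appears anywhere:

```tla
----------------------- MODULE AbstractTransfer -----------------------
\* Style (1): the receiver reads the sender's variable directly.
\* No communication mechanism is modeled at all.
VARIABLES x, y

Init == x = 0 /\ y = 0

Send == x' = 1 /\ UNCHANGED y          \* sender publishes a value
Recv == x = 1 /\ y' = x /\ UNCHANGED x \* receiver reads it directly

Next == Send \/ Recv
Spec == Init /\ [][Next]_<<x, y>>
=======================================================================
```

A spec this small is easy to read and easy to model-check, which is exactly the point at this stage.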
From there, we can introduce lower-level specs using any of the more detailed variants you list (queues, sets, lossy networks, etc.), and then formally show that they refine the more abstract specs. In other words, we can show that the more concrete model is a correct implementation of the high-level one.
This is where TLA+ distinguishes itself: the ability to formally connect different points in the abstraction/precision tradeoff, instead of picking one and hoping it was the right choice. A typical flow is:
* start with a simple, readable model (often close to (1)),
* validate the core properties,
* and then refine toward more concrete communication models like (4) or your favored variant.
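As a toy illustration of that last step (all names hypothetical): assuming an abstract module `AbstractTransfer` in style (1), with variables `x` and `y` where the receiver reads `x` directly, a lower-level spec can route the value through an explicit FIFO queue and assert refinement with `INSTANCE`:

```tla
------------------------ MODULE QueueTransfer ------------------------
\* Lower-level variant: the value now travels through an explicit
\* FIFO queue instead of being read directly.
EXTENDS Sequences
VARIABLES x, y, q

Init == x = 0 /\ y = 0 /\ q = <<>>

Send == x' = 1 /\ q' = Append(q, 1) /\ UNCHANGED y
Recv == q # <<>> /\ y' = Head(q) /\ q' = Tail(q) /\ UNCHANGED x

Next == Send \/ Recv
Spec == Init /\ [][Next]_<<x, y, q>>

\* Refinement: every behavior of QueueTransfer is a behavior of the
\* abstract spec. Here the mapping on x and y is the identity, so no
\* WITH clause is needed; TLC can check the implication directly.
Abstract == INSTANCE AbstractTransfer
THEOREM Spec => Abstract!Spec
======================================================================
```

The same pattern scales to the other variants you list: swap the sequence for a set to lose ordering, or let `Recv` nondeterministically drop elements to model a lossy network, and re-check the same theorem.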
So you end up not just with multiple models, but with a clear relationship between them, avoiding both a single model at the wrong granularity and a set of disconnected ones.
This kind of abstraction is also how some algorithms were found in the first place. As Lamport himself writes about Paxos, "It would have been very hard to find this algorithm had my mind been distracted by the irrelevant details introduced by having the processes communicate by messages." [1,2]
Suppressing communication details isn't just about keeping the state space small. It can make the problem easier to think about. Refinement then gives you a way to add the details back in a controlled way.
M.
[1] https://github.com/tlaplus/Examples/blob/master/specifications/PaxosHowToWinATuringAward/Voting.tla#L8-L10
[2] https://lamport.azurewebsites.net/tla/paxos-algorithm.html