Is it possible to do better than random?
Absolutely.
Is it impossible to do perfectly?
Absolutely.
So even to begin with, you have to say what level of quality is acceptable.
Then, to even measure quality, you need a definition of spam, and
you'll find that humans disagree far more often than you would
expect. In the well-established field of experimental test-collection
information retrieval, where the goal is to find the documents
relevant to a user's query, the sets of documents that two human
professionals judge relevant (call them A and B) will typically
agree only about 60% of the time
(|A intersection B| / |A union B| is about 0.6).
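To make that overlap figure concrete, here is a tiny sketch of the
agreement measure in Python; the document IDs and judgements are made
up purely for illustration.

def jaccard(a: set, b: set) -> float:
    """Overlap of two relevance judgements: |A intersection B| / |A union B|."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Hypothetical relevant sets from two assessors for the same query.
assessor_a = {"doc1", "doc2", "doc3", "doc5", "doc8"}
assessor_b = {"doc2", "doc3", "doc5", "doc7", "doc9"}

print(jaccard(assessor_a, assessor_b))  # 3 shared out of 7 total, about 0.43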
Beyond that, the biggest problem is that you are in an adversarial
relationship with the spammers: once you start interfering with them,
they will change their approach. Retrospectively, given a decent
learning set, current machine learning approaches will do a decent job
of identifying spam in those past sets. But as the spammers learn what
gets through and what doesn't, reliance on past spam becomes less and
less useful. In the 2000s, some 30% of Google's search effort was spent
on this cat-and-mouse game with the spammers.
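For what it's worth, here is a minimal sketch of that retrospective
approach, assuming you already have a labelled learning set of past
posts. The posts, labels, and the bag-of-words model below are my own
invention for illustration, not anyone's actual filter.

# Train on labelled past posts, then score new ones.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

past_posts = [
    "Buy cheap pills online, limited offer",      # spam
    "Make money fast working from home",          # spam
    "Question about threading in my C program",   # legitimate
    "Summary of last week's discussion on NNTP",  # legitimate
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = legitimate

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(past_posts)
model = MultinomialNB().fit(X, labels)

# Looks fine on posts that resemble the old spam...
print(model.predict(vectorizer.transform(["cheap pills, great offer"])))
# ...but a reworded pitch shares no vocabulary with the learning set,
# so the old data gives the model nothing to latch onto.
print(model.predict(vectorizer.transform(["helpful wellness products for your family"])))

That last line is the adversarial problem in miniature: once the
spammers reword the pitch, yesterday's learning set stops helping.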
That's all in theory. In practice, any barrier at all to spam on
Usenet will reduce it, since the return from the spam is so small -
there are better places for the spammers to operate. What would doom
an effort such as the one you suggest is the complaints from
borderline-legitimate posters about posts improperly identified as
spam. Usenet is dying fast enough as it is; it can't afford to
send those posters packing!
Chris