On 2/4/24 11:05, Julieta Shem wrote:
> Interesting. But could all of this be done with NNTP? I mean---you
> connect, pull it all with NNTP and disconnect. But I bet it's how
> efficient the transfer are. It seems intuitive to me we can sort of
> compact it all in a package, download, unpack and deliver it to the NNTP
> server. Am I in the right direction here? Even without compression, it
> seems more efficient to do it in one package.
NNTP is really two dialects of a common protocol. The different
dialects have different requirements and capabilities.
NNTP peering is done with a (near) real-time push from the upstream
server to the downstream server, meaning the downstream server must be
online and reachable for the upstream to be able to push to it.
NNTP client access is done with an asynchronous pull: the client
connects to the server and retrieves articles whenever it chooses.
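The difference between the two dialects shows up at the wire level. As
a rough sketch (the message-ids and group name below are made up, the
commands and response codes are from RFC 3977):

```python
# Minimal sketch of the two NNTP dialects at the command level.

def ihave(msg_id: str) -> str:
    """Peering push: the upstream offers an article it already has."""
    return f"IHAVE {msg_id}\r\n"

def article(msg_id: str) -> str:
    """Client pull: the reader asks the server for an article."""
    return f"ARTICLE {msg_id}\r\n"

# Typical peering (push) exchange:
#   -> IHAVE <abc@example>       offer the article
#   <- 335                       send it
#   -> (article text, lone ".")  article terminated by a dot line
#   <- 235                       transferred OK
#
# Typical client (pull) exchange:
#   -> NEWNEWS news.software.nntp 20240201 000000 GMT
#   <- 230 (list of message-ids, lone ".")
#   -> ARTICLE <abc@example>
#   <- 220 (article follows, lone ".")

# IHAVE response codes a pushing peer has to handle:
PEER_RESPONSES = {
    "335": "send the article",
    "435": "not wanted, do not send",
    "436": "transfer failed, try again later",
    "235": "article transferred OK",
    "437": "rejected, do not retry",
}
```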
NNTP peers are all push and (near) real time. -- You can run into
strange behaviors / side effects of the protocol wherein you might be
able to get an upstream peer to re-try pushing articles to you by making
it think there was a transient connection issue, but this should not be
relied on and is almost certainly going to fail at some point, with the
only questions being when and how spectacularly.
You might be able to convince a peer to do a hybrid approach wherein
they accept pushes from you as a normal NNTP peer, but they don't
actually /feed/ you anything and instead rely on you to pull using some
sort of pullnews / suck / etc. utility. -- This would be extremely
atypical and I doubt many news masters would be willing to do it.
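The core of what a pullnews / suck style utility does is bookkeeping:
remember which message-ids were already fetched so each pull only
requests the delta from a NEWNEWS listing. A toy version of that logic
(the message-ids are made up):

```python
def new_ids(newnews_lines: list[str], seen: set[str]) -> list[str]:
    """Given message-ids from a NEWNEWS response, return the ones we
    have not fetched yet, and record them as seen for the next pull."""
    fresh = [mid for mid in newnews_lines if mid not in seen]
    seen.update(fresh)
    return fresh
```

A real puller would then issue ARTICLE for each fresh id and inject the
result into the local server; the seen-set would live on disk between
runs.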
If you want to be an NNTP peer, you need to make arrangements to be
online to receive (near) real time pushes -or- use something like UUCP
to do batch transfers.
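For the UUCP route, articles are shipped as rnews batches: each article
in the batch is preceded by a "#! rnews <byte-count>" line. A minimal
sketch of building such a batch (uncompressed; real setups often run
the batch through compression before handing it to UUCP):

```python
def rnews_batch(articles: list[str]) -> bytes:
    """Concatenate articles into an rnews-style batch, each preceded
    by a '#! rnews <byte-count>' header line."""
    out = []
    for art in articles:
        data = art.encode()
        out.append(f"#! rnews {len(data)}\n".encode())
        out.append(data)
    return b"".join(out)
```

On the receiving side, rnews reads the batch, splits it back into
articles using the byte counts, and hands each article to the local
news server.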
N.B. I'm saying "near" real time to account for a few second delay as
articles come into a server and are distributed to peers.