For a real peer feed, the two machines (both being servers)
essentially tell each other what they have and ask for what
they want. In theory, each (over time) ends up with everything.
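In NNTP terms, the "offering" half of that conversation is the
IHAVE command: offer a message-id, and the peer answers 335
("send it") or 435 ("don't want it / already have it"). A rough
Python sketch, assuming a plain TCP connection on port 119 and
no authentication (host, message-id and article text are
placeholders):

    import socket

    def offer_article(host, message_id, article_lines):
        # Offer one article to a peer via IHAVE (RFC 3977, 6.3.2)
        with socket.create_connection((host, 119)) as sock:
            f = sock.makefile("rb")
            f.readline()                          # 200/201 greeting
            sock.sendall(f"IHAVE {message_id}\r\n".encode("ascii"))
            reply = f.readline().decode("ascii", "replace")
            if reply.startswith("335"):           # peer wants it
                for line in article_lines:
                    if line.startswith("."):      # dot-stuffing
                        line = "." + line
                    sock.sendall(line.encode("utf-8") + b"\r\n")
                sock.sendall(b".\r\n")            # lone dot ends article
                print(f.readline().decode())      # 235 ok, 436/437 not
            else:
                print("peer declined:", reply)    # 435 = already has it
            sock.sendall(b"QUIT\r\n")

(Streaming feeds typically use CHECK/TAKETHIS instead, but the
shape of the conversation (offer, accept/decline, transfer) is
the same.)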
Most of the internet protocols are relatively simple to grok.
If you read through them, you can understand why certain error
messages (result codes) may be necessary in certain cases.
Things get a bit funky because most services have to retain
forward and backward compatibility, to some extent. So, they
have to be able to indicate what sorts of capabilities they
have and then adapt to the capabilities of the current peer.
Some other peer may have a different set of capabilities, so
the "dialog" with that peer will differ.
If you are just trying to emulate a client, then things
are simpler; you query *the* server for the articles that
you want. Typically, you fetch the HEADERs for them
to present to your human user. When he indicates an
interest in a particular article, you fetch the BODY
to present to the user. This is why you will often see
a pause when entering a newsgroup (as the newest headers
are retrieved) and then a separate pause-per-message
as you *initially* examine each message (once you've looked
at a message, its BODY is likely cached on your client).
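In code, that flow is just: select the group, pull the newest
overview/HEADER data in one go, then fetch a BODY only when the
user opens a message. A minimal sketch with Python's stdlib
nntplib (deprecated, and removed in 3.13, but it maps 1:1 onto
the protocol); the server and group names are made up:

    from nntplib import NNTP    # stdlib reader-mode client

    with NNTP("news.example.com") as srv:
        resp, count, first, last, name = srv.group("comp.arch.embedded")

        # the pause on *entering* the group: one OVER for the
        # newest headers
        resp, overviews = srv.over((max(first, last - 49), last))
        for art_num, over in overviews:
            print(art_num, over.get("subject", ""), over.get("from", ""))

        # the pause-per-message: BODY fetched only when opened
        if overviews:
            art_num, over = overviews[-1]         # newest article
            resp, info = srv.body(str(art_num))
            text = b"\n".join(info.lines).decode("utf-8", "replace")
            print(text[:200])

srv.over() is the bulk "fetch the HEADERs" step; each overview
dict (subject, from, references, byte/line counts) is what a
reader shows in its message list.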
This HEADER-then-BODY sequence allows filters to be applied to
HEADERs to economize on the BODYs that the user likely
won't want to see.
[If you want to filter on the content of BODYs, then you
have to fetch the bodies in order to apply those filters.
E.g., I filter based on things like "is this a top post?",
"does this have a high proportion of quoted material
and relatively little NEW content?", "does this contain
profanity?", etc. So, I can decide which headers to
expose to the user and, thus, which bodies he will possibly
want (and which he *won't*!)]
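A toy version of that two-stage screening (the thresholds and
word lists are placeholders, not what I actually use):

    def header_pass(over, kill_subjects=("MAKE MONEY FAST",)):
        # cheap, header-only screening (no BODY fetched yet)
        subject = over.get("subject", "").lower()
        return not any(k.lower() in subject for k in kill_subjects)

    def body_pass(lines, max_quoted=0.7, bad_words=("spamword",)):
        # screening that needs the BODY itself; 'lines' is the
        # decoded article body as a list of strings
        quoted = sum(1 for ln in lines if ln.lstrip().startswith(">"))
        if quoted / max(len(lines), 1) > max_quoted:
            return False    # mostly quoted, little NEW content
        text = " ".join(lines).lower()
        if any(w in text for w in bad_words):
            return False    # profanity (or other killfile words)
        # a "top post?" test would also go here, e.g. checking
        # whether all new text sits above the first quoted block
        return True

Only messages that survive header_pass() ever have their BODYs
fetched; body_pass() then decides which of those actually get
offered to the user.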