One common thing I see at various clients is that the client does a GET to fetch links, then POSTs / PUTs to one of those links. The schema of what to POST / PUT is simply "known" by the client. I see a potential issue here: the server cannot later add required information without making it optional. For example, if a back-end policy change means the server now needs some sort of anti-forgery token or cluster id, that change may be deemed necessary and required, yet it will break existing clients.
I would have thought (in keeping with the analogy of a human walking through web pages) that the initial GET would return links, that the client would then GET one of those links to receive a representation (with a submission link) to be "filled in", and that the client would POST / PUT the filled-in representation to the submission link. This way the server is free to add required information later, and as long as clients submit the filled-in representation (copying fields they don't know about verbatim), they will not be broken.
I've generally stuck to the latter approach, though I see many people skip the additional GET and simply have the client know about the full schema.
Would you say one is preferable to the other, or am I being pedantic? I'd love to hear your thoughts.
Summary of approach one:
Client                     Server
------                     ------
   --------- GET ---------->
   <-------- Links ---------
   --------- POST --------->
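To make approach one concrete, here's a rough sketch in Python using requests. The base URL, the "orders" link relation, and the payload fields are all made up for illustration:

```python
import requests

BASE = "https://api.example.com"  # hypothetical API, for illustration only

# Initial GET returns a map of link relations to URLs
# (response shape assumed here: {"links": {"orders": "..."}}).
links = requests.get(BASE).json()["links"]

# The payload schema is hard-coded into the client. If the server later
# requires a new field (say, an anti-forgery token), this request breaks.
payload = {"name": "Widget", "quantity": 3}

resp = requests.post(links["orders"], json=payload)
resp.raise_for_status()
```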
Summary of approach two:
Client                                              Server
------                                              ------
   ----------------------- GET ----------------------->
   <---------------------- Links ----------------------
   ---------------- GET rel=foo link ----------------->
   <- Form representation (with defaults filled in) ---
   --------------- POST filled-in form --------------->
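And approach two under the same made-up API, where the form representation carries a submission link and default field values (the "fields" / "submit" response shape is again an assumption, not from any real service):

```python
import requests

BASE = "https://api.example.com"  # same hypothetical API as above

links = requests.get(BASE).json()["links"]

# Extra GET: fetch the form representation, defaults already filled in
# (assumed shape: {"fields": {...}, "submit": "..."}).
form = requests.get(links["orders"]).json()

# Start from the server's defaults and overwrite only the fields this
# client understands; anything the server added later (an anti-forgery
# token, a cluster id) is copied back verbatim, so the client keeps working.
fields = dict(form["fields"])
fields["name"] = "Widget"
fields["quantity"] = 3

resp = requests.post(form["submit"], json=fields)
resp.raise_for_status()
```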