We have had similar conversations in my current project.
We came up with the concept of an "aggregate" API, which is actually very akin to GraphQL/Falcor. Essentially, the client already knows about the relationships between objects and can issue a single call to a special API that combines all or some of the data from several other API calls. This means the client can obtain all the data it needs for a view in a single round-trip.
Example:
Given these basic API calls and return values:
v1/user/userA/ => {
    "name": "userA",
    "supervisor": "v1/user/userB"
}

v1/user/userB/ => {
    "name": "userB"
}
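For contrast, a client talking only to the base API needs two sequential round-trips here, because the supervisor URL isn't known until the first response comes back. A minimal sketch in Python (the base URL is a made-up placeholder, not part of our actual API):

import requests

BASE = "https://api.example.com/"  # placeholder host

# First round-trip: fetch userA.
user_a = requests.get(BASE + "v1/user/userA/").json()

# Second round-trip: follow the "supervisor" link from the first response.
supervisor = requests.get(BASE + user_a["supervisor"] + "/").json()

print(user_a["name"], "reports to", supervisor["name"])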
Now we want to get userA and also userA's supervisor, so we might issue a call to the aggregate API like:
{
    "get": {
        "url": "v1/user/userA/",
        "id": "userA"
    },
    "replace": {
        "userA.supervisor": "dereference(userA.supervisor)"
    }
}
And get back:
{
    "name": "userA",
    "supervisor": {
        "name": "userB"
    }
}
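On the server side, the aggregate endpoint essentially fetches the root resource and then substitutes each referenced URL with the resource it points at. Very roughly, in Python (the helper names, request shape, and HTTP plumbing are illustrative assumptions, not our actual implementation):

import requests

BASE = "https://api.example.com/"  # placeholder host

def fetch_resource(url):
    # Stand-in for however the aggregator reaches the underlying v1 API
    # (an internal call, or plain HTTP as here); trailing-slash handling
    # and error handling are ignored in this sketch.
    return requests.get(BASE + url).json()

def handle_aggregate(body):
    # Fetch the root resource named by "get".
    root = fetch_resource(body["get"]["url"])
    # For each "replace" path, walk down from the root and swap the
    # URL reference for the dereferenced resource.
    for path in body["replace"]:
        keys = path.split(".")[1:]  # drop the "get" id prefix, e.g. "userA"
        parent = root
        for key in keys[:-1]:
            parent = parent[key]
        parent[keys[-1]] = fetch_resource(parent[keys[-1]])
    return root

Running handle_aggregate on the request above would yield exactly the combined userA/userB payload shown.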
It does have a downside, however: the client can't easily cache the individual pieces of data (it could, but building a generalized mechanism for that wouldn't be easy), which means every aggregate API invocation that includes a reference to "userB" receives "userB"'s data in the payload again. In our case, though, the sensitivity of our data makes caching a huge no-no anyway, so we're not worried about the performance impact of repeatedly fetching some or all of the same data.
-Luke