One of the issues with these kinds of gateways is that the folks who build them (Amazon, Google, Microsoft, etc.) don't generally talk about them. Not out of secrecy (although I'm sure there's an element of that), but simply because the need for such a gateway is fairly rare outside those environments. More specialized companies, such as Dropbox, have gone with different DNS endpoints specifically to avoid needing a common gateway.
Building a gateway is complex and comes with real trade-offs. Generally speaking, using different DNS names for basic partitioning will be easier.
As a general pattern, I see:
The gateway routes to internal endpoints based on the combination of {serviceProvider}/{version}.
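To make that pattern concrete, here's a minimal sketch of an edge router that picks an internal backend from the first two path segments. Everything in it is a placeholder for illustration: the service names, versions, and internal hostnames are assumptions, not anyone's actual gateway layout.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

// backendFor maps a {serviceProvider}/{version} pair to an internal
// endpoint. The services and hostnames here are hypothetical placeholders.
func backendFor(provider, version string) (*url.URL, bool) {
	backends := map[string]string{
		"storage/v1": "http://storage-v1.internal:8080",
		"storage/v2": "http://storage-v2.internal:8080",
		"compute/v1": "http://compute-v1.internal:8080",
	}
	raw, ok := backends[provider+"/"+version]
	if !ok {
		return nil, false
	}
	u, err := url.Parse(raw)
	return u, err == nil
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Expect paths of the form /{serviceProvider}/{version}/...
		parts := strings.SplitN(strings.TrimPrefix(r.URL.Path, "/"), "/", 3)
		if len(parts) < 2 {
			http.Error(w, "expected /{serviceProvider}/{version}/...", http.StatusBadRequest)
			return
		}
		target, ok := backendFor(parts[0], parts[1])
		if !ok {
			http.Error(w, fmt.Sprintf("unknown service %s/%s", parts[0], parts[1]), http.StatusNotFound)
			return
		}
		// Hand the request off to the chosen internal endpoint.
		httputil.NewSingleHostReverseProxy(target).ServeHTTP(w, r)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

In a real gateway the routing table would come from service registration rather than a hard-coded map, but the shape is the same: resolve {serviceProvider}/{version} once at the edge, then forward.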
For companies such as Google / Amazon / Microsoft there are huge benefits to this approach: you can have a single, highly specialized team that deals with the boundary. That team worries about things like efficient TLS termination, translation from a text protocol to a binary protocol such as Protocol Buffers, Bond, Thrift, Avro, or Coral, and all of the other craziness that comes from living at the Edge. This approach also makes dealing with DDoS easier, as the team that manages that endpoint lives in that space every day and is staffed for dealing with such issues.
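As a small sketch of the TLS-termination point: the edge process owns the certificates and terminates TLS once, then speaks plain HTTP (or an internal RPC protocol) to the backends. The hostname and certificate file names below are assumptions for illustration only.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// The internal backend speaks plain HTTP; the hostname is a placeholder.
	backend, err := url.Parse("http://orders.internal:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(backend)

	// The edge process holds the certificates and terminates TLS once,
	// so internal services never deal with handshakes or cert rotation.
	// cert.pem / key.pem are placeholder file names.
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", proxy))
}
```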
The downside is (obviously) the Single Point of Failure problem. Rolling upgrades across the front-end fleet are, at least in my experience, the biggest risk, as there is No Place Like Production. No matter how you test, Prod (especially at scale) will expose bugs unique to that environment.
Now, if we were to talk about pure front-end API proxies, such as Azure API Management, Apigee (I think - I'm not very familiar with them), or others, then it's a slightly different and more nuanced topic.
Cheers,
Chris