Thanks for the clarification... I apologize: I totally misinterpreted your question in my first response. This makes perfect sense now. It's very common to have some service calls that are internal only, and others that are externally accessible.
The most common way to handle this is to avoid exposing your services directly to external clients in the first place. Instead, keep them on a private network where they are only accessible to each other. Then, place a proxy server at the edge of the network, with a public IP that is accessible to external clients and a private one on your internal network. This proxy can forward the externally-accessible routes to the appropriate internal back-end services. This deployment pattern is often known as an API gateway or service gateway.
Lagom's development mode implements a simple service gateway that can be used this way. When you "runAll" and then connect to http://localhost:9000, you are going through the service gateway. You can configure which routes are forwarded to each service using service ACLs in the service descriptor. If you follow the examples in the docs, you might be calling "withAutoAcl(true)" in your service descriptor. This means: make all service calls externally accessible and forward them all from the gateway, which is not what you want in your case. Instead, you can call withAcls and pass a list of ServiceAcl objects, constructed using one of the factory methods in the companion object. This allows you to specify regular expressions to match against the request path, and optionally the HTTP method you wish to allow.
In production, the specifics will vary depending on the deployment environment you're using. In Kubernetes, this concept is called Ingress and is usually implemented using nginx. AWS offers a couple of different solutions: API Gateway and Application Load Balancer. These have somewhat different feature sets and intended use cases, and you can use them separately or together. If you are self-hosting without Kubernetes, you can set up a server such as nginx or HAProxy to perform this role.
For a lot of people, this level of security is enough: if the internal services aren't reachable over the network from external clients, you can be reasonably sure those clients won't be able to call the internal service routes. If you want an extra layer of security (for example, to protect against misconfiguration of the gateway, or a network intrusion), then you can implement additional measures.
A secret token in the headers, as you describe, is a perfectly fine approach, and its main advantage is simplicity of implementation. It does have a few disadvantages:
- It won't help you determine which service issued the request, if you need to audit that information. For that, you would need per-service tokens.
- If the secret is somehow compromised, you'll need to change it everywhere, all at once. This will probably require taking your entire system offline for redeployment.
- If you are concerned about network security, then a cleartext token sent with each request is a risk. An attacker could potentially sniff network traffic to obtain the token.
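If you do go with the shared-secret header, one detail worth getting right is the comparison itself. Here is a minimal, self-contained sketch (the header name and the shape of the headers map are assumptions, not a Lagom API) that uses a constant-time comparison so the check doesn't leak information through timing:

```scala
import java.nio.charset.StandardCharsets.UTF_8
import java.security.MessageDigest

object SharedSecretAuth {
  // Hypothetical header name; pick whatever convention your services share.
  val HeaderName = "X-Internal-Token"

  // MessageDigest.isEqual compares in constant time, unlike String.equals,
  // which returns early on the first mismatched character.
  def isAuthorized(headers: Map[String, String], secret: String): Boolean =
    headers.get(HeaderName).exists { token =>
      MessageDigest.isEqual(token.getBytes(UTF_8), secret.getBytes(UTF_8))
    }
}
```

In a real service you'd read the secret from configuration or a secrets store rather than passing it around as a plain string.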
If you're worried about any of these, a more secure approach would be to use a public key infrastructure that allows each service to be issued its own key pair. Requests can include a client signature (for example, using JWT) that can be verified by the receiving service. Using a unique key pair per service allows you to securely identify the specific service making the request, and to rotate an individual service's key pair without having to change all the others. Because the signatures use asymmetric cryptography, no secrets need to be transmitted over the network. The tradeoff is that this is much more complex to set up and administer, and you will need a solution to handle key revocation.
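The core idea can be shown with just the JDK's java.security classes; a real deployment would use a JWT library and a key distribution mechanism on top, so treat this purely as a sketch of the sign-with-private-key, verify-with-public-key flow:

```scala
import java.nio.charset.StandardCharsets.UTF_8
import java.security.{KeyPair, KeyPairGenerator, PublicKey, Signature}

object ServiceSignatures {
  // In practice each service would be issued its key pair by your PKI,
  // not generate one ad hoc like this.
  def newKeyPair(): KeyPair = {
    val gen = KeyPairGenerator.getInstance("RSA")
    gen.initialize(2048)
    gen.generateKeyPair()
  }

  // The calling service signs the request payload with its private key.
  def sign(keys: KeyPair, payload: String): Array[Byte] = {
    val sig = Signature.getInstance("SHA256withRSA")
    sig.initSign(keys.getPrivate)
    sig.update(payload.getBytes(UTF_8))
    sig.sign()
  }

  // The receiving service verifies with the caller's public key (looked up
  // by service identity); no shared secret ever crosses the network.
  def verify(publicKey: PublicKey, payload: String, signature: Array[Byte]): Boolean = {
    val sig = Signature.getInstance("SHA256withRSA")
    sig.initVerify(publicKey)
    sig.update(payload.getBytes(UTF_8))
    sig.verify(signature)
  }
}
```

A tampered payload or a signature from a different service's key simply fails verification, which is what lets the receiver both authenticate and identify the caller.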
Best,
Tim