I'm in the early stages with Salt too, but for what it's worth, I had a similar issue. My problem wasn't multi-tenancy but automatically assigning nodegroups to a large number of minions so they could be managed separately. That seems like the same problem as yours, just for a different reason. I ended up doing two things.
First, the solution you mentioned: use a top file in pillar, look at a grain (or grains), and include additional pillar data based on what you find. For me this helps with dev/staging/prod separation. Say the dev Salt master has IP 1.1.1.1 and prod has 2.2.2.2 (you could also segregate clients here by using different DNS names for the same IP); I can then provide different variables to each (like AWS credentials) while they share the same state files. top.sls:
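A minimal pillar top.sls for this setup might look like the following, assuming set-env.sls sits at the root of the pillar tree so every minion picks it up:

```yaml
base:
  '*':
    - set-env
```
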
set-env.sls:
{% if grains['master'] == '1.1.1.1' %}
include:
  - dev-stuff
{% elif grains['master'] == '2.2.2.2' %}
include:
  - prod-stuff
{% else %}
env: ''
{% endif %}
Second, I wrote some Python code that connects to an internal database. The database already knows which nodegroup each node should be in, but Salt doesn't, and this changes often. So once a day I read the database and regenerate the tail end of the master config file, so that every group in the database becomes a Salt nodegroup and each minion is listed in whatever nodegroup it should belong to. That way, as long as my database is right, Salt takes care of itself: I can issue commands to a nodegroup and whichever minions are supposed to get the command, get it. If anyone knows a better way I'd love to hear it; this was the simplest solution I found that kept everything on the master, where I wanted it.
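For illustration, here's a rough sketch of that regeneration step. It's not my actual script; the table and column names (node_groups, minion_id, group_name) are placeholders, and it uses an in-memory SQLite database where the real thing would query the internal one. It renders each group as a list-style compound match (L@...), which is one valid way to define a nodegroup in the master config:

```python
import sqlite3

def build_nodegroups(rows):
    """Turn (minion_id, group_name) rows into a nodegroups mapping.

    Each group becomes a Salt nodegroup whose target is a list-style
    match of its member minion IDs, e.g. 'L@web1,web2'.
    """
    groups = {}
    for minion_id, group_name in rows:
        groups.setdefault(group_name, []).append(minion_id)
    return {name: "L@" + ",".join(sorted(members))
            for name, members in groups.items()}

def render_config(nodegroups):
    """Render the nodegroups section appended to the master config."""
    lines = ["nodegroups:"]
    for name in sorted(nodegroups):
        lines.append("  {}: '{}'".format(name, nodegroups[name]))
    return "\n".join(lines) + "\n"

# In-memory database standing in for the real internal one.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE node_groups (minion_id TEXT, group_name TEXT)")
db.executemany("INSERT INTO node_groups VALUES (?, ?)",
               [("web1", "webservers"), ("web2", "webservers"),
                ("db1", "databases")])
rows = db.execute("SELECT minion_id, group_name FROM node_groups").fetchall()
config = render_config(build_nodegroups(rows))
```

A cron job would write that output to a file included by the master config (or splice it onto the config's tail) and then restart/reload the master so the new nodegroups take effect.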