Hi Søren,
Indeed, it was great to meet you and talk about all things GraphQL! :)
There is nothing in sangria that will prevent you from doing this. In fact, it is very easy to do, and sangria itself does it a lot as well. The project now contains about 900 tests, and most of them create a schema on the fly for every individual test case. I recently started work on CATs support, which takes this to the next level: the executable schema is created dynamically based on the IDL definitions. You can find the code in a separate branch:
https://github.com/sangria-graphql/sangria/compare/cats
In our company we use GraphQL/sangria as well, and we have already thought about implementing very similar functionality. At the moment the schema is pretty static, but we have user-defined data structures in the form of custom product attributes. In order to provide a nice GraphQL API for these, we would like to generate parts of the GraphQL schema based on the definitions that come from a database. The only thing that has prevented us from doing it so far is that it is simply too expensive to load all these custom data types from the database on every request, or at least we would like to avoid that if possible. I don't think that building an in-memory representation of the GraphQL schema would be an issue in this case. Creating additional objects for it has its costs in terms of garbage collection, but I doubt that it will have a big impact, especially in comparison to the amount of data that we need to load from the database and the time it takes to load it.
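To illustrate, here is a minimal sketch of what generating a field per attribute definition could look like with sangria's `ObjectType`/`Field` API. The `AttributeDef` case class is a hypothetical stand-in for whatever shape the definitions have in your database, and every attribute is treated as a string for brevity:

```scala
import sangria.schema._

// Hypothetical shape of a custom attribute definition loaded from the database
case class AttributeDef(name: String)

// Build the product type dynamically: one String field per attribute definition
def productType(attrs: List[AttributeDef]): ObjectType[Unit, Map[String, String]] =
  ObjectType("Product", fields[Unit, Map[String, String]](
    attrs.map(a =>
      Field(a.name, StringType,
        resolve = (c: Context[Unit, Map[String, String]]) => c.value(a.name))): _*))

// Wrap it in an executable schema
def schemaFor(attrs: List[AttributeDef]): Schema[Unit, Map[String, String]] =
  Schema(ObjectType("Query", fields[Unit, Map[String, String]](
    Field("product", productType(attrs), resolve = _.value))))
```

The point is just that nothing here is special: the same `Field`/`ObjectType` constructors used for static schemas work fine when driven by data at runtime.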
At the moment this is just an assumption, which we need to verify first. But if it does indeed take too much time to load this data, there are a number of ways to optimize it. One possible solution is an in-memory LRU cache that builds the in-memory schema on demand and evicts it when it has been unused for some time, when the cache has grown too big, or when the generated parts of the schema have changed.
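As a rough sketch of that caching idea (not sangria-specific), one could lean on `java.util.LinkedHashMap` in access-order mode, which gives LRU eviction almost for free. The `build` function and the key type are assumptions; in practice the key would probably be a tenant id and `build` would load the definitions and construct the `Schema`:

```scala
import java.util.{LinkedHashMap => JLinkedHashMap}

// Minimal LRU cache sketch: access-ordered LinkedHashMap with size-based eviction.
class SchemaCache[K, V](maxSize: Int, build: K => V) {
  private val cache = new JLinkedHashMap[K, V](16, 0.75f, true) {
    override def removeEldestEntry(eldest: java.util.Map.Entry[K, V]): Boolean =
      size() > maxSize
  }

  def get(key: K): V = cache.synchronized {
    Option(cache.get(key)).getOrElse {
      val v = build(key) // e.g. load attribute defs and build the Schema
      cache.put(key, v)
      v
    }
  }

  // Called when a tenant's attribute definitions change
  def invalidate(key: K): Unit = cache.synchronized { cache.remove(key) }
}
```

Time-based expiry is left out here; a mature caching library would give you that as well.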
Another possible solution is to take advantage of the fact that it's Scala, which has a pretty rich ecosystem of nice, mature libraries. For example, one can use an akka cluster and dedicate a cluster singleton actor to every tenant of the system. These actors (or sets of actors) would encapsulate all of the interactions with the schema of a particular tenant. I think this kind of approach requires a bit of investment in infrastructure code, but in the long term it may provide a very scalable and flexible solution.
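A very rough sketch of the singleton idea with classic akka-cluster (the `TenantSchemaActor` and its message handling are entirely hypothetical; only `ClusterSingletonManager` is real akka API):

```scala
import akka.actor.{Actor, ActorRef, ActorSystem, PoisonPill, Props}
import akka.cluster.singleton.{ClusterSingletonManager, ClusterSingletonManagerSettings}

// Hypothetical actor that owns one tenant's schema. In a real implementation it
// would load the tenant's attribute definitions once, build the sangria Schema,
// and execute incoming queries against it.
class TenantSchemaActor(tenantId: String) extends Actor {
  def receive = {
    case query => sender() ! s"would execute $query against the schema of $tenantId"
  }
}

// One cluster-wide singleton per tenant, so the schema is built exactly once
def startTenantSingleton(system: ActorSystem, tenantId: String): ActorRef =
  system.actorOf(
    ClusterSingletonManager.props(
      singletonProps = Props(new TenantSchemaActor(tenantId)),
      terminationMessage = PoisonPill,
      settings = ClusterSingletonManagerSettings(system)),
    name = s"tenant-schema-$tenantId")
```

Schema invalidation then becomes a message to the singleton rather than shared mutable state.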
tl;dr: it is possible and very easy to do, but you need to measure the impact on performance (CPU utilization, garbage collection, etc.) and optimize if you see that it is necessary. But I guess this is not specific to sangria or Scala. I would take a very similar approach if I were implementing this kind of solution with the GraphQL reference implementation and Node.js.
Cheers,
Oleg