Hello! I've been playing around with T2T for a while now, and I'm still getting my head around some of the finer points of the framework, but I've come up against something where I'm not sure how to start:
I want to do some conditional language modeling: say I have text that comes from a number of different stories or 'universes' -- e.g. a set of texts from Harry Potter, a set of texts from Lord of the Rings, etc. I have a few related questions, which I'll number for clarity:
1) How would I go about training a model that can generate text conditioned on some user-controllable 'universe' vector?
2) Each text comes from only one universe (e.g. no text is both Harry Potter and Lord of the Rings), but it would also be very interesting to be able to specify multiple universes at generation time. I assume that would be trivial if the conditioning vector were just a one-hot (or multi-hot) vector, but there are hundreds/thousands of universes in the dataset, and I'm not sure whether a conditioning vector that large would make the model worse.
3) I also think it would be interesting to learn a sort of 'universe embedding', so that the model can also pick up on the styles of nearby 'universes'.
4) There are additional parameters it would be good to condition on as well (e.g. the writing has maturity 'ratings') -- is it possible to condition on multiple parameters at once?
5) Some of the conditions are more or less important than others (e.g. 'universe' matters much more than the maturity rating) -- is it possible to weight the strength of each condition when training/generating?
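To make questions 2, 3, and 5 a bit more concrete, here's a rough NumPy sketch of the kind of conditioning I have in mind. All the names, sizes, and weights are made up, and the tables are just random arrays here -- in a real model they'd be trainable embedding variables looked up by integer id instead of multiplying a huge one-hot vector:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- the real dataset would determine these.
NUM_UNIVERSES = 1000   # hundreds/thousands of universes
NUM_RATINGS = 5        # maturity ratings
EMBED_DIM = 64         # size of each learned embedding

# Stand-ins for learned embedding tables (question 3): one row per
# universe / rating, so the model never sees a giant one-hot vector.
universe_table = rng.normal(size=(NUM_UNIVERSES, EMBED_DIM))
rating_table = rng.normal(size=(NUM_RATINGS, EMBED_DIM))

def conditioning_vector(universe_ids, rating_id,
                        universe_weight=1.0, rating_weight=0.25):
    """Build one conditioning vector from several conditions.

    universe_ids: one or more universe ids; averaging their embeddings
    is one way to 'mix' universes at generation time (question 2),
    even though training only ever sees a single id.
    The scalar weights let some conditions count for more than
    others (question 5).
    """
    universe_vec = universe_table[universe_ids].mean(axis=0)
    rating_vec = rating_table[rating_id]
    return universe_weight * universe_vec + rating_weight * rating_vec

# Single-universe conditioning, as it would look during training.
cond = conditioning_vector([42], rating_id=2)

# Mixing two universes at generation time.
mixed = conditioning_vector([42, 7], rating_id=2)
```

This vector would then get fed into the model somehow (added to the input embeddings, concatenated, etc.) -- that part is exactly what I'm unsure how to wire up in T2T.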
Thanks in advance! I'll be trying to figure this out on my own and will post anything I figure out in this thread.