GAN for one-hot sequence data


Matthew Pocock

May 6, 2020, 3:54:36 PM
to Keras-users
Hi - I'm developing a GAN to (re)construct letter sequences. I'm encoding the letters as one-hot features, so for a string of length n over an alphabet of size a, I have n vectors, each of length a. In the training data each a-vector is one-hot. The generator produces new n-mers of unit vectors of length a, and the discriminator attempts to tell whether an n-mer is real or generated.
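For concreteness, here's a minimal numpy sketch of the encoding described above (the function name and toy alphabet are my own, just for illustration):

```python
import numpy as np

def one_hot_encode(seq, alphabet):
    """Encode a letter sequence as an (n, a) array of one-hot rows."""
    index = {c: i for i, c in enumerate(alphabet)}
    out = np.zeros((len(seq), len(alphabet)), dtype=np.float32)
    for pos, letter in enumerate(seq):
        out[pos, index[letter]] = 1.0
    return out

encoded = one_hot_encode("GATTACA", "ACGT")
print(encoded.shape)        # (7, 4): n vectors, each of length a
print(encoded.sum(axis=1))  # each row sums to 1 (exactly one-hot)
```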

The problem I'm having is that the generator is fooling the discriminator by modelling high-scoring *distributions* in the a-vectors, rather than picking vectors peaked around a single high value with lots of low ones. If the training examples have 20% 'A', then the generator tends to produce a-vectors where the 'A' feature is 0.2, rather than producing 20% of a-vectors where the 'A' feature is 1.
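The failure mode is easy to see in a toy numpy example: a degenerate generator that emits the marginal letter frequencies in every row matches the real data's per-feature statistics exactly, even though no individual row looks like a real one-hot vector (the alphabet size and frequencies below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
a = 4                                    # toy alphabet size
n_samples = 10_000
freqs = np.array([0.2, 0.3, 0.4, 0.1])   # e.g. 20% 'A'

# Real data: one-hot rows sampled with these letter frequencies.
real = np.eye(a)[rng.choice(a, size=n_samples, p=freqs)]

# Degenerate generator: emits the marginal distribution every time.
fake = np.tile(freqs, (n_samples, 1))

# Feature-wise means are (nearly) identical, so a discriminator that
# only picks up per-feature statistics is fooled...
print(real.mean(axis=0))  # ~ [0.2, 0.3, 0.4, 0.1]
print(fake.mean(axis=0))  # exactly [0.2, 0.3, 0.4, 0.1]

# ...even though the per-row shapes are completely different:
print(real.max(axis=1).mean())  # 1.0 (every real row is peaked)
print(fake.max(axis=1).mean())  # 0.4 (every fake row is flat)
```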

I've added an activity regularizer to the last layer of the generator and am slowly ramping up its strength - currently activity_regularizer=regularizers.l1(0.05) - which is improving the situation, but perhaps there is a better way. This didn't happen when I did similar things with an autoencoder, because at some point the generated a-vectors are compared against training examples containing actual 1s and 0s, which sharpens the distributions without any need for regularization.
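As a sanity check on why the L1 penalty helps here, assuming the generator's a-vectors really are unit (L2-normalized) vectors as described: at fixed L2 norm, the L1 sum is smallest when the mass sits on a single coordinate, so the penalty favors peaked rows. A minimal numpy sketch (the `penalty` helper just mirrors what regularizers.l1(0.05) sums per vector):

```python
import numpy as np

# Two unit-length (L2-normalized) a-vectors over a 4-letter alphabet:
peaked = np.array([1.0, 0.0, 0.0, 0.0])   # one-hot-like
spread = np.array([0.5, 0.5, 0.5, 0.5])   # evenly spread

assert np.isclose(np.linalg.norm(peaked), 1.0)
assert np.isclose(np.linalg.norm(spread), 1.0)

# L1 activity penalty, per vector, at strength 0.05:
penalty = lambda v: 0.05 * np.abs(v).sum()
print(penalty(peaked))  # 0.05 -> cheapest possible at unit L2 norm
print(penalty(spread))  # 0.1  -> spreading the mass costs more
```

One caveat: if the final layer were softmax instead, every output would sum to exactly 1 and the L1 penalty would be constant, so this trick only bites when the vectors are normalized some other way (e.g. to unit L2 length).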

Perhaps there's a standard way to do this -- my Google searches haven't hit anything, so maybe I'm not searching for the right terms.

Thanks,
Matthew