Optimizing a Loss with a Random Variable


Mariano

Jul 20, 2022, 2:23:57 AM
to TensorFlow Probability
Hi TensorFlow Probability,

        I am trying to solve an optimization problem involving two matrices X_t^f and Y_t^f, where t indexes types and f indexes features within each type. These are observed data given as input.

        I am sampling F relaxed Bernoulli variables, called d, and computing X_t^f * d_f and Y_t^f * d_f.

      Next, I build a correlation matrix C_{t_1, t_2} = Corr(X_{t_1}, Y_{t_2}) and compute a loss function that emphasizes the diagonal and suppresses the off-diagonal terms:

      L = \sum_t (1 - C_{t,t}) + \sum_{t_1 \neq t_2} C_{t_1, t_2}
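As a sanity check on the algebra, here is a tiny NumPy sketch (names are illustrative only, not from my actual code) that evaluates L on a toy correlation matrix:

```python
import numpy as np

def toy_loss(C):
    # L = sum_t (1 - C[t, t]) + sum_{t1 != t2} C[t1, t2]
    diag = np.diag(C)
    off_diag = C - np.diag(diag)   # zero the diagonal, keep the rest
    return np.sum(1.0 - diag) + np.sum(off_diag)

C = np.array([[1.0, 0.2],
              [0.1, 0.9]])
# (1 - 1.0) + (1 - 0.9) + 0.2 + 0.1, i.e. approximately 0.4
print(toy_loss(C))
```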

      Would it be possible to tell me whether you see any numerical errors, or any problems you foresee when optimizing this function?

      I am attaching an incomplete version of the code I am using to solve the problem (the complete version actually runs):

Sequence of function calls: optimize --> defined_loss --> correlation_loss

import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

def optimize():
   # `cardinality` (the number of features F) is defined elsewhere.
   logits = tf.Variable(10 * tf.ones(cardinality), name='logits')
   # Anneal the temperature downward; rebuild the distribution around the
   # shared logits each round and run 100 Adam steps on it.
   for temperature in np.arange(10., 0.5, -0.25):
       RBern = tfp.distributions.RelaxedBernoulli(
           tf.constant(temperature, dtype=tf.float32), logits=logits)
       loss_fn = lambda: defined_loss(RBern)
       losses = tfp.math.minimize(loss_fn=loss_fn,
                                  optimizer=tf.optimizers.Adam(learning_rate=0.1),
                                  num_steps=100)




def correlation_loss(relaxbern):
    ss = relaxbern.sample(10)                  # [samples, features]
    e_m = tf.einsum('bg,pg->bpg', ss, X_m)     # mask X_m's features per sample
    e_f = tf.einsum('bg,pg->bpg', ss, X_f)
    # Correlate over the feature axis: corr has shape [samples, types, types].
    corr = tfp.stats.correlation(e_m, e_f, sample_axis=2, event_axis=1)
    diagonal_part = tf.linalg.diag_part(corr)
    # Zero the diagonal to isolate the off-diagonal terms.
    off_diagonal = corr - tf.linalg.diag(diagonal_part)

    diag_corr_score = tf.reduce_sum(diagonal_part, axis=1)
    off_diag_corr_score = tf.reduce_sum(tf.abs(off_diagonal), axis=[1, 2])

    return diag_corr_score, off_diag_corr_score
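To double-check the off-diagonal extraction: the operation I intend is simply the correlation matrix minus its own diagonal. In NumPy terms (illustrative only):

```python
import numpy as np

corr = np.array([[1.0, 0.3],
                 [0.4, 1.0]])
off_diagonal = corr - np.diag(np.diag(corr))   # zero the diagonal
# off_diagonal is [[0.0, 0.3], [0.4, 0.0]]
print(off_diagonal)
```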

def defined_loss(relaxbern):
    diag_corr_score, off_diag_corr_score = correlation_loss(relaxbern)
    # Reward the diagonal, penalize the off-diagonal; sum over samples.
    return tf.reduce_sum(-diag_corr_score + off_diag_corr_score, axis=0)




       

                       