Hi,
Thanks for your reply.
You are correct. The size of S is about 10K and k can be as large as 10K too, but the number of unique values of k is about 500.
Best,
Behzad
for item in items:
    sites = item[0]        # site indices for this observation
    tDiff = item[1]
    d = item[2]
    t = item[3]
    tDiff_cdf = item[4]
    survived = item[5]     # index splitting the entries into two groups
    if tDiff_cdf.shape[0] == 0:
        continue
    beta_d = tDiff_cdf * beta_temp[d, :].T
    # Stack the decision variables for the sites in this observation
    sites_pi = cvx.vstack(*[pi[i] for i in sites])
    sites_alpha = cvx.vstack(*[alpha[i] for i in sites])
    # Contribution of the first `survived` entries
    loglikelihood += cvx.sum_entries(
        cvx.log(sites_pi[:survived])
        + cvx.log(sites_alpha[:survived] + t[:survived] * beta_temp[d, :].T)
        - (cvx.mul_elemwise(tDiff[:survived], sites_alpha[:survived]) + beta_d[:survived]))
    # Contribution of the remaining entries
    loglikelihood += cvx.sum_entries(
        cvx.log(sites_pi[survived:])
        - (cvx.mul_elemwise(tDiff[survived:], sites_alpha[survived:]) + beta_d[survived:]))
Hi,
Thanks for your reply. Can you elaborate a little on how, for example, vstack(x[2], x[1], x[3]) gets expanded?
From what you said, I gather that x[2] is replaced with a multiplication of a sparse matrix and the vector x. What happens next? One idea I had was to merge sparse matrices wherever it is possible to do so. This could start with merging matrices that appear in a sum, i.e. if two terms involve x, their corresponding sparse matrices get summed.
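To make sure I understand the idea, here is a small sketch of what I have in mind (this is just my own illustration with scipy.sparse, not CVXPY internals): stacking entries of x is the same as multiplying x by a sparse selection matrix, and two terms that both act on x can be merged by adding their sparse matrices first.

```python
import numpy as np
from scipy import sparse

x = np.array([10.0, 20.0, 30.0, 40.0])

# vstack(x[2], x[1], x[3]): row i of S has a single 1 in the column
# of the entry it selects, so S @ x = [x[2], x[1], x[3]].
rows = np.arange(3)
cols = np.array([2, 1, 3])
S = sparse.csr_matrix((np.ones(3), (rows, cols)), shape=(3, x.size))

stacked = S @ x  # -> [30., 20., 40.]

# Merging idea: if a sum contains two terms A @ x and B @ x, the
# sparse matrices can be added once, giving a single (A + B) @ x.
A = sparse.identity(4, format="csr")
B = S.T @ S                      # another sparse operator on x
merged = (A + B) @ x             # same result as A @ x + B @ x
```

So in the merged form only one sparse matrix-vector product is evaluated instead of two, which is what I was hoping could happen automatically for the terms in my sum.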
B.
Let me know when it is done, and I can test it with my code.