Is the optimization in the learning algorithm convex?

namrata ghadi

Jul 14, 2017, 2:04:56 PM
to Factorie
Hello,

I am trying to use SampleRank with GibbsSampling for learning. I wanted to know whether the optimization problem for learning the parameters of the function is convex.
If not, what would be a good way to give the algorithm different initial/starting points?

Thanks
Namrata

Luke Vilnis

Jul 14, 2017, 5:34:10 PM
to dis...@factorie.cs.umass.edu
Hi Namrata,

The full set of soft hinge constraints implicitly optimized by SampleRank leads to a convex learning problem. However, you are only sampling those constraints as you explore through MCMC. The initialization of the weights shouldn't matter much except in how it changes the implicit MCMC constraint-sampling "policy", so I wouldn't worry about weight initialization.
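To make that concrete: each sampled pair of configurations contributes one hinge constraint that is linear in the weights, so the implied objective is a sum of convex hinges. A rough, self-contained Scala sketch of a single constraint update (just the idea, not Factorie's actual implementation):

    // One SampleRank-style constraint: the objective (e.g. Hamming distance
    // to the truth) prefers `better` over `worse`, so we want
    //   w . (phi(better) - phi(worse)) >= margin,
    // and take a subgradient step on the hinge when it is violated.
    def sampleRankUpdate(w: Array[Double],
                         phiBetter: Array[Double],  // features of the preferred config
                         phiWorse: Array[Double],   // features of the other config
                         margin: Double = 1.0,
                         lr: Double = 0.1): Unit = {
      val diff  = phiBetter.indices.map(i => phiBetter(i) - phiWorse(i))
      val score = w.indices.map(i => w(i) * diff(i)).sum
      if (score < margin)                           // hinge active: update weights
        for (i <- w.indices) w(i) += lr * diff(i)
    }

Each such hinge is convex in w, so whichever subset of them MCMC happens to sample, you are still doing (stochastic) subgradient descent on a convex objective.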

MCMC, of course, does have a given starting point in terms of variable settings (as opposed to the parameter tensor initialization), and initializing this properly can have a big effect on how well SampleRank works in practice. I think people have had good luck initializing it at the ground-truth setting of the variables as given by your labeled data, and then letting the chain wander from there for a while to gather constraints.
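In code, the ground-truth initialization is just this (hypothetical Label type for illustration, not Factorie's API):

    // Start the chain at the gold values, then let the sampler wander from
    // there while it gathers SampleRank constraints.
    case class Label(var value: Int, target: Int)

    def initializeAtTruth(labels: Seq[Label]): Unit =
      labels.foreach(l => l.value = l.target)      // ground-truth starting point

After something like initializeAtTruth, you just let the chain wander and collect constraints from the proposals it makes along the way.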

I haven't used SampleRank in a while, though, so maybe others could chime in.

Best
Luke
