The full set of soft hinge constraints implicitly optimized by SampleRank leads to a convex learning problem. However, you are only ever sampling those constraints as the MCMC chain explores. The initialization of the weights shouldn't matter much, except in how it changes the implicit constraint-sampling "policy", so I wouldn't worry about weight initialization.
MCMC, of course, also has a starting point in terms of variable settings (distinct from the parameter initialization), and initializing this properly can have a big effect on how well SampleRank works in practice. I think people have had good luck initializing the chain at the ground-truth setting of the variables given by the labeled data, and then letting it wander from there for a while to gather constraints.
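To make the constraint-sampling idea concrete, here is a minimal sketch of one SampleRank-style update, with hypothetical names and a simplified hinge check (the real algorithm, e.g. in FACTORIE, uses an objective function over proposals rather than precomputed truth scores):

```python
import numpy as np

def sample_rank_step(weights, feats_current, feats_proposal,
                     truth_current, truth_proposal, lr=0.1):
    """One sketched SampleRank-style update (hypothetical API).

    feats_*: feature vectors of two consecutive MCMC states.
    truth_*: objective scores of each state against the labeled
    ground truth (e.g. accuracy of that variable setting).
    """
    delta = np.asarray(feats_proposal) - np.asarray(feats_current)
    model_diff = weights @ delta          # model's preference
    truth_diff = truth_proposal - truth_current  # objective's preference
    # If the objective prefers one state but the model ranks them
    # the other way (the hinge constraint is violated), nudge weights.
    if truth_diff > 0 and model_diff <= 0:
        weights = weights + lr * delta
    elif truth_diff < 0 and model_diff >= 0:
        weights = weights - lr * delta
    return weights

# Chain started at the ground truth: early proposals tend to move
# away from it, yielding informative ranking constraints.
w = sample_rank_step(np.zeros(2), [1.0, 0.0], [0.0, 1.0],
                     truth_current=0.9, truth_proposal=0.2)
```

Starting the chain at the ground truth means the first proposals are mostly degradations, so each accepted-vs-rejected pair supplies a clean ranking constraint like the one checked above.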
I haven't used SampleRank in a while though, so maybe others could chime in.