How about using random objective functions, for instance, linear objective functions with random coefficients? If that doesn't get you enough different solutions, then add (bound or linear) constraints on subsequent solves to exclude previous solutions. How well this works depends on the problem type and how you generate the random coefficients. There are also "semi-random" things you can do, such as minimizing or maximizing one variable at a time, or the sum of two at a time, and so on. Of course, if you don't impose constraints to exclude previous solutions, you may wind up getting some duplicate solutions.
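To make the exclusion idea concrete, here is a minimal sketch for the 0-1 case (where exclusion constraints are easiest to state), using SciPy's `milp`. A standard "no-good" cut forbids each previously found binary point on later solves; the constraint data and the number of solves are invented purely for illustration:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(0)
n = 4  # number of 0-1 variables (illustrative)

# Illustrative feasible set: binary x with x1 + x2 + x3 + x4 <= 3.
constraints = [LinearConstraint(np.ones((1, n)), -np.inf, 3.0)]

solutions = []
for _ in range(5):
    c = rng.uniform(-1.0, 1.0, size=n)  # random objective coefficients
    res = milp(c, constraints=constraints,
               integrality=np.ones(n), bounds=Bounds(0, 1))
    if not res.success:
        break  # every remaining solution has been excluded
    x = np.round(res.x).astype(int)
    solutions.append(x)
    # "No-good" cut forbidding exactly this 0-1 point on later solves:
    # the sum of y_i over the zero coordinates of x, plus (1 - y_i) over
    # the one coordinates, must be at least 1, i.e. a @ y >= 1 - sum(x).
    a = (1.0 - 2.0 * x).reshape(1, -1)  # +1 where x_i = 0, -1 where x_i = 1
    constraints.append(LinearConstraint(a, 1.0 - x.sum(), np.inf))

for s in solutions:
    print(s)
```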
I have used such a technique to find different feasible starting points for a nonlinear optimizer on non-convex, linearly constrained nonlinear optimization problems. I solve linear programs (LPs) having the same constraints as the nonlinear problem, using randomly generated coefficients for the LP objective function. It was far better than using the optimizer's built-in way of generating random starting points.
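A minimal sketch of that workflow in Python with SciPy, where the feasible region and the non-convex objective are made-up stand-ins for the real problem's data: solve LPs with random objective coefficients over the problem's linear constraints, keep the distinct vertices, and multistart the nonlinear solver from them.

```python
import numpy as np
from scipy.optimize import linprog, minimize

rng = np.random.default_rng(42)

# Stand-ins for the nonlinear problem's linear constraints A x <= b
# and simple bounds; the real data would go here.
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])
b = np.array([8.0, 4.0])
bounds = [(0.0, 10.0), (0.0, 10.0)]

# Step 1: solve LPs with random objective coefficients over the same
# constraints; each draw tends to land on a different vertex.
starts = []
for _ in range(25):
    c = rng.uniform(-1.0, 1.0, size=2)
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds)
    if res.success and all(np.linalg.norm(res.x - p) > 1e-6 for p in starts):
        starts.append(res.x)

# Step 2: multistart the nonlinear solver from each distinct point
# (a made-up non-convex objective stands in for the real one).
def f(x):
    return np.sin(3.0 * x[0]) * np.cos(2.0 * x[1]) + 0.1 * x @ x

lin = {"type": "ineq", "fun": lambda x: b - A @ x}  # A x <= b as fun >= 0
results = [minimize(f, x0, bounds=bounds, constraints=[lin]) for x0 in starts]
best = min(results, key=lambda r: r.fun)
print("best local optimum found:", best.x, best.fun)
```

Because an LP solver returns extreme points of the feasible polyhedron, the random objectives tend to spread the starting points across the boundary of the feasible region rather than clustering them.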