I'm attempting to minimize a function which has both piecewise-linear components and quadratic components using Gurobi. The built-in setPWLObj() is really helpful and simple to use, except that setting a piecewise linear objective for a given variable overrides any other objective for that variable. This appears to be the case even if I attempt to use multi-objectives.
This presents an issue because my complete objective function is of the form x^T Q x + d \sum_i ||x[i] - x[i]^*|| + \sum_i P_i(x[i] - x[i]^0), where each P_i is piecewise linear. Explicitly, each component of my variable vector x affects the quadratic term, the linear term, and the piecewise-linear term of my function. My problem involves both box constraints and linear non-box constraints (e.g. \sum_i x[i] == 1), so Gurobi initially seemed like one of the best options out there. However, this inability to handle piecewise-linear objectives in tandem with other objectives may make Gurobi essentially unusable for me.
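To make the structure concrete, here is a small pure-Python sketch of the objective I described (the particular Q, d, x^*, x^0 values and P_i breakpoints would of course be my own data; everything here is just illustrative):

```python
# Sketch of the objective structure described above:
#   F(x) = x^T Q x + d * sum_i |x[i] - x_star[i]| + sum_i P_i(x[i] - x0[i])
# where each P_i is piecewise linear, given by sorted breakpoints ts and values vs.

def pwl_eval(ts, vs, t):
    """Evaluate the piecewise-linear function through points (ts[k], vs[k]) at t,
    extending the first/last segments linearly beyond the end breakpoints."""
    if t <= ts[0]:
        j = 0
    elif t >= ts[-1]:
        j = len(ts) - 2
    else:
        j = max(k for k in range(len(ts) - 1) if ts[k] <= t)
    slope = (vs[j + 1] - vs[j]) / (ts[j + 1] - ts[j])
    return vs[j] + slope * (t - ts[j])

def objective(x, Q, d, x_star, x0, pwl_list):
    """F(x) = quadratic term + weighted 1-norm distance term + sum of PWL terms."""
    n = len(x)
    quad = sum(x[i] * Q[i][j] * x[j] for i in range(n) for j in range(n))
    lin = d * sum(abs(x[i] - x_star[i]) for i in range(n))
    pwl = sum(pwl_eval(ts, vs, x[i] - x0[i]) for i, (ts, vs) in enumerate(pwl_list))
    return quad + lin + pwl
```

Each variable x[i] enters all three terms simultaneously, which is exactly why a per-variable setPWLObj() that overrides the rest of the objective does not fit.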
I believe the model you are trying to formulate is a non-convex minimization problem; I will outline my reasoning below. Nevertheless, I believe Gurobi could be a very good solver choice: using the MIP framework, one can formulate an approximating model of whatever quality is desired -- you can consult the example I alluded to previously -- and the solver typically handles the resulting model quite efficiently. An alternative would be to look for a solver that handles non-convex objectives natively, but this typically comes at the cost of losing much of the efficiency of MIP machinery. You may want to consult the NEOS Optimization Guide https://neos-guide.org/optimization-tree to explore this option further and identify a suitable solver (KNITRO, IPOPT, etc.). One additional thing to keep in mind when choosing a solver is whether you anticipate extending your model in the future with integrality or additional logic, for which MIP methodology is extremely well suited.

2) If p(t) is as above, you can think of p(t) as the limiting case of a sequence of smooth, twice-differentiable functions whose second derivative satisfies p'' >= 0 for t <= 0 (think of smoothing the breakpoints of the original p(t) ever so slightly). Now, with f(t) = t p(t), we can evaluate
f''(t) = t p''(t) + 2 p'(t)
and so (a) f'' <= 0 if t <= 0 and p(t) is decreasing (this corresponds to your graph in red), so f is concave there; and (b) if p(t) is increasing, f'' may or may not be positive. Moreover, noting that p'' is zero almost everywhere (since p was piecewise linear), one can deduce that f will be convex where p is increasing -- this corresponds to the decreasing portion of the graph in red (recall p is "flipped", as you put it).
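A quick numeric sanity check of this reasoning, taking f(t) = t p(t) (which is consistent with the f'' expression above) and made-up breakpoints for p with increasing slopes, so that the smoothed version has p'' >= 0:

```python
# Check: with f(t) = t * p(t) for a piecewise-linear p whose slopes increase
# at each breakpoint, f is concave where p is decreasing and t <= 0 (case a),
# and convex where p is increasing (case b). Breakpoints are illustrative only.

def pwl(ts, vs, t):
    """Evaluate the piecewise-linear function through points (ts[k], vs[k])."""
    j = max(k for k in range(len(ts) - 1) if ts[k] <= t) if t > ts[0] else 0
    s = (vs[j + 1] - vs[j]) / (ts[j + 1] - ts[j])
    return vs[j] + s * (t - ts[j])

def second_diffs(f, a, b, n=200):
    """Second differences of f on a uniform grid over [a, b]; all <= 0 iff
    concave on the grid, all >= 0 iff convex on the grid."""
    h = (b - a) / n
    return [f(a + (k - 1) * h) - 2 * f(a + k * h) + f(a + (k + 1) * h)
            for k in range(1, n)]

# Case (a): p decreasing on [-2, 0], slopes -2 then -1 (increasing slopes).
p_dec = lambda t: pwl([-2.0, -1.0, 0.0], [4.0, 2.0, 1.0], t)
f_dec = lambda t: t * p_dec(t)
assert all(d2 <= 1e-9 for d2 in second_diffs(f_dec, -2.0, 0.0))  # concave

# Case (b): p increasing on [0, 2], slopes 1 then 2.
p_inc = lambda t: pwl([0.0, 1.0, 2.0], [1.0, 2.0, 4.0], t)
f_inc = lambda t: t * p_inc(t)
assert all(d2 >= -1e-9 for d2 in second_diffs(f_inc, 0.0, 2.0))  # convex
```

On each linear piece of p we have f'' = 2 p' exactly, and at the breakpoints the jump in the slope of f has the sign of the jump in p, which is what the second differences pick up.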
3) So far we have that the f(t) term is concave for the blue graph, and only partly convex for the red graph. I would suggest that an efficient way to handle this situation is to construct a piecewise-linear approximation in both cases (a) and (b); in case (b) you may additionally be able to take advantage of the partially convex segment of f(t). The latter may not be worth the effort, though, depending on how many breakpoints the "decreasing" segment of p in red has (if there are more than a handful of such breakpoints, you may as well not bother).
4) Once you construct a piecewise-linear approximation to f(t), call it ff(t), you can introduce an auxiliary variable w with
w >= ff(t)
and simply add the respective w to the objective F(x) you aim to minimize.
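Putting 4) together in a small pure-Python sketch (breakpoints for p are made up): sample f(t) = t p(t) at the breakpoints of p, plus midpoints for accuracy, to obtain the breakpoint lists defining ff(t). Those two lists are exactly what you would hand to the solver-side PWL construct linking w and t (in gurobipy that should be addGenConstrPWL, if I recall the API correctly), after which w is added to the objective:

```python
# Build a piecewise-linear approximation ff(t) to f(t) = t * p(t) by sampling f
# at the breakpoints of p plus segment midpoints. Breakpoints are illustrative.

p_ts = [-2.0, -1.0, 0.0, 1.0, 2.0]
p_vs = [4.0, 2.0, 1.0, 2.0, 4.0]

def pwl(ts, vs, t):
    """Evaluate the piecewise-linear function through points (ts[k], vs[k])."""
    j = max(k for k in range(len(ts) - 1) if ts[k] <= t) if t > ts[0] else 0
    s = (vs[j + 1] - vs[j]) / (ts[j + 1] - ts[j])
    return vs[j] + s * (t - ts[j])

f = lambda t: t * pwl(p_ts, p_vs, t)

# Refine the grid with midpoints so ff tracks the curvature of f between kinks;
# add more sample points per segment if you need a tighter approximation.
ff_ts = sorted(set(p_ts) | {0.5 * (a + b) for a, b in zip(p_ts, p_ts[1:])})
ff_vs = [f(t) for t in ff_ts]
ff = lambda t: pwl(ff_ts, ff_vs, t)
```

Since ff is non-convex in general, the solver will model the w-vs-t link with binaries or SOS2 internally, which is the MIP cost I mentioned above.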
[You can probably plot a few figures of f(t) to confirm that this reasoning makes sense.]
Hope this helps, and please let me know if you have any further questions.