This question gets asked a lot, so I will try to answer it carefully. As a simple example to illustrate the principles involved, consider a constraint of the form
[expression] > 0
where [expression] could be any AMPL expression that has some variables in it. The question you need to ask yourself is: Is there a smallest positive value that [expression] can have? If there exists a smallest positive value V for [expression], then you can write the constraint as
[expression] >= V
But if there is no smallest positive value, then in general it will be possible to make the objective value better and better by moving the value of [expression] closer and closer to zero, without ever reaching a minimum; as a result, the optimization problem with the > constraint will not have a well-defined optimal solution.
As a specific example, consider
param N integer > 0;
param a {1..N} > 0;
var x {1..N} binary;
minimize obj: sum {j in 1..N} a[j] * x[j];
subject to con: sum {j in 1..N} x[j] > 0;
Because x is binary, the smallest positive value of sum {j in 1..N} x[j] is 1, and so you can replace "> 0" by ">= 1". However, if you instead define x as a continuous variable (replacing "binary" by ">= 0, <= 1"), then there is no smallest positive value, and there is also no minimum: for any positive value epsilon there is a feasible solution that makes the objective smaller than epsilon, but there is no feasible solution that makes the objective zero. You could try picking a positive epsilon and replacing "> 0" by ">= epsilon", but even then you could get an optimal solution in which all of the x[j] values are either zero or very close to zero.
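To make that concrete, here is how the constraint might be written in each case; the parameter name epsilon is introduced only for illustration, and its value is something you would have to choose for your own model:

# if x is declared binary, 1 is the smallest positive value of the sum:
subject to con: sum {j in 1..N} x[j] >= 1;

# if x is instead declared ">= 0, <= 1", only an approximation of "> 0" is possible:
param epsilon > 0;
subject to con: sum {j in 1..N} x[j] >= epsilon;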
A similar analysis can be made for any other > constraint, by subtracting the right-hand side from both sides to put it into the above form; and of course any < constraint can be reversed to put it into the form of a > constraint.
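For instance, suppose that you would like to impose a constraint such as the following, where b is just a hypothetical parameter introduced for illustration:

sum {j in 1..N} a[j] * x[j] > b

Subtracting b from both sides puts this into the form [expression] > 0, with [expression] being sum {j in 1..N} a[j] * x[j] - b. So if you know a smallest positive value V that this expression can take, you can declare

param b;
param V > 0;
subject to con2: sum {j in 1..N} a[j] * x[j] - b >= V;

and if no such V exists, the same difficulties described above apply.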
Bob Fourer