Hello,
My question is about running a scatter search algorithm with Python/Pyomo. The scatter search is done by fixing the value of a single variable, a threshold. I am running the 64-bit version of Pyomo and using the 64-bit version of Gurobi to solve the problem.
The example problem that I am trying to solve is an approximation of a multi-stage stochastic problem. The original problem has hourly granularity and needs at least 24 periods, so as a multi-stage stochastic problem it explodes very quickly. We generated a number of scenarios and approximate the problem with a rolling-horizon approach within each scenario.
In the instance that I am now running, there are 5 scenarios, and within each scenario there are 24 IPs that are solved sequentially. So to evaluate a single value of the threshold, I need to solve 120 IPs. I will refer to each set of 120 IPs as one evaluation.
It takes about 23 seconds to solve one evaluation, and I want to see if that time can be improved. The reason I think it can be is that, looking at the solver output, it finds the optimal solution of every IP at the root node (it doesn't even build a tree) in 0.01 seconds. Hence, 120 IPs should take about 1.2 seconds. I am guessing Pyomo or Gurobi or both spend a lot of time setting up the problem, feeding it to the solver, and getting the results back. Is there any way to cut down on that setup time? In your experience, which is more likely to be the bottleneck here: Pyomo or Gurobi?
More details about my algorithm: I create only one abstract model as a global variable. Before I solve the instance, I update the values of some parameters (Python constructs used as parameters in the model), create the instance from the global abstract model, and activate/deactivate some of the constraints. Does the fact that I create a new instance every time explain the long setup time? And can it be accelerated?
Thank you very much!
Goran
JP is right: profiling the script would be useful. That said, some possible options for speeding things up (depending on the details of your model) come to mind:
- use mutable Params for the parameter values that you expect to change. That way you only create the concrete model (from the abstract one) once, and then update the mutable params directly on the concrete model.
- use the gurobi_direct interface (i.e., the Gurobi Python bindings rather than the default LP-file interface).
john
I am using Gurobi. I believe John already recommended the gurobi_direct interface, and I do want to try it. I am not too familiar with it; can you point me in the right direction on how to implement it? Are you referring to the Gurobi/Python interface below?
I have also just tried creating one instance of the model, solving it, then updating the parameters and constraints and solving again, repeating this in a loop. Instead of creating a new instance at every iteration, this creates a single instance for the whole loop. However, the optimization results are not consistent with what I get when I create a fresh instance at every iteration: the results of the first solve match, but the subsequent ones do not.