Re: [Gurobi] Optimization solver in the whole software architecture

Zengjian Hu

Feb 21, 2013, 7:14:00 PM
to gur...@googlegroups.com
Hi, we have built optimization software here using Gurobi. Do you have a more specific question?

Jian

On Thu, Feb 21, 2013 at 12:58 PM, Huy Nhiem Nguyen <huynhie...@gmail.com> wrote:
Hi all, I am having a hard time figuring out how software developers and optimization engineers work together to build an optimization-based application. Could someone help me? Thank you


Tim Chippington Derrick

Feb 22, 2013, 5:10:30 AM
to gur...@googlegroups.com
I find it a strange question, but I have been doing exactly that for more than 20 years... so let me illustrate with a story. Please be patient and read it; hopefully, it will change how you think about using LP/MIP solvers...

Imagine you have a requirement to build a blending application for an industrial customer. I use this example because in many cases a blending problem is the first one people are taught when learning about LP and MIP modelling, and I have also done this for real for a customer making high-specification alloys. The requirement is for a real desktop application with a graphical user interface. The application needs to read the current inventory of raw materials: the weight available for each and its chemical composition, cost per kg, its weighing granularity (some are in fine-grain form like sand, some are like grit, some are in ingots of maybe 100 g or 1 kg), the origin of the material (not all clients can use materials from every source), the type of material such as whether it is new or recycled, and so on. We also need to read the list of batches of material to be produced, which includes the target weight and chemical composition of each batch.

All this data currently sits in the alloy blending company's databases (in the real case, they were held in SAP), so there needs to be some interface code to extract it. We wrote our code in C++ (we started in 1997, so C++ was an ideal choice at the time) and created C++ classes for the data that we read. For example, a raw material was modelled in memory by a class instance of type RawMaterial, which included the name, ID, weight available, date purchased, granularity, etc., and the chemistry for each material was modelled as a list of (element, proportion) pairs. The target material chemistry was specified by a list of tuples, like (element, min proportion, target proportion, max proportion). Also, for each target batch we have a list of the raw materials that are applicable (the list of applicable materials is a subset, because some are excluded for licensing reasons and some contain the wrong elements). The application's GUI allowed the user to do all the obvious things: browse the list of batches to be made, and for each one browse the list of available materials, and for each material browse the chemistry, etc. The user can also exclude materials from a batch, limit how much is used (so we may want to limit use of a raw material to not more than 100 kg), or force a material to be used (e.g. must use more than 50 kg of a particular raw material). Again, all of this is kept in our in-memory data.
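The in-memory classes described above might look something like the following sketch. The names and fields here are illustrative assumptions, not the original code:

```cpp
#include <string>
#include <utility>
#include <vector>

// Illustrative sketch of the application's in-memory data classes;
// names and fields are hypothetical, not the real system's code.
struct RawMaterial {
    std::string id;
    std::string name;
    double weightAvailableKg;
    double costPerKg;
    double granularityKg;  // smallest weighable increment (0 = continuous)
    // Chemical composition as (element, proportion) pairs, e.g. ("Cu", 0.02)
    std::vector<std::pair<std::string, double>> chemistry;
};

// Target chemistry for one element: (element, min, target, max) proportions.
struct ElementSpec {
    std::string element;
    double minProportion;
    double targetProportion;
    double maxProportion;
};

struct Batch {
    std::string id;
    double targetWeightKg;
    std::vector<ElementSpec> targetChemistry;
    std::vector<std::string> applicableMaterialIds;  // subset of all materials
};
```

Everything the GUI shows (and everything the model-building code later reads) lives in these plain data structures, with no solver types anywhere in sight.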

Also on the GUI there is a "Run" button, so that when the user has finished looking through the available materials, made some exclusions and/or forced some materials in, he or she can make the system find the cheapest set of raw materials to use. Up to this point, this is all completely standard software engineering, and nothing we have done makes any mention of using LP or MIP.

But given all the above data, it is easy to write code using the C++ API to create a suitable MIP model in the solver of choice (e.g. Gurobi) for the blending problem we have described. We just need to create the MIP model by iterating over the batches to be made; for each one we iterate over the available materials and create a modelling variable for how much of each material will be used (usually a continuous amount, but it may be in discrete amounts for ingots of raw material). Then we add a constraint that the total amount of all the materials used must equal the target batch weight. We also add the chemistry constraints: again iterating over all the applicable materials and all the elements in each material, we create expressions for how much of each element will be in the batch of target material, and add upper and lower bounds on each expression to match the target batch chemistry. We also add the other constraints, e.g. forcing or limiting the amount of each material in each target batch. All of this is still done in C++ using the API.

Then we tell the solver to solve the problem - it takes maybe 30 seconds for complex cases - and we extract the answers from the solver, again using the C++ API, and display the result for the user in the GUI as a table of materials and weights. Of course this looks like a spreadsheet, so the materials can be sorted by name, ID, cost, weight used, etc. We can also provide other detailed displays, e.g. the final chemistry, and highlight any elements which are near their upper or lower limits.
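The iteration described above can be sketched in a solver-agnostic way. In the real system each row built here would be handed straight to the solver's C++ API (e.g. one GRBModel::addConstr call per row); this stand-alone sketch, with hypothetical names and simplified data classes, just collects the constraint rows:

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// Minimal stand-ins for the application's data classes (hypothetical names).
struct RawMaterial {
    std::string id;
    std::vector<std::pair<std::string, double>> chemistry;  // (element, proportion)
};
struct ElementSpec { std::string element; double minP, maxP; };
struct Batch {
    double targetWeightKg;
    std::vector<ElementSpec> targetChemistry;
};

// One linear constraint row: lb <= sum(coeffs[var] * var) <= ub.
struct Row {
    std::map<std::string, double> coeffs;
    double lb, ub;
};

// Build the blending constraints for one batch by iterating over the
// applicable materials, exactly as described in the text.
std::vector<Row> buildBatchRows(const Batch& batch,
                                const std::vector<RawMaterial>& materials) {
    std::vector<Row> rows;

    // Total weight of all materials used must equal the target batch weight.
    Row total{{}, batch.targetWeightKg, batch.targetWeightKg};
    for (const auto& m : materials)
        total.coeffs["use_" + m.id] = 1.0;  // one variable per material
    rows.push_back(total);

    // Chemistry: for each element, the weight contributed by all materials
    // must fall within [min, max] proportion of the batch weight.
    for (const auto& spec : batch.targetChemistry) {
        Row chem{{}, spec.minP * batch.targetWeightKg,
                     spec.maxP * batch.targetWeightKg};
        for (const auto& m : materials)
            for (const auto& [elem, prop] : m.chemistry)
                if (elem == spec.element)
                    chem.coeffs["use_" + m.id] = prop;
        rows.push_back(chem);
    }
    return rows;
}
```

The point is that the modelling code is a thin layer of loops over the existing data structures; swapping in a different solver (or a different formulation) only touches this layer.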

Note that from an optimisation perspective this is really very close to a standard blending problem, but we are doing it embedded in an application with a lot more supporting code around it. The real application is probably more than 90% standard desktop GUI application software, and the C++ code that does the MIP modelling stuff with all the variables and constraints is probably less than 10% of the code lines. The customer just uses it as a tool that gives the right results - they don't care much how it works. It has been used for fourteen years now (we have of course done lots of updates and changes over the years) and is still in daily use on a number of sites, and is probably used for making hundreds of thousands of dollars' worth of materials every week. The customer likes it because we took the time to mould the way it works to fit in with their internal processes.

Note that we do these systems from the perspective of software engineering where the solver is just another software component. We often have not used abstract or algebraic modelling layers as they sometimes get in the way of achieving a good tight integration with the rest of the system. In effect we produce our own modelling language for each system, deeply embedded in the data structures of the software. The only use we have for things like LP and MPS files in systems like these is to see what went into the solver for debugging or diagnostics.

Please don't assume that I don't like modelling languages - I have used a number of them, including AMPL, MPL, OPL, GAMS, AIMMS, FlopC++. All have really good features, and are of great help for some stages of almost every problem, and can be used all the way for some problems.

Note that in cases like the one above, we were doing the job of software engineer and optimisation expert at the same time. If you want to think about separation of those roles, then you need to get the OR/opti experts to know what data they need to solve the problem, then the software people can build the system to find and expose that data so that it can be accessed by the opti experts. At the other side of the solver, the opti experts extract the answers from the solver and save those numbers etc back in the application so the software can use the answers and the GUI can display the results to the user. Actually, while exploring in the initial stages of a large project, using a high-level modelling language can be extremely helpful in clarifying what data is needed for a problem and what sorts of answers can be available; such understanding can greatly help both the software architects and engineers as well as the opti experts.

Tim

Zengjian Hu

Feb 22, 2013, 6:51:02 PM
to gur...@googlegroups.com
hi Huy,

We have Gurobi integrated in our application. We adopt a service-oriented architecture where our optimization engine is set up as a stand-alone Windows service. The optimization engine wraps Gurobi and also includes the logic for building models and interpreting results. Other pieces of the software talk to the optimization engine via service calls.
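A service boundary like this can be as thin as a request/response pair. The sketch below is purely hypothetical - not Jian's actual protocol - and just illustrates how the calling code stays completely ignorant of the solver behind the service:

```cpp
#include <sstream>
#include <string>

// Hypothetical wire format for an optimization service: the caller sends a
// batch id and target weight; the service replies with a status and cost.
// A real system would use JSON or protobuf over HTTP, or Windows service RPC.
std::string makeRequest(const std::string& batchId, double targetWeightKg) {
    std::ostringstream os;
    os << "OPTIMIZE " << batchId << ' ' << targetWeightKg;
    return os.str();
}

struct Response { std::string status; double totalCost; };

Response parseResponse(const std::string& wire) {
    std::istringstream is(wire);
    Response r;
    is >> r.status >> r.totalCost;
    return r;
}
```

Because the rest of the application only ever sees these messages, the solver, the model formulation, or even the whole engine host can change without touching the business-logic code.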

It is worth noting that Gurobi has recently rolled out a client-server version (last I heard, it was in alpha testing), which is fantastic. Had it come out earlier, we might just have installed Gurobi on some dedicated servers and put the rest of the business logic on other servers. I see two main advantages: 1) Gurobi is completely separated from the rest of the business logic. 2) Gurobi will take care of load balancing and failover, so you won't need to worry about them.

hope that helps,

Jian

Tim Chippington Derrick

Feb 24, 2013, 11:00:36 AM
to gur...@googlegroups.com
I totally agree about keeping the optimisation model from being pervasive throughout a large system, as that can lead to long-term maintenance issues. The case I described is a special one because the business and manufacturing process is *very* stable, so we have made only very minor changes to the modelling. In another system we delivered an optimisation engine as a service running under Windows and accessed from an application server; there the optimisation engine really had no user interface, and all feedback to users was done via a very simple protocol and updates to database tables. Again, this works fine (the customer has been running live like this for about four years).

The most important point is that optimisation software is just software - build the right architecture for the system, and fit the optimisation engine where it needs to be in that architecture. You just need the extra layers of structure and glue to hold it all together.

Tim

On 24 February 2013 10:46, Huy Nhiem Nguyen <huynhie...@gmail.com> wrote:
Hi Tim and Jian,

Your replies are very helpful. Thank you very much.

I think software engineers should develop and code everything and leave only the optimization modelling to the optimization experts. Unlike in Tim's story, there is a chance that I will need a different optimization model for the same type of data. In that case, it is best if I only have to change the optimization piece rather than the entire software. I think service calls are a good option in this case.

Thanks

Neo