I find it a strange question, but I have been doing that for more than 20 years.... so let me illustrate with a story. Please be patient and read it - hopefully, the effect will change how you think about using LP/MIP solvers...
Imagine you have a requirement to build a blending application for an industrial customer. I use this example because in many cases a blending problem is the first one people are taught when learning about LP and MIP modelling. I have also done this for real, for a customer making high-specification alloys. The requirement is for a real desktop application with a graphical user interface. The application needs to read the current inventory of raw materials: for each material, the weight available, its chemical composition, its cost per Kg, its weighing granularity (some are in fine-grain form like sand, some are like grit, some are in ingots of maybe 100g or 1Kg), the origin of the material (not all clients can use materials from every source), the type of material (such as whether it is new or recycled), and so on. We also need to read the list of batches of material to be produced, which includes the target weight and chemical composition for each batch.
All this data currently sits in the alloy blending company's databases (in the real case, they were held in SAP), so there needs to be some interface code to extract it. We wrote our code in C++ (we started in 1997, so C++ was an ideal choice at the time), and created C++ classes for the data we read. For example, a raw material was modelled in memory by a class instance of type RawMaterial, which included the name, ID, weight available, date purchased, granularity, etc., and the chemistry of each material was modelled as a list of (element, proportion) pairs. The target material chemistry was specified by a list of tuples of the form (element, min proportion, target proportion, max proportion). For each target batch we also have the list of raw materials that are applicable (a subset of all materials, because some are excluded for licensing reasons and some contain the wrong elements). The application's GUI allowed the user to do all the obvious things: browse the list of batches to be made, for each batch browse the list of available materials, for each material browse its chemistry, and so on. The user can also exclude materials from a batch, limit how much of a material is used (e.g. not more than 100Kg), or force a material to be used (e.g. must use more than 50Kg of a particular raw material). Again, all of this is kept in our in-memory data.
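To make the shape of that in-memory data concrete, here is a minimal sketch of what such classes might look like. The type and field names are illustrative assumptions on my part, not the actual production code:

```cpp
#include <string>
#include <vector>

// One (element, proportion) pair of a material's chemistry.
struct ChemEntry {
    std::string element;    // e.g. "Cu"
    double proportion;      // fraction of the material's weight
};

// A raw material as read from the company's databases.
struct RawMaterial {
    std::string id;
    std::string name;
    double weightAvailableKg;
    double costPerKg;
    double granularityKg;   // 0 for sand-like material, e.g. 0.1 or 1.0 for ingots
    std::vector<ChemEntry> chemistry;
};

// Target chemistry for one element: (element, min, target, max) proportions.
struct TargetChem {
    std::string element;
    double minProportion;
    double targetProportion;
    double maxProportion;
};

// A batch to be produced, with its applicable subset of raw materials.
struct Batch {
    std::string id;
    double targetWeightKg;
    std::vector<TargetChem> chemistry;
    std::vector<const RawMaterial*> applicableMaterials;
};
```

Plain value types like these are easy to populate from the database interface code and to iterate over later when building the solver model.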
Also on the GUI there is a "Run" button, so that when the user has finished looking through the available materials, made any exclusions, and/or forced some materials in, he or she can make the system find the cheapest set of raw materials to use. Up to this point, this is all completely standard software engineering, and nothing we have done makes any mention of using LP or MIP.

But given all the above data, it is easy to write code using the C++ API to create a suitable MIP model in the solver of choice (e.g. Gurobi) for the blending problem we have described. We create the MIP model by iterating over the batches to be made; for each one we iterate over the available materials and create a modelling variable for how much of each material will be used (usually a continuous amount, but possibly a discrete amount for materials in ingots). Then we add a constraint that the total weight of all the materials used must equal the target batch weight. We also add the chemistry constraints: iterating over all the applicable materials and all the elements in each material, we create expressions for how much of each element will end up in the batch, and add upper and lower bounds on each expression to match the target batch chemistry. We also add the other constraints, e.g. forcing or limiting the amount of each material in each target batch. All of this is still done in C++ using the API.

Then we tell the solver to solve the problem - it takes maybe 30 seconds for complex cases - and we extract the answers from the solver, again using the C++ API, and display the result for the user in the GUI as a table of materials and weights. Of course this looks like a spreadsheet, so the materials can be sorted by name, ID, cost, weight used, etc. We can also provide other detailed displays, e.g. the final chemistry, with any elements that are near their upper or lower limits highlighted.
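The model-building loop described above can be sketched with the Gurobi C++ API roughly as follows, for a single batch. The data types and field names are illustrative assumptions (not the real application's classes), the forced/limited-use constraints and the loop over batches are omitted for brevity, and error handling is left out; treat it as a sketch of the technique, not a finished implementation:

```cpp
#include "gurobi_c++.h"
#include <string>
#include <vector>

// Illustrative data types (assumptions, not the real application's classes).
struct ChemEntry   { std::string element; double proportion; };
struct TargetChem  { std::string element; double minProp, maxProp; };
struct RawMaterial { std::string id; double weightAvailableKg, costPerKg,
                     granularityKg; std::vector<ChemEntry> chemistry; };
struct Batch       { double targetWeightKg; std::vector<TargetChem> chemistry;
                     std::vector<const RawMaterial*> applicableMaterials; };

void solveBatch(const Batch& batch) {
    GRBEnv env;
    GRBModel model(env);

    // One variable per applicable material: kg of it used in this batch.
    // The cost per kg goes straight into the (minimised) objective.
    std::vector<GRBVar> useKg;
    for (const RawMaterial* m : batch.applicableMaterials) {
        GRBVar u = model.addVar(0.0, m->weightAvailableKg, m->costPerKg,
                                GRB_CONTINUOUS, "use_" + m->id);
        if (m->granularityKg > 0.0) {
            // Ingot-like material: tie the kg used to an integer ingot count.
            GRBVar n = model.addVar(0.0, GRB_INFINITY, 0.0, GRB_INTEGER,
                                    "ingots_" + m->id);
            model.addConstr(u == m->granularityKg * n);
        }
        useKg.push_back(u);
    }

    // Total weight used must equal the target batch weight.
    GRBLinExpr total = 0.0;
    for (const GRBVar& u : useKg) total += u;
    model.addConstr(total == batch.targetWeightKg, "target_weight");

    // Chemistry: bound the kg of each element in the finished batch.
    for (const TargetChem& t : batch.chemistry) {
        GRBLinExpr elemKg = 0.0;
        for (size_t i = 0; i < useKg.size(); ++i)
            for (const ChemEntry& c : batch.applicableMaterials[i]->chemistry)
                if (c.element == t.element)
                    elemKg += c.proportion * useKg[i];
        model.addConstr(elemKg >= t.minProp * batch.targetWeightKg);
        model.addConstr(elemKg <= t.maxProp * batch.targetWeightKg);
    }

    model.write("blend.lp");  // LP dump, useful only for debugging/diagnostics
    model.optimize();

    if (model.get(GRB_IntAttr_Status) == GRB_OPTIMAL) {
        for (size_t i = 0; i < useKg.size(); ++i) {
            double kg = useKg[i].get(GRB_DoubleAttr_X);
            (void)kg;  // hand (material id, kg) back to the GUI layer for display
        }
    }
}
```

Note how little of this is "optimisation" code: the loops just walk the same in-memory data structures the GUI uses, which is exactly the tight integration I am describing.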
Note that from an optimisation perspective this is really very close to a standard blending problem; but we are doing it embedded in an application with a lot more supporting code around it. The real application is probably more than 90% standard desktop GUI application software, and the C++ code that does the MIP modelling stuff with all the variables and constraints is probably less than 10% of the code lines. The customer just uses it as a tool that gives the right results - they don't care much how it works. It has been used for fourteen years now (we have of course done lots of updates and changes over the years) and is still in daily use on a number of sites, and is probably used for making hundreds of thousands of dollars worth of materials every week. The customer likes it because we took the time to mould the way it works to fit in with their internal processes.
Note that we build these systems from the perspective of software engineering, where the solver is just another software component. We have often not used abstract or algebraic modelling layers, as they sometimes get in the way of achieving a good, tight integration with the rest of the system. In effect we produce our own modelling language for each system, deeply embedded in the data structures of the software. The only use we have for things like LP and MPS files in systems like these is to see what went into the solver, for debugging or diagnostics.
Please don't assume that I don't like modelling languages - I have used a number of them, including AMPL, MPL, OPL, GAMS, AIMMS, FlopC++. All have really good features, and are of great help for some stages of almost every problem, and can be used all the way for some problems.
Note that in cases like the one above, we were doing the jobs of software engineer and optimisation expert at the same time. If you want to separate those roles, then you need the OR/optimisation experts to work out what data they need to solve the problem; the software people can then build the system to find and expose that data so that it can be accessed by the opti experts. At the other side of the solver, the opti experts extract the answers from the solver and save those numbers back into the application, so the software can use the answers and the GUI can display the results to the user.

Actually, while exploring in the initial stages of a large project, using a high-level modelling language can be extremely helpful in clarifying what data is needed for a problem and what sorts of answers can be available; such understanding can greatly help the software architects and engineers as well as the opti experts.
Tim