Basic Solutions in Linear Programming


Klacee Sawatzky

Aug 5, 2024, 12:14:19 AM
to xemerriaprop
I've just started learning linear programming, and I've run into a question about something that isn't mentioned in the first chapter (even though we're supposed to answer these questions based on the first chapter alone).

But the question asks: "indicate EACH basic solution, and determine which are feasible and which are infeasible", and I don't see how the above quote says anything about several basic solutions. Surely there is only one way to set all the variables to zero, and hence only one basic solution?


I don't understand what basic and non-basic variables are, why we single them out, what they have to do with the rank of the coefficient matrix and of the augmented matrix, why some treatments work with a linearly independent set of columns corresponding to the decision variables, and why others compute the determinant of the coefficient matrix.


So in such a problem, you usually want the extreme points of the feasible region, because it is at the corner points of this multidimensional object that the objective function (what the equations represent and are modelling) attains its maximum or minimum. Looking at the algebra: since you have a matrix/system of equations, there is an assignment of variables (from the equation standpoint) or a solution vector (from the matrix standpoint) that gives the coordinates of such an extreme point. Some of these variables you set equal to zero, and the rest you solve for from the remaining square system. The ones set equal to zero are the "non-basic" variables; any variable not set to zero is a "basic" variable.
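To make that concrete, here is a small sketch (the toy system and function names are my own, not from the exercise) that enumerates every basic solution of a standard-form system Ax = b by choosing each set of m columns, setting the other variables to zero, and solving:

```python
# Enumerate every basic solution of A x = b (A is m x n with rank m):
# pick m columns, set the other variables to zero, and solve for the rest.
from fractions import Fraction
from itertools import combinations

def solve_square(M, rhs):
    """Exact Gauss-Jordan elimination; returns None if M is singular."""
    m = len(M)
    aug = [[Fraction(v) for v in row] + [Fraction(r)] for row, r in zip(M, rhs)]
    for col in range(m):
        piv = next((r for r in range(col, m) if aug[r][col] != 0), None)
        if piv is None:
            return None  # chosen columns are linearly dependent
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(m):
            if r != col and aug[r][col] != 0:
                f = aug[r][col] / aug[col][col]
                aug[r] = [a - f * p for a, p in zip(aug[r], aug[col])]
    return [aug[r][m] / aug[r][r] for r in range(m)]

def basic_solutions(A, b):
    m, n = len(A), len(A[0])
    out = []
    for basis in combinations(range(n), m):
        xb = solve_square([[A[i][j] for j in basis] for i in range(m)], b)
        if xb is None:
            continue  # not a basis: columns dependent
        x = [Fraction(0)] * n
        for j, v in zip(basis, xb):
            x[j] = v
        out.append((basis, x, all(v >= 0 for v in x)))
    return out

# toy system: x1 + x2 + s1 = 4,  x1 + 3*x2 + s2 = 6
A = [[1, 1, 1, 0], [1, 3, 0, 1]]
b = [4, 6]
for basis, x, feas in basic_solutions(A, b):
    print(basis, [str(v) for v in x], "feasible" if feas else "infeasible")
```

For this toy system every choice of two columns happens to be invertible, so there are six basic solutions, of which four are feasible; in general some choices are skipped because their columns are linearly dependent.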


There is no routine to do that in glpk, and IMHO it is very unlikely that any real-world solver implements something like it, since it is not very useful in practice and it is certainly not a simple problem.


Consider an LP whose domain has dimension n; the set S of optimal solutions is a convex polyhedron whose dimension m can be anything from 0 to n-1. You want a method to list all the basic solutions of the problem, that is, all the vertices of S: as soon as m is greater than 2, you will need to carefully avoid cycling as you move from one basic solution to another.


You can (probably) use the glpk simplex routines. Once you have an optimal BFS, you can get the reduced cost associated with each non-basic column using the routine glp_get_row_dual (the basis status of the variable can be obtained with glp_get_row_stat), so you can find a non-basic variable with a null reduced cost. Then, I think you can use the function glp_set_row_stat to change the basis status of this column so that it enters the basis. (You can then iterate this process, as long as you avoid cycling.)


Note that I have not tried any of these solutions myself; I think the first one is by far the best, although it will require that you learn the PPL API. If you wish to go for the second one, I strongly suggest that you send an e-mail to the glpk maintainer (or look at the source code), because I am really not sure it will work as is.


We say that \(\mathbf{x}^* \in \mathbb{R}^n\) is a basic solution to the system \(\mathbf{A}\mathbf{x} = \mathbf{b}\) if \(\mathbf{A}\mathbf{x}^* = \mathbf{b}\) and there exists a basis \(B\) such that \(x^*_j = 0\) for all \(j \notin B\);


We say that the basis \(B\) determines the basic solution \(\mathbf{x}^*\). Observe that there cannot be two different basic solutions determined by the same basis. This is because if \(\mathbf{x}^*\) is determined by \(B\), then \(\sum_{j \in B} x^*_j \mathbf{A}_j = \mathbf{b}\), where \(\mathbf{A}_j\) denotes the \(j\)th column of \(\mathbf{A}\). Hence, the values \(x^*_j\), \(j \in B\), are uniquely determined, since the \(m\) columns \(\mathbf{A}_j\), \(j \in B\), are linearly independent.


Proposition 4.1. Let \(\mathbf{A} \in \mathbb{R}^{m \times n}\) with \(\operatorname{rank}(\mathbf{A}) = m\) and let \(\mathbf{b} \in \mathbb{R}^m\). Let \(\mathbf{x}^*\) be a solution to the system \(\mathbf{A}\mathbf{x} = \mathbf{b}\). Let \(J = \{ i \in \{1, \ldots, n\} : x^*_i \neq 0 \}\). Prove that \(\mathbf{x}^*\) is a basic solution to \(\mathbf{A}\mathbf{x} = \mathbf{b}\) if and only if the columns of \(\mathbf{A}\) indexed by \(J\) are linearly independent.


Suppose first that \(\mathbf{x}^*\) is a basic solution determined by a basis \(B\). Clearly \(J \subseteq B\), since all nonbasic variables are 0. By definition, the columns of \(\mathbf{A}\) indexed by \(B\), and hence those indexed by \(J\), are linearly independent.


Conversely, suppose that the columns of \(\mathbf{A}\) indexed by elements of \(J\) are linearly independent. As \(\operatorname{rank}(\mathbf{A}) = m\), we have \(|J| \leq m\). If \(|J| = m\), then \(J\) is a basis determining the basic solution \(\mathbf{x}^*\). Otherwise, by a standard result in linear algebra, the columns indexed by elements of \(J\) can be extended to a set of \(m\) linearly independent columns of \(\mathbf{A}\); the \(m\) indices of these columns form a basis determining the basic solution \(\mathbf{x}^*\). Hence, \(\mathbf{x}^*\) is a basic solution.
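As a quick numerical sanity check of Proposition 4.1 (the system and the two candidate solutions below are my own, not from the text):

```python
# Sanity check of Proposition 4.1 on a made-up system: a solution is basic
# exactly when the columns indexed by its nonzero entries are independent.
from fractions import Fraction

def independent(vectors):
    """Exact linear-independence test by row reduction."""
    rows = [[Fraction(v) for v in vec] for vec in vectors]
    rank = 0
    for col in range(len(rows[0])):
        piv = next((r for r in range(rank, len(rows)) if rows[r][col] != 0), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col] != 0:
                f = rows[r][col] / rows[rank][col]
                rows[r] = [a - f * p for a, p in zip(rows[r], rows[rank])]
        rank += 1
    return rank == len(rows)

A = [[1, 1, 1, 0],
     [1, 1, 0, 1]]           # rank 2; the first two columns are equal
b = [4, 4]

def support_columns(x):
    return [[A[i][j] for i in range(len(A))] for j, v in enumerate(x) if v != 0]

x_dep = [2, 2, 0, 0]  # solves A x = b, but support columns (1,1),(1,1) are dependent
x_ind = [4, 0, 0, 0]  # solves A x = b, single support column (1,1) is independent
print(independent(support_columns(x_dep)))  # False -> not a basic solution
print(independent(support_columns(x_ind)))  # True  -> a basic solution
```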


The simplex algorithm walks greedily along the corners of a polytope to find the optimal solution to the linear programming problem, so its answer is always a corner of the polytope. Interior point methods walk through the inside of the polytope. As a result, when a whole face of the polytope is optimal (when the objective function is exactly parallel to that face), we can get a solution in the middle of this face.


Suppose that we want to find a corner of the polytope instead. For example, if we want to do maximum matching by reducing it to linear programming, we don't want an answer like "the matching contains the edge XY with value 0.34 and the edge AB with value 0.89 and ...". We want an answer with 0's and 1's (which simplex would give us, since all corners consist of 0's and 1's). Is there a way to do this with an interior point method that is guaranteed to find exact corner solutions in polynomial time? (For example, perhaps we can modify the objective function to favor corners.)


While the question in general makes sense, it's odd that you pick maximum matching as an example, because there are many combinatorial algorithms (max-flow for maximum-cardinality bipartite matching, Edmonds' algorithm for non-bipartite matching, and the Hungarian algorithm for maximum-weight bipartite matching) that all give integer vertex solutions to the problem.


Karmarkar's polynomial-time algorithm only moves near, never exactly onto, the boundary. At the end, it finds a suitable optimal basic solution (i.e., a corner) using a purification scheme. You can use this or a similar technique to move from a face to a corner.
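A minimal sketch of such a purification step (my own illustration, not Karmarkar's actual scheme): given a feasible point of Ax = b, x >= 0, repeatedly move along a null direction of the currently nonzero columns until another coordinate hits zero; once the support columns are independent, the point is a basic solution.

```python
# Purification sketch: from any feasible point of A x = b, x >= 0, walk along
# null directions of the support columns until the support columns become
# linearly independent, i.e. until we reach a basic solution.
from fractions import Fraction

def null_direction(cols):
    """Return d != 0 with sum_j d[j] * cols[j] = 0, or None if the columns
    are linearly independent. Exact arithmetic via Gauss-Jordan."""
    k, m = len(cols), len(cols[0])
    M = [[Fraction(cols[j][i]) for j in range(k)] for i in range(m)]
    pivots, row = [], 0
    for col in range(k):
        piv = next((r for r in range(row, m) if M[r][col] != 0), None)
        if piv is None:
            continue
        M[row], M[piv] = M[piv], M[row]
        M[row] = [v / M[row][col] for v in M[row]]
        for r in range(m):
            if r != row and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[row])]
        pivots.append((row, col))
        row += 1
    pivot_cols = {c for _, c in pivots}
    free = next((c for c in range(k) if c not in pivot_cols), None)
    if free is None:
        return None
    d = [Fraction(0)] * k
    d[free] = Fraction(1)
    for r, c in pivots:
        d[c] = -M[r][free]
    return d

def purify(A, x):
    """Reduce the support of a feasible point while preserving A x and x >= 0."""
    x = [Fraction(v) for v in x]
    while True:
        J = [j for j, v in enumerate(x) if v != 0]
        cols = [[A[i][j] for i in range(len(A))] for j in J]
        d = null_direction(cols) if J else None
        if d is None:
            return x  # support columns independent: x is a basic solution
        if all(v >= 0 for v in d):
            d = [-v for v in d]  # make sure some coordinate decreases
        # largest step that keeps x nonnegative; one coordinate hits zero
        t = min(x[j] / -dj for j, dj in zip(J, d) if dj < 0)
        for j, dj in zip(J, d):
            x[j] += t * dj

# toy system: x1 + x2 + s1 = 4,  x1 + 3*x2 + s2 = 6
A = [[1, 1, 1, 0], [1, 3, 0, 1]]
x_basic = purify(A, [1, 1, 2, 2])  # (1,1,2,2) is feasible but not basic
print(x_basic)
```

Note that if the starting point is optimal, any such direction d automatically satisfies c^T d = 0 (both small steps +d and -d stay feasible, so neither can improve the objective), so the objective value is preserved along the way.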


I can't make it out in Karmarkar's original paper. My reference is "Lineare Optimierung und Netzwerkoptimierung" (English: Linear and Network Optimisation) by Hamacher and Klamroth, which has German and English text side by side.


Yes, there is a simple method, and I have implemented it in C++ to combine the speed of interior point methods with the accuracy of simplex methods (using iterative refinement of the inverse of the basis matrix, I can achieve accuracies of 1 part in 10^15 and better on dense constraint matrices with more than 1000 variables and constraints).
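For reference, the refinement idea itself is simple. Here is a generic sketch in Python rather than the author's C++ (the matrix and numbers are made up; a real solver would reuse an LU factorization of the basis and compute residuals in extended precision):

```python
# Iterative refinement sketch: solve B e = r for the residual r = rhs - B x
# and fold the correction e back into x.
def solve_float(B, rhs):
    """Gaussian elimination with partial pivoting, in plain doubles."""
    n = len(B)
    M = [row[:] + [v] for row, v in zip(B, rhs)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [a - f * v for a, v in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))
        x[r] = s / M[r][r]
    return x

def refine(B, rhs, x, steps=2):
    """Each sweep: residual r = rhs - B x, correction e with B e = r, x += e."""
    for _ in range(steps):
        r = [v - sum(a * xk for a, xk in zip(row, x)) for row, v in zip(B, rhs)]
        e = solve_float(B, r)
        x = [xi + ei for xi, ei in zip(x, e)]
    return x

B = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]                # exact solution is (1/11, 7/11)
x = refine(B, b, [0.1, 0.7])  # start from a deliberately sloppy solution
print(x)
```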


The key is in the simplex method that you use. Assume that the simplex method has a mechanism for refactorizing a basis (e.g. after cumulative rounding errors make it necessary), and that this refactorization simply recreates a basis inverse matrix for a basis containing all of a desired list of basic variables. Assume furthermore that, even if the desired basis cannot be recreated in full, the simplex algorithm is able to continue from a basis that contains, say, 95% of the target basis. Then the answer is very simple.


All you have to do is take the solution from your interior point method, eliminate the variables whose primal solution values are implied to be zero by complementary slackness, and, given a basis size b in the simplex problem, take the b variables in the interior point solution with the largest values (or as many as there are non-zero values, if that is fewer than b) and refactorize the simplex basis to contain those b variables. Then continue the simplex method until it solves. Since you are starting the simplex method close to the finish, this is usually very fast.
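That basis-selection step can be sketched as follows (my own toy illustration; exact rational arithmetic stands in for a real refactorization routine): scan the variables in decreasing order of their interior-point values and keep a column only if it is linearly independent of the columns already chosen.

```python
# Crossover sketch: build a target simplex basis from an interior-point
# solution by greedily taking the largest-value variables whose columns
# remain linearly independent.
from fractions import Fraction

def independent(cols):
    """Exact test: are these vectors linearly independent?"""
    rows = [[Fraction(v) for v in c] for c in cols]
    rank = 0
    for col in range(len(rows[0])):
        piv = next((r for r in range(rank, len(rows)) if rows[r][col] != 0), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col] != 0:
                f = rows[r][col] / rows[rank][col]
                rows[r] = [a - f * p for a, p in zip(rows[r], rows[rank])]
        rank += 1
    return rank == len(rows)

def crossover_basis(A, x_interior, m):
    """Prefer the variables with the largest interior-point values, skipping
    any column that would make the chosen set linearly dependent."""
    order = sorted(range(len(x_interior)), key=lambda j: -x_interior[j])
    chosen, cols = [], []
    for j in order:
        cand = cols + [[A[i][j] for i in range(len(A))]]
        if independent(cand):
            chosen.append(j)
            cols = cand
        if len(chosen) == m:
            break
    return sorted(chosen)

A = [[1, 1, 1, 0], [1, 3, 0, 1]]
# an interior-point iterate that has drifted close to the vertex (3, 1, 0, 0)
print(crossover_basis(A, [2.9, 1.05, 0.03, 0.02], 2))  # [0, 1]
```

Warm-starting the simplex method from the returned basis then typically takes only a few pivots, since most of the basis is already correct.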


Linear programming is widely used in mathematics and in other fields such as economics, business, telecommunications, and manufacturing.


Linear programming (LP), or linear optimisation, is the problem of maximizing or minimizing a linear function subject to linear constraints. The constraints may be equalities or inequalities. Such optimisation problems arise, for example, in calculations of profit and loss. Linear programming problems are an important class of optimisation problems: one finds the feasible region and then optimises over it to obtain the highest or lowest value of the objective function.
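For concreteness, one common symbolic statement of such a problem (a generic standard form, not tied to any post above) is:

\[
\begin{aligned}
\text{maximize (or minimize)} \quad & \mathbf{c}^{\mathsf{T}} \mathbf{x} \\
\text{subject to} \quad & \mathbf{A}\mathbf{x} \le \mathbf{b}, \quad \mathbf{x} \ge \mathbf{0},
\end{aligned}
\]

where \(\mathbf{c}\), \(\mathbf{A}\), and \(\mathbf{b}\) are the data of the problem and \(\mathbf{x}\) is the vector of decision variables; both the objective \(\mathbf{c}^{\mathsf{T}}\mathbf{x}\) and every constraint are linear in \(\mathbf{x}\).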
