Depending on the formulation of the objective function f and the structure of the constraint set S, this optimization problem falls into one of several categories (linear programming, quadratic programming, nonconvex nonlinear programming, etc.). Drake calls a suitable solver for each category of optimization problem.
Drake wraps a number of open-source and commercial solvers (plus a few custom solvers) to provide a common interface for convex optimization, mixed-integer convex optimization, and other non-convex mathematical programs.
The MathematicalProgram class handles the coordination of decision variables, objectives, and constraints. The Solve() method reflects on the accumulated objectives and constraints and dispatches to the most appropriate solver. Alternatively, one can invoke a specific solver by instantiating its SolverInterface and passing the MathematicalProgram directly to the SolverInterface::Solve() method.
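As an illustrative sketch in Python (assuming the pydrake bindings are available; exact module paths can differ between Drake releases), a small quadratic program could be assembled and solved like this:

```python
from pydrake.solvers import MathematicalProgram, OsqpSolver, Solve

# Build the program: minimize x0^2 + x1^2 subject to x0 + x1 == 1.
prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
prog.AddQuadraticCost(x[0] ** 2 + x[1] ** 2)
prog.AddLinearConstraint(x[0] + x[1] == 1)

# Let Solve() reflect on the accumulated costs/constraints and pick a solver.
result = Solve(prog)
print(result.is_success(), result.GetSolution(x), result.get_optimal_cost())

# Or invoke a specific solver through its solver-interface object directly.
solver = OsqpSolver()
result = solver.Solve(prog)
```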
Optimization problems are ubiquitous across industry, from healthcare and finance to manufacturing and logistics, as well as in academia. Many of them are so-called NP-hard problems: they are difficult to solve and do not scale well on classical computers, yet solving them could have an enormous impact on society, so there has been growing interest in using quantum computing to tackle them.
When trying to solve an optimization problem on a quantum computer, the first step is to encode the cost function f in a Hamiltonian H; finding the ground state of H then corresponds to solving the optimization problem. Current standard approaches to formulating optimization problems on quantum computers include methods based on QUBO (Quadratic Unconstrained Binary Optimization) and one-hot encoding, which have substantial limitations depending on the type of optimization problem and the type of hardware used.
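As a minimal sketch of that encoding step (the function name here is illustrative, not from any particular library): a QUBO cost f(x) = xᵀQx over binary variables x ∈ {0,1}ⁿ can be rewritten as an Ising Hamiltonian over spins s ∈ {−1,+1}ⁿ via the substitution x_i = (1 + s_i)/2.

```python
import numpy as np

def qubo_to_ising(Q):
    """Map f(x) = x^T Q x, x in {0,1}^n, to H(s) = sum_i h_i s_i
    + sum_{i<j} J_ij s_i s_j + offset, via x_i = (1 + s_i) / 2."""
    Q = np.asarray(Q, dtype=float)
    n = Q.shape[0]
    h = np.zeros(n)
    J = np.zeros((n, n))
    offset = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                # Q_ii * x_i^2 = Q_ii * x_i = Q_ii * (1 + s_i) / 2 for binary x_i.
                h[i] += Q[i, i] / 2.0
                offset += Q[i, i] / 2.0
            else:
                # Q_ij * x_i * x_j = Q_ij * (1 + s_i + s_j + s_i*s_j) / 4.
                J[min(i, j), max(i, j)] += Q[i, j] / 4.0
                h[i] += Q[i, j] / 4.0
                h[j] += Q[i, j] / 4.0
                offset += Q[i, j] / 4.0
    return h, J, offset
```

A minimizer of the resulting H(s) maps back to a minimizer of f(x) through the same substitution.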
The library of problems presented is intended to facilitate the Hamiltonian formulation beyond QUBO and one-hot encoding. Common problems in the literature are revisited and reformulated using encoding-independent formulations, meaning that they can be encoded trivially using any spin encoding. The possible constraints of the problem are also identified and presented separately from the cost function, making it easy to explore the dynamic implementation of the constraints.
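One common way to implement constraints that are kept separate from the cost (a generic sketch, not a prescription from the paper, which also discusses dynamic implementations) is to add quadratic penalty terms that vanish exactly on feasible states, for example

```latex
H = H_{\mathrm{cost}} + \lambda\, H_{\mathrm{constraint}},
\qquad
H_{\mathrm{constraint}} = \Bigl(\sum_{\alpha} \delta_{v\alpha} - 1\Bigr)^{2},
```

with λ > 0 chosen large enough that violating the constraint is never energetically favorable.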
Among the problems presented are the well-known clustering problem and the traveling salesperson problem. Below is an example of what a decision tree for formulating the traveling salesperson problem looks like. The entire process is presented in four blocks (Define problem, Encode variables, Replace variables, and Final Hamiltonian), and the decisions taken are highlighted in green.
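For reference, assuming the familiar one-hot route (one binary variable x_{v,j} indicating that city v is visited at step j; this is the textbook formulation, not necessarily the branch highlighted in the figure), the cost Hamiltonian reads

```latex
H = A \sum_{v} \Bigl(1 - \sum_{j} x_{v,j}\Bigr)^{2}
  + A \sum_{j} \Bigl(1 - \sum_{v} x_{v,j}\Bigr)^{2}
  + B \sum_{(u,v)} W_{uv} \sum_{j} x_{u,j}\, x_{v,j+1},
```

where W_{uv} are the edge weights, the two penalty terms enforce that each city appears exactly once and each step hosts exactly one city, and A is chosen large enough relative to B·max W_{uv} that constraint violations are never favorable.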
Overall, the paper presents important findings that have the potential to improve and streamline the formulation of optimization problems for quantum computing. First, it shows that obtaining the spin Hamiltonian for an optimization problem involves several choices. This is a feature rather than a burden, as it leaves room to improve the performance of quantum algorithms by making different choices. In addition, the identification of common, recurring building blocks allows users to develop a construction kit for the formulation of optimization problems. This paves the way to an automated preprocessing algorithm that picks the best choices from the construction kit, facilitating the search for optimally performing quantum algorithms.
Finding the optimal spin Hamiltonian for a given quantum-computing hardware platform is itself an important problem, as the best formulation is very likely to differ from platform to platform. It is important to have a procedure capable of tailoring a problem to any given platform, and the hardware-agnostic approach presented here is a key first step in that direction.
For each combinatorial optimization problem, there is a corresponding decision problem that asks whether there is a feasible solution for some particular measure m0. For example, given a graph G that contains vertices u and v, an optimization problem might be "find a path from u to v that uses the fewest edges". This problem might have an answer of, say, 4. A corresponding decision problem would be "is there a path from u to v that uses 10 or fewer edges?", which can be answered with a simple 'yes' or 'no'.
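As a small illustration (hypothetical helper names, plain breadth-first search), the optimization version and the decision version can share the same computation:

```python
from collections import deque

def fewest_edges(graph, u, v):
    """Optimization version: fewest edges on a u-v path, or None if unreachable."""
    dist = {u: 0}
    queue = deque([u])
    while queue:
        node = queue.popleft()
        if node == v:
            return dist[node]
        for nbr in graph.get(node, ()):
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return None

def path_with_at_most(graph, u, v, k):
    """Decision version: is there a u-v path that uses k or fewer edges?"""
    d = fewest_edges(graph, u, v)
    return d is not None and d <= k
```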
In the field of approximation algorithms, algorithms are designed to find near-optimal solutions to hard problems. The usual decision version is then an inadequate definition of the problem since it only specifies acceptable solutions. Even though we could introduce suitable decision problems, the problem is more naturally characterized as an optimization problem.[2]
In mathematical optimization theory, duality or the duality principle is the principle that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem. If the primal is a minimization problem then the dual is a maximization problem (and vice versa). Any feasible solution to the primal (minimization) problem is at least as large as any feasible solution to the dual (maximization) problem. Therefore, the solution to the primal is an upper bound to the solution of the dual, and the solution of the dual is a lower bound to the solution of the primal.[1] This fact is called weak duality.
In general, the optimal values of the primal and dual problems need not be equal. Their difference is called the duality gap. For convex optimization problems, the duality gap is zero under a constraint qualification condition. This fact is called strong duality.
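In symbols (a standard Lagrangian sketch, assuming a primal of the form min f(x) subject to g_i(x) ≤ 0, which the text does not state explicitly):

```latex
L(x, \lambda) = f(x) + \sum_{i} \lambda_i\, g_i(x), \qquad
q(\lambda) = \inf_{x} L(x, \lambda), \qquad
d^{*} = \sup_{\lambda \ge 0} q(\lambda) \;\le\; \inf_{x:\, g(x) \le 0} f(x) = p^{*}.
```

Weak duality is the inequality d* ≤ p*, the duality gap is p* − d*, and strong duality is the statement that the gap is zero.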
In computational optimization, another "duality gap" is often reported: the difference between the value of any dual solution and the value of a feasible but suboptimal iterate for the primal problem. This alternative "duality gap" quantifies the discrepancy between the value of the current feasible but suboptimal primal iterate and the value of the dual problem; under regularity conditions, the value of the dual problem equals the value of the convex relaxation of the primal problem. The convex relaxation is the problem obtained by replacing a non-convex feasible set with its closed convex hull and a non-convex objective function with its convex closure, that is, the function whose epigraph is the closed convex hull of the epigraph of the original primal objective function.[6][7][8][9][10][11][12][13][14][15][16]
Linear programming problems are optimization problems in which the objective function and the constraints are all linear. In the primal problem, the objective function is a linear combination of n variables. There are m constraints, each of which places an upper bound on a linear combination of the n variables. The goal is to maximize the value of the objective function subject to the constraints. A solution is a vector (a list) of n values that achieves the maximum value for the objective function.
In the dual problem, the objective function is a linear combination of the m values that are the limits in the m constraints from the primal problem. There are n dual constraints, each of which places a lower bound on a linear combination of m dual variables.
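Written out in matrix form (the standard statement, consistent with the maximization convention used above), the primal/dual pair looks like this:

```latex
\text{Primal:}\quad \max_{x \in \mathbb{R}^{n}} \; c^{\top} x
\quad \text{s.t.}\quad A x \le b,\; x \ge 0;
\qquad
\text{Dual:}\quad \min_{y \in \mathbb{R}^{m}} \; b^{\top} y
\quad \text{s.t.}\quad A^{\top} y \ge c,\; y \ge 0.
```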
In the linear case, in the primal problem, from each sub-optimal point that satisfies all the constraints, there is a direction or subspace of directions to move that increases the objective function. Moving in any such direction is said to remove slack between the candidate solution and one or more constraints. An infeasible value of the candidate solution is one that exceeds one or more of the constraints.
In the dual problem, the dual vector multiplies the constraints that determine the positions of the constraints in the primal. Varying the dual vector in the dual problem is equivalent to revising the upper bounds in the primal problem. The lowest upper bound is sought. That is, the dual vector is minimized in order to remove slack between the candidate positions of the constraints and the actual optimum. An infeasible value of the dual vector is one that is too low. It sets the candidate positions of one or more of the constraints in a position that excludes the actual optimum.
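As a quick numerical sanity check of this picture (a sketch using scipy.optimize.linprog on a small made-up LP, not an example from the text):

```python
import numpy as np
from scipy.optimize import linprog

# Primal: maximize 3*x1 + 2*x2  s.t.  x1 + x2 <= 4,  x1 + 3*x2 <= 6,  x >= 0.
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0], [1.0, 3.0]])
b = np.array([4.0, 6.0])

# linprog minimizes, so negate the objective; default bounds already give x >= 0.
primal = linprog(-c, A_ub=A, b_ub=b, method="highs")

# Dual: minimize b^T y  s.t.  A^T y >= c,  y >= 0 (rewritten as -A^T y <= -c).
dual = linprog(b, A_ub=-A.T, b_ub=-c, method="highs")

print(-primal.fun, dual.fun)  # both 12.0 for this LP
```

Both calls return an optimal value of 12, so the smallest feasible upper bound provided by the dual coincides with the primal optimum here.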
The optimal solution to the dual program is a lower bound for the optimal solution of the original (primal) program; this is the weak duality principle. If the primal problem is convex and bounded from below, and there exists a point at which all nonlinear constraints are strictly satisfied (Slater's condition), then the optimal solution to the dual program equals the optimal solution of the primal program; this is the strong duality principle. In this case, we can solve the primal program by finding an optimal solution λ* to the dual program, and then solving:
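Concretely, assuming the primal is written in the standard form min_x f(x) subject to g_i(x) ≤ 0, that final step is the unconstrained minimization of the Lagrangian at the optimal multipliers:

```latex
x^{*} \in \operatorname*{arg\,min}_{x} \; \Bigl( f(x) + \sum_{i} \lambda_i^{*}\, g_i(x) \Bigr).
```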
According to George Dantzig, the duality theorem for linear optimization was conjectured by John von Neumann immediately after Dantzig presented the linear programming problem. Von Neumann noted that he was using information from his game theory, and conjectured that the two-person zero-sum matrix game was equivalent to linear programming. Rigorous proofs were first published in 1948 by Albert W. Tucker and his group. (Dantzig's foreword to Nering and Tucker, 1993)
In support vector machines (SVMs), formulating the primal problem as the dual problem makes it possible to apply the kernel trick, although solving the dual can have higher time complexity when the number of training samples is large.
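For reference, the standard soft-margin SVM dual (the form in which a kernel K enters; this is textbook material rather than something stated above) is

```latex
\max_{\alpha}\; \sum_{i} \alpha_i
- \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j\, y_i y_j\, K(x_i, x_j)
\quad \text{s.t.}\quad 0 \le \alpha_i \le C,\;\; \sum_{i} \alpha_i y_i = 0.
```

Because the training data appear only through the inner products K(x_i, x_j), the kernel trick can replace them with any positive-semidefinite kernel.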
FIGURE 1. Summary of popular encodings of discrete variables in terms of binary x_i or Ising s_i variables. Each encoding has a particular representation of the value of the variable v and of the value indicator δ_{vα}. We also include a visualization example for each encoding, where the red circles represent excited qubits and #v is the range of variable v for the given number of qubits. If every quantum state in the register represents a valid value of v, the encoding is called dense. By contrast, sparse encodings must include a core term in the Hamiltonian in order to represent a value of v. Sparse encodings have simpler representations of the value indicators than dense encodings, but the core term implementation demands extra interaction terms between the qubits. The cost of a Hamiltonian in terms of the number of qubits and interaction terms strongly depends on the chosen encoding and should be evaluated for every specific problem.
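As a concrete illustration of two such encodings (standard forms, not copied from the figure): a one-hot encoding of a variable v with d values uses d qubits and is sparse, while a binary (base-2) encoding uses ⌈log₂ d⌉ qubits and is dense:

```latex
\text{one-hot:}\quad v = \sum_{\alpha=1}^{d} \alpha\, x_{\alpha},\quad
\delta_{v\alpha} = x_{\alpha},\quad
\text{core term enforcing } \sum_{\alpha} x_{\alpha} = 1;
\qquad
\text{binary:}\quad v = \sum_{i=0}^{\lceil \log_2 d \rceil - 1} 2^{i} x_{i},\quad
\delta_{v\alpha} = \prod_{i} \bigl(\alpha_i x_i + (1-\alpha_i)(1-x_i)\bigr),
```

where α_i are the bits of α. This matches the trade-off described in the caption: the sparse one-hot encoding has trivial value indicators but needs a core term, while the dense binary encoding needs no core term but has more complicated indicators.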