How to use GLPK.exact?

Stéphane Laurent

unread,
Apr 7, 2014, 6:14:21 PM4/7/14
to julia...@googlegroups.com
Hello, 

When I use the exact function of the GLPK package, I get something like this:

julia> GLPK.exact(lp)
glp_exact: 6 rows, 9 columns, 18 non-zeros
GLPK bignum module is being used
(Consider installing GNU MP to attain a much better performance.)

* 6: objval = 0.107142857142857 (1)
* 6: objval = 0.107142857142857 (1)
OPTIMAL SOLUTION FOUND
0


GNU MP should be installed on my Windows machine, because I use the gmp package in R. What should I do so that GLPK uses this library in Julia?

Iain Dunning

unread,
Apr 7, 2014, 10:58:09 PM4/7/14
to julia...@googlegroups.com
I strongly recommend you update to Julia 0.2 at least and use the dedicated GLPK package https://github.com/JuliaOpt/GLPK.jl - I'm sure you'll get much better support. Version 0.1 of Julia (which had inbuilt GLPK - I had forgotten!) isn't supported anymore.
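For reference, installing the registered package from the Julia REPL is just:

Pkg.add("GLPK")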

How did you get version 0.1?

Cheers,
Iain

Stéphane Laurent

unread,
Apr 8, 2014, 2:24:43 AM4/8/14
to julia...@googlegroups.com
Hello Iain, I don't understand what you mean:


julia> versioninfo()
Julia Version 0.2.0
Commit 05c6461 (2013-11-16 23:44 UTC)
Platform Info:
  System: Windows (x86_64-w64-mingw32)
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY)
  LAPACK: libopenblas
  LIBM: libopenlibm

julia> Pkg.status()
Required packages:
 - DataFrames                    0.4.2
 - Distributions                 0.3.0
 - GLM                           0.2.4
 - GLPK                          0.2.10
 - GSL                           0.1.1
 - GnuTLS                        0.0.0
 - MixedModels                   0.2.3
 - NLopt                         0.0.3
 - NumericExtensions             0.3.6
 - RDatasets                     0.1.1
 - Stats                         0.1.0
 - Winston                       0.9.0
Additional packages:
 - BinDeps                       0.2.12
 - Blocks                        0.0.2
 - Cairo                         0.2.12
 - Color                         0.2.9
 - DataArrays                    0.0.2
 - GZip                          0.2.12
 - HTTPClient                    0.1.1
 - IniFile                       0.2.2
 - LibCURL                       0.1.1
 - LibExpat                      0.0.4
 - Nettle                        0.1.3
 - SortingAlgorithms             0.0.1
 - StatsBase                     0.2.10
 - Tk                            0.2.11
 - URIParser                     0.0.1
 - URLParse                      0.0.0
 - WinRPM                        0.0.13
 - Zlib                          0.1.5

Ivar Nesje

unread,
Apr 8, 2014, 3:04:08 AM4/8/14
to julia...@googlegroups.com
He might have been confused because you linked to the 0.1 version of the documentation, where GLPK apparently was in the standard library.

Ivar

Carlo Baldassi

unread,
Apr 8, 2014, 9:33:34 AM4/8/14
to julia...@googlegroups.com
Unfortunately, from inspecting the GLPK source code it seems that the choice between GNU MP and the GLPK bignum module is made at compile time. So it appears that the Windows binaries which are automatically downloaded by the Julia package are simply not compiled with GNU MP. As far as I can tell, the options are: 1) compile from source on Windows and put the resulting shared library under GLPK/deps/usr/lib, 2) find alternative binaries, or 3) ask the person who provides the Windows binaries (http://www.xypron.de).

Tony Kelman

unread,
Apr 8, 2014, 9:24:19 PM4/8/14
to julia...@googlegroups.com
It looks like those GLPK binaries are Visual Studio builds, and as I understand it recent versions of GMP are difficult if not impossible to compile with Visual Studio. In fact GMP was forked into MPIR (http://www.mpir.org/#about), with one major motivation being MSVC support. MPIR might be usable as a drop-in replacement for GMP?

A MinGW build of GLPK should work with a MinGW build of GMP (like the one Julia's using, and probably R too?), but if you want to use a pre-compiled binary for one or the other you'll have to be very careful to use compatible compiler versions.

-Tony

Stéphane Laurent

unread,
Apr 9, 2014, 1:42:09 AM4/9/14
to julia...@googlegroups.com
Thank you for your answers. Unfortunately, "(pre-)compiled binaries", "dll", etc., is all Greek to me. Moreover, when you talk about GLPK I don't know whether you mean the C library or the Julia package. I was only trying GLPK "for fun", so this is not important. Thank you again.

Stéphane Laurent

unread,
Apr 9, 2014, 1:08:19 PM4/9/14
to julia...@googlegroups.com
Hello guys, 

 I hope you'll enjoy this article on my blog.

If you're able to use GNU MP on your machine, would you be able to find 3/28?

Any other comments are welcome!

Ivar Nesje

unread,
Apr 9, 2014, 2:16:26 PM4/9/14
to julia...@googlegroups.com
You might be excited to find out that you can use μ, ν, M₁ and M₂ as variable names in Julia, because we support UTF-8.

Sometimes that gives more readable code when you compare it to the mathematical text.

Ivar

Carlo Baldassi

unread,
Apr 9, 2014, 4:51:00 PM4/9/14
to julia...@googlegroups.com
Hi Stéphane, nice post! I have a number of comments and suggestions which you may find useful. I have accompanied these comments with some demo code, which you can find here.

A) Generic to Julia

A.1) as Ivar said, you can use Unicode characters if you want;

A.2) starting from the next Julia version, an expression like "1 - eye(n)" will raise a warning and urge you to use the dotted-minus operator, i.e. "1 .- eye(n)". Better start getting into the habit of doing that! Also, in Julia v0.3 you can write the same expression as "ones(n,n) - I", which is arguably nicer;

A.3) nested loops like

for i = 1:n
    for j = 1:n
        # ...
    end
end

can be written concisely as

for i = 1:n, j = 1:n
    # ...
end



B) Specific to GLPK

B.1) it is not advisable to do "using GLPK", it's better to do "import GLPK", because of the large number of common names exported by GLPK, which makes name collisions likely

B.2) one easy way to get the "ia, ja, ar" vectors for the sparse representation is to call the function "findnz" on a sparse matrix, i.e. you just do
ia,ja,ar = findnz(sparse(A))
Furthermore, the GLPK.jl package does that for you (it's one of the few improvements over the native GLPK), so that you can simply call
GLPK.load_matrix(lp, sparse(A))
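For instance, with a small made-up matrix just to show the shapes involved:

A = [1.0 0.0 2.0;
     0.0 3.0 0.0]
ia, ja, ar = findnz(sparse(A))
# ia == [1, 2, 1]         row indices of the nonzeros
# ja == [1, 2, 3]         column indices
# ar == [1.0, 3.0, 2.0]   the values themselves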

B.3) Indeed, using GNU MP, GLPK.exact on this problem is as fast as GLPK.simplex. The solution found is "0.10714285714285714" (the best 64-bit floating-point approximation of 3/28) instead of "0.10714285714285715" (the next floating-point value, i.e. a 1 ulp difference). It's likely that tightening the tolerance would close this gap too. Unfortunately, it's not possible to get the rational value out of GLPK.exact, and there is no linear programming solver which accepts rational arguments in Julia at the moment.
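A quick way to see the 1 ulp gap at the REPL (my own check, not GLPK output):

julia> 3/28
0.10714285714285714

julia> nextfloat(3/28)
0.10714285714285715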

C) On linear programming in Julia

C.1) Instead of using GLPK directly, it's much easier to use a higher-level Julia package such as MathProgBase, which uses GLPK internally (via a glue package) but does most of the low-level setup for you. It also has the advantage that the same program can use different solvers (GLPK, Clp, Gurobi, CPLEX, Mosek, ...) simply by changing a function argument, which I think is pretty cool.
The gist I linked above has an example which demonstrates how much easier, more concise and more readable the program becomes.
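Not the code from the gist, but roughly, a MathProgBase version of your problem could look like the sketch below (it assumes the linprog(c, A, sense, b, solver) form, with variables defaulting to >= 0, and the GLPK solver from GLPKMathProgInterface):

using MathProgBase
using GLPKMathProgInterface

mu = [1/7, 2/7, 4/7]
nu = [1/4, 1/4, 1/2]
n  = length(mu)

# vectorize p column-major: x[(j-1)*n + i] corresponds to p[i,j]
A = zeros(2n, n^2)
for k in 1:n, j in 1:n
    A[k,     (j-1)*n + k] = 1.0   # row sums:    sum_j p[k,j] == mu[k]
    A[n + k, (k-1)*n + j] = 1.0   # column sums: sum_i p[i,k] == nu[k]
end
b = vcat(mu, nu)

c = ones(n^2)
for i in 1:n
    c[(i-1)*n + i] = 0.0          # no cost on the diagonal
end

sol = linprog(c, A, '=', b, GLPKSolverLP())
println(sol.status, " ", sol.objval)   # should print Optimal and a value close to 3/28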

C.2) Going up yet another level, you can use JuMP, which itself uses MathProgBase but introduces special syntax aimed at letting you express this kind of optimization problem much more naturally. Again, it lets you choose which solver you want to use under the hood.

Hope this helps, cheers.

Stéphane Laurent

unread,
Apr 9, 2014, 6:09:45 PM4/9/14
to julia...@googlegroups.com
Thank you so much, Carlo!
I will incorporate your comments in the next update of my blog post.

About GLPK.exact: is it not possible to get the rational number 3/28 instead of a decimal approximation?


On Wednesday, April 9, 2014 at 22:51:00 UTC+2, Carlo Baldassi wrote:

Carlo Baldassi

unread,
Apr 9, 2014, 6:18:26 PM4/9/14
to Julia Users
About GLPK.exact: is it not possible to get the rational number 3/28 instead of a decimal approximation?

No, unfortunately. Also, for that to happen/make sense, you'd also need to be able to pass all the inputs as exact rational values, i.e. as "1//7" instead of "1/7". This would be possible if we had a native generic Julia linear programming solver, but it's not possible with GLPK, which can only use exact arithmetic internally.
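For example, at the REPL (on a 64-bit machine):

julia> 1/7          # ordinary division gives a Float64
0.14285714285714285

julia> 1//7         # the double slash gives an exact Rational
1//7

julia> typeof(1//7)
Rational{Int64} (constructor with 1 method)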

Miles Lubin

unread,
Apr 9, 2014, 7:28:41 PM4/9/14
to julia...@googlegroups.com
When we have a simplex solver (either in Julia or external) that supports rational inputs, we could consider making this work with JuMP, but for now JuMP stores all data as floating-point as well. 

Stéphane, nice work. LP definitely needs more exposure in the probability community. Please, please write your LPs algebraically; there's really no excuse not to do this in Julia when your original model is in this form.

Compare this:

using JuMP

m = Model()
@defVar(m, p[1:n,1:n] >= 0)
@setObjective(m, Max, sum{p[i,j], i in 1:n, j in 1:n; i != j})

for k in 1:n
    @addConstraint(m, sum(p[k,:]) == μ[k])
    @addConstraint(m, sum(p[:,k]) == ν[k])
end
solve(m)
println("Optimal objective value is:", getObjectiveValue(m))


with the matrix gymnastics that you had to do to use the low-level GLPK interface. Writing down a linear programming problem shouldn't be that hard! (Note: I haven't tested that JuMP code).

Miles

Tony Kelman

unread,
Apr 9, 2014, 8:04:56 PM4/9/14
to julia...@googlegroups.com
Another semi-hacky option here is to use a conventional double-precision LP solver to tell you the active set at the solution the solver considered optimal (up to whatever its tolerance was set to). Then you can take that active set and solve the arbitrary-precision version of the constraint system, with the known subset of the variables fixed to zero. You should then be able to look at the dual variables at that active set in arbitrary precision, to tell you whether that same active set is also optimal in exact arithmetic.
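A rough sketch of that idea in Julia (everything below is made up for illustration: the tiny A, b and the basis index set would come from your actual problem and from whatever the double-precision solver reports):

# exact (rational) copy of the equality-constraint data  A*x == b, x >= 0
A = Rational{BigInt}[1 1 0;
                     0 1 1]
b = Rational{BigInt}[1//7, 2//7]

basis = [1, 3]                        # columns the float solver reported as basic

xB = A[:, basis] \ b                  # exact solve of the square basis system
x = zeros(Rational{BigInt}, size(A, 2))
x[basis] = xB                         # candidate exact solution

# to confirm optimality one would also check x .>= 0 and the signs of the
# reduced costs / dual variables, computed in the same exact arithmetic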

Stéphane Laurent

unread,
Apr 10, 2014, 2:27:43 AM4/10/14
to julia...@googlegroups.com
Thank you for these precious informations. The JuMP package looks very awesome, I hope to give it a try soon.

There was a time when BigInt(3)/BigInt(28) returned the big rational 3/28; why has this feature been removed?

It would take too long to explain what my R app here http://glimmer.rstudio.com/stla/ShinyIntrinsicMetric/ does, but it returns Kantorovich distances as exact rational numbers; it would be sad if we couldn't do this with Julia.

By the way, for another problem I need to get the vertices of the polyhedron defined by the linear constraints, as with the cddlib library. Do you know how I could do that?

Miles Lubin

unread,
Apr 10, 2014, 3:36:38 AM4/10/14
to julia...@googlegroups.com, Joseph Huchette

By the way, for another problem I need to get the vertices of the polyhedron defined by the linear constraints, as with the cddlib library. Do you know how I could do that?

Enumerating vertices requires a very different algorithm from optimizing over polyhedra. The best way to do this in Julia right now is probably to call pycddlib using PyCall (I've personally done this and it works). I think you can pass rationals to it as well.

Tony Kelman

unread,
Apr 10, 2014, 3:39:58 AM4/10/14
to julia...@googlegroups.com
Unless I'm blind and just can't find one, it appears there is not yet a solid high-dimensional computational geometry package for Julia, like cddlib or the Multi-Parametric Toolbox for Matlab. I imagine a wrapper around cddlib would be fairly easy to write (perhaps even autogenerated), or you could go through Python and pycddlib as Miles just suggested. Translating the algorithm into Julia code would take a bit more effort, but give you a more powerful result that could likely work immediately with arbitrary-precision Julia types.

Carlo Baldassi

unread,
Apr 10, 2014, 4:11:01 AM4/10/14
to Julia Users
There was a time when BigInt(3)/BigInt(28) returned the big rational 3/28; why has this feature been removed?

I don't think that command ever worked like that, actually; in order to get Rational values, you need to use a double slash "//" instead of a single slash:

big(3) // 28

does what you want (it returns a Rational{BigInt}).

julia> r = big(3)//28
3//28

julia> typeof(r)
Rational{BigInt} (constructor with 1 method)

julia> r ^ 100
515377520732011331036461129765621272702107522001//5197603356578053743046457091718238387315570720700158666355787339743938954474733738504301832337506542362159928768916101388187175111771723845861376



Stéphane Laurent

unread,
Apr 10, 2014, 5:49:16 AM4/10/14
to julia...@googlegroups.com
Again, thank you for all these answers. Sorry Carlo, I missed the double slash in your previous answer. 

It would be a good opportunity for me to call Python, in order to train my Python skills in addition to Julia. But why do you suggest calling pycddlib with PyCall rather than calling cddlib with ccall?

Joey Huchette

unread,
Apr 10, 2014, 11:53:17 AM4/10/14
to julia...@googlegroups.com
Either approach should work in principle (pycddlib vs. cddlib), but I suspect that the Python wrapper is higher-level, and will be easier to use from Julia. For reference, here's a snippet of how you might calculate the extreme points as rationals, calling pycddlib from Julia (adapted from code from Miles): https://gist.github.com/joehuchette/10396014 

A computational geometry package in Julia would be a godsend for me (right now I'm working in Perl which is...not ideal), but I agree that it would be quite a bit of work. Certainly on my radar, though.

Stéphane Laurent

unread,
Apr 12, 2014, 11:50:26 AM4/12/14
to julia...@googlegroups.com
Thank you everybody. I have updated my blog post, especially to include Carlo's comments.  
Unfortunately I am having some problems using JuMP (I have opened another discussion about it). And installing pycddlib on 64-bit Windows is a real nightmare.

Iain Dunning

unread,
Apr 16, 2014, 11:07:29 PM4/16/14
to julia...@googlegroups.com
I implemented a version of the simplex method for rational numbers, so you can solve LPs exactly in pure Julia.
https://github.com/IainNZ/RationalSimplex.jl
Not for serious work - just for fun!

Miles Lubin

unread,
Apr 16, 2014, 11:11:10 PM4/16/14
to julia...@googlegroups.com
Where's the MathProgBase interface? :)

Carlo Baldassi

unread,
Apr 17, 2014, 5:03:08 AM4/17/14
to Julia Users
Stéphane, sorry for the confusion (I should have made myself clearer before), but the new version of the blog post is not correct, in that the difference between 1-X and 1.-X has nothing to do with Int64 vs Float64, but rather with the difference between the operators `-` (minus) and `.-` (dot-minus, i.e. element-wise-minus). That's because the expressions above are parsed as 1 - X and 1 .- X so the number on the left is always an Int. (Using spaces around operators is always advisable for clarity anyway, and in this case where ambiguity can arise it may become mandatory at some point.)
So the main points are:

i) when dealing with Arrays, `-` and `.-` are distinct. In Julia 0.2 you are allowed to use `Number - Array` (or `Array + Number`, etc.), but in the upcoming Julia 0.3 you'll get a warning telling you to use `Number .- Array` instead (or `Array .+ Number`, etc.), and you should only use `-` (and `+`) between Arrays of the same shape.

ii) using an Int is no problem, it will get automatically promoted and there is no ambiguity, and this is not going to change.
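A small illustration of point (i):

X = eye(2)      # 2x2 Float64 matrix

X - eye(2)      # fine: `-` between two arrays of the same shape
1 .- X          # fine: a scalar combined element-wise with an array via `.-`
1 - X           # works in 0.2, but 0.3 will warn and ask for `1 .- X`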

Stéphane Laurent

unread,
Apr 17, 2014, 8:12:20 AM4/17/14
to julia...@googlegroups.com
Thank you Iain I will try your solver soon I hope. And thank you again Carlo, I will update my post.

Stéphane Laurent

unread,
Apr 21, 2014, 4:16:38 PM4/21/14
to julia...@googlegroups.com
My blog post is updated.

Iain, I have tried your code with the example from my blog. I see the right result (3//28) in the output, but I don't understand how to know that it is the right one.

using RationalSimplex
using Base.Test

b = [1//7, 2//7, 4//7, 1//4, 1//4, 1//2]
c = [0//1, 1//1, 1//1, 1//1, 0//1, 1//1, 1//1, 1//1, 0//1]
c = [c, repmat([0//1], size(b)[1])] # surely clumsy
M = [1//1 1//1 1//1 0//1 0//1 0//1 0//1 0//1 0//1;
    0//1 0//1 0//1 1//1 1//1 1//1 0//1 0//1 0//1;
    0//1 0//1 0//1 0//1 0//1 0//1 1//1 1//1 1//1;
    1//1 0//1 0//1 1//1 0//1 0//1 1//1 0//1 0//1;
    0//1 1//1 0//1 0//1 1//1 0//1 0//1 1//1 0//1;
    0//1 0//1 1//1 0//1 0//1 1//1 0//1 0//1 1//1]
Id = zeros(Rational{Int64}, size(M)[1], size(M)[1])
for i in 1:size(M)[1]
        Id[i,i] = 1//1
end
M = hcat(M, Id)

julia> simplex(c, :Min, M, b, ['=','=','=','=','=','='])
(:Optimal,[1//7,0//1,0//1,0//1,1//4,0//1,0//1,0//1,1//2,0//1,1//28,1//14,3//28,0//1,0//1])

Stéphane Laurent

unread,
Apr 22, 2014, 2:28:01 PM4/22/14
to julia...@googlegroups.com
Miles, I have successfully installed JuMP and GLPKMathProgInterface on Windows 32-bit. 

Your code works very well, this is really awesome!! However, the result is not as precise as the one given by GLPK.exact.

using JuMP

mu = [1/7, 2/7, 4/7]
nu = [1/4, 1/4, 1/2]
n = length(mu)

m = Model()
@defVar(m, p[1:n,1:n] >= 0)
@setObjective(m, Min, sum{p[i,j], i in 1:n, j in 1:n; i != j})

for k in 1:n
    @addConstraint(m, sum(p[k,:]) == mu[k])
    @addConstraint(m, sum(p[:,k]) == nu[k])
end
solve(m)

julia> println("Optimal objective value is:", getObjectiveValue(m))
Optimal objective value is:0.10714285714285715

julia> 3/28
0.10714285714285714



Miles Lubin

unread,
Apr 22, 2014, 2:35:36 PM4/22/14
to julia...@googlegroups.com
Cool! Glad to hear you got it working. Supporting exact coefficients in JuMP is technically possible, and I've opened an issue for it: https://github.com/JuliaOpt/JuMP.jl/issues/162. This will probably remain on the wishlist for a while.

Carlo Baldassi

unread,
Apr 22, 2014, 5:25:07 PM4/22/14
to julia...@googlegroups.com
Note that you can still use GLPK.exact with JuMP; you just need to change the m = Model() line to this:

using GLPKMathProgInterface
m = Model(solver=GLPKSolverLP(method=:Exact))

while all the rest stays the same.

As an aside, it's really kind of annoying that GLPK.exact uses (basically) Rational{BigInt} internally but the interface doesn't let you access it. Seems a waste.

Stéphane Laurent

unread,
Apr 23, 2014, 3:40:02 PM4/23/14
to julia...@googlegroups.com
Right, it works. Thank you. 
If I don't load GLPKMathProgInterface, does JuMP use an internal solver?

Miles Lubin

unread,
Apr 23, 2014, 5:02:58 PM4/23/14
to julia...@googlegroups.com
On Wednesday, April 23, 2014 3:40:02 PM UTC-4, Stéphane Laurent wrote:
If I don't load GLPKMathProgInterface, does JuMP use an internal solver?

If a solver isn't specified, JuMP (actually MathProgBase) will search for an available solver and pick one by default. JuMP does not have an internal solver.

By the way, future discussions on linear programming etc. should take place on the julia-opt mailing list: https://groups.google.com/forum/#!forum/julia-opt.

Thanks,
Miles

Stéphane Laurent

unread,
May 2, 2014, 12:55:49 PM5/2/14
to julia...@googlegroups.com
Thank you everybody; almost every point discussed here is now written up on my blog.

Miles Lubin

unread,
May 2, 2014, 1:00:32 PM5/2/14
to julia...@googlegroups.com
Looks good!