Global optimization package for JuliaOpt


Hans W Borchers

Feb 6, 2014, 12:13:41 PM
to juli...@googlegroups.com
I am working on a package named gloptim for global optimization problems.
Below is an example of applying a "simple evolutionary algorithm" to a well-known test function with thousands of local minima.
I plan to include a version of "differential evolution" and one of the pure CMA-ES approach.
I hope to have a package on GitHub within a few days.

I certainly would like to work together with others on such global optimization problems,
especially as I am a total newcomer to the Julia programming language.
Please let me know if someone is interested in global optimization or is working on similar solvers.
Also, I have a whole bunch of more or less 'official' test problems.

Hans Werner

PS:
The following is a well-known test function from the SIAM 100-Digit Challenge with thousands of local minima:

    # Trefethen's test function
    julia> function ftrefethen(p)
               x = p[1]; y = p[2]
               return exp(sin(50 * x)) + sin(60 * exp(y)) + sin(70 * sin(x)) +
                      sin(sin(80 * y)) - sin(10 * (x + y)) + (x^2 + y^2)/4
           end

Here is an implementation of the simple evolutionary algorithm applied to this function:

    julia> @time pmin, fmin, hmin = simple_ea(ftrefethen, [-1, -1], [1, 1]);
    elapsed time: 0.053213764 seconds (8640504 bytes allocated)

    # Display minimum point and value
    julia> println(pmin); println(fmin)
    -0.024403079696579247
    0.2106124267973353

    -3.3068686474752407

This is exactly the global minimum of Trefethen's test function, found in about 0.05 sec.
It is 10 times faster than my implementation in another interpreted language.
But I assume the allocation of space can still be greatly improved.

Iain Dunning

Feb 6, 2014, 12:50:17 PM
to juli...@googlegroups.com
You should check out https://github.com/robertfeldt/BlackBoxOptim.jl, which would seem to have considerable overlap.

Steven G. Johnson

Feb 6, 2014, 1:31:10 PM
to juli...@googlegroups.com
Hi Hans,

The NLopt package for Julia also provides interfaces to several global optimization algorithms.

I tried it out on the Trefethen test function via:

function ftrefethen(p, grad)

    x = p[1]; y = p[2]
    if length(grad) > 0
        grad[1] = 50*cos(50x)*exp(sin(50x)) + 70*cos(x)*cos(70 * sin(x)) -
                  10*cos(10*(x+y)) + 0.5*x
       
        grad[2] = 60*exp(y)*cos(60*exp(y)) + 80*cos(80y)*cos(sin(80y)) -
                  10*cos(10*(x+y)) + 0.5*y
    end

    return exp(sin(50 * x)) + sin(60 * exp(y)) + sin(70 * sin(x)) +
           sin(sin(80 * y)) - sin(10 * (x + y)) + 0.25*(x^2 + y^2)
end

exactmin = -3.30686864747524   # global minimum of ftrefethen (the value found above)

for alg in (:GN_DIRECT_L, :G_MLSL_LDS, :GN_ESCH, :GN_CRS2_LM, :GN_ISRES)
    opt = Opt(alg, 2)
    lower_bounds!(opt, [-1,-1])
    upper_bounds!(opt, [1,1])
    min_objective!(opt, ftrefethen)
    stopval!(opt, exactmin + 1e-3)
       
    # gradient-based local optimizer, for MLSL
    localopt = Opt(:LD_LBFGS, 2)
    ftol_abs!(localopt, 1e-5)
    local_optimizer!(opt, localopt)
   
    t = @elapsed (minf,minx) = optimize(opt, [0,0])
    println("got exact + $(minf - exactmin) at ($(minx[1]),$(minx[2])) after $t secs")
end

Deciding on termination criteria for global optimizers is always tricky.  For benchmark purposes, I terminated when the objective was within 1e-3 of the exact minimum; what criterion did you use?

The DIRECT and MLSL algorithms are deterministic global optimizers, and they converge in about 0.04sec and 0.0016sec on my machine, respectively.  (MLSL is typically my method of choice for smooth functions like this one.)

ESCH and ISRES are genetic algorithms, and their convergence time is highly variable for this objective, depending on the random numbers produced for each run ... ESCH typically took a few seconds, and the other two were slower.  So, your 0.05 seconds is pretty good!  Is that consistent from run to run?

--SGJ

Steven G. Johnson

Feb 6, 2014, 2:13:42 PM
to juli...@googlegroups.com
Timing benchmarks of optimization algorithms are also a bit suspect; my feeling is that it is better to simply count function evaluations.  The reason is that, in real applications (as opposed to synthetic benchmarks), the objective function is typically very expensive to evaluate, and as a result the computation time spent in the optimizer code is often mostly irrelevant.
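A simple way to count evaluations is to wrap the objective in a counting closure before handing it to the optimizer. The following is an illustrative sketch (the names `counted`, `g`, and `evals` are made up for this example, not from any package):

```julia
# Illustrative sketch: wrap any objective in a closure that counts its calls.
function counted(f)
    n = 0
    g(args...) = (n += 1; f(args...))   # forwards all arguments, bumps the counter
    evals() = n                          # read off the running total
    return g, evals
end

# Count evaluations of a toy objective.
g, evals = counted(p -> (p[1]^2 + p[2]^2) / 4)
g([0.1, 0.2]); g([0.3, 0.4]); g([0.0, 0.0])
println(evals())   # 3
```

Passing `g` to the optimizer in place of `f` then gives an evaluation count that is independent of machine speed and optimizer overhead.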

Hans W Borchers

Feb 6, 2014, 2:18:29 PM
to juli...@googlegroups.com
Steven,

I know -- I am the author of the 'nloptwrap' R package that provides simplified interfaces to the 'nloptr' package and thus to the NLopt library. These wrappers will be included in the next version of 'nloptr'.

You may be interested to hear that a benchmark study on "Global Optimization with R" will appear in the next special issue of the Journal of Statistical Software. Functions in NLopt are included in the comparison. E.g., DIRECT turned out to be one of the less powerful approaches; I can't remember about MLSL at the moment, but will look at it again.

"Differential Evolution" certainly is one of the most widely usable global solvers. To have an implementation in Julia would be interesting. Recently I tested a new implementation that was even capable of solving the notorious modified Langerman test function reliably.

There is a "philosophical" standpoint that it is good to have many different solvers and even more than one implementation of the same solver as there will not be one optimization algorithm that can solve all (global) problems. I would not see a 'Gloptim' package as a rival, but as a useful addition.

Hans Werner

Steven G. Johnson

Feb 6, 2014, 3:34:12 PM
to juli...@googlegroups.com


On Thursday, February 6, 2014 2:18:29 PM UTC-5, Hans W Borchers wrote:
> You may be interested to hear that a benchmark study on "Global Optimization with R" will appear in the next special issue of the Journal of Statistical Software. Functions in NLopt are included in the comparison. E.g., DIRECT turned out to be one of the less powerful approaches; I can't remember about MLSL at the moment, but will look at it again.

That's great, I'm looking forward to the paper!   (DIRECT/DIRECT-L is mainly useful in very low dimensions.  However, I find that it works much better if you don't try to use DIRECT to converge to high accuracy ... instead, get the rough answer from DIRECT and then "polish" it with a local optimizer.)

I definitely agree that having many algorithms to choose from is good, but ideally they should have a similar interface so that switching between them is easy.

If there are major omissions from NLopt (i.e. algorithms that do much better in many cases), it would be nice to add them to NLopt so that they are available in many languages.   This is the general technical problem of developing libraries in high-level languages such as Julia, Python, or R: development is easy, but then the results are very difficult to use from any other language (it is possible in principle to call e.g. Python libraries from C, but few people do it in practice).

Stefan Karpinski

Feb 6, 2014, 3:52:45 PM
to Steven G. Johnson, juli...@googlegroups.com
This problem could be significantly addressed by the ability to generate C-compatible shared libraries from Julia code. Then everyone would be able to call Julia libraries without needing anything more than the ability to call C. I don't think we're terribly far from this being a reality.



Steven G. Johnson

Feb 6, 2014, 4:07:23 PM
to juli...@googlegroups.com, Steven G. Johnson
Yes, I've discussed that with Jeff as well.  As long as you are willing to introduce a dependency on libjulia, this seems feasible and will make it much more attractive to write libraries in Julia.

(You will probably still have to write a thin "cfunction" glue layer in many cases, to provide a nice C interface that automatically initializes julia and doesn't expose any internal jl_* types to the C caller.  But at least it will be possible to write this glue entirely in Julia.)
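As a rough illustration of what such glue could look like in present-day Julia (hypothetical; the `@cfunction` macro is the current spelling of this mechanism and postdates this thread):

```julia
# A Julia function restricted to a C-compatible signature ...
mysum(a::Cdouble, b::Cdouble)::Cdouble = a + b

# ... exposed as a C function pointer. A C caller could hold this as
#   double (*fp)(double, double);
const mysum_ptr = @cfunction(mysum, Cdouble, (Cdouble, Cdouble))

# Invoke it through the raw pointer from Julia itself, as a C caller would:
ccall(mysum_ptr, Cdouble, (Cdouble, Cdouble), 1.5, 2.5)   # 4.0
```

The remaining glue for a real library (initialization, error handling, hiding internal types) would wrap a handful of such pointers behind a conventional C header.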

Still, I'm guessing that in practice it will be a year or more before that feature is in a reliable state.

Hans W Borchers

Feb 6, 2014, 4:14:43 PM
to juli...@googlegroups.com
> I definitely agree that having many algorithms to choose from is good,
> but ideally they should have a similar interface so that switching between them is easy.

There is the 'optimx' package by John C. Nash that builds a common interface to many of the relevant optimization solvers in R. Results of all solvers are displayed in one table. That helped to find a lot of errors in the different implementations -- e.g., (un)fortunately, also one in my code for the bounded version of Hooke-Jeeves. To start building such a common interface for all the solvers available in Julia would be great and a push towards stability and correctness.

Miles Lubin

Feb 6, 2014, 4:43:38 PM
to juli...@googlegroups.com
MathProgBase already has a common interface for linear/quadratic/mixed-integer optimization. At some point in the near future JuMP will be expanding into constrained nonlinear optimization, so we will need a similar common interface.

Robert Feldt

Feb 7, 2014, 4:18:11 AM
to juli...@googlegroups.com
Hans,

I have implemented quite a few DE variants in BlackBoxOptim, see:


I'd be happy to join forces on getting state-of-the-art global/blackbox optimizers implemented in native Julia...

Iain is helping me to integrate BlackBoxOptim as an official package, and then possibly include it in JuliaOpt within the coming days.

Cheers,

Robert

Robert Feldt

Feb 7, 2014, 4:33:51 AM
to juli...@googlegroups.com
In fact I added an example using your 2-dim Trefethen function implementation here:


and it is easily solved by a DE. I have not yet added any good convergence-based termination in BlackBoxOptim, so it runs all 10,000 function evals even though that is not needed; thus the execution time is somewhat longer than yours. It should be easy to add better convergence checking and termination. I have not really optimized for speed or memory allocation at this point; it has been good enough for my applications so far.

julia> @time bboptimize(ftrefethen, (-1.0, 1.0); dimensions = 2)
Starting optimization with optimizer AdaptiveDE/rand/1/bin/radiuslimited

Optimization stopped after 10000 steps and 0.19718408584594727 seconds
Steps per second = 50714.031799770266
Function evals per second = 101428.06359954053
Improvements/step = 0.2059

Mean value (in population) per position:1x2 Array{Float64,2}:
 -0.0244031  0.210612

Std dev (in population) per position:1x2 Array{Float64,2}:
 1.30379e-10  1.30423e-10

Best candidate found: 1x2 Array{Float64,2}:
 -0.0244031  0.210612

Fitness: -3.306868647475241

elapsed time: 0.200417369 seconds (75077204 bytes allocated)
(
1x2 Array{Float64,2}:
 -0.0244031  0.210612,

-3.306868647475241)

Cheers,

Robert