ANN: JuMP 0.14 released


Miles Lubin

Aug 8, 2016, 10:56:25 AM
to julia-opt, julia...@googlegroups.com
The JuMP team is happy to announce the release of JuMP 0.14. The release should clear most, if not all, deprecation warnings on Julia 0.5 and is compatible with ForwardDiff 0.2. The full release notes are here, and I'd just like to highlight a few points:

- All JuMP users read this: As previously announced, we will be deprecating the sum{}, prod{}, and norm{} syntax in favor of Julia 0.5's new syntax for generator expressions, e.g., sum(x[i] for i in 1:N) instead of sum{x[i], i in 1:N}. In this release, the new syntax is available for testing if you are using Julia 0.5. No deprecation warnings are printed yet. In JuMP 0.15, which will drop support for Julia 0.4, we will begin printing deprecation warnings for the old syntax.
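
A quick before/after sketch, assuming a model m, a data vector c, and variables x indexed by 1:N:

# old curly-brace syntax (deprecation warnings begin in JuMP 0.15):
@objective(m, Min, sum{c[i]*x[i], i in 1:N})
# new generator syntax (requires Julia 0.5):
@objective(m, Min, sum(c[i]*x[i] for i in 1:N))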

- Advanced JuMP users read this: We have introduced a new syntax for "anonymous" objects: when declaring an optimization variable, constraint, expression, or parameter, you may omit the name of the object within the macro. The macro will instead return the object itself, which you can assign to a variable if you'd like. Example:

# instead of @variable(m, l[i] <= x[i=1:N] <= u[i]):
x = @variable(m, [i=1:N], lowerbound=l[i], upperbound=u[i]) 

This syntax should be comfortable for advanced use cases of JuMP (e.g., within a library) and should obviate some confusion about JuMP's variable scoping rules.
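
The same pattern should work for constraints and expressions; a small sketch, reusing m, x, and u from above:

# anonymous constraints: the macro returns the constraint references
cons = @constraint(m, [i=1:N], x[i] <= u[i])
# anonymous expression: the macro returns the expression itself
expr = @expression(m, 2x[1] + x[2])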

- We also have a new input form for nonlinear expressions that has the potential to extend JuMP's scope as an AD tool. Previously, all nonlinear expressions had to be input via macros, which isn't convenient when the expression is generated programmatically. You can now set nonlinear objectives and add nonlinear constraints by providing a Julia Expr object directly, with JuMP variables spliced in. This means you can now generate expressions via symbolic manipulation and add them directly to a JuMP model. See the example in the documentation.
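
For instance, a minimal sketch of the raw-expression interface described in the documentation, assuming a model m with scalar variables x and y:

# build an expression programmatically; JuMP variables are spliced in with $
ex = :( $(x)^2 + exp($(y)) )
JuMP.setNLobjective(m, :Min, ex)
JuMP.addNLconstraint(m, :( $(x) + $(y) <= 1 ))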

Finally, I'd like to thank Joaquim Dias Garcia, Oscar Dowson, Mehdi Madani, and Jarrett Revels for contributions to this release which are cited in the release notes.

Miles, Iain, and Joey


Uwe Fechner

Aug 8, 2016, 1:10:50 PM
to julia-users, juli...@googlegroups.com
Hello,
I updated, and now I get the following error:
julia> include("Plotting.jl")
INFO: Recompiling stale cache file /home/ufechner/.julia/lib/v0.4/JuMP.ji for module JuMP.
INFO: Recompiling stale cache file /home/ufechner/.julia/lib/v0.4/ReverseDiffSparse.ji for module ReverseDiffSparse.
INFO: Recompiling stale cache file /home/ufechner/.julia/lib/v0.4/ForwardDiff.ji for module ForwardDiff.
INFO: Recompiling stale cache file /home/ufechner/.julia/lib/v0.4/HDF5.ji for module HDF5.
ERROR: LoadError: LoadError: LoadError: LoadError: UndefVarError: GradientNumber not defined
while loading /home/ufechner/00PythonSoftware/FastSim/src/Projects.jl, in expression starting on line 433
while loading /home/ufechner/00PythonSoftware/FastSim/src/Model.jl, in expression starting on line 19
while loading /home/ufechner/00PythonSoftware/FastSim/src/Optimizer.jl, in expression starting on line 13
while loading /home/ufechner/00PythonSoftware/FastSim/src/Plotting.jl, in expression starting on line 22


The code that fails is the following:
"""
Helper function to convert the value of an optimization result, but also
handle plain real values.
"""
my_value(value::ForwardDiff.GradientNumber) = ForwardDiff.value(value)
my_value(value::Real) = value
my_value(val_vector::Vector) = [my_value(value) for value in val_vector]


Any idea how to fix this?

Uwe

Miles Lubin

Aug 8, 2016, 1:14:45 PM
to julia-opt, julia...@googlegroups.com
ForwardDiff 0.2 introduced some breaking changes; you will need to update your code (GradientNumber is no longer defined). See the upgrading guide.

Uwe Fechner

Aug 8, 2016, 1:27:38 PM
to julia-opt, julia...@googlegroups.com
Well, the upgrading guide does not mention any replacement for GradientNumber.

Any idea?

Uwe

Uwe Fechner

Aug 8, 2016, 5:23:26 PM
to julia-users, juli...@googlegroups.com
Ok, I replaced all occurrences of GradientNumber in my code with Dual (not DualNumber!),
and now the code works again.
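
For reference, the working helper after the rename:

# ForwardDiff 0.2: Dual replaces GradientNumber; ForwardDiff.value extracts the real part
my_value(value::ForwardDiff.Dual) = ForwardDiff.value(value)
my_value(value::Real) = value
my_value(val_vector::Vector) = [my_value(value) for value in val_vector]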

But this is NOT an implementation detail. It could be an implementation detail if a function for converting
a Dual or GradientNumber to a real number were part of the package, but there is none.

Therefore it would be nice if this information could be added to the upgrading guide.

Best regards:

Uwe

On Monday, August 8, 2016 at 10:18:10 PM UTC+2, Kristoffer Carlsson wrote:
It is more of an implementation detail but there is DualNumber now.