
parallel NMinimize[]


robert

Sep 8, 2004, 7:04:10 AM
Dear all,

I am trying to use NMinimize[f[x], x] from a Windows-based Mathematica to minimize a function f[x] which evaluates via Run[] and rsh remote shell commands on a Linux cluster. This works fine so far.

Now I would like to exploit the parallel capabilities of the Linux cluster.

Usually NMinimize[] calls f[x] for a single value of x at each step. As I understand it, the nature of NMinimize[f[x], x, Method -> "DifferentialEvolution"] causes a whole population of x values to be evaluated before the minimization progresses. In order to profit from the cluster I would need NMinimize[] to evaluate f[] for the whole population in a single call, e.g. f[{x1, x2, x3, x4, ..., xn}] with n being the size of the actual population, so that NMinimize[] would call the function to be minimized with a whole list of x values instead of just a single value. This way the cluster could evaluate all the (time-expensive) function calls in parallel and return a list of results {f[x1], f[x2], f[x3], ..., f[xn]} to NMinimize[].
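The batched interface described above might look like the following sketch; batchF and RunOnCluster are hypothetical names, with RunOnCluster standing in for the Run[]/rsh machinery:

```mathematica
(* Hypothetical batched objective: one remote call per generation
   instead of one call per point.  RunOnCluster is a placeholder for
   the Run[]/rsh dispatch and is assumed to return a list of f values,
   one per input point, in the same order. *)
batchF[points_List] := RunOnCluster[points]

(* Per generation, NMinimize would then need to do the equivalent of *)
(* vals = batchF[population]   instead of   vals = Map[f, population] *)
```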

Is there any chance to achieve this?

Does anyone know what the following "DifferentialEvolution" options are used for?

"InitialPoints" set of initial points
"SearchPoints" size of the population used for evolution

Note: I am not asking about any parallel action of Mathematica itself.

Thanks, robert

Michal Kvasnicka

Sep 9, 2004, 5:53:54 AM
From my point of view there is no simple, straightforward way to use the function NMinimize directly on a cluster of PCs.

There is only one way to solve your optimization problem on the grid: you must completely rewrite the differential evolution algorithm as a distributed parallel version. The Parallel Toolkit may be a good starting point for you. This task is not so difficult, because differential evolution is a very simple optimization algorithm. Moreover, differential evolution is very well suited to parallelization via distributed evaluation of the objective function f[x] within each generation.
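A bare-bones sketch of that idea, in modern Wolfram Language notation (this is an illustration of the DE/rand/1/bin scheme, not Wolfram's NMinimize implementation; deMinimize is a made-up name). The two Map calls over the population are the only expensive steps, and are exactly what the Parallel Toolkit would let you swap for ParallelMap:

```mathematica
(* Bare-bones differential evolution (DE/rand/1/bin); a sketch only. *)
deMinimize[f_, dim_, {lo_, hi_}, np_ : 20, gens_ : 100] :=
  Module[{fs = 0.6, cr = 0.5, pop, vals, mutants, trials, tvals, best},
    pop  = RandomReal[{lo, hi}, {np, dim}];
    vals = Map[f, pop];                       (* <- ParallelMap here *)
    Do[
      (* mutate: for each member, combine three other random members *)
      mutants = Table[
        With[{r = RandomSample[Delete[Range[np], i], 3]},
          pop[[r[[1]]]] + fs (pop[[r[[2]]]] - pop[[r[[3]]]])], {i, np}];
      (* crossover: mix mutant and parent coordinates *)
      trials = Table[
        If[RandomReal[] < cr, mutants[[i, j]], pop[[i, j]]],
        {i, np}, {j, dim}];
      tvals = Map[f, trials];                 (* <- ParallelMap here *)
      (* selection: keep whichever of parent/trial is better *)
      Do[If[tvals[[i]] < vals[[i]],
          pop[[i]] = trials[[i]]; vals[[i]] = tvals[[i]]], {i, np}],
      {gens}];
    best = First[Ordering[vals, 1]];
    {vals[[best]], pop[[best]]}]
```

A call such as deMinimize[Function[x, Total[x^2]], 5, {-10, 10}] should converge toward the origin; since each generation's trial points are fixed before any of them is evaluated, the tvals step parallelizes with no change to the algorithm's logic.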

For detailed information regarding differential evolution you can visit:
http://www.icsi.berkeley.edu/~storn/code.html

Hope this helps,
Michal Kvasnicka

P.S. On the other hand, try to contact Dan Lichtblau from Wolfram. Reformulating the NMinimize function as a distributed parallel version via the Parallel Toolkit would be a really challenging task.

"robert" <lu...@cheerful.com> wrote in message
news:chmova$a4j$1...@smc.vnet.net...

Mark Westwood [EPCC]

Sep 10, 2004, 4:23:50 AM
Robert,

Forgive me if I've misunderstood your problem or differential evolution,
but here's a suggestion.

You want to search for the global minimum of a function on some domain?

Define a form of the NMinimize function you wish to evaluate which is
parameterised in terms of p, where p = 1, 2, 3, ... is the number of the
processor on which the function will run.

For example, suppose your whole domain is [0,1) (yes, I know, a very simple
example, but work with me on this!). If you have P processors and let p
be the identifier of a processor, then your function might be:

NMinimize[{f[x], (p - 1)/P <= x < p/P}, {x},
  Method -> "DifferentialEvolution"]

Use the Parallel Toolkit to send this function to each of the worker
processors on your cluster. Each will then evaluate the function over
1/P of the whole domain. Bring the answers together and choose the
global minimum from them. You'll have to figure out for yourself how each
processor knows its own identity; I don't have the Parallel Toolkit.

I guess that this approach might break down at the boundaries between
sub-domains, so you might want to send overlapping domains out to avoid
that problem.
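Run serially, the scheme for that [0,1) example might look like this sketch (the objective f here is an arbitrary test function, and eps provides the overlap just suggested; each NMinimize call in the Table is the job one would ship to a separate worker):

```mathematica
(* Sketch of the domain decomposition, run serially here; each
   NMinimize call is what one would send to a separate processor.
   eps widens each subdomain to give overlap at the boundaries. *)
With[{f = Function[x, Sin[10 x] + x^2], P = 4, eps = 0.05},
  First[SortBy[
    Table[
      NMinimize[
        {f[x], Max[0., (p - 1)/P - eps] <= x <= Min[1., p/P + eps]},
        {x}, Method -> "DifferentialEvolution"],
      {p, P}],
    First]]]
```

Each NMinimize result is a pair {fmin, {x -> xmin}}, so sorting by the first element and taking the first entry selects the best subdomain's minimum as the global answer.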

I don't know what InitialPoints and SearchPoints are, but from the
documentation I think that they are:

InitialPoints - a guess of where to start searching; you would want to
ensure that this lies within the subdomain passed to each processor.

SearchPoints - how many points within the domain to examine; more points
mean more time, more accuracy, and less chance of returning a local minimum.

Hope this is of some use

Regards
Mark Westwood
Edinburgh Parallel Computing Centre

Michal Kvasnicka

Sep 11, 2004, 6:52:27 AM
Mark,

your suggestion is simple and easy to test, but it is really a "brute-force"
approach. Its effectiveness decreases rapidly for high-dimensional optimization
problems, due to the large amount of subdomain overlap required.

Michal
"Mark Westwood [EPCC]" <ma...@epcc.ed.ac.uk> wrote in message
news:chroam$50c$1...@smc.vnet.net...

robert

Sep 11, 2004, 6:59:35 AM
"Mark Westwood [EPCC]" <ma...@epcc.ed.ac.uk> wrote in message news:<chroam$50c$1...@smc.vnet.net>...


Hi Mark, hi Michal,

Thank you for your contributions. Mark, yes, that could be a way...

But first of all, I don't own a parallel grid version of Mathematica.
Second, it's not the NMinimize action that's time-consuming; it's the
evaluation of the objective function that is extremely time-consuming,
consisting of FEM calculations by a Linux-based FEM package. So the
natural way would be to try to calculate the function values in parallel
on a Linux cluster (which I have access to).

Meanwhile I have got some hints from Dan Lichtblau; perhaps it's possible
for me to modify the NMinimize behaviour.

Regards, robert

Michal Kvasnicka

Sep 13, 2004, 2:24:54 AM
Robert,
if your problem has an extremely time-consuming objective function (like FEM),
you must focus on completely different methods. The standard differential
evolution (via NMinimize) is not suitable for this kind of optimization
problem.

A very powerful method is SNOBFIT, for example. For more details see:
http://www.mat.univie.ac.at/~neum/software/snobfit/

At this time SNOBFIT is not implemented in Mathematica, but I think that
this task is not very difficult. On the other hand, why not make a
Mathematica package for global optimization of extremely time-consuming
objective functions? Think about it!

Regards,
Michal
