beginner question


Russ

unread,
Apr 24, 2019, 10:25:25 AM4/24/19
to hyperopt-discuss
I am new to hyperopt.  To me, it appears to be trying random values rather than choosing values so as to converge on an optimal, loss-driven result.  It looks as if it continues searching even after stumbling on the right solution.  Is this how TPE is supposed to work?  Consider this code, and the simple function x**2:

from hyperopt import fmin, tpe, hp

MINX, MINL = 0.0, 10000.0  # track the best x and loss seen so far

def minfunc(x):
    global MINX, MINL
    loss = x ** 2.
    nms = ""
    if loss < MINL:
        nms = "<<<MIN"  # mark a new best point
        MINL = loss
        MINX = x
    print("x, loss = %8.4f, %8.3f, %s" % (x, loss, nms))
    return loss

best = fmin(fn=minfunc,
    space=hp.uniform('x', -10, 10),
    algo=tpe.suggest,
    max_evals=100)

If you plot the specific x values being tried, the sequence appears random.  Further, the new minimums are spread out across the run.  In a gradient descent algorithm, the minimums would occur one right after the other until the algorithm stopped.  (That is my second problem with hyperopt - it appears to use all max_evals instead of stopping when it finds a minimum.)

Am I doing something wrong, or is this just how TPE is supposed to work (e.g., deliberately trying other values to avoid the local minima problem)?

Thanks,
Russ

James Bergstra

unread,
Apr 24, 2019, 2:15:35 PM4/24/19
to Russ, hyperopt-discuss
Indeed, hyperopt is for *global* optimization, so it will keep trying to make sure it has found the best point until it exhausts the max_evals budget.

Now that you mention it, the parameter should maybe have been called `n_evals` instead of `max_evals`.

TPE should choose points more densely around the minimum of your function than e.g. uniform random guessing would, but yes, they will still be somewhat spread out.


Russ Fink

unread,
Apr 24, 2019, 4:53:43 PM4/24/19
to James Bergstra, hyperopt-discuss
[attached image: plot of the sampled x values over the run]

Hmm, yes, I see it now.  This is a run over 2000+ iterations attempting to find a zero of x**2.  Before, when I only had 100 iterations, I could not see such a pattern.

Thanks for your answer!
Russ
