I am writing Fortran code to minimize an objective function with the IMSL
subroutine
DUMING(FCN,GRAD,N,XGUESS,XSCALE,FSCALE,IPARAM,RPARAM,X,FVALUE).
FCN: user-supplied subroutine to evaluate the function to be minimized
GRAD: user-supplied subroutine to compute the gradient
...
X - vector of length N containing the computed solution
FVALUE - scalar containing the value of the function at the computed solution
Here is my problem. Since I am doing Monte Carlo simulations, say, 100
times, there is a chance that my FCN and/or GRAD will compute log(x1)
with x1 calculated to be 0 (actually a very small number, say
< 1.0E-16) during the optimization procedure. When this happens, the
program crashes. What I would like instead is for the program to keep
running to completion, excluding only the failed simulation, so that I
end up with 100-1 simulations.
What would be a satisfying trick to do this? Thank you for any
suggestions. Wang
Precisely, without knowing more it's difficult to say, but two basic
choices come to mind--
Most "satisfying" might be to find a way to renormalize the variable x1
to control the range and then use an inverse transformation to scale
the results. Owing to the log() function, this may not be
straightforward.
It's not clear to me at what point the problem arises--in the
minimization on a per-trial basis, or in the sampling. If in the
sampling, some sort of stratified sampling may be possible; if in the
minimization, perhaps there's a reasonable approximation that can be
made inside the function, in order to apply a magnitude test prior to
the evaluation and make some fixup--what that fixup should be, I don't
know.
Perhaps, as you say, figure out a way to simply abort that
trial/minimization and go on to a new trial. How to do that is again
dependent on the actual structure of the simulation.
Michael
But R is too slow for my simulation. I suspect there is something
similar to handle errors like this.
Thanks.
zwa...@hotmail.com wrote:
> Here is my problem. Since I am doing Monte Carlo simulations, say, 100
> times, there is a chance that my FCN and/or GRAD will compute log(x1)
> with x1 calculated to be 0 (actually a very small number, say
> < 1.0E-16) during the optimization procedure. When this happens, the
> program crashes. What I would like instead is for the program to keep
> running to completion, excluding only the failed simulation, so that I
> end up with 100-1 simulations.
>
> What would be a satisfying trick to do this? Thank you for any
> suggestions.
Fortran 2003 offers a nice solution (or Fortran 95 with TR 15580:1999),
but not many compilers support it: IEEE floating point exception
handling.
(Of the 5 compilers I have, only NAG f95 5.1 supports it, currently.)
Example:
----------------------------------------------------------------------------------------------------
program ieee
use,intrinsic :: ieee_arithmetic
implicit none
if(.not. IEEE_support_standard(1.0d0) &
.and. IEEE_support_halting(IEEE_INVALID) &
.and. IEEE_support_halting(IEEE_DIVIDE_BY_ZERO)) &
stop 'No IEEE support available!'
call ieee_set_halting_mode([IEEE_INVALID, IEEE_DIVIDE_BY_ZERO],&
.false.)
call printLog(1.0d0)
call printLog(0.0d0)
call printLog(-1.0d0)
call printLog(2.0d0)
contains
subroutine printLog(x)
double precision, intent(in) :: x
double precision :: y
y = log(x)
if(.not. ieee_is_finite(y)) then
write(*,*) 'Ignoring calculation for x = ', x,'; result is: ',y
else
print *, 'Result is: log(',x,') = ',y
end if
end subroutine printLog
end program ieee
----------------------------------------------------------------------------------------------------
Using the NAG f95 compiler, running ./a.out gives:
Result is: log( 1.0000000000000000 ) = 0.0000000000000000
Ignoring calculation for x = 0.0000000000000000 ; result is: -Infinity
Ignoring calculation for x = -1.0000000000000000 ; result is: NaN
Result is: log( 2.0000000000000000 ) = 0.6931471805599453
Warning: Floating invalid operand occurred
Warning: Floating divide by zero occurred
Tobias
Well, I don't know how satisfying it is, but simply making your
objective function "robust" by checking the arguments prior to calling
functions such as log() should help. Return a large number for the
objective function value when an out-of-bounds input is requested. That
should keep the optimizer away from problem areas. I find that many
"fancy" optimizers sometimes take "branches to left field" -- calls to
the objective function with grotesquely wild values, perhaps generated
when a point is approached where the gradient in at least one parameter
is almost zero. Because of this, I have generally gone to a Nelder-Mead
simplex sort of optimizer. It never takes a "branch to left field".
For bad guesses it may be a bit slower, but by the time a complicated
objective function is made "left field proof", I save time with the
simplex optimizer. HTH.
--
Bob Walton
Email: http://bwalton.com/cgi-bin/emailbob.pl
Tobias wrote:
> if(.not. IEEE_support_standard(1.0d0) &
> .and. IEEE_support_halting(IEEE_INVALID) &
> .and. IEEE_support_halting(IEEE_DIVIDE_BY_ZERO)) &
> stop 'No IEEE support available!'
This should be, of course:
if(.not. IEEE_support_standard(1.0d0) &
.or. .not.IEEE_support_halting(IEEE_INVALID) &
.or. .not.IEEE_support_halting(IEEE_DIVIDE_BY_ZERO)) &
stop 'No IEEE support available!'
Tobias
IF (abs(x1) .GE. epsilon(x1)) THEN
   ! insert here code for the simulation as normal
ELSE
   ! insert here code for what to do if excluding this simulation
END IF
If you're using f77, just replace epsilon(x1) by 1e-15 or whatever suits
you and your compiler.
-- John Harper, School of Mathematics, Statistics and Computer Science,
Victoria University, PO Box 600, Wellington 6140, New Zealand
e-mail john....@vuw.ac.nz phone (+64)(4)463 5341 fax (+64)(4)463 5045