Hello,
I want to solve PDE-constrained optimization problems in deal.II, but first I want to make sure that I can correctly compute the derivative of an objective function via the adjoint method.
Keeping things simple, my objective function is:
J(u,z) = 1/2 \| u - u_D \|_2^2 + \alpha/2 \| z \|_2^2
where u_D = 1 and the state variable u solves the Poisson equation
-\Delta u = z on \Omega = [unit disk in R^2]
u = 0 on \partial\Omega
driven by the control variable z.
Given a direction v, the L2 inner product of \nabla J at a given control z with v can be calculated as
<\nabla J(z), v> = <p + \alpha z, v>   (*)
where the adjoint variable p solves
-\Delta p = u - u_D on \Omega
p = 0 on \partial\Omega
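For completeness, here is how I derive (*). Writing S = (-\Delta)^{-1} for the solution operator of the state equation (self-adjoint on L2 under the homogeneous Dirichlet condition), we have u(z) = S z and
<J'(z), v> = <u - u_D, S v> + \alpha <z, v>
           = <S(u - u_D), v> + \alpha <z, v>
           = <p + \alpha z, v>,
since p := S(u - u_D) is precisely the solution of the adjoint problem above.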
Theoretically, (*) can be approximated arbitrarily well by the forward difference quotient
(J(u_\epsilon, z + \epsilon v) - J(u, z))/\epsilon   (**)
where u_\epsilon denotes the state corresponding to the perturbed control z + \epsilon v. Since the truncation error of (**) is O(\epsilon) while round-off contributes roughly O(\epsilon_mach/\epsilon), one expects the error between (*) and (**) to decrease until \epsilon ~ 1e-8 in double precision.
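To convince myself of this outside of deal.II, I wrote the following minimal, self-contained 1D analogue of the same check (plain C++, 3-point finite differences, tridiagonal direct solve; all names here are my own and not from the attached MWE). If the reasoning above is right, its error should keep decreasing down to \epsilon ~ 1e-8:

#include <cmath>
#include <cstdio>
#include <vector>

// Solve (1/h^2) * tridiag(-1, 2, -1) x = b, i.e. the standard 3-point
// discretization of -x'' with homogeneous Dirichlet data, via the
// Thomas algorithm.
std::vector<double> solve_poisson(const std::vector<double> &b, const double h)
{
  const int n = static_cast<int>(b.size());
  std::vector<double> c(n), d(n), x(n);
  const double diag = 2.0 / (h * h), off = -1.0 / (h * h);
  c[0] = off / diag;
  d[0] = b[0] / diag;
  for (int i = 1; i < n; ++i)
    {
      const double m = diag - off * c[i - 1];
      c[i] = off / m;
      d[i] = (b[i] - off * d[i - 1]) / m;
    }
  x[n - 1] = d[n - 1];
  for (int i = n - 2; i >= 0; --i)
    x[i] = d[i] - c[i] * x[i + 1];
  return x;
}

// Lumped approximation of the L2 inner product on the grid.
double dot(const std::vector<double> &a, const std::vector<double> &b, const double h)
{
  double s = 0.0;
  for (std::size_t i = 0; i < a.size(); ++i)
    s += a[i] * b[i];
  return h * s;
}

int main()
{
  const int    n     = 99;            // number of interior grid points
  const double h     = 1.0 / (n + 1);
  const double alpha = 1e-2;
  const double u_D   = 1.0;
  const double pi    = std::acos(-1.0);

  std::vector<double> z(n, 0.5), v(n);
  for (int i = 0; i < n; ++i)
    v[i] = std::sin(pi * (i + 1) * h);  // an arbitrary test direction

  // J(z) = 1/2 ||u - u_D||^2 + alpha/2 ||z||^2 with -u'' = z, u(0)=u(1)=0.
  const auto J = [&](const std::vector<double> &zz) {
    std::vector<double> u = solve_poisson(zz, h);
    for (int i = 0; i < n; ++i)
      u[i] -= u_D;
    return 0.5 * dot(u, u, h) + 0.5 * alpha * dot(zz, zz, h);
  };

  // Adjoint-based gradient: -p'' = u - u_D, p(0)=p(1)=0, grad J = p + alpha z.
  const std::vector<double> u = solve_poisson(z, h);
  std::vector<double> rhs(n);
  for (int i = 0; i < n; ++i)
    rhs[i] = u[i] - u_D;
  const std::vector<double> p = solve_poisson(rhs, h);
  std::vector<double> g(n);
  for (int i = 0; i < n; ++i)
    g[i] = p[i] + alpha * z[i];
  const double exact = dot(g, v, h);  // <grad J(z), v>, i.e. (*)

  // Compare against the forward difference quotient (**).
  const double J0 = J(z);
  for (double eps = 1e-1; eps > 1e-13; eps /= 10.0)
    {
      std::vector<double> zp = z;
      for (int i = 0; i < n; ++i)
        zp[i] += eps * v[i];
      const double fd = (J(zp) - J0) / eps;
      std::printf("eps = %8.1e   |(**) - (*)| = %8.2e\n", eps, std::fabs(fd - exact));
    }
  return 0;
}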
In implementing this finite difference check in deal.II, however, I only observe a decrease in the error between (*) and (**) down to \epsilon \approx 10^{-3}. After that, the error increases again. The details can be seen in the attached screenshot.
For reference, I am using a (sparse) direct solver for both the state and adjoint equations; the behavior is essentially the same if a CG solver is used instead. I am using Q2 finite elements, but I observe the same behavior with Q3 as well.
Additionally, it is worth mentioning that in a sanity check where the objective function is modified to simply be:
J(z) = \alpha/2 \| z \|_2^2
so that the derivative is simply
J'(z) = \alpha z
(i.e. no adjoint calculation is needed), the convergence behavior is what one would expect: the error decreases until \epsilon ~ 1e-7. So the issue seems to lie in the adjoint computation.
Can anyone point out a reason why I should not expect the standard behavior here? Or perhaps there is an error in my adjoint calculation. I have tried to produce a lightweight minimal working example, which is attached.
Thank you for the assistance--
Best regards,
--Sean