I have a problem whose answer I am having trouble proving.
Let Rmax, d, T, mu > 0 with T*Rmax > d. Find the function R that
maximizes
I(R) = \int_0^T \int_0^T R(t)R(t') \exp(-\mu|t-t'|) dt' dt subject to
the constraints
0 \leq R(t) \leq Rmax for all t in [0,T] and \int_0^T R(t) dt = d.
I know the answer, and it is not unique: R is Rmax times the
characteristic function of an interval [s, s+d/Rmax] for some
0 \leq s \leq T - d/Rmax.
I can prove that IF R is feasible and differs from both 0 and Rmax on
a set of positive measure, THEN R is not a maximizer. Consequently, IF
a maximizer exists, then it equals 0 or Rmax almost everywhere. But it
is not easy to prove that a maximizer exists.
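A quick numerical sanity check (not a proof): the short Python script
below discretizes I(R) with some arbitrary parameter values satisfying
T*Rmax > d and compares the boxcar profile against a few other feasible
profiles with the same integral d. The boxcar should come out ahead,
and the two boxcar placements should roughly tie, consistent with the
claimed non-uniqueness.

import numpy as np

T, Rmax, d, mu = 2.0, 1.5, 1.0, 0.7        # arbitrary, with T*Rmax > d
n = 2000
dt = T / n
t = (np.arange(n) + 0.5) * dt              # midpoint grid on [0, T]
K = np.exp(-mu * np.abs(t[:, None] - t[None, :]))   # kernel exp(-mu|t-t'|)

def I(R):
    # Discretized I(R) = sum_{i,j} R_i R_j exp(-mu|t_i - t_j|) dt^2
    return R @ K @ R * dt * dt

def boxcar(s):
    # Rmax on [s, s + d/Rmax], zero elsewhere; integral is approximately d
    return np.where((t >= s) & (t <= s + d / Rmax), Rmax, 0.0)

candidates = {
    "boxcar at left end":      boxcar(0.0),
    "boxcar centered":         boxcar((T - d / Rmax) / 2),
    "constant d/T":            np.full(n, d / T),
    "mass split at both ends": np.where(
        (t <= 0.5 * d / Rmax) | (t >= T - 0.5 * d / Rmax), Rmax, 0.0),
}
for name, R in candidates.items():
    print(f"{name:24s} integral = {R.sum() * dt:.3f}   I(R) = {I(R):.5f}")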
Greg Spradlin
Embry-Riddle University
You can formulate the problem as a standard optimal control problem,
then try applying methods such as the Pontryagin Maximum principle,
etc. Writing 'a' instead of '\mu' we have I(R) = int_{t=0..T} R(t)
{exp(-a*t)*S1(t) + exp(a*t)*S2(t)} dt, where S1(t) = int_{t'=0..t}
exp(a*t')*R(t') dt' and S2(t) = int_{t'=t..T} exp(-a*t')*R(t') dt'. We
can regard this as an optimal control problem with states S1, S2 and
control variable R: max_{R} I(S1,S2,R)= int_{t=0..T} {exp(-a*t)*S1(t)+
exp(a*t)*S2(t)}*R(t) dt, subject to dS1/dt = exp(a*t)*R(t), dS2/dt = -
exp(-a*t)*R(t), S1(0)=0, S2(T)=0, 0 <= R(t) <= Rmax, and the
constraint int_{t=0..T} R(t) dt = d (which can be carried along as a
third state S3 with dS3/dt = R(t), S3(0)=0, S3(T)=d). The
Hamiltonian will be linear in the control variable R, so the optimal
control (if it exists) will be of "bang-bang" type (except in
intervals of singular control, if any). That is, barring singular
control intervals we should have R(t) = 0 or Rmax for all t.
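To make that concrete, here is a sketch of the optimality conditions
(the adjoint variables p1, p2 for S1, S2, and a constant multiplier c
for the constraint int_{t=0..T} R(t) dt = d, are my notation). The
Hamiltonian is

H = {exp(-a*t)*S1 + exp(a*t)*S2}*R + p1*exp(a*t)*R - p2*exp(-a*t)*R + c*R
  = {exp(-a*t)*(S1 - p2) + exp(a*t)*(S2 + p1) + c}*R,

with adjoint equations dp1/dt = -dH/dS1 = -exp(-a*t)*R and
dp2/dt = -dH/dS2 = -exp(a*t)*R, and transversality p1(T) = 0 and
p2(0) = 0 (since S1(T) and S2(0) are free). Because H is linear in R,
pointwise maximization gives R(t) = Rmax where the switching function

sigma(t) = exp(-a*t)*(S1(t) - p2(t)) + exp(a*t)*(S2(t) + p1(t)) + c

is positive, R(t) = 0 where it is negative, and a singular arc only
where sigma vanishes on an interval. (As a consistency check,
integrating the adjoint equations with these end conditions gives
p1 = S2 and p2 = -S1, so sigma(t) = 2*{exp(-a*t)*S1(t) +
exp(a*t)*S2(t)} + c, which is just the first variation of I(R) plus
the multiplier c.)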
Consulting a book on optimal control should point the way towards
getting an optimal solution. (A caution: in general, the Pontryagin
principle gives _necessary_ conditions for an optimum; it becomes
sufficient only under additional assumptions, such as suitable
convexity. When a sufficiency theorem does apply, exhibiting a control
that satisfies all the optimality conditions proves that an optimum
exists. If only the necessary conditions are available and you do not
know whether an optimum exists, a control satisfying them is merely a
candidate.) Further to the issue of existence: the objective and the
dynamical equations are linear in the control R, and the Hamiltonian
involves the states only through the products R*S1 and R*S2 with
positive coefficients, so you might be able to find existence theorems
that apply.
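Incidentally, the S1/S2 rewriting of I(R) above is easy to sanity-check
numerically; in the snippet below, the parameter values, the random
test profile and the variable names are arbitrary choices of mine, and
the two forms should agree up to discretization error.

import numpy as np

T, Rmax, a = 2.0, 1.5, 0.7            # arbitrary illustrative values
n = 4000
dt = T / n
t = (np.arange(n) + 0.5) * dt         # midpoint grid on [0, T]
rng = np.random.default_rng(0)
R = rng.uniform(0.0, Rmax, n)         # any profile with 0 <= R <= Rmax

# Original double-integral form of I(R)
K = np.exp(-a * np.abs(t[:, None] - t[None, :]))
I_double = R @ K @ R * dt * dt

# Single-integral form using the running integrals S1 and S2
S1 = np.cumsum(np.exp(a * t) * R) * dt                 # int_{0..t} exp(a*t')*R dt'
S2 = np.cumsum((np.exp(-a * t) * R)[::-1])[::-1] * dt  # int_{t..T} exp(-a*t')*R dt'
I_single = np.sum(R * (np.exp(-a * t) * S1 + np.exp(a * t) * S2)) * dt

# The two values agree up to O(dt) error (the diagonal t' = t is
# counted slightly differently in the two discretizations).
print(I_double, I_single)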
A useful on-line summary of optimal control is in
http://www.scholarpedia.org/article/Optimal_control .
R.G. Vickson
A better reference is the optimal control lecture notes at
http://math.berkeley.edu/~evans/control.course.pdf . Chapter 2,
especially, deals with some existence theorems.
R.G. Vickson
Thanks for the advice.
Greg