Optimal control using a different cost/payoff function


simon...@gmail.com

Nov 13, 2015, 9:27:48 AM
to QuTiP: Quantum Toolbox in Python
I am working with strongly correlated electrons, and I have been using qutip for time evolution etc. I am interested in the optimal control package you've written, and I have two questions:

1. Is it possible to maximize/minimize a function other than the fidelity function?

2. Is it possible to let the control amplitudes be a function of some specific parameters? That is, could I, for example, find the best A to maximize the fidelity if A*cos(t) is my control field?

If not, do you reckon it would require a lot of surgery in the code to include these features?

Alex Pitchford

Nov 13, 2015, 9:57:27 AM
to QuTiP: Quantum Toolbox in Python
1. It is possible to create a custom fidelity computer (via sub-classing). A few of my colleagues have done this successfully. I have not yet produced a real notebook example of this, but will do so when some of the work has been published.
I have, however, produced a template for this in my personal repository; look for the file '*_custom_fidelity.py'.
I cannot guarantee that this is up to date with the latest GitHub version of qutip. I have made a lot of changes recently, and hence I am not quite in sync.
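
For orientation only, here is a rough sketch of the shape of such a subclass. The base class and the get_fid_err / get_fid_err_gradient method names come from the current fidcomp module, but the 'full_evo' attribute I use to get at the final evolution is an assumption and may well differ, particularly after the update I mention below:

import numpy as np
from qutip.control.fidcomp import FidelityComputer

class FidCompCustom(FidelityComputer):
    # Fidelity error based on the expectation value of a target observable
    # (e.g. some magnetisation-ordering operator). Set self.target_op
    # (a dense ndarray) on the instance before running the optimisation.

    def get_fid_err(self):
        dyn = self.parent            # the Dynamics object
        psi = dyn.full_evo           # ASSUMED name for the final evolved state
        expect = np.real(np.vdot(psi, self.target_op.dot(psi)))
        return 1.0 - float(expect)   # error = 1 - <target_op>

    def get_fid_err_gradient(self):
        # An analytic gradient would use the propagator gradients held by the
        # dynamics object; as a first step it can be approximated numerically
        # (compare FidCompTraceDiffApprox).
        raise NotImplementedError("supply a gradient or use an approximate method")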

2. What you describe here is optimisation of an analytical pulse function. Much work has been done on this by our collaborators at Ulm. They call it the CRAB algorithm, and it is implemented in qutip.control. What we have implemented so far is the optimisation of pulses modelled by a Fourier series Sum_i {A_i*sin(w_i*t) + B_i*cos(w_i*t)}. Examples of these (see the '*_CRAB_*' notebooks) can be found in:
They have not been promoted to official examples, because the CRAB algorithm is currently only in the GitHub version. There are also links to the supporting papers in the notebooks.
Although this is not exactly what you asked for, it does mean that it would be very easy to implement what you want by subclassing PulseGenCrab.
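
Just to illustrate the basic idea of optimising an analytical pulse ansatz, independently of the qutip.control machinery: you can wrap any standard optimiser around a fidelity evaluated with one of the qutip solvers. A toy sketch for your A*cos(t) example, on a made-up two-level system (the Hamiltonians, target state and bounds are invented purely for illustration):

import numpy as np
from scipy.optimize import minimize_scalar
from qutip import sigmax, sigmaz, basis, sesolve

H0 = 0.5 * sigmaz()                 # drift (made up)
Hc = sigmax()                       # control operator (made up)
psi0 = basis(2, 0)                  # initial state
psi_targ = basis(2, 1)              # target state
tlist = np.linspace(0.0, 5.0, 200)  # evolution times

def infidelity(A):
    # Time-dependent Hamiltonian with control field A*cos(t)
    H = [H0, [Hc, lambda t, args: A * np.cos(t)]]
    psi_T = sesolve(H, psi0, tlist).states[-1]
    return 1.0 - abs(psi_targ.overlap(psi_T)) ** 2

res = minimize_scalar(infidelity, bounds=(0.0, 2.0), method='bounded')
print("best A:", res.x, "infidelity:", res.fun)

With more parameters (e.g. the CRAB Fourier coefficients), the same pattern works with scipy.optimize.minimize and Nelder-Mead, which is essentially what the CRAB implementation does.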

I will be very happy to continue to provide help and advice on this.

I will be making a major update later today, which may affect how you would write a custom fidelity, so I suggest you download the latest GitHub version after that.

simon...@gmail.com

Nov 17, 2015, 4:36:35 AM
to QuTiP: Quantum Toolbox in Python
Thanks a lot for your answer! I think I will manage to get it to work! 

I am interested in a fidelity function based on the magnetization ordering in the system, and I would like to go to as large a system as possible. What would be the largest feasible Hilbert space dimension, and which algorithm is cheapest numerically?

Alex Pitchford

Nov 17, 2015, 5:49:24 AM
to QuTiP: Quantum Toolbox in Python
Your plan does sound like the kind of thing that people use custom fidelities for.

Hilbert space limitations are always the issue. Often people manage to find some way to make other assumptions to avoid the 2^n scaling. 

Otherwise it is a case of what computer resources you have available.  The stats in the control modules should point you to where the processing cost is. Typically the bottleneck is exponentiating Hamiltonians, but not always.

By the way, I have still not merged my control modules update. Hopefully I will get a chance later. I will post here when I have done it.

simon...@gmail.com

Nov 17, 2015, 10:09:48 AM
to QuTiP: Quantum Toolbox in Python
OK, I see. I am quite new to optimal control, so I don't really know what is feasible. Ideally I would like to find the optimal control for a system which has a Hilbert space dimension of about 5000. I guess that matrix exponentiation will be a big bottleneck.

Would it be possible to subclass some propagator calculation class or in some other way calculate the dynamics? I have been doing a bit of dynamics and implemented my own version of a qutip dynamics solver which calculates exp(-i*H*dt) at every timestep within a Krylov subspace, and this method is quite quick even for very large systems.
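
For what it's worth, a rough sketch of the kind of propagation step I mean, here using scipy's expm_multiply to apply the exponential to the state without forming the full matrix exponential (my own code uses a Krylov subspace, but the idea is similar; the sparse Hamiltonian below is just a random placeholder):

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import expm_multiply

dim = 5000
dt = 0.01
H = sp.rand(dim, dim, density=1e-3, format='csr')  # placeholder Hamiltonian
H = 0.5 * (H + H.conj().T)                         # make it Hermitian
psi = np.zeros(dim, dtype=complex)
psi[0] = 1.0                                       # placeholder initial state

# One timestep: |psi(t+dt)> = exp(-i*H*dt) |psi(t)>, without building expm(H)
psi = expm_multiply(-1j * dt * H, psi)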

Do you think it would be possible to implement something like this, or is the full eigendecomposition of the Hamiltonian matrix needed for other parts of the algorithm?

Thank you a lot for your help and sorry for the bombardment of questions!

/Simon

Alex Pitchford

Nov 17, 2015, 11:20:36 AM
to QuTiP: Quantum Toolbox in Python
No worries. You can customise your propagator computer as well, meaning you could use your exponentiation function. However, the gradient with respect to the controls also needs to be calculated. For this we use either the eigendecomposition or the Frechet derivative. The Frechet derivative is more general, i.e. it does not assume unitary evolution. You could look at propcomp.PropCompFrechet to see how this is calculated. There does not seem to be a sparse equivalent in scipy. If there were, or if there were some other way of doing this efficiently for Hilbert spaces of that size, then that would be very interesting. Alternatively one can use approximate gradients; however, this is generally very slow.
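
For reference, the Frechet-derivative idea in isolation looks something like the following (dense only, with a made-up two-level drift and control); scipy.linalg.expm_frechet is the routine that, as far as I can see, has no sparse counterpart:

import numpy as np
from scipy.linalg import expm_frechet

H0 = np.diag([0.0, 1.0])                  # drift (made up)
Hc = np.array([[0.0, 1.0], [1.0, 0.0]])   # control operator (made up)
u, dt = 0.3, 0.1                          # control amplitude and timeslot duration

A = -1j * (H0 + u * Hc) * dt              # generator of the timeslot propagator
E = -1j * Hc * dt                         # dA/du, the direction of differentiation

# U = expm(A) is the timeslot propagator; dU is its derivative with respect
# to u, which is what feeds into the GRAPE gradient for this timeslot.
U, dU = expm_frechet(A, E)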

simon...@gmail.com

Nov 18, 2015, 10:42:39 AM
to QuTiP: Quantum Toolbox in Python
I will investigate further whether it is possible to calculate the derivative of the propagator with respect to the controls in the Krylov subspace as well. Do you have any rough estimate of how long it would take with the current version of the optimization package to calculate the optimal controls for one control field, 100 timeslots, and Hilbert space dimension 5000? Is it a calculation that would take a week, a month, or a million years on a modern PC?

Alex Pitchford

Nov 18, 2015, 11:21:46 AM
to QuTiP: Quantum Toolbox in Python
This would be beyond anything that we have tried here. However, I am aware of people doing control pulse optimisation for very large systems in the NMR field (a Matlab package called Spinach for example). There they have done much work on efficient Hamiltonian exponentiation. I am afraid that I cannot predict how long it would take.  I would think that it would be infeasible without doing some novel (for qutip.control) work on calculating gradients.

Alex Pitchford

Nov 24, 2015, 2:45:37 PM
to QuTiP: Quantum Toolbox in Python
I have now merged the changes that I mentioned above, so you would be safe to start working on a custom fidelity now.

simon...@gmail.com

Dec 14, 2015, 5:56:36 AM
to QuTiP: Quantum Toolbox in Python
Thank you! =)

Since we're mostly interested in larger systems, I implemented my own optimal control using the GRAPE algorithm, highly inspired by the qutip implementation. (The propagation uses Krylov subspaces and the gradient uses a second-order split-operator method.) I am able to reproduce the same results as the code in qutip, and it seems reasonably fast even for larger systems (one iteration for a system with dimension ~5000 takes about 10 seconds). The problem I have is that it seems to converge to local minima all the time. Perhaps the GRAPE algorithm is unsuitable for larger systems with the overlap as a fidelity function?

Is it possible to use a custom fidelity together with the GRAPE algorithm, or should I consider using CRAB together with Nelder-Mead instead? What do you think?

Cheers,
Simon

Alex Pitchford

Dec 14, 2015, 8:57:34 AM
to QuTiP: Quantum Toolbox in Python
Local minima traps are a much-discussed and widely experienced issue. I would say it is an active area of research, and hence there are differing views. Some claim there are no traps in an unconstrained optimisation (see work by the H. Rabitz group), but one has to consider that 'constraint' covers lots of things, e.g. pulse amplitude limits, timeslot size, total energy available. I have found, in my reasonably limited personal experience, that as I increase the degrees of freedom, more traps occur. How to avoid, counter, or escape local traps is still much under discussion. The CRAB people in Ulm claim to have a partial solution using the latest version of their algorithm, called adaptive CRAB. I believe there is published work on this now too. I know that they were/are working on extending qutip to include this method.

The blunt instrument used by most people is to run many optimisations based on different initial conditions and choose the solution with the best fidelity. Of course this does not mean that you have the global minimum. Finding the global minimum is literally a hard problem.
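
As a trivial sketch of that multi-start approach (run_one_optimisation here is a hypothetical wrapper around whichever optimiser and random initial-pulse generation you use, assumed to return a fidelity error and the corresponding pulse):

import numpy as np

def multi_start(run_one_optimisation, n_starts=20):
    # Run the same optimisation from many random starting pulses, keep the best
    best_err, best_pulse = np.inf, None
    for seed in range(n_starts):
        fid_err, pulse = run_one_optimisation(seed)
        if fid_err < best_err:
            best_err, best_pulse = fid_err, pulse
    return best_err, best_pulse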

I would say that there is no reason to believe that the standard CRAB algorithm is less likely to fall into unwanted local minima. There are many reasons to use the CRAB algorithm; these are mainly around when you want a pulse that is more easily physically realised. Nelder-Mead is considered to be the best method when you can't calculate the fidelity gradient with respect to your pulse parameters. If you can, then a gradient-based algorithm usually converges faster. However, as mentioned previously, adaptive CRAB may help, but I have not tried it personally (yet).

I would say that 10 seconds per iteration seems very efficient for a system of the size you mention. This is very interesting.