using Piccolo
using Optim
using CairoMakie
using Random
set_theme!(theme_dark())
const ⊗ = kron
kron (generic function with 38 methods)
Background
There are many different ways to solve an optimization problem! What makes $\texttt{Piccolo.jl}$ special? Under the hood, it solves a direct trajectory optimization, which is a state-of-the-art approach from robotics 🤖 with a number of key advantages, like
✅ Flexible problem design
✅ Better global convergence
✅ Minimum time control
✅ ...and more!
$\texttt{Piccolo.jl}$ modernizes quantum optimal control. Plus, the code is completely open source, meaning the added flexibility of trajectory optimization is completely at your disposal.
Overview
In this notebook, we will find controls to implement a two-qubit gate in a few different ways.
[🎵] Piccolo will begin and end our notebook to showcase the advantages of trajectory optimization 🏆.
[🍇] GRAPE optimizes for control variables by cutting waveforms into a bunch of piecewise constant slices. The controls look like staircases, but the stairs can go up and down wildly.
[🦀] CRAB is like GRAPE ⁉️ (at least, in quantum control, it is!) In CRAB, we replace the independent, piecewise constant controls with coefficients that weigh the influence of predetermined smooth shapes.
We'll see that GRAPE and CRAB only require two or three lines of code! This partly explains their lasting appeal---GRAPE was introduced in 2004 for Nuclear Magnetic Resonance (NMR) pulse design! With $\texttt{Piccolo.jl}$, it's now just as easy to write modern optimal control code for building quantum technology. Let's see how!
Designing a two-qubit gate
Quantum technology is all about entanglement! Let's entangle a pair of qubits. We'll allow them to be controlled individually and via a single coupling term.
H_drives = [
    PAULIS.X ⊗ PAULIS.I,  # X drive on qubit 1
    PAULIS.Y ⊗ PAULIS.I,  # Y drive on qubit 1
    PAULIS.I ⊗ PAULIS.X,  # X drive on qubit 2
    PAULIS.I ⊗ PAULIS.Y,  # Y drive on qubit 2
    PAULIS.X ⊗ PAULIS.X   # XX coupling between the qubits
]
system = QuantumSystem(H_drives)
# Entangling gate
U_goal = GATES.CX
# Timing information (e.g. 20 ns superconducting qubit gate)
T = 50
Δt = 0.4
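For reference, T × Δt = 50 × 0.4 = 20, which is where the "20 ns" in the comment comes from (taking the time unit to be nanoseconds, which is an assumption about the units the Hamiltonian is written in rather than something the code enforces).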
Piccolo
A quantum control problem will allow us to optimize the controls for this gate. In quantum control problems, we want to minimize the infidelity 📉.
In Piccolo.jl, we solve a trajectory optimization problem, which looks like $$ \begin{align} \arg \min_{\mathbf{Z}}\quad & |1 - \mathcal{F}(U_T, U_\text{goal})| \\ \nonumber \text{s.t.} \qquad & U_{t+1} = \exp\{- i H(a_t) \Delta t_t \} U_t, \quad \forall\, t \\ \end{align} $$
🎯 The objective is the infidelity.
⚛️ The constraint is the Schrödinger equation.
📦 $\mathbf{Z}$ is the trajectory, a container for the unitaries $U_{1:T}$ and the controls $a_{1:T}$ at all of the timesteps. Our optimization problem stores this at prob.trajectory, and we can access variables like the controls using prob.trajectory.a.
prob = UnitarySmoothPulseProblem(system, U_goal, T, Δt);
building dynamics from integrators...
    constructing knot point dynamics functions...
    constructing full dynamics derivative functions...
building evaluator...
initializing optimizer...
    applying constraint: initial value of Ũ⃗
    applying constraint: initial value of a
    applying constraint: final value of a
    applying constraint: bounds on a
    applying constraint: bounds on da
    applying constraint: bounds on dda
    applying constraint: bounds on Δt
    applying constraint: time step all equal constraint
finished.
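Since $\mathbf{Z}$ is just a container, we can peek inside it before doing anything else. A minimal sketch, using only Base calls plus prob.trajectory.a and get_times, both of which appear elsewhere in this notebook:
Z = prob.trajectory
size(Z.a)             # controls: one row per drive, one column per timestep
length(get_times(Z))  # number of timesteps, T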
# save these initial controls for later
a_init = prob.trajectory.a
series(get_times(prob.trajectory), a_init)
⚙️ Let's turn the crank and solve! ⚙️
solve!(prob, max_iter=100, print_level=1)
ℱ = unitary_rollout_fidelity(prob.trajectory, system)
println("The fidelity is ", ℱ)
The fidelity is 0.9947673485951398
a_final = prob.trajectory.a
series(get_times(prob.trajectory), a_final)
GRAPE
What about commonly used tools like GRAPE? They solve an optimization problem that looks like $$ \arg \min_{a_{1:T}} \quad 1 - \mathcal{F}(U_T(a_{1:T}), U_\text{goal}) $$
It's a bit different: this is an indirect method, while our approach is a direct method. Here, $U_T(a_{1:T})$ is the final state from the rollout of the controls on the system.
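To make the rollout concrete, here is a hand-rolled version built from the H_drives defined above. This is an illustrative sketch, not the routine Piccolo uses inside unitary_rollout_fidelity, and rollout_unitary is a name we introduce here for illustration only.
using LinearAlgebra  # for I and the matrix exponential
# An explicit rollout U_{t+1} = exp(-i H(a_t) Δt_t) U_t, with column t of
# `controls` holding the drive amplitudes a_t.
function rollout_unitary(controls, timesteps)
    U = Matrix{ComplexF64}(I, 4, 4)
    for (aₜ, Δtₜ) in zip(eachcol(controls), timesteps)
        H = sum(aⱼ * Hⱼ for (aⱼ, Hⱼ) in zip(aₜ, H_drives))
        U = exp(Matrix(-im * Δtₜ * H)) * U
    end
    return U
end
rollout_unitary(a_init, fill(Δt, T))  # e.g. the final unitary from the initial guess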
⚙️ We can use Optim.jl to minimize the rollout infidelity with respect to the controls. ⚙️
timesteps = fill(Δt, T)
GRAPE(controls) = 1 - unitary_rollout_fidelity(U_goal, controls, timesteps, system)
result_GRAPE = optimize(GRAPE, a_init, LBFGS(); autodiff = :forward)
 * Status: success

 * Candidate solution
    Final objective value:     5.061426e-18

 * Found with
    Algorithm:     L-BFGS

 * Convergence measures
    |x - x'|               = 1.60e-06 ≰ 0.0e+00
    |x - x'|/|x'|          = 1.62e-06 ≰ 0.0e+00
    |f(x) - f(x')|         = 1.23e-10 ≰ 0.0e+00
    |f(x) - f(x')|/|f(x')| = 2.42e+07 ≰ 0.0e+00
    |g(x)|                 = 6.20e-10 ≤ 1.0e-08

 * Work counters
    Seconds run:   7  (vs limit Inf)
    Iterations:    4
    f(x) calls:    11
    ∇f(x) calls:   11
🚧 ⚠️ You'd be forgiven if you thought we accidentally plotted a_init again! It's hard to tell the difference between the initial value and the converged solution. Also, we definitely don't want to hand this over to the scientists and engineers working on the hardware 😬.
series(cumsum(timesteps), result_GRAPE.minimizer)
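If the plot alone doesn't convince you, compare the arrays directly. A small sketch, assuming only norm from the LinearAlgebra standard library:
using LinearAlgebra: norm
# Relative change between the initial guess and the GRAPE solution
norm(result_GRAPE.minimizer - a_init) / norm(a_init)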
CRAB
A function basis can help. CRAB uses Fourier modes, so we will be true to that. Our optimization parameters are now the coefficients: $$ a(t) = a_0 + \sum_{j=1}^{n} c_j\, a_j(t). $$ During the optimization, the reference pulse $a_0$ (which we take to be zero here) and the modes $a_j(t)$ stay fixed, while the coefficients $c_j$ vary.
# First n = 5 modes of a Fourier (sine) series: the -π/2 phase shift turns the
# cosines into sines, so every mode starts at zero
n = 5
fourier_series = [cos.(π * j * (0:T-1) / T .- π/2) for j in 1:n]
function get_controls(coefficients)
    # each row of `coefficients` holds the coefficients cⱼ for one drive
    a(c) = sum(cⱼ * aⱼ for (cⱼ, aⱼ) in zip(c, fourier_series))
    return stack([a(c) for c in eachrow(coefficients)], dims=1)
end
function CRAB(coefficients)
    controls = get_controls(coefficients)
    return 1 - unitary_rollout_fidelity(U_goal, controls, timesteps, system)
end
CRAB (generic function with 1 method)
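Before optimizing, it's worth plotting the modes we've committed to. A quick sketch reusing series, cumsum, and stack from above:
# Each row is one fixed mode aⱼ(t); only its coefficient cⱼ gets optimized
series(cumsum(timesteps), stack(fourier_series, dims=1))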
Random.seed!(1234)
c_init = rand(system.n_drives, n)
result_CRAB = optimize(CRAB, c_init, LBFGS(); autodiff = :forward)
series(cumsum(timesteps), get_controls(result_CRAB.minimizer))
🚧 ⚠️ These shapes are a lot nicer! But we're totally beholden to the basis we choose, and we have lost a lot of flexibility! Also, our solution will often depend on the local landscape around our initial guess. We'll get a different result each time!
Random.seed!(5678)
c_init = randn(system.n_drives, n)
result_CRAB = optimize(CRAB, c_init, LBFGS(); autodiff = :forward)
series(cumsum(timesteps), get_controls(result_CRAB.minimizer))
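To put a number on how well the restricted basis does, evaluate the objective at the minimizer (this just reuses the CRAB function defined above):
# Infidelity achieved by the basis-restricted solution
CRAB(result_CRAB.minimizer)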
Fine-tuned Quantum Control with $\texttt{Piccolo.jl}$
Flexible designs
Why does flexibility matter? Well, what if we have very specific bounds on the control hardware? We could try to encode those into a specific basis, or we could just directly ask for what we want with $\texttt{Piccolo.jl}$!
bound_prob = UnitarySmoothPulseProblem(
system, U_goal, T, Δt,
a_bound = 0.3,
da_bound = 0.015
)
solve!(bound_prob, max_iter=100, print_level=1)
ℱ = unitary_rollout_fidelity(bound_prob.trajectory, system)
println("The fidelity is ", ℱ)
building dynamics from integrators...
    constructing knot point dynamics functions...
    constructing full dynamics derivative functions...
building evaluator...
initializing optimizer...
    applying constraint: initial value of Ũ⃗
    applying constraint: initial value of a
    applying constraint: final value of a
    applying constraint: bounds on a
    applying constraint: bounds on da
    applying constraint: bounds on dda
    applying constraint: bounds on Δt
    applying constraint: time step all equal constraint
finished.
The fidelity is 0.9971485277577704
👍 Piccolo has no trouble designing the solution, but CRAB would require a lot of Fourier modes, or a very carefully designed basis.
series(get_times(bound_prob.trajectory), bound_prob.trajectory.a)
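As a quick sanity check that the optimizer respected the limits we asked for, we can look at the largest control amplitude (a one-line sketch using only the trajectory's a component):
# Should be at most 0.3 if the amplitude bound was enforced
maximum(abs, bound_prob.trajectory.a)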
Minimum time controls
🚀 Piccolo can also make sure our original gate happens as fast as possible. We'll use the original problem because the bounded solution we just found is already hitting its control limits.
min_prob = UnitaryMinimumTimeProblem(prob, system)
solve!(min_prob, max_iter=100, print_level=1)
building dynamics from integrators...
    constructing knot point dynamics functions...
    constructing full dynamics derivative functions...
building evaluator...
initializing optimizer...
    applying constraint: initial value of Ũ⃗
    applying constraint: initial value of a
    applying constraint: final value of a
    applying constraint: bounds on a
    applying constraint: bounds on da
    applying constraint: bounds on dda
    applying constraint: bounds on Δt
    applying constraint: time step all equal constraint
finished.
fig = Figure()
ax1 = Axis(fig[1, 1], title="Before")
ax2 = Axis(fig[2, 1], title="After")
series!(ax1, get_times(prob.trajectory), prob.trajectory.a)
series!(ax2, get_times(min_prob.trajectory), min_prob.trajectory.a)
linkxaxes!(ax1, ax2)
fig
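The plot tells the story, but we can also read off the total gate times directly (reusing get_times from above):
# Total gate duration before and after the minimum-time solve
(get_times(prob.trajectory)[end], get_times(min_prob.trajectory)[end])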