Quantum Circuit Born Machine
Yao is designed with variational quantum circuits in mind, and this tutorial will introduce how to use Yao for this kind of task by implementing a quantum circuit Born machine, as described in Jin-Guo Liu, Lei Wang (2018).
Let's load the packages first:
using Yao, LinearAlgebra, Plots
Training Target
In this tutorial, we will ask the variational circuit to learn one of the most basic distributions: a Gaussian distribution. It is defined as follows:
\[f(x \left| \mu, \sigma^2\right) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}\]
We implement it as gaussian_pdf:
function gaussian_pdf(x, μ::Real, σ::Real)
    pl = @. 1 / sqrt(2pi * σ^2) * exp(-(x - μ)^2 / (2 * σ^2))
    pl / sum(pl)
end
pg = gaussian_pdf(1:1<<6, 1<<5-0.5, 1<<4);
We can plot the distribution; it looks like this:
Plots.plot(pg)
Create the Circuit
A quantum circuit Born machine looks like the following:
It is composed of two kinds of layers: rotation layers and entangler layers.
Rotation Layer
An arbitrary single-qubit rotation is built from parameterized rotation gates about the Z, X, and Z axes:
\[Rz(\theta_3) \cdot Rx(\theta_2) \cdot Rz(\theta_1)\]
Since our input will be the $|0\dots 0\rangle$ state, the first layer of arbitrary rotation can use just $Rx(\theta) \cdot Rz(\theta)$, and the last layer can use just $Rz(\theta)\cdot Rx(\theta)$.
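This reduction works because $Rz$ is diagonal in the computational basis: acting on $|0\rangle$ it only contributes a global phase, and applied right before a computational-basis measurement it does not change the outcome probabilities:

\[Rz(\theta)\,|0\rangle = e^{-i\theta/2}\,|0\rangle, \qquad \left|\langle x|\, Rz(\theta)\, |\psi\rangle\right|^2 = \left|\langle x|\psi\rangle\right|^2\]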
In 幺 (Yao), every Hilbert operator is a block type; this includes all quantum gates and quantum oracles. In general, operators appearing in a quantum circuit can be divided into Composite Blocks and Primitive Blocks.
We follow the low-abstraction principle, so each block represents a specific approach to calculation. The simplest Composite Block is a Chain Block, which chains other blocks (oracles) with the same number of qubits together. It is just a mathematical composition of operators of the same size, e.g.
\[\text{chain(X, Y, Z)} \iff Z \cdot Y \cdot X\]
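As a sanity check on the ordering, here is a plain-matrix sketch (ordinary Julia arrays, not Yao's block types): applying X, then Y, then Z to a state vector is the same as multiplying once by the composed matrix $Z \cdot Y \cdot X$.

```julia
# Pauli matrices as plain matrices
X = [0 1; 1 0]
Y = [0 -im; im 0]
Z = [1 0; 0 -1]

ψ = ComplexF64[1, 0]              # |0⟩

step_by_step = Z * (Y * (X * ψ))  # apply X first, then Y, then Z
composed     = (Z * Y * X) * ψ    # one composed operator
@assert step_by_step ≈ composed
```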
We can construct an arbitrary rotation block by chaining $Rz$, $Rx$, $Rz$ together:
chain(Rz(0.0), Rx(0.0), Rz(0.0))
nqubits: 1
chain
├─ rot(Z, 0.0)
├─ rot(X, 0.0)
└─ rot(Z, 0.0)
Rx, Rz, etc. construct new rotation gates; they are just shorthands for rot(X, 0.0) and so on.
Then let's chain them up
layer(nbit::Int, x::Symbol) = layer(nbit, Val(x))
layer(nbit::Int, ::Val{:first}) = chain(nbit, put(i=>chain(Rx(0), Rz(0))) for i = 1:nbit);
We do not need to feed the first argument (the number of qubits, n) into put here. All factory methods can lazily evaluate this first argument: they return a lambda function that takes a single integer. The desired block instance is only constructed once all the information has been filled in; as soon as you supply the number of qubits somewhere in the declaration, 幺 can infer it for the rest. We will now define the remaining rotation layers:
layer(nbit::Int, ::Val{:last}) = chain(nbit, put(i=>chain(Rz(0), Rx(0))) for i = 1:nbit)
layer(nbit::Int, ::Val{:mid}) = chain(nbit, put(i=>chain(Rz(0), Rx(0), Rz(0))) for i = 1:nbit);
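The lazy-evaluation behavior of these factories can be illustrated with a small standalone sketch (the myput function below is a hypothetical stand-in, not Yao's actual implementation):

```julia
# Hypothetical two-method factory mimicking Yao's lazy pattern:
# with the qubit count it builds the "block" immediately; without it,
# it returns a closure that waits for the qubit count.
myput(n::Int, pair::Pair) = (nqubits = n, site = pair.first, gate = pair.second)
myput(pair::Pair) = n -> myput(n, pair)

b = myput(2 => :X)   # a lambda waiting for the number of qubits
blk = b(4)           # now fully specified
@assert blk.nqubits == 4 && blk.site == 2 && blk.gate == :X
```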
Entangler
Another component of the quantum circuit Born machine is the entangler layer, which consists of several CNOT gates applied to different pairs of qubits.
entangler(pairs) = chain(control(ctrl, target=>X) for (ctrl, target) in pairs);
We can then define such a Born machine:
function build_circuit(n, nlayers, pairs)
    circuit = chain(n)
    push!(circuit, layer(n, :first))
    for i in 2:nlayers
        push!(circuit, cache(entangler(pairs)))
        push!(circuit, layer(n, :mid))
    end
    push!(circuit, cache(entangler(pairs)))
    push!(circuit, layer(n, :last))
    return circuit
end
build_circuit (generic function with 1 method)
We use cache here to tag the entangler block so that it is cached after its first run, since it is actually a constant oracle. Let's see what will be constructed:
build_circuit(4, 1, [1=>2, 2=>3, 3=>4])
nqubits: 4
chain
├─ chain
│ ├─ put on (1)
│ │ └─ chain
│ │ ├─ rot(X, 0.0)
│ │ └─ rot(Z, 0.0)
│ ├─ put on (2)
│ │ └─ chain
│ │ ├─ rot(X, 0.0)
│ │ └─ rot(Z, 0.0)
│ ├─ put on (3)
│ │ └─ chain
│ │ ├─ rot(X, 0.0)
│ │ └─ rot(Z, 0.0)
│ └─ put on (4)
│ └─ chain
│ ├─ rot(X, 0.0)
│ └─ rot(Z, 0.0)
├─ [cached] chain
│ ├─ control(1)
│ │ └─ (2,) X
│ ├─ control(2)
│ │ └─ (3,) X
│ └─ control(3)
│ └─ (4,) X
└─ chain
├─ put on (1)
│ └─ chain
│ ├─ rot(Z, 0.0)
│ └─ rot(X, 0.0)
├─ put on (2)
│ └─ chain
│ ├─ rot(Z, 0.0)
│ └─ rot(X, 0.0)
├─ put on (3)
│ └─ chain
│ ├─ rot(Z, 0.0)
│ └─ rot(X, 0.0)
└─ put on (4)
└─ chain
├─ rot(Z, 0.0)
└─ rot(X, 0.0)
MMD Loss & Gradients
The MMD loss is described below:
\[\begin{aligned} \mathcal{L} &= \left| \sum_{x} p_{\theta}(x) \phi(x) - \sum_{x} \pi(x) \phi(x) \right|^2\\ &= \langle K(x, y) \rangle_{x \sim p_{\theta}, y\sim p_{\theta}} - 2 \langle K(x, y) \rangle_{x\sim p_{\theta}, y\sim \pi} + \langle K(x, y) \rangle_{x\sim\pi, y\sim\pi} \end{aligned}\]
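Writing the feature-map inner product as a kernel, $K(x, y) = \phi(x)^\dagger \phi(y)$, the squared norm in the first line expands into the three kernel expectations of the second line:

\[\left|\sum_x \left(p_{\theta}(x) - \pi(x)\right)\phi(x)\right|^2 = \sum_{x, y} \left(p_{\theta}(x) - \pi(x)\right) K(x, y) \left(p_{\theta}(y) - \pi(y)\right)\]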
We will use a squared exponential kernel here.
struct RBFKernel
    σ::Float64
    m::Matrix{Float64}
end

function RBFKernel(σ::Float64, space)
    dx2 = (space .- space').^2
    return RBFKernel(σ, exp.(-1/2σ * dx2))
end
kexpect(κ::RBFKernel, x, y) = x' * κ.m * y
kexpect (generic function with 1 method)
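A quick standalone check of the kernel-matrix construction (plain Julia, no Yao): note that in Julia the expression 1/2σ parses as 1/(2σ), because the literal coefficient 2σ binds tighter than division; so σ here effectively plays the role of the squared bandwidth in the usual squared-exponential kernel.

```julia
σ = 0.25
space = 0:3
dx2 = (space .- space') .^ 2     # pairwise squared distances
K = exp.(-1/2σ * dx2)            # 1/2σ == 1/(2σ), so entries are exp(-d²/(2σ))

@assert K ≈ K'                                    # symmetric
@assert all(K[i, i] ≈ 1.0 for i in 1:size(K, 1))  # unit diagonal
@assert K[1, 2] ≈ exp(-1 / (2σ))
```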
There are two different ways to define the loss:
In simulation, we can use the probability distribution of the state directly:
get_prob(qcbm) = probs(zero_state(nqubits(qcbm)) |> qcbm)
function loss(κ, c, target)
    p = get_prob(c) - target
    return kexpect(κ, p, p)
end
loss (generic function with 1 method)
Or, if you want to simulate the whole process with measurement (which is entirely physical), you should define the loss with measurement results. For convenience, we directly use the simulated probabilities as our loss here.
Gradients
The gradient of the MMD loss is
\[\begin{aligned} \frac{\partial \mathcal{L}}{\partial \theta^i_l} &= \langle K(x, y) \rangle_{x\sim p_{\theta^+}, y\sim p_{\theta}} - \langle K(x, y) \rangle_{x\sim p_{\theta^-}, y\sim p_{\theta}}\\ &- \langle K(x, y) \rangle _{x\sim p_{\theta^+}, y\sim\pi} + \langle K(x, y) \rangle_{x\sim p_{\theta^-}, y\sim\pi} \end{aligned}\]
which can be implemented as
function gradient(qcbm, κ, ptrain)
    n = nqubits(qcbm)
    prob = get_prob(qcbm)
    grad = zeros(Float64, nparameters(qcbm))
    count = 1
    for k in 1:2:length(qcbm), each_line in qcbm[k], gate in content(each_line)
        dispatch!(+, gate, π/2)
        prob_pos = probs(zero_state(n) |> qcbm)

        dispatch!(-, gate, π)
        prob_neg = probs(zero_state(n) |> qcbm)

        dispatch!(+, gate, π/2) # set back

        grad_pos = kexpect(κ, prob, prob_pos) - kexpect(κ, prob, prob_neg)
        grad_neg = kexpect(κ, ptrain, prob_pos) - kexpect(κ, ptrain, prob_neg)
        grad[count] = grad_pos - grad_neg
        count += 1
    end
    return grad
end
gradient (generic function with 1 method)
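The ±π/2 shifts in gradient are an instance of the parameter-shift rule. As a minimal standalone illustration (plain Julia, with a toy scalar function rather than the circuit above): for an expectation of the form $f(\theta) = \cos\theta$, such as a Pauli expectation after a single rotation gate, the shifted difference reproduces the exact derivative.

```julia
f(θ) = cos(θ)   # toy stand-in for a circuit expectation

# parameter-shift rule: f'(θ) = (f(θ + π/2) - f(θ - π/2)) / 2
psr(f, θ) = (f(θ + π/2) - f(θ - π/2)) / 2

θ = 0.7
@assert isapprox(psr(f, θ), -sin(θ); atol = 1e-12)
```

In the MMD gradient above, the constant factors are folded into the formula, so the code uses the shifted probability distributions directly.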
Now let's set up the training:
import Optimisers
qcbm = build_circuit(6, 10, [1=>2, 3=>4, 5=>6, 2=>3, 4=>5, 6=>1])
dispatch!(qcbm, :random) # initialize the parameters
κ = RBFKernel(0.25, 0:2^6-1)
pg = gaussian_pdf(1:1<<6, 1<<5-0.5, 1<<4);
opt = Optimisers.setup(Optimisers.ADAM(0.01), parameters(qcbm));
function train(qcbm, κ, opt, target)
    history = Float64[]
    for _ in 1:100
        push!(history, loss(κ, qcbm, target))

        ps = parameters(qcbm)
        Optimisers.update!(opt, ps, gradient(qcbm, κ, target))
        dispatch!(qcbm, ps)
    end
    return history
end
history = train(qcbm, κ, opt, pg)
trained_pg = probs(zero_state(nqubits(qcbm)) |> qcbm)
64-element Vector{Float64}:
 0.004446054657298753
 0.004331157645300311
 0.005835850732537775
 0.005836326060827502
 0.006572217842013155
 0.0075577836174582904
 ⋮
 0.005354124439857948
 0.00433052204952558
 0.004399982209411372
 0.003744869866319961
 0.0035613117650391936
The training history looks like this:
Plots.plot(history)
title!("training history")
xlabel!("steps"); ylabel!("loss")
And let's check what we got
fig2 = Plots.plot(1:1<<6, trained_pg; label="trained")
Plots.plot!(fig2, 1:1<<6, pg; label="target")
title!("distribution")
xlabel!("x"); ylabel!("p")
So within 100 training steps, we got a pretty close estimate of our target distribution!
This page was generated using Literate.jl.