Lecture 3: Quantum simulation algorithms
Dominic Berry, Macquarie University
Simulation of Hamiltonians
Seth Lloyd
1996
We want to simulate the evolution e^{-iHt}.
The Hamiltonian is a sum of terms: H = H_1 + H_2 + … + H_m.
We can perform the evolution e^{-iH_j t} under each individual term.
For short times we can use e^{-iHt} ≈ e^{-iH_1 t} e^{-iH_2 t} ⋯ e^{-iH_m t}.
This approximation holds because the product matches the exact evolution up to terms of order t², coming from the commutators of the H_j.
For long times, if we divide the long time t into r intervals, then e^{-iHt} = (e^{-iHt/r})^r ≈ (e^{-iH_1 t/r} ⋯ e^{-iH_m t/r})^r, with total error O(t²/r).
Typically, we want to simulate a system with some maximum allowable error ε.
Then we need r = O(t²/ε).
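The first-order error scaling can be checked numerically; a minimal sketch with a toy two-term Hamiltonian built from Pauli matrices (all choices here are illustrative, not from the lecture):

```python
# Toy check of first-order Trotterization: H = H1 + H2 with
# non-commuting Pauli terms (choices here are illustrative).
import numpy as np
from scipy.linalg import expm

H1 = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
H2 = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z
t = 1.0

def trotter_error(r):
    """Norm error of (e^{-iH1 t/r} e^{-iH2 t/r})^r vs exact e^{-iHt}."""
    step = expm(-1j * H1 * t / r) @ expm(-1j * H2 * t / r)
    return np.linalg.norm(np.linalg.matrix_power(step, r)
                          - expm(-1j * (H1 + H2) * t), 2)

e10, e20 = trotter_error(10), trotter_error(20)  # error ~ t^2 / r
```

Doubling the number of intervals roughly halves the error, consistent with the O(t²/r) bound.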
Higher-order simulation
Berry, Ahokas, Cleve, Sanders
2007
A higher-order decomposition is the symmetric product formula
S₂(t) = e^{-iH_1 t/2} ⋯ e^{-iH_m t/2} e^{-iH_m t/2} ⋯ e^{-iH_1 t/2} = e^{-iHt} + O(t³).
If we divide the long time t into r intervals, then the error is O(t³/r²).
Then we need r = O(t^{3/2}/ε^{1/2}). The general order-2k product formula can give error O(t^{2k+1}) for short time t. For time t divided into r intervals the error is O(t^{2k+1}/r^{2k}). To bound the error as ε, the value of r scales as t^{1+1/2k}/ε^{1/2k}.
The complexity (number of exponentials) is O(m r) = O(m t^{1+1/2k}/ε^{1/2k}).
For Suzuki product formulae, we have an additional factor in the recursive construction: each increase in order multiplies the number of exponentials by 5, via S_{2k}(t) = [S_{2k-2}(p_k t)]² S_{2k-2}((1−4p_k)t) [S_{2k-2}(p_k t)]² with p_k = 1/(4 − 4^{1/(2k−1)}).
The complexity then needs to be multiplied by a further factor of 5^k. The overall complexity scales as 5^k m t (t/ε)^{1/2k}.
We can also take an optimal value of k, which gives scaling m t e^{2√(ln 5 · ln(t/ε))}, i.e. only slightly superlinear in t.
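The second-order symmetric formula can be checked the same way; with a toy two-term Pauli Hamiltonian, the per-step error is O((t/r)³), so doubling r should cut the total error by about a factor of four:

```python
# Toy check of the symmetric second-order formula for a two-term
# Hamiltonian; doubling r should reduce the error ~4x.
import numpy as np
from scipy.linalg import expm

H1 = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
H2 = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z
t = 1.0

def trotter2_error(r):
    """Error of [e^{-iH1 dt/2} e^{-iH2 dt} e^{-iH1 dt/2}]^r, dt = t/r."""
    dt = t / r
    step = (expm(-1j * H1 * dt / 2) @ expm(-1j * H2 * dt)
            @ expm(-1j * H1 * dt / 2))
    return np.linalg.norm(np.linalg.matrix_power(step, r)
                          - expm(-1j * (H1 + H2) * t), 2)

e10, e20 = trotter2_error(10), trotter2_error(20)  # error ~ t^3 / r^2
```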
Solving linear systems
Harrow, Hassidim & Lloyd
2009
Consider a large system of linear equations: A x = b.
First assume that the matrix A is Hermitian. It is possible to simulate Hamiltonian evolution under A for time t: e^{-iAt}. Encode the initial state in the form |b⟩ = Σ_j b_j |j⟩.
The state can also be written in terms of the eigenvectors |u_j⟩ of A (with eigenvalues λ_j) as |b⟩ = Σ_j β_j |u_j⟩.
We can obtain the solution if we can divide each β_j by λ_j. Use the phase estimation technique to place the estimate of λ_j in an ancillary register to obtain Σ_j β_j |u_j⟩|λ_j⟩.
Solving linear systems
Harrow, Hassidim & Lloyd
2009
Use the phase estimation technique to place the estimate of λ_j in an ancillary register to obtain Σ_j β_j |u_j⟩|λ_j⟩.
Append an ancilla and rotate it according to the value of λ_j to obtain Σ_j β_j |u_j⟩|λ_j⟩ (√(1 − C²/λ_j²) |0⟩ + (C/λ_j) |1⟩).
Invert the phase estimation technique to remove the estimate of λ_j from the ancillary register, giving Σ_j β_j |u_j⟩ (√(1 − C²/λ_j²) |0⟩ + (C/λ_j) |1⟩).
Use amplitude amplification to amplify the |1⟩ component on the ancilla, giving a state proportional to Σ_j (β_j/λ_j) |u_j⟩ ∝ A^{-1}|b⟩.
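The spectral picture behind these steps can be mirrored classically; this numpy sketch (arbitrary test matrices, not part of the quantum algorithm) expands b in the eigenbasis of a Hermitian A and divides each coefficient by its eigenvalue:

```python
# Classical mirror of the spectral picture above: expand b in the
# eigenbasis of a Hermitian A and divide each coefficient beta_j by its
# eigenvalue lambda_j. Matrices are arbitrary test data.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M + M.T                     # Hermitian (real symmetric)
b = rng.standard_normal(4)

lam, U = np.linalg.eigh(A)      # columns of U are the eigenvectors u_j
beta = U.T @ b                  # beta_j = <u_j|b>
x = U @ (beta / lam)            # sum_j (beta_j / lambda_j) u_j
```

This agrees with a direct linear solve; the quantum algorithm performs the same division coherently via phase estimation and the ancilla rotation.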
Solving linear systems
Harrow, Hassidim & Lloyd
2009
What about non-Hermitian A?
Construct a blockwise matrix
C = [ 0    A
      A†   0 ].
The inverse of C is then
C^{-1} = [ 0        (A†)^{-1}
           A^{-1}   0         ].
This means that C^{-1} (b, 0)ᵀ = (0, A^{-1} b)ᵀ.
In terms of the state: applying the algorithm for C to |0⟩|b⟩ gives |1⟩|x⟩ (up to normalisation), so the Hermitian case suffices.
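A quick numerical check of the block embedding, with an arbitrary complex (non-Hermitian) 3×3 matrix:

```python
# Check of the Hermitian block embedding C = [[0, A], [A^dag, 0]]:
# solving C z = (b, 0) returns z = (0, x) with A x = b.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
b = rng.standard_normal(3)

Z3 = np.zeros((3, 3))
C = np.block([[Z3, A], [A.conj().T, Z3]])        # Hermitian by construction

z = np.linalg.solve(C, np.concatenate([b, np.zeros(3)]))
x = z[3:]                                        # lower half carries A^{-1} b
```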
Solving linear systems: complexity analysis
Harrow, Hassidim & Lloyd
2009
We need to examine:
1. The complexity of simulating the Hamiltonian to estimate the phase.
2. The accuracy needed for the phase estimate.
3. The possibility of 1/λ_j being greater than 1.
The complexity of simulating the Hamiltonian for time t is approximately proportional to t (for a sparse Hamiltonian, up to logarithmic factors).
To obtain accuracy ε_λ in the estimate of λ_j, the Hamiltonian needs to be simulated for time O(1/ε_λ).
We actually need to multiply the state coefficients by C/λ_j, with C = O(1/κ) so the rotation amplitudes stay bounded, where κ is the condition number of A.
To obtain accuracy ε in the final state, we need accuracy O(ε/κ) in the estimate of λ_j.
The final complexity is roughly κ²/ε, up to factors logarithmic in the dimension and polynomial in the sparsity.
Differential equations
Berry
2010
Discretise the differential equation, then encode it as a linear system. Simplest discretisation: Euler method.
The equation dx/dt = A x + b becomes x_{j+1} = x_j + h (A x_j + b) = (I + Ah) x_j + h b, with step size h.
Encode the iterates x_0, …, x_4 as one block linear system (here with two Euler steps and two final rows):

[  I        0        0    0    0 ] [x_0]   [x_in]   ← sets initial condition
[ −(I+Ah)   I        0    0    0 ] [x_1]   [ hb ]
[  0      −(I+Ah)    I    0    0 ] [x_2] = [ hb ]
[  0        0       −I    I    0 ] [x_3]   [  0 ]   ← sets x to be constant
[  0        0        0   −I    I ] [x_4]   [  0 ]
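A toy instance of this encoding (matrix sizes, step count, step size and values are arbitrary choices here), checking that solving the block system reproduces the Euler iterates:

```python
# Toy instance of the encoding above: Euler steps packed into one block
# linear system. Sizes, step size, and the matrix A are arbitrary.
import numpy as np

n, steps, h = 2, 3, 0.1
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
bvec = np.array([1.0, 0.0])
x_in = np.array([0.5, -0.5])
Id = np.eye(n)
m = steps + 2                          # unknowns x_0 .. x_{steps+1}

blocks = [[np.zeros((n, n)) for _ in range(m)] for _ in range(m)]
rhs = np.zeros(n * m)
blocks[0][0] = Id                      # row 0: sets initial condition
rhs[:n] = x_in
for j in range(1, steps + 1):          # Euler rows: x_j = (I+Ah)x_{j-1} + hb
    blocks[j][j - 1] = -(Id + A * h)
    blocks[j][j] = Id
    rhs[j * n:(j + 1) * n] = h * bvec
blocks[m - 1][m - 2] = -Id             # last row: keeps x constant
blocks[m - 1][m - 1] = Id

sol = np.linalg.solve(np.block(blocks), rhs).reshape(m, n)

x = x_in.copy()                        # explicit Euler for comparison
for _ in range(steps):
    x = (Id + A * h) @ x + h * bvec
```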
Quantum walks
A classical walk has a position which is an integer, x, which jumps either to the left or the right at each step.
The resulting distribution is a binomial distribution, approaching a normal distribution in the limit, with width proportional to the square root of the number of steps.
The quantum walk has position and coin values, |x, c⟩ with c ∈ {+1, −1}.
It then alternates coin and step operators, e.g. a Hadamard coin followed by the shift S|x, c⟩ = |x + c, c⟩.
The position can progress linearly in the number of steps.
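A minimal simulation of the coined walk (the Hadamard coin, the symmetric initial coin state, and the step count are illustrative choices) shows the ballistic spreading:

```python
# Sketch of a discrete-time coined quantum walk on the line: a Hadamard
# coin then a coin-conditional shift, repeated.
import numpy as np

steps = 50
npos = 2 * steps + 1                      # positions -steps .. +steps
psi = np.zeros((npos, 2), dtype=complex)  # psi[x, c]: coin c=0 left, c=1 right
psi[steps, 0] = 1 / np.sqrt(2)            # symmetric initial coin state
psi[steps, 1] = 1j / np.sqrt(2)

Hcoin = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
for _ in range(steps):
    psi = psi @ Hcoin.T                   # apply the coin at every site
    shifted = np.zeros_like(psi)
    shifted[:-1, 0] = psi[1:, 0]          # coin 0 amplitudes move left
    shifted[1:, 1] = psi[:-1, 1]          # coin 1 amplitudes move right
    psi = shifted

prob = (np.abs(psi) ** 2).sum(axis=1)
xs = np.arange(-steps, steps + 1)
std = np.sqrt((prob * xs**2).sum() - (prob * xs).sum() ** 2)
```

The standard deviation grows linearly in the number of steps, well above the classical √steps ≈ 7 for this step count.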
Quantum walk on a graph
The walk position is any node on the graph.
Describe the generator matrix by
M_{jk} = γ for j ≠ k with (j, k) ∈ E;  M_{jj} = −d_j γ;  M_{jk} = 0 otherwise.
The quantity d_j is the number of edges incident on vertex j.
An edge between j and k is denoted (j, k).
The probability distribution for a continuous walk has the differential equation dp_j/dt = Σ_k M_{jk} p_k.
Quantum walk on a graph
Farhi, Gutmann
1998
Quantum mechanically we have amplitudes a_j rather than probabilities.
The natural quantum analogue is the Schrödinger equation i da_j/dt = Σ_k H_{jk} a_k.
We take the Hamiltonian to have the same matrix elements as the generator matrix, H = M.
Probability is conserved because H is Hermitian.
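A short check of this on a 6-cycle (the graph is an arbitrary choice), confirming that the norm, and hence total probability, is conserved under e^{-iHt}:

```python
# Sketch of a continuous-time quantum walk: H is the adjacency matrix of
# a 6-cycle and the state evolves as |psi(t)> = e^{-iHt}|psi(0)>.
import numpy as np
from scipy.linalg import expm

n = 6
adj = np.zeros((n, n))
for v in range(n):
    adj[v, (v + 1) % n] = adj[(v + 1) % n, v] = 1.0   # cycle edges

psi0 = np.zeros(n, dtype=complex)
psi0[0] = 1.0                          # start localised on one vertex
psi_t = expm(-1j * adj * 2.0) @ psi0   # evolve for time t = 2

prob = np.abs(psi_t) ** 2              # sums to 1: H is Hermitian
```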
Quantum walk on a graph
Childs, Farhi, Gutmann
2002
The goal is to traverse the graph from entrance to exit (two binary trees glued together).
Classically the random walk will take exponential time.
For the quantum walk, define a superposition state over each column of the graph: |col j⟩ = (1/√N_j) Σ_{v in column j} |v⟩, where N_j is the number of vertices in column j.
On these states the matrix elements of the Hamiltonian are those of a walk on a line, so the walk traverses from entrance to exit efficiently.
[Figure: two glued binary trees, with the entrance at the left root and the exit at the right root.]
Quantum walk on a graph
Childs, Cleve, Deotto, Farhi, Gutmann, Spielman
2003
Add random connections between the two trees.
All vertices (except entrance and exit) have degree 3.
Again using column states, the matrix elements of the Hamiltonian are those of a line, with a different value at the central junction.
This is a line with a defect.
There are reflections off the defect, but the quantum walk still reaches the exit efficiently.
[Figure: two binary trees joined by a random cycle; entrance at left, exit at right.]
NAND tree quantum walk
Farhi, Goldstone, Gutmann
2007
In a game tree I alternate making moves with an opponent.
In this example, if I move first then I can always direct the ant to the sugar cube.
What is the complexity of doing this in general? Do we need to query all the leaves?
[Figure: a game tree of alternating AND and OR gates over the leaves x₁ … x₈.]
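For reference, the classical evaluation that the quantum walk competes with can be sketched as a simple iteration over levels (`nand_tree` is an illustrative helper, not code from the lecture):

```python
# Classical baseline: evaluate a complete binary NAND tree by iterating
# over levels, combining adjacent pairs with NAND at each level.
def nand_tree(leaves):
    """Evaluate a balanced NAND tree; len(leaves) must be a power of two."""
    level = list(leaves)
    while len(level) > 1:
        level = [int(not (level[i] and level[i + 1]))
                 for i in range(0, len(level), 2)]
    return level[0]
```

Classically, evaluating the tree on N leaves requires roughly N^0.753 queries even for randomized algorithms, while the quantum walk achieves about √N.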
NAND tree quantum walk
Farhi, Goldstone, Gutmann
2007
[Figure: the alternating AND/OR tree over x₁ … x₄ is rewritten as a tree of NAND gates with NOT gates on the leaves, using OR(a, b) = NAND(NOT a, NOT b); game-tree evaluation thus reduces to evaluating a NAND tree.]
NAND tree quantum walk
Farhi, Goldstone, Gutmann
2007
The Hamiltonian is a sum of an oracle Hamiltonian, representing the connections, and a fixed driving Hamiltonian, which is the remainder of the tree.
Prepare a travelling wave packet on a runway attached to the left of the tree. For one value of the NAND tree problem, after a fixed time the wave packet will be found on the right; for the other value it is reflected. The reflection depends on the solution of the NAND tree problem.
[Figure: a wave packet incident on the tree from the left.]
Simulating quantum walks
A more realistic scenario is that we have an oracle that provides the structure of the graph; i.e., a query to a node returns the nodes that are connected to it.
The quantum oracle is queried with a node number x and a neighbour number j.
It returns a result via the quantum operation
U_O |x, j⟩ |0⟩ = |x, j⟩ |y⟩.
Here y is the j'th neighbour of x.
Decomposing the Hamiltonian
Aharonov, Ta-Shma
2003
In the matrix picture, we have a sparse matrix. The rows and columns correspond to node numbers. The ones indicate connections between nodes. The oracle gives us the position of the j'th nonzero element in column x.

H = [ 0 0 1 0 0 1 ⋯ 0
      0 1 0 0 0 1 ⋯ 0
      1 0 0 0 0 0 ⋯ 1
      0 0 0 1 1 0 ⋯ 0
      0 0 0 1 1 0 ⋯ 0
      1 1 0 0 0 0 ⋯ 0
      ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋱ ⋮
      0 0 1 0 0 0 ⋯ 1 ]
Decomposing the Hamiltonian
Aharonov, Ta-Shma
2003
We want to be able to separate the Hamiltonian into 1-sparse parts (at most one nonzero element in each row and column).
This is equivalent to a graph colouring: the graph edges are coloured such that the edges at each node all have distinct colours; each colour then gives a 1-sparse part.
Graph colouring
Berry, Ahokas, Cleve, Sanders
2007
How do we do this colouring? First guess: for each node, assign edges sequentially according to their numbering.
This does not work because the edge between nodes x and y may be (for example) edge 1 of x, but edge 2 of y.
Second guess: for the edge between x and y, colour it according to the pair of numbers (j, k), where it is edge j of node x and edge k of node y.
We decide the order such that x < y. It is still possible to have ambiguity: say we have two adjacent edges that both receive the same colour (j, k).
Graph colouring
Berry, Ahokas, Cleve, Sanders
2007
The ambiguity can be resolved by following a string of nodes joined by edges with equal colours, and compressing the information along the string to break the ties.
[Figure: nodes x, y, z in a chain, with edge numberings 1, 2, 3 at each node; two adjacent edges both receive the colour (1, 2).]
General Hamiltonian oracles
Aharonov, Ta-Shma
2003
More generally, we can perform a colouring on a graph with matrix elements of arbitrary (Hermitian) values.
Then we also require an oracle to give us the values of the matrix elements: in addition to U_O |x, j⟩|0⟩ = |x, j⟩|y⟩ for positions, we have
U_H |x, y⟩ |0⟩ = |x, y⟩ |H_{x,y}⟩.

H = [ 0      0    2      0         0         √2 i  ⋯  0
      0      3    0      0         0         1/2   ⋯  0
      2      0    0      0         0         0     ⋯  −√3+i
      0      0    0      1         e^{iπ/7}  0     ⋯  0
      0      0    0      e^{−iπ/7} 2         0     ⋯  0
      −√2 i  1/2  0      0         0         0     ⋯  0
      ⋮     ⋮    ⋮      ⋮         ⋮         ⋮     ⋱  ⋮
      0      0    −√3−i  0         0         0     ⋯  1/10 ]
Simulating 1-sparse case
Aharonov, Ta-Shma
2003
Assume we have a 1-sparse matrix. How can we simulate evolution under this Hamiltonian? Two cases:
1. If the element is on the diagonal, then we have a 1D subspace, and the evolution just applies a phase.
2. If the element is off the diagonal, then we need a 2D subspace.

H = [ 0      0  0      0  0  √2 i  ⋯  0
      0      3  0      0  0  0     ⋯  0
      0      0  0      0  0  0     ⋯  −√3+i
      0      0  0      1  0  0     ⋯  0
      0      0  0      0  2  0     ⋯  0
      −√2 i  0  0      0  0  0     ⋯  0
      ⋮     ⋮  ⋮      ⋮  ⋮  ⋮     ⋱  ⋮
      0      0  −√3−i  0  0  0     ⋯  0 ]
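The 2D case above is just a 2×2 matrix exponential: an off-diagonal element H_xy couples only |x⟩ and |y⟩. A sketch (the value of H_xy matches one entry of the example matrix; the time is arbitrary):

```python
# Evolution inside a 2D subspace of a 1-sparse Hamiltonian: exponentiate
# the 2x2 Hermitian block built from the off-diagonal element H_xy.
import numpy as np
from scipy.linalg import expm

Hxy = np.sqrt(2) * 1j
h2 = np.array([[0, Hxy], [np.conjugate(Hxy), 0]])  # Hermitian 2x2 block
t = 0.7
u2 = expm(-1j * h2 * t)        # exact evolution on span{|x>, |y>}

# Since h2 @ h2 = |Hxy|^2 * I, there is a closed form:
a = abs(Hxy)
closed = np.cos(a * t) * np.eye(2) - 1j * np.sin(a * t) * h2 / a
```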
Simulating 1-sparse case
Aharonov, Ta-Shma
2003
We are given a column number x. There are then 5 quantities that we want to calculate:
1. A bit registering whether the element is on or off the diagonal; i.e. whether x belongs to a 1D or 2D subspace.
2. The minimum number out of the (1D or 2D) subspace to which x belongs.
3. The maximum number out of the subspace to which x belongs.
4. The entries of H in the subspace to which x belongs.
5. The evolution under H for time t in the subspace.
We have a unitary operation that maps |x⟩|0⟩ to |x⟩ together with these five quantities in ancillary registers.
Simulating 1-sparse case
Aharonov, Ta-Shma
2003
We have a unitary operation that maps |x⟩|0⟩ to |x⟩ together with the subspace data in ancillary registers.
We consider a superposition of the two states in the subspace; both states yield the same subspace data, so applying this operation attaches a common ancilla state to the superposition.
A second operation implements the controlled operation based on the stored approximation of the unitary evolution: controlled on the ancillary registers, it applies that evolution within the subspace.
This gives us the evolved superposition, still with the subspace data attached.
Inverting the first operation then yields the evolved state with the ancillary registers returned to |0⟩.
Applications
2007: Discrete query NAND algorithm – Childs, Cleve, Jordan, Yeung
2009: Solving linear systems – Harrow, Hassidim, Lloyd
2009: Implementing sparse unitaries – Jordan, Wocjan
2010: Solving linear differential equations – Berry
2013: Algorithm for scattering cross section – Clader, Jacobs, Sprouse
Implementing unitaries
Jordan, Wocjan
2009
Construct a Hamiltonian from a unitary U as H = |0⟩⟨1| ⊗ U + |1⟩⟨0| ⊗ U†.
Now simulate evolution under this Hamiltonian.
Since H² = I, simulating for time π/2 gives e^{-iHπ/2} = −iH, so e^{-iHπ/2} |1⟩|ψ⟩ = −i |0⟩ U|ψ⟩: the unitary is applied up to a known phase.
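A numerical sketch of this construction, with an arbitrary 2×2 rotation chosen for U:

```python
# Embed a unitary U into the Hermitian H = [[0, U], [U^dag, 0]].
# Since H @ H = I, e^{-iH pi/2} = -iH, which applies U up to the phase
# -i while flipping the ancilla block.
import numpy as np
from scipy.linalg import expm

theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)
Z2 = np.zeros((2, 2))
H = np.block([[Z2, U], [U.conj().T, Z2]])

V = expm(-1j * H * np.pi / 2)
psi = np.array([1, 0], dtype=complex)
out = V @ np.concatenate([np.zeros(2), psi])   # input |1>|psi>
# out carries -i U|psi> in the |0> block
```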
Quantum simulation via walks
Three ingredients:
1. A Szegedy quantum walk
2. Coherent phase estimation
3. Controlled state preparation
The quantum walk has eigenvalues and eigenvectors related to those of the Hamiltonian.
By using phase estimation, we can estimate the eigenvalue, then implement the phase that is actually needed for Hamiltonian evolution.
Szegedy Quantum Walk
Szegedy
2004
The walk uses two reflections.
The first is controlled by the first register and acts on the second register.
Given some matrix P, the operator is defined as a reflection about the states |ψ_j⟩ = Σ_k √(P_{kj}) |j⟩|k⟩, i.e. R₁ = 2 Σ_j |ψ_j⟩⟨ψ_j| − 1.
The diffusion operator is controlled by the second register and acts on the first. Use a similar definition with a matrix Q: R₂ = 2 Σ_k |φ_k⟩⟨φ_k| − 1, with |φ_k⟩ = Σ_j √(Q_{jk}) |j⟩|k⟩.
Both are controlled reflections.
The eigenvalues and eigenvectors of the step of the quantum walk, W = R₂R₁, are related to those of a matrix formed from P and Q.
Szegedy walk for simulation
Berry, Childs
2012
Use a symmetric system, with both reflections defined from the same matrix, whose elements are determined by the Hamiltonian.
Then the eigenvalues and eigenvectors of the walk are related to those of the Hamiltonian.
In reality we need to modify to a "lazy" quantum walk, in which part of the amplitude is left on states that do not move.
Grover-style state preparation gives the required superpositions over the neighbours.
Szegedy walk for simulation
Berry, Childs
2012
Three-step process:
1. Start with the state in one of the subsystems, and perform controlled state preparation.
2. Perform steps of the quantum walk to approximate Hamiltonian evolution.
3. Invert the controlled state preparation, so the final state is in one of the subsystems.
Step 2 can just be performed with a small time step for the lazy quantum walk, or we can use phase estimation.
A Hamiltonian has eigenvalues λ, so evolution under the Hamiltonian has eigenvalues e^{-iλt}.
W is the step of a quantum walk, and has eigenvalues of the form ±e^{±i arcsin λ}.
The complexity is the maximum of the two resulting contributions: the number of walk steps and the cost of the phase estimation.