A GENERALIZATION OF DEUTSCH-JOZSA ALGORITHM AND
THE DEVELOPMENT OF A QUANTUM PROGRAMMING
INFRASTRUCTURE
by
Elton Ballhysa
B.S., Computer Engineering, Boğaziçi University, 2000
Submitted to the Institute for Graduate Studies in
Science and Engineering in partial fulfillment of
the requirements for the degree of
Master of Science
Graduate Program in Computer Engineering
Boğaziçi University
2004
A GENERALIZATION OF DEUTSCH-JOZSA ALGORITHM AND
THE DEVELOPMENT OF A QUANTUM PROGRAMMING
INFRASTRUCTURE
APPROVED BY:
Prof. A.C. Cem Say . . . . . . . . . . . . . . . . . . . . . . . .
(Thesis Supervisor)
Prof. Metin Arık . . . . . . . . . . . . . . . . . . . . . . . .
Assist. Prof. Murat Zeren . . . . . . . . . . . . . . . . . . . . . . . .
DATE OF APPROVAL: 17.06.2004
ACKNOWLEDGMENTS
I want to express my deepest gratitude to my supervisor, Prof. Cem Say, without
whose inspiration and motivation I would not have dared to delve into the subject of
quantum computing. His precious advice during our meetings, which so many times ran
beyond working hours, has been the perfect guide to the infinite world of research.
Special thanks also go to Prof. Metin Arık, Assist. Prof. Murat Zeren and Assist.
Prof. Muhittin Mungan for their willingness and their useful comments. Moreover, I want
to thank Prof. Mike Vyalyi for his precious guidelines, Dr. Høyer for answering our naïve
questions and Dr. Bernhard Ömer for sharing his knowledge. I would like to thank also my
colleague, Cem Keskin, for his useful discussions and insight in the initial stages of this
work.
I am grateful to Hitit Computer Services for the support and tolerance they have
shown toward my work.
Also, I want to thank my fiancée, Elona, for her careful editing work on the many
manuscripts of this thesis.
Finally, I want to thank my family for their support and motivation throughout these
years. Without their love and affection, I would have fallen at the very first hurdle, and it is
to them that I would like to dedicate this thesis.
ABSTRACT
A GENERALIZATION OF DEUTSCH-JOZSA ALGORITHM AND
THE DEVELOPMENT OF A QUANTUM PROGRAMMING
INFRASTRUCTURE
It has been estimated that, if miniaturization trends in computer technology continue
for the next 20 years, by that time only one atom will be needed to store one bit of
information. At such scales, our classical intuitions no longer work, and the laws of
quantum mechanics allow a quantum bit to exist in a superposition of its logical values.
The superposition and ensuing parallelism properties of quantum systems allow for faster
computation than offered by the classical computing paradigm. The field of quantum
computation examines the possibility of using these physical properties for solving
computational problems more efficiently.
In this thesis, we consider the problem of generating superpositions of arbitrary
subsets of basis states whose cardinalities are not necessarily powers of two. Two
alternative algorithms for this problem are examined with respect to complexity and
precision, and a variant based on the Grover iteration is shown to yield an algorithm with
one-sided error for a generalization of the Deutsch-Jozsa problem, where the task is to
decide whether the oracle function is constant or balanced on a specified subset of its domain.
We also propose a quantum programming infrastructure which translates a classical
irreversible program into the domain of quantum algorithms. A visual component of the
system outputs the corresponding quantum circuit in terms of either low level gates such as
CNOT, or higher level gates corresponding to programming language level operations.
ÖZET
A GENERALIZATION OF THE DEUTSCH-JOZSA ALGORITHM AND
THE DEVELOPMENT OF A QUANTUM PROGRAMMING
INFRASTRUCTURE
If the miniaturization trend in the dimensions of computer components continues at
its current pace, within 20 years it will become possible to represent one bit of information
with just a few atoms. At this scale, the familiar rules of classical physics lose their
validity. The laws of quantum mechanics allow a quantum bit to be in a superposition of
the values zero and one at the same time. The resulting parallelism properties of quantum
systems make faster computation possible than the classical model allows. Quantum
computation is the research field that studies how computational problems can be solved
more efficiently by exploiting these physical properties.
In this thesis, the problem of generating equiprobable superpositions of sets of basis
vectors whose cardinalities need not be powers of two is examined. Two algorithms
developed for this task are compared in terms of complexity and precision, and it is shown
that the alternative based on the Grover iteration allows the construction of a one-sided
error algorithm for a generalization of the Deutsch-Jozsa problem in which the goal is to
decide whether the black-box function is constant or balanced on a given subset of its
domain.
As a second contribution, a quantum programming infrastructure has been developed
which can translate a classical irreversible program into a form that can be run on quantum
computers. The visual component of the system can draw the quantum circuit
corresponding to a given program either in terms of low-level gates such as the
controlled-NOT gate, or in terms of higher-level gates corresponding to programming
language operations.
TABLE OF CONTENTS
ACKNOWLEDGMENTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
ABSTRACT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv
ÖZET . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
LIST OF FIGURES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
LIST OF TABLES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv
1. INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
2. QUANTUM COMPUTING CONCEPTS . . . . . . . . . . . . . . . . . . . . 5
2.1. Qubit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2. Quantum Registers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3. Quantum Gates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3.1. Single Qubit Gates . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3.2. Multiple Qubit Gates . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4. Measurement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.5. Entanglement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.6. Universality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.7. Reversible Computation . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.7.1. Reversibility of Quantum Gates . . . . . . . . . . . . . . . . . . . 34
2.7.2. Classical Computing and Reversibility . . . . . . . . . . . . . . . 36
2.7.3. Reversifying Mathematical Functions . . . . . . . . . . . . . . . 38
3. QUANTUM COMPUTING ALGORITHMS . . . . . . . . . . . . . . . . . . 41
3.1. Variants of the Deutsch-Jozsa Algorithm . . . . . . . . . . . . . . . . . . 42
3.1.1. Deutsch’s Algorithm for One-Bit Functions . . . . . . . . . . . . 42
3.1.2. Deutsch-Jozsa Algorithm for Multiple Bit Functions . . . . . . . . 46
3.2. Quantum Search Algorithm . . . . . . . . . . . . . . . . . . . . . . . . 48
3.2.1. The Single Solution Case . . . . . . . . . . . . . . . . . . . . . . 49
3.2.2. Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.2.3. Multiple Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.3. Factorization Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.3.1. Quantum Fourier Transform . . . . . . . . . . . . . . . . . . . . . 65
3.3.2. Phase Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . 67
3.3.3. Order Finding . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3.3.4. Shor’s Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4. ARBITRARY EQUIPROBABLE SUPERPOSITIONS . . . . . . . . . . . . 74
4.1. Algorithm KSV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.2. Algorithm G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.3. Comparison of the Algorithms . . . . . . . . . . . . . . . . . . . . . . . 80
4.4. A New Generalization of the Deutsch-Jozsa Algorithm . . . . . . . . . . 82
5. A PROGRAMMING INFRASTRUCTURE FOR REVERSIBLE AND
QUANTUM COMPUTING . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
5.1. Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
5.2. Register Representation . . . . . . . . . . . . . . . . . . . . . . . . . . 90
5.3. Structure of the Classical Function . . . . . . . . . . . . . . . . . . . . . 91
5.4. Classical Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
5.4.1. Arithmetic Operations . . . . . . . . . . . . . . . . . . . . . . . 93
5.4.1.1. Addition Operation . . . . . . . . . . . . . . . . . . . . 94
5.4.1.2. Subtraction Operation . . . . . . . . . . . . . . . . . . . . 96
5.4.1.3. Multiplication Operation . . . . . . . . . . . . . . . . . . 96
5.4.1.4. Increment Operation . . . . . . . . . . . . . . . . . . . . 96
5.4.1.5. Decrement Operation . . . . . . . . . . . . . . . . . . . . 97
5.4.2. Logical Operations . . . . . . . . . . . . . . . . . . . . . . . . . 97
5.4.2.1. AND Operation . . . . . . . . . . . . . . . . . . . . . . 98
5.4.2.2. OR Operation . . . . . . . . . . . . . . . . . . . . . . . 98
5.4.2.3. NOT Operation . . . . . . . . . . . . . . . . . . . . . . 98
5.4.3. Comparison Operations . . . . . . . . . . . . . . . . . . . . . . 98
5.4.3.1. Equality Comparison . . . . . . . . . . . . . . . . . . 99
5.4.3.2. Inequality Comparison . . . . . . . . . . . . . . . . . . 99
5.4.3.3. “Less than” Comparison . . . . . . . . . . . . . . . . . . 99
5.4.3.4. “Less than or equal to” Comparison . . . . . . . . . . . . 99
5.4.3.5. “Greater than” Comparison . . . . . . . . . . . . . . . . 99
5.4.3.6. “Greater than or equal to” Comparison . . . . . . . . . . 99
5.5. Control Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
5.5.1. Conditional Statement . . . . . . . . . . . . . . . . . . . . . . . 100
5.5.1.1. Reversible Conditional Statement . . . . . . . . . . . . . 101
5.5.1.2. Quantum If Statement . . . . . . . . . . . . . . . . . . . 102
5.5.1.3. Quantum If-Else Statement . . . . . . . . . . . . . . . . 105
5.5.2. Loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
5.5.3. G Gate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
5.6. Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
5.7. Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
5.8. Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
6. CONCLUSIONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
REFERENCES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
LIST OF FIGURES
Figure 2.1. Bloch sphere diagram . . . . . . . . . . . . . . . . . . . . . . . . 7
Figure 2.2. Qubits x and y passing through single qubit gates . . . . . . . . . . 16
Figure 2.3. Qubits x and y passing through two qubit gate . . . . . . . . . . . 16
Figure 2.4. CNOT gate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Figure 2.5. Controlled-U gate enabled when control qubit is in state 0 . . . . . 18
Figure 2.6. General controlled-U gate with n controls and m target qubits . . . 19
Figure 2.7. Toffoli gate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Figure 2.8. Quantum circuit producing an entangled state . . . . . . . . . . . 24
Figure 2.9. Rotation of a qubit around z-axis . . . . . . . . . . . . . . . . . . 29
Figure 2.10. Simulation of rotation about z-axis . . . . . . . . . . . . . . . . . 29
Figure 2.11. Simulation of rotation about y-axis . . . . . . . . . . . . . . . . . 30
Figure 2.12. Approximation of Hadamard gate with controlled-R gates . . . . . 32
Figure 2.13. Approximation of Hadamard gate with G gates . . . . . . . . . 32
Figure 2.14. Transformation of a qubit under a unitary operator . . . . . . . . . 34
Figure 2.15. Transformation of a qubit under the adjoint of a unitary operator . . 35
Figure 2.16. An arbitrary quantum circuit . . . . . . . . . . . . . . . . . . . . . 35
Figure 2.17. The reverse circuit of the quantum circuit in Figure 2.16 . . . . . . 35
Figure 2.18. Fan-out gate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Figure 2.19. Reversified form of an arbitrary n-to-one irreversible function . . . 40
Figure 3.1. The Deutsch algorithm . . . . . . . . . . . . . . . . . . . . . . . . 43
Figure 3.2. Quantum circuit for Deutsch algorithm . . . . . . . . . . . . . . . 43
Figure 3.3. Quantum circuit for Deutsch-Jozsa algorithm . . . . . . . . . . . 47
Figure 3.4. The Grover algorithm . . . . . . . . . . . . . . . . . . . . . . . . 49
Figure 3.5. Illustration of the action of Pw reflection . . . . . . . . . . . . . . 52
Figure 3.6. Illustration of the action of Ps reflection . . . . . . . . . . . . . . 54
Figure 3.7. Controlled-Z gate . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Figure 3.8. Grover iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Figure 3.9. Illustration of the action of first Grover iteration . . . . . . . . . . 59
Figure 3.10. Plot of probability of observing the desired state vs. iteration angle . 61
Figure 3.11. Circuit for quantum Fourier transform . . . . . . . . . . . . . . . 66
Figure 3.12. Circuit for quantum phase estimation algorithm . . . . . . . . . . 68
Figure 3.13. Shor’s factorization algorithm . . . . . . . . . . . . . . . . . . . . 73
Figure 4.1. Algorithm KSV’ for the generation of state ηn,q . . . . . . . . . . . 75
Figure 4.2. Circuit for approximation of rotation operator R(ν/π) . . . . . . . . 76
Figure 4.3. Circuit for partial permutation operator $Q_S$ . . . . . . . . . . . . 77
Figure 4.4. Algorithm KSV . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Figure 4.5. Algorithm G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Figure 4.6. Circuit for new generalization of Deutsch-Jozsa algorithm . . . . . 83
Figure 4.7. Plot of minimum success probability vs. q/N for different values of c 87
Figure 5.1. General flow of reversifier program . . . . . . . . . . . . . . . . . 90
Figure 5.2. Reversible synthesis of irreversible functions . . . . . . . . . . . . 93
Figure 5.3. Reversible implementation of addition . . . . . . . . . . . . . . . 94
Figure 5.4. Reversible implementation of AND function . . . . . . . . . . . . 98
Figure 5.5. Conditional statements . . . . . . . . . . . . . . . . . . . . . . . . 100
Figure 5.6. Schematic representation of a conditional statement . . . . . . . . 100
Figure 5.7. Irreversible conditional statement . . . . . . . . . . . . . . . . . . 101
Figure 5.8. Reversified conditional statement . . . . . . . . . . . . . . . . . . 102
Figure 5.9. Modification of conditional statement . . . . . . . . . . . . . . . . 103
Figure 5.10. CNOT gate as an if statement . . . . . . . . . . . . . . . . . . . 103
Figure 5.11. Controlled-U gate as an if statement . . . . . . . . . . . . . . . . 103
Figure 5.12. General if statement . . . . . . . . . . . . . . . . . . . . . . . 104
Figure 5.13. Distributed if statement . . . . . . . . . . . . . . . . . . . . . . . 104
Figure 5.14. Implementation of a conditional block as a quantum circuit . . . . 104
Figure 5.15. Controlled-U gate enabled when control value is 0 . . . . . . . . . 115
Figure 5.16. Controlled-U gate enabled when control value is 1 . . . . . . . . . 115
Figure 5.17. Simple else statement . . . . . . . . . . . . . . . . . . . . . . . . 115
Figure 5.18. Simple if-else statement . . . . . . . . . . . . . . . . . . . . . . 116
Figure 5.19. Distributed if-else statement . . . . . . . . . . . . . . . . . . . 116
Figure 5.20. Circuit implementation of a simple if-else statement . . . . . . . 116
Figure 5.21. Reordered implementation of if-else statement . . . . . . . . . . 107
Figure 5.22. Quantum for loop . . . . . . . . . . . . . . . . . . . . . . . . . 110
Figure 5.23. Simple for loop . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Figure 5.24. Supported syntax of source function . . . . . . . . . . . . . . . . 112
Figure 5.25. Code for irreversible addition . . . . . . . . . . . . . . . . . . . . 113
Figure 5.26. Low level visual representation of addition . . . . . . . . . . . . . 113
Figure 5.27. Code for reversified addition function . . . . . . . . . . . . . . . . 114
Figure 5.28. Code for irreversible test function . . . . . . . . . . . . . . . . . . 114
Figure 5.29. Reversified program code for test function . . . . . . . . . . . . . 115
Figure 5.30. Code for reversified test function . . . . . . . . . . . . . . . . . . 115
LIST OF TABLES
Table 2.1. Truth table of CNOT gate . . . . . . . . . . . . . . . . . . . . . . 38
Table 5.1. Truth table of reversible one bit addition . . . . . . . . . . . . . . 95
1. INTRODUCTION
Computer technology is omnipresent in everyday life and has become indispensable
for a broad range of activities. One of the main reasons for its widespread use has been the
tremendous development in speed and miniaturization. The laptop that I used to write this
thesis weighs less than 1.5 kilograms, and yet is hundreds of times faster than the state of
the art servers of the 1980s. The distinction becomes even more obvious if we look back at
the 1950s. This development has been mainly due to the underlying technology that is
employed in order to carry out the computation. Computers were using vacuum tubes in
the 1950s, transistors in the 1960s, integrated circuits in the 1970s, and they are currently
based on microchips. However, from a mathematical point of view, the Cray
supercomputer is still equivalent to the old ENIAC of 1946. Of course, the Cray is much
faster than the ENIAC, but they both lie within the same computing paradigm. This model
is called the Turing machine after its inventor, Alan M. Turing, who published it in
1936 [1], ten years before the ENIAC was made operational.
Although a detailed treatment of the Turing machine model is beyond the scope of
this thesis, it can be envisaged as a machine with an infinite amount of memory, from
which it reads information, processes it and writes down the results. In this process, it is
governed by a transition function which, given the current state of the machine and the
symbol at the memory location it is currently reading, decides what to do next. The model
is deterministic, because at any instant it has only one choice of action.
The Turing machine model was proven to be identical to other computational models
that were devised by other theorists of the time, such as Emil Post and Alonzo Church.
Although very different in appearance, the fact that all those models were proven to be
completely equivalent went some way to explain the universal nature of the concept of
computation embodied by the Turing machine. As early as 1936, Turing realized that any
work on the limits of the capabilities of computational systems should not be based on any
physical implementation, but rather on theoretical concepts which would be more robust to
technological changes. The validity of his model up to this day is a clear indication of this
point. However, despite his best efforts, the model is still based on classical physics, and
would cease to be valid in the quantum level. For example, Turing defined his model with
the assumption that any memory cell would contain one of a finite set of symbols (either 0
or 1, in the common conceptualization) at any time. As we will see, this need not be true in
the quantum world.
Let us turn back to the miniaturization trends of computer technology. If we accept as
a quantitative measure the number of atoms needed to represent one bit of information, it
was shown that this measure has been decreasing by a factor of 1.5 every year [2]. If one
extrapolates this trend, it suggests that by the year 2020 or so, we will reach the level of
one atom per bit. At this level, the classical understanding of reality does not hold any
more and a new physical model, quantum mechanics, takes over. Although it is often at
odds with our intuition, quantum mechanics has been shown time and again to be the most
accurate model for describing the world both at the microscopic and the macroscopic
levels.
The most important implication of computing at the quantum level is the observation
that the particles which will be utilized to represent a single bit can be at the same time
both in the 0 and 1 states, each to a certain degree. The quantum bit, qubit for short, can
exist in a superposition of the logical states 0 and 1. The speed of a computing system can
be regarded as a quantitative change that does not affect the theoretical model on which it
is founded. However, the capability of a qubit to exist in a superposition of both its logical
states is a qualitative change from the classical model.
The idea of quantum computing dates back to the 1980s. Richard Feynman, the 1965
Nobel Prize winner, observed that an arbitrary quantum physical system can be simulated
by classical computers only with an exponential slowdown [3]. On the other hand, he
suggested that if one could build a computer that would behave according to the laws of
quantum mechanics, the simulation could be done rather more efficiently.
Benioff had already presented a quantum view of computation in 1980 [4]. However,
the first well-founded model for explicit quantum computation was presented by Deutsch
in 1985 [5]. Bernstein and Vazirani showed the existence of a universal quantum Turing
machine which could simulate any quantum computer with polynomial slowdown [6]. In
1992, Deutsch and Jozsa demonstrated a problem for which a quantum algorithm runs
provably exponentially faster than any classical deterministic algorithm [7].
The most celebrated discovery to date in the field of quantum computing was
made by Shor in 1994, when he introduced polynomial time algorithms for factoring
integers and extracting discrete logarithms [8]. Other important algorithms are the search
algorithm proposed by Grover in 1996 [9] and the quantum random walk graph traversal
algorithm, whose superiority over classical methods was demonstrated by Childs et al. in
2002 [10].
As of today, quantum computing remains a theoretical novelty. There is still a long
way to go before a general purpose quantum computer can be built. The state of the art
quantum computer of the moment is based on a molecule, a perfluorobutadienyl iron
complex [11], which has been used to successfully factor the number 15 by applying a
(highly optimized) version of Shor's factorization algorithm.
The publication of Shor's algorithm caused a great deal of interest in the field of
quantum computing. However, as of today, no genuinely different algorithm has been
found other than the ones listed above. The reason might be that there are not many
problems for which quantum computing would be useful in the complexity theoretic sense.
Alternatively, it might be due to the conceptual difficulties that quantum phenomena pose
to us [12].
We provide two separate contributions to the field in this thesis. Motivated by the
observation that most known fast quantum algorithms start by initializing a quantum
register to the superposition of all the basis states that it may assume classically, we
consider the more general problem of generating equiprobable superpositions of arbitrary
subsets of basis states whose cardinalities are not necessarily powers of two. Two
alternative algorithms are examined for this purpose. One variant is based on the solution
of a similar problem described by Kitaev et al. [13] and the other one on Grover’s search
algorithm. We compare these algorithms with respect to circuit complexity and precision
and show that each has its strong and weak points. We then propose a generalization of the
Deutsch-Jozsa problem where the task is to find whether the oracle function is constant or
balanced on a subset, rather than the whole, of its domain. Integration of the Grover-based
variant as the superposition generation module is shown to yield a one-sided error
algorithm for this problem.
We also provide a quantum programming infrastructure which translates classical
irreversible programs into the domain of quantum computing. A visual representation
component of the infrastructure prints the resulting quantum circuit in two possible levels
of detail. In the low level representation, the quantum circuit is displayed at the level of
CNOT gates, which, while not very insightful for our understanding, may be the format
that a future compiler would need. In the high level representation, the circuit is displayed
in terms of more complex gates which correspond to actual programming language level
operations in the algorithm, and which would still have to be decomposed in order to be
executable on a quantum computer.
In the remainder of this thesis, we will cover the basic quantum computing concepts
in Chapter 2. A detailed discussion of three basic quantum algorithms, namely the
Deutsch-Jozsa algorithm, Grover’s search algorithm and Shor’s factorization algorithm, is
given in Chapter 3. In Chapter 4, we will discuss alternative algorithms for generating
equiprobable superpositions and a new generalization of the Deutsch-Jozsa algorithm. In
Chapter 5, we will discuss our new infrastructure for quantum programming. Chapter 6 is a
conclusion.
2. QUANTUM COMPUTING CONCEPTS
Quantum computing is a computing paradigm that obeys the rules of quantum
mechanics, in contrast to classical computing, which was designed in a framework of
classical physics. Classical computers work deterministically, following a single
computation trajectory. On the other hand, a quantum computer can be initialized in a
superposition of several states, each of which would follow its own trajectory in parallel. It
is precisely this phenomenon, quantum parallelism, that quantum computing tries to make
use of in its quest for faster algorithms.
2.1. Qubit
Computers are machines that process information. The basic unit of information in
current computer technology is the binary digit, or bit for short. It assumes either the value
0 or 1. By analogy, the basic unit of information in quantum computing is called quantum
bit, or qubit for short. Mathematically, a qubit can be represented as a unit vector in a two-
dimensional Hilbert space. Theoretically, any two-level quantum system may be used for
this purpose. Examples may include polarization state of a photon, or the energy level of a
hydrogen atom [13,14]. For obvious reasons, we will denote the basis states of the Hilbert
space as $|0\rangle$ and $|1\rangle$. Once the basis states are fixed, an arbitrary state of a qubit can be
described by the equation
$$|\psi\rangle = \alpha_0 |0\rangle + \alpha_1 |1\rangle, \qquad (2.1)$$
where $\alpha_0$ and $\alpha_1$ are complex numbers. These numbers are called the amplitudes of the
basis states $|0\rangle$ and $|1\rangle$. In vector notation, we can fix the following notations for the basis
states:
$$|0\rangle = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad (2.2)$$
$$|1\rangle = \begin{bmatrix} 0 \\ 1 \end{bmatrix}. \qquad (2.3)$$
This leads us to an arbitrary state notation of the form
$$|\psi\rangle = \alpha_0 |0\rangle + \alpha_1 |1\rangle = \alpha_0 \begin{bmatrix} 1 \\ 0 \end{bmatrix} + \alpha_1 \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} \alpha_0 \\ \alpha_1 \end{bmatrix}. \qquad (2.4)$$
The condition that a qubit is a unit vector means that the amplitudes $\alpha_0$ and $\alpha_1$ are subject
to the normalization constraint
$$|\alpha_0|^2 + |\alpha_1|^2 = 1, \qquad (2.5)$$
where $|\alpha_i|$ denotes the norm of the complex number $\alpha_i$. The most important consequence of
the way qubits are defined is their capability to exist in “mixed” states of both $|0\rangle$ and $|1\rangle$.
In fact, there are uncountably many such mixed states for a qubit, as compared to only two
“pure” states; this constitutes a qualitative difference between quantum and classical
information. Indeed, one could use a single qubit to encode all the encyclopedias in the
universe, if we were able to measure its amplitudes. However, the laws of quantum
mechanics state that, while a qubit may exist in an arbitrary superposition of its basis
states, it will collapse to one of those basis states upon measurement. Exactly which value
will be measured depends on the respective amplitudes.
In fact, there is a very close relation between the amplitude $\alpha_i$ of a qubit $|\psi\rangle$ and the
probability $p_i$ that the basis state $|i\rangle$ will be observed upon measurement of $|\psi\rangle$:
$$p_i = |\alpha_i|^2. \qquad (2.6)$$
As an example, consider a qubit in the state $|\psi_1\rangle = \frac{1}{\sqrt{2}}|0\rangle + \frac{1}{\sqrt{2}}|1\rangle$. If we measure this
qubit, the probability of observing $|0\rangle$ is 1/2, and the probability of observing $|1\rangle$ is also 1/2.
Note, however, that the respective probabilities are the same for any of the following states:
$|\psi_2\rangle = -\frac{1}{\sqrt{2}}|0\rangle + \frac{1}{\sqrt{2}}|1\rangle$, $|\psi_3\rangle = \frac{1}{\sqrt{2}}|0\rangle + \frac{i}{\sqrt{2}}|1\rangle$, and $|\psi_4\rangle = -\frac{i}{\sqrt{2}}|0\rangle + \frac{1}{\sqrt{2}}|1\rangle$.
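These equal measurement probabilities are easy to check numerically. The following sketch is illustrative only (it is not part of the thesis software; NumPy is assumed as the linear algebra tool): it builds the four example states as amplitude vectors and applies Equation 2.6.

```python
import numpy as np

# The four example states, written as amplitude vectors
# [alpha_0, alpha_1] over the basis {|0>, |1>}.
s = 1 / np.sqrt(2)
states = {
    "psi1": np.array([s, s]),
    "psi2": np.array([-s, s]),
    "psi3": np.array([s, 1j * s]),
    "psi4": np.array([-1j * s, s]),
}

for name, psi in states.items():
    probs = np.abs(psi) ** 2           # p_i = |alpha_i|^2  (Equation 2.6)
    assert np.isclose(probs.sum(), 1)  # normalization constraint (2.5)
    print(name, probs.round(3))        # every state gives [0.5, 0.5]
```

The signs and the factors of $i$ disappear when the norms are taken, which is exactly why all four states are indistinguishable under this single measurement.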
In fact, for any given value $p$, there are uncountably many qubit states upon
measurement of which we would observe $|0\rangle$ with probability $p$ and $|1\rangle$ with probability
$1-p$, since there are uncountably many complex numbers with the same norm. A very
useful way of visualizing a qubit is the so-called “Bloch sphere diagram” [14]. As seen in
Figure 2.1, a qubit state is actually a point on the surface of the Bloch sphere.
Figure 2.1. Bloch sphere diagram
Without loss of generality, we can write a qubit in the form
$$|\psi\rangle = e^{i\gamma}\left(\cos\frac{\theta}{2}\,|0\rangle + e^{i\varphi}\sin\frac{\theta}{2}\,|1\rangle\right), \qquad (2.7)$$
where $\theta$, $\varphi$ and $\gamma$ are angles. The term $e^{i\gamma}$ has no observable effect on $|\psi\rangle$; the angle $\theta$
determines the ratio of the probabilities of observing $|0\rangle$ or $|1\rangle$, and the angle $\varphi$ is called the
phase angle and determines the ratio of the real and imaginary magnitudes in the amplitude
of $|1\rangle$.
Note that the coefficients of the basis states $|0\rangle$ and $|1\rangle$ in Equation 2.7 can be opened up as
$$\alpha_0 = e^{i\gamma}\cos\frac{\theta}{2} = \cos\frac{\theta}{2}\cos\gamma + i\,\cos\frac{\theta}{2}\sin\gamma, \qquad (2.8)$$
$$\alpha_1 = e^{i(\gamma+\varphi)}\sin\frac{\theta}{2} = \sin\frac{\theta}{2}\cos(\gamma+\varphi) + i\,\sin\frac{\theta}{2}\sin(\gamma+\varphi). \qquad (2.9)$$
Carrying out the appropriate simplifications, it then follows that
$$|\alpha_0|^2 = \cos^2\frac{\theta}{2}\cos^2\gamma + \cos^2\frac{\theta}{2}\sin^2\gamma = \cos^2\frac{\theta}{2}, \qquad (2.10)$$
$$|\alpha_1|^2 = \sin^2\frac{\theta}{2}\cos^2(\gamma+\varphi) + \sin^2\frac{\theta}{2}\sin^2(\gamma+\varphi) = \sin^2\frac{\theta}{2}, \qquad (2.11)$$
thus satisfying the normalization constraint (2.5).
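The algebra above can be spot-checked numerically. The sketch below is an illustration only (NumPy assumed): it draws random angles $\gamma$, $\theta$, $\varphi$, forms the amplitudes of Equation 2.7, and verifies Equations 2.10 and 2.11 together with the normalization constraint.

```python
import numpy as np

# Random angles gamma, theta, phi, as in Equation 2.7.
rng = np.random.default_rng(0)
gamma, theta, phi = rng.uniform(0, 2 * np.pi, 3)

# Amplitudes from Equations 2.8 and 2.9 (in exponential form).
alpha0 = np.exp(1j * gamma) * np.cos(theta / 2)
alpha1 = np.exp(1j * (gamma + phi)) * np.sin(theta / 2)

# Equations 2.10 and 2.11: the probabilities depend only on theta.
assert np.isclose(abs(alpha0) ** 2, np.cos(theta / 2) ** 2)
assert np.isclose(abs(alpha1) ** 2, np.sin(theta / 2) ** 2)
# The normalization constraint (2.5) then holds automatically.
assert np.isclose(abs(alpha0) ** 2 + abs(alpha1) ** 2, 1)
```

Note that neither $\gamma$ nor $\varphi$ appears in the probabilities, mirroring the statement that $e^{i\gamma}$ has no observable effect and that $\varphi$ influences only the relative phase.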
2.2. Quantum Registers
In classical computing, we define a register as a collection of $n$ bits which are
collectively used to hold a specific value. Because a classical bit may be in either the 0 or
the 1 state, an $n$-bit register may be in one of $2^n$ possible states. Similarly, we define a
quantum register as a collection of $n$ qubits. Quantum mechanics states that the joint state
space of two quantum systems, such as qubits, is the tensor product of their individual
state spaces.
In the general case, the tensor product of an $n \times m$ matrix $A$ and a matrix $B$ is given as
$$A \otimes B = \begin{bmatrix} a_{11} & \cdots & a_{1m} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nm} \end{bmatrix} \otimes B = \begin{bmatrix} a_{11}B & \cdots & a_{1m}B \\ \vdots & \ddots & \vdots \\ a_{n1}B & \cdots & a_{nm}B \end{bmatrix}, \qquad (2.12)$$
and the tensor product of a vector $a$ of length $n$ and a vector $b$ of length $m$ is defined as
$$a \otimes b = \begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix} \otimes \begin{bmatrix} b_1 \\ \vdots \\ b_m \end{bmatrix} = \begin{bmatrix} a_1 b_1 & a_1 b_2 & \cdots & a_1 b_m & a_2 b_1 & \cdots & a_n b_m \end{bmatrix}^T. \qquad (2.13)$$
As a direct consequence, a quantum system consisting of two qubits has four
dimensions, and the respective basis states are
$$|0\rangle = |00\rangle = |0\rangle \otimes |0\rangle, \qquad (2.14)$$
$$|1\rangle = |01\rangle = |0\rangle \otimes |1\rangle, \qquad (2.15)$$
$$|2\rangle = |10\rangle = |1\rangle \otimes |0\rangle, \qquad (2.16)$$
$$|3\rangle = |11\rangle = |1\rangle \otimes |1\rangle. \qquad (2.17)$$
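A tensor product of this kind is exactly what NumPy's `kron` routine computes, so the four two-qubit basis states can be built mechanically from the single-qubit ones. This is an illustrative sketch, not code from the thesis infrastructure:

```python
import numpy as np

# Single-qubit basis vectors (Equations 2.2 and 2.3).
ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

# np.kron implements the tensor product of Equation 2.13, so iterating
# over both factors yields |00>, |01>, |10>, |11> in order.
basis = [np.kron(a, b) for a in (ket0, ket1) for b in (ket0, ket1)]

for i, ket in enumerate(basis):
    # |i> is the length-4 vector with a 1 in component i+1, 0 elsewhere.
    expected = np.zeros(4)
    expected[i] = 1
    assert np.array_equal(ket, expected)
```

The same loop extended to $n$ factors produces the $2^n$ basis states of an $n$-qubit register.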
The state of a two qubit register would then be given as
$$|\psi\rangle = \alpha_0 |0\rangle + \alpha_1 |1\rangle + \alpha_2 |2\rangle + \alpha_3 |3\rangle, \qquad (2.18)$$
subject to the condition
$$|\alpha_0|^2 + |\alpha_1|^2 + |\alpha_2|^2 + |\alpha_3|^2 = 1. \qquad (2.19)$$
In the vector notation, in general the basis state i would be a vector of size four,
with a 1 in the (i+1)th column and 0 elsewhere. An arbitrary state of a quantum register of
size two would therefore be represented as [ ] T 3210 ααααψ = .
The same reasoning can be extended to define quantum registers of arbitrary size n.
In this case, the register is an N dimensional quantum system as the basis states would be
0 , 1 , ... 1−N , where N = 2n. Alternatively, the basis state i might be denoted as
12... uuun , where . An arbitrary state of such a quantum register is defined
as
( 212... uuui n= )
10
∑−
=
=1
0
N
ii iαψ , (2.20)
subject to the normalization constraint
Σ_{i=0}^{N−1} |αᵢ|² = 1. (2.21)
The probability pᵢ of observing state |i⟩ when measuring |ψ⟩ is again given as |αᵢ|². In vector notation, an arbitrary state of the register is denoted as |ψ⟩ = [α₀ … α_{N−1}]ᵀ.
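The register construction described above can be checked numerically. The following is a small illustrative sketch (not part of the thesis; the helper name `register` is ours): it builds the basis state |10⟩ = |2⟩ of a two qubit register with the tensor product and verifies the normalization constraint (2.21).

```python
import numpy as np

# Illustrative sketch (not thesis code): build a register state vector by
# tensoring single-qubit state vectors together.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

def register(*qubits):
    """Tensor together single-qubit state vectors into one register state."""
    state = np.array([1.0])
    for q in qubits:
        state = np.kron(state, q)
    return state

psi = register(ket1, ket0)                 # the basis state |10> = |2>
print(psi)                                 # [0. 0. 1. 0.]
print(np.isclose(np.sum(np.abs(psi) ** 2), 1.0))   # True: constraint (2.21)
```

The single 1 lands in the (2+1)th entry of the vector, matching the description of basis states above.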
2.3. Quantum Gates
In classical computing, "gate" is a term often referring to some standard Boolean function, such as NOT, AND, OR or NAND. These gates are the building blocks of a modern computer. In quantum computing, gates are the elements that enable us to transform the state of a quantum register. For reasons that we will see shortly, quantum gates are reversible, in the sense that given the output we can determine what the input was. This property implies that the number of input and output qubits must be the same.
2.3.1. Single Qubit Gates
Definition 2.1. An operation on a qubit, called a unary quantum gate, is a unitary mapping U: H₂ → H₂, where Hₙ denotes an n-dimensional Hilbert space.
The gate U defines a linear operation of the form
U|0⟩ → a|0⟩ + b|1⟩,
U|1⟩ → c|0⟩ + d|1⟩.
But how would gate U act on a qubit in an arbitrary state |ψ⟩ = α₀|0⟩ + α₁|1⟩? It turns out that, because U is a linear operator, it is sufficient to know its action only on the basis states. The following equalities illustrate this point:

U(α₀|0⟩ + α₁|1⟩) = α₀U|0⟩ + α₁U|1⟩ (2.22)
= α₀(a|0⟩ + b|1⟩) + α₁(c|0⟩ + d|1⟩)
= (α₀a + α₁c)|0⟩ + (α₀b + α₁d)|1⟩.
In vector notation, we can represent gate U as a 2×2 matrix and qubit |ψ⟩ as a vector of length two, where the ith column of U represents the action of the gate on basis state |i−1⟩. The gate whose action was defined above can be represented by the matrix

U = [a c; b d].

If we recall the Bloch sphere diagram, qubits were represented as points on the surface of the sphere.
A quantum gate acting on a single qubit transforms it from state |ψ⟩ to state |ψ'⟩. Since |ψ⟩ and |ψ'⟩ both have to satisfy the normalization constraint, they correspond to two, possibly different, points on the surface of the Bloch sphere. Quantum gates acting on a single qubit can therefore be thought of as rotations of the Bloch sphere. The fact that both |ψ⟩ and |ψ'⟩ correspond to normalized vectors leads to the only constraint on quantum gates, namely unitarity:
U†U = I, (2.23)

where U† is the conjugate transpose of U, also known as the adjoint of U:

U† = (Uᵀ)* = [a* b*; c* d*]. (2.24)
Making use of vector notation, we can express the action of gate U on qubit |ψ⟩ in a very convenient way, simply as the multiplication of matrix U by the corresponding vector |ψ⟩:

|ψ'⟩ = U|ψ⟩ = [a c; b d][α₀; α₁] = [aα₀ + cα₁; bα₀ + dα₁]. (2.25)
As an example of a single qubit gate, we will define the quantum NOT gate, denoted as X, as the gate which transforms basis state |0⟩ to basis state |1⟩ and vice versa, that is, X|0⟩ → |1⟩ and X|1⟩ → |0⟩. Therefore, the matrix representation of the quantum NOT gate is

X = [0 1; 1 0].
Its action on an arbitrary state is to swap the amplitudes of the basis states |0⟩ and |1⟩:

X[α₀; α₁] = [α₁; α₀]. (2.26)
On the Bloch sphere, the action of the NOT gate is actually a rotation by π radians about
the x axis.
Another crucial gate in quantum computing is the so-called Walsh-Hadamard gate,
Hadamard for short, denoted as H. Its action on basis states is
H|0⟩ = |+⟩ = (|0⟩ + |1⟩)/√2, (2.27)
H|1⟩ = |−⟩ = (|0⟩ − |1⟩)/√2, (2.28)
and the corresponding matrix is
H = (1/√2)[1 1; 1 −1].
If we apply the Hadamard gate on any of the basis states, we reach a state which,
when measured, has the same probability of observing either 0 or 1. Applying Hadamard
two times in succession leaves the qubit intact. This can be verified by noting that H2 = I.
On the Bloch sphere, the action of the Hadamard gate corresponds to a rotation of the
sphere by π/2 radians about the y axis and then a rotation by π radians about the x axis.
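Both properties are easy to verify numerically. The sketch below (an illustration, not thesis code) checks that H applied to |0⟩ yields equal measurement probabilities and that H² = I.

```python
import numpy as np

# Checking two properties of the Hadamard gate stated above.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

plus = H @ np.array([1.0, 0.0])            # H|0> = |+>
probs = np.abs(plus) ** 2                  # Born-rule probabilities
print(np.allclose(probs, [0.5, 0.5]))      # True: equal chance of 0 and 1
print(np.allclose(H @ H, np.eye(2)))       # True: applying H twice is identity
```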
The Hadamard gate provides an illustration of the difference between qubits and probabilistic bits, pbits for short. A probabilistic bit is a memory cell on the tape of a probabilistic Turing machine that uses a binary alphabet S = {0,1}. At any instant, a memory cell may contain 0 or 1, depending on the outcome of the probabilistic events that accompany the decision of probabilistic branchings [15]. The state of the pbit can then be completely specified by the probability p with which it contains 1. The state of a pbit can then be expressed in its general form as [b] = (1−p)[0] + p[1]. Now assume that a specific state transition fires, the action of which on the memory cell under consideration is

[0] → (1/2)[0] + (1/2)[1],
[1] → (1/2)[0] + (1/2)[1].

This is what the Hadamard gate would correspond to in the transition function of a probabilistic Turing machine. It can be easily verified that applying this transition twice to either [0] or [1] would yield the state

(1/2)[0] + (1/2)[1].
By contrast, applying the Hadamard transition twice to state |0⟩ yields |0⟩ again, as the amplitudes of |1⟩ destructively cancel each other, whereas the amplitudes of |0⟩ add up, constructively amplifying each other. This example illustrates the importance of the phase angle in quantum computing, as it is because of the phase difference that the cancellation of the amplitudes of |1⟩ occurs. There is more to qubits than just probabilities. Consider the state |+⟩ defined in Equation 2.27. When measured, the outcome of this state resembles that of a fair coin toss, only more fundamentally so. If we know the initial position of the coin, the torque with which we toss it, the surface on which it will land, the air resistance and so on, covering all possible factors that affect the whole process, then we can exactly predict whether the coin will come up heads or tails. Most of the time, however, it is more efficient to accept our ignorance of all the factors listed above (or our unwillingness to perform the tedious process of calculation) and simply regard the outcome as a random event. When measuring the state |+⟩, on the other hand, we have no chance whatsoever of predicting the observed value. We can only state that there is a 50 per cent chance of observing 0 and 50 per cent of observing 1. We can thus use the Hadamard gate to generate perfectly random numbers, as opposed to the pseudorandom numbers that are generated by current classical computers.
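The random-number idea can be sketched as follows (illustrative only: numpy's pseudorandom generator stands in for the genuinely random physical measurement).

```python
import numpy as np

# Simulate repeatedly preparing |+> = H|0> and measuring it; on real quantum
# hardware each outcome would be perfectly random, here a PRNG stands in.
rng = np.random.default_rng(0)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
plus = H @ np.array([1.0, 0.0])
p = np.abs(plus) ** 2                      # [0.5, 0.5]

bits = rng.choice([0, 1], size=1000, p=p)  # 1000 simulated measurements
print(0.4 < bits.mean() < 0.6)             # True: roughly half ones
```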
Other important gates are

Y = [0 −i; i 0], Z = [1 0; 0 −1], S = [1 0; 0 i], T = [1 0; 0 e^{iπ/4}].
X, Y and Z are also known as the Pauli matrices, S is the phase gate and T is called the π/8 gate. Note that gate T is, up to a global phase constant, equivalent to the gate

T = e^{iπ/8} [e^{−iπ/8} 0; 0 e^{iπ/8}]. (2.29)
An important observation is that if we exponentiate the Pauli matrices, we obtain
generalized rotation matrices about the corresponding axes.
Rx(θ) = e^{−iθX/2} = [cos(θ/2) −i sin(θ/2); −i sin(θ/2) cos(θ/2)],

Ry(θ) = e^{−iθY/2} = [cos(θ/2) −sin(θ/2); sin(θ/2) cos(θ/2)],

Rz(θ) = e^{−iθZ/2} = [e^{−iθ/2} 0; 0 e^{iθ/2}].
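Since each Pauli matrix A satisfies A² = I, the exponential reduces to e^{−iθA/2} = cos(θ/2)I − i sin(θ/2)A, which gives exactly the rotation matrices above. A numerical spot check (illustrative, not thesis code):

```python
import numpy as np

# Verify e^{-i theta A/2} = cos(theta/2) I - i sin(theta/2) A for Pauli A,
# and compare against the explicit rotation matrices given above.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rot(A, theta):
    """exp(-i*theta*A/2) for any matrix A with A @ A = I."""
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * A

theta = 1.234
c, s = np.cos(theta / 2), np.sin(theta / 2)
Rx = np.array([[c, -1j * s], [-1j * s, c]])
Ry = np.array([[c, -s], [s, c]])
Rz = np.array([[np.exp(-1j * theta / 2), 0], [0, np.exp(1j * theta / 2)]])

print(np.allclose(rot(X, theta), Rx),
      np.allclose(rot(Y, theta), Ry),
      np.allclose(rot(Z, theta), Rz))      # True True True
```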
Note that Rz(θ) can be written, up to a global phase e^{−iθ/2}, as

Rz(θ) = e^{−iθ/2} [1 0; 0 e^{iθ}]. (2.30)
For θ = π, the rotation matrix becomes

Rz(π) = [1 0; 0 −1].
This introduces a phase factor of −1 to the |1⟩ component. This observation will be important in the analysis of Grover's search algorithm in Chapter 3.
There is a famous theorem [14] that states that an arbitrary single qubit operation can
be expressed as a series of rotations about the y and z axes.
Theorem 2.1. For any unitary single qubit operation U, there exist real numbers α, β, γ and δ such that U = e^{iα} Rz(β) Ry(γ) Rz(δ).
The theorem implies that if we are allowed the freedom to use y and z rotations by any angle (even angles that are transcendental numbers), we can realize an arbitrary single qubit operator exactly, with zero error. Note, however, that in practical settings we can only obtain rational approximations of transcendental values, and for this reason most quantum algorithms are not exact, but just "bounded error" methods.
2.3.2. Multiple Qubit Gates
We have seen that the joint state of two individual systems can be obtained by taking the tensor product of the individual states. Consider the situation in Figure 2.2. Two individual qubits, with states |x⟩ and |y⟩, are passed through respective single qubit gates U and V. If we consider the two qubits as a single register with state |z⟩, so that |z⟩ = |x⟩ ⊗ |y⟩, then what would be the operator that acts on state |z⟩?
Figure 2.2. Qubits x and y passing through single qubit gates U and V respectively
It turns out that just as the overall state of the two qubits is given by their tensor product, the overall action of two operators acting on the two different qubits is also given by their tensor product, W = U ⊗ V; see Figure 2.3. If only the topmost qubit is passed through gate U and the other one is left alone, the resulting overall gate is the tensor product W = U ⊗ I, where I is the identity matrix and can be regarded as a NOOP.
Figure 2.3. Qubits passing through two qubit gate W
Quantum gates such as W, which can be expressed as the tensor product of some other gates, are said to be decomposable. In general, for an n qubit gate U, we need to know its action on each of the basis states |i⟩, 0 ≤ i ≤ N−1, where N = 2^n.
Consider the gate U whose action on basis state |i⟩ is denoted as

U|i⟩ = Σ_{j=0}^{N−1} αⱼ|j⟩.

Then the (i+1)th column in the matrix for gate U will be the column vector [α₀ … α_{N−1}]ᵀ.
Consider the two qubit CNOT gate, whose action on basis states is

CN|00⟩ → |00⟩,
CN|01⟩ → |01⟩,
CN|10⟩ → |11⟩,
CN|11⟩ → |10⟩.

The gate realizes the transformation |x, y⟩ → |x, x⊕y⟩. The first qubit remains unchanged, and the second qubit is assigned the XOR of the two input qubits. It can also be viewed as a controlled NOT operation, hence the name CNOT. When the first qubit is 0, the second qubit is unchanged. If the first qubit is 1, the second qubit is inverted as though passed through a NOT gate. The first qubit is called the control qubit since it is the one that controls whether the second qubit, also called the target qubit, will be inverted or not. Figure 2.4 shows the visual representation of the CNOT gate.
Figure 2.4. The CNOT gate implements the function |x, y⟩ → |x, x⊕y⟩
The concepts utilized in the logic behind the CNOT gate can be extended to handle
the following modifications:
• the action on the target qubit need not be inversion; it can be any single qubit unitary operator U
• the gate need not fire when the control qubit is 1; instead, the firing condition can be set to any basis state of the control qubit
Figure 2.5. Controlled-U gate that acts when the control qubit is in state |0⟩
Figure 2.5 shows a controlled gate which applies unitary operator U to the target qubit when the control qubit is in state |0⟩. The action of the gate on the basis states is
CU|00⟩ → |0⟩ ⊗ U|0⟩,
CU|01⟩ → |0⟩ ⊗ U|1⟩,
CU|10⟩ → |10⟩,
CU|11⟩ → |11⟩.
The concept of controlled operation is not confined to single qubit control and single
qubit operators. In the most general case, we can have n control qubits, m target qubits and
an m qubit operator U. When each of the control qubits is in the desired state, operator U
acts upon the target qubits. A schematic representation is shown in Figure 2.6.
Figure 2.6. General controlled-U gate with n control qubits and m target qubits
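For the single-control case, the controlled-U matrix can be assembled from projectors onto the firing and non-firing states of the control qubit, CU = P_rest ⊗ I + P_fire ⊗ U. A sketch (the helper name and its interface are ours, not the thesis's):

```python
import numpy as np

def controlled(U, fire_on=1):
    """Controlled-U with one control qubit: apply U to the target register
    only when the control qubit is in basis state |fire_on>."""
    m = U.shape[0]
    P_fire = np.zeros((2, 2))
    P_fire[fire_on, fire_on] = 1           # projector onto |fire_on>
    P_rest = np.eye(2) - P_fire            # projector onto the other state
    return np.kron(P_rest, np.eye(m)) + np.kron(P_fire, U)

X = np.array([[0.0, 1.0], [1.0, 0.0]])
CNOT = controlled(X)                       # fires on control |1>, as in Figure 2.4
print(CNOT.astype(int))
# [[1 0 0 0]
#  [0 1 0 0]
#  [0 0 0 1]
#  [0 0 1 0]]
```

The gate of Figure 2.5, which fires when the control is |0⟩, would be `controlled(U, fire_on=0)`.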
As an example, consider the double controlled NOT gate, also known as the Toffoli
gate, with two control qubits and one target qubit. When both control qubits are in state |1⟩, the target qubit is inverted.
Figure 2.7. Toffoli gate
The action of the gate on the basis states can be summarized as follows. States |000⟩ up to |101⟩ are left intact, state |110⟩ is transformed to |111⟩, and state |111⟩ is transformed to |110⟩. It can easily be seen that the third (target) qubit is inverted when both of the first two (control) qubits are in the state |1⟩. The Toffoli gate realizes the transformation |x₁, x₂, y⟩ → |x₁, x₂, y ⊕ x₁x₂⟩.
Let us get back to the Hadamard gate. Remember that it transforms state |0⟩ to the equiprobable superposition of |0⟩ and |1⟩. Now consider the case when two individual qubits are passed through Hadamard gates. The resulting overall two qubit gate is

H ⊗ H = (1/2)[1 1 1 1; 1 −1 1 −1; 1 1 −1 −1; 1 −1 −1 1]. (2.31)
When this gate is applied to a two qubit register initially in the state |00⟩, the output is in the state |ψ₂⟩ = (1/2)(|0⟩ + |1⟩ + |2⟩ + |3⟩). When measuring |ψ₂⟩, the probability pᵢ of observing state |i⟩ is 1/4 for all i. In the general case, n qubits each passed through a Hadamard gate define the Hadamard transform. The matrix representation of the Hadamard transform can be recursively defined as
H^{⊗n} = H ⊗ H^{⊗(n−1)} = (1/√2)[H^{⊗(n−1)} H^{⊗(n−1)}; H^{⊗(n−1)} −H^{⊗(n−1)}], (2.32)

where

H^{⊗1} = H.
Note that H^{⊗n}|0⟩ = (1/√N) Σ_{i=0}^{N−1} |i⟩. Therefore, the probability pᵢ of observing state |i⟩ upon measurement is 1/N for all i. From this viewpoint, we can use this procedure to generate random numbers in the range 0..N−1.
In the more general case of an arbitrary basis state |x⟩ as input,

H^{⊗n}|x⟩ = (1/√(2^n)) Σ_{y=0}^{N−1} (−1)^{x·y} |y⟩, (2.33)

where

x·y = x₀y₀ ⊕ x₁y₁ ⊕ … ⊕ x_{n−1}y_{n−1}. (2.34)
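Equation 2.33 can be checked numerically for a small n. The sketch below (an illustration, not thesis code) builds the three-qubit Hadamard transform recursively and compares every entry against (−1)^{x·y}/√(2^n).

```python
import numpy as np

# Entry (y, x) of the n-qubit Hadamard transform should be (-1)^(x.y)/sqrt(N),
# where x.y is the parity of the bitwise AND of x and y (Equation 2.34).
n = 3
N = 2 ** n
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

Hn = np.array([[1.0]])
for _ in range(n):
    Hn = np.kron(Hn, H)                    # recursive definition (2.32)

def dot_mod2(x, y):
    return bin(x & y).count("1") % 2       # x0*y0 xor x1*y1 xor ...

expected = np.array([[(-1) ** dot_mod2(x, y) / np.sqrt(N)
                      for x in range(N)] for y in range(N)])
print(np.allclose(Hn, expected))           # True
```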
2.4. Measurement
A quantum system, such as a quantum register, evolves according to the unitary evolution described by the unitary operators that are applied. During this evolution, the system is assumed to be closed, and no interactions with the outside world are allowed. However, a closed system is of no use if we are not allowed to "open it up" and observe its internal state. The process of opening a quantum system corresponds to measuring and observing its state. We have briefly mentioned the postulate of quantum mechanics which states that a qubit might exist in uncountably infinitely many different superpositions of its basis states; however, when measured, its state collapses to one of the basis states. Quantum measurement is defined by a collection of measurement operators {Mᵢ}, where i is the potential outcome of the measurement. While in principle measurement in any basis can be performed, we will only consider measurement in the computational basis. Then Mᵢ is a sparse matrix with a single 1 in the (i+1)th diagonal entry and all other entries set to 0:
Mᵢ = |i⟩⟨i| = diag(0, …, 0, 1, 0, …, 0). (2.35)
The notation ⟨φ| stands for the conjugate transpose of |φ⟩: the column vector of |φ⟩ is transposed and every vector element is conjugated. For a quantum system whose state before measurement is |ψ⟩, the probability of observing i is

p(i) = ⟨ψ|Mᵢ†Mᵢ|ψ⟩. (2.36)

If indeed the outcome of the measurement is i, then the state of the system collapses to

|ψ'⟩ = Mᵢ|ψ⟩ / √(p(i)). (2.37)
As an example, consider a qubit in the state a|0⟩ + b|1⟩, on which we will perform a measurement in the basis {|0⟩, |1⟩}. The measurement operators are

M₀ = |0⟩⟨0| = [1 0; 0 0]

and

M₁ = |1⟩⟨1| = [0 0; 0 1].
Then the corresponding probabilities are

p(0) = ⟨ψ|M₀†M₀|ψ⟩ = [a* b*] [1 0; 0 0] [a; b] = |a|²,

p(1) = ⟨ψ|M₁†M₁|ψ⟩ = [a* b*] [0 0; 0 1] [a; b] = |b|².
The corresponding states of the qubit after measurement are

|ψ'₀⟩ = M₀|ψ⟩ / √(p(0)) = (1/|a|) [a; 0] = (a/|a|)|0⟩

and

|ψ'₁⟩ = M₁|ψ⟩ / √(p(1)) = (1/|b|) [0; b] = (b/|b|)|1⟩,
in the cases that 0 or 1 is observed, respectively.
For the case of measurement in the computational basis, if we measure the quantum register |φ⟩ = Σ_{i=0}^{N−1} αᵢ|i⟩, the probability of observing i is simply

p(i) = |αᵢ|², (2.38)

and the state of the register after measurement collapses to

|φ'⟩ = |i⟩. (2.39)
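Computational-basis measurement as described by Equations 2.37-2.39 can be simulated directly on a state vector; below is an illustrative sketch (the function name is ours, not the thesis's).

```python
import numpy as np

rng = np.random.default_rng(1)

def measure(psi):
    """Sample an outcome i with probability |alpha_i|^2 and return it
    together with the collapsed state M_i|psi>/sqrt(p(i))."""
    probs = np.abs(psi) ** 2               # p(i) = |alpha_i|^2   (2.38)
    i = rng.choice(len(psi), p=probs)
    collapsed = np.zeros_like(psi)
    collapsed[i] = psi[i] / np.sqrt(probs[i])
    return i, collapsed

psi = np.ones(4, dtype=complex) / 2        # equal superposition of |0>..|3>
i, after = measure(psi)
print(i in range(4))                                   # True
print(np.isclose(np.sum(np.abs(after) ** 2), 1.0))     # True: still normalized
```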
2.5. Entanglement
If the state |ψ⟩ of a two qubit register can be decomposed as the tensor product of two single qubit states |φ⟩ and |ϕ⟩, i.e. |ψ⟩ = |φ⟩ ⊗ |ϕ⟩, then the state is said to be decomposable. A state that is not decomposable is said to be entangled. As an illustration, consider the quantum circuit in Figure 2.8.
Figure 2.8. Quantum circuit producing an entangled state
Assume both qubits are initialized to |0⟩. After the Hadamard gate, the state of the first qubit becomes

|x⟩ = (|0⟩ + |1⟩)/√2,

and the joint state of both qubits becomes

(|00⟩ + |10⟩)/√2.

After applying CNOT to |x⟩ and |y⟩, their final joint state becomes

(|00⟩ + |11⟩)/√2.

If we measure the quantum register composed of |x⟩ and |y⟩, the probabilities of observing |00⟩ or |11⟩ are both 0.5. This means that the register is in a half |00⟩ and half |11⟩ state.
What about the individual states of |x⟩ and |y⟩ at this stage? Let us assume that

|x⟩ = [α₀; α₁]

and

|y⟩ = [β₀; β₁]
and try to find them. From the relation

|x⟩ ⊗ |y⟩ = [α₀; α₁] ⊗ [β₀; β₁] = [α₀β₀; α₀β₁; α₁β₀; α₁β₁] = (1/√2)[1; 0; 0; 1],

we get the set of equations

α₀β₀ = 1/√2,
α₀β₁ = 0,
α₁β₀ = 0,
α₁β₁ = 1/√2,

which has no solution. (If α₀β₁ = 0, then either α₀ = 0, contradicting the first equation, or β₁ = 0, contradicting the last.) It means that there exist no two individual qubit states whose tensor product would yield the state

(|00⟩ + |11⟩)/√2.
This is precisely the idea of entanglement that we set out to define at the beginning of this
section.
What do we see if we try to measure only the first qubit? It is clear that the probability of observing 0 is 0.5, as is the probability of observing 1. What is interesting is that once we observe the first qubit, we have also affected the state of the second qubit. Whatever basis state the first qubit collapses to after the measurement, the same happens to the second qubit, although we have left it untouched.
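Decomposability can also be tested mechanically: arranging the four amplitudes of a two qubit state into a 2×2 matrix A, with A[x][y] the amplitude of |xy⟩, the state is a tensor product exactly when A has rank 1. An illustrative sketch (not thesis code; the function name is ours):

```python
import numpy as np

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)        # (|00> + |11>)/sqrt(2)
product = np.kron(np.array([1, 1]) / np.sqrt(2),  # |+> tensor |0>
                  np.array([1.0, 0.0]))

def amplitude_rank(psi):
    """Rank of the 2x2 amplitude matrix; rank 1 means decomposable."""
    return np.linalg.matrix_rank(psi.reshape(2, 2))

print(amplitude_rank(bell))      # 2  -> entangled
print(amplitude_rank(product))   # 1  -> decomposable
```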
2.6. Universality
We have seen that a quantum system follows a unitary evolution until it interacts
with an external environment. In quantum computing we use the notion of unitary
operators to represent unitary evolution. A unitary operator U acting on n qubits is described by a 2^n × 2^n matrix subject to the condition U†U = I. If the quantum computer is ever to become a practical reality, we will have the practical problem of deciding which operators to supply in its physical implementation. For one thing, it will have to support only a finite set of such operators. On the other hand, there are uncountably infinitely many unitary operators for any dimension n. Should we restrict our algorithms to only a standard set, and if so, what should be the standard set of operators?
We do not have headaches of this kind in classical computing, because there exist
universal sets of gates such as {NAND}, {AND, NOT}, {OR, NOT}, etc. Using only gates
from a universal set, we can implement any Boolean function that we wish. There is a
similar, though not identical, situation in quantum computing. There are several well-established results [14] on the universality of quantum gates.
Theorem 2.2. An arbitrary unitary operator U can be expressed exactly as the product of
unitary operators U1, U2, ..., Un, where each Ui acts nontrivially only on two basis states.
Theorem 2.3. An arbitrary unitary operator U can be expressed exactly using only CNOT
gates and arbitrary single qubit gates.
Theorem 2.4. Single qubit gates can be approximated to an arbitrary accuracy (though
not necessarily exactly) using only Hadamard, phase and π/8 gates (respectively denoted
as H, S and T gates).
Theorem 2.4 identifies the set composed of Hadamard, phase and π/8 gates as
universal for quantum computing. In fact this set can be used to synthesize, up to an
arbitrary accuracy, any unitary transformation. If we are concerned with the overall
observational result of the computation rather than emulating the arbitrary transformations
that are used in it, a simpler two-qubit gate with only real amplitudes can be shown to
suffice for universal quantum computation [16]. This gate has the form
G = [1 0 0 0; 0 1 0 0; 0 0 cos φ −sin φ; 0 0 sin φ cos φ], (2.40)
where angle φ is incommensurate with π, that is, φ cannot be written in the form mπ/n, where m and n are integers. (For instance, the angle φ = cos⁻¹(4/5), among infinitely many others, can be shown to have this property [14].) The gate G can also be written as
G = [I 0; 0 R(φ)],

where

R(φ) = [cos φ −sin φ; sin φ cos φ] (2.41)

is a rotation gate in the plane spanned by the basis states |0⟩ and |1⟩.
So gate G is actually a controlled-R(φ) gate. It can be easily verified that
Gᵏ = [1 0 0 0; 0 1 0 0; 0 0 cos kφ −sin kφ; 0 0 sin kφ cos kφ]. (2.42)
The choice of φ as an irrational multiple of π guarantees that the sequence φ, 2φ, … (taken modulo 2π) never enters a cycle, and so for each new k, kφ corresponds to a new value, previously unencountered in the interval [0, 2π). Although there are uncountably many points in this interval, we can get as close as we want to any point by choosing an appropriate value of k. Extending this observation to the gate G defined above, we can efficiently prepare any gate of the form
F(θ) = [1 0 0 0; 0 1 0 0; 0 0 cos θ −sin θ; 0 0 sin θ cos θ] (2.43)

by approximating it with Gᵏ, such that kφ is sufficiently close to θ modulo 2π.
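The search for a suitable power k can be sketched in a few lines (illustrative code; the function name is ours). With φ = cos⁻¹(4/5) it reproduces the approximations used later in this section.

```python
import math

phi = math.acos(4 / 5)                     # an angle incommensurate with pi

def best_k(theta, max_k):
    """The k in 1..max_k for which k*phi (mod 2pi) is closest to theta."""
    return min(range(1, max_k + 1),
               key=lambda k: abs(k * phi % (2 * math.pi) - theta))

print(best_k(math.pi / 4, 200))            # 11
print(best_k(math.pi, 200))                # 83
```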
In this section, we will show that we can simulate any arbitrary quantum computation by using only the gate G. Remember that Theorem 2.1 states that any single qubit operator U can be exactly expressed, up to a global phase constant, in terms of a rotation about the y axis sandwiched between two rotations about the z axis,

U = e^{iα} Rz(β) Ry(γ) Rz(δ). (2.44)

If gate G is universal for quantum computing, then it should be able to simulate the action of single qubit gates too. In order to show this, it is sufficient to show it for arbitrary rotation gates about the z and y axes, as Equation 2.44 suggests. In doing so we will use a single ancilla qubit called RI, whose orthonormal states we will call |R⟩ and |I⟩. Equation 2.1, describing an arbitrary qubit state, can be rewritten as

|φ⟩ = r₀e^{iθ₀}|0⟩ + r₁e^{iθ₁}|1⟩. (2.45)
From the perspective of quantum computing, the two qubit state that is obtained with the introduction of the RI ancilla qubit,

|φE⟩ = r₀cos θ₀|0⟩|R⟩ + r₀sin θ₀|0⟩|I⟩ + r₁cos θ₁|1⟩|R⟩ + r₁sin θ₁|1⟩|I⟩, (2.46)

is equivalent to the one described in Equation 2.45. The state |φE⟩, where all amplitudes are real numbers, is said to be the encoded form of |φ⟩. The purpose of the RI ancilla qubit is to keep track of the corresponding real and imaginary parts of the complex coefficients in Equation 2.45.
Consider applying the rotation Rz(τ) to an arbitrary qubit in the normal form (described in Equation 2.45), as shown in Figure 2.9.
Figure 2.9. Rotation of qubit about the z-axis
It can be shown [16] that the action of such a gate can be simulated by a controlled-R(τ) gate acting on the ancilla qubit, where the control qubit is the one which would normally be subjected to the z-rotation. The situation is illustrated in Figure 2.10.
Figure 2.10. Simulation of z-rotations
Note that the controlled-R(τ) gate that is used to simulate the action of the original z rotation, Rz(τ), is in fact a gate of the form F(τ); we have just changed the order of its inputs, but the gate itself remains the same. In the same way, it can be shown that arbitrary rotations about the y axis, Ry(τ), can be simulated with a controlled-R(τ/2) gate by using an ancilla qubit prepared in state |1⟩. Note that this ancilla qubit is different from the RI ancilla qubit used in the implementation of Rz(τ). The corresponding circuit is shown in Figure 2.11.
Figure 2.11. Simulation of y-rotations
Since we can simulate arbitrary rotations about the y and z axes, we can in general simulate the action of any single qubit gate, up to a global phase constant. Now consider the gate F(π/2),

F(π/2) = [1 0 0 0; 0 1 0 0; 0 0 0 −1; 0 0 1 0]. (2.47)
For the purposes of universal computation, F(π/2) together with arbitrary single qubit
gates is sufficient [16]. We conclude that the set of gates F(θ), which in turn can be
efficiently synthesized using only gates of type G, is sufficient to simulate any quantum
circuit. Note that in this case, we would only have to deal with real numbers in our gate
specifications.
Now let us illustrate this approach with an example. We will try to simulate the Hadamard gate by using only gates of type G. By Theorem 2.1, we know that there exist real numbers α, β, γ and δ such that

H = e^{iα} Rz(β) Ry(γ) Rz(δ). (2.48)
Therefore

(1/√2)[1 1; 1 −1] = e^{iα} [e^{−iβ/2} 0; 0 e^{iβ/2}] [cos(γ/2) −sin(γ/2); sin(γ/2) cos(γ/2)] [e^{−iδ/2} 0; 0 e^{iδ/2}]

= [e^{i(α−β/2−δ/2)} cos(γ/2)  −e^{i(α−β/2+δ/2)} sin(γ/2); e^{i(α+β/2−δ/2)} sin(γ/2)  e^{i(α+β/2+δ/2)} cos(γ/2)].

Now we can set γ = π/2 and get rid of the 1/√2 factor, thus leading to

[1 1; 1 −1] = [e^{i(α−β/2−δ/2)}  −e^{i(α−β/2+δ/2)}; e^{i(α+β/2−δ/2)}  e^{i(α+β/2+δ/2)}],

from which we derive the set of equations

α − β/2 − δ/2 = 0,
α − β/2 + δ/2 = π,
α + β/2 − δ/2 = 0,
α + β/2 + δ/2 = π.

We obtain α = π/2, β = 0, γ = π/2 and δ = π as a solution.
So the Hadamard gate can be rewritten as

H = e^{iπ/2} Ry(π/2) Rz(π), (2.49)

which can be easily verified. Now we need to simulate the action of Ry(π/2) and Rz(π). Remember that Ry(π/2) can be simulated by F(π/4) controlled by some qubit in state |1⟩. Rz(π) can be simulated by F(π) with the RI ancilla qubit as the target qubit. On the other hand, π/4 ≈ 11θ and π ≈ 83θ (modulo 2π), where θ = cos⁻¹(4/5). These are the best approximations when we constrain ourselves to fewer than 200 iterations. If we allow more iterations we can obtain better approximations, e.g. with a maximum of 1000 iterations π/4 ≈ 714θ, with 10000 iterations π/4 ≈ 2452θ, and so on. The overall circuit that simulates a Hadamard gate is shown in Figure 2.12. Since the controlled-R(θ) gate, where θ = cos⁻¹(4/5), is actually the G gate that we defined, the circuit can be rewritten as shown in Figure 2.13.
Figure 2.12. Approximation of Hadamard gate with controlled-R gates
Figure 2.13. Approximation of Hadamard gate with 94 G gates
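The numbers appearing in this circuit can be checked directly (an illustration, not thesis code): with θ = cos⁻¹(4/5), 11θ approximates π/4 and 83θ approximates π modulo 2π, which is where the 0.7001 and 0.7141 entries of the circuit's matrix come from.

```python
import math

theta = math.acos(4 / 5)

print(abs(11 * theta % (2 * math.pi) - math.pi / 4) < 0.01)   # True
print(abs(83 * theta % (2 * math.pi) - math.pi) < 0.01)       # True
print(round(math.cos(11 * theta), 4), round(math.sin(11 * theta), 4))
# 0.7001 0.7141  (compare with cos(pi/4) = sin(pi/4) = 0.7071)
```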
The overall matrix of this circuit is

M = (G¹¹ ⊗ I)(I ⊗ G⁸³) =
[1 0 0 0 0 0 0 0;
 0 1 0 0 0 0 0 0;
 0 0 1 0 0 0 0 0;
 0 0 0 1 0 0 0 0;
 0 0 0 0 0.7001 0 0.7141 0;
 0 0 0 0 0 0.7001 0 0.7141;
 0 0 0 0 0.7141 0 −0.7001 0;
 0 0 0 0 0 0.7141 0 −0.7001].
We know that the first qubit has to be in state |1⟩ in order for this gate to correctly simulate the action of the Hadamard gate. Let us consider the cases where the RI qubit is in state |R⟩ and |I⟩, which can be thought to correspond to |0⟩ and |1⟩, respectively.
In the case when RI is in the |R⟩ state, the overall action on the second qubit is given by the matrix formed by the intersection of rows 5 and 7 with columns 5 and 7 (which correspond to the basis states |100⟩ and |110⟩),

M₅,₇ = [0.7001 0.7141; 0.7141 −0.7001].
Exactly the same matrix is obtained from the intersection of rows 6 and 8 with columns 6 and 8, which corresponds to the action on the second qubit when the RI qubit is in the |I⟩ state:

M₆,₈ = M₅,₇ = [0.7001 0.7141; 0.7141 −0.7001].
Note that both M₅,₇ and M₆,₈ provide a good approximation of the Hadamard gate, whose matrix is

H ≅ [0.7071 0.7071; 0.7071 −0.7071].
How should we interpret these results? Remember that the |R⟩ and |I⟩ components of the RI ancilla qubit respectively represent the real and imaginary parts of the qubit under consideration in its standard form (defined in Equation 2.45). When the complex coefficients of the input qubit to the Hadamard gate have only real parts, M₅,₇ will be applied, therefore correctly approximating the behavior of the Hadamard gate. Conversely, when the coefficients have only imaginary parts, M₆,₈ will be applied, which is still according to our expectations. Finally, when the coefficients have both real and imaginary parts, both M₅,₇ and M₆,₈ will be applied, each to a certain degree. But since both of their individual actions correctly approximate the Hadamard gate, the action of their superposition will also result in a correct approximation.
2.7. Reversible Computation
An important result linking computing and thermodynamics, known as the Landauer principle [17], states that for every bit of information which is erased, at least k_B T ln 2 units of energy dissipate into the environment, where k_B is the universal constant known as Boltzmann's constant, and T is the temperature of the environment in which the action is
taking place. Since computers running quantum algorithms are supposed to be
thermodynamically reversible, a “quantum programmer” has to observe additional
constraints that are usually ignored during classical programming. Throughout this thesis,
we will consider reversibility at the level of functions as described in the following
definition:
Definition 2.2. A function with n inputs and m outputs is said to be reversible if for all
outputs we can uniquely determine its inputs.
2.7.1. Reversibility of Quantum Gates
We know from previous sections of this chapter that quantum registers are actually
quantum systems and the operators that are applied on them determine the evolution of
their state in time. When considering the matrix representation of operators, we also know
that the product of any such operator with its adjoint operator should yield the identity
matrix. Now consider the circuit in Figure 2.14, where U is an arbitrary unitary operator.
Figure 2.14. Transformation of a quantum register under a unitary operator
We know that the states of the register prior to and after applying operator U are related through the equation |φ'⟩ = U|φ⟩. If we multiply both sides of the equation by U†, we obtain

|φ⟩ = U†|φ'⟩. (2.50)
Equation 2.50 suggests that we can reproduce the input state from the output state of the
register by applying the operator U† to the latter as illustrated in Figure 2.15.
Figure 2.15. Transformation of a quantum register under the adjoint of a unitary operator
In this sense, the transformation implemented by a unitary operator U is reversible. Since all the operators in the domain of quantum computing satisfy the unitarity condition by definition, we conclude that every quantum program is reversible. Extending this line of reasoning, the reverse computation of any quantum circuit is obtained by applying the adjoint of every operator in the original circuit in the reverse order. This idea is illustrated in Figure 2.17, which shows the reverse of the circuit shown in Figure 2.16.
Figure 2.16. An arbitrary quantum circuit
Figure 2.17. The reverse circuit of the quantum circuit in Figure 2.16
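This reversal rule is easy to verify numerically. The sketch below (illustrative only; the small circuit is our own choice) runs a few gates forwards, then applies the adjoints in reverse order and recovers the input state.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]])            # the phase gate
circuit = [H, S, H]                        # gates applied left to right

psi = np.array([0.6, 0.8], dtype=complex)  # an arbitrary normalized input
out = psi
for U in circuit:
    out = U @ out                          # forward computation
back = out
for U in reversed(circuit):
    back = U.conj().T @ back               # adjoints in reverse order

print(np.allclose(back, psi))              # True: input recovered
```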
2.7.2. Classical Computing and Reversibility
By contrast, classical programs are not reversible in general. In fact, most Boolean gates that are used to implement the building blocks of classical computers are not reversible; AND, OR, NAND and XOR are irreversible, just to name a few. For an illustration, consider the AND gate. If we observe the output bit to be 1, we can infer that both input bits are also 1. However, if the output bit is 0, then all one can say about the state of the input bits is that at least one of them is 0; we cannot say in which one of the possible states 00, 01 or 10 the input bits are. As an example of a reversible classical gate, consider the NOT gate. We can determine the input bit just by applying another NOT gate to the output bit.
Even though classical computing in general is not reversible, every classical program
can be modified to an equivalent reversible one, if we allow for an increase in the number
of inputs and outputs. In order to see this, first remember that any boolean function f of the
form

y = f(x₁, x₂, …, xₙ)
can be synthesized by using only gates from the set {AND, OR, NOT, FANOUT}. This set
of gates is said to be universal for classical computing. The mention of the FANOUT gate,
illustrated in Figure 2.18, is important because it often goes unnoted in logic design theory.
In fact, the FANOUT gate acts as a replicator that prepares identical copies of the bit.
Figure 2.18. The FANOUT gate
The capability to prepare multiple copies of some given bit is considered trivial in classical computing. However, this is not the case in quantum computing, as it would violate the No-Cloning Theorem [18], which states that we cannot implement the transformation |φ⟩|ψ⟩ → |φ⟩|φ⟩.
In fact, one can easily show that this transformation is not reversible and therefore
there cannot be a quantum gate which implements it, because all quantum gates have to be
reversible. We can reduce the set of universal gates for classical computing to {NAND,
FANOUT} because we can synthesize AND, OR and NOT by using only NAND and
FANOUT. Now recall the Toffoli gate defined in Section 2.3.2, Figure 2.7. We presented
it as a quantum gate but it can be instantly defined as a classical gate by reducing the inputs
to the classical values of 0 and 1 only. It can be seen to implement the transformation
Toffoli(x, y, z) = (x, y, z ⊕ xy). (2.51)
The Toffoli gate is reversible because given the outputs x’, y’ and z’, we can determine the
original inputs x, y and z. Equations 2.13 and 2.14 show that we can synthesize both
NAND and FANOUT gates by using only a Toffoli gate and constant inputs 0 and 1.
Toffoli(x, y, 1) = (x, y, NAND(x, y)) (2.52)
Toffoli(1, y, 0) = (1, y, y) (2.53)
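The two constructions in Equations 2.52 and 2.53 can be checked exhaustively over all classical inputs. The following is an illustrative sketch (the function names are ours, not from the text):

```python
def toffoli(x, y, z):
    """Classical Toffoli gate: the target z is flipped exactly when x = y = 1."""
    return (x, y, z ^ (x & y))

def nand(x, y):
    # Equation 2.52: fix the third input to 1 and read off the third output.
    return toffoli(x, y, 1)[2]

def fanout(y):
    # Equation 2.53: Toffoli(1, y, 0) = (1, y, y) replicates the bit y.
    return toffoli(1, y, 0)[1:]

# The Toffoli gate is reversible: it permutes the 8 possible input triples.
bits = (0, 1)
images = {toffoli(x, y, z) for x in bits for y in bits for z in bits}
assert len(images) == 8
```

Since NAND and FANOUT together are universal, this check confirms that any boolean gate can be built from the reversible Toffoli gate alone.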
In general, a boolean gate may have different numbers of inputs and outputs, say m and
n respectively. In this case it can be thought of as performing a transformation of m-bit strings to
n-bit strings of the form f: B^m → B^n, where B = {0,1} as before. A necessary but not
sufficient condition for such a gate to be reversible is that the number of inputs be
equal to the number of outputs, m = n. Let n denote the dimension of such gates, i.e. the
number of inputs (outputs). Then the function of the gate is f: B^n → B^n. In order for the
function f to be reversible, no two input strings should be mapped to the same output
string. This condition can be formulated as
f(x) = f(y) ⟺ x = y,
which in turn means that the function f is both one-to-one and onto, that is, f is a
one-to-one correspondence. Such functions can also be treated as permutations, since the list of the
outputs is just a reordering of the list of inputs. Consider the CNOT gate as an example. It
implements the function
f_CN(x, y) = (x, x ⊕ y). (2.54)
The truth table for this function is shown in Table 2.1.
Table 2.1. Truth table of CNOT gate.
(x,y) fCN(x,y)
00 00
01 01
10 11
11 10
The function fCN implements the permutation PCN={00,01,11,10}. In matrix notation,
a reversible boolean function is represented by a matrix which has exactly one entry set to
1 and all other entries to 0 in each row and each column.
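The permutation property of f_CN can be verified mechanically. The sketch below (names ours, not from the text) checks that distinct inputs always map to distinct outputs:

```python
def f_cn(x, y):
    # Equation 2.54: the control x passes through, the target becomes x XOR y.
    return (x, x ^ y)

inputs = [(x, y) for x in (0, 1) for y in (0, 1)]
outputs = [f_cn(x, y) for x, y in inputs]
# Distinct inputs map to distinct outputs, so the output list is a
# reordering (permutation) of the input list: here P_CN = {00, 01, 11, 10}.
assert sorted(outputs) == inputs
```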
We have seen that boolean gates can be thought of as functions which map binary
strings to binary strings. There are N = 2^n different input strings for boolean functions with n
inputs and n outputs. Such functions are uniquely determined by specifying the list of
output strings ordered in the alphabetic order of the inputs. So the list L = {s_0, s_1, …, s_{N−1}}
specifies that s_i is the output for the input i. It is evident that each s_i may assume any of the
possible N = 2^n values. Then the total number of such functions is N^N = 2^{n·2^n}. This is indeed
a huge number; in fact, the latest release of Matlab overflows when computing it for n = 8.
Of this huge set of gates, what portion is reversible? To answer this question, we should
remember that no two input strings are allowed to be mapped to the same value. Therefore, the list
L should be a permutation, and there are N! = (2^n)! such permutations.
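These counts are easy to reproduce exactly, since Python's arbitrary-precision integers do not overflow even for n = 8. A small sketch (function names ours):

```python
import math

def num_boolean_functions(n):
    """Number of functions f: B^n -> B^n, i.e. N^N with N = 2^n."""
    N = 2 ** n
    return N ** N

def num_reversible_functions(n):
    """Number of reversible (bijective) such functions, i.e. N!."""
    return math.factorial(2 ** n)

# For n = 2: N = 4, so 4^4 = 256 functions, of which 4! = 24 are reversible.
assert num_boolean_functions(2) == 256
assert num_reversible_functions(2) == 24
# Arbitrary-precision integers handle the n = 8 case without overflow:
assert num_boolean_functions(8) == 2 ** (8 * 2 ** 8)
```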
2.7.3. Reversifying Mathematical Functions
In the previous section, we considered boolean gates and saw that they were
irreversible in general. We also saw that if we allow for an increase in the number of inputs
and/or outputs, we can synthesize universal gates such as NAND and FANOUT by the
reversible Toffoli gate. We conclude that we can synthesize any boolean gate by using
only reversible gates such as the Toffoli gate.
In this section, we consider arbitrary mathematical functions like
y = f(x_1, …, x_n)
of the form f: N^n → N, where N is the set of natural numbers. Such functions typically take
more than one input, say n of them, and return one output. If this is the case, for at least one output
instance we will not be able to uniquely determine the inputs and therefore the function is
irreversible. We saw in the previous section that in order to make such functions reversible,
that is, to “reversify” them, we should make the number of outputs equal to the number of
inputs. While different solutions exist depending on the function in consideration, a general
solution would be to mask the n-to-one irreversible function
y = f_NR(x_1, …, x_n), (2.55)
as an (n+1)-to-(n+1) reversible function
h_RE(x_1, …, x_n, y) = (x_1, …, x_n, g(y, f_NR(x_1, …, x_n))), (2.56)
where g is a two-to-one function that satisfies the property that given the output and one of
the inputs, we can uniquely determine the other input. Addition, subtraction and XOR are
examples of such functions. Let us decide on addition as the auxiliary function this time. Then
the reversible function that would simulate the irreversible function of Equation 2.55 is
h_RE(x_1, …, x_n, y) = (x_1, …, x_n, y + f_NR(x_1, …, x_n)). (2.57)
Figure 2.19 shows a visual representation of such a reversible function.
Figure 2.19. The reversible function h_RE(x_1, …, x_n, y) = (x_1, …, x_n, y + f_NR(x_1, …, x_n))
As an example, consider multiplication. It can be expressed as MULT: N^2 → N.
Clearly, it is not reversible, because given any output y, there are infinitely many pairs
(x_1, x_2) whose product gives y. However, the function REVMULT: N^3 → N^3, which is
defined as
REVMULT(x_1, x_2, y) = (x_1, x_2, y + x_1·x_2),
is reversible. Consider the output tuple (3,5,17). Then we can easily deduce that the input
tuple producing this result is (3,5,2).
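The reversification of multiplication and its inverse can be sketched in a few lines (an illustrative sketch; the names are ours, not from the text):

```python
def revmult(x1, x2, y):
    """Reversible multiplication: both inputs are kept, and the product
    is folded into y with the auxiliary function g = addition."""
    return (x1, x2, y + x1 * x2)

def revmult_inverse(x1, x2, s):
    # The redundant copies of x1 and x2 let us subtract the product back out.
    return (x1, x2, s - x1 * x2)

# The example from the text: the output tuple (3, 5, 17) is produced
# only by the input tuple (3, 5, 2).
assert revmult(3, 5, 2) == (3, 5, 17)
assert revmult_inverse(3, 5, 17) == (3, 5, 2)
```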
The key concept in reversifying arbitrary functions is redundancy. In example above,
we had to introduce two extra outputs which are completely redundant for the purpose of
multiplication. Yet, it is because of this redundancy that we were able to recover the
complete input tuple. As we said before, our formulation is neither unique nor the best one.
For multiplication, we need to output only one of the inputs and the product itself; we can
recover the other input by dividing the product by the given one. However, the formulation of
Equation 2.57 is general enough to work for any arbitrary function.
3. QUANTUM ALGORITHMS
From the point of view of computability, quantum and classical computers are
equivalent. A classical computer can simulate a quantum computer and therefore is capable
of doing anything that a quantum computer can. What then is the point in dealing with
quantum computers? The key issue here is efficiency. The best known techniques require a
classical simulator of a quantum system to spend exponentially more time than
the simulated system. This is only natural, considering that the state of an n-qubit quantum
system is described by 2^n complex numbers, or 2^{n+1} real numbers. Quantum
algorithms make use of concepts such as superposition, quantum parallelism and
interference in order to solve problems more efficiently than their classical counterparts.
As of today, only three general classes of interesting quantum algorithms are known.
The first one, Amplitude Amplification, includes the Grover search algorithm, which finds
a solution state in an unordered list of N elements making Θ(√N) database queries.
Classically, we can solve this problem only with Θ(N) queries, so the
speedup is only polynomial. The second group of quantum algorithms is based on the
Quantum Fourier Transform, and includes Shor's polynomial time factorization algorithm,
which runs in Θ(n^3) time, where n is the number of bits needed to encode the input
number. This is the most remarkable achievement of quantum computing, since it directly
affects the security of the RSA cryptosystem. This system, now in use all over the world in
industries such as banking and the military, is based on the assumption that integer
factorization is an intractable problem. Indeed, no classical algorithm has been found to
solve this problem efficiently; the best one, the Number Field Sieve algorithm, is
superpolynomial and its running time is
Θ(e^{n^{1/3} ln^{2/3} n}).
Quantum computers may also be used for simulating quantum systems; after all,
they are themselves quantum systems. In fact, Feynman's idea to use a quantum computer for simulations of
this kind was what started the first attempts toward quantum computing. The third class of
fast quantum algorithm [10] draws on such simulation ideas to implement a quantum
random walk, which can be used to find the name of the exit node of a specially structured
graph, starting with the entrance node and the ability to use an oracle which can answer
questions about the neighbors of a given node in the graph.
Following the publication of Shor's algorithm [8], interest arose and a huge
investment was made in quantum computing research. Disappointingly, however, not much
significant progress has been made until now. This might be partly due to the completely
different principles that govern quantum systems and the fact that our intuition is often at
odds with explaining them, let alone employing them in the thought processes that would
lead to an algorithm. But it might well turn out that there are relatively few problems for
which quantum computing might offer an exponential speedup over classical computing.
In this chapter, we describe three algorithms which share an interesting property: all
three start by setting a register in a perfectly equiprobable superposition of all possible
values. We will examine a generalization of this particular task in more detail in the next
chapter.
3.1. Variants of the Deutsch-Jozsa Algorithm
In 1985, David Deutsch, after having elaborated on Feynman’s idea of a quantum
computer, showed the existence of a problem for which the hypothetical quantum
computer was clearly superior to a classical one. Deutsch first proposed a solution for a
function with a single input bit [5]. In a more general version of this algorithm, named
Deutsch-Jozsa, the problem is solved for n input bit functions [7]. It should be noted that
the problem is important mostly from a theoretical point of view, since it is a rather
contrived one with no obvious practical use.
3.1.1. Deutsch’s Algorithm for One-Bit Functions
Assume that we are given an oracle (a “black box computer,” whose internal details
cannot be examined) computing a function f: B → B, where B = {0,1}. In general, a
function g: B^n → B is said to be constant if it maps all of its input space to the same output
value, and balanced if it maps half of its input space to 0 and the other half to 1. We are
not allowed to open up the oracle and see how it makes the function evaluation. Instead we
can only query it with different inputs. Under these settings, the problem is to determine
whether f is constant or balanced with a minimum number of oracle queries. With a
classical algorithm, we have to make two oracle calls, f(0) and f(1), and then we can
conclude that the oracle is constant if f(0) = f(1) and balanced if f(0) ≠ f(1). We need to query
the oracle only once using the quantum algorithm described in Figure 3.1.
1. Initialize the first qubit to |0⟩ and the second qubit to |1⟩.
2. Apply the Hadamard gate to each qubit.
3. Apply the oracle Uf, which is defined below.
4. Apply another Hadamard gate to the first qubit.
5. Measure the first qubit to observe value a.
6. If a=0 output “constant”, otherwise output “balanced”.
Figure 3.1. Pseudocode of Deutsch algorithm
Figure 3.2. Circuit for Deutsch algorithm
The corresponding quantum circuit for the Deutsch algorithm is shown in Figure 3.2.
The gate in the middle, denoted U_f, is the quantum oracle for the function f. It realizes
the transformation U_f: |x, y⟩ → |x, y ⊕ f(x)⟩.
Let us examine how the algorithm works. Initially the qubits x and y are set
to |0⟩ and |1⟩ respectively. Hadamard gates are applied to both qubits and their individual
states become
|x⟩ = (|0⟩ + |1⟩)/√2,
|y⟩ = (|0⟩ − |1⟩)/√2.
Their joint state therefore is
|x⟩ ⊗ |y⟩ = (|0⟩ + |1⟩)/√2 ⊗ (|0⟩ − |1⟩)/√2 = (|00⟩ − |01⟩ + |10⟩ − |11⟩)/2.
In the next step, we apply the oracle. Note that the second qubit is in the state
(|0⟩ − |1⟩)/√2.
In general, for any state |x⟩ of the first qubit, the joint state |x⟩ ⊗ (|0⟩ − |1⟩)/√2 becomes
U_f (|x⟩ ⊗ (|0⟩ − |1⟩)/√2) = (−1)^{f(x)} |x⟩ ⊗ (|0⟩ − |1⟩)/√2. (3.1)
Applying Equation 3.1 to our current state of |x⟩, we obtain
|x⟩ ⊗ |y⟩ = (((−1)^{f(0)} |0⟩ + (−1)^{f(1)} |1⟩)/√2) ⊗ (|0⟩ − |1⟩)/√2. (3.2)
Note that the joint state of the two qubits is separable; this algorithm does not
produce any entanglement. As a result we can concentrate only on qubit x for the last
part of the algorithm. If we apply the Hadamard gate to
|x⟩ = ((−1)^{f(0)} |0⟩ + (−1)^{f(1)} |1⟩)/√2,
we obtain
|x⟩ = (−1)^{f(0)} (|0⟩ + |1⟩)/2 + (−1)^{f(1)} (|0⟩ − |1⟩)/2.
Suppose the function f is constant. If the constant value is 0, then
|x⟩ = (|0⟩ + |1⟩)/2 + (|0⟩ − |1⟩)/2 = |0⟩,
if it is 1, then
|x⟩ = −(|0⟩ + |1⟩)/2 − (|0⟩ − |1⟩)/2 = −|0⟩.
In both cases, if we measure |x⟩, we will observe 0 with certainty. What differs is the
phase of qubit x, which has no effect on the measurement process.
Suppose the function f is balanced. If f(0) = 0 and f(1) = 1, then
|x⟩ = (|0⟩ + |1⟩)/2 − (|0⟩ − |1⟩)/2 = |1⟩,
if f(0) = 1 and f(1) = 0, then
|x⟩ = −(|0⟩ + |1⟩)/2 + (|0⟩ − |1⟩)/2 = −|1⟩.
Whatever the definition of function f, if it is balanced, we observe 1 with certainty.
Hence the exactness of the algorithm.
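The whole algorithm can be traced numerically on the two-qubit state vector. The following is an illustrative sketch, not part of the thesis; all amplitudes stay real here, and the basis index of |x y⟩ is 2x + y:

```python
from itertools import product

SQRT_HALF = 0.5 ** 0.5

def hadamard(state, qubit):
    """Apply H to one qubit of a real 2-qubit state vector."""
    pairs = [(0, 2), (1, 3)] if qubit == 0 else [(0, 1), (2, 3)]
    out = list(state)
    for a, b in pairs:
        out[a] = SQRT_HALF * (state[a] + state[b])
        out[b] = SQRT_HALF * (state[a] - state[b])
    return out

def deutsch(f):
    """Classify f: {0,1} -> {0,1} with a single oracle application."""
    state = [0.0, 1.0, 0.0, 0.0]          # step 1: |0>|1>
    state = hadamard(state, 0)            # step 2: H on both qubits
    state = hadamard(state, 1)
    oracle = [0.0] * 4                    # step 3: |x,y> -> |x, y XOR f(x)>
    for x, y in product((0, 1), repeat=2):
        oracle[2 * x + (y ^ f(x))] += state[2 * x + y]
    state = hadamard(oracle, 0)           # step 4: H on the first qubit
    p0 = state[0] ** 2 + state[1] ** 2    # step 5: P(first qubit reads 0)
    return "constant" if p0 > 0.5 else "balanced"
```

All four one-bit functions are classified correctly with a single oracle call, in agreement with the derivation above.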
To illustrate this with a real-world scenario, suppose somebody gives you a coin and
you are not sure whether it is a fair coin or one with both sides having the same symbol.
Normally we would check each side of the coin, that is, make two checks, to make sure
that it is a fair coin. This is what happens with the classical algorithm for our problem, we
need to query the oracle twice, once for input 0 and once for input 1. Using the Deutsch
algorithm, we make only one check to the coin and can tell whether it is fair or fake with
certainty. How can this happen? If we check side A of the coin, how can we be sure of side B,
and vice versa? The answer is that we are able to make a single coin check on a
superposition, or mixture if you like, of both sides, A and B, using the Deutsch algorithm.
The algorithm then makes use of quantum parallelism and interference effects to produce
the correct answer.
3.1.2. Deutsch-Jozsa Algorithm for Multiple Bit Functions
There is a more general version of Deutsch’s algorithm, called the Deutsch-Jozsa
algorithm, in which the function we want to classify is f: B^n → B. Such functions are not
required to be either balanced or constant as in the single bit case, but we suppose that we
are now promised that the function is either constant or balanced, and we are required to
classify it correctly by making the minimum number of oracle queries, depending on this
promise. Classically, we have to make an exponential number of oracle queries, 2^{n−1} + 1 in
the worst case, to give the answer with certainty. Alternatively, we can design a
probabilistic algorithm and make only a constant number of queries and produce an answer
which is correct with a reasonably high probability. The Deutsch-Jozsa algorithm needs
only one oracle query in order to give the correct answer with certainty.
Figure 3.3. Circuit for Deutsch-Jozsa algorithm
The circuit for the algorithm is shown in Figure 3.3. The oracle U_f implements the
transformation |x_0, x_1, …, x_{n−1}, y⟩ → |x_0, x_1, …, x_{n−1}, z⟩, where z = y ⊕ f(x_0, x_1, …, x_{n−1}). In
this algorithm, we will consider the top n qubits as a single n qubit register. As in the
previous version, we end by measuring that register to obtain a value r. If r is 0, then the
algorithm outputs “constant”; otherwise, it outputs “balanced”. A trace of the algorithm
follows.
1. The initial state of the system is |0⟩^⊗n ⊗ |1⟩.
2. After applying the first set of Hadamards, the state becomes
[1/√(2^n) Σ_i |i⟩] ⊗ [(|0⟩ − |1⟩)/√2].
3. Note that the result qubit is in the state |−⟩, therefore the oracle call produces the state
[1/√(2^n) Σ_i (−1)^{f(i)} |i⟩] ⊗ [(|0⟩ − |1⟩)/√2],
as Equation 3.1 suggests.
4. At this point we may consider only the n-qubit register, whose state after the last set
of Hadamards becomes
(1/2^n) Σ_j Σ_i (−1)^{f(i) + i·j} |j⟩.
5. After measuring the register, we may observe any value whose amplitude is non-
zero in the register state just before the measurement. Let us concentrate on the
amplitude of the basis state |0⟩^⊗n,
α_0 = (1/N) Σ_{i=0}^{N−1} (−1)^{f(i) + i·0} = (1/N) Σ_{i=0}^{N−1} (−1)^{f(i)}, (3.3)
where N = 2^n. In the case of a constant function, α_0 is 1 if the constant value is 0
and −1 if the constant value is 1. In both cases, the probability of observing |0⟩^⊗n is
1. In the case of a balanced function, the summation
Σ_{i=0}^{N−1} (−1)^{f(i)}
will contain N/2 positive terms and N/2 negative terms, thus leaving α_0 = 0. Hence
the correctness of the algorithm.
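Equation 3.3 lets us check this analysis directly: it suffices to compute the amplitude of |0…0⟩ for any candidate function. A small sketch (names ours, not from the thesis):

```python
def dj_zero_amplitude(f, n):
    """Amplitude of |0...0> just before the measurement (Equation 3.3).

    Phase kickback leaves the register in (1/sqrt(N)) sum_i (-1)^f(i) |i>,
    and the final Hadamards map the |0...0> amplitude to (1/N) sum_i (-1)^f(i).
    """
    N = 2 ** n
    return sum((-1) ** f(i) for i in range(N)) / N

def classify(f, n):
    # |a0| = 1 for a constant f, a0 = 0 for a balanced f.
    a0 = dj_zero_amplitude(f, n)
    return "constant" if abs(a0) > 0.5 else "balanced"

assert classify(lambda i: 0, 3) == "constant"
assert classify(lambda i: 1, 3) == "constant"
assert classify(lambda i: i & 1, 3) == "balanced"   # last bit: half 0s, half 1s
```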
3.2. Quantum Search Algorithm
Consider a phone book S with N entries. If we are asked to determine the name of the
owner of a given phone number w, with current computer technology we have no way but
to look up all the items in the list one by one and compare their number entries with w. It
then takes N/2 lookups on the average to determine the owner of w. The classical query
complexity of this problem is therefore ( )NΘ . In 1996, Lov Grover proposed a quantum
algorithm that would solve this search problem using ( )NΘ queries. This is only a
49
polynomial speedup, but still it presents a clear advantage over classical computers. For a
list of size 1000000, we need to make about 500000 checks on the average if we use the
classical method, but only about 1000 checks if use Grover’s search method. In this section
we will assume, without loss of generality, that N can be written as N=2n. We will use an n
qubit system with basis states 0 through 1−N .
3.2.1. The Single Solution Case
Assume that we are given a quantum oracle for computing a function f: B^n → B.
Furthermore, assume that we know a priori that the function returns 1 for only one input,
and 0 for the other inputs. The task is to find for which x the function f(x) returns 1, by
making the least possible oracle calls. The Grover algorithm starts by generating an
equiprobable superposition of all basis states and then iteratively applies a set of operations
collectively known as the Grover iteration. The optimal number of iterations, k, depends
on N and the number of solution states, q, which we will assume is one in this section, and
is given by the formula
k = round( (π/4)·√(N/q) − 1/2 ). (3.4)
Grover’s search algorithm is summarized in Figure 3.4.
1. Initialize an n-qubit register, named x, to |0⟩^⊗n, and a single qubit, named y, to |1⟩.
2. Apply n+1 Hadamard gates, one to each qubit in x and y .
3. Apply Grover iterations (described below) k times to x and y .
4. Measure register x to obtain value w’.
5. Output w’.
Figure 3.4. The Grover algorithm
The number of iterations, k, can easily be precalculated on a classical machine, so the
only undefined concept in the description above is the Grover iteration. The Grover
iteration, RG, is defined as the product RG = Ps·Pw, where Pw and Ps are two reflection
operators. Let us analyze each one separately.
Reflection P_w realizes the transformation
P_w |x⟩ = (−1)^{f(x)} |x⟩. (3.5)
It leaves the input intact except for the solution state, in which case it multiplies the state
by −1. We have seen in Chapter 2 that this operation amounts to a phase change by π
radians. We now try to provide an implementation for the reflection Pw. For this purpose,
consider the oracle for the function f,
U_f |x⟩|y⟩ = |x⟩|y ⊕ f(x)⟩.
Let us examine how this oracle works when the input state of the last (result) qubit
y is
|y⟩ = (|0⟩ − |1⟩)/√2 = |−⟩.
The action of the oracle on the input |x⟩ (|0⟩ − |1⟩)/√2 can be written as
U_f (|x⟩ (|0⟩ − |1⟩)/√2) = |x⟩ (|0 ⊕ f(x)⟩ − |1 ⊕ f(x)⟩)/√2.
If f(x) = 1,
U_f (|x⟩ (|0⟩ − |1⟩)/√2) = |x⟩ (|1⟩ − |0⟩)/√2 = −|x⟩ (|0⟩ − |1⟩)/√2,
and if f(x) = 0,
U_f (|x⟩ (|0⟩ − |1⟩)/√2) = |x⟩ (|0⟩ − |1⟩)/√2.
The action of the oracle can therefore be generalized as
U_f (|x⟩ (|0⟩ − |1⟩)/√2) = (−1)^{f(x)} |x⟩ (|0⟩ − |1⟩)/√2. (3.6)
The overall action leaves the result qubit intact and introduces the phase change by π
radians in the input register if it is in the solution state. This is exactly what the reflection
P_w is supposed to do. We conclude that calling the oracle for the function f with the result
qubit set to the state |−⟩ provides the actual implementation for P_w.
Grover's search algorithm uses a set of basis states |i⟩, for all i such that 0 ≤ i ≤ N−1.
This set of basis states spans the 2^n-dimensional Hilbert space H_{2^n}. One of these basis
states is the unknown solution state, call it |w⟩.
Referring to Equation 2.20, the general state of the register x can be expressed as
|x⟩ = α_0 |0⟩ + … + α_w |w⟩ + … + α_{N−1} |N−1⟩.
We can distinguish two subsets of basis states. Let S_w include |w⟩ and S_w' include all
|i⟩ such that i ≠ w. Each of the subsets spans its own subspace of H_{2^n}, call these
respectively H_w and H_w'. Any state in H_{2^n} can be expressed as
|x⟩ = α_w |w⟩ + α_w' |w'⟩, (3.7)
where α_w |w⟩ and α_w' |w'⟩ are the projections of |x⟩ onto H_w and H_w', respectively, and
|α_w|² + |α_w'|² = 1.
Figure 3.5. Visual representation of the P_w reflection
The operator P_w can then be viewed as a reflection about the hyperplane defined by the |i⟩
such that i ≠ w. Figure 3.5 illustrates this observation. In a broader sense, the operator P_w
can be viewed as a state marker, since given any state |x⟩ it marks the amplitude of the
basis state |w⟩ with a minus sign:
P_w (α_0, …, α_w, …, α_{N−1})^T = (α_0, …, −α_w, …, α_{N−1})^T. (3.8)
Let us now turn our attention to the reflection operator P_s. It realizes the
transformation
P_s |x⟩ = (2|s⟩⟨s| − I) |x⟩, (3.9)
where
|s⟩ = H^⊗n |0⟩^⊗n = (1/√N) Σ_{i=0}^{N−1} |i⟩. (3.10)
Consider the matrix representation of this operator. The product |s⟩⟨s| yields a matrix with
all entries set to 1/N. Therefore, the matrix for 2|s⟩⟨s| − I will consist of entries (2−N)/N
along the diagonal and 2/N elsewhere:
P_s = (1/N) [ 2−N  2  …  2 ;  2  2−N  …  2 ;  ⋮ ;  2  2  …  2−N ]. (3.11)
The action of P_s on an arbitrary state
|x⟩ = Σ_{i=0}^{N−1} α_i |i⟩
is shown by the matrix multiplication
P_s |x⟩ = (1/N) [ 2−N  2  …  2 ;  2  2−N  …  2 ;  ⋮ ;  2  2  …  2−N ] (α_0, α_1, …, α_{N−1})^T
        = (2μ_x − α_0, 2μ_x − α_1, …, 2μ_x − α_{N−1})^T, (3.12)
where μ_x is the mean of the α's. So each amplitude α_i is transformed to 2μ_x − α_i. Initially
large amplitudes become small, and vice versa. Consider the action of P_s on |s⟩ and |s'⟩,
where |s'⟩ is a unit vector orthogonal to |s⟩. From linear algebra we know that ⟨s|s⟩ = 1
and ⟨s|s'⟩ = 0, so
P_s |s⟩ = (2|s⟩⟨s| − I) |s⟩ = 2|s⟩⟨s|s⟩ − |s⟩ = 2|s⟩ − |s⟩ = |s⟩
and
P_s |s'⟩ = (2|s⟩⟨s| − I) |s'⟩ = 2|s⟩⟨s|s'⟩ − |s'⟩ = 0 − |s'⟩ = −|s'⟩.
We can therefore think of P_s as a reflection about the hyperplane defined by |s⟩,
since it leaves the |s⟩ component unchanged, but flips any component that is orthogonal to
|s⟩. This fact is illustrated in Figure 3.6.
Figure 3.6. Visual representation of the P_s reflection
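The "inversion about the mean" reading of Equation 3.12 is easy to check numerically. A minimal sketch (function name ours, not from the thesis):

```python
def p_s(amplitudes):
    """Apply Ps = 2|s><s| - I: each amplitude a_i becomes 2*mu - a_i,
    where mu is the mean of the amplitudes (Equation 3.12)."""
    mu = sum(amplitudes) / len(amplitudes)
    return [2 * mu - a for a in amplitudes]

# |s> itself (all amplitudes equal) is left unchanged...
assert p_s([0.5, 0.5, 0.5, 0.5]) == [0.5, 0.5, 0.5, 0.5]
# ...while a vector orthogonal to |s> (mean zero) is negated.
assert p_s([1.0, -1.0, 0.0, 0.0]) == [-1.0, 1.0, 0.0, 0.0]
```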
The last remaining issue then is to find an implementation for the reflection operator P_s.
Remember from Chapter 2 that the state |s⟩ is obtained by applying the Hadamard transform
to a register initially in the state |0⟩^⊗n,
|s⟩ = H^⊗n |0⟩^⊗n.
Moreover, the Hadamard transform has the property that it is self-inverse: applying it twice
in succession leaves the target register unchanged, since
H^⊗n H^⊗n = I.
Therefore the following equalities are in order:
H^⊗n (2|0⟩^⊗n⟨0|^⊗n − I) H^⊗n = 2 H^⊗n |0⟩^⊗n⟨0|^⊗n H^⊗n − H^⊗n H^⊗n (3.13)
= 2|s⟩⟨s| − I
= P_s.
All that remains to do is to provide an implementation for the operator between the
two Hadamard transforms on the left side of Equation 3.13. The operator
2|0⟩^⊗n⟨0|^⊗n − I
is like an identity matrix multiplied by −1, except for the first diagonal entry, which is still
+1:
2|0⟩^⊗n⟨0|^⊗n − I = diag(+1, −1, …, −1). (3.14)
Remember that the matrix of the Z gate is
Z = [ 1  0 ;  0  −1 ].
The action of a controlled-Z gate on the basis states can be summarized as follows. If u ≠
11, no change is applied to the target qubit,
cZ |u⟩ = |u⟩,
otherwise
cZ |11⟩ = −|11⟩ = e^{iπ} |11⟩.
If both the control qubit and the target qubit are in the state |1⟩, then a global phase of π
radians is introduced to the overall register. With no loss of generality, we can assume that
the phase change is transferred to the control qubit and that the target qubit is left intact.
Now consider the quantum circuit in Figure 3.7. The controlled-Z gate is triggered when
all control bits are in the state |0⟩. Since the target qubit is |1⟩, the amplitude
of the control register is flipped exactly when it is in the "all zeros" state.
Figure 3.7. Controlled Z-gate
So if we concentrate on the action of this circuit on the control register, it can be
described by the matrix
M = e^{iπ} (2|0⟩^⊗n⟨0|^⊗n − I) = diag(−1, +1, …, +1).
Although this does not look exactly like 2|0⟩^⊗n⟨0|^⊗n − I to a computer engineer, it differs
from it only by a global phase of π radians, which, as we mentioned in Chapter 2, has no
observable effects. Therefore we can accept the circuit in Figure 3.7, combined with the
Hadamard transforms of Equation 3.13, as an implementation
of the reflection P_s. This was also the last missing piece of the Grover iteration, whose
general picture can be shown as in Figure 3.8.
Figure 3.8. Grover Iteration
3.2.2. Analysis
As we have defined all steps, the picture of Grover’s search algorithm is now complete.
However, it is not yet clear why this method would work. This section is devoted to
explaining the reason why the algorithm works. Recall from the previous section that the
operators that define the Grover iteration are both reflections about some hyperplanes. If
you consider Figure 3.5, the state |x⟩ is shown as a vector in the plane defined by the basis state
|w⟩ and the hyperplane which is orthogonal to |w⟩, represented by the direction |w'⟩. Let
θ be the angle that the vector |x⟩ makes with |w'⟩. The value θ = 0 implies that |x⟩ is one of the
non-solution states (or a superposition of several non-solution states) and the value θ = π/2
implies that |x⟩ is the solution state |w⟩. Then we can express any |x⟩ as
|x⟩ = cos θ |w'⟩ + sin θ |w⟩. (3.15)
In Grover's search algorithm, we start by initializing x to the state |0⟩. From the point
of view of the coordinate system defined by |w⟩ and |w'⟩, |x⟩ may be aligned to either
axis, depending on whether w = 0 or not. At the first step, we apply the Hadamard transform
to |x⟩, transforming it to the state |s⟩. Let us then try to visualize the state |s⟩ in our
coordinate system. Clearly, the state |s⟩ has "one part" from the solution space and "N−1
parts" from the non-solution space. We can rewrite Equation 3.15 for the state |s⟩ as
|s⟩ = √((N−1)/N) |w'⟩ + √(1/N) |w⟩ = cos θ |w'⟩ + sin θ |w⟩, (3.16)
where
θ = sin⁻¹(1/√N). (3.17)
So the angle θ that |s⟩ makes with the non-solution axis |w'⟩ is sin⁻¹(1/√N).
As the next step, we make a Grover iteration on |x⟩. Remember that the Grover iteration is
performed by first performing the reflection P_w and then the reflection P_s. The combined
action of a single Grover iteration is a rotation in the hyperplane of the coordinate system
by 2θ radians. Note that this observation is independent of the original alignment of the
state vector of |x⟩. This conclusion is demonstrated in Figure 3.9.
Figure 3.9. Action of the first Grover iteration relative to |w⟩ and |w'⟩
Initially |x⟩ corresponds to the state |s⟩ and it makes an angle θ with the |w'⟩
hyperplane. With every application of the Grover iteration it moves toward the |w⟩ axis by 2θ
radians. It is evident that after the kth iteration, the angle with |w'⟩ is (2k+1)θ, and the state
of the register is
|x⟩ = sin((2k+1)θ) |w⟩ + cos((2k+1)θ) |w'⟩, (3.18)
and therefore the probability of observing the desired state |w⟩ becomes
p(w) = sin²((2k+1)θ). (3.19)
Because of the oscillatory nature of the sine function, we cannot keep on applying the
Grover iteration indefinitely, because after some point, the probability will start decreasing,
and we will need twice as many rotations for it to pick up again, and so on. Our target is to
make as many rotations as needed to make |x⟩ aligned with |w⟩, in which case the angle with
|w'⟩ is π/2. Therefore,
(2k+1)θ = π/2. (3.20)
Solving Equation 3.20 for k, we obtain
k = π/(4θ) − 1/2. (3.21)
Recall from Equation 3.17 that
θ = sin⁻¹(1/√N).
For N ≫ 1, θ ≈ 1/√N, and therefore k = Θ(√N). However, we would obtain a non-
integral value for k from Equation 3.21, whereas we can only make an integer number of
iterations. The obvious solution is to make k' iterations, where k' is the nearest integer to k,
k' = round( π√N/4 − 1/2 ). (3.22)
Figure 3.10 shows the plot of the success probability versus the angle between |x⟩
and |w'⟩ at every iteration for the case where n = 7, N = 128, w = 66 (in fact any choice of w
will do). Counting from the left, point i in the graph stands for the state of the register after
the ith Grover iteration.
Figure 3.10. Plot of probability of observing the desired state versus
the angle of each iteration
In this case, k evaluates to approximately 8.4, therefore, in order to maximize the success
probability, we make eight Grover iterations. Note that after this many iterations, the
probability of observing w is slightly less than 1. In general, Grover’s search algorithm is
not an exact algorithm; almost always there is a small but nonzero probability that we may
observe an unwanted state at the end of the algorithm. Since
Δk = |k − k'| ≤ 1/2,
the maximum angle that we can be away from the |w⟩ axis at the end of the algorithm is θ.
So the maximum failure probability is
p_f = sin²θ = 1/N, (3.23)
which is negligible when, as assumed, N ≫ 1. Note however that by various means this
probability can be made arbitrarily small.
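The whole analysis can be checked numerically by simulating Grover's algorithm directly on the amplitude vector, using the phase-flip form of P_w and the inversion-about-the-mean form of P_s. This is an illustrative sketch (the function name is ours, not from the thesis):

```python
import math

def grover(n, w):
    """Run Grover's search for the single solution w among N = 2^n items.
    Returns the number of iterations k' and the final success probability."""
    N = 2 ** n
    theta = math.asin(1 / math.sqrt(N))           # Equation 3.17
    k = round(math.pi / (4 * theta) - 0.5)        # Equation 3.22
    amps = [1 / math.sqrt(N)] * N                 # the uniform state |s>
    for _ in range(k):
        amps[w] = -amps[w]                        # Pw: mark the solution
        mu = sum(amps) / N                        # Ps: inversion about the mean
        amps = [2 * mu - a for a in amps]
    return k, amps[w] ** 2

k, p = grover(7, 66)        # the n = 7, N = 128, w = 66 example from the text
assert k == 8               # k ~ 8.4 rounds to eight iterations
assert p > 1 - 1 / 128      # failure probability stays below 1/N (Eq. 3.23)
```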
3.2.3. Multiple Solutions
Until now, we have considered the case of a single solution, q=1. What happens if
there is more than one solution to our problem? Let W be the set of solutions W = {w1, ...,
wq}. Then the function f(x) would return 1 for every solution state, that is, for every
element x of W, and 0 otherwise. This situation corresponds to the real world scenario
where we need to look up in the phone book for any person whose phone number ends
with 012. Can we refine the search algorithm so that we can solve the multiple solution
case? It turns out that we can, and the procedure remains almost completely unchanged.
What changes is that our oracle will be constructed such that it will respond with 1 to more
than one input. However, this is within the realm of the oracle, which is not the concern of
the algorithm itself. More importantly, the precalculation of the optimal number of
iterations, k, also changes. In the previous subsection, we have seen that we need to know
this value beforehand in order to know how many times we should perform the Grover
iteration. As with the single solution case, before starting the iterative process, we create
the equiprobable superposition of all states, s . With respect to w and 'w , s can be
expressed now as
wwwN
qNwNqs θθ sin'cos' +=
−+= ,
where
⎟⎟⎠
⎞⎜⎜⎝
⎛= −
Nq1sinθ . (3.24)
It can easily be seen that a single Grover iteration once again corresponds to a
rotation by 2θ toward the |w⟩ axis; only the definition of θ has slightly changed, as suggested
by the difference in Equations 3.17 and 3.24. The optimal value for k can again be
calculated as in Equation 3.21, and k' is the rounded value of k. Assuming that N ≫ q, we
can show that k = Θ(√(N/q)). Note that when q > N/2, k = 0, resulting in no
Grover iterations at all and leaving x in the state |s⟩. This situation corresponds to the
classical algorithm, where the probability of picking a solution in the first random check is
greater than ½ if q is this big. In order to boost the success probability even higher, we can
do the trick of adding more "dummy" qubits, just for the sake of increasing N. In the
general case, if we add c qubits then n' = n + c and N' = N·C, where C = 2^c. Therefore
θ = sin⁻¹(√(q/(N·C))). (3.25)
If we add as many qubits as needed to make the assumption N' ≫ q hold, then the previous
analysis is still valid. In this case, the required number of Grover iterations is
Θ(√(N'/q)) = Θ(√(N·C/q)), so the running time increases exponentially with respect to c.
As a result, the maximum failure probability becomes
p_f = sin²θ = q/N' = q/(N·C). (3.26)
The more extra qubits we add, the more we reduce the maximum failure probability. We
conclude that we can make the failure probability arbitrarily small by adding a required
number of qubits, with a corresponding time cost. The same observations can be made for
the case of a single solution, simply by setting q = 1.
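Equations 3.24 through 3.26 can be checked with a short calculation. The sketch below (names ours, not from the thesis) computes the iteration count and the worst-case failure bound for q solutions and c extra qubits:

```python
import math

def grover_params(n, q, c=0):
    """Iteration count and worst-case failure bound for q solutions among
    N = 2^n items, with c extra "dummy" qubits (Equations 3.24-3.26)."""
    N_prime = 2 ** (n + c)                        # N' = N * C, where C = 2^c
    theta = math.asin(math.sqrt(q / N_prime))     # Equation 3.25
    k = round(math.pi / (4 * theta) - 0.5)        # Equation 3.21, rounded
    p_fail = q / N_prime                          # sin^2(theta), Equation 3.26
    return k, p_fail

assert grover_params(7, 1) == (8, 1 / 128)        # the single-solution case
assert grover_params(2, 3)[0] == 0                # q > N/2: no iterations at all
assert grover_params(7, 1, 3)[1] < grover_params(7, 1)[1]   # extra qubits help
```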
One final observation that we make about the Grover search algorithm is that we need to know the number of solutions, q, in order to precalculate the number of iterations, k. This is not a very realistic assumption; however, it can be overcome by first estimating the
number of solutions. This can be done in time comparable to that required by the search
itself by applying a quantum counting algorithm, a detailed study of which will not be
included here. The interested reader can refer to [14] for a specification of this algorithm.
3.3. Factorization Algorithm
Although integers have been classified as prime or composite since ancient times, it
was not until 2002 that Agrawal et al. [19] came up with a deterministic polynomial time
solution for the problem of primality testing. Therefore, we know it is within the power of
a classical computer to classify any given integer as prime or composite. The factorization
problem can be seen as the logical continuation of primality testing. From a computability
point of view, a classical computer can solve this problem too. When no time limit is
imposed on the computation, a classical computer will correctly produce the prime factors
of any input integer. The key point here is efficiency. It is suspected that classical
computers cannot solve the factorization problem in polynomial time, even if they are
given the freedom to occasionally produce some erroneous results. As of today, the best
known classical algorithm for factorization is probabilistic and runs in time
\[ \exp\!\left(\Theta\!\left(n^{1/3}\log^{2/3} n\right)\right), \]
where n is the number of bits needed to encode the integer. This is super-polynomial in n, and from a complexity point of view, when large values of n are considered, it is practically as hopeless as an undecidable problem. Public key cryptography, and most famously the RSA
cryptosystem, are founded on exactly the assumption that integer factorization is not a
tractable problem for big inputs.
In 1994, Shor showed that a Quantum Turing Machine can factorize integers in polynomial time, actually in Θ(n³) time. While the idea of quantum computing had been around for quite a while in academic circles, until that point it was regarded as just another
provocative idea of no practical importance. Shor’s algorithm was important from both a
theoretical and practical point of view. Practically, it showed that if ever a quantum
computer of significant size could be built, then breaking the RSA protocol would be no
problem. Theoretically, it was important because quantum computers were able to
efficiently solve a problem hailed by many as intractable for classical computers, providing informal evidence of quantum computers' intrinsic superiority over classical ones. As
we saw in Section 3.1, Deutsch had already presented a problem [5,7] that a quantum
computer could solve exponentially faster than a classical deterministic computer.
However, besides the contrived nature of the problem, it was still solvable in a short time
by a classical probabilistic algorithm. With Shor’s factorization algorithm we now had a
practical problem with very important consequences. The discovery of this algorithm provided the main thrust for subsequent research efforts in the field of quantum computing.
3.3.1. Quantum Fourier Transform
The Fourier transform is an important mathematical tool that transforms a problem
from time domain to frequency domain. This is particularly helpful in many problems
which can be defined more naturally, and thus studied more easily, in the frequency
domain. While there exists a continuous Fourier transform, from a practical point of view it
is the discrete version that is mostly used. Given the vector
\[ X = \begin{bmatrix} x_0 & x_1 & \cdots & x_{N-1} \end{bmatrix}^T, \]
the discrete Fourier transform produces a complex vector Y of the same cardinality, N, such that
\[ y_k = \frac{1}{\sqrt{N}} \sum_{j=0}^{N-1} x_j\, e^{2\pi i jk/N}. \qquad (3.27) \]
The quantum Fourier transform, to which we will also refer as the QFT in this text, implements the same transformation; however, the vector X is replaced by the set of basis vectors |0⟩ through |N−1⟩. With no loss of generality, we can assume N is a power of two, N = 2^n. The QFT implements the transformation
\[ QFT\,|j\rangle = \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} e^{2\pi i jk/N}\, |k\rangle. \qquad (3.28) \]
Because of the linearity principle, the QFT acts on an arbitrary quantum state as
\[ QFT\left( \sum_{j=0}^{N-1} x_j |j\rangle \right) = \sum_{k=0}^{N-1} y_k |k\rangle, \]
where the y_k are the discrete Fourier transform of the amplitudes x_j, as given by Equation 3.27. It can be shown [14] that the circuit in Figure 3.11 implements the QFT transformation.
Figure 3.11. Quantum circuit implementing quantum Fourier transform
Gates R_k are defined as
\[ R_k = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\pi/2^{k-1}} \end{pmatrix}. \]
Note that R_k = e^{i\pi/2^k} R_z(π/2^{k-1}), so an R_k gate is just a z-rotation by angle π/2^{k-1}, up to a global phase constant exp(iπ/2^k). The "REVERSE" part of the circuit is implemented by n/2 gates which swap qubit states, one for each (j_i, j_{n-i}) pair. The circuit size is only quadratic in terms of the input size, since it requires only n(n+2)/2 = Θ(n²) gates. Since circuit size
is an indication of the running time, we conclude that this implementation of the QFT runs in Θ(n²) time steps. This is an exponential speedup with respect to the known classical complexity of the same problem: the best known classical algorithm for computing the discrete Fourier transform, the so-called Fast Fourier Transform (FFT), has exponential complexity Θ(n·2^n) [14].
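By the linearity just noted, applying the QFT to an n-qubit state reduces numerically to the DFT of Equation 3.27 on its 2^n amplitudes. A small Python sketch (the function name is ours):

```python
import cmath
import math

def qft_amplitudes(x):
    """Apply the QFT of Eq. 3.28 by linearity: the result is the discrete
    Fourier transform (Eq. 3.27) of the input amplitude vector."""
    N = len(x)
    return [sum(x[j] * cmath.exp(2j * cmath.pi * j * k / N) for j in range(N))
            / math.sqrt(N) for k in range(N)]

y = qft_amplitudes([0.5, 0.5, 0.5, 0.5])   # uniform 2-qubit state maps to |0>
```

Note that this classical simulation costs Θ(N²) operations, whereas the circuit of Figure 3.11 uses only Θ(n²) = Θ(log² N) gates, which is the point of the speedup discussed above.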
3.3.2. Phase Estimation
The quantum Fourier transform can be used to determine the phase of the eigenvalue
of an arbitrary unitary operator efficiently. This procedure is known as Quantum Phase
Estimation, and is the key component in many fast quantum algorithms, including
factorization. Before going on, we have to refresh our knowledge of linear algebra and
recall the definitions of eigenvector and eigenvalue. Given a vector space H and a linear
operator U on H, the eigenvectors and eigenvalues of U are defined by Equation 3.29:
\[ U|v\rangle = a|v\rangle, \qquad (3.29) \]
where a is an eigenvalue and |v⟩ is an eigenvector of U. Moreover, we can find the eigenvalues of any such operator by solving Equation 3.30:
\[ c(\lambda) = \det(U - \lambda I) = 0, \qquad (3.30) \]
where I stands for the identity matrix of the same dimension as U. In the equation above, c(λ) is also called the characteristic polynomial of U. Now let us get back to our phase estimation problem. We are given a unitary operator U which has an eigenvector |v⟩ whose corresponding eigenvalue is
\[ a = e^{2\pi i \varphi}. \]
We suppose that we are also given the eigenvector |v⟩ and are required to find the phase φ of its eigenvalue. Note that this phase is sufficient to completely define a complex number of unit magnitude: if we know φ, we can construct a as
\[ a = \cos(2\pi\varphi) + i\sin(2\pi\varphi) = e^{2\pi i \varphi}. \]
The algorithm also assumes the availability of oracles of the form U^J, where J = 2^j and 0 ≤ j < t (t being the length of the control register introduced below), while n denotes the number of qubits that the operator U acts upon. The circuit for the Quantum Phase Estimation algorithm is shown in Figure 3.12.
Figure 3.12. Quantum circuit implementing the phase estimation algorithm, T = 2^t
Let us trace the algorithm step by step. We start by noting that it uses two quantum registers, which we will call |w⟩ and |v⟩, and whose lengths are respectively t and n.
1. Register |w⟩ is initialized to the state |0⟩ and register |v⟩ to the eigenvector under consideration. Therefore, the initial state of the system can be expressed as
\[ |w\rangle|v\rangle = |0\rangle|v\rangle. \]
2. Each qubit of |w⟩ undergoes the Hadamard transform, transforming the system to the state
\[ |w\rangle|v\rangle = \left( \frac{1}{\sqrt{T}} \sum_{j=0}^{T-1} |j\rangle \right) |v\rangle, \]
where T = 2^t.
3. A series of controlled-U^{2^k} operators is applied to |v⟩, where each operator is controlled by the qubit of |w⟩ with index t−k−1. In order to derive the state of the system, we make the two following independent observations:
• The action of an oracle of the form U^k on |v⟩ is
\[ U^k |v\rangle = a^k |v\rangle = e^{2\pi i k \varphi} |v\rangle. \]
The target register |v⟩ remains unchanged, and the phase factor e^{2πikφ} is transferred to the control register.
• Every qubit of the control register is initially in the equiprobable superposition state (|0⟩ + |1⟩)/√2. Remember that the controlled operation is applied to the target qubits only when the control qubit is in state |1⟩.
If we combine these two observations, we conclude that, after this step, the state of qubit w_i, where 0 ≤ i < t, is
\[ |w_i\rangle = \frac{1}{\sqrt{2}} \left( |0\rangle + e^{2\pi i K \varphi} |1\rangle \right), \]
where K = 2^{t−i−1}.
Therefore the state of the overall two-register system is
\[ |w\rangle|v\rangle = \left( \frac{1}{\sqrt{T}} \sum_{j=0}^{T-1} e^{2\pi i j \varphi} |j\rangle \right) |v\rangle. \]
However, since the two registers are not entangled, we will only consider |w⟩, whose state has become
\[ |w\rangle = \frac{1}{\sqrt{T}} \sum_{j=0}^{T-1} e^{2\pi i j \varphi} |j\rangle. \]
4. The state produced at the end of the previous step is almost identical to the result of a quantum Fourier transform. Therefore, after applying a circuit computing the inverse QFT (which is just the circuit of Figure 3.11 in which each gate is replaced by its adjoint and the order of the gates is reversed) to |w⟩, we obtain the state |φ'⟩, upon measurement of which there is a very high probability of observing, in the topmost register, the first few bits coming after the "decimal" (actually, "binary") point in the binary representation of φ as a number between 0 and 1.
The length of register |w⟩ is set according to the desired accuracy and probability of success [14]. If we want n digits of accuracy and a success probability of at least 1−ε, then
\[ t = n + \left\lceil \log_2\!\left( 2 + \frac{1}{2\varepsilon} \right) \right\rceil. \]
Now note that the running time of the algorithm is the cost of the set of Hadamard gates, which run in parallel, plus the set of controlled-U gates, plus the cost of the inverse QFT, resulting in 1 + Θ(t) + Θ(t²) steps. We conclude that the running time is polynomial in n and 1/ε.
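The procedure can be simulated classically when the eigenphase is known, by preparing the post-step-3 state of |w⟩ directly and applying the inverse QFT by linearity. The sketch below (names ours) is a toy stand-in for the circuit of Figure 3.12, not an implementation of it:

```python
import cmath
import math

def phase_estimate(phi, t):
    """Simulate steps 2-4 for a known eigenphase phi on a t-qubit control
    register; returns the most likely reading and its probability."""
    T = 2 ** t
    # state of |w> after step 3: (1/sqrt(T)) sum_j e^{2 pi i j phi} |j>
    w = [cmath.exp(2j * cmath.pi * j * phi) / math.sqrt(T) for j in range(T)]
    # inverse QFT applied by linearity (conjugated exponent of Eq. 3.28)
    out = [sum(w[j] * cmath.exp(-2j * cmath.pi * j * k / T) for j in range(T))
           / math.sqrt(T) for k in range(T)]
    probs = [abs(a) ** 2 for a in out]
    best = max(range(T), key=probs.__getitem__)
    return best / T, probs[best]

est, p = phase_estimate(0.3125, 4)   # phi = 0.0101 in binary
```

For a phase expressible in t bits, such as 0.0101₂ = 0.3125, the estimate is exact and observed with probability 1; otherwise the probability mass concentrates on the nearest t-bit fractions.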
3.3.3. Order Finding
Given any two coprime integers, x and N, where x is the smaller of the two, the order
of x with respect to N is defined to be the least positive integer r that satisfies the equality
\[ x^r \equiv 1 \pmod{N}. \qquad (3.31) \]
The order finding problem consists of determining the order r, given x and N. This problem is believed to be hard for classical computers, because no procedure, deterministic or probabilistic, has been found that computes r in time polynomial in n, where n is the number of bits needed to write N. However, a quantum
computer can solve this problem in polynomial time. This is due to the fact that order
finding can be reduced to the phase estimation problem, which was shown to be solvable
by a QTM in the previous subsection. For any such x, N and r as stated above, consider the
unitary operator
\[ U|y\rangle = |xy \bmod N\rangle. \]
When we apply this operator to any state
\[ |v_s\rangle = \frac{1}{\sqrt{r}} \sum_{k=0}^{r-1} \exp\!\left( \frac{-2\pi i s k}{r} \right) |x^k \bmod N\rangle, \qquad (3.32) \]
where 0 ≤ s ≤ r−1, we obtain
\[ U|v_s\rangle = \frac{1}{\sqrt{r}} \sum_{k=0}^{r-1} \exp\!\left( \frac{-2\pi i s k}{r} \right) |x^{k+1} \bmod N\rangle. \qquad (3.33) \]
Note that both summations contain the same basis states, since x^r ≡ 1 (mod N). However, the coefficient of each state now differs by a factor of exp(2πis/r) from that in Equation 3.32. Making the proper arrangements, we obtain
\[ U|v_s\rangle = \exp\!\left( \frac{2\pi i s}{r} \right) |v_s\rangle, \]
meaning that each |v_s⟩ is an eigenvector of U, and s/r is the phase of the corresponding eigenvalue. We are thus equipped with an efficient algorithm that finds the phase of the eigenvalue when given the operator and the corresponding eigenvector. We cannot apply this algorithm directly, because it requires that we know the eigenvector |v_s⟩, which is not the case, because that would mean that we already know the order r. We can work around this by observing that [14]
\[ \frac{1}{\sqrt{r}} \sum_{s=0}^{r-1} |v_s\rangle = |1\rangle. \]
This implies that for the purposes of the order finding algorithm we can equally well provide |1⟩ as input.
However, we are not done yet. The phase estimation algorithm outputs an
approximation of a ratio φ ≈ s/r, where s is randomly picked between 0 and r−1; however, we are interested in r rather than the ratio itself. We can work around this problem by
noting that both s and r are integers, and therefore the “ideal” s/r, whose binary
approximation is provided to us, is a rational number. Furthermore, we also know of an
upper bound for r, namely, N. If we compute the nearest fraction to φ satisfying these
constraints, there is a chance that we might succeed in finding r. On the other hand, we can
verify whether the candidate for r that we have found in this way is indeed the desired order by checking whether it satisfies x^r ≡ 1 (mod N). The efficient classical procedure
that extracts s and r from the approximated value of the ratio s/r is called continued
fraction expansion. We will not include a detailed specification of this algorithm. For such
purpose the reader is referred to [14].
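The continued fraction step can be sketched with Python's standard library, whose Fraction.limit_denominator finds the closest fraction with a bounded denominator (the function name and the sample numbers below are ours):

```python
from fractions import Fraction

def recover_order(phi, N, x):
    """Continued-fraction post-processing: take the closest fraction s/r to
    the measured phase with r < N, then verify x**r = 1 (mod N)."""
    r = Fraction(phi).limit_denominator(N - 1).denominator
    return r if pow(x, r, N) == 1 else None

r = recover_order(0.2500305, 15, 7)   # order of 7 mod 15 is 4
```

If the randomly obtained s shares a factor with r, the recovered denominator is only a divisor of r; the verification in the last line of the function catches this, and the whole procedure is repeated.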
Note that the complexity of the order finding algorithm is only polynomial with respect to the number of bits needed to write the input N. As we will see in the next section, this is the main building block of Shor's fast factorization algorithm. The claimed efficiency of these quantum algorithms is founded on the observation that exponentiation can also be performed efficiently. There is already an efficient classical algorithm for this purpose, namely modular exponentiation. However, because the rest of the order finding procedure operates in "quantum" format, this part of the algorithm has to be translated into this format as well.
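The classical routine in question is square-and-multiply modular exponentiation, polynomial in the bit length of the exponent (Python's built-in pow(x, e, N) computes the same thing). A sketch:

```python
def mod_exp(x, e, N):
    """Compute x**e mod N by repeated squaring; the number of modular
    multiplications is proportional to the bit length of e."""
    result, base = 1, x % N
    while e:
        if e & 1:                      # current bit of the exponent is 1
            result = (result * base) % N
        base = (base * base) % N       # square for the next bit
        e >>= 1
    return result
```

This is the part that must be compiled into reversible quantum gates so the oracle U^{2^j} can act on superpositions.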
3.3.4. Shor’s Algorithm
The results of the previous subsections are relevant to factorization, because it has
been known for a long time that if one could solve the order finding problem efficiently,
one could solve the factorization problem easily as well. Consider the composite integer N
and suppose that
\[ y^2 \equiv 1 \pmod{N}. \qquad (3.34) \]
It follows that
\[ (y+1)(y-1) \equiv 0 \pmod{N}. \qquad (3.35) \]
If y ≢ ±1 (mod N), then either gcd(y+1, N) or gcd(y−1, N) is a non-trivial factor of N. And of course, computing the greatest common divisor of two given numbers has been known to be an easy problem since Euclid's time. Consider the order r of some number x, x ≢ 1 (mod N), with respect to N; that is, x^r ≡ 1 (mod N). If r is even, then we can write (x^{r/2}+1)(x^{r/2}−1) ≡ 0 (mod N). Referring to Equation 3.35, we can conclude that either gcd(x^{r/2}+1, N) or gcd(x^{r/2}−1, N) is a factor of N.
Now all the required pieces are in place, if only we are given such an x as described in the previous paragraph. However, it turns out that we need not worry about this either, because the probability that a randomly chosen number x co-prime to N has an even order r with x^{r/2} ≢ −1 (mod N) is sufficiently large, provided that N is not the power of a single prime [14]. Fortunately, finding whether N is a prime power or not, and determining that prime when it is, are tasks for which fast classical algorithms are known [5]. Shor's algorithm, which combines all these techniques to factorize a given integer with a small probability of error, is given in Figure 3.13.
On input N
1. If N is even, return 2.
2. If N is a prime power, N = a^b, return a.
3. Pick a random x, 2 ≤ x < N. If x is not co-prime to N, return gcd(x, N).
4. Use the order finding algorithm to obtain the order r of x with respect to N.
5. If r is even and x^{r/2} ≢ −1 (mod N), then compute gcd(x^{r/2}+1, N) and gcd(x^{r/2}−1, N), and check whether either is a non-trivial factor of N. If so, return the appropriate one; otherwise, the algorithm fails.
Figure 3.13. Shor’s factorization algorithm
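As an illustration, the steps of Figure 3.13 can be sketched classically, with the quantum order-finding subroutine replaced by a brute-force stand-in and the prime-power test of step 2 omitted (all names are ours):

```python
import math
import random

def classical_order(x, N):
    """Brute-force stand-in for the quantum order-finding subroutine."""
    r, y = 1, x % N
    while y != 1:
        y, r = (y * x) % N, r + 1
    return r

def shor(N):
    """Steps 1 and 3-5 of Figure 3.13 for odd, non-prime-power N."""
    if N % 2 == 0:
        return 2
    while True:                              # retry until a factor is found
        x = random.randrange(2, N)
        if math.gcd(x, N) > 1:
            return math.gcd(x, N)            # step 3: x already shares a factor
        r = classical_order(x, N)            # step 4
        if r % 2 == 0 and pow(x, r // 2, N) != N - 1:
            for g in (math.gcd(pow(x, r // 2, N) + 1, N),
                      math.gcd(pow(x, r // 2, N) - 1, N)):
                if 1 < g < N:
                    return g                 # step 5: non-trivial factor

factor = shor(15)
```

The exponential cost is hidden entirely in classical_order; the quantum algorithm replaces exactly that call with phase estimation plus continued fractions.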
4. ARBITRARY EQUIPROBABLE SUPERPOSITIONS
As mentioned earlier, many fast quantum algorithms start by setting an n-qubit register (which initially contains |0⟩) to an equiprobable superposition of all the 2^n basis states. As we saw in the Deutsch-Jozsa algorithm, this transformation can be done easily using n Hadamard gates with zero error; that is, all the amplitudes in the resulting superposition are exactly equal to each other. We will now consider a generalized version of this task: transforming the state of a quantum register from the initial all-zero state to the equiprobable superposition of a given arbitrary subset of the set of basis states. Our input then consists of a set containing q n-bit numbers, S = {s₀, s₁, …, s_{q−1}}. The state we want to generate is
\[ |\varphi\rangle = \frac{1}{\sqrt{q}} \sum_{i=0}^{q-1} |s_i\rangle. \]
We will discuss in detail two algorithms for this task. The first one, Algorithm KSV, is
based on a circuit described by Kitaev et al. [13] for a similar problem. The second
alternative, Algorithm G, is based on Grover’s quantum search algorithm [9].
4.1. Algorithm KSV
Kitaev et al. consider a problem similar to ours in [13]: that of generating the state |η_{n,q}⟩, the equiprobable superposition of the states in Q = {|0⟩, |1⟩, …, |q−1⟩}. The input to this algorithm is just the number q, rather than the entire set Q. The output of the algorithm is an approximation to |η_{n,q}⟩,
\[ |\varphi\rangle = \sum_{i=0}^{q-1} \delta_i |i\rangle. \]
The solution that Kitaev et al. describe for this problem involves a circuit that is allowed to be constructed using elements from the infinite set of all possible two-qubit gates. It is actually an approximate implementation of the recursive formula
\[ |\eta_{n,q}\rangle = \cos\nu\; |0\rangle \otimes |\eta_{n-1,q'}\rangle + \sin\nu\; |1\rangle \otimes |\eta_{n-1,q''}\rangle, \qquad (4.1) \]
where
q' = 2^{n−1}, q'' = q − q', ν = cos^{−1}√(q'/q) if q > 2^{n−1};
q' = q, q'' = 1, ν = 0 if q ≤ 2^{n−1}.
1. Compute q', q'' and ν/π, with the latter number represented as an approximation by L binary digits.
2. Apply the rotation operator R(ν) to the first qubit of the register in which we are computing |η_{n,q}⟩:
\[ R(\nu) = \begin{pmatrix} \cos\nu & -\sin\nu \\ \sin\nu & \cos\nu \end{pmatrix}. \]
3. In the remaining n−1 qubits, construct |η_{n−1,q'}⟩ if the first qubit is zero, and |η_{n−1,q''}⟩ if it equals one.
4. Reverse the first stage to clear the supplementary memory.
Figure 4.1. Algorithm KSV', proposed by Kitaev et al. for the generation of the state |η_{n,q}⟩
Figure 4.1 summarizes this algorithm, which we will call KSV'. Consider the first q n-bit nonnegative integers. We would list them as (00…0), (00…1) and so on until (b_{n−1}b_{n−2}…b₀), where b_{n−1}b_{n−2}…b₀ is the binary representation of q−1. Then q' and q'' denote the counts of numbers in Q whose first bits are respectively 0 and 1. The algorithm rotates the first qubit (which is initially |0⟩) by angle ν in the plane defined by the |0⟩ and |1⟩ axes, where ν is calculated according to the ratio of q' and q. The rest of the register is set in similar fashion by recursively applying the same procedure until we reach the last qubit (n = 1).
The operator R(ν) is implemented approximately by making use of the following observation. Let T = 0.t₁t₂…t_L be the L-bit binary expression of the approximated ν/π value. It follows that
\[ \nu = \pi T = \pi\,( t_1 2^{-1} + t_2 2^{-2} + \cdots + t_L 2^{-L} ). \]
On the other hand, consider a rotation R_{n'}(θ₁) followed by a rotation R_{n'}(θ₂), where θ₁ and θ₂ are rotation angles and n' is the rotation axis. The combined effect is the rotation R_{n'}(θ₁+θ₂). We conclude that we can approximate a rotation by angle ν as a series of rotations by angles π/2^i, where each rotation is controlled by the corresponding bit t_i of T: for each i, if t_i is 1, the qubit under consideration gets rotated by π/2^i radians. The quantum circuit implementing this is shown in Figure 4.2.
Figure 4.2. Quantum circuit for approximating the rotation operator R(ν)
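The recursion of Equation 4.1, together with the finite-precision angle just described, can be simulated classically. The sketch below (the names and the optional bits parameter are ours) computes the amplitudes of |η_{n,q}⟩, optionally with ν/π rounded to a fixed number of binary digits:

```python
import math

def ksv_amplitudes(n, q, bits=None):
    """Real amplitudes of |eta_{n,q}>, 1 <= q <= 2**n, built by the
    recursion of Eq. 4.1.  If `bits` is given, nu/pi is rounded to that
    many binary digits, modelling the approximate rotation of Figure 4.2."""
    if n == 0:
        return [1.0]
    half = 2 ** (n - 1)
    if q > half:
        qp, qpp = half, q - half                  # split q across the branches
        nu = math.acos(math.sqrt(qp / q))
    else:
        qp, qpp = q, 1                            # |1> branch is unused here
        nu = 0.0
    if bits is not None:                          # keep `bits` digits of nu/pi
        nu = math.pi * round(nu / math.pi * 2 ** bits) / 2 ** bits
    c, s = math.cos(nu), math.sin(nu)
    low = ksv_amplitudes(n - 1, qp, bits)
    high = ksv_amplitudes(n - 1, qpp, bits) if s else [0.0] * half
    return [c * a for a in low] + [s * a for a in high]

approx = ksv_amplitudes(3, 6, bits=4)   # first-qubit angle rounds to 0.1875*pi
exact = ksv_amplitudes(3, 6)            # all six amplitudes equal 1/sqrt(6)
```

With exact angles the first six amplitudes all equal 1/√6; with four-bit angles the first qubit's rotation is perturbed and the superposition becomes unbalanced.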
The algorithm KSV' generates an approximation to the equiprobable superposition of the basis states |0⟩, |1⟩, …, |q−1⟩. However, we want to generate the equiprobable superposition of all the states s_i in an arbitrary given set S. For this purpose, we reduce our more general problem to the problem of Kitaev et al. by applying an appropriately constructed permutation operator P_S to the result of algorithm KSV'. The permutation operator P_S has the task of mapping the set Q to the set S. While any of the q! possible permutations may do, we choose in general to perform the permutation i → s_i. A possible implementation consists of q input registers named |φ_i⟩, which hold the elements of the set S, and one register named |I⟩, which holds the index of the value to be mapped to one of the elements of S. The permutation circuit has q groups of XOR gates, each of which latches the value of the input register |φ_i⟩ holding the value s_i into the result register, which initially contains |0⟩. Moreover, each XOR gate is controlled by the register |I⟩ so that it will be applied only if the value in |I⟩ is i, given that the value it latches into the result register is s_i. Stated as such, this is only a partial permutator; however, because our input is guaranteed to be an element of Q, it will be enough for our purposes. One possible circuit that implements the permutation operator P_S is shown in Figure 4.3. In this case, q = 2, so Q = {0,1}. The set to which we want Q to be permuted is S = {3,5}; that is, we want to map |0⟩ to |3⟩ and |1⟩ to |5⟩.
Figure 4.3. Quantum circuit for the partial permutation Q → S, where Q = {0,1} and S = {3,5}
On input S = {s₀, s₁, …, s_{q−1}}
1. Construct the permutation P_S corresponding to S.
2. Obtain an approximation to the state |η_{n,q}⟩ by applying algorithm KSV' with q as input.
3. Obtain an approximation to the state |φ⟩ by applying the permutation operator P_S to |η_{n,q}⟩.
Figure 4.4. Algorithm KSV
Now, all the building blocks of algorithm KSV are defined. We first apply algorithm
KSV’ to obtain an equiprobable superposition of elements in Q, and then we apply
permutation operator PS to obtain the equiprobable superposition of elements in S. The
pseudocode for algorithm KSV is shown in Figure 4.4.
4.2. Algorithm G
The second algorithm that we discuss for the task of generating equiprobable superpositions is based on the fast search algorithm proposed by Grover, which was presented in detail in Section 3.2. Recall that Grover's algorithm starts with a superposition of all the N possible basis states, and then applies k times a series of operations collectively called the Grover iteration, where
\[ k = \mathrm{round}\!\left( \frac{\pi}{4}\sqrt{\frac{N}{q}} - \frac{1}{2} \right) \]
and q is the number of solutions. Finally, the register is observed, and the measured value is outputted. The failure probability is at most q/N. In the case where q > N/2, we will need to build a new oracle with an input register containing additional qubits (this can be done easily by using the original oracle) in order to increase the search space and reduce the failure probability. In general, if we allow for a corresponding increase in the running time, we can reduce the error probability to q/(2^c N) by adding c extra qubits. A nice property of Grover's algorithm is that it can be implemented for any problem size using only gates from a finite standard set [9, 14].

An analysis [14] of the algorithm shows that the resulting superposition (just before the measurement) is a sum of two different equiprobable superpositions of the form
\[ |\varphi\rangle = a \sum_{i \in S} |i\rangle + b \sum_{j \notin S} |j\rangle. \]
The algorithm is said to succeed when a solution state is observed; otherwise it fails. Let the success probability be p, so that the failure probability is 1−p. The algorithm distributes p equally among the solution states and 1−p equally among the non-solution ones. The amplitudes of individual solution and non-solution states, a and b, are expressed in terms of p as
\[ a = \sqrt{\frac{p}{q}} \qquad (4.2) \]
and
\[ b = \sqrt{\frac{1-p}{2^n - q}}. \qquad (4.3) \]
We can therefore use Grover's algorithm for the generation of an equiprobable superposition of the elements of a given set S. First, we have to implement the "oracle" ourselves. The first implementation that comes to mind is to have f(x) = 1 iff x ∈ S, but this idea leads to an unnecessarily inefficient circuit. Instead, we build an oracle which returns 1 if its input x is less than q, and 0 otherwise. This single comparison can be implemented efficiently. If we run Grover's algorithm with this oracle and stop without a measurement, the state of the resulting register |r_{n−1} r_{n−2} … r₀⟩ will be given as
\[ |\varphi_Q\rangle = a \sum_{i \in Q} |i\rangle + b \sum_{i \notin Q} |i\rangle, \]
where Q is as defined in the previous subsection. To get from this state to the superposition
\[ |\varphi\rangle = a \sum_{i \in S} |i\rangle + b \sum_{j \notin S} |j\rangle, \]
we use the permutation P_S that maps i to s_i, as discussed in the previous subsection. At the end of the Grover iterations, we apply the permutation P_S to the result state |φ_Q⟩, and we are done.
Figure 4.5 shows a summary of Algorithm G.
On input S = {s₀, s₁, …, s_{q−1}}
1. Construct the "oracle" corresponding to Q.
2. Construct the permutation P_S corresponding to S.
3. Perform Grover's algorithm up to the measurement to obtain |φ_Q⟩.
4. Perform the permutation P_S on |φ_Q⟩ to obtain the state |φ⟩.
Figure 4.5. Algorithm G
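The pre-measurement state |φ_Q⟩ can be simulated directly on the amplitude vector, using the comparison oracle f(x) = 1 iff x < q (a sketch with our own names; the permutation P_S step is omitted):

```python
import math

def grover_state(n, q):
    """Amplitude vector just before measurement, using the comparison
    oracle f(x) = 1 iff x < q."""
    N = 2 ** n
    amps = [1 / math.sqrt(N)] * N                   # uniform superposition
    theta = math.asin(math.sqrt(q / N))
    k = round(math.pi / (4 * theta) - 0.5)          # optimal iteration count
    for _ in range(k):
        amps = [-a if x < q else a for x, a in enumerate(amps)]   # oracle
        mean = sum(amps) / N
        amps = [2 * mean - a for a in amps]         # inversion about the mean
    return amps

state = grover_state(3, 2)   # q = N/4: a single iteration is exact
```

For q = N/4 the search is exact: one iteration moves all probability onto the solution states, with equal amplitudes within each class, which is precisely the equidistribution property discussed below.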
4.3. Comparison of the Algorithms
The general specification of Algorithm KSV (unrealistically) assumes that the
infinite set of R(π/2^i) gates, for any i, is available to us. In this respect, Algorithm G is
superior, as it uses a finite set of standard gates. When required to work with the same gate
set as G, KSV would necessarily make some approximation errors in addition to those that
will be analyzed below. Algorithm KSV approximates arbitrary rotations by the (possibly
irrational) angle ν by a series of controlled rotations by angles of the form π2^{−i}, introducing
a discrepancy between the actual angle ν and the implemented angle ν’. Because of this
discrepancy, the probabilities are not distributed perfectly equally among the solution
states. For example, when preparing the state |η_{3,6}⟩, the ideal rotation angle for the first qubit is approximately 0.1959π radians. With four bits of accuracy for storing the value ν/π, the rotation is actually implemented by an angle of nearly 0.1875π radians; using seven bits, the resulting rotation angle would be nearly 0.1953π radians. Assuming no discrepancy in the remaining two qubits, the resulting superpositions are respectively
\[ |\varphi_2\rangle = \begin{bmatrix} 0.4157 & 0.4157 & 0.4157 & 0.4157 & 0.3928 & 0.3928 & 0 & 0 \end{bmatrix}^T \]
and
\[ |\varphi_2\rangle = \begin{bmatrix} 0.4088 & 0.4088 & 0.4088 & 0.4088 & 0.4072 & 0.4072 & 0 & 0 \end{bmatrix}^T. \]
One extreme situation is the disappearance of some of the s_i altogether from the resulting superposition. Consider the generation of a superposition |η_{n,K}⟩ where K = 2^{n−1} + 1. The rotation angle for the first qubit is
\[ \nu = \cos^{-1}\sqrt{\frac{2^{n-1}}{2^{n-1}+1}}. \]
For even moderate values of n, if the number of accuracy bits is not big enough, this angle will be approximated to zero, and therefore the first qubit will not be rotated at all. As a result, the state |K−1⟩ will not appear in the resulting superposition. Consider the following numerical example. Let n = 30 and q = 2^29 + 1. The rotation angle for the first qubit is calculated as
\[ \nu_1 = \cos^{-1}\sqrt{\frac{2^{29}}{2^{29}+1}} \approx 0.00004316 \text{ radians}, \]
so that ν₁/π ≈ 0.00001374. The first 13 bits in the register holding the value ν₁/π are 0. This means that with 13 bits of accuracy or less, the value for ν₁/π would be approximated to 0, the first qubit would not be rotated at all, and we would have no chance to observe the state |2^29⟩. Theoretically, we should be able to observe this state with probability equal to that of any other state starting with 0 as the first qubit.
On the other hand, Algorithm G does not pose such problems, because it distributes
the success probability perfectly equally among solution states, and also distributes failure
probability perfectly equally among the non-solution states. The significance of this
property will be exhibited in the next section.
Algorithm KSV', and therefore our algorithm KSV, does not produce any unwanted states (i.e. any numbers not in S) in the resulting superposition. In order to see this, note that at the basis step, when the algorithm arrives at the last qubit, the two possible states are |η_{1,1}⟩ or |η_{1,2}⟩, corresponding to respective rotation angles of 0 and π/4. The bit representation of ν/π for these angles is finite, and therefore results in no approximation errors. On the other hand, Algorithm G is based on the Grover iteration, which increases the amplitudes of the solution states and decreases the amplitudes of the non-solution ones. This process goes on with each iteration until we pass a critical value. However, even with the optimal iteration count, if we do not have access to perfectly implemented gates of possibly irrational rotation angles, the total probability of the solution states does not sum to one, and therefore the probability of the non-solution states is not zero.
4.4. A New Generalization of the Deutsch-Jozsa Algorithm
Several generalizations of the Deutsch-Jozsa algorithm in different “directions” exist
in the literature: Cleve et al. [21] handle oracles with multiple-bit outputs. Chi et al. [22]
show how to distinguish between constant and “evenly distributed” functions. Bergou et al.
[23] distinguish between balanced functions and a subset of the class of “biased” functions.
Holmes and Texier [24] determine whether f is constant or balanced on each coset of a subgroup of [0, 2^n−1]. In this section we will provide a different generalization, where the size of the domain on which the function interests us is not necessarily a power of two.
As we have seen in Section 3.1, the Deutsch-Jozsa algorithm decides whether an
unknown function is constant or balanced by making a single oracle call, whereas any
exact classical algorithm would have to make an exponential number of calls. By making
use of the algorithms for generation of equiprobable superpositions presented in Sections
4.1 and 4.2, we can construct a probabilistic algorithm for the following generalized
version of the Deutsch-Jozsa problem: As before, f is a function from {0,1}n to {0,1}. Let S
be a given subset of q numbers in the interval U_n = [0, 2^n−1]. The problem is subject to the following promises:
• q is even and greater than 2^{n−1}
• the function f is balanced on S' = U_n − S
• function f is either balanced or constant on S
We are required to find whether f is constant or balanced on S by making the
minimum possible number of oracle calls. Note that by letting S = Un, we obtain the
original Deutsch-Jozsa problem as a special case. The algorithm we propose for this problem is similar to the one described in Section 3.1, with the single difference that instead of the n Hadamard gates that generate the equiprobable superposition of all numbers in U_n, we will use an algorithm for generating the equiprobable superposition of only the numbers in S.
The overall simplified circuit is shown in Figure 4.6.
Figure 4.6. Circuit for the new generalization of Deutsch-Jozsa algorithm
1. As in the original algorithm, the initial state of the registers is given as
\[ |\varphi\rangle \otimes |\psi\rangle = |0\rangle \otimes |1\rangle. \]
2. In the first stage, we apply an algorithm for generating the equiprobable superposition of the input set S in the first register, and a Hadamard gate to the final qubit. For the time being, let us assume that this algorithm produces a perfect superposition with no unwanted elements. The overall state of the registers is then
\[ |\varphi\rangle \otimes |\psi\rangle = \left[ \frac{1}{\sqrt{q}} \sum_{x \in S} |x\rangle \right] \otimes \left[ \frac{|0\rangle - |1\rangle}{\sqrt{2}} \right]. \]
3. In the next stage, we make the oracle call, which affects only the top register:
\[ |\varphi\rangle \otimes |\psi\rangle = \left[ \frac{1}{\sqrt{q}} \sum_{x \in S} (-1)^{f(x)} |x\rangle \right] \otimes \left[ \frac{|0\rangle - |1\rangle}{\sqrt{2}} \right]. \]
Since the joint state of the top register and the result qubit is decomposable, we will consider only the top register in the last step.
4. At the last stage, we apply the Hadamard transform to the top register to obtain
\[ |\varphi\rangle = \frac{1}{\sqrt{2^n q}} \sum_{z=0}^{2^n - 1} \left[ \sum_{x \in S} (-1)^{x \cdot z + f(x)} \right] |z\rangle. \]
5. Measure the top register to observe the value r. If r is 0, then the algorithm outputs
“constant”; otherwise, it outputs “balanced”.
Let p(i) denote the probability that i is observed upon this final measurement. Note that the amplitude of the state |0⟩ is
\[ \alpha_0 = \frac{1}{\sqrt{2^n q}} \sum_{x \in S} (-1)^{f(x)}. \qquad (4.4) \]
If the function f is balanced on S, this summation vanishes, leaving, as expected, α₀ = 0. So in the case of a balanced function, the probability of observing 0 in the top register is zero. In the case of a function f constant on S,
\[ \alpha_0 = (-1)^{f_S} \sqrt{q/2^n}, \qquad (4.5) \]
where f_S denotes the constant value of the function. No matter whether f_S is 0 or 1,
\[ p(0) = |\alpha_0|^2 = q/2^n. \qquad (4.6) \]
So, if we could obtain a perfectly equiprobable superposition of S in step 2, then the
probability of correctly observing the state |0⟩ in the case of a constant function on S is q/2^n. The promise that q > 2^{n−1} ensures that the correctness probability exceeds ½. This is not an exact algorithm, as was the case with the original Deutsch-Jozsa algorithm. However, it has the nice "one-sided error" property that when the function is balanced, it is correctly classified with certainty. Note that the best (probabilistic) classical algorithm for this problem also has one-sided error, but requires two oracle calls at a minimum.
How would our generalized Deutsch-Jozsa circuit behave if we substituted the ideal
superposition generator in the first stage with one of the actual circuits examined in this
chapter, none of which produce the desired perfect equiprobable superposition? Let us
consider the KSV algorithm first. The actual superposition produced by this module is
|\varphi\rangle = \sum_{s_i \in S} \delta_i |s_i\rangle,

where the \delta_i's are, because of approximation problems, near, but not equal to, 1/\sqrt{q}, and are therefore also not all equal to each other. As a result, the amplitude of |0\rangle just before the
measurement will be
\alpha_0 = \frac{1}{\sqrt{2^n}} \sum_{x_i \in S} (-1)^{f(x_i)} \delta_i. \qquad (4.7)
For a balanced f, we expect the amplitude of |0\rangle to be exactly zero after the oracle call. However, because the amplitudes of the desired states are not all equal to each other, we will not have complete cancellation, and therefore the probability of observing the state |0\rangle will be a (small) nonzero number. As a result, integrating KSV as the superposition generation module makes our generalized Deutsch-Jozsa algorithm lose its one-sided error property. Consider the following numerical example.
Let n = 3, S = {0,1,2,3,4,5} and f(x) = 0 if x ∈ {0,1,2,6}, otherwise f(x) = 1. Consider using 5-bit registers for the value of υ/π. Then, because of approximation errors, the final superposition generated by KSV is

[0.4157 \;\; 0.4157 \;\; 0.4157 \;\; 0.4157 \;\; 0.3928 \;\; 0.3928 \;\; 0 \;\; 0]^T,

whereas the ideal state would be

[0.4082 \;\; 0.4082 \;\; 0.4082 \;\; 0.4082 \;\; 0.4082 \;\; 0.4082 \;\; 0 \;\; 0]^T.

Under these circumstances, the amplitude of state |0\rangle in the result register is not 0 as would be expected, but rather 0.0161, yielding an observation probability of approximately 0.0003.
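These numbers can be reproduced from Equation 4.7 in a few lines (an illustrative Python check, not thesis code), using the amplitude values quoted above:

```python
import math

# KSV amplitudes over S = {0,...,5}; the trailing two basis states
# have amplitude 0 and are omitted from the sum.
delta = [0.4157, 0.4157, 0.4157, 0.4157, 0.3928, 0.3928]
f = lambda x: 0 if x in (0, 1, 2, 6) else 1    # balanced on S

# Equation 4.7 with n = 3: alpha_0 = (1/sqrt(2^n)) * sum_i (-1)^f(x_i) * delta_i
alpha0 = sum((-1) ** f(x) * d for x, d in enumerate(delta)) / math.sqrt(8)
p0 = alpha0 ** 2    # ~0.0161 and ~0.0003 respectively
```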
Interestingly, integrating Algorithm G as the superposition generation module solves
this problem. We know that G produces the superposition
|\varphi\rangle = \sum_{i \in S} a\,|i\rangle + \sum_{i \notin S} b\,|i\rangle,

where a and b are defined as in Equations 4.2 and 4.3. Note that a is possibly slightly less than 1/\sqrt{q}, therefore allowing b to assume a (small) nonzero value. By a straightforward
extension of the analysis at the beginning of this section, it can be seen that the amplitude
of 0 just before the measurement will be
\alpha_0 = \frac{1}{\sqrt{2^n}} \Bigl[ \sum_{x \in S} a\,(-1)^{f(x)} + \sum_{x \notin S} b\,(-1)^{f(x)} \Bigr]. \qquad (4.8)
But since f is balanced on S' = U_n − S, the second summation vanishes, and the probability of measuring |0\rangle, given that function f is balanced over S, is zero. So the incorporation of
Algorithm G preserves the one-sided error property of the generalized Deutsch-Jozsa
algorithm.
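A quick numerical check of Equation 4.8 confirms that the cancellation survives a ≠ 1/√q, as long as f is balanced on both S and its complement. This is a sketch with illustrative values of a and b, not taken from the thesis:

```python
import math

n, N = 3, 8
S = [0, 1, 2, 3, 4, 5]
f = lambda x: 0 if x in (0, 1, 2, 6) else 1    # balanced on S and on S' = U_n - S

q = len(S)
a = 0.4080                                 # slightly below 1/sqrt(6) ~ 0.4082
b = math.sqrt((1 - q * a * a) / (N - q))   # chosen so the state is normalized

# Equation 4.8: both sums cancel term by term for a balanced f
alpha0 = (sum(a * (-1) ** f(x) for x in S) +
          sum(b * (-1) ** f(x) for x in range(N) if x not in S)) / math.sqrt(N)
```

The amplitude comes out as zero (up to floating-point noise) for any choice of a and b, which is the one-sided error property in action.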
When function f is constant, the probability of measuring |0\rangle will be pq/N rather than q/N, which would be the case with the ideal algorithm, but we can fix the number of Grover iterations and use ancilla bits in the way explained in the next paragraph to ensure that this probability never falls below ½, ensuring the correctness of our probabilistic algorithm. Because q > 2^{n-1} = N/2, we need to run the Grover algorithm with c extra qubits, c ≥ 1. The failure probability is then at most q/(CN), where C = 2^c, so the success probability p is at least 1 − q/(CN). Therefore, the probability of observing |0\rangle in the first register at the end of the
algorithm in case of a constant function is
p_0 = p\,\frac{q}{N} = \frac{q}{N} \Bigl( 1 - \frac{q}{2^c N} \Bigr). \qquad (4.9)
It can be shown that for c = 2 and q > 0.59N, p_0 becomes greater than ½. On the other hand, the addition of two qubits to the Grover algorithm will slow down its running time by a constant factor of 4. In fact, for any q > 0.5N, by adding sufficiently many ancilla qubits, we can make the minimum success probability greater than ½. Figure 4.7 shows the plot of the minimum success probability versus the ratio q/N for different values of c.
Figure 4.7. Plot of minimum success probability versus q/N for different values of c
(green: c=0; magenta: c=1; red: c=2; blue: c=3; cyan: c=100)
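The behavior plotted in Figure 4.7 can be explored numerically from Equation 4.9. In the following sketch (our own helper, with r standing for the ratio q/N), two ancilla qubits already suffice at q = 0.59N, while no ancillas do not:

```python
def p_success(r, c):
    """Equation 4.9 with r = q/N: p0 = r * (1 - r / 2**c)."""
    return r * (1 - r / 2 ** c)

ok_with_ancillas = p_success(0.59, 2)   # ~0.503, just above 1/2
ok_without = p_success(0.59, 0)         # ~0.242, well below 1/2
```

For any fixed r > 0.5, increasing c drives p_0 toward r > ½, matching the claim that sufficiently many ancilla qubits always suffice.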
From a complexity point of view, Algorithm KSV is superior to Algorithm G. The running time of Algorithm KSV is polynomial in n, whereas Algorithm G has the oracle complexity of the Grover search algorithm, \Theta(\sqrt{N}), which is still exponential in n.
5. A PROGRAMMING INFRASTRUCTURE FOR REVERSIBLE
AND QUANTUM COMPUTING
If quantum computers were available today, we would have no more than a few
quantum algorithms to run on them. On the other hand, we have seen that arbitrary
classical algorithms cannot be run as is on a quantum computer simply because of their
irreversible nature. First we need to convert them to reversible equivalents, via a process we call reversification, and only then can we run them on a quantum computer. In this
chapter, we will describe a programming infrastructure devoted to the reversification of
classical irreversible algorithms. Furthermore, we will show that the set of primitive gates that we allow within the structure of reversified functions in fact provides a universal model for quantum computing.
5.1. Overview
The primary purpose of this infrastructure is to make sure that a classical procedure written with no concern about reversibility issues is reversified (i.e. converted into a
reversible equivalent) using a specific set of primitive operations. For the purpose of the
classical algorithm, we will consider a function written in Matlab. We have chosen Matlab
because vector and matrix operations, which are so frequently encountered in quantum
computing, are carried out very easily and efficiently in this platform. In principle, the
same approach could have been adopted for any programming standard.
A block diagram of the operational flow of the reversifier is shown in Figure 5.1. The
source file of the possibly irreversible function is fed as input to the reversifier program,
which first transforms it into a reversible function. A second parsing of the reversible
program prints out the quantum circuit that would implement the function. The circuit specification can be supplied in terms of high-level building blocks, such as ADD, SUBTRACT, etc., or at the low level of primitive gates such as CNOT.
[Figure 5.1 block diagram: Classical Function → Reversible Function → High Level Gate Representation (ADD, SUB, etc.) and Low Level Gate Representation (NOT, CNOT, G, etc.)]
Figure 5.1. General flow of the reversifier program
Note that the resulting reversible code is a perfectly valid Matlab function.
Furthermore, because of the way that classical input and classical operators are translated,
it is possible to simulate such functions with quantum inputs as well as with classical
inputs. This may be particularly useful in situations where a classical algorithm is used as
part of a larger quantum algorithm. We faced this situation in the order-finding algorithm of Section 3.3.3, where we noted that an efficient implementation of that algorithm depends on an efficient translation of the exponentiation problem into "quantum" format.
5.2. Register Representation
We have seen in Section 2.7 that classical computation can be performed reversibly if we allow for an overhead in the memory space that we are allowed to use. In this context, we could use classical bits and classical registers to represent variables and constants to reversify classical functions. However, in this work, we view reversibility as an intermediate stage toward the domain of quantum computing. We do not simply want to make our algorithms reversible; we also want to be able to use them on quantum computers. For this purpose,
we use the quantum approach for representing information and processing variables. While
this gives us the power of quantum superposition and quantum parallelism, remember that
we currently do not have a quantum computer, and will be trying to “run” the resulting
quantum program with a classical computer, at least for testing purposes. We already know from Sections 2.1 and 2.2 that a quantum system of n qubits can be represented with 2^n complex parameters, which is equivalent to 2^{n+1} real parameters. This exponential increase in the number of parameters makes registers of large size impossible to simulate with reasonable amounts of classical resources. For example, to fully describe a 32-qubit register, one would need 2^{33} real parameters, a number which is bigger than the number of bits in a 512 MB RAM unit. Things turn from bad to worse when we consider the implementation of quantum operators. An arbitrary operator acting on n-qubit registers can be fully represented by a 2^n × 2^n matrix with complex entries, for a total of 2^{2n+1} real parameters. The maximum register size n for which such a matrix can be instantiated in version 6.5 of Matlab is 12.
Therefore, a quantum register r that is composed of n qubits will be represented in
the background by a vector of 2^n (possibly complex) entries, that is,

r = \begin{pmatrix} \alpha_0 \\ \vdots \\ \alpha_{N-1} \end{pmatrix},

where \alpha_i = a_i + i\,b_i and N = 2^n. The complex parameters \alpha_i are also subject to the normalization condition

\sum_{i=0}^{N-1} |\alpha_i|^2 = 1. \qquad (5.1)

For a more detailed coverage of the mathematical theory behind this representation, refer to Sections 2.1 and 2.2.
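In a classical simulation this representation is simply an array of amplitudes. A minimal Python sketch of the idea (our own illustration; the thesis implementation uses Matlab column vectors):

```python
import math
import random

def random_register(n, seed=0):
    """Simulate an n-qubit register as a 2**n-entry complex vector,
    normalized so that sum_i |alpha_i|**2 = 1 (Equation 5.1)."""
    rng = random.Random(seed)
    amps = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(2 ** n)]
    norm = math.sqrt(sum(abs(a) ** 2 for a in amps))
    return [a / norm for a in amps]

r = random_register(3)   # 8 complex amplitudes for a 3-qubit register
```

The exponential cost is visible immediately: each additional qubit doubles the length of the vector.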
5.3. Structure of the Classical Function
We have already pointed out that the primary aim of this infrastructure is to transform classical algorithms into a format in which they can readily be run on a quantum computer. We also showed that at an intermediate level of representation, such algorithms must first be made reversible. Classical computing is not reversible in principle. Referring
to Definition 2.2, a function is said to be reversible if we can compute its inputs uniquely
from its outputs. In fact, reversible functions account for a tiny portion of the set of all
possible mathematical functions. The basic arithmetic operators such as addition,
subtraction, multiplication etc., are all irreversible. Even more importantly from the
computer perspective, most boolean logic functions, such as AND, OR, NAND etc., are
irreversible too. However, we saw in Section 2.7 that any function has a reversible
equivalent if we allow for an increase in the number of input and output bits. While this
increase is completely redundant in the sense that it does not provide any useful
information with respect to the original function, it is necessary in order to uniquely
reproduce the input from the output.
The input to the reversifier is the Matlab source code of a classical and possibly irreversible function. The reversifier is supposed to rewrite it in an equivalent but reversible format. To achieve this feat, each statement of the classical algorithm must be rewritten in
a reversible form. The next three sections explain the structure of the classical program that
we consider for reversification.
5.4. Classical Operations
The classical operations that are considered in this infrastructure can be classified
under three main groups, namely, arithmetic operations, logical operations and comparison
operators. Arithmetic operations include increment, decrement, addition, subtraction and multiplication. We have not covered division for practical reasons. Logical operations
include AND, OR and NOT. Comparison operators include testing for equality, inequality,
greater than, less than, etc. The obvious distinction between the three classes is that the
arithmetic operators take two numerical operands and return a numerical result, the logical
operators take two logical operands and return a logical result, and the comparison
operators map numerical inputs to logical results. While there are many ways to make a
function reversible, in this implementation we will adopt the method discussed in Section
2.7.3. In brief, it states that for any irreversible function of the form

f_{IR}: (x_1, \ldots, x_n) \to y \qquad (5.2)

which takes n inputs and maps them to one output, one can synthesize a reversible function

f_{REV}: (x_1, \ldots, x_n, y) \to (x_1, \ldots, x_n, y + f_{IR}(x_1, \ldots, x_n)). \qquad (5.3)

This implementation assumes that the arguments x_i and y are numerical values. If we are dealing with boolean functions, the equivalent reversible function has the form

h_{REV}: (x_1, \ldots, x_n, y) \to (x_1, \ldots, x_n, y \oplus h_{IR}(x_1, \ldots, x_n)). \qquad (5.4)
This idea is illustrated in Figure 5.2.
Figure 5.2. Reversible synthesis of irreversible arithmetic and logical operations
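The synthesis of Equations 5.3 and 5.4 can be sketched as a pair of generic wrappers. This is our own Python illustration of the scheme (the thesis reversifier operates on Matlab source, and the helper names here are hypothetical):

```python
def reversify_arith(f_ir):
    """Equation 5.3: pass the inputs through and add the result of the
    irreversible function into an extra register c, modulo N = 2**n."""
    def f_rev(xs, c, n):
        N = 2 ** n
        return xs, (c + f_ir(*xs)) % N
    return f_rev

def reversify_bool(h_ir):
    """Equation 5.4, the boolean analogue: XOR the result into c."""
    def h_rev(xs, c):
        return xs, c ^ h_ir(*xs)
    return h_rev

add_rev = reversify_arith(lambda a, b: a + b)
and_rev = reversify_bool(lambda a, b: a & b)
```

With c = 0 the extra output carries exactly the original function value, and since the inputs survive, the wrapper can always be undone by subtracting (or XOR-ing) the function value back out.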
5.4.1. Arithmetic Operations
We consider only integer values in the range 0..N−1, where N = 2^n for some integer n. Theoretically, this is the same as real computers, at least on a qualitative basis. We might be tempted to think that computers represent real numbers; however, each such number is actually just an approximation to the nearest rational number that is allowed by the precision of that computer. On the other hand, it is well known that the set of rational numbers has the same cardinality as the set of integers. The fact that we supply an upper
limit for values that we are able to represent should not come as a surprise either.
Conventional computers have the same kind of limitation and will always have it as long
as the memory of the computer is finite.
The arithmetic operators that we consider in this infrastructure are addition, subtraction and multiplication. All these functions are of the form

f: N \times N \to N;

they take two inputs and return one output. Throughout this chapter, we will use the letter N to denote two different concepts. N may denote the number of different basis states in which an n-qubit register may be observed, that is, N = 2^n. It may also be used to denote the set of natural numbers. The specific meaning should become evident from the context. Additionally, we use two already reversible operators, increment and decrement by n. Their functional representation is of the form

f: N \times N \to N \times N.
The arithmetic operations are all implemented in modulo N arithmetic, since we cannot
represent values greater than or equal to N in an n-qubit register.
5.4.1.1. Addition Operation. The irreversible addition operation implements the function f_+: (a, b) \to a + b. To make this function reversible, we should transmit both inputs as outputs. However, in this case we are still left with an unequal number of outputs (3) and inputs (2). For this reason, we add an input c whose value is to be added to the sum of the original addends. The final form of the reversible function is

f_+^R: (a, b, c) \to (a, b, c + a + b).

Remember that the addition in the third output is implemented modulo N. Note that by setting c to 0, we obtain exactly the sum of a and b in the third output, as illustrated in
Figure 5.3.
Figure 5.3. Reversible implementation of addition
In Section 5.2, we decided to represent the input registers as 2^n-entry column vectors (where n is the number of bits in the register) so that they would be compatible with the definition of quantum registers. Following this choice of input, we represent the reversible classical operators as 2^n × 2^n matrices.
When considering such operators in the domain of quantum computing, it can be shown that they map any basis state |i\rangle to just another basis state |j\rangle. As such, they can be represented as permutation operators. A permutation operator is defined by a matrix in which each row and each column has exactly one entry set to 1 and all other entries set to 0. The (i+1)st column represents the action of the operator on the basis state |i\rangle. Let us consider the addition operator for 1-bit inputs, where no carry out is produced. In cases where the sum cannot be represented with 1 bit, the operation results in an overflow and the sum is set to the least significant bit.
The action of one bit reversible addition operation is shown in Table 5.1.
Table 5.1. The action of reversible one bit addition
Input Output
a b c a b c
0 0 0 0 0 0
0 0 1 0 0 1
0 1 0 0 1 1
0 1 1 0 1 0
1 0 0 1 0 1
1 0 1 1 0 0
1 1 0 1 1 0
1 1 1 1 1 1
It is straightforward to verify that the matrix representation of this operation is

M = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix},

which corresponds to the permutation operator P_+ = [0\;1\;3\;2\;5\;4\;6\;7]. The same
approach is taken for all other reversible operators defined below.
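Both the permutation P_+ and the matrix M can be generated mechanically from Table 5.1. A short Python sketch (our own helper names, not thesis code):

```python
def reversible_add_1bit(a, b, c):
    """Table 5.1: (a, b, c) -> (a, b, (c + a + b) mod 2)."""
    return a, b, (c + a + b) % 2

# Index each triple (a, b, c) as the 3-bit number abc and read off the
# permutation that the operation induces on the basis states.
perm = []
for i in range(8):
    a, b, c = (i >> 2) & 1, (i >> 1) & 1, i & 1
    a2, b2, c2 = reversible_add_1bit(a, b, c)
    perm.append((a2 << 2) | (b2 << 1) | c2)

# Column i of the matrix has its single 1 in row perm[i].
M = [[1 if perm[col] == row else 0 for col in range(8)] for row in range(8)]
```

The resulting permutation is [0, 1, 3, 2, 5, 4, 6, 7], and every row and column of M contains exactly one 1, confirming that the operation is a permutation operator.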
5.4.1.2. Subtraction Operation. The irreversible subtraction operation implements the function f_-: (a, b) \to a - b. In the same way as with addition, we can synthesize this function by using a reversible function with three inputs and three outputs, f_-^R: (a, b, c) \to (a, b, c + a - b). Remember that the operation c + a - b in the third output is implemented as a modulo N addition. By setting c to 0, we obtain the difference of a and b in the third output.
5.4.1.3. Multiplication Operation. The irreversible multiplication operation implements the function f_\times: (a, b) \to a \times b. The reversible function for multiplication is then f_\times^R: (a, b, c) \to (a, b, c + a \times b). As with the other operations, by setting c to 0, we obtain the product of a and b.
5.4.1.4. Increment Operation. Unlike addition, subtraction and multiplication, the increment operation is reversible, so it needs no transformation. It implements the function f_{INC}: (a, d) \to (a + d, d). From the arithmetic point of view, there is no need for an operation such as increment; as can be seen from its definition, increment is simply an addition. However, because of its widespread use in programming, it is convenient to consider it separately. This operation corresponds directly to the C language statement a += d;
5.4.1.5. Decrement Operation. Just like increment, decrement is another reversible operation. It implements the function f_{DEC}: (a, d) \to (a - d, d). Decrement can be regarded as a special case of subtraction where the same variable appears both as the result and as the number to be subtracted. In fact, decrement is to subtraction as increment is to addition. It also corresponds to the C statement a -= d;
5.4.2. Logical Operations
Logical operations combine logical input values to produce logical output values. In
this context, the word logical will be used to mean a state space composed of two values,
true and false, just like the keyword boolean is used in the Java programming language. In
this context, logical operations implement boolean functions. There are many ways logical
values can be combined to yield other logical values, that is, there are many boolean
functions. However, as we mentioned in Section 2.7.2, any such boolean function can be
synthesized by using only operations from a standard set, which is therefore called
universal. Examples of such universal sets are {AND, OR, NOT} and {NAND}.
In the structure of the reversifier, we allow the use of the standard logical operations AND, OR, and NOT. Inclusion of other operations is straightforward but possibly of little use. If we use the symbol B to denote the set {0,1}, where 0 represents false and 1 represents true as is customary, the functional representation of AND and OR is of the form f: B \times B \to B. For NOT we have f: B \to B.
When dealing with logical operations, we do not have to worry about overflow conditions. As with arithmetic operations, the general strategy for reversification is to pass the two inputs through as outputs and to add one more input, which is combined with the original result of the irreversible function. We XOR this input with the original result. In this way, given the two input values, we can evaluate the original irreversible result, and from there we can recover the third input. Note that with arithmetic functions we used addition instead of XOR.
5.4.2.1. AND Operation. The irreversible AND operation implements the function h_\wedge: (a, b) \to a \wedge b. This function can be synthesized by the reversible logical function h_\wedge^R: (a, b, c) \to (a, b, c \oplus (a \wedge b)). By setting c to 0, we obtain the AND of a and b in the third output, as shown in Figure 5.4.
Figure 5.4. Reversible implementation of AND
5.4.2.2. OR Operation. Just as with AND, the irreversible logical operation OR, which implements the function h_\vee: (a, b) \to a \vee b, can be synthesized by making clever use of the reversible function h_\vee^R: (a, b, c) \to (a, b, c \oplus (a \vee b)). Again, setting c to 0, we obtain the OR of a and b in the third output.
5.4.2.3. NOT Operation. Unlike AND and OR, NOT is a reversible operation and needs no further reversification. It implements the function h_\neg: a \to \neg a.
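The XOR strategy used for AND and OR has a pleasant consequence: each reversified operation is its own inverse, since XOR-ing the same value in twice cancels it. A minimal Python check (our own illustration):

```python
def and_rev(a, b, c):
    """Reversible AND: (a, b, c) -> (a, b, c XOR (a AND b))."""
    return a, b, c ^ (a & b)

def or_rev(a, b, c):
    """Reversible OR: (a, b, c) -> (a, b, c XOR (a OR b))."""
    return a, b, c ^ (a | b)

# Applying either operation twice restores the original inputs,
# which is exactly the property that makes the construction reversible.
for op in (and_rev, or_rev):
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                assert op(*op(a, b, c)) == (a, b, c)
```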
5.4.3. Comparison Operations
In the previous subsections, we considered functions that mapped numerical inputs to numerical outputs or logical inputs to logical outputs. In this sense, the transformation was within the kind. Comparison operations are somewhat different, since they map pairs of numerical inputs to a logical value. They actually test whether the input pair satisfies a property or not. We will cover the most frequently used comparisons: equality, inequality, less than, less than or equal, greater than, and greater than or equal. All these operations can be expressed functionally as f: N \times N \to B. Needless to say, they are irreversible. In order to reversify them, the same approach as with logical operations is followed, since the result of a comparison operation is again a logical value.
5.4.3.1. Equality Comparison. The equality comparison operator implements the function g_=: (a, b) \to (a = b). It is reversified by using the function g_=^R: (a, b, c) \to (a, b, c \oplus (a = b)). Setting c to 0 leaves the third output register with the result of the comparison.
5.4.3.2. Inequality Comparison. The inequality comparison operator is the logical inverse of the equality comparison. It implements the function g_\neq: (a, b) \to (a \neq b). It is synthesized by the reversible function g_\neq^R: (a, b, c) \to (a, b, c \oplus (a \neq b)). Setting c to 0 leaves the third output register with the result of the comparison.
5.4.3.3. "Less than" Comparison. The "less than" comparison operator implements the function g_<: (a, b) \to (a < b). It can be synthesized by the reversible function g_<^R: (a, b, c) \to (a, b, c \oplus (a < b)). Setting c to 0 leaves the third output register with the result of the comparison.
5.4.3.4. "Less than or equal to" Comparison. The "less than or equal to" comparison operator implements the function g_\le: (a, b) \to (a \le b). It is synthesized by the reversible function g_\le^R: (a, b, c) \to (a, b, c \oplus (a \le b)). Setting c to 0 leaves the third output register with the result of the comparison.
5.4.3.5. "Greater than" Comparison. The "greater than" comparison operator implements the function g_>: (a, b) \to (a > b). The corresponding reversible synthesizing function is g_>^R: (a, b, c) \to (a, b, c \oplus (a > b)). Setting c to 0 leaves the third output register with the result of the comparison.
5.4.3.6. "Greater than or equal to" Comparison. The "greater than or equal to" comparison operator implements the function g_\ge: (a, b) \to (a \ge b). It is synthesized by the reversible function g_\ge^R: (a, b, c) \to (a, b, c \oplus (a \ge b)). Setting c to 0 leaves the third output register with the result of the comparison.
5.5. Control Structures
Apart from simple operations like the ones described in the previous section, two other constructs, namely conditional statements and loops, are essential in classical
algorithms. This section is devoted to explaining the implementation of these constructs in
our infrastructure, and the additional restrictions that are imposed on them with respect to
reversibility.
5.5.1. Conditional Statement
Conditional statements are indispensable from a programming point of view, because they enable us to test various conditions and to act appropriately in each case. The general form of a conditional statement can be thought of as in Figure 5.5.
if (condition) {if-statements} else {else-statements} end-if
Figure 5.5. Conditional statements
As can be seen in Figure 5.6, conditional statements introduce two possible branches
of execution for the program, depending on whether the condition is satisfied or not.
condition
if-statements else-statements
true false
Figure 5.6. Schematic representation of a conditional statement
5.5.1.1. Reversible Conditional Statements. Stated as in Figure 5.5, we cannot say whether a conditional statement is reversible or not. From the point of view of algorithms, reversibility is the capability of the algorithm to run backwards, from the last statement to the first one; in this backward pass, it is the reverse version of each individual statement that is run. During such a reverse run, assuming all statements up to the conditional block have run successfully, we would face the dilemma of which path to take: the if-branch or the else-branch. In order to be able to say which path the original version of the algorithm took, we could try to evaluate the condition again. However, if in the original version the variables within the condition are modified inside the conditional block, then evaluation of the condition after the conditional block may not yield the same value as the original evaluation. If we introduce the restriction that variables appearing in the condition cannot be modified inside the conditional block, then we are guaranteed that the values of those variables are the same prior to entering and after exiting the conditional block. We say that those variables are invariant with respect to the conditional block. If this is the case, then evaluating the condition after the block and before the block yields the same result, and this solves our "which path to take" dilemma when
running the algorithm in the reverse direction. Consider the piece of pseudocode in Figure 5.7 as an example. When running forwards, the algorithm takes the else-branch, updating the value of variable c to 4 in the process. If we try to run the algorithm backwards, suppose we have run every statement in reverse inside the program and we have arrived at the conditional block. Clearly, the value of c should be 4 at this point. We have to decide which branch to take, and if we evaluate the condition with c = 4, we will incorrectly take the if-branch.
… c=2; if (c>3) … else … c=c+2; … end-if …
Figure 5.7. Irreversible conditional statement
If we had not modified the conditional variable c inside the conditional block, but had instead used another variable inside the block, say c', we would have been able to perform all operations as stated in the original program, while still being able to correctly predict the branch taken by the forward run of the program simply by evaluating the condition again. This idea is shown in the pseudocode in Figure 5.8.
… c=2; c’=c; if (c>3) // use c’ instead of c … else // use c’ instead of c … c’=c’+2; … end-if
Figure 5.8. Reversible conditional statement
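The copy-the-condition-variable trick of Figure 5.8 can be made concrete in a few lines. In this Python sketch (our own illustration, with c2 playing the role of c'), the backward pass re-evaluates the invariant condition to decide which branch to undo:

```python
def forward(c):
    """Figure 5.8 made concrete: the condition variable c is copied to
    c2 and only the copy is modified, so c survives the block intact."""
    c2 = c
    if c > 3:
        pass              # if-branch body would use c2
    else:
        c2 = c2 + 2       # else-branch body modifies only the copy
    return c, c2

def backward(c, c2):
    """Reverse run: re-evaluating the condition on the invariant c tells
    us which branch the forward run took, so that branch can be undone."""
    if c > 3:
        pass
    else:
        c2 = c2 - 2
    return c, c2          # the original c2 (== c) is recovered
```

Running backward(*forward(c)) returns (c, c) for any c, i.e. the reverse pass always restores the state before the conditional block.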
5.5.1.2. Quantum IF Statements. What could be the equivalent construct to the conditional
statement in quantum computing? Does the quantum computing paradigm allow for
branching of execution? The answer is yes, and we will discuss shortly how to represent
conditional statements in a quantum computer. In fact, such branching is part of the nature
of quantum systems and we will see that it can be implemented at the low level of quantum
gates.
As a first step, note that any conditional statement, however complex it may be, is actually testing whether a boolean value is true or false. In fact, compilers evaluate the condition into a boolean result, and then the branching is performed by checking whether that result is true or false. On the other hand, a boolean result can actually be thought of as a single bit. To conclude, we can write any conditional statement, without loss of generality, as a test of a single bit which holds the result of the condition evaluation, as shown in Figure 5.9. With respect to reversibility issues, within the conditional block we can now modify any variable that is part of the condition, but not the evaluation result b.
b = condition; If (b) … End-If
Figure 5.9. Modification of conditional statements
When considering the equivalent construct of a conditional statement for quantum
computing, let us analyze an if statement without an else branch first. In this case, we
evaluate the condition into some result bit b, and then we check the value of b. If it is 1
(true), we perform the statement(s) within the conditional block, otherwise, they are not
performed, and the program execution moves to the first statement after the conditional
block. This is very much like the CNOT gate in quantum computing. Remember from Section 2.3.2 that if the control bit is 1, the target bit is inverted; otherwise, it is left unchanged. The CNOT gate can thus be thought of as an implementation of the conditional statement in Figure 5.10, where c and b are single bits.
If (c) b = ¬b; End-if
Figure 5.10. CNOT gate as an if statement
The concept of controlled statements in quantum computing can be extended to accommodate multiple target qubits and any unitary operator, rather than just the NOT operation used in the CNOT gate. In this context, a single-control, multiple-target-bit controlled-U operation can be thought of as an implementation of the if statement in Figure 5.11.
If (c) Apply operation U to b; End-if
Figure 5.11. Controlled-U gate as an if statement
In general, provided that the condition result c is invariant with respect to the conditional block, it can be distributed to each single statement inside the conditional block. In other words, the pseudocodes shown in Figures 5.12 and 5.13 are equivalent to each other.
c = condition; If (c) Apply operation U1 to b1; … Apply operation Un to bn; End-If
Figure 5.12. If statement
c = condition; If (c) Apply operation U1 to b1; End-if … If (c) Apply operation Un to bn; End-if
Figure 5.13. Distributed If statement
From Figure 5.13, we can see that each single if statement can be implemented by a controlled-U gate in quantum computing. Therefore, the whole conditional block, which consists of n statements, can be implemented by a quantum circuit consisting of n controlled-U gates, as illustrated in Figure 5.14.
Figure 5.14. Implementation of a conditional block as a quantum circuit
5.5.1.3. Quantum IF-ELSE Statement. Implementation of if-else statements can be seen
as a straightforward extension of the strategy adopted for if statements. Key to
implementing the else part is the construct depicted in Figure 5.15, which we discussed in
more detail in Chapter 2.
Figure 5.15. Controlled-U gate enabled when control value is 0
Note that in this case the operator U is applied to b when the value of the control is 0. This behavior can in fact be obtained from the usual controlled gates, which are enabled at value 1. Figure 5.16 shows how to implement a controlled-U gate that is enabled at value 0 using a controlled-U gate that is enabled at value 1: by applying a NOT gate to the control bit just before the controlled gate, and once again after it, the overall behavior is that of a controlled gate enabled for value 0 of the control.
Figure 5.16. Implementing a 0-enabled controlled-U gate with a 1-enabled controlled-U gate
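The NOT-conjugation construction of Figure 5.16 can be checked with a small matrix computation. This Python sketch (our own helpers, not thesis code; any 2×2 unitary U works) verifies that (X⊗I)·cU·(X⊗I) equals the 0-enabled controlled-U:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def controlled(U, enable):
    """4x4 controlled-U on (control, target); the U block occupies the
    subspace where the control qubit equals `enable`, identity elsewhere."""
    M = [[0.0] * 4 for _ in range(4)]
    for i in range(2):
        for j in range(2):
            if enable:
                M[2 + i][2 + j] = U[i][j]
                M[i][j] = 1.0 if i == j else 0.0
            else:
                M[i][j] = U[i][j]
                M[2 + i][2 + j] = 1.0 if i == j else 0.0
    return M

XI = [[0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0]]  # X on control
U = [[0.6, 0.8], [0.8, -0.6]]                                   # an example unitary

# NOT before and after the 1-enabled gate yields the 0-enabled gate:
lhs = matmul(XI, matmul(controlled(U, 1), XI))
```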
The controlled-U gate in Figures 5.15 and 5.16 corresponds to the if statement in Figure 5.17.
c = condition; If (c) Do nothing Else Apply operation U to b; End
Figure 5.17. Simple else statement
It is an if statement with only an else part. Of course, it can be written as an if statement with no else part by negating the condition, but the point of writing it like this is to illustrate the following observation: we showed in the previous subsection that controlled-U operations acting when the control is 1 actually implement the if part of the conditional statement. On the other hand, controlled-U operations acting when the control is 0 implement the else part of the conditional statement. Now let us consider a simple if-else statement like the one in Figure 5.18.
c = condition; If (c) Apply operation U to b; Else Apply operation V to b; End-if
Figure 5.18. Simple if-else statement
If we assume that c is not modified inside the conditional block, the statement can be decomposed into two individual if statements, one for the if branch and the other for the else branch of the original statement. Moreover, the conditional expressions of these statements are the complement of each other. The idea is demonstrated in the pseudocode of Figure 5.19.
c = condition; If (c) Apply operation U to b; End-if If (¬c) Apply operation V to b; End-if
Figure 5.19. Distributed if-else statement
Figure 5.20. Implementation of a simple if-else statement
Combining the separate approaches for the if and else branches, the overall quantum
circuit for the statement is shown in Figure 5.20.
How do we verify that this circuit indeed implements an if-else statement in quantum
computing? It is simple to check that the circuit is compliant with classical logic, where
bits assume values from the set {0,1}: if c is 0, operator V acts upon b but U is
not triggered, and vice versa. But we are dealing with a quantum circuit and
qubits, which, as we know, may be in a superposition of 0 and 1. If this is the case
with our control qubit c, both U and V will act upon b, to particular, possibly different,
degrees governed by the amplitudes of c. The sequence "first U then V"
imposed by the circuit is not important either. In fact, the circuit of Figure 5.20 produces
the same results as the circuit in Figure 5.21, where the order of U and V is
swapped.
Figure 5.21. Reordered implementation of if-else statement
So the sequence of controlled operations U and V is not important, and the overall
result does not depend on that sequence. This observation is consistent with our
understanding of classical if-else statements. Let us consider the overall system as a
two-qubit register composed of qubits $|c\rangle$ and $|b\rangle$, where

$$|c\rangle = \begin{bmatrix} c_0 \\ c_1 \end{bmatrix} \quad \text{and} \quad |b\rangle = \begin{bmatrix} b_0 \\ b_1 \end{bmatrix}.$$
The initial state of the register is given as

$$|IN\rangle = |c\rangle \otimes |b\rangle = \begin{bmatrix} c_0 b_0 \\ c_0 b_1 \\ c_1 b_0 \\ c_1 b_1 \end{bmatrix}.$$
If we represent the operators U and V as

$$U = \begin{pmatrix} u_{00} & u_{01} \\ u_{10} & u_{11} \end{pmatrix} \quad \text{and} \quad V = \begin{pmatrix} v_{00} & v_{01} \\ v_{10} & v_{11} \end{pmatrix},$$
the matrices for the controlled operators become

$$cU = \begin{pmatrix} I & 0 \\ 0 & U \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & u_{00} & u_{01} \\ 0 & 0 & u_{10} & u_{11} \end{pmatrix} \quad \text{and} \quad cV = \begin{pmatrix} V & 0 \\ 0 & I \end{pmatrix} = \begin{pmatrix} v_{00} & v_{01} & 0 & 0 \\ v_{10} & v_{11} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
The combined action of the two operators is the product of the corresponding matrices

$$T = cV \times cU = \begin{pmatrix} v_{00} & v_{01} & 0 & 0 \\ v_{10} & v_{11} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \times \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & u_{00} & u_{01} \\ 0 & 0 & u_{10} & u_{11} \end{pmatrix} = \begin{pmatrix} v_{00} & v_{01} & 0 & 0 \\ v_{10} & v_{11} & 0 & 0 \\ 0 & 0 & u_{00} & u_{01} \\ 0 & 0 & u_{10} & u_{11} \end{pmatrix} = \begin{pmatrix} V & 0 \\ 0 & U \end{pmatrix}.$$
The final state of the system is then

$$|OUT\rangle = T\,|IN\rangle = \begin{pmatrix} v_{00} & v_{01} & 0 & 0 \\ v_{10} & v_{11} & 0 & 0 \\ 0 & 0 & u_{00} & u_{01} \\ 0 & 0 & u_{10} & u_{11} \end{pmatrix} \begin{bmatrix} c_0 b_0 \\ c_0 b_1 \\ c_1 b_0 \\ c_1 b_1 \end{bmatrix} = \begin{bmatrix} v_{00} c_0 b_0 + v_{01} c_0 b_1 \\ v_{10} c_0 b_0 + v_{11} c_0 b_1 \\ u_{00} c_1 b_0 + u_{01} c_1 b_1 \\ u_{10} c_1 b_0 + u_{11} c_1 b_1 \end{bmatrix} = \begin{bmatrix} V \begin{bmatrix} c_0 b_0 \\ c_0 b_1 \end{bmatrix} \\ U \begin{bmatrix} c_1 b_0 \\ c_1 b_1 \end{bmatrix} \end{bmatrix}.$$
If we examine the final state of the system, $|OUT\rangle$, we can see that the operator U
acts on the basis states $|10\rangle$ and $|11\rangle$, that is, when the control c is $|1\rangle$, and the operator V acts on
the basis states $|00\rangle$ and $|01\rangle$, that is, when the control c is $|0\rangle$. The final state is the
superposition of these two actions. Furthermore, since these two scenarios (control c being
0 and 1) do not logically overlap, the end result does not depend on the sequence of the
operations. This is manifested in the fact that
$$T = cV \times cU = \begin{pmatrix} V & 0 \\ 0 & I \end{pmatrix} \times \begin{pmatrix} I & 0 \\ 0 & U \end{pmatrix} = \begin{pmatrix} V & 0 \\ 0 & U \end{pmatrix} = \begin{pmatrix} I & 0 \\ 0 & U \end{pmatrix} \times \begin{pmatrix} V & 0 \\ 0 & I \end{pmatrix} = cU \times cV.$$
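The block-matrix identities above are easy to check numerically. A minimal numpy sketch, with randomly generated 2×2 unitaries standing in for U and V (the random-unitary helper is an illustrative assumption, not part of the thesis infrastructure):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary():
    # The QR decomposition of a random complex matrix yields a unitary Q.
    m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, _ = np.linalg.qr(m)
    return q

U, V = random_unitary(), random_unitary()
Z = np.zeros((2, 2))
I2 = np.eye(2)

cU = np.block([[I2, Z], [Z, U]])   # U applied when the control is 1
cV = np.block([[V, Z], [Z, I2]])   # V applied when the control is 0

# The two controlled gates commute, and their product is block-diagonal.
T = cV @ cU
assert np.allclose(T, cU @ cV)
assert np.allclose(T, np.block([[V, Z], [Z, U]]))
```

Because cU and cV act on disjoint subspaces of the control qubit, their product is the same in either order.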
5.5.2. Loops
Loops are very useful programming constructs that save programmer time and reduce
program size; it is hard to think of any useful function that does not use a loop at
some point of its execution. Consider the quantum circuit in Figure 5.22. We
encountered it in Section 3.2.2 as part of the phase estimation algorithm, which in turn is a
module of Shor's algorithm. It implements the pseudocode in Figure 5.23.
Figure 5.22. Quantum for loop
For i = 1 to c
    Apply operator U to v
End-for
Figure 5.23. Simple for loop
To see this, suppose the value of c can be expressed in binary notation as $c = (c_{n-1} \ldots c_1 c_0)_2$.
We can then express c as

$$c = c_0 + 2c_1 + 4c_2 + \cdots + \frac{N}{2}\,c_{n-1} = \sum_{i=0}^{n-1} 2^i c_i,$$

where $N = 2^n$.
Let us turn our attention to the circuit now. The top register holds the value c, the
most significant qubit being the topmost one. In this case, only those controlled-$U^{2^i}$ gates
whose corresponding control qubit $c_i$ is in state $|1\rangle$ will be applied. Therefore, the overall
action can be summarized as

$$T = U^{c_0} \times U^{2c_1} \times U^{4c_2} \times \cdots \times U^{\frac{N}{2}c_{n-1}} = U^c,$$
and the overall state of the system is transformed as
$$|c\rangle|v\rangle \rightarrow |c\rangle\, U^c |v\rangle.$$
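For a control register holding a classical basis value c, the loop can be simulated directly. The sketch below is a classical-control simulation, not the full quantum circuit: it applies the controlled-$U^{2^i}$ gates for the set bits of c and checks that their product equals $U^c$ (the rotation matrix is an arbitrary stand-in for U):

```python
import numpy as np

U = np.array([[np.cos(0.3), -np.sin(0.3)],
              [np.sin(0.3),  np.cos(0.3)]])  # any unitary works here

def apply_loop(c, n, v):
    # The controlled-U^(2^i) gate fires only when bit c_i of the
    # n-bit counter register is 1.
    for i in range(n):
        if (c >> i) & 1:
            v = np.linalg.matrix_power(U, 2 ** i) @ v
    return v

v = np.array([1.0, 0.0])
c, n = 5, 3  # c = (101)_2, so U^1 and U^4 fire, giving U^5 overall
assert np.allclose(apply_loop(c, n, v), np.linalg.matrix_power(U, c) @ v)
```

When the control register is in a superposition of counter values, each basis component $|c\rangle$ picks up its own power $U^c$ on the target, exactly as in the transformation above.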
5.5.3. G Gate
By including the universal G gate, first defined in Section 2.6, in the list
of primitives that can be used in our infrastructure, we can in principle simulate any
quantum gate, provided we allow for appropriately many ancilla qubits as working space. Note that this
gate can be applied only to inputs which are defined in a format compatible with quantum
registers as defined in Section 5.2. As such, it cannot be included as a primitive gate for
the classical functions that are entered as inputs to the reversifier; indeed, the very definition of
this gate is incompatible with classical inputs. However, it can be used as an additional
gate to those already defined in Section 5.3, on the side of reversified functions.
5.6. The Syntax
At this point, we have covered all the basic features that can be used in the source irreversible
function. The detailed syntax is given in Figure 5.24.
F → <signature><body>
<signature> → function <funcName><outParams>=<inParams>
<outParams> → <params>
<inParams> → <params>
<params> → <var> | (<vars>)
<vars> → <var> | <var><vars>
<body> → <statements>
<statements> → <stat> | <stat><statements>
<stat> → <asgnStat> | <aritStat> | <logStat> | <compStat> | <ifStat> | <forStat>
<asgnStat> → <var>=<arg>
<aritStat> → <var>=<arg><binAritOp><arg>
<binAritOp> → + | - | ×
<logStat> → <var>=<boolArg><binLogOp><boolArg>
<logStat> → <var>=<unLogOp><boolArg>
<binLogOp> → &
<binLogOp> → |
<unLogOp> → ~
<compStat> → <var>=<arg><compOp><arg>
<compOp> → == | ~= | < | <= | > | >=
<ifStat> → if <condition> <if-part> end
<ifStat> → if <condition> <if-part> else <else-part> end
<condition> → <logArg>
<if-part> → <statements>
<else-part> → <statements>
<forStat> → for <var> = 1:<var> <statements> end
<arg> → <var> | <lit>
Figure 5.24. Supported syntax of source function
5.7. Examples
The reversifier program is written in Java. As already mentioned, it takes as input the
source code (actually the path of the source file) of a classical function written in Matlab,
assuming that it follows the syntax in Section 5.6. The reversifier then produces an
equivalent reversible function. Because of the way we have chosen to represent registers
and the corresponding reversible operators, the resulting algorithm can be simulated and tested
with quantum inputs rather than merely classical ones.
The reversifier produces two different outputs: the reversified function code and a graphical
representation of the corresponding quantum circuit. The graphical output may be high
level or low level; the choice is left to the user. Figure 5.26 shows a low-level circuit for
the single-statement program in Figure 5.25. The reversified program code is
shown in Figure 5.27.
function y = add(a,b)
y = a+b;
Figure 5.25. Irreversible addition function
Figure 5.26. Low level visual representation of addition
function [a,b,y] = rev_add(a,b,y)
y = QBase(0,2);
[a, b, y] = revadd(a, b, y);
Figure 5.27. Reversified addition function
The notation QBase(a,n) represents the basis state a of an n-qubit quantum
register. Now consider the example function in Figure 5.29. The code of the produced
reversible function is shown in Figure 5.30, and the corresponding high-level graphical
output is given in Figure 5.28.
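As an illustration, QBase could be realized in a simulator as a one-hot vector of amplitudes. The following Python sketch is a hypothetical reconstruction of this notation (the actual infrastructure uses Java and Matlab), included only to make the notation concrete:

```python
import numpy as np

def qbase(a, n):
    """Hypothetical reconstruction of QBase(a, n): the basis state |a>
    of an n-qubit register, as a one-hot amplitude vector of length 2**n."""
    v = np.zeros(2 ** n)
    v[a] = 1.0
    return v

# QBase(2, 2) is the basis state |10> of a two-qubit register:
# the amplitude vector with a 1 in position 2.
assert qbase(2, 2).tolist() == [0.0, 0.0, 1.0, 0.0]
```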
Figure 5.28. High level visual representation of the test function in Figure 5.29
function [y] = if2test(a,b)
if (b>=1)
    if (b==2)
        d=c+a;
    else
        d=c-a;
    end
else
    d=c-2;
end
y=2*d;
Figure 5.29. Irreversible test function
function [y] = rev_if2test(a,b)
y = QBase(0,2);
d = QBase(0,2);
tmp_1 = QBase(1, 2);
[b, tmp_1, tmp_2] = revgte(b, tmp_1, tmp_2);
if (tmp_2)
    tmp_3 = QBase(2, 2);
    [b, tmp_3, tmp_4] = reveq(b, tmp_3, tmp_4);
    if (tmp_4)
        [c, a, d] = revadd(c, a, d);
    else
        [c, a, d] = revsub(c, a, d);
    end
else
    tmp_5 = QBase(2, 2);
    [c, tmp_5, d] = revsub(c, tmp_5, d);
end
tmp_6 = QBase(2, 2);
[tmp_6, d, y] = revmul(tmp_6, d, y);
Figure 5.30. Reversified program code for test function
From a human point of view, the low-level description is hardly understandable even
for the simplest of functions; it is already very crowded for two-qubit addition, and for
more complex functions it becomes visually meaningless to represent the circuit at this
level. From the machine point of view, however, a quantum compiler for an eventual
real quantum computer would expect exactly this kind of specification. The high-level
representation, on the other hand, is better suited for circuit design and presentation purposes.
Moreover, if a quantum computer ever becomes a reality, specialized high-level quantum
"chips" implementing basic mathematical operations could also become available. In this
context, the high-level representation of the circuit would provide an approximate, yet
uncluttered, view of the implementation of the overall circuit.
5.8. Related Work
Bernhard Ömer has proposed a formalism for quantum programming called QCL (for
quantum computing language) and implemented an interpreter for source code written
according to the rules of this formalism [25]. QCL has a procedural C-like syntax and is
high level in the sense that the numerical simulation of unitary transformations is
hidden from the programmer.
The language itself is classical, but it provides support for quantum computing by
allowing the use of (i) quantum data types such as quantum registers, (ii) classical
reversible functions and (iii) inherently quantum operators. The approach it uses in
treating classical reversible functions is similar to the one we adopted in Section
5.3, allowing the synthesis of any classical irreversible function with the
overhead of extra qubits to achieve the required redundancy. While the language
itself is classical, additional constraints force it to be reversible: loop and condition
variables are not allowed to be modified inside the corresponding control block.
QCL assumes the availability of a set of external gates that are to be provided by
the physical implementation backend. On the other hand, it does not seem to provide a
detailed decomposition of the functions it implements in terms of these standard gates.
Whereas our reversifier cannot be viewed as a general-purpose quantum computing
language, the addition of the G gate to the list of operations that can be used in the
reversified functions allows the simulation of any quantum gate, provided that we have
enough memory to represent the necessary ancillas.
Bettelli et al. introduced an alternative high-level quantum computing language
template [26], based on a QRAM model of a quantum computer. It provides
automatic decomposition of the overall program into primitive gates, which would be sent
to the quantum device of the QRAM machine. On the other hand, this infrastructure does not
implement an automatic translation of classical functions into the format of quantum
computation.
Peter Selinger has also proposed a syntax and semantics for a quantum programming
language named QPL [27]. QPL is a functional language in the sense that every operator is
realized as a transformation of specific inputs to outputs, rather than as an update of
global variables. His work covers high-level features such as loops, recursive procedures and
structured data types, the absence of which is the main reason why quantum
algorithms are still expressed today in terms of hardware-level circuits. Programs in the language are
represented as control flow diagrams enriched with elements of data flow.
Just as there are several classical programming languages when theoretically only
one would suffice, it is only natural that there will be many quantum programming
languages with a wide variety of differences among them. Our platform is different in
that it is designed to provide an automatic translation of classical
functions, written without any concern for reversibility issues, into the domain of
quantum, and therefore reversible, computing. The resulting reversified code can then be
enriched with the gate G, allowing the simulation of any quantum operator, at least on a
theoretical basis. Furthermore, we provide a decomposition of the resulting quantum
circuit at two levels of detail.
6. CONCLUSIONS
In this work, we have discussed two alternative algorithms, called G and KSV, for
generating equiprobable superpositions of arbitrary subsets of basis states, and compared
them with respect to time complexity and precision. We then proposed a new
generalization of the Deutsch-Jozsa algorithm whose task is to determine whether the oracle
function is constant or balanced on a given subset, rather than on the whole domain of the
function. We showed that incorporating G as the equiprobable superposition generation
module yields a one-sided error algorithm.
We have also designed a quantum programming infrastructure that allows the
reversification of classical functions, an essential stage in their eventual implementation on
quantum computers. A visual component has been added to the infrastructure which
outputs the quantum circuit corresponding to the original classical function. Two visual
representations are made possible: a high-level one operating at the function level, and a
low-level one which, though very clumsy for our intuition, provides a decomposition that
might be useful for a quantum compiler.
The Deutsch-Jozsa problem is one of the hidden subgroup problems [14], which form
one of the two classes of problems for which quantum algorithms have been shown to lead to
exponential speedup over classical ones. We are currently trying to apply the idea of
working with perfectly equal distributions over arbitrary sets to other hidden subgroup
problems, and to come up with similar generalizations for those algorithms as well. We are
also trying to provide more efficient reversifications of individual functions such as
addition and multiplication. We will also try to add an optimizer component to the
decomposition module, which we hope will reduce the complexity of the resulting low-level
circuit.
REFERENCES
1. Turing, A. M., "On Computable Numbers, with an Application to the Entscheidungsproblem",
Proceedings of the London Mathematical Society, Vol. 42, pp. 230-265, 1936.
2. Keyes, R. W., "Miniaturization of Electronics and its Limits", IBM Journal of Research
and Development, Vol. 32, pp. 54-62, 1988.
3. Feynman, R. P., “Simulating Physics with Computers”, International Journal of
Theoretical Physics, Vol. 21, No. 6/7, pp. 467-488, 1982.
4. Benioff, P., “The Computer as a Physical System. A Microscopic Quantum Mechanical
Hamiltonian Model of Computers as Represented by Turing Machines”, Journal of
Statistical Physics, Vol. 22/5, pp. 563-591, 1980.
5. Deutsch, D., “Quantum Theory, the Church-Turing Principle and the Universal
Quantum Computer”, Proceedings of Royal Society of London, Vol. A 400, pp. 97-117,
1985.
6. Bernstein, E. and U. Vazirani, "Quantum Complexity Theory", SIAM Journal on
Computing, Vol. 26, No. 5, pp. 1411-1473, 1997.
7. Deutsch, D. and R. Jozsa, "Rapid Solution of Problems by Quantum Computation",
Proceedings of the Royal Society of London, Vol. A 439, pp. 553-558, 1992.
8. Shor, P. W., "Algorithms for Quantum Computation: Discrete Logarithms and Factoring",
Proceedings of the 35th Annual Symposium on Foundations of Computer Science, pp. 124-
134, 1994.
9. Grover, L., "A Fast Quantum Mechanical Algorithm for Database Search",
Proceedings of the 28th Annual ACM Symposium on Theory of Computing, pp. 212-
219, 1996.
10. Childs, A. M., R. Cleve, E. Deotto, E. Farhi, S. Gutmann and D. A. Spielman,
Exponential Algorithmic Speedup by Quantum Walk, http://arxiv.org/pdf/quant-
ph/0209131, 2002.
11. Vandersypen, L. M. K., M. Steffen, V. Breyta, C. S. Yannoni, M. Sherwood and I. L.
Chuang, “Experimental Realization of Shor's Quantum Factoring Algorithm Using
Nuclear Magnetic Resonance”, Nature Vol. 414, No. 6866, pp. 883-887, 2001.
12. Shor, P. W., “Why Haven’t More Quantum Algorithms Been Found?”, Journal of the
ACM, Vol. 50, No. 1, pp. 87–90, 2003.
13. Kitaev, A. Yu., A. H. Shen and M. N. Vyalyi, Classical and Quantum Computation,
American Mathematical Society, 2002.
14. Nielsen, M. A. and I. L. Chuang, Quantum Computation and Quantum Information,
Cambridge University Press, 2000.
15. Hirvensalo, M., Quantum Computing, Springer Verlag, Berlin, 2001.
16. Grover, L. and T. Rudolph, A Two-Rebit Gate Universal for Quantum Computing,
http://arxiv.org/pdf/quant-ph/0210187, 2002.
17. Landauer, R., “Irreversibility and Heat Generation in the Computing Process”, The
IBM Journal of Research and Development, Vol. 5, pp. 183-191, 1961.
18. Wootters, W. K. and W. H. Zurek, "A Single Quantum Cannot Be Cloned", Nature, Vol.
299, pp. 802-803, 1982.
19. Agrawal, M., N. Kayal and N. Saxena, PRIMES Is in P, http://www.cse.iitk.ac.in/
news/primality.html, 2002.
20. Bernstein, D. J., “Detecting Perfect Powers in Essentially Linear Time”, Mathematics
of Computation, Vol. 67, No. 223, pp. 1253-1283, 1998.
21. Cleve, R., A. Ekert, C. Macchiavello and M. Mosca, "Quantum Algorithms Revisited",
Proceedings of the Royal Society of London, Vol. A 454, pp. 339-354, 1998.
22. Chi, D. P., J. Kim and S. Lee, “Initialization-free Generalized Deutsch-Jozsa
Algorithm”, Journal of Physics A: Mathematical and General, Vol. 34, pp. 5251-5258,
2001.
23. Bergou, J.A., U. Herzog, and M. Hillery, “Quantum State Filtering and Discrimination
between Sets of Boolean Functions”, Physical Review Letters, Vol. 90, 2003.
24. Holmes, R. R. and F. Texier, “A Generalization of the Deutsch-Jozsa Quantum
Algorithm”, Far East Journal of Mathematical Sciences, Vol. 9, pp. 319-326, 2003.
25. Ömer, B., A Procedural Formalism for Quantum Computing, M.S. Thesis, Technical
University of Vienna, http://tph.tuwien.ac.at/~oemer, 1998.
26. Bettelli, S., T. Calarco and L. Serafini, Toward an Architecture for Quantum
Programming, http://arxiv.org/abs/cs.PL/0103009, 2001.
27. Selinger, P., Towards a Quantum Programming Language, http://quasar.mathstat.uottawa.ca/~selinger/qpl2004, 2004.
28. Boyer, M., G. Brassard, P. Høyer and A. Tapp, “Tight Bounds on Quantum
Searching”, Fortschritte Der Physik, Vol. 46, pp. 493-505, 1998.