1. Quantum supremacy using a programmable superconducting processor
Frank Arute, Kunal Arya, Ryan Babbush, Dave Bacon, Joseph C. Bardin, Rami Barends,
Rupak Biswas, Sergio Boixo, Fernando G. S. L. Brandao, David A. Buell, Brian
Burkett, Yu Chen, Zijun Chen, Ben Chiaro, Roberto Collins, William Courtney, Andrew
Dunsworth, Edward Farhi, Brooks Foxen, Austin Fowler, Craig Gidney, Marissa
Giustina, Rob Graff, Keith Guerin, … John M. Martinis
Nature, volume 574, pages 505–510 (2019)
2. Introduction
• Richard Feynman proposed the concept of a quantum computer in the 1980s as a tool to solve complex
problems in physics and chemistry.
• The realization of a quantum computer poses challenges related to computational space, error rates, and
formulating problems suitable for quantum speedup.
• The experiment described in the paper achieves quantum supremacy, demonstrating that quantum
speedup is achievable in a real-world system and is not forbidden by hidden physical laws.
• Quantum supremacy also marks the era of noisy intermediate-scale quantum (NISQ) technologies.
• The benchmark task demonstrated in the experiment has applications in generating certifiable random
numbers and other areas such as optimization, machine learning, materials science, and chemistry.
• However, achieving the full potential of quantum computing, including using algorithms like Shor's algorithm
for factoring, requires further technical advancements in fault-tolerant logical qubits.
• The experiment involved technical advancements in fast, high-fidelity gates, calibration, and benchmarking
using cross-entropy benchmarking, and accurate prediction of system performance based on component-level
fidelities.
3. A suitable computational task
• The goal is to demonstrate quantum supremacy by comparing a quantum processor against state-of-the-art classical
computers in the task of sampling the output of a pseudo-random quantum circuit.
• Random circuits are chosen for benchmarking because their lack of structure leaves classical
algorithms no shortcuts to exploit, and their computational hardness is supported by complexity-theoretic arguments.
• The quantum circuit is designed to entangle qubits through single-qubit and two-qubit logical operations, and
sampling the output produces a set of bitstrings.
• Classically computing the probability distribution of the bitstrings becomes exponentially more difficult as the
number of qubits and gate cycles increases.
• Cross-entropy benchmarking is used to verify the proper functioning of the quantum processor by comparing
experimental observations with ideal probabilities computed through classical simulation.
• The cross-entropy benchmarking fidelity (FXEB) reflects how often high-probability bitstrings are sampled.
• Achieving a high enough FXEB for a wide and deep circuit ensures that the classical computing cost becomes
prohibitively large, indicating quantum supremacy.
• Because logic gates are imperfect and FXEB decays rapidly as errors accumulate, achieving low per-gate
error rates is crucial for claiming quantum supremacy.
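The linear cross-entropy benchmarking fidelity described above can be illustrated with a short snippet. This is a minimal sketch of the estimator FXEB = 2^n⟨P(x_i)⟩ − 1; the probability array and samples below are synthetic stand-ins, not data from the paper.

```python
import numpy as np

def linear_xeb_fidelity(n_qubits, sampled_bitstrings, ideal_probs):
    """Estimate the linear cross-entropy benchmarking fidelity.

    F_XEB = 2^n * <P(x_i)> - 1, where <P(x_i)> is the mean ideal
    probability of the bitstrings actually sampled from the device.
    High-probability bitstrings push F_XEB toward 1; uniformly random
    output (a fully depolarized state) drives it to 0.
    """
    dim = 2 ** n_qubits
    mean_prob = np.mean([ideal_probs[b] for b in sampled_bitstrings])
    return dim * mean_prob - 1.0

# Sanity check on 2 qubits: sampling from a known distribution recovers
# the value 4 * sum(p_i^2) - 1 expected for that distribution.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(4))            # a random 2-qubit distribution
samples = rng.choice(4, size=100_000, p=probs)
print(linear_xeb_fidelity(2, samples, probs))
```

For a uniform distribution the estimator returns 0, which is the baseline an uncontrolled (fully noisy) processor would achieve.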
4. Building a high-fidelity processor
• The quantum processor called 'Sycamore' is
designed with a two-dimensional array of 54
transmon qubits, each tunably coupled to four
nearest neighbors in a rectangular lattice.
• The connectivity of the qubits is chosen to be
compatible with future error correction using
the surface code.
• A key advancement of the device is achieving
high-fidelity single- and two-qubit operations,
not only in isolation but also during a realistic
computation with simultaneous gate
operations on multiple qubits.
• The processor utilizes transmon qubits, which
are nonlinear superconducting resonators
operating at 5-7 GHz.
5. Building a high-fidelity processor
• Each transmon qubit has a microwave
drive to excite it and a magnetic flux
control to tune its frequency. Qubit state is
encoded in the two lowest quantum
eigenstates of the resonant circuit.
• Linear resonators are connected to each
qubit for readout of their states.
• The qubits are also connected to
neighboring qubits through an adjustable
coupler design that allows quick tuning of
qubit-qubit coupling.
• The device consists of 53 functioning
qubits and 86 couplers, as one qubit did
not operate properly.
7. Building a high-fidelity processor
• The processor is fabricated using aluminum for
metallization and Josephson junctions, and indium
for bump-bonds between two silicon wafers.
• The chip is wire-bonded to a superconducting circuit
board and cooled to below 20 mK in a dilution
refrigerator to reduce thermal energy.
• The processor is connected to room-temperature
electronics through filters and attenuators, which
synthesize the control signals.
• A frequency-multiplexing technique is used to read
the state of all qubits simultaneously.
• Cryogenic amplifiers and digital processing are
employed to boost and process the readout signal.
• 277 digital-to-analog converters are used for
complete control of the quantum processor.
• Single-qubit gates are executed using microwave
pulses resonant with the qubit frequency while the
qubit-qubit coupling is turned off.
• Gate performance is optimized by minimizing
various error mechanisms such as two-level-system
defects, stray microwave modes, coupling to control
lines, readout resonator, and pulse distortions.
• Single-qubit gate performance is benchmarked using
the cross-entropy benchmarking protocol, measuring
the probability of error occurrence during a single-
qubit gate.
• The benchmarking is done with a variable number of
randomly selected gates applied to each qubit, and
the average fidelity is calculated as errors accumulate.
• The experiment is repeated with all qubits executing
single-qubit gates simultaneously to assess
microwave crosstalk.
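The error extraction described above, applying a variable number of random gates and watching fidelity fall as errors accumulate, can be sketched with synthetic data. The decay model F(m) ≈ (1 − e₁)^m and the 0.16% error rate below are illustrative assumptions, not the device's measured data.

```python
import numpy as np

# Hypothetical per-gate error rate, of the same order as the ~0.1-0.2%
# single-qubit errors reported in the paper; illustrative only.
true_error = 0.0016

# Simulated benchmarking curve: sequence fidelity after m random gates
# decays as (1 - e1)^m, with a little multiplicative measurement noise.
rng = np.random.default_rng(1)
depths = np.array([10, 50, 100, 200, 400, 800])
fidelities = (1 - true_error) ** depths * (1 + rng.normal(0, 0.002, depths.size))

# Fit the decay on a log scale: log F = m * log(1 - e1), so the slope
# of a linear fit recovers the per-gate error rate.
slope = np.polyfit(depths, np.log(fidelities), 1)[0]
estimated_error = 1 - np.exp(slope)
print(f"estimated per-gate error: {estimated_error:.4%}")
```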
8. Building a high-fidelity processor
• Two-qubit iSWAP-like entangling gates are
performed by bringing neighboring qubits on-
resonance and activating a 20-MHz coupling for 12
ns.
• The gate allows qubits to swap excitations and
experience a controlled-phase (CZ) interaction
originating from higher levels of the transmon.
• Trajectories of the two-qubit gate frequencies are
optimized to mitigate error mechanisms.
• Two-qubit gate characterization and benchmarking
are done using two-qubit circuits with cycles
consisting of single-qubit gates and fixed two-qubit
gates.
• Parameters of the two-qubit unitary are learned by
using FXEB as a cost function.
• The per-cycle and two-qubit gate errors are extracted
and analyzed, yielding average e2 values of 0.36%
when gates are benchmarked in isolation and 0.62%
when all gates operate simultaneously.
• Quantum circuits are generated using individually
calibrated two-qubit unitaries for each pair during
simultaneous operation.
• Qubit readout is benchmarked using standard
dispersive measurement, and simultaneous readout
incurs only a modest increase in per-qubit
measurement errors.
• The fidelity of a quantum circuit is modeled as the
product of error-free operation probabilities of all
gates and measurements.
• Random quantum circuits with 53 qubits, single-qubit
gates, two-qubit gates, and measurements are
predicted to have a total fidelity of 0.2%.
• The hypothesis that entangling larger systems does
not introduce additional error sources beyond
measured single- and two-qubit errors is discussed.
9. Fidelity estimation in the supremacy regime
• Each cycle consists of applying random single-qubit gates
followed by two-qubit gates on pairs of qubits.
• The 'supremacy circuits' are designed to minimize circuit
depth and create a highly entangled state for computational
complexity and classical hardness.
• Three variations are used to estimate the fidelity in the
supremacy regime: patch circuits, elided circuits, and
verification circuits.
• Patch circuits remove a slice of two-qubit gates and
calculate fidelity as the product of patch fidelities.
• Elided circuits remove a fraction of initial two-qubit gates
but allow for entanglement between patches.
• Verification circuits have the same gate counts as
supremacy circuits but a different pattern for the sequence
of two-qubit gates.
• Patch and elided versions of verification circuits produce
similar fidelity to full verification circuits up to 53 qubits.
• Elided circuits can accurately estimate the fidelity of more
complex circuits.
• The largest circuits with 53 qubits and simplified gate
arrangement can be directly verified, and random circuit
sampling at 0.8% fidelity is significantly faster on the
quantum processor compared to a single core.
• Benchmarking of computationally difficult circuits shows
measured FXEB for patch and elided versions of full
supremacy circuits.
• The largest circuit, with 53 qubits and 20 cycles, obtained an
FXEB of (2.24 ± 0.21) × 10⁻³ for elided circuits, placing the
data in the quantum supremacy regime.
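A fidelity of order 10⁻³ with an uncertainty of ±0.21 × 10⁻³ implies a substantial sampling budget. A rough estimate, assuming each sampled bitstring contributes roughly unit variance to the linear XEB estimator (the small-fidelity limit), shows tens of millions of samples are needed:

```python
import math

# In the small-F limit the per-sample variance of the linear XEB
# estimator is ~1, so the standard error after N samples is ~1/sqrt(N).
target_sigma = 0.21e-3                    # quoted uncertainty on F_XEB
n_samples = math.ceil(1 / target_sigma ** 2)
print(f"samples needed: {n_samples:.3e}")   # ~2.3e7
```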
12. The classical computational cost
• Quantum circuits used in the experiment are simulated on
classical computers for verification and benchmarking
purposes.
• Up to 43 qubits, a Schrödinger algorithm is used to simulate
the evolution of the full quantum state.
• Larger qubit numbers require a hybrid Schrödinger-
Feynman algorithm, which breaks the circuit into two
patches and efficiently simulates each patch before
connecting them.
• The Schrödinger-Feynman algorithm becomes
exponentially more computationally expensive with
increasing circuit depth.
• The classical computational cost of the supremacy circuits
is estimated by running portions of the simulation on the
Summit supercomputer and Google clusters and
extrapolating to the full cost.
• The computation cost is scaled with FXEB, where a lower
fidelity decreases the cost.
• The estimation of performing the task for m = 20 with 0.1%
fidelity on Google Cloud servers is provided, indicating a
significant computational and energy cost.
• The quantum processor is significantly faster in sampling
the circuits compared to classical simulations.
• Bitstring samples from all circuits have been archived
online for further development and testing of verification
algorithms.
• The cost of simulating quantum circuits is assumed to be
exponential in circuit size based on insights from
complexity theory.
• Simulation methods have improved over the years, but
hardware improvements on larger quantum processors are
expected to consistently outpace simulation cost reductions.
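The memory wall behind the 43-qubit limit for the full-state Schrödinger method can be made concrete. Assuming one complex128 amplitude (16 bytes) per basis state:

```python
# Memory needed for a full Schrödinger-style state-vector simulation:
# 2^n complex amplitudes at 16 bytes each (complex128). This is why
# full-state simulation tops out in the low-40s of qubits, and why a
# 53-qubit state is far beyond any classical machine's memory.
def state_vector_bytes(n_qubits):
    return (2 ** n_qubits) * 16

for n in (30, 43, 53):
    b = state_vector_bytes(n)
    print(f"{n} qubits: 2^{n} amplitudes -> {b / 2**40:,.3f} TiB")
```

At 43 qubits the state already needs 128 TiB; at 53 qubits it needs 128 PiB, which is why the hybrid Schrödinger-Feynman approach trades memory for (exponentially more) computation time.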
13. Verifying the digital error model
• The theory of quantum error correction assumes that
quantum state errors can be digitized and localized.
• In a digital model, errors in the quantum state can be
characterized as localized Pauli errors (bit-flips or
phase-flips) interspersed into the circuit.
• The assumption of a digital model needs to be tested
to determine if errors in a quantum system can be
treated as discrete and probabilistic.
• Experimental observations in the study support the
validity of the digital model for the quantum
processor.
• The system fidelity is well predicted by a simple
model that multiplies together the individually
characterized fidelities of each gate.
• A system described by a digitized error model should
have low correlated errors.
• The experiment achieves low correlated errors by
using circuits that randomize and decorrelate errors,
optimizing control to minimize systematic errors and
leakage, and designing gates that operate faster than
correlated noise sources.
• Demonstrating a predictive uncorrelated error model
up to a Hilbert space of size 2⁵³ indicates that the
system can handle quantum resources, such as
entanglement, without excessive fragility.
14. The future
• Quantum processors based on superconducting qubits
can perform computations in a large Hilbert space,
reaching a dimension of 2⁵³ ≈ 9 × 10¹⁵,
surpassing the capabilities of current classical
supercomputers.
• This experiment represents the first computation that
can be exclusively performed on a quantum processor,
marking the achievement of quantum supremacy.
• The computational power of quantum processors is
expected to grow at a double-exponential rate,
surpassing the exponential growth of classical
simulations.
• Quantum error correction will be crucial to sustain
the rapid growth of quantum processors and enable
the execution of well-known quantum algorithms
such as Shor or Grover algorithms.
• The extended Church-Turing thesis, which states that
any "reasonable" model of computation can be
efficiently simulated by a Turing machine, may be
challenged by the availability of quantum
computation.
• The experiment demonstrates that random quantum
circuit sampling can be performed in polynomial time
using a physically realizable quantum processor,
while no efficient method is known for classical
computing machinery.
• Quantum computing is transitioning from a research
topic to a technology that unlocks new computational
capabilities, and valuable near-term applications are
expected with the development of creative algorithms.