APS March meeting 2020
Variational hybrid quantum-classical algorithms (VHQCAs) are near-term algorithms that leverage classical optimization to minimize a cost function, which is efficiently evaluated on a quantum computer. Recently VHQCAs have been proposed for quantum compiling, where a target unitary U is compiled into a short-depth gate sequence V. In this work, we report on a surprising form of noise resilience for these algorithms. Namely, we find one often learns the correct gate sequence V (i.e., the correct variational parameters) despite various sources of incoherent noise acting during the cost-evaluation circuit. Our main results are rigorous theorems stating that the optimal variational parameters are unaffected by a broad class of noise models, such as measurement noise, gate noise, and Pauli channel noise. Furthermore, our numerical implementations on IBM's noisy simulator demonstrate resilience when compiling the quantum Fourier transform, Toffoli gate, and W-state preparation. Hence, variational quantum compiling, due to its robustness, could be practically useful for noisy intermediate-scale quantum devices. Finally, we speculate that this noise resilience may be a general phenomenon that applies to other VHQCAs such as the variational quantum eigensolver.
Software-defined white-space cognitive systems: implementation of the spectrum sensing unit – CSP Scarl
S. Benco, F. Crespi, A. Ghittino, A. Perotti, "Software-defined
white-space cognitive systems: implementation of the spectrum sensing
unit", Proceedings of the 2nd International Workshop of COST Action
IC0902 October 5–7 2011, Castelldefels and Barcelona, Spain
A very wide spectrum of optimisation problems can be efficiently solved with proximal gradient methods, which hinge on the celebrated forward-backward splitting (FBS) scheme. However, such first-order methods are only effective when low or medium accuracy is required, and they are known to be rather slow or even impractical for badly conditioned problems. Moreover, the straightforward introduction of second-order (Hessian) information is beset with shortcomings, as we typically need to solve a non-separable optimisation problem at every iteration. In this talk we will follow a different route. We will recast non-smooth optimisation problems as the minimisation of a real-valued, continuously differentiable function known as the forward-backward envelope, and then employ a semismooth Newton method to solve the equivalent optimisation problem instead of the original one. We will apply the proposed semismooth Newton method to L1-regularised least squares (LASSO) problems, motivated by an interesting application: recursive compressed sensing. Compressed sensing is a signal-processing methodology for the reconstruction of sparsely sampled signals; it offers a new paradigm for sampling signals based on their innovation, that is, the minimum number of coefficients sufficient to represent a signal accurately in an appropriately selected basis. Compressed sensing leads to a lower sampling rate compared to theories using some fixed basis and has many applications in image processing, medical imaging and MRI, photography, holography, facial recognition, radio astronomy, radar technology and more. The traditional compressed sensing approach is naturally offline, in that it amounts to sparsely sampling and reconstructing a given dataset.
Recently, an online algorithm for performing compressed sensing on streaming data was proposed; the scheme uses recursive sampling of the input stream and recursive decompression to accurately estimate stream entries from the acquired noisy measurements. We will see how we can tailor the forward-backward Newton method to solve recursive compressed sensing problems at one tenth of the time required by other algorithms such as ISTA, FISTA, ADMM and interior-point methods (L1LS).
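For reference, the first-order baseline the talk compares against, ISTA, is just forward-backward splitting specialised to LASSO: a gradient step on the quadratic term followed by soft-thresholding. The following sketch shows the whole method in a few lines; the synthetic problem sizes, regularisation weight, and iteration count are illustrative choices, not taken from the talk.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||x||_1 (the "backward" step of FBS)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    # Minimize 0.5 * ||A x - b||^2 + lam * ||x||_1 by forward-backward splitting
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part's gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # forward (gradient) step
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Tiny synthetic instance: recover a 3-sparse vector from 40 noisy measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 20, 77]] = [1.5, -2.0, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = ista(A, b, lam=0.1)
```

The semismooth Newton method discussed in the talk targets this same objective but replaces the fixed 1/L step with second-order information on the forward-backward envelope, which is where the reported speed-up comes from.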
Distributed solution of stochastic optimal control problems on GPUs – Pantelis Sopasakis
Stochastic optimal control problems arise in many applications and are, in principle, large-scale, involving up to millions of decision variables. Their applicability in control applications is often limited by the availability of algorithms that can solve them efficiently and within the sampling time of the controlled system. In this paper we propose a dual accelerated proximal gradient algorithm which is amenable to parallelization, and demonstrate that its GPU implementation affords high speed-up values (with respect to a CPU implementation) and greatly outperforms well-established commercial optimizers such as Gurobi.
Injecting image priors into Learnable Compressive Subsampling – Martino Ferrari
My master's thesis extends the problem formulation of learnable compressive subsampling [1], which focuses on learning the best sampling operator in the Fourier domain adapted to the spectral properties of a training set of images. I formulated the problem as a reconstruction from a finite number of sparse samples with a prior learned either from an external dataset or on the fly from the images to be reconstructed. In more detail, I developed two very different methods, one using multiband coding in the spectral domain and the second using a neural network.
The new methods can be applied to many different fields of spectroscopy and Fourier optics, for example in medical (computerized tomography, magnetic resonance spectroscopy) and astronomy (the Square Kilometre Array) imaging, where the capability to reconstruct high-quality images, in the pixel domain, from a limited number of samples, in the frequency domain, is a key issue.
The proposed methods have been tested on diverse datasets covering facial images, medical images and multi-band astronomical data, using the mean square error and SSIM as measures of reconstruction quality.
Finally, I explored possible applications in data acquisition systems such as computed tomography and radio astronomy. The obtained results demonstrate that the proposed methods have very promising potential for future research and extensions.
For this reason, the work was both presented at the poster session of the EUSIPCO 2018 conference in Rome and submitted for an EU patent.
[1] L. Baldassarre, Y.-H. Li, J. Scarlett, B. Gözcü, I. Bogunovic, and V.
Cevher, “Learning-based compressive subsampling,” IEEE Journal of Selected
Topics in Signal Processing, vol. 10, no. 4, pp. 809–822, 2016
Random Number Generators:
LCG, Fibonacci, LFSR, GFSR, TGFSR, MT, MT19937, WELL
Tutorials on finite fields and associated RNGs on GitHub at:
https://github.com/rinnocente/Random_Numbers
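As a taste of the generators listed above, a linear congruential generator and a Fibonacci LFSR each fit in a few lines. The constants below (the Numerical Recipes LCG parameters, and taps 16, 14, 13, 11 for a maximal-length 16-bit LFSR) are common textbook choices and are not necessarily the ones used in the linked tutorials.

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    # Linear congruential generator: x_{n+1} = (a * x_n + c) mod m
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

def lfsr16(seed):
    # 16-bit Fibonacci LFSR with taps at bits 16, 14, 13, 11
    # (polynomial x^16 + x^14 + x^13 + x^11 + 1, maximal period 2^16 - 1)
    state = seed & 0xFFFF
    while True:
        bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        yield state
```

With these taps the LFSR cycles through all 65535 nonzero states before repeating, while the statistical quality of the LCG depends entirely on the choice of a, c, and m.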
We present a novel modeling methodology to derive a nonlinear dynamical model which adequately describes the effect of fuel sloshing on the attitude dynamics of a spacecraft. We model the impulsive thrusters using mixed logical dynamics, leading to a hybrid formulation. We design a hybrid model predictive control scheme for the attitude control of a launcher during its long coasting period, aiming at minimising the actuation count of the thrusters.
This talk was presented by Thomas Brougham from the Quantum Theory Group on 30th August 2018. It explains what boson sampling is and why it is important, with some example applications to cryptography.
In this deck from ATPESC 2019, Yunong Shi from the University of Chicago presents: SW/HW co-design for near-term quantum computing.
"The Argonne Training Program on Extreme-Scale Computing (ATPESC) provides intensive, two weeks of training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future."
Watch the video: https://wp.me/p3RLHQ-lpv
Learn more: https://extremecomputingtraining.anl.gov/archive/atpesc-2019/agenda-2019/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
BER Performance of MU-MIMO System using Dirty Paper Coding – IJEEE
In this paper, Dirty Paper Coding (DPC) for a communication system is implemented. MIMO applications involving devices such as cell phones and pocket PCs require closely spaced antennas, which suffer from mutual coupling among antennas and high spatial correlation of signals. DPC is used to compensate for the degradation due to correlation and mutual coupling.
Investigation of repeated blasts at Aitik mine using waveform cross correlation – Ivan Kitov
We present results of signal detection from repeated events at the Aitik and Kiruna mines in Sweden based on waveform cross correlation. Several advanced methods based on tensor singular value decomposition are applied to waveforms measured at the seismic array ARCES, which consists of three-component sensors.
Multi Qubit Transmission in Quantum Channels Using Fibre Optics Synchronously... – researchinventy
A quantum channel can be used to transmit classical information as well as to deliver quantum data from one location to another. Classical information theory is a subset of quantum information theory, which is fundamentally richer because quantum mechanics includes many more elementary classes of static and dynamic resources. Quantum information theory contains many more topics than described here, including quantum data processing, manipulation, and quantum data compression. Here we consider the quantum channel as a bosonic channel, a quantum-mechanical model for free-space or fibre-optic communication. This paper gives an overview of the theoretical scenario of quantum networks, in particular multiple-user access to the quantum communication channel. Multiple qubits are generated in different systems, and proper alignment of the qubits is a must; it can be first-come-first-served or round-robin. The received data are grouped into codewords of n qubits each, and quantum error correction is performed. These codewords, agreed between the transmitter and the receiver before transmission over the quantum channel, are known as valid codewords.
Quantum networks with superconducting circuits and optomechanical transducers – Ondrej Cernotik
Connecting distant chips in a quantum network is one of the biggest challenges for superconducting quantum computers. Superconducting systems operate at microwave frequencies; transmission of microwave signals through room-temperature quantum channels is impossible due to the omnipresent thermal noise. I will show how two well-known experimental techniques—parity measurements on superconducting systems and optomechanical force sensing—can be combined to generate entanglement between two superconducting qubits through a room-temperature environment. An optomechanical transducer acting as a force sensor can be used to determine the state of a superconducting qubit. A joint readout of two qubits and postselection can lead to entanglement between the qubits. From a conceptual perspective, the transducer senses force exerted by a quantum object, entering a new paradigm in force sensing. In a typical scenario, the force sensed by an optomechanical system is classical. I will argue that the coherence between different states of the qubit (which give rise to different values of the force) can be preserved during the measurement, making it an important resource for quantum communication.
Design of 8-Bit Comparator Using 45nm CMOS Technology – IJMER
In this paper, the design of an 8-bit binary comparator using 45nm CMOS technology is discussed. This design requires less area and fewer transistors; power consumption and execution time are also discussed. The circuit has three outputs X, Y and Z: X is active high when A>B, Y is active high when A=B, and Z is active high when both X and Y are active low. A 1-bit comparator is designed with the help of a precharge gate, and this design has been extended to implement an 8-bit comparator by connecting stages in series with pass transistors between them. The design has been implemented in Microwind 3.1, tested successfully, and validated using PSpice for different measurable parameters.
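As a behavioral reference for the output encoding described above (not the transistor-level precharge design), the three comparator outputs can be modelled directly; the 8-bit input range is the only assumption.

```python
def comparator_8bit(a, b):
    # Behavioral model of the three comparator outputs:
    #   X active high when A > B, Y active high when A = B,
    #   Z active high when both X and Y are low (i.e. A < B)
    assert 0 <= a < 256 and 0 <= b < 256, "inputs must be 8-bit"
    x = int(a > b)
    y = int(a == b)
    z = int(not (x or y))
    return x, y, z
```

Note that Z is derived from X and Y rather than computed independently, mirroring the paper's description of Z as active high only when both X and Y are low.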
An identification of the tolerable time-interleaved analog-to-digital convert... – IJECEIAES
High-speed terahertz communication systems have recently employed the orthogonal frequency division multiplexing approach, as it provides high spectral efficiency and avoids inter-symbol interference caused by dispersive channels. Such high-speed systems require extremely high-sampling-rate time-interleaved analog-to-digital converters at the receiver. However, timing mismatch in time-interleaved analog-to-digital converters significantly degrades system performance. In this paper, to avoid such performance degradation induced by timing mismatch, we theoretically determine the maximum tolerable mismatch levels for orthogonal frequency division multiplexing communication systems. To obtain these levels, we first propose an analytical method to derive the bit error rate formula for quadrature and pulse amplitude modulations in Rayleigh fading channels, assuming binary reflected Gray code (BRGC) mapping. Further, from the derived bit error rate (BER) expressions, we reveal a threshold on the timing mismatch level for which the error floors produced by the mismatch will be smaller than a given BER. Simulation results demonstrate that if we keep the mismatch level smaller than 25% of this threshold, the BER performance degradation is smaller than 0.5 dB compared to the case without timing mismatch.
Projected Barzilai-Borwein Methods Applied to Distributed Compressive Spectru... – Polytechnique Montreal
Cognitive radio allows unlicensed (cognitive) users to use licensed frequency bands by exploiting spectrum sensing techniques to detect whether or not the licensed (primary) users are present. In this paper, we present a compressed-sensing approach to spectrum-occupancy detection in wide-band applications. The analog signals collected from each cognitive radio (CR) receiver at a fusion center are transformed to discrete-time signals using an analog-to-information converter (AIC) and then employed to calculate the autocorrelation. For signal reconstruction, we exploit a novel approach to solve the optimization problem consisting of minimizing both a quadratic (l2) error term and an l1-regularization term. Specifically, we propose the basic gradient projection (GP) and projected Barzilai-Borwein (PBB) algorithms to offer better performance in terms of the mean squared error of the power spectrum density estimate and the detection probability of licensed signal occupancy.
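For context, the Barzilai-Borwein step size at the heart of the PBB algorithm is computed from successive iterate and gradient differences instead of a line search. The sketch below shows the unconstrained BB iteration on a small illustrative quadratic; the problem and all constants are assumptions for illustration, not the paper's projected variant or its spectrum-sensing setup.

```python
import numpy as np

def bb_gradient_descent(grad, x0, n_iter=50, alpha0=1e-3):
    # Gradient descent driven by Barzilai-Borwein (BB1) step sizes:
    #   alpha_k = (s^T s) / (s^T y),  s = x_k - x_{k-1},  y = g_k - g_{k-1}
    x_prev = x0
    g_prev = grad(x0)
    x = x_prev - alpha0 * g_prev           # one small initial step to obtain s and y
    for _ in range(n_iter):
        g = grad(x)
        s, y = x - x_prev, g - g_prev
        denom = s @ y
        alpha = (s @ s) / denom if denom > 0 else alpha0
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x

# Illustrative strictly convex quadratic: minimize 0.5 x^T Q x - b^T x
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = bb_gradient_descent(lambda x: Q @ x - b, np.zeros(2))
```

The projected variant additionally maps each iterate back onto the feasible set after the BB step, which is what makes the method applicable to the constrained reconstruction problem above.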
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
ABSTRACT: Once introduced the fundamental ideas of quantum computing, we will discuss the possibilities offered by quantum computers in machine learning.
BIO: Davide Pastorello obtained an M.Sc. in Physics (2011) and a Ph.D. in Mathematics (2014) from Trento University. After serving at the Dept. of Mathematics and DISI in Trento, he is currently an assistant professor at the Dept. of Mathematics, University of Bologna. His main research interests concern the mathematical aspects of quantum information theory, quantum computing, and quantum machine learning.
A novel delay dictionary design for compressive sensing-based time varying ch... – TELKOMNIKA JOURNAL
Compressive sensing (CS) is an attractive new technique adopted for linear time-varying channel estimation. Orthogonal frequency division multiplexing (OFDM) was proposed for use in 4G and 5G, which support high data-rate requirements. Different pilot-aided channel estimation techniques have been proposed to better track channel conditions, but they consume bandwidth and thus reduce the data rate considerably. In order to estimate the channel with a minimum number of pilots, CS was proposed to efficiently estimate the channel variations. In this paper, a novel delay-dictionary-based CS scheme is designed and simulated to estimate the linear time-varying (LTV) channel. The proposed dictionary is shown to be suitable for estimating the channel impulse response (CIR) at low to moderate Doppler frequency shifts with acceptable bit error rate (BER) performance.
Similar to Noise Resilience of Variational Quantum Compiling
THE IMPORTANCE OF MARTIAN ATMOSPHERE SAMPLE RETURN – Sérgio Sacani
The return of a sample of near-surface atmosphere from Mars would facilitate answers to several first-order science questions surrounding the formation and evolution of the planet. One of the important aspects of terrestrial planet formation in general is the role that primary atmospheres played in influencing the chemistry and structure of the planets and their antecedents. Studies of the martian atmosphere can be used to investigate the role of a primary atmosphere in its history. Atmosphere samples would also inform our understanding of the near-surface chemistry of the planet, and ultimately the prospects for life. High-precision isotopic analyses of constituent gases are needed to address these questions, requiring that the analyses are made on returned samples rather than in situ.
Cancer cell metabolism: special reference to the lactate pathway – AADYARAJPANDEY1
Normal Cell Metabolism:
Cellular respiration describes the series of steps that cells use to break down sugar and other chemicals to get the energy they need to function.
Energy is stored in the bonds of glucose, and when glucose is broken down, much of that energy is released.
Cells utilize energy in the form of ATP.
The first step of respiration is called glycolysis. In a series of steps, glycolysis breaks glucose into two smaller molecules of a chemical called pyruvate. A small amount of ATP is formed during this process.
Most healthy cells continue the breakdown in a second process, called the Krebs cycle. The Krebs cycle allows cells to "burn" the pyruvate made in glycolysis to get more ATP.
The last step in the breakdown of glucose is called oxidative phosphorylation (Ox-Phos).
It takes place in specialized cell structures called mitochondria. This process produces a large amount of ATP. Importantly, cells need oxygen to complete oxidative phosphorylation.
If a cell completes only glycolysis, only 2 molecules of ATP are made per glucose. However, if the cell completes the entire respiration process (glycolysis, Krebs cycle, oxidative phosphorylation), about 36 molecules of ATP are created, giving it much more energy to use.
IN CANCER CELL:
Unlike healthy cells that "burn" the entire molecule of sugar to capture a large amount of energy as ATP, cancer cells are wasteful.
Cancer cells only partially break down sugar molecules. They overuse the first step of respiration, glycolysis. They frequently do not complete the second step, oxidative phosphorylation.
This results in only 2 molecules of ATP per glucose molecule instead of the roughly 36 ATP that healthy cells gain. As a result, cancer cells need to use many more sugar molecules to get enough energy to survive.
Introduction to the Warburg effect:
WARBURG EFFECT: Usually, cancer cells are highly glycolytic (glucose addiction) and take up more glucose from outside than normal cells do.
Otto Heinrich Warburg (8 October 1883 – 1 August 1970) was awarded the Nobel Prize in Physiology or Medicine in 1931 for his "discovery of the nature and mode of action of the respiratory enzyme".
The tendency of cancer cells under aerobic (well-oxygenated) conditions to metabolize glucose to lactate (aerobic glycolysis) is known as the Warburg effect. Warburg made the observation that tumor slices consume glucose and secrete lactate at a higher rate than normal tissues.
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a... – Ana Luísa Pinho
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects-of-interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich in features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and their quality of enabling complex behavior compounded by discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization.
To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
Seminar on U.V. Spectroscopy – Samir Panda
Spectroscopy is a branch of science dealing with the study of the interaction of electromagnetic radiation with matter.
Ultraviolet-visible spectroscopy refers to absorption or reflectance spectroscopy in the UV-VIS spectral region.
Ultraviolet-visible spectroscopy is an analytical method that can measure the amount of light absorbed by the analyte.
Richard's adventures in two entangled wonderlands – Richard Gill
Since the loophole-free Bell experiments of 2015 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word; it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of the allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail without altering the empirical predictions of quantum mechanics. I think, however, that it is a smoke screen, and the slogan "lost in math" comes to mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial-killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep... – University of Maribor
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt... – Sérgio Sacani
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes on Io’s surface have been monitored from both spacecraft and ground-based telescopes. Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io’s trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images show that a plume deposit from a powerful eruption at Pillan Patera has covered part of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high-resolution imaging of Io’s surface using adaptive optics at visible wavelengths.
Multi-source connectivity as the driver of solar wind variability in the heli... (Sérgio Sacani)
The ambient solar wind that fills the heliosphere originates from multiple sources in the solar corona and is highly structured. It is often described as high-speed, relatively homogeneous, plasma streams from coronal holes and slow-speed, highly variable, streams whose source regions are under debate. A key goal of ESA/NASA's Solar Orbiter mission is to identify solar wind sources and understand what drives the complexity seen in the heliosphere. By combining magnetic field modelling and spectroscopic techniques with high-resolution observations and measurements, we show that the solar wind variability detected in situ by Solar Orbiter in March 2022 is driven by spatio-temporal changes in the magnetic connectivity to multiple sources in the solar atmosphere. The magnetic field footpoints connected to the spacecraft moved from the boundaries of a coronal hole to one active region (12961) and then across to another region (12957). This is reflected in the in situ measurements, which show the transition from fast to highly Alfvénic then to slow solar wind that is disrupted by the arrival of a coronal mass ejection. Our results describe solar wind variability at 0.5 au but are applicable to near-Earth observatories.
Noise Resilience of Variational Quantum Compiling
1. Noise Resilience of Variational Quantum
Compiling
Kunal Sharma
GRA, Los Alamos National Lab
PhD Candidate, Louisiana State University
ksharm7@lsu.edu
APS March Meeting 2020
Joint work with
Sumeet Khatri, Marco Cerezo, and Patrick J. Coles
New Journal of Physics, arXiv:1908.04416, LA-UR-19-28095
2. Main Results
Noise resilience of Variational Quantum Compiling (VQC)
• Measurement noise (readout error).
• Incoherent gate noise and decoherence processes: Pauli
channels and non-unital Pauli channels.
Implementations of VQC on IBM’s noisy quantum simulator
• Quantum Fourier transform (QFT)
• Toffoli
• W-state preparation
Noise resilience in every case.
5. Background
Noisy intermediate-scale quantum (NISQ) computers [Pre12]
Limitations of NISQ devices:
• Limited number of qubits.
• Limited connectivity between qubits.
• Restricted (hardware-specific) gate alphabets.
• Limited circuit depth due to noise.
What can be done with NISQ devices?
• Error mitigation [TBG17, LJF+17, SCC18, MBJA+19].
• Variational hybrid quantum-classical algorithm [MRBAG16].
• Inherent noise resilience to certain noise models
[MRBAG16, SKCC20].
6. Variational Quantum Compiling (VQC)
Quantum compiler
Conversion of a high-level algorithm into lower-level machine
code [CFM17, HSST18, BDB+18].
7. Variational Quantum Compiling (VQC)
Main goal of VQC [KLP+19, JB18, HSNF18]
Target unitary: U. Trainable unitary: V.
• Full unitary matrix compiling:
Compile U by finding V(k, α) that has approximately the same
action as U on any given input state.
(α: continuous parameters, k: discrete structure parameters.)
• Fixed input state compiling:
U|ψ0⟩ = V|ψ0⟩, e.g., U|0⟩ = V|0⟩.
8. Variational quantum compiling (VQC)
Applications of VQC:
• Algorithmic depth compression.
• Device-specific compilation.
• Variational fast forwarding for quantum simulation [CHI+19].
10. Variational quantum compiling
Full unitary matrix compiling
Target unitary U
Trainable unitary V
Questions:
1. Cost function?
2. Task for a quantum computer?
3. Task for a classical computer?
4. How to perform optimization?
11. Variational quantum compiling
The hybrid quantum-classical loop:
• Quantum computer. Input: the target U and a trial gate sequence Vk(α); output: the cost C(U, Vk(α)), evaluated with either the Hilbert-Schmidt Test (HST) or the Local Hilbert-Schmidt Test (LHST) circuit.
• Classical computer. A continuous-parameter optimizer updates α; if α is not yet optimal, a new cost evaluation is requested. Once α is optimal, a structure optimizer updates the discrete parameters k.
• Final output: the compiled sequence Vkopt(αopt).
[Figure 1: Outline for Full Unitary Matrix Compiling [KLP+19]. (a) The variable structure approach, with the HST or LHST circuit used to evaluate the cost. (b) The fixed structure approach, with an ansatz in which each layer is parameterized by two-qubit gates; the panels contrast a fixed structure, a variable structure with L = 4, and an optimal compilation with L = 5. Original circuit diagrams not reproduced.]
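The classical half of this loop can be sketched on a toy problem: compiling the Hadamard gate into a single-qubit Rz-Ry-Rz ansatz by minimizing C_HST = 1 − |Tr(V†U)|²/d². This is a minimal NumPy/SciPy sketch, not the authors' code; Nelder-Mead and the random restarts are illustrative stand-ins for the optimizers in the figure, and on a real device the cost would come from the HST circuit rather than from matrix algebra:

```python
import numpy as np
from scipy.optimize import minimize

# Pauli matrices and the compilation target (the Hadamard gate).
Z = np.diag([1.0 + 0j, -1.0])
Y = np.array([[0, -1j], [1j, 0]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def rot(G, theta):
    """exp(-i * theta * G / 2) for a Pauli matrix G (uses G @ G = I)."""
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * G

def V(alpha):
    """Single-qubit ansatz V(alpha) = Rz(a3) Ry(a2) Rz(a1)."""
    a1, a2, a3 = alpha
    return rot(Z, a3) @ rot(Y, a2) @ rot(Z, a1)

def cost_hst(alpha, U=H, d=2):
    """C_HST = 1 - |Tr(V^dag U)|^2 / d^2, insensitive to global phase."""
    return 1 - abs(np.trace(V(alpha).conj().T @ U)) ** 2 / d ** 2

# The "classical computer" box: a gradient-free optimizer over alpha,
# with a few random restarts in place of the structure search over k.
rng = np.random.default_rng(0)
best = min(
    (minimize(cost_hst, rng.uniform(-np.pi, np.pi, 3), method="Nelder-Mead",
              options={"xatol": 1e-10, "fatol": 1e-12}) for _ in range(5)),
    key=lambda r: r.fun,
)
print(best.fun)  # ~0: H is compiled exactly (up to a global phase)
```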
12. Cost function for VQC
Figure 2: (a) The Hilbert-Schmidt Test (HST). (b) The Local Hilbert-Schmidt Test (LHST).
C_HST = 1 − F_HST = 1 − |Tr(V†U)|² / d²,
C_LHST = 1 − F_LHST = 1 − (1/n) Σ_{j=1}^{n} F_LHST^{(j)}.
C_HST might exhibit barren plateaus [MBS+18, CSV+20].
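As a quick sanity check on these definitions, C_HST can be evaluated directly from the matrices for small d. A NumPy sketch (the Haar-sampling helper is an illustrative assumption, not part of the deck):

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(d, rng):
    """Haar-random unitary via QR of a complex Gaussian matrix."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # fix column phases

def c_hst(U, V):
    """C_HST = 1 - |Tr(V^dag U)|^2 / d^2."""
    d = U.shape[0]
    return 1 - abs(np.trace(V.conj().T @ U)) ** 2 / d ** 2

U = haar_unitary(4, rng)                 # a two-qubit target, d = 4
print(c_hst(U, U))                       # 0: exact match
print(c_hst(U, np.exp(0.7j) * U))        # 0: a global phase is invisible
print(c_hst(U, haar_unitary(4, rng)))    # strictly positive for a mismatch
```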
13. Cost function for FUMC
Global cost function:
C_HST = 1 − F_HST = 1 − |Tr(V†U)|² / d²,
C_HST = ((d + 1)/d) (1 − F̄(U, V)),
where F̄(U, V) is the average fidelity over Haar-random input states.
Faithfulness of C_LHST:
C_LHST ≤ C_HST ≤ n C_LHST.
• Cost functions are zero iff U = V (up to a global phase).
• Efficient to compute on a quantum computer.
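The second identity, relating C_HST to the average fidelity F̄, can be spot-checked numerically: estimate F̄ by Monte-Carlo averaging of the state fidelity over Haar-random inputs and compare. A NumPy sketch under that sampling assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4

def haar_unitary(d, rng):
    """Haar-random unitary via QR of a complex Gaussian matrix."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

U, V = haar_unitary(d, rng), haar_unitary(d, rng)
W = V.conj().T @ U
c_hst = 1 - abs(np.trace(W)) ** 2 / d ** 2

# Monte-Carlo estimate of the average fidelity
# Fbar = E_psi |<psi| V^dag U |psi>|^2 over Haar-random pure states.
N = 200_000
psi = rng.standard_normal((N, d)) + 1j * rng.standard_normal((N, d))
psi /= np.linalg.norm(psi, axis=1, keepdims=True)
fbar = np.mean(np.abs(np.einsum("ni,ij,nj->n", psi.conj(), W, psi)) ** 2)

print(c_hst, (d + 1) / d * (1 - fbar))  # agree up to sampling error
```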
14. Noise models
Pauli noise
• E(X^k Z^l) = c_kl X^k Z^l.
• T2 process (dephasing channel) as a special case.
• Depolarizing noise: ρ → p ρ + (1 − p) I / 2^n.
Non-unital Pauli noise
• E(I) = I + Σ_{(k,l)≠(0,0)} d_kl X^k Z^l.
• For (k, l) ≠ (0, 0), E(X^k Z^l) = c_kl X^k Z^l.
• T1 process (amplitude damping channel) as a special case.
Measurement noise
• Ideal: {P0 = |0⟩⟨0|, P1 = |1⟩⟨1|}.
• Noisy: {P̃0, P̃1} with P̃0 = p00 |0⟩⟨0| + p01 |1⟩⟨1| and P̃1 = p10 |0⟩⟨0| + p11 |1⟩⟨1|, where p00 + p10 = 1 and p01 + p11 = 1.
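These noise models are easy to realize numerically. The sketch below (misassignment probabilities p10, p01 are illustrative values chosen for this demo) builds the noisy measurement operators and the depolarizing channel, and shows that both reduce the probability of reading 0 from the state |0⟩⟨0|:

```python
import numpy as np

P0 = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
P1 = np.eye(2) - P0                              # |1><1|

# Noisy readout with assumed misassignment probabilities:
# p10 = Pr(read 1 | state 0), p01 = Pr(read 0 | state 1).
p10, p01 = 0.03, 0.05
P0_noisy = (1 - p10) * P0 + p01 * P1   # p00|0><0| + p01|1><1|
P1_noisy = p10 * P0 + (1 - p01) * P1   # p10|0><0| + p11|1><1|

def depolarize(rho, p):
    """n-qubit depolarizing channel: rho -> p*rho + (1 - p) I / 2^n."""
    dim = rho.shape[0]
    return p * rho + (1 - p) * np.eye(dim) / dim

rho = P0                                               # prepare |0><0|
prob0_readout = np.trace(P0_noisy @ rho).real          # readout noise only
prob0_both = np.trace(P0_noisy @ depolarize(rho, 0.9)).real
print(prob0_readout, prob0_both)  # 0.97, then lower still with gate noise
```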
15. Main results: Optimal parameter resilience
Figure 3: Noise Model 1. PGN = Pauli gate noise, MN = measurement noise, PN = Pauli noise, DN = depolarizing noise, NUPN = non-unital Pauli noise.
16. Main results: Optimal parameter resilience
• Quantum circuit: QC.
• Noiseless cost function: C_QC(V).
• Noisy cost function (QC run in the presence of some noise process N): C̃_QC(V).
Optimal solutions:
V_d^opt = {V ∈ V_d : C_QC(V) = min_{V′ ∈ V_d} C_QC(V′)},
Ṽ_d^opt = {V ∈ V_d : C̃_QC(V) = min_{V′ ∈ V_d} C̃_QC(V′)}.
C_QC(V) exhibits strong optimal parameter resilience to N if Ṽ_d^opt = V_d^opt.
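For global depolarizing noise the mechanism behind this resilience is visible in a one-parameter toy model: the noisy cost is an increasing affine function of the noiseless cost, so both share the same minimizer. A hedged sketch (the single-qubit Rz-compiling cost and the noise placement are illustrations, not results from the deck):

```python
import numpy as np

# Toy compiling cost: target U = Rz(0.8), ansatz V = Rz(phi), d = 2.
# Noiseless cost: C(phi) = 1 - |Tr(Rz(phi)^dag Rz(0.8))|^2 / 4
#                        = 1 - cos((phi - 0.8)/2)^2.
phi0, d, p = 0.8, 2, 0.6
phis = np.linspace(-np.pi, np.pi, 721)
C = 1 - np.cos((phis - phi0) / 2) ** 2

# Global depolarizing noise of strength p on the cost-evaluation circuit
# maps the measured fidelity F to p*F + (1 - p)/d^2, so the noisy cost is
# an increasing affine function of the noiseless one:
C_noisy = p * C + (1 - p) * (1 - 1 / d ** 2)

# Affine monotone maps preserve the argmin: same optimal parameter.
print(phis[np.argmin(C)], phis[np.argmin(C_noisy)])
```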
18. Implementations
Figure 4: (a) The dressed CNOT is composed of a CNOT preceded and followed by single-qubit gates Vk(αk). (b) Two layers of the alternating-pair ansatz in the case of four qubits. Each layer is composed of dressed CNOTs acting on alternating pairs of neighboring qubits. (c) Schematic representation of the target-inspired ansatz. In this approach, the gate sequence of dressed CNOTs is obtained from the gate sequence of the target unitary U.
Vk(αk) = e^{−iα_{k,3} σ_z/2} e^{−iα_{k,2} σ_y/2} e^{−iα_{k,1} σ_z/2}.
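With the convention that the rightmost factor acts first, this Z-Y-Z Euler decomposition can be checked numerically; for instance, the Hadamard gate is reached at αk = (π, π/2, 0) up to a global phase (a NumPy sketch; these specific angles are this example's assumption, not a value from the deck):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

def rot(G, theta):
    """exp(-i * theta * G / 2) for a Pauli matrix G."""
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * G

def Vk(a):
    """Vk(alpha_k) = Rz(a[2]) Ry(a[1]) Rz(a[0]); the rightmost rotation,
    Rz(a[0]), acts first (Z-Y-Z Euler decomposition)."""
    return rot(Z, a[2]) @ rot(Y, a[1]) @ rot(Z, a[0])

V = Vk([np.pi, np.pi / 2, 0.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(np.allclose(V @ V.conj().T, I2))          # True: unitary
print(abs(np.trace(V.conj().T @ H)) ** 2 / 4)   # 1.0: equals H up to phase
```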
19. Implementations
Figure 5: Quantum circuits for: (a) the Toffoli gate, (b) the three-qubit quantum Fourier transform, and (c) three-qubit W-state preparation. Here, Rm stands for the controlled-phase gate that applies the phase factor e^{2πi/2^m}. For the three-qubit W-state preparation circuit we have β1 = (2 arccos(1/√3), 0, 0) and β2 = (π/2, 0, 0).
Vk(αk) = e^{−iα_{k,3} σ_z/2} e^{−iα_{k,2} σ_y/2} e^{−iα_{k,1} σ_z/2}.
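The Rm convention can be verified by assembling the three-qubit QFT from Hadamard and controlled-Rm gates and comparing against the DFT matrix. A NumPy sketch of the textbook circuit (qubit 1 is taken as the most significant bit, with the usual final qubit reversal):

```python
import numpy as np

n, d = 3, 8
Hg = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def op_on(q, g):
    """Embed a single-qubit gate g on qubit q (qubit 1 = most significant)."""
    out = np.eye(1)
    for i in range(1, n + 1):
        out = np.kron(out, g if i == q else np.eye(2))
    return out

def cphase(q1, q2, m):
    """Controlled-Rm: phase e^{2 pi i / 2^m} when qubits q1 and q2 are both 1."""
    bit = lambda j, q: (j >> (n - q)) & 1
    return np.diag([np.exp(2j * np.pi / 2 ** m) if bit(j, q1) and bit(j, q2)
                    else 1.0 for j in range(d)])

# Textbook QFT circuit (rightmost factor acts first), then reverse the qubits.
U = op_on(3, Hg) @ cphase(2, 3, 2) @ op_on(2, Hg) \
    @ cphase(1, 3, 3) @ cphase(1, 2, 2) @ op_on(1, Hg)
swap = np.zeros((d, d))
for j in range(d):
    b1, b2, b3 = (j >> 2) & 1, (j >> 1) & 1, j & 1
    swap[(b3 << 2) | (b2 << 1) | b1, j] = 1.0
U = swap @ U

# Compare with the DFT matrix F[j, k] = exp(2 pi i j k / d) / sqrt(d).
F = np.exp(2j * np.pi * np.outer(np.arange(d), np.arange(d)) / d) / np.sqrt(d)
print(np.allclose(U, F))  # True
```

Note that R1 = Z, R2 = S, and R3 = T under this convention.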
20. Implementations
Figure 6: VQC implementations: the Toffoli gate (top) and three-qubit QFT (bottom). The ansatz for V(α) is: (a) one layer of the alternating-pair (AP) ansatz, (b) two layers of the AP ansatz, (c) the target-inspired ansatz. The blue and red curves respectively plot the values of the noisy costs C̃_HST and C̃_LHST obtained by noisy training. The green and pink curves respectively plot the values of C_HST and C_LHST evaluated at the variational parameters α obtained from the noisy optimization of V(α). The blue and red dashed lines in (a) and (b) correspond to the minimum value of C_HST and C_LHST, respectively, determined by optimizing V(α) in a noise-free environment.
21. Conclusion
Noise resilience of Variational Quantum Compiling (VQC)
• Measurement noise (readout error).
• Incoherent gate noise and decoherence processes: Pauli
channels and non-unital Pauli channels.
Future directions
• Practically useful applications of VQC with a NISQ device.
• Noise resilience of other variational quantum algorithms.
Example: VQE.
References
[BDB+18] K. E. C. Booth, M. Do, J. C. Beck, E. Rieffel, D. Venturelli, and J. Frank. Comparing and integrating constraint programming and temporal planning for quantum circuit compilation. arXiv:1803.06775, 2018.
[CFM17] F. T. Chong, D. Franklin, and M. Martonosi. Programming languages and compiler design for realistic quantum hardware. Nature, 549(7671):180, 2017.
[CHI+19] C. Cirstoiu, Z. Holmes, J. Iosue, L. Cincio, P. J. Coles, and A. Sornborger. Variational fast forwarding for quantum simulation beyond the coherence time. arXiv:1910.04292, 2019.
[CSV+20] M. Cerezo, A. Sone, T. Volkoff, L. Cincio, and P. J. Coles. Cost-function-dependent barren plateaus in shallow quantum neural networks. arXiv:2001.00550, 2020.
[Fey99] R. P. Feynman. Simulating physics with computers. Int. J. Theor. Phys., 21(6/7), 1999.
[Gro96] L. K. Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, pages 212-219, 1996.
[HSNF18] K. Heya, Y. Suzuki, Y. Nakamura, and K. Fujii. Variational quantum gate optimization. arXiv:1810.12745, 2018.
[HSST18] T. Häner, D. S. Steiger, K. Svore, and M. Troyer. A software methodology for compiling quantum programs. Quantum Science and Technology, 3(2):020501, 2018.
[JB18] T. Jones and S. C. Benjamin. Quantum compilation and circuit optimisation via energy dissipation. arXiv:1811.03147, 2018.
[KLP+19] S. Khatri, R. LaRose, A. Poremba, L. Cincio, A. T. Sornborger, and P. J. Coles. Quantum-assisted quantum compiling. Quantum, 3:140, May 2019.
[LJF+17] N. M. Linke, S. Johri, C. Figgatt, K. A. Landsman, A. Y. Matsuura, and C. Monroe. Measuring the Rényi entropy of a two-site Fermi-Hubbard model on a trapped ion quantum computer. arXiv:1712.08581, 2017.
[MBJA+19] P. Murali, J. M. Baker, A. Javadi-Abhari, F. T. Chong, and M. Martonosi. Noise-adaptive compiler mappings for noisy intermediate-scale quantum computers. In Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems, pages 1015-1029. ACM, 2019.
[MBS+18] J. R. McClean, S. Boixo, V. N. Smelyanskiy, R. Babbush, and H. Neven. Barren plateaus in quantum neural network training landscapes. Nature Communications, 9:4812, Nov 2018.
[MRBAG16] J. R. McClean, J. Romero, R. Babbush, and A. Aspuru-Guzik. The theory of variational hybrid quantum-classical algorithms. New Journal of Physics, 18(2):023023, 2016.
[Pre12] J. Preskill. Quantum computing and the entanglement frontier. arXiv:1203.5813, 2012.
[SCC18] Y. Subasi, L. Cincio, and P. J. Coles. Entanglement spectroscopy with a depth-two quantum circuit. Journal of Physics A: Mathematical and Theoretical, 2018.
[Sho99] P. W. Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM Review, 41(2):303-332, 1999.
[SKCC20] K. Sharma, S. Khatri, M. Cerezo, and P. J. Coles. Noise resilience of variational quantum compiling. New Journal of Physics, 2020.
[TBG17] K. Temme, S. Bravyi, and J. M. Gambetta. Error mitigation for short-depth quantum circuits. Physical Review Letters, 119(18):180509, 2017.