This document presents a new class of restricted quantum membrane systems. The key idea is to define membrane systems that operate under strictly unitary quantum evolution rules, avoiding the problems associated with transferring objects between membranes. A cascading membrane P system is defined, in which membranes are arranged hierarchically with input and output spaces coupled in a pipeline. Computation proceeds by applying unitary operators to qubit registers representing object multiplicities in each membrane. This model is shown to be capable of simulating classical automata. The approach aims to combine variants of P systems with quantum computing techniques in a way that is consistent with the underlying quantum physics.
Infinite and Standard Computation with Unconventional and Quantum Methods Usi... (by Konstantinos Giannakis)
The document discusses Konstantinos Giannakis's dissertation defense on unconventional computing methods using automata. It summarizes the dissertation's structure, including discussions on standard computation, infinite computation using automata, membrane computing using novel membrane automata definitions, and quantum computing using periodic quantum automata. It provides examples and definitions related to each of these topics.
Presentation of "Quantum automata for infinite periodic words" for the 6th International Conference on Information, Intelligence, Systems and Applications (IISA 2015)
Monte Carlo Simulations, Sampling and Markov Chain Monte Carlo (by Xin-She Yang)
The document discusses Monte Carlo methods and Markov chain Monte Carlo (MCMC). It provides examples of using Monte Carlo simulations to estimate pi and to solve Buffon's needle problem. It also discusses random walks in Markov chains, the PageRank algorithm used by Google, and the challenges posed by high-dimensional integrals and by distributions that do not have a closed-form inverse. MCMC methods are presented as a way to address these challenges.
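The pi-estimation example mentioned above can be sketched in a few lines of Python (a minimal illustration; the function name and sample count are arbitrary): sample points uniformly in the unit square and count the fraction landing inside the quarter circle, whose area is pi/4.

```python
import random

def estimate_pi(n_samples, seed=0):
    """Estimate pi by sampling points uniformly in the unit square
    and counting the fraction that falls inside the quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))
```

With 100,000 samples the estimate is typically within a few hundredths of pi; like all plain Monte Carlo estimates, the error shrinks like 1/sqrt(n).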
Markov chain Monte Carlo methods and some attempts at parallelizing them (by Pierre Jacob)
Markov chain Monte Carlo (MCMC) methods are commonly used to approximate properties of target probability distributions. However, MCMC estimators are generally biased for any fixed number of samples. The document discusses various techniques for constructing unbiased estimators from MCMC output, including regeneration, sequential Monte Carlo samplers, and coupled Markov chains. Specifically, running two Markov chains in parallel and taking the difference in their values at meeting times can yield an unbiased estimator, though certain conditions must hold.
Why should you care about Markov Chain Monte Carlo methods?
→ They are in the list of "Top 10 Algorithms of the 20th Century"
→ They allow you to make inference with Bayesian Networks
→ They are used everywhere in Machine Learning and Statistics
Markov Chain Monte Carlo methods are a class of algorithms used to sample from complicated distributions. Typically, this is the case of posterior distributions in Bayesian Networks (Belief Networks).
These slides cover the following topics.
→ Motivation and Practical Examples (Bayesian Networks)
→ Basic Principles of MCMC
→ Gibbs Sampling
→ Metropolis–Hastings
→ Hamiltonian Monte Carlo
→ Reversible-Jump Markov Chain Monte Carlo
This is a short presentation for a 15-minute talk at Bayesian Inference for Stochastic Processes 7, on the SMC^2 algorithm.
http://arxiv.org/abs/1101.1528
The document provides an introduction to Markov Chain Monte Carlo (MCMC) methods. It discusses using MCMC to sample from distributions when direct sampling is difficult. Specifically, it introduces Gibbs sampling and the Metropolis-Hastings algorithm. Gibbs sampling updates variables one at a time based on their conditional distributions. Metropolis-Hastings proposes candidate samples and accepts or rejects them to converge to the target distribution. The document provides examples and outlines the algorithms to construct Markov chains that sample distributions of interest.
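As an illustration of the Gibbs update described here, a minimal Python sampler for a bivariate normal target whose full conditionals are available in closed form (the correlation value and seed are arbitrary choices for illustration):

```python
import random

def gibbs_bivariate_normal(n_samples, rho=0.8, seed=1):
    """Gibbs sampler for a bivariate normal with zero means, unit
    variances, and correlation rho: each full conditional is
    N(rho * other, 1 - rho**2), so both updates are exact draws."""
    rng = random.Random(seed)
    sd = (1.0 - rho * rho) ** 0.5
    x, y = 0.0, 0.0
    samples = []
    for _ in range(n_samples):
        x = rng.gauss(rho * y, sd)  # draw x | y
        y = rng.gauss(rho * x, sd)  # draw y | x
        samples.append((x, y))
    return samples

draws = gibbs_bivariate_normal(50_000)
print(sum(x for x, _ in draws) / len(draws))  # sample mean of x, near 0
```

The empirical means and the empirical correlation of the draws approach 0 and rho, illustrating convergence to the joint target even though only conditionals are sampled.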
The document discusses Markov chains and their relationship to random walks on graphs and electrical networks. Some key points:
- A Markov chain is a process that transitions between a finite set of states based on transition probabilities that depend only on the current state.
- For a strongly connected Markov chain, there exists a unique stationary distribution that the long-term probabilities of the chain converge to, regardless of the starting state.
- Random walks on undirected graphs can be modeled as Markov chains, where the transition probabilities are proportional to edge conductances in an analogous electrical network. The stationary distribution of such a random walk is proportional to vertex degrees or conductances.
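The degree-proportional stationary distribution can be checked numerically; here is a small Python sketch on a hypothetical 4-vertex graph with unit edge conductances (so transition probabilities are uniform over neighbours):

```python
# Random walk on a small undirected graph; verify that the stationary
# distribution is proportional to vertex degree, pi(v) = deg(v) / 2|E|.
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}

# Transition matrix: from v, move to a uniformly chosen neighbour.
n = len(graph)
P = [[0.0] * n for _ in range(n)]
for v, nbrs in graph.items():
    for w in nbrs:
        P[v][w] = 1.0 / len(nbrs)

# Power iteration: push a distribution through P until it stabilises.
pi = [1.0 / n] * n
for _ in range(1000):
    pi = [sum(pi[v] * P[v][w] for v in range(n)) for w in range(n)]

total_degree = sum(len(nbrs) for nbrs in graph.values())  # equals 2|E|
expected = [len(graph[v]) / total_degree for v in range(n)]
print(pi)        # converged long-run distribution
print(expected)  # degree / 2|E| for each vertex
```

The graph contains a triangle, so the chain is aperiodic and the iteration converges regardless of the starting distribution, matching the uniqueness claim above.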
Ordinal Regression and Machine Learning: Applications, Methods, Metrics (by Francesco Casalegno)
What do movie recommender systems, disease progression evaluation, and sovereign credit ranking have in common?
→ ordinal regression sits between classification and regression
→ target values are categorical and discrete, but ordered
→ many challenges to face when training and evaluating models
What will you find in this presentation?
→ real life, clear examples of ordinal regression you see everyday
→ learning to rank: predict user preferences and items relevance
→ best solution methods: naïve, binary decomposition, threshold
→ how to measure performance: understand & choose metrics
Hidden Markov Models with applications to speech recognition (by butest)
This document provides an introduction to hidden Markov models (HMMs). It discusses how HMMs can be used to model sequential data where the underlying states are not directly observable. The key aspects of HMMs are: (1) the model has a set of hidden states that evolve over time according to transition probabilities, (2) observations are emitted based on the current hidden state, (3) the four basic problems of HMMs are evaluation, decoding, training, and model selection. Examples discussed include modeling coin tosses, balls in urns, and speech recognition. Learning algorithms for HMMs like Baum-Welch and Viterbi are also summarized.
The Metropolis Hastings algorithm is an MCMC method for obtaining a sequence of samples from a probability distribution when direct sampling is difficult. It constructs a Markov chain that has the desired target distribution as its stationary distribution. At each step, a candidate sample is generated and either accepted, replacing the current state, or rejected, keeping the current state. The acceptance ratio is determined by the ratio of probabilities of the candidate and current states. The algorithm is a generalization of the Metropolis algorithm that allows for non-symmetric proposal distributions. When the chain satisfies ergodicity conditions, the sample distribution will converge to the target distribution as the number of samples increases.
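The accept/reject step described above can be sketched as a random-walk Metropolis sampler in Python (with a symmetric Gaussian proposal the Hastings correction cancels; the target, step size, and seed are illustrative choices):

```python
import math
import random

def metropolis_hastings(log_target, n_samples, step=1.0, x0=0.0, seed=2):
    """Random-walk Metropolis: propose x' = x + step * N(0, 1) and
    accept with probability min(1, target(x') / target(x)), computed
    in log space for numerical stability."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + step * rng.gauss(0.0, 1.0)
        log_alpha = log_target(proposal) - log_target(x)
        if rng.random() < math.exp(min(0.0, log_alpha)):
            x = proposal  # accept: move to the candidate
        # on rejection, the chain stays at x and x is recorded again
        samples.append(x)
    return samples

# Target: standard normal, known only up to a normalising constant.
chain = metropolis_hastings(lambda x: -0.5 * x * x, 100_000)
print(sum(chain) / len(chain))  # sample mean, near 0
```

Note that a rejected proposal re-records the current state; silently dropping rejected steps would bias the sample away from the target distribution.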
1) Markov Chain Monte Carlo (MCMC) methods use Markov chains to sample from complex probability distributions and are useful for problems that cannot be solved efficiently using other methods.
2) Common MCMC algorithms include Metropolis-Hastings, which samples from a target distribution using a proposal distribution, and Gibbs sampling, which efficiently samples multidimensional distributions by updating variables sequentially.
3) MCMC methods like simulated annealing can find global maxima of probability distributions and have applications in statistical mechanics, optimization, and Bayesian inference.
This document provides an introduction to Bayesian analysis and Metropolis-Hastings Markov chain Monte Carlo (MCMC). It explains the foundations of Bayesian analysis and how MCMC sampling methods like Metropolis-Hastings can be used to draw samples from posterior distributions that are intractable. The Metropolis-Hastings algorithm works by constructing a Markov chain with the target distribution as its stationary distribution. The document provides an example of using MCMC to perform linear regression in a Bayesian framework.
- The document discusses various techniques for Markov chain Monte Carlo (MCMC) sampling, including rejection sampling, Metropolis-Hastings, and Gibbs sampling.
- It explains how MCMC can be used for approximate probabilistic inference in complex models by constructing a Markov chain that converges to the target distribution.
- Diagnostics are discussed for checking if the Markov chain has converged, such as visual inspection of trace plots, and Geweke and Gelman-Rubin tests of the within-chain and between-chain variances.
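The Gelman-Rubin diagnostic mentioned in the last bullet compares between-chain and within-chain variances; a minimal Python sketch (the split-R-hat refinement used by modern samplers is omitted here):

```python
import random

def gelman_rubin(chains):
    """Gelman-Rubin potential scale reduction factor (R-hat) for a
    list of equally long chains: compares the between-chain variance B
    with the pooled within-chain variance W; values near 1 suggest the
    chains have converged to the same distribution."""
    m = len(chains)          # number of chains
    n = len(chains[0])       # draws per chain
    means = [sum(c) / n for c in chains]
    grand = sum(means) / m
    B = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)
    W = sum(sum((x - mu) ** 2 for x in c) / (n - 1)
            for c, mu in zip(chains, means)) / m
    var_hat = (n - 1) / n * W + B / n  # pooled posterior variance estimate
    return (var_hat / W) ** 0.5

# Two well-mixed chains drawn from the same distribution -> R-hat near 1.
rng = random.Random(3)
chains = [[rng.gauss(0, 1) for _ in range(5000)] for _ in range(2)]
print(gelman_rubin(chains))
```

If the chains had been started far apart and not yet mixed, B would dominate W and R-hat would sit noticeably above 1, signalling non-convergence.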
This document discusses Markov chain Monte Carlo (MCMC) methods. It begins with an outline of the Metropolis-Hastings algorithm, which is a generic MCMC method for obtaining a sequence of random samples from a probability distribution when direct sampling is difficult. The document then provides details on the Metropolis-Hastings algorithm, including its convergence properties. It also discusses the independent Metropolis-Hastings algorithm as a special case and provides an example to illustrate it.
This document provides an overview of Markov chain Monte Carlo (MCMC) methods. It begins with motivations for using MCMC, such as computational difficulties that arise in models with latent variables like mixture models. It then discusses likelihood-based and Bayesian approaches, noting limitations of maximum likelihood methods. Conjugate priors are described that allow tractable Bayesian inference for some simple models. However, conjugate priors are not available for more complex models, motivating the use of MCMC methods which can approximate integrals and distributions of interest for more complex models.
These are the slides for my Master course on Monte Carlo Statistical Methods, given in conjunction with the Monte Carlo Statistical Methods book with George Casella.
International Conference on Monte Carlo techniques
Closing conference of thematic cycle
Paris July 5-8th 2016
Campus les cordeliers
Jere Koskela's slides
Markov Chain Monte Carlo (MCMC) methods use Markov chains to sample from probability distributions for use in Monte Carlo simulations. The Metropolis-Hastings algorithm proposes transitions to new states in the chain and either accepts or rejects those states based on a probability calculation, allowing it to sample from complex, high-dimensional distributions. The Gibbs sampler is a special case of MCMC where each variable is updated conditional on the current values of the other variables, ensuring all proposed moves are accepted. These MCMC methods allow approximating integrals that are difficult to compute directly.
Supervised Hidden Markov Chains.
Here, we used the paper by Rabiner as the basis for the presentation. Thus, we address the following three problems:
1. How to efficiently compute the probability of an observation sequence given a model.
2. Given an observation sequence, how to decide which class it belongs to.
3. How to estimate the model parameters from training data.
The first two follow Rabiner's explanation, but for the third I used Lagrange multiplier optimization, because Rabiner's paper lacks a clear explanation of how to solve it.
1. Hidden Markov Models (HMMs) are used to model sequential data where the underlying process generating the observable outputs is not visible but assumed to be a Markov process with hidden states.
2. HMMs define transition probabilities between hidden states and emission probabilities of observable outputs for each state.
3. There are three typical problems for HMMs: likelihood computation, decoding the most likely sequence of hidden states, and learning the transition and emission probabilities from data.
The document discusses Markov chain Monte Carlo (MCMC) methods, which use Markov chains to generate dependent samples from probability distributions that are difficult to directly sample from. It introduces Gibbs sampling and the Metropolis-Hastings algorithm as two common MCMC techniques. Gibbs sampling works by iteratively sampling each parameter from its conditional distribution given current values of other parameters. Metropolis-Hastings also iteratively proposes new parameter values but only accepts them probabilistically, based on the target distribution. Both techniques generate Markov chains that can be used to approximate integrals and obtain quantities of interest from complex distributions.
This document summarizes a talk given by Pierre E. Jacob on recent developments in unbiased Markov chain Monte Carlo methods. It discusses:
1. The bias inherent in standard MCMC estimators due to the initial distribution not being the target distribution.
2. A method for constructing unbiased estimators using coupled Markov chains, where two chains are run in parallel until they meet, at which point an estimator involving the differences in the chains' values is returned.
3. Conditions under which the coupled chain estimators are unbiased and have finite variance. Examples are given of how to construct coupled versions of common MCMC algorithms like Metropolis-Hastings and Gibbs sampling.
This document provides an overview of hidden Markov models (HMMs). It defines HMMs as statistical Markov models that include both observed and hidden states. The key components of an HMM are states (Q), observations (V), initial state probabilities (p), state transition probabilities (A), and emission probabilities (E). HMMs find applications in areas like protein structure prediction, sequence alignment, and gene finding. The Viterbi algorithm is described as a dynamic programming approach for finding the most likely sequence of hidden states in an HMM. Advantages of HMMs include their statistical power and modularity, while disadvantages include assumptions of state independence and potential for overfitting.
Dealing with intractability: Recent Bayesian Monte Carlo methods for dealing ... (by BigMC)
talk by Nicolas Chopin at CREST Statistics Seminar, 16/01/2011.
This is partly a review, partly a talk on recent research such as
http://arxiv.org/abs/1101.1528
The document provides an introduction to quantum computing fundamentals using an object-oriented approach. It discusses quantum theory, registers, gates, and simulations. Key concepts covered include superposition, matrix operations, and single- and multi-qubit gates such as Pauli-X and CNOT, with their matrix representations. The presenter aims to demonstrate quantum computing principles via Q#, Microsoft's .NET-based quantum programming language and simulator.
Hidden Markov Model - The Most Probable Path (by Lê Hòa)
This document provides an overview of hidden Markov models including:
- The components of hidden Markov models including states, transition probabilities, emission probabilities, and observation sequences.
- How the Viterbi algorithm can be used to find the most probable hidden state sequence that explains an observed sequence by calculating likelihoods recursively and backtracking through the model.
- An example application of the Viterbi algorithm to find the most probable hidden weather sequence given observed data from a weather HMM model.
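The Viterbi recursion and backtracking described above can be sketched in Python; the weather model below is the textbook Rainy/Sunny example (an illustrative stand-in, not necessarily the model used in the slides):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Viterbi algorithm: dynamic programming over the trellis of
    hidden states, keeping for each state the most probable path that
    explains the observations so far, then backtracking."""
    # V[t][s] = (best prob of a path ending in s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][r][0] * trans_p[r][s] * emit_p[s][obs[t]], r)
                for r in states)
            V[t][s] = (prob, prev)
    # Backtrack from the most probable final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

states = ("Rainy", "Sunny")
obs = ("walk", "shop", "clean")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
print(viterbi(obs, states, start_p, trans_p, emit_p))
# -> ['Sunny', 'Rainy', 'Rainy']
```

For long sequences one would work in log space to avoid underflow, but the structure of the recursion is unchanged.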
Fundamentals of quantum computing, part I rev (by PRADOSH K. ROY)
This document provides an introduction to the fundamentals of quantum computing. It discusses computational complexity classes such as P and NP and essential matrix algebra concepts like Hermitian, unitary, and normal matrices. It also contrasts the classical and quantum worlds. In the quantum world, quantum systems can exist in superposition states and qubits can represent more than just binary 0s and 1s. The document introduces the concept of a qubit register and how multiple qubits can be represented using tensor products. It discusses characteristics of quantum systems like superposition, Born's rule for probabilities, and the measurement postulate which causes wavefunction collapse.
The document provides an overview of quantum computing, including its history, data representation using qubits, quantum gates and operations, and Shor's algorithm for integer factorization. Shor's algorithm uses quantum parallelism and the quantum Fourier transform to find the period of a function, from which the factors of a number can be determined. While quantum computing holds promise for certain applications, classical computers will still be needed and future computers may be a hybrid of classical and quantum components.
Quantum Evolutionary Algorithm for Solving Bin Packing Problem (by inventionjournals)
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews within the whole field Engineering Science and Technology, new teaching methods, assessment, validation and the impact of new technologies and it will continue to provide information on the latest trends and developments in this ever-expanding subject. The publications of papers are selected through double peer reviewed to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Cellular Automata, PDEs and Pattern FormationXin-She Yang
This document discusses the relationship between cellular automata (CA), partial differential equations (PDEs), and pattern formation. It begins by introducing CA as a rule-based computing system and noting that CA and PDE models both describe temporal evolution, raising the question of how to relate the two approaches. The document then covers the fundamentals of CA, including finite-state CA, stochastic CA, and reversible CA. It discusses using CA to model PDEs by constructing CA rules from finite difference schemes. Finally, it discusses how both CA and PDEs can be used to model pattern formation in various natural and engineered systems.
1. The document discusses entanglement generation and state transfer in a Heisenberg spin-1/2 chain under an external magnetic field.
2. It analyzes the fidelity and concurrence of the system over time and temperature using the density matrix and Hamiltonian equations for a 2-qubit system.
3. The results show that maximally entangled states are difficult to achieve but desirable for quantum computation applications like quantum teleportation.
Quantum computing uses quantum mechanics and qubits that can exist in superpositions of states to perform calculations. A qubit can represent both 0 and 1 simultaneously, allowing quantum computers to evaluate many possibilities in parallel. Operations on qubits use reversible quantum gates like the Hadamard gate to create superpositions and the controlled-NOT gate to entangle qubits. One example of a quantum algorithm is Shor's algorithm for integer factorization that runs exponentially faster than classical computers. Open questions remain around building large-scale quantum computers and finding other useful quantum algorithms.
osama-quantum-computing and its uses and applicationsRachitdas2
This document provides an overview of quantum computing. It begins with introductions to quantum mechanics and the basic concept of a quantum computer. Qubits can represent superpositions of states allowing quantum computers to perform massive parallelism. Data is represented using qubit states and operations involve entanglement. Measurement causes superpositions to collapse probabilistically. While quantum mechanics is strange, quantum computing may enable solving problems like factoring exponentially faster than classical computers. The document questions the Church-Turing thesis in light of quantum computing's ability.
A quantum computer uses quantum mechanics phenomena like superposition and entanglement to perform computations. In a quantum computer, a qubit can represent a 0 and 1 simultaneously using superposition. This allows quantum computers to evaluate functions on all possible inputs at once. Measurement causes the superposition to collapse to a single value. Quantum computers may be able to solve certain problems like factoring exponentially faster than classical computers due to these quantum effects. However, building large-scale, reliable quantum computers remains a significant technical challenge.
The document discusses quantum computing concepts such as wave functions, bra-ket notation, identity matrices, Pauli matrices, Hermitian matrices, and unitary matrices. It provides examples of applying Pauli matrices to quantum states |0> and |1> and explains how identity matrices do not change these states. The key aspects covered are mathematical representations of quantum states and operations, as well as basic principles of quantum information and computing.
This document summarizes an experimental demonstration of one-way quantum computing using a four-photon cluster state. Key points:
1) Researchers generated a four-qubit cluster state by emitting four photons into different spatial modes using spontaneous parametric down-conversion.
2) They fully characterized the quantum state using quantum state tomography, the first time this was done for a four-qubit state.
3) Experiments implemented single-qubit and two-qubit quantum logic gates by performing measurements on the cluster state, demonstrating the feasibility of one-way quantum computing.
Quantum inspired evolutionary algorithm for solving multiple travelling sales...eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
The document discusses calculating and plotting the allocation of entropy for bipartite and tripartite quantum systems. It provides tables of entropy calculations for a bipartite system of |0> and |1> states, and checks that the results satisfy the subadditivity inequality. It also outlines the methodology to perform similar calculations and plots for other systems to visualize the convex cones of entropy allocation.
This document is a research essay presented by Canlin Zhang to the University of Waterloo in partial fulfillment of the requirements for a Master's degree in Pure Mathematics. The essay introduces some basic concepts in operator theory and their connections to quantum computation and information. It discusses topics such as quantum algorithms, quantum channels, quantum error correction, and noiseless subsystems. The essay is divided into six sections that cover these topics at a high-level introduction.
This document summarizes recent convergence results for the fuzzy c-means clustering algorithm (FCM). It discusses both numerical convergence, referring to how well the algorithm attains the minima of an objective function, and stochastic convergence, referring to how accurately the minima represent the actual cluster structure in data. For numerical convergence, the document outlines global and local convergence theorems, showing FCM converges to minima or saddle points globally and linearly to local minima. For stochastic convergence, it discusses a consistency result showing the minima accurately represent cluster structure under certain statistical assumptions.
Breaking the 49 qubit barrier in the simulation of quantum circuitshquynh
This paper presents a new method for classically simulating quantum circuits with up to 56 qubits. The method uses a tensor representation rather than a matrix representation, allowing quantum circuits to be decomposed and simulated in arbitrary order. The authors demonstrate the method by simulating a 49-qubit circuit with depth 27 and a 56-qubit circuit with depth 23. These simulations required 4.5TB and 3.0TB of memory respectively, within the capabilities of existing supercomputers, whereas previous methods would have required impractical amounts of memory. The simulations confirm theoretical predictions about the distribution of output probabilities.
This document provides an overview of quantum computing, including:
- Quantum computers store and process information using quantum bits (qubits) that can exist in superpositions of states allowing exponential increases in processing power over classical computers.
- Key concepts include qubit representation and superpositions, entanglement, measurement and computational complexity classes like BQP.
- Quantum algorithms show exponential speedups over classical for factoring, discrete log, and some other problems.
- Implementation challenges include building reliable qubits, controlling operations, and error correction. Leading approaches use trapped ions, NMR, photonics, and solid state systems.
This presentation provides an overview of quantum computers including:
- What they are and how they use quantum phenomena like superposition and entanglement to perform operations.
- Common algorithms like Shor's algorithm, Grover's algorithm, and Deutsch-Jozsa algorithm.
- Key concepts like qubits, quantum gates, entanglement, and bra-ket notation.
- Challenges like errors, decoherence, and difficulty verifying results against classical computers.
- Recent advances in building larger quantum computers with more qubits by companies like Intel, Google, and IBM.
The document provides an introduction to quantum computing, including:
1) It explains that quantum computing utilizes quantum mechanics and quantum bits (qubits) that can exist in superpositions of states, allowing quantum computers to potentially process exponentially more information than classical computers.
2) The key differences between classical and quantum computers are described, with classical computers using bits in binary states while quantum computers use qubits that can be in superpositions of states.
3) Popular quantum gates like Hadamard, CNOT, and rotation gates are introduced and explained as transformations that can be applied to qubits.
Similar to A new class of restricted quantum membrane systems (20)
A quantum-inspired optimization heuristic for the multiple sequence alignment...Konstantinos Giannakis
The document presents a quantum-inspired heuristic for solving the multiple sequence alignment problem in bioinformatics. It models the sequence similarity as a traveling salesman problem instance using a normalized similarity matrix. The method applies a quantum-inspired generalized variable neighborhood search metaheuristic to approximate the shortest Hamiltonian path and generate an initial alignment. Evaluation on real biological sequences shows it outperforms progressive methods, producing alignments with good sum-of-pairs scores, especially for large sequence sets.
Computing probabilistic queries in the presence of uncertainty via probabilis...Konstantinos Giannakis
1) The document proposes using probabilistic automata to compute probabilistic queries on RDF-like data structures that contain uncertainty. It shows how to assign a probabilistic automaton corresponding to a particular query.
2) An example query is provided that finds all nodes influenced by a starting node with a probability above a threshold. The probabilistic automata calculations allow filtering results by probability.
3) Benefits cited include leveraging well-studied probabilistic automata results and efficient handling of uncertainty. Future work could expand the models to infinite data and provide more empirical results.
Initialization methods for the tsp with time windows using variable neighborh...Konstantinos Giannakis
This document presents an initialization method for solving the travelling salesman problem with time windows (TSP-TW) using variable neighborhood search (VNS). The authors implement a VNS metaheuristic that uses both random and sorted initial solutions and performs local search. Their results show that for some problem instances, a sorted initial solution does not find a feasible solution as often as a random initial solution. The authors propose using alternative random initialization procedures with different probability distributions for future work.
The document discusses querying Linked Data using Büchi automata. It introduces Linked Data and SPARQL queries, and notes the infinite nature of social networking applications and Linked Open Numbers. It then discusses using Büchi automata to verify webs of Linked Data by modeling their infinite behavior. The authors propose representing SPARQL queries on infinite webs of Linked Data using Büchi automata with infinite input to check for eventual computability.
The document describes a model of mitochondrial fusion using membrane automata. It investigates the biological function of mitochondrial fusion and models it using membrane automata and brane calculus. The model combines P automata and BioAmbients calculus to represent the hierarchical membrane structure and biomolecular rules governing mitochondrial fusion. It translates the biological model of fusion expressed in BioAmbient calculus into rewriting rules of P automata to simulate the process in a more visual and well-established framework.
The document summarizes an experiment evaluating a simulated listening typewriter for composing letters. Eighteen participants, including 10 novices and 8 professionals, used different versions of the typewriter to compose letters. The versions varied vocabulary size and allowed isolated or continuous speech. Results showed that isolated word speech with a large vocabulary produced letters of similar quality to traditional dictation and handwriting. However, some participants found the slow speed of the simulated system frustrating. The conclusion discusses limitations like the immaturity of speech recognition technology at the time and opportunities for further evaluating the system's potential benefits for disabled users.
The document discusses developing a Space Invaders video game. Space Invaders was a classic 1978 arcade game that inspired many subsequent games. It involved destroying rows of aliens that moved horizontally across the screen in increasingly difficult waves. The goal is to recreate Space Invaders using Microsoft XNA Framework and C# with game states like logo, menu, play, pause, win, and lose. The game will include levels that scale in difficulty.
User Requirements for Gamifying Sports Software- in 3rd International Workshop on Games and Software Engineering (GAS 2013) in ICSE 2013 May 18, 2013 San Francisco, California, U.S.A.
Web Mining to Create Semantic Content: A Case Study for the EnvironmentKonstantinos Giannakis
This document discusses using web mining techniques to create semantic content from environmental web data sources. It introduces concepts like web mining, the semantic web, and ecoinformatics. It describes related projects that use semantic information and focuses on mining the Encyclopedia of Earth. The proposed concept involves hierarchical clustering of mined data. Future work includes demos and integrating with semantic frameworks. The authors believe it is necessary to create semantic environmental content to assist in areas like preventing fires and pollution.
ESA/ACT Science Coffee: Diego Blas - Gravitational wave detection with orbita...Advanced-Concepts-Team
Presentation in the Science Coffee of the Advanced Concepts Team of the European Space Agency on the 07.06.2024.
Speaker: Diego Blas (IFAE/ICREA)
Title: Gravitational wave detection with orbital motion of Moon and artificial
Abstract:
In this talk I will describe some recent ideas to find gravitational waves from supermassive black holes or of primordial origin by studying their secular effect on the orbital motion of the Moon or satellites that are laser ranged.
The binding of cosmological structures by massless topological defectsSérgio Sacani
Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field
equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational
field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin
spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling
concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect
light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or modified gravity theory is
mitigated, at least in part.
Mending Clothing to Support Sustainable Fashion_CIMaR 2024.pdfSelcen Ozturkcan
Ozturkcan, S., Berndt, A., & Angelakis, A. (2024). Mending clothing to support sustainable fashion. Presented at the 31st Annual Conference by the Consortium for International Marketing Research (CIMaR), 10-13 Jun 2024, University of Gävle, Sweden.
The debris of the ‘last major merger’ is dynamically youngSérgio Sacani
The Milky Way’s (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the
‘last major merger.’ Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor
collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the
MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space,
because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia
DR3 have positive caustic velocities, making them fundamentally different than the phase-mixed chevrons found in simulations
at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based
on a simple phase-mixing model, the observed number of caustics are consistent with a merger that occurred 1–2 Gyr ago.
We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative
measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data
1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the ‘last major merger’
did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within
the last few Gyr, consistent with the body of work surrounding the VRM.
Sexuality - Issues, Attitude and Behaviour - Applied Social Psychology - Psyc...PsychoTech Services
A proprietary approach developed by bringing together the best of learning theories from Psychology, design principles from the world of visualization, and pedagogical methods from over a decade of training experience, that enables you to: Learn better, faster!
PPT on Direct Seeded Rice presented at the three-day 'Training and Validation Workshop on Modules of Climate Smart Agriculture (CSA) Technologies in South Asia' workshop on April 22, 2024.
(June 12, 2024) Webinar: Development of PET theranostics targeting the molecu...Scintica Instrumentation
Targeting Hsp90 and its pathogen Orthologs with Tethered Inhibitors as a Diagnostic and Therapeutic Strategy for cancer and infectious diseases with Dr. Timothy Haystead.
ESR spectroscopy in liquid food and beverages.pptxPRIYANKA PATEL
With increasing population, people need to rely on packaged food stuffs. Packaging of food materials requires the preservation of food. There are various methods for the treatment of food to preserve them and irradiation treatment of food is one of them. It is the most common and the most harmless method for the food preservation as it does not alter the necessary micronutrients of food materials. Although irradiated food doesn’t cause any harm to the human health but still the quality assessment of food is required to provide consumers with necessary information about the food. ESR spectroscopy is the most sophisticated way to investigate the quality of the food and the free radicals induced during the processing of the food. ESR spin trapping technique is useful for the detection of highly unstable radicals in the food. The antioxidant capability of liquid food and beverages in mainly performed by spin trapping technique.
When I was asked to give a companion lecture in support of ‘The Philosophy of Science’ (https://shorturl.at/4pUXz) I decided not to walk through the detail of the many methodologies in order of use. Instead, I chose to employ a long standing, and ongoing, scientific development as an exemplar. And so, I chose the ever evolving story of Thermodynamics as a scientific investigation at its best.
Conducted over a period of >200 years, Thermodynamics R&D, and application, benefitted from the highest levels of professionalism, collaboration, and technical thoroughness. New layers of application, methodology, and practice were made possible by the progressive advance of technology. In turn, this has seen measurement and modelling accuracy continually improved at a micro and macro level.
Perhaps most importantly, Thermodynamics rapidly became a primary tool in the advance of applied science/engineering/technology, spanning micro-tech, to aerospace and cosmology. I can think of no better a story to illustrate the breadth of scientific methodologies and applications at their best.
Travis Hills of MN is Making Clean Water Accessible to All Through High Flux ...Travis Hills MN
By harnessing the power of High Flux Vacuum Membrane Distillation, Travis Hills from MN envisions a future where clean and safe drinking water is accessible to all, regardless of geographical location or economic status.
Travis Hills of MN is Making Clean Water Accessible to All Through High Flux ...
A new class of restricted quantum membrane systems
1. a new class of restricted quantum membrane systems
2nd World Congress “Genetics, Geriatrics and Neurodegenerative Diseases Research” (GeNeDis 2016)
Sparta, Greece
Konstantinos Giannakis, Alexandros Singh, Kalliopi Kastampolidou, Christos Papalitsas, and Theodore Andronikos
October 20, 2016
Department of Informatics, Ionian University
2. preview of our study
∙ Computing in an unconventional environment
∙ Membrane systems
∙ Quantum computational aspects
∙ Quantum evolution rules
4. motivations
∙ Moore’s Law is reaching its physical limits.
∙ New computing paradigms?
∙ Redesign and revisit well-studied models and structures from
classical computation.
∙ Unitarity in membrane systems.
5. membrane computing
∙ Known as P systems with several proposed variants.
∙ Evolution is depicted through rewriting rules on multisets of the form u → v,
∙ imitating natural chemical reactions.
∙ u, v are multisets of objects.
∙ The hierarchical status of membranes evolves by constantly
creating and destroying membranes, by membrane division etc.
∙ Different types of communication rules:
∙ symport rules (one-way passing through a membrane)
∙ antiport rules (two-way passing through a membrane)
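A rewriting step u → v on a region's multiset can be sketched in a few lines of Python; the helper `apply_rule` and the sample multisets below are our own illustration, not part of the formal model:

```python
from collections import Counter

def apply_rule(region, u, v):
    """Apply a rewriting rule u -> v once: consume the multiset u, produce v."""
    if any(region[obj] < n for obj, n in u.items()):
        raise ValueError("rule not applicable: u is not contained in the region")
    result = region - u   # remove the consumed objects
    result.update(v)      # add the produced objects
    return result

# A region holding the multiset "cab", rewritten with the rule c -> a:
region = Counter("cab")
after = apply_rule(region, u=Counter("c"), v=Counter("a"))
assert after == Counter("aab")
```

`Counter` is a natural stand-in for a multiset here: subtraction removes the consumed copies and `update` adds the produced ones, so object multiplicities are tracked automatically.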
7. p systems evolution and computation
∙ Via purely non-deterministic, parallel rules.
∙ Characteristics of membrane systems: the membrane structure,
multisets of objects, and rules.
∙ The membrane structure can be represented by a string of labelled matching parentheses.
∙ Use of rules =⇒ transitions among configurations.
∙ A sequence of transitions is interpreted as computation.
∙ Accepted computations are those which halt and a successful
computation is associated with a result.
8. rules used in membrane computing
[Figure: three worked examples of membrane rules — a) and b) object evolution rules (c → a, c → bb) combined with communication targets such as (a, in), (b, in), (a, out), applied to multisets like cab, cca, and caa; c) an exo membrane rule.]
9. definition
Definition
A generic P system (of degree m, m ≥ 1) with the characteristics described above can be defined as a construct
Π = (V, T, C, H, µ, w1, ..., wm, (R1, ..., Rm), (H1, ..., Hm), i0) ,
where
1. V is an alphabet and its elements are called objects.
2. T ⊆ V is the output alphabet.
3. C ⊆ V, C ∩ T = ∅ are catalysts.
4. H is the set {pino, exo, mate, drip} of membrane handling rules.
5. µ is a membrane structure consisting of m membranes, with the membranes and the regions labeled in a one-to-one way with
elements of a given set of labels.
6. wi, 1 ≤ i ≤ m, are strings representing multisets over V associated with the regions 1,2, ... ,m of µ.
7. Ri , 1 ≤ i ≤ m, are finite sets of evolution rules over the alphabet set V associated with the regions 1,2, ... , m of µ. These object
evolution rules have the form u → v.
8. Hi , 1 ≤ i ≤ m, are finite sets of membrane handling rules over the set H associated with the regions 1,2, ... , m of µ.
9. i0 is a number between 1 and m and defines the initial configuration of each region of the P system.
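For concreteness, the construct Π can be transcribed into a data structure; the following Python sketch mirrors items 1–9 of the definition (the class name, field names, and consistency checks are our own):

```python
from dataclasses import dataclass

@dataclass
class GenericPSystem:
    """Π = (V, T, C, H, µ, w1..wm, (R1..Rm), (H1..Hm), i0), as in the definition."""
    V: frozenset          # 1. alphabet of objects
    T: frozenset          # 2. output alphabet, T ⊆ V
    C: frozenset          # 3. catalysts, C ⊆ V with C ∩ T = ∅
    H: frozenset          # 4. membrane handling rule names
    mu: str               # 5. membrane structure as labelled matching parentheses
    w: tuple              # 6. w[i]: initial multiset (a string) of region i+1
    R: tuple              # 7. R[i]: evolution rules of region i+1, as (u, v) pairs
    Hi: tuple             # 8. Hi[i]: membrane handling rules of region i+1
    i0: int               # 9. distinguished region, 1 <= i0 <= m

    def __post_init__(self):
        assert self.T <= self.V and self.C <= self.V
        assert not (self.C & self.T), "C and T must be disjoint"
        assert 1 <= self.i0 <= len(self.w)

# A degree-2 instance with one evolution rule c -> a in the skin region:
pi = GenericPSystem(V=frozenset("abc"), T=frozenset("a"), C=frozenset("c"),
                    H=frozenset({"pino", "exo", "mate", "drip"}),
                    mu="[1[2]2]1", w=("cab", ""), R=((("c", "a"),), ()),
                    Hi=((), ()), i0=1)
assert pi.C & pi.T == frozenset()
```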
10. advantages
∙ Inherent compartmentalization, easy extensibility, and a direct,
intuitive appearance for biologists.
∙ Expression of models and phenomena related to
neurodegenerative diseases and malfunctions.
∙ Probability theory and stochasticity (many biological functions
are of a stochastic nature).
∙ P systems: formal tools with enhanced power and efficiency
=⇒ could shed light on the problem of modeling complex
biological processes.
11. computing in a quantum environment
∙ Quantum computing ⇒ Buzzword
∙ Moore’s Law is reaching its physical limits.
∙ New computing paradigms?
∙ Redesign and revisit well-studied models and structures from
classical computation.
12. consequences of moore’s law
∙ Continuously decreasing size of the computing circuits.
∙ Technological and physical limitations (limits of lithography in
chip design).
∙ New technologies to overcome these barriers, with Quantum
Computation being a possible candidate.
∙ Ability of these systems to operate at a microscopic level.
13. basics of quantum computing
∙ QC considers the notion of computing as a natural, physical
process.
∙ It must obey the postulates of quantum mechanics.
∙ Bit ⇒ Qubit.
∙ It was initially discussed in the works of Richard Feynman in the
early ’80s.
14. dirac symbolism bra-ket notation
∙ State 0 is represented as ket |0⟩ and state 1 as ket |1⟩.
∙ Every ket corresponds to a vector in a Hilbert space.
∙ A qubit is in state |ψ⟩ described by:
|ψ⟩ = c0 |0⟩ + c1 |1⟩ (1)
∙ c0 and c1 are complex numbers for which |c0|² + |c1|² = 1.
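Equation (1) and the normalisation condition are easy to check numerically; a minimal sketch with NumPy (the particular amplitudes c0, c1 are an arbitrary example choice):

```python
import numpy as np

# The computational basis kets |0> and |1> as vectors in a 2-dimensional Hilbert space.
ket0 = np.array([1.0, 0.0], dtype=complex)
ket1 = np.array([0.0, 1.0], dtype=complex)

# |psi> = c0|0> + c1|1>, equation (1); here an equal-weight superposition.
c0, c1 = 1 / np.sqrt(2), 1j / np.sqrt(2)
psi = c0 * ket0 + c1 * ket1

# The amplitudes satisfy |c0|^2 + |c1|^2 = 1, so <psi|psi> = 1.
assert np.isclose(abs(c0) ** 2 + abs(c1) ** 2, 1.0)
assert np.isclose(np.vdot(psi, psi).real, 1.0)
```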
15. terminology needed for clarification
∙ Σ ⇒ the alphabet
∙ Σ∗ ⇒ the set of all finite strings over Σ
∙ If U is an n × n square matrix, Ū is its conjugate and U† its conjugate transpose.
∙ Cn×n denotes the set of all n × n complex matrices.
∙ Hn is an n-dimensional Hilbert space.
16. quantum computation states and formalism
∙ The evolution of a quantum system is described by unitary
transformations.
∙ The states of an n-level quantum system are self-adjoint
positive mappings of Hn with unit trace.
∙ An observable of a quantum system is a self-adjoint mapping
Hn → Hn.
∙ Each state qi ∈ Q, with |Q| = n, can be represented by a vector
ei = (0, . . . , 1, . . . , 0), with the 1 in the i-th position.
17. quantum computation applying matrices, observables, and projection
∙ Each of the states is a superposition of the form ∑n i=1 ciei, where
∙ n is the number of states,
∙ ci ∈ C are the coefficients, with |c1|² + |c2|² + · · · + |cn|² = 1,
∙ ei denotes the (pure) basis state corresponding to i.
∙ Each symbol σi ∈ Σ corresponds to a unitary matrix/operator Uσi, and each observable O to a Hermitian matrix O.
∙ The possible outcomes of a measurement are the eigenvalues
of the observable.
∙ Transition from one state to another is achieved through the
application of a unitary operator Uσi.
∙ The probability of obtaining result i is ∥πPi∥², where π is the
current state (or a superposition) and Pi is the projection matrix
of the measured basis state.
∙ The state after the measurement collapses to πPi/∥πPi∥.
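The measurement step above can be reproduced numerically: the probability of an outcome equals the squared norm ∥πPi∥² (Born's rule), and the post-measurement state is the renormalised projection. A sketch with NumPy in the slides' row-vector convention (the chosen state π and projector P0 are our own example):

```python
import numpy as np

# Row-vector convention, as on the slide: pi is the current state, pi @ P projects it.
pi_state = np.array([1.0, 1.0j]) / np.sqrt(2)      # an example superposition

# Projection matrix of the measured basis state |0>.
P0 = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)

projected = pi_state @ P0
prob = np.linalg.norm(projected) ** 2              # probability of the outcome: ||pi P0||^2
post = projected / np.linalg.norm(projected)       # collapse: pi P0 / ||pi P0||

assert np.isclose(prob, 0.5)
assert np.allclose(post, [1.0, 0.0])
```

An equal-weight superposition yields the |0⟩ outcome with probability 1/2, and the state collapses exactly onto |0⟩, as the assertions check.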
19. similar approaches
∙ Mainly by Leporati et al.
∙ Inspired by classical energy-based P systems
∙ Two models: one based on strictly unitary rules and one on
non-unitary operations.
∙ Objects are represented by qudits, while multisets are
compositions of such individual systems.
∙ Energy units, associated with the objects, are incorporated in the
system in the form of actual quanta of energy.
∙ Objects can change their state but can never cross membranes to
move to another region.
∙ Interactions happen through the modification of energy of the
oscillators in each membrane.
20. our key ideas
∙ No use of energy-based rules, oscillators, and non-unitary rules.
∙ We prefer more conventional quantum computing techniques.
∙ Our rules are strictly unitary.
∙ We avoid the problems associated with the notion of
“transferring” systems/objects, which is inherent in similar
works.
∙ by providing registers with set “depths” that can easily be
manipulated with standard unitary operators.
21. defining the cascading p systems
Definition
A cascading P system is a tuple
Π = (Γ, µ, wm, Rm),
where
1. Γ is an alphabet; its elements are called objects.
2. µ is a membrane structure, in which membranes are nested in
hierarchically arranged layers, in such a way that inputs and outputs
form a pipeline through the layers. Each membrane consists of two
Hilbert spaces, an input and an output one. The outermost membrane
contains the result of a computation.
3. Each wm describes the initial configuration of the state of membrane
m ∈ µ. It is composed of |Γ| qubits.
4. Each element of Rm is a unitary operator acting on m ∈ µ.
22. states and computation
∙ State:
Each membrane in layer 0 has its own input space and a shared
output space. For each layer k > 0, the membranes of layer k
have as inputs the output space of layer k − 1, and share an
output space, which in turn is the input of k + 1.
∙ Computation:
For each membrane, we apply a set of rules. We, also, initialise
the i-th membrane’s input region with instances of objects as
defined by each wi. Computation starts from the innermost
layer (layer 0), applying the composition of rules Rm for all the
membranes m ∈ layer 0 and continues with layer 1, layer 2 etc.
The output space of the outermost layer contains the result of
the computation.
23. an example
[Figure: three nested membranes M3 ⊃ M2 ⊃ M1; membrane M1 contains the object a, and membrane M2 contains the objects b, a.]
∙ For each membrane, the input and output state kets
|ab⟩ = |a⟩ ⊗ |b⟩ are composed of two qubits, whose values
represent the “degrees of existence” for each letter. For
example, M1’s initial state is |10⟩ = |1⟩ ⊗ |0⟩.
24. the rules
∙ Membrane 1 rule: R1 = |10⟩M1in ⊗ |00⟩M1out ↔ |00⟩M1in ⊗ |10⟩M1out
∙ Membrane 2 rule: R2 = |11⟩M2in ⊗ |10⟩M2out ↔ |00⟩M2in ⊗ |11⟩M2out
∙ Membrane 3 rule: R3 = |11⟩M3in ⊗ |00⟩M3out ↔ |00⟩M3in ⊗ |11⟩M3out
The actual rules would work on the whole space
M1in ⊗ M2in ⊗ M1out/M2out/M3in ⊗ M3out.
If we apply the sequence R3 · R2 · R1 to the initial state:
|10⟩M1in ⊗ |11⟩M2in ⊗ |00⟩M1M2out/M3in ⊗ |00⟩M3out
we get the final state:
|00⟩M1in ⊗ |00⟩M2in ⊗ |00⟩M1M2out/M3in ⊗ |11⟩M3out
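Under the assumption that each rule is extended as the identity outside the two registers it names, the whole example can be simulated with explicit permutation matrices on the 8-qubit space. A sketch with NumPy (the register ordering and all helper names are our own):

```python
import numpy as np
from itertools import product

# Qubit register layout (our own convention): M1in (2) | M2in (2) |
# shared M1out/M2out/M3in (2) | M3out (2), i.e. an 8-qubit, 256-dimensional space.
REGS = ("m1in", "m2in", "shared", "m3out")

def index(m1in, m2in, shared, m3out):
    """Basis-state index of the bit-string m1in m2in shared m3out."""
    return int(m1in + m2in + shared + m3out, 2)

def rule(reg_a, reg_b, src, dst):
    """Unitary swapping the joint pattern src <-> dst on reg_a and reg_b,
    extended as the identity on every other basis state."""
    U = np.eye(256)
    free = [r for r in REGS if r not in (reg_a, reg_b)]
    for vals in product(("00", "01", "10", "11"), repeat=len(free)):
        s = dict(zip(free, vals)); t = dict(s)
        s[reg_a], s[reg_b] = src
        t[reg_a], t[reg_b] = dst
        i, j = index(**s), index(**t)
        U[i, i] = U[j, j] = 0.0
        U[i, j] = U[j, i] = 1.0
    return U

R1 = rule("m1in", "shared", src=("10", "00"), dst=("00", "10"))
R2 = rule("m2in", "shared", src=("11", "10"), dst=("00", "11"))
R3 = rule("shared", "m3out", src=("11", "00"), dst=("00", "11"))

state = np.zeros(256)
state[index("10", "11", "00", "00")] = 1.0          # the initial state of the example
final = R3 @ (R2 @ (R1 @ state))
assert final[index("00", "00", "00", "11")] == 1.0  # the final state of the example
assert all(np.allclose(U @ U, np.eye(256)) for U in (R1, R2, R3))  # strictly unitary
```

Each rule is a permutation matrix, hence unitary, and the sequence R3 · R2 · R1 carries the initial configuration to the final one exactly as on the slide.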
25. simulating classical automata i
∙ Given a depth k ∈ N, we are able to build a P system that
simulates an automaton running on words of length l = k.
∙ Construction:
We build a cascading P system whose alphabet consists of the
alphabet of the automaton we are simulating, plus all its states
(represented as tokens/letters).
Consider k nested membranes, with input/output spaces
coupled as before. Each space consists of two components: a
letter qudit and a state qudit so that it looks something like this:
|letter⟩ ⊗ |state⟩
Starting from the inner membrane, we initialise the letter kets to
the value of the corresponding letter of the input word such
that the k-th membrane contains the k-th letter.
26. simulating classical automata ii
∙ All state kets are initialised to |q0⟩.
∙ Then to each membrane is assigned the sum of n = |Σ| rules of
the form:
|letter⟩ ⟨letter| ⊗ U,
where |Σ| is the size of the automaton’s alphabet and U
changes the output state’s ket to |newState⟩ based on the
automaton’s transition function:
δ(letter, currentState) = newState
27. simulation example i
Consider the following classical automaton:
∙ Σ = {a, b}
∙ Q = {s0, s1}
∙ δ(a, s0) = s1, δ(b, s0) = s0, δ(a, s1) = s1, δ(b, s1) = s0
Let us simulate a run at depth k = 2 for the word “ab”.
Our membrane system’s initial global state is:
|a⟩m1in ⊗ |s0⟩m1in ⊗ |b⟩m1out/m2in ⊗ |s0⟩m1out/m2in ⊗ |a⟩m2out ⊗ |s0⟩m2out
28. simulation example ii
∙ The first membrane’s rule is:
|a⟩ ⟨a| ⊗ |s0⟩ ⟨s0| ⊗ I ⊗ flip ⊗ I +
|b⟩ ⟨b| ⊗ |s0⟩ ⟨s0| ⊗ I ⊗ I ⊗ I +
|a⟩ ⟨a| ⊗ |s1⟩ ⟨s1| ⊗ I ⊗ flip ⊗ I +
|b⟩ ⟨b| ⊗ |s1⟩ ⟨s1| ⊗ I ⊗ I ⊗ I
∙ While the second one’s is:
I ⊗ I ⊗ |a⟩ ⟨a| ⊗ |s0⟩ ⟨s0| ⊗ flip +
I ⊗ I ⊗ |b⟩ ⟨b| ⊗ |s0⟩ ⟨s0| ⊗ I +
I ⊗ I ⊗ |a⟩ ⟨a| ⊗ |s1⟩ ⟨s1| ⊗ flip +
I ⊗ I ⊗ |b⟩ ⟨b| ⊗ |s1⟩ ⟨s1| ⊗ I
In the above expression, I denotes the identity operator and flip
is the operator that “flips” a qubit’s value.
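The run can be checked numerically. The sketch below is a reconstruction under our own conventions: letters and states are encoded as a → 0, b → 1, s0 → 0, s1 → 1, all six letter/state slots of the register are written out explicitly (the slides abbreviate some identity factors), and the output state ket is flipped exactly when δ yields s1, since every output state ket starts at |s0⟩:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])            # the "flip" operator
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]    # projectors |0><0|, |1><1|
# delta: (a,s0)->s1, (b,s0)->s0, (a,s1)->s1, (b,s1)->s0, encoded as 0/1
delta = {(0, 0): 1, (1, 0): 0, (0, 1): 1, (1, 1): 0}

def kron(*ops):
    out = np.eye(1)
    for op in ops:
        out = np.kron(out, op)
    return out

# Register: |letter,state>_m1in (x) |letter,state>_shared (x) |letter,state>_m2out
def membrane_rule(in_slot, out_slot):
    """Sum over (letter, state) of input projectors, with a flip on the output
    state ket exactly when the new state is s1 (the output starts at |s0>)."""
    R = np.zeros((64, 64))
    for ell in (0, 1):
        for s in (0, 1):
            factors = [I2] * 6
            factors[2 * in_slot] = P[ell]        # project the input letter ket
            factors[2 * in_slot + 1] = P[s]      # project the input state ket
            factors[2 * out_slot + 1] = X if delta[(ell, s)] == 1 else I2
            R += kron(*factors)
    return R

R_m1 = membrane_rule(in_slot=0, out_slot=1)   # m1in -> shared space
R_m2 = membrane_rule(in_slot=1, out_slot=2)   # shared space -> m2out

def ket(*vals):
    """Computational basis ket for the six slots."""
    return kron(*[np.eye(2)[v].reshape(2, 1) for v in vals]).ravel()

psi = ket(0, 0, 1, 0, 0, 0)        # word "ab": |a,s0> (x) |b,s0> (x) |a,s0>
psi = R_m2 @ R_m1 @ psi
final_state = np.argmax(psi) % 2   # last slot holds the m2out state ket
print("s1" if final_state else "s0")   # -> s0, matching s0 -a-> s1 -b-> s0
```

Each `membrane_rule` is unitary by the block-diagonal argument: every pair of input projectors is tensored with a unitary (flip or identity) on the rest of the register.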
30. conclusions
∙ An effort to mix variants of P systems with quantum evolution
rules.
∙ Membrane systems that operate under unitary transformations.
∙ A novel methodology regarding the construction of the quantum
rules.
∙ Unlike related works, our approach involves the use of strictly
unitary rules.
∙ Consistency with the underlying quantum physics.
∙ Potential application of our proposed variants in other
disciplines.
∙ The description of actual algorithms based on these computation
machines.
∙ Connection with game-theoretic aspects of computing.
∙ Relation to quantum game theory.
∙ Implementation of similar approaches, in order to model and
describe actual complex biological systems.
31. key references
Calude, C.
Unconventional computing: A brief subjective history.
Tech. rep., Department of Computer Science, The University of Auckland, New Zealand, 2015.
Feynman, R. P.
Simulating physics with computers.
International Journal of Theoretical Physics 21, 6 (1982), 467–488.
Giannakis, K., and Andronikos, T.
Mitochondrial fusion through membrane automata.
In GeNeDis 2014, P. Vlamos and A. Alexiou, Eds., vol. 820 of Advances in Experimental
Medicine and Biology. Springer International Publishing, 2015, pp. 163–172.
Leporati, A.
(UREM) P systems with a quantum-like behavior: background, definition, and computational
power.
In International Workshop on Membrane Computing (2007), Springer, pp. 32–53.
Leporati, A., Mauri, G., and Zandron, C.
Quantum sequential P systems with unit rules and energy assigned to membranes.
In International Workshop on Membrane Computing (2005), Springer, pp. 310–325.
Păun, G.
Computing with membranes: Attacking NP-complete problems.
In Unconventional models of Computation, UMC’2K. Springer, 2001, pp. 94–115.