We explore how a classical neural network can be partially quantized to create a hybrid quantum-classical neural network that seeks to classify images of two types of hand-drawn digits (0 or 1) from the MNIST dataset.
This document demonstrates various quantum neural network architectures implemented using the Qiskit Machine Learning library. It shows how to construct OpflowQNN, TwoLayerQNN, CircuitQNN models and perform forward and backward passes on random inputs. It also explores using different backends like statevector simulator and qasm simulator, and different outputs like samples, probabilities or parity functions.
Deep learning and neural networks (using simple mathematics), by Amine Bendahmane
The document provides an overview of machine learning and deep learning concepts through a series of diagrams and explanations. It begins by introducing concepts like regression, classification, and clustering. It then discusses supervised vs unsupervised learning before explaining neural networks and components like the perceptron, multi-layer perceptrons, and convolutional neural networks. It notes how neural networks learn representations and separate data through hidden layers.
Introduction to Bayesian modelling and inference with Pyro for meetup group. Part of the presentation is a hands on, with some examples available here: https://github.com/ahmadsalim/2019-meetup-pyro-intro
Optimal Route Planning by an Autonomous Vehicle in a Dynamic Environment, by ISSEL
Autonomous driving is a technology that has been developing and evolving rapidly in recent years. Especially in the industrial sector, many companies want to be first to establish themselves in the market and build the "ideal" autonomous vehicle. Improving public safety, reducing travel times, and keeping the traffic system running smoothly all call for the automation of driving. In an ideal scenario, people would travel without having to pay attention to controlling the vehicle, car accidents would tend toward zero, and traffic lights and other kinds of road signage would no longer be necessary, since cars would exchange information with each other over a communication network. Developing such a technology is quite complex, however, as it requires handling many random and unpredictable conditions. Achieving such a reality requires very good perception of the space around the autonomous vehicle, adaptation to the environment, and immediate response to environmental changes. The vehicle must navigate the road network safely and respond appropriately to both static and dynamic obstacles. In addition, it must compute and evaluate decision scenarios and choose the appropriate response for the conditions. Autonomous vehicles must therefore be equipped with specialized sensors that recognize and map the vehicle's surroundings, with complex control and decision-making systems, and with suitable behavior-prediction systems. This thesis focuses on the development of such an autonomous driving system.
The goal of the system is to guide the vehicle from a starting point to a final destination in a city with vehicles and pedestrians, while computing the optimal and shortest route, safely and in compliance with traffic rules. The system was developed as an ego-only system, and a modular system architecture was chosen. It was implemented in the Python programming language on top of the ROS middleware. The chosen simulator is Carla, which provides the cars, the urban environment, and the physics for producing the results. The developed system consists of the subsystems for 1) global path construction, 2) perception, 3) behavior prediction, 4) local path construction, 5) behavior selection, and 6) vehicle kinematic control. These subsystems communicate with each other appropriately to achieve autonomous driving. The global path construction subsystem builds a directed graph of the map and uses the A* algorithm to search for the optimal route. (continued in full text)
Do you really think that a neural network is a black box? I believe a neuron inside the human brain may be very complex, but a neuron in a neural network is certainly not that complex.
In this presentation, we are going to discuss how to implement a neural network from scratch in Python. This means we are NOT going to use deep learning libraries like TensorFlow, PyTorch, Keras, etc.
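As a taste of the from-scratch approach, here is a minimal NumPy-only sketch of a single neuron trained by gradient descent; the AND-gate task and the learning rate are illustrative assumptions, not the presentation's own example:

```python
import numpy as np

# a single neuron learning the AND function, trained from scratch
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 0.0, 0.0, 1.0])

rng = np.random.default_rng(0)
w, b = rng.normal(size=2), 0.0
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    out = sigmoid(X @ w + b)          # forward pass
    grad = out - y                    # d(cross-entropy)/d(logit)
    w -= 0.5 * X.T @ grad / len(y)    # backward pass: chain rule on weights
    b -= 0.5 * grad.mean()            # and on the bias

pred = (sigmoid(X @ w + b) > 0.5).astype(int)
```

After training, `pred` reproduces the AND truth table, with no deep learning library involved.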
The document summarizes various sampling methods used in machine learning including Monte Carlo methods, Markov chain Monte Carlo, Gibbs sampling, slice sampling, and Hamiltonian Monte Carlo. The key points are:
1. Sampling methods like Monte Carlo and Markov chain Monte Carlo allow approximating expectations or probabilities that cannot be directly computed.
2. Markov chain Monte Carlo techniques like Metropolis-Hastings and Gibbs sampling generate dependent samples that converge to the desired distribution through Markov chain transitions.
3. Gibbs sampling is a special case of Metropolis-Hastings that directly samples from the conditional distributions to satisfy detailed balance.
4. Slice sampling restricts sampling to a region defined by a "slice" to efficiently explore the target distribution.
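The Metropolis-Hastings idea from points 2 and 3 can be sketched in a few lines of NumPy; the standard-normal target below is chosen only for illustration:

```python
import numpy as np

def metropolis_hastings(log_p, x0, n_samples, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + rng.normal(scale=step)   # symmetric random-walk proposal
        # accept with probability min(1, p(proposal) / p(x))
        if np.log(rng.uniform()) < log_p(proposal) - log_p(x):
            x = proposal
        samples.append(x)
    return np.array(samples)

# illustrative target: standard normal, via its unnormalized log-density
samples = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0, n_samples=20000)
```

The dependent samples nevertheless converge to the target: their empirical mean and standard deviation approach 0 and 1.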
References:
"Gaussian Process", lectured by Professor Il-Chul Moon
"Gaussian Processes", Cornell CS4780, lectured by Professor Kilian Weinberger
Bayesian Deep Learning by Sungjoon Choi
This document provides an introduction to blind source separation and non-negative matrix factorization. It describes blind source separation as a method to estimate original signals from observed mixed signals. Non-negative matrix factorization is introduced as a constraint-based approach to solving blind source separation using non-negativity. The alternating least squares algorithm is described for solving the non-negative matrix factorization problem. Experiments applying these methods to artificial and real image data are presented and discussed.
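The alternating least squares idea can be sketched in a few lines of NumPy; the clip-to-non-negative step and the synthetic rank-3 "mixed signals" below are illustrative assumptions, not the document's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic mixed observations: a non-negative matrix of exact rank 3
V = rng.random((20, 3)) @ rng.random((3, 10))

k = 3
W = rng.random((20, k))
H = rng.random((k, 10))
for _ in range(200):
    # alternating least squares; non-negativity enforced by clipping
    H = np.clip(np.linalg.lstsq(W, V, rcond=None)[0], 1e-9, None)
    W = np.clip(np.linalg.lstsq(H.T, V.T, rcond=None)[0].T, 1e-9, None)

rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Each half-step solves an ordinary least-squares problem with the other factor held fixed, so the reconstruction error decreases until `W @ H` closely approximates `V`.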
Yoav Goldberg: Word Embeddings What, How and Whither, by MLReview
This document discusses word embeddings and how they work. It begins by explaining how the author became an expert in distributional semantics without realizing it. It then discusses how word2vec works, specifically skip-gram models with negative sampling. The key points are that word2vec is learning word and context vectors such that related words and contexts have similar vectors, and that this is implicitly factorizing the word-context pointwise mutual information matrix. Later sections discuss how hyperparameters are important to word2vec's success and provide critiques of common evaluation tasks like word analogies that don't capture true semantic similarity. The overall message is that word embeddings are fundamentally doing the same thing as older distributional semantic models through matrix factorization.
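The claimed equivalence can be illustrated directly: build a (toy, hypothetical) word-context count matrix, form its pointwise mutual information matrix, and factorize it with SVD to obtain embedding vectors:

```python
import numpy as np

# toy word-context co-occurrence counts (2 words x 2 contexts)
counts = np.array([[4.0, 1.0],
                   [1.0, 4.0]])
p_wc = counts / counts.sum()
p_w = p_wc.sum(axis=1, keepdims=True)
p_c = p_wc.sum(axis=0, keepdims=True)
pmi = np.log(p_wc / (p_w * p_c))      # pointwise mutual information

# SVD factorization of the PMI matrix yields embedding vectors,
# mirroring what skip-gram with negative sampling does implicitly
U, S, Vt = np.linalg.svd(pmi)
word_vecs = U * np.sqrt(S)
```

Word-context pairs that co-occur more than chance get positive PMI, rare pairs negative; the SVD factors play the role of the word and context vectors.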
[PR12] PR-050: Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting, by Taegyun Jeon
PR-050: Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting
Original Slide from http://home.cse.ust.hk/~xshiab/data/valse-20160323.pptx
Youtube: https://youtu.be/3cFfCM4CXws
The document describes multilayer neural networks and their use for classification problems. It discusses how neural networks can handle continuous-valued inputs and outputs unlike decision trees. Neural networks are inherently parallel and can be sped up through parallelization techniques. The document then provides details on the basic components of neural networks, including neurons, weights, biases, and activation functions. It also describes common network architectures like feedforward networks and discusses backpropagation for training networks.
Anomaly Detection Using Isolation Forests, by Turi, Inc.
This document discusses using anomaly detection techniques to identify minority classes without labels. It summarizes using anomaly detection on an unlabeled cancer biopsy dataset to identify malignant biopsies as anomalies without knowing their labels. This "minority report" approach is well-suited for large unlabeled datasets where an adversarial minority class is expected, like credit card fraud or network intrusions, as it can identify outliers without predefined labels of what to look for.
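A minimal sketch of this label-free approach using scikit-learn's IsolationForest; the synthetic two-cluster data below is an illustrative stand-in for the biopsy dataset:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# synthetic stand-in: a dense majority cluster plus a small far-away minority
rng = np.random.default_rng(0)
majority = rng.normal(0, 1, size=(200, 2))
minority = rng.normal(6, 0.5, size=(5, 2))
X = np.vstack([majority, minority])

# fit without any labels; the forest isolates outliers in few splits
clf = IsolationForest(random_state=0).fit(X)
labels = clf.predict(X)          # +1 = inlier, -1 = anomaly
```

The minority points are flagged as anomalies even though no label ever told the model what to look for.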
- The document summarizes key concepts from chapters 1.1 to 1.6 of the book "Pattern Recognition and Machine Learning" by Christopher M. Bishop.
- It introduces polynomial curve fitting, Bayesian curve fitting, decision theory, and information theory concepts such as entropy, Kullback-Leibler divergence, and their applications in machine learning.
- Key algorithms covered include linear and polynomial regression, maximum likelihood estimation, and using entropy and KL divergence to model probability distributions.
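The entropy and KL divergence definitions mentioned above can be checked numerically; this NumPy sketch uses base-2 logarithms by assumption:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # 0 * log 0 is taken as 0
    return -np.sum(p * np.log2(p))

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) in bits."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))
```

For example, a fair coin has 1 bit of entropy, and the KL divergence of any distribution from itself is zero.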
Mrbml004: Introduction to Information Theory for Machine Learning, by Jaouad Dabounou
The fourth session of the machine learning book-reading series.
Video: https://youtu.be/Ab5RvD7ieFg
It gives a brief introduction to information theory (entropy, KL divergence, mutual information, ...) and its application to loss functions, notably the cross-entropy.
A reading of three books, as part of "Monday reading books on machine learning".
The first book, which serves as the guiding thread for the whole series:
Christopher Bishop; Pattern Recognition and Machine Learning, Springer-Verlag New York Inc, 2006
Parts of two other books are used, mainly:
Ian Goodfellow, Yoshua Bengio, Aaron Courville; Deep Learning, The MIT Press, 2016
and:
Ovidiu Calin; Deep Learning Architectures: A Mathematical Approach, Springer, 2020
Optimization techniques: Ant Colony Optimization: Bee Colony Optimization: Traveling Salesman Problem, by Soumen Santra
Optimization techniques: Ant Colony Optimization: Bee Colony Optimization: Traveling Salesman Problem
Features of Ant Colony
Features of Ant
Features of other Optimization Techniques
Algorithm
Flow Charts
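A compact ant colony sketch for a tiny TSP instance (6 cities on a circle, with illustrative pheromone and heuristic exponents; not the document's exact parameter choices) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
# 6 cities on a circle: the optimal tour visits them in angular order
angles = np.linspace(0, 2 * np.pi, 6, endpoint=False)
pts = np.c_[np.cos(angles), np.sin(angles)]
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
n = len(pts)

tau = np.ones((n, n))           # pheromone levels
eta = 1 / (D + np.eye(n))       # heuristic desirability: prefer short edges
best_len, best_tour = np.inf, None

for _ in range(50):
    for _ant in range(20):
        tour, unvisited = [0], set(range(1, n))
        while unvisited:
            i, cand = tour[-1], list(unvisited)
            # transition probability ~ pheromone^alpha * heuristic^beta
            w = (tau[i, cand] ** 1.0) * (eta[i, cand] ** 2.0)
            nxt = int(rng.choice(cand, p=w / w.sum()))
            tour.append(nxt)
            unvisited.remove(nxt)
        length = sum(D[tour[k], tour[(k + 1) % n]] for k in range(n))
        if length < best_len:
            best_len, best_tour = length, tour
    # evaporate, then deposit pheromone along the best tour so far
    tau *= 0.9
    for k in range(n):
        i, j = best_tour[k], best_tour[(k + 1) % n]
        tau[i, j] += 1 / best_len
        tau[j, i] += 1 / best_len

optimal = n * np.linalg.norm(pts[0] - pts[1])
```

On this tiny instance the colony reliably recovers the optimal circular tour; the pheromone update is what distinguishes it from blind random search.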
This document discusses random forest regression. It imports necessary libraries in Python like pandas, numpy, and matplotlib. It then loads a dataset, splits it into features (X) and target (y). A random forest regressor is created and fit to the data. The model is used to make a prediction and a plot is generated showing the regression line against the true values. Similar steps are performed in R, loading a dataset, creating a random forest regressor, plotting the results and making a prediction.
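A minimal Python version of that workflow, using scikit-learn's RandomForestRegressor on a synthetic sine dataset (an illustrative stand-in for the document's dataset):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# synthetic stand-in dataset: noiseless sine over [0, 10]
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))   # features
y = np.sin(X).ravel()                   # target

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)

# predict at a single point, as in the document's workflow
pred = model.predict(np.array([[5.0]]))
```

Plotting `model.predict` over a fine grid against the true values would reproduce the step-like regression curve the document describes.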
no U-turn sampler, a discussion of Hoffman & Gelman's NUTS algorithm, by Christian Robert
The document describes the No-U-Turn Sampler (NUTS), an extension of Hamiltonian Monte Carlo (HMC) that aims to avoid the random walk behavior and poor mixing that can occur when the trajectory length L is not set appropriately. NUTS augments the model with a slice variable and uses a deterministic procedure to select a set of candidate states C based on the instantaneous distance gain, avoiding the need to manually tune L. It builds up a set of possible states B by doubling a binary tree and checking the distance criterion on subtrees, then samples from the uniform distribution over C to generate proposals. This allows NUTS to automatically determine an appropriate trajectory length and avoid issues like periodicity that can plague HMC when L is fixed.
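The leapfrog integrator that both HMC and NUTS build on can be sketched in NumPy; NUTS's contribution is deciding how many of these steps to take via tree doubling, which this fragment does not show:

```python
import numpy as np

def leapfrog(grad_log_p, x, p, step, n_steps):
    """Simulate Hamiltonian dynamics: half kick, alternating drifts
    and kicks, final half kick. Nearly conserves the Hamiltonian."""
    x, p = x.copy(), p.copy()
    p += 0.5 * step * grad_log_p(x)
    for _ in range(n_steps - 1):
        x += step * p
        p += step * grad_log_p(x)
    x += step * p
    p += 0.5 * step * grad_log_p(x)
    return x, p

# standard normal target: log p(x) = -x^2/2, so grad log p(x) = -x
grad = lambda x: -x
x0, p0 = np.array([1.0]), np.array([1.0])
x1, p1 = leapfrog(grad, x0, p0, step=0.1, n_steps=10)

# Hamiltonian (potential + kinetic energy) before and after
H0 = float(0.5 * (x0**2 + p0**2))
H1 = float(0.5 * (x1**2 + p1**2))
```

The near-conservation of H is what gives HMC-family samplers their high acceptance rates over long trajectories.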
The document describes the AlexNet neural network architecture and its application to classifying images from the Fashion-MNIST dataset. It constructs an AlexNet model, loads and preprocesses the Fashion-MNIST data, and trains the model on this dataset for 5 epochs. Key aspects covered include the convolutional and pooling layers in AlexNet, reading and transforming the Fashion-MNIST data, calculating training and test accuracy, and observing slower progress during training compared to LeNet due to the larger image size.
This document provides a demonstration of CLIMADA, a platform for probabilistic climate risk quantification and adaptation economics. It summarizes the key steps to generate hazard data from tropical cyclone tracks, create exposure data for Bangladesh, define vulnerability through impact functions, and calculate risk metrics like expected annual damage. The demonstration shows how CLIMADA can be used to model current and potential future climate risks.
Pydata DC 2018 (Skorch - A Union of Scikit-learn and PyTorch), by Thomas Fan
This document summarizes a presentation about Skorch, a library that provides a Scikit-Learn compatible interface for neural network models in PyTorch. Some key points:
1. Skorch abstracts away PyTorch training loops and reduces boilerplate code with callbacks. It allows using Scikit-Learn APIs like net.fit() and net.predict() on neural networks.
2. Examples show using Skorch for tasks like MNIST classification, image segmentation of cell nuclei, and transfer learning with ResNet. Callbacks like EpochScoring, Freezer, and checkpointing are used.
3. Skorch provides datasets and transformations to load data, customized losses and metrics, and integration with Scikit-Learn pipelines and grid search.
Aiming at complete code coverage by unit tests tends to be cumbersome, especially when external API calls are part of the code base. For these reasons, Python ships with the unittest.mock library, which proves to be a powerful companion for replacing parts of the system under test.
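A minimal sketch of the pattern: the endpoint URL and the fetch_user helper below are hypothetical, and patch swaps the real network call for a canned response:

```python
import json
from unittest.mock import MagicMock, patch
from urllib import request

def fetch_user(user_id):
    # hypothetical production code: calls an external HTTP API
    with request.urlopen(f"https://api.example.com/users/{user_id}") as resp:
        return json.load(resp)["name"]

# build a fake response object and patch out the network call entirely
fake = MagicMock()
fake.__enter__.return_value.read.return_value = '{"name": "Ada"}'
with patch("urllib.request.urlopen", return_value=fake) as mock_urlopen:
    name = fetch_user(42)

# the mock also records how it was called, so the URL can be verified
mock_urlopen.assert_called_once_with("https://api.example.com/users/42")
```

The test exercises the parsing and lookup logic of fetch_user without any network access.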
This document provides an overview of SciPy and demonstrates some of its functionality for interpolation, integration, and optimization. SciPy imports NumPy functions and provides additional tools for domains like signal processing, linear algebra, special functions, and more. Examples show linear, cubic and barycentric interpolation of 1D and 2D functions. Integration tools like quad, dblquad, simps and cumtrapz are applied to functions. However, optimization examples are not shown.
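For instance, the integration and interpolation tools mentioned might be exercised like this:

```python
import numpy as np
from scipy import integrate, interpolate

# definite integral of x^2 over [0, 1] (exact value: 1/3)
val, abserr = integrate.quad(lambda x: x**2, 0, 1)

# cubic interpolation of sin(x) through 10 coarse samples
xs = np.linspace(0, np.pi, 10)
f = interpolate.interp1d(xs, np.sin(xs), kind="cubic")
approx = float(f(1.0))
```

`quad` returns both the value and an error estimate, while the interpolant recovers sin(1.0) to within roughly the fourth power of the sample spacing.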
This document introduces machine learning concepts including supervised and unsupervised learning. It discusses preparing data for machine learning by techniques like one-hot encoding, scaling, principal component analysis (PCA), and bag-of-words representations. Code examples are provided to classify cancer data using k-nearest neighbors, cluster data with k-means, reduce dimensions with PCA, and vectorize text with bag-of-words. Finally, potential machine learning exercises are outlined like predicting user purchases, finding user clusters, and regression problems.
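A condensed sketch of the scaling-plus-k-nearest-neighbors classification step, using scikit-learn's built-in breast cancer dataset as a stand-in for the document's cancer data:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# scale features first: k-NN is distance-based, so feature scaling matters
scaler = StandardScaler().fit(X_train)
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(scaler.transform(X_train), y_train)
acc = knn.score(scaler.transform(X_test), y_test)
```

Fitting the scaler on the training split only, then reusing it on the test split, avoids leaking test statistics into preprocessing.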
Ansible is an open source automation platform, written in Python, that can be used for configuration-management, application deployment, cloud provisioning, ad-hoc task-execution, multinode orchestration and so on. This talk is an introduction to Ansible for beginners, including tips like how to use containers to mimic multiple machines while iteratively automating some tasks or testing.
Building High-Performance Language Implementations With Low Effort, by Stefan Marr
This talk shows how languages can be implemented as self-optimizing interpreters, and how Truffle or RPython go about to just-in-time compile these interpreters to efficient native code.
Programming languages are never perfect, so people start building domain-specific languages to be able to solve their problems more easily. However, custom languages are often slow, or take enormous amounts of effort to be made fast by building custom compilers or virtual machines.
With the notion of self-optimizing interpreters, researchers proposed a way to implement languages easily and generate a JIT compiler from a simple interpreter. We explore the idea and experiment with it on top of RPython (of PyPy fame) with its meta-tracing JIT compiler, as well as Truffle, the JVM framework of Oracle Labs for self-optimizing interpreters.
In this talk, we show how a simple interpreter can reach the same order of magnitude of performance as the highly optimizing JVM for Java. We discuss the implementation on top of RPython as well as on top of Java with Truffle so that you can start right away, independent of whether you prefer the Python or JVM ecosystem.
While our own experiments focus on SOM, a little Smalltalk variant to keep things simple, other people have used this approach to improve peak performance of JRuby, or to build languages such as JavaScript, R, and Python 3.
This document provides an introduction to blind source separation and non-negative matrix factorization. It describes blind source separation as a method to estimate original signals from observed mixed signals. Non-negative matrix factorization is introduced as a constraint-based approach to solving blind source separation using non-negativity. The alternating least squares algorithm is described for solving the non-negative matrix factorization problem. Experiments applying these methods to artificial and real image data are presented and discussed.
Yoav Goldberg: Word Embeddings What, How and WhitherMLReview
This document discusses word embeddings and how they work. It begins by explaining how the author became an expert in distributional semantics without realizing it. It then discusses how word2vec works, specifically skip-gram models with negative sampling. The key points are that word2vec is learning word and context vectors such that related words and contexts have similar vectors, and that this is implicitly factorizing the word-context pointwise mutual information matrix. Later sections discuss how hyperparameters are important to word2vec's success and provide critiques of common evaluation tasks like word analogies that don't capture true semantic similarity. The overall message is that word embeddings are fundamentally doing the same thing as older distributional semantic models through matrix factorization.
[PR12] PR-050: Convolutional LSTM Network: A Machine Learning Approach for Pr...Taegyun Jeon
PR-050: Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting
Original Slide from http://home.cse.ust.hk/~xshiab/data/valse-20160323.pptx
Youtube: https://youtu.be/3cFfCM4CXws
The document describes multilayer neural networks and their use for classification problems. It discusses how neural networks can handle continuous-valued inputs and outputs unlike decision trees. Neural networks are inherently parallel and can be sped up through parallelization techniques. The document then provides details on the basic components of neural networks, including neurons, weights, biases, and activation functions. It also describes common network architectures like feedforward networks and discusses backpropagation for training networks.
Anomaly Detection Using Isolation ForestsTuri, Inc.
This document discusses using anomaly detection techniques to identify minority classes without labels. It summarizes using anomaly detection on an unlabeled cancer biopsy dataset to identify malignant biopsies as anomalies without knowing their labels. This "minority report" approach is well-suited for large unlabeled datasets where an adversarial minority class is expected, like credit card fraud or network intrusions, as it can identify outliers without predefined labels of what to look for.
- The document summarizes key concepts from chapters 1.1 to 1.6 of the book "Pattern Recognition and Machine Learning" by Christopher M. Bishop.
- It introduces polynomial curve fitting, Bayesian curve fitting, decision theory, and information theory concepts such as entropy, Kullback-Leibler divergence, and their applications in machine learning.
- Key algorithms covered include linear and polynomial regression, maximum likelihood estimation, and using entropy and KL divergence to model probability distributions.
Mrbml004 : Introduction to Information Theory for Machine LearningJaouad Dabounou
La quatrième séance de lecture de livres en machine learning.
Vidéo : https://youtu.be/Ab5RvD7ieFg
Elle concernera une brève introduction à la théorie de l'information: Entropy, K-L divergence, mutual Information,... et son application dans la fonction de perte et notamment la cross-entropy.
Lecture de trois livres, dans le cadre de "Monday reading books on machine learning".
Le premier livre, qui constituera le fil conducteur de toute l'action :
Christopher Bishop; Pattern Recognition and Machine Learning, Springer-Verlag New York Inc, 2006
Seront utilisées des parties de deux livres, surtout du livre :
Ian Goodfellow, Yoshua Bengio, Aaron Courville; Deep Learning, The MIT Press, 2016
et du livre :
Ovidiu Calin; Deep Learning Architectures: A Mathematical Approach, Springer, 2020
Optimization techniques: Ant Colony Optimization: Bee Colony Optimization: Tr...Soumen Santra
Optimization techniques: Ant Colony Optimization: Bee Colony Optimization: Traveling Salesman Problem
Features of Ant Colony
Features of Ant
Features of other Optimization Techniques
Algorithm
Flow Charts
This document discusses random forest regression. It imports necessary libraries in Python like pandas, numpy, and matplotlib. It then loads a dataset, splits it into features (X) and target (y). A random forest regressor is created and fit to the data. The model is used to make a prediction and a plot is generated showing the regression line against the true values. Similar steps are performed in R, loading a dataset, creating a random forest regressor, plotting the results and making a prediction.
no U-turn sampler, a discussion of Hoffman & Gelman NUTS algorithmChristian Robert
The document describes the No-U-Turn Sampler (NUTS), an extension of Hamiltonian Monte Carlo (HMC) that aims to avoid the random walk behavior and poor mixing that can occur when the trajectory length L is not set appropriately. NUTS augments the model with a slice variable and uses a deterministic procedure to select a set of candidate states C based on the instantaneous distance gain, avoiding the need to manually tune L. It builds up a set of possible states B by doubling a binary tree and checking the distance criterion on subtrees, then samples from the uniform distribution over C to generate proposals. This allows NUTS to automatically determine an appropriate trajectory length and avoid issues like periodicity that can plague
The document describes the AlexNet neural network architecture and its application to classifying images from the Fashion-MNIST dataset. It constructs an AlexNet model, loads and preprocesses the Fashion-MNIST data, and trains the model on this dataset for 5 epochs. Key aspects covered include the convolutional and pooling layers in AlexNet, reading and transforming the Fashion-MNIST data, calculating training and test accuracy, and observing slower progress during training compared to LeNet due to the larger image size.
This document provides a demonstration of CLIMADA, a platform for probabilistic climate risk quantification and adaptation economics. It summarizes the key steps to generate hazard data from tropical cyclone tracks, create exposure data for Bangladesh, define vulnerability through impact functions, and calculate risk metrics like expected annual damage. The demonstration shows how CLIMADA can be used to model current and potential future climate risks.
Pydata DC 2018 (Skorch - A Union of Scikit-learn and PyTorch)Thomas Fan
This document summarizes a presentation about Skorch, a library that provides a Scikit-Learn compatible interface for neural network models in PyTorch. Some key points:
1. Skorch abstracts away PyTorch training loops and reduces boilerplate code with callbacks. It allows using Scikit-Learn APIs like net.fit() and net.predict() on neural networks.
2. Examples show using Skorch for tasks like MNIST classification, image segmentation of cell nuclei, and transfer learning with ResNet. Callbacks like EpochScoring, Freezer, and checkpointing are used.
3. Skorch provides datasets and transformations to load data, customized losses and metrics, and integration with Scikit-Learn pipelines and grid
Aiming at complete code coverage by unit tests tends to be cumbersome, especially for cases where external API calls a part of the code base. For these reasons, Python comes with the unittest.mock library, appearing to be a powerful companion in replacing parts of the system under test.
This document provides an overview of SciPy and demonstrates some of its functionality for interpolation, integration, and optimization. SciPy imports NumPy functions and provides additional tools for domains like signal processing, linear algebra, special functions, and more. Examples show linear, cubic and barycentric interpolation of 1D and 2D functions. Integration tools like quad, dblquad, simps and cumtrapz are applied to functions. However, optimization examples are not shown.
This document introduces machine learning concepts including supervised and unsupervised learning. It discusses preparing data for machine learning by techniques like one-hot encoding, scaling, principal component analysis (PCA), and bag-of-words representations. Code examples are provided to classify cancer data using k-nearest neighbors, cluster data with k-means, reduce dimensions with PCA, and vectorize text with bag-of-words. Finally, potential machine learning exercises are outlined like predicting user purchases, finding user clusters, and regression problems.
Ansible is an open source automation platform, written in Python, that can be used for configuration-management, application deployment, cloud provisioning, ad-hoc task-execution, multinode orchestration and so on. This talk is an introduction to Ansible for beginners, including tips like how to use containers to mimic multiple machines while iteratively automating some tasks or testing.
Building High-Performance Language Implementations With Low EffortStefan Marr
This talk shows how languages can be implemented as self-optimizing interpreters, and how Truffle or RPython go about to just-in-time compile these interpreters to efficient native code.
Programming languages are never perfect, so people start building domain-specific languages to be able to solve their problems more easily. However, custom languages are often slow, or take enormous amounts of effort to be made fast by building custom compilers or virtual machines.
With the notion of self-optimizing interpreters, researchers proposed a way to implement languages easily and generate a JIT compiler from a simple interpreter. We explore the idea and experiment with it on top of RPython (of PyPy fame) with its meta-tracing JIT compiler, as well as Truffle, the JVM framework of Oracle Labs for self-optimizing interpreters.
In this talk, we show how a simple interpreter can reach the same order of magnitude of performance as the highly optimizing JVM for Java. We discuss the implementation on top of RPython as well as on top of Java with Truffle so that you can start right away, independent of whether you prefer the Python or JVM ecosystem.
While our own experiments focus on SOM, a little Smalltalk variant to keep things simple, other people have used this approach to improve peak performance of JRuby, or build languages such as JavaScript, R, and Python 3.
This sample offers insights into how qubit superposition and entanglement arise, and how these are represented in statevectors, Bloch spheres, and amplitudes when a full statevector representation is not possible.
This document summarizes machine learning frameworks and libraries, neural network structures, and the process of building and training a neural network model for image classification. It discusses TensorFlow and PyTorch frameworks, describes the structure of a convolutional neural network, and provides code to import datasets, define a model, train the model on GPUs, and test the model's accuracy.
This document provides an overview of running an image classification workload using IBM PowerAI and the MNIST dataset. It discusses deep learning concepts like neural networks and training flows. It then demonstrates how to set up TensorFlow on an IBM PowerAI trial server, load the MNIST dataset, build and train a basic neural network model for image classification, and evaluate the trained model's accuracy on test data.
Part II: LLVM Intermediate Representation
Wei-Ren Chen
This document discusses the LLVM intermediate representation (IR) and the lowering process from LLVM IR to machine code. It covers:
1) The characteristics of LLVM IR including SSA form, infinite virtual registers, and phi nodes.
2) How the LLVM IRBuilder class is used to conveniently generate LLVM instructions.
3) The lowering flow from LLVM IR to selection DAGs to machine DAGs to machine instructions and finally machine code.
4) How assembler relaxation can optimize and correct instructions during assembly.
Simple, fast, and scalable torch7 tutorial
Jin-Hwa Kim
A tutorial covering basic Torch7: installation, simple runnable code, tensor manipulation, a survey of key packages, and post-hoc audience Q&A.
In this article you will learn how to use the TensorFlow softmax classifier estimator to classify the MNIST dataset in one script.
This paper also introduces the basic idea of an artificial neural network.
This document provides an overview of machine learning concepts and code examples in Python. It discusses the typical 5 steps of machine learning projects: collaboration, data collection, clustering, classification, and conclusion. Code snippets demonstrate each step, including collecting data with Scrapy, clustering with k-means, classification with support vector machines, and evaluating results with a confusion matrix. Dimensionality reduction techniques like principal component analysis are also covered.
This document discusses support vector machine (SVM) classification using Python and R. It covers importing and splitting a dataset, feature scaling, fitting SVM classifiers to the training set with different kernels, evaluating performance using metrics like confusion matrix, and making predictions on test data. Key steps include feature scaling, fitting linear and radial basis function SVM classifiers, evaluating using k-fold cross validation and classification report.
The document provides sample code examples for key Node.js concepts including prototype-based object-oriented programming, asynchronous programming with callbacks, promises, and async/await, automated testing with Mocha and Chai, and using TypeScript with Node.js. The examples cover topics such as object prototypes, classes, timers, promises, generator functions, generics, and writing automated tests. Useful links are also provided for further learning Node.js, asynchronous programming, testing, and TypeScript.
The document discusses Python testing in the Kytos SDN platform. It covers unit testing, integration testing, and system testing. It also discusses mocking, code coverage, linting tools, documentation testing, and continuous integration used for Python testing in Kytos. Testing helps verify code behavior, find bugs, improve code quality, and encourage fast development. Kytos uses tools like unittest, pytest, coverage, pylint, tox and GitHub for testing.
Similar to Hybrid quantum classical neural networks with pytorch and qiskit (20)
NexGen Solutions for cloud platforms, powered by GenQAI
Vijayananda Mohire
These are our next generation solutions powered by emerging technologies like AI, quantum computing, Blockchain, and quantum cryptography. We have various offerings that can help improve productivity, automate tasks, and improve ease of doing business. We offer cloud-based solutions and a Hub that interfaces with major cloud platforms.
This is our Data Science project work at our startup. It is part of our internal training and focuses on data management for AI, ML, and Generative AI apps.
These are our contributions to the Data Science projects developed at our startup. They are part of partner trainings and of the in-house design, development, and testing of course material and concepts in Data Science and Engineering. The work covers data ingestion, data wrangling, feature engineering, data analysis, data storage, data extraction, querying, and formatting and visualizing data for various dashboards. Data is prepared for accurate ML model predictions and Generative AI apps.
Considering the need and demand for high-quality digital platforms that help clients get the most out of newer technology, we have proposed an IT Hub that allows for rapid onboarding of clients, letting them subscribe only to the modules they need. We have various modules.
This document offers a high-level overview of our IT Hub, whose various modules allow clients to onboard faster and benefit from a large set of vendor products, tools, and IDEs related to AI, Quantum, and Generative AI technologies.
These are my hands-on projects in quantum technologies. They are a few of the key projects I worked on that demonstrate my skills in using various concepts, tools, and IDEs, and in deriving solutions using quantum principles like superposition and entanglement along with quantum circuits.
Azure Quantum Workspace for developing Q# based quantum circuits
Vijayananda Mohire
This document provides steps to develop quantum circuits using Q# on Azure Quantum. It instructs the user to create an Azure subscription, log into the Azure portal, create a Quantum Workspace, and provision storage. It then explains how to define Q# operations, simulate them locally using %simulate, connect to the Azure Quantum workspace with %azure.connect, specify an execution target with %azure.target, submit jobs with %azure.submit, check job status with %azure.status, retrieve outputs with %azure.output, and view all jobs with %azure.jobs. An example quantum random number generation program written in Q# is provided.
This is my journey from 2012 onwards, after graduating from my MS with a major in AI. I have taken various certification courses, trainings, and hands-on labs; a few key ones are from Google and Microsoft.
Agricultural and allied industries play a vital role in the progress of a nation and in sustainable economic growth, and farmers are central to this progress. Their hard work and efforts deserve praise, and they can be offered tools and digital assets that automate repetitive tasks such as back office operations, crop monitoring, and post-harvesting routines that might otherwise divert farmers' attention from their core job.
We, at Bhadale IT, have developed various products and services that can offer effective solutions through our industrial partnerships with digital technology leaders like Intel and Microsoft. We have drafted this solution brief to illustrate our products and service offerings for the agricultural industry. We can tailor highly customized solutions to meet individual project and farmer needs, using technologies like artificial intelligence, machine learning, and data science, along with machinery like drones and geo-spatial datasets. These enable precise farming techniques, improved production, better use of fertilizers, organic farming, and reduced crop loss due to rodents, insects, and regional diseases.
The focus of this solution is for farmers to adopt and migrate to a digital cloud platform, Microsoft Azure, that can boost the quality and quantity of crop production, improve their supply chain, and offer faster and more mature downstream business operations.
These are our cloud offerings based on our partnership with Intel and Microsoft. We offer highly optimized Intel motherboards, memory, and a software stack best suited for the Azure cloud platform, able to handle various service models (IaaS, PaaS, SaaS) and Azure workloads in the public or private cloud.
Explore the fundamentals of GitHub Copilot and its potential to enhance productivity and foster innovation for both individual developers and businesses. Discover how to implement it within your organization and unleash its power for your own projects.
In this learning path, you'll:
Gain a comprehensive understanding of the distinctions between GitHub Copilot for Individuals, GitHub Copilot for Business, and GitHub Copilot X.
Explore various use cases for GitHub Copilot for Business, including real-life examples showcasing how customers have leveraged it to boost their productivity.
Receive step-by-step instructions on enabling GitHub Copilot for Individuals and GitHub Copilot for Business, ensuring a seamless integration into your workflows.
Practical ChatGPT From Use Cases to Prompt Engineering & Ethical Implications
Vijayananda Mohire
This journey provides learners with a thorough exploration of ChatGPT. Starting with an introduction to large language models and their capabilities, the series progresses through practical applications, advanced techniques, industry impacts, and important ethical considerations. Each course aims to equip learners with an in-depth understanding of the model, its functionality, and its wide-ranging applications.
Red Hat Enterprise Linux (RHEL) and Hybrid Cloud Infrastructure: products developed for a multi-cloud hybrid platform, enabling seamless integration and portability of workloads across Red Hat and partner infrastructure, public and private clouds.
Learners will be exposed to the foundations of Red Hat, Red Hat Enterprise Linux (RHEL) portfolio including Hybrid Cloud Infrastructure, how to identify target customers, distinguish Red Hat solutions from the competition, review key use cases, align to the sales conversation framework for positioning the solutions, and much more!
Upon completing this learning path, learners will receive the Red Hat Sales Specialist - Red Hat Enterprise Linux accreditation and be prepared to advance to the Red Hat Sales Specialist - Red Hat Enterprise Linux II learning path
This is my annual learning at Red Hat related to accreditation and courses at Red Hat partner training portal.
Generative AI is a cutting-edge technology that will transform nearly every business function, ranging from content creation and product design, to improving customer experience and marketing new ideas. While the benefits of Generative AI are immense, the technology has its limitations and poses some ethical considerations. In this Journey, learners of all levels will develop a shared understanding of what Generative AI is, the guardrails for use and identify of how to use, build and experiment with the technology in a responsible manner. Learners will also develop skills for leading through this disruption with empathy, while cultivating the human skills to sustain the transformation
The document provides a transcript for Vijayananda Mohire that includes their legal name, username, contact email, a list of 130+ modules completed on the Microsoft learning platform totaling over 130 hours, and details of each module completed, including titles, descriptions, and durations.
Securing your Kubernetes cluster: a step-by-step guide to success!
KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Paige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to ops, infra, and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
UiPath Test Automation using UiPath Test Suite series, part 5
DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features available on those devices, but many features provide convenience and capability at the expense of security. This best practices guide outlines steps users can take to better protect personal devices and information.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer's life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Building RAG with self-deployed Milvus vector database and Snowpark Container...
Zilliz
This talk will give hands-on advice on building RAG applications with an open-source Milvus database deployed as a docker container. We will also introduce the integration of Milvus with Snowpark Container Services.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
TrustArc Webinar - 2024 Global Privacy Survey
TrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
What do a Lego brick and the XZ backdoor have in common?
Speck&Tech
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might seem to have in common only the fact that they are both building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the XZ backdoor case have much more in common than that.
Join the presentation to dive into a story of interoperability, standards, and open formats, and then discuss the important role contributors play in a sustainable open source community.
BIO: Advocate for free software and for standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several events, migrations, and trainings related to LibreOffice. She previously worked on LibreOffice migrations and training courses for several public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when not pursuing her passion for computers and for Geeko, she cultivates her curiosity about astronomy (hence her nickname deneb_alpha).
A tale of scale & speed: How the US Navy is enabling software delivery from l...
sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Hybrid quantum classical neural networks with pytorch and qiskit
1. In [10]: import numpy as np
import matplotlib.pyplot as plt

# Importing standard Qiskit libraries
import qiskit
from qiskit import transpile, assemble
from qiskit import IBMQ  # needed for IBMQ.load_account() below
from qiskit.tools.jupyter import *
from qiskit.visualization import *
from ibm_quantum_widgets import *

# For Pytorch
import torch
from torch.autograd import Function
from torchvision import datasets, transforms
import torch.optim as optim
import torch.nn as nn
import torch.nn.functional as F

# Loading your IBM Quantum account(s)
provider = IBMQ.load_account()
ibmqfactory.load_account:WARNING:2021-07-11 06:46:28,261: Credentials are already in use. The existing account in the session will be replaced.
QuantumML https://notebooks.quantum-computing.ibm.com/user/60e12d043ea3ef2d...
1 of 11 7/11/2021, 1:05 PM
2. In [11]: class QuantumCircuit:
    """
    This class provides a simple interface for interaction
    with the quantum circuit
    """

    def __init__(self, n_qubits, backend, shots):
        # --- Circuit definition ---
        self._circuit = qiskit.QuantumCircuit(n_qubits)
        all_qubits = [i for i in range(n_qubits)]
        self.theta = qiskit.circuit.Parameter('theta')
        self._circuit.h(all_qubits)
        self._circuit.barrier()
        self._circuit.ry(self.theta, all_qubits)
        self._circuit.measure_all()
        # ---------------------------
        self.backend = backend
        self.shots = shots

    def run(self, thetas):
        t_qc = transpile(self._circuit, self.backend)
        qobj = assemble(t_qc,
                        shots=self.shots,
                        parameter_binds=[{self.theta: theta} for theta in thetas])
        job = self.backend.run(qobj)
        result = job.result().get_counts()
        counts = np.array(list(result.values()))
        states = np.array(list(result.keys())).astype(float)
        # Compute probabilities for each state
        probabilities = counts / self.shots
        # Get state expectation
        expectation = np.sum(states * probabilities)
        return np.array([expectation])
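As a sanity check on the value printed in the next cell (0.58 from 100 shots), the ideal expectation of this one-qubit circuit can be computed analytically. This is a small illustrative check with our own matrices and helper name, not part of the original notebook:

```python
import numpy as np

# Analytic version of the circuit above: H followed by RY(theta),
# measured in the computational basis. The class's "expectation" is
# sum_s s * P(s), i.e. simply the probability of measuring |1>.
def analytic_expectation(theta):
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    RY = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                   [np.sin(theta / 2),  np.cos(theta / 2)]])
    state = RY @ (H @ np.array([1.0, 0.0]))  # start in |0>
    return abs(state[1]) ** 2                # P(|1>)

# Ideal value for theta = pi is 0.5; the sampled 0.58 below is shot
# noise from only 100 shots.
print(analytic_expectation(np.pi))
```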
3. In [12]: simulator = qiskit.Aer.get_backend('aer_simulator')

circuit = QuantumCircuit(1, simulator, 100)
print('Expected value for rotation pi {}'.format(circuit.run([np.pi])[0]))
circuit._circuit.draw()
Expected value for rotation pi 0.58
Out[12]:
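The cell numbering jumps from 3 to 5: the missing cell defined the `Hybrid` torch layer used later (constructed below as `Hybrid(qiskit.Aer.get_backend('aer_simulator'), 100, np.pi / 2)`). Its forward pass runs the `QuantumCircuit` above, and its backward pass estimates gradients with the parameter-shift rule, evaluating the circuit at shifted angles. The rule itself can be sketched without any quantum machinery; the helper below and its classical stand-in "circuit" are our own illustration, not the notebook's code:

```python
import numpy as np

# Parameter-shift sketch: the gradient of a circuit expectation E(theta)
# is estimated from two extra circuit evaluations per parameter,
# E(theta + s) - E(theta - s), which is what the missing autograd
# backward pass does with shift s = np.pi / 2.
def parameter_shift_gradient(run, thetas, shift):
    """run: callable mapping a list of angles to a scalar expectation."""
    grads = []
    for i in range(len(thetas)):
        right = list(thetas); right[i] += shift
        left = list(thetas);  left[i] -= shift
        grads.append(run(right) - run(left))
    return np.array(grads)

# Classical stand-in for the circuit: E(theta) = sin(theta).
# The shifted difference is sin(t + s) - sin(t - s) = 2 sin(s) cos(t).
grad = parameter_shift_gradient(lambda t: np.sin(t[0]), [0.3], np.pi / 2)
```

In the notebook this logic lives inside a `torch.autograd.Function` wrapped by an `nn.Module`, so that `Net` can call the quantum circuit like any other layer; the FutureWarning traceback later in this document quotes a line from that backward pass.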
5. In [14]: # Concentrating on the first 100 samples
n_samples = 100

X_train = datasets.MNIST(root='./data', train=True, download=True,
                         transform=transforms.Compose([transforms.ToTensor()]))

# Leaving only labels 0 and 1
idx = np.append(np.where(X_train.targets == 0)[0][:n_samples],
                np.where(X_train.targets == 1)[0][:n_samples])
X_train.data = X_train.data[idx]
X_train.targets = X_train.targets[idx]

train_loader = torch.utils.data.DataLoader(X_train, batch_size=1, shuffle=True)
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to ./data/MNIST/raw/train-images-idx3-ubyte.gz
Extracting ./data/MNIST/raw/train-images-idx3-ubyte.gz to ./data/MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz to ./data/MNIST/raw/train-labels-idx1-ubyte.gz
Extracting ./data/MNIST/raw/train-labels-idx1-ubyte.gz to ./data/MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz to ./data/MNIST/raw/t10k-images-idx3-ubyte.gz
Extracting ./data/MNIST/raw/t10k-images-idx3-ubyte.gz to ./data/MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz to ./data/MNIST/raw/t10k-labels-idx1-ubyte.gz
Extracting ./data/MNIST/raw/t10k-labels-idx1-ubyte.gz to ./data/MNIST/raw
Processing...
/opt/conda/lib/python3.8/site-packages/torchvision/datasets/mnist.py:479: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:143.)
return torch.from_numpy(parsed.astype(m[2], copy=False)).view(*s)
Done!
7. In [17]: class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, kernel_size=5)
        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)
        self.dropout = nn.Dropout2d()
        self.fc1 = nn.Linear(256, 64)
        self.fc2 = nn.Linear(64, 1)
        # Hybrid is the quantum layer defined in a cell not included
        # in this extract; it wraps the QuantumCircuit class above
        self.hybrid = Hybrid(qiskit.Aer.get_backend('aer_simulator'), 100, np.pi / 2)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.max_pool2d(x, 2)
        x = F.relu(self.conv2(x))
        x = F.max_pool2d(x, 2)
        x = self.dropout(x)
        x = x.view(1, -1)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        x = self.hybrid(x)
        return torch.cat((x, 1 - x), -1)
8. In [18]: model = Net()
optimizer = optim.Adam(model.parameters(), lr=0.001)
loss_func = nn.NLLLoss()
epochs = 20
loss_list = []

model.train()
for epoch in range(epochs):
    total_loss = []
    for batch_idx, (data, target) in enumerate(train_loader):
        optimizer.zero_grad()
        # Forward pass
        output = model(data)
        # Calculating loss
        loss = loss_func(output, target)
        # Backward pass
        loss.backward()
        # Optimize the weights
        optimizer.step()
        total_loss.append(loss.item())
    loss_list.append(sum(total_loss) / len(total_loss))
    print('Training [{:.0f}%]\tLoss: {:.4f}'.format(
        100. * (epoch + 1) / epochs, loss_list[-1]))
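The negative loss values in the next cell's log can look like a bug but follow from the loss choice: `nn.NLLLoss` expects log-probabilities, yet `Net` outputs plain probabilities `(x, 1 - x)`. The loss therefore equals minus the probability assigned to the true class, lies in [-1, 0], and training drives it toward -1. A minimal numpy re-implementation (our own illustration, not notebook code):

```python
import numpy as np

# NLLLoss with the default mean reduction simply averages
# -output[target] over the batch; no log is applied inside the loss.
def nll_loss(outputs, targets):
    return float(np.mean([-row[t] for row, t in zip(outputs, targets)]))

# Two "probability" rows as Net would emit: loss is -(0.9 + 0.8) / 2
loss = nll_loss(np.array([[0.9, 0.1], [0.2, 0.8]]), [0, 1])  # about -0.85
```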
9. In [19]: plt.plot(loss_list)
plt.title('Hybrid NN Training Convergence')
plt.xlabel('Training Iterations')
plt.ylabel('Neg Log Likelihood Loss')
<ipython-input-13-625640256c98>:32: FutureWarning: The input object of type 'Tensor' is an array-like implementing one of the corresponding protocols (`__array__`, `__array_interface__` or `__array_struct__`); but not a sequence (or 0-D). In the future, this object will be coerced as if it was first converted using `np.array(obj)`. To retain the old behaviour, you have to either modify the type 'Tensor', or assign to an empty array created with `np.empty(correct_shape, dtype=object)`.
gradients = np.array([gradients]).T
Training [5%] Loss: -0.8143
Training [10%] Loss: -0.9220
Training [15%] Loss: -0.9286
Training [20%] Loss: -0.9489
Training [25%] Loss: -0.9479
Training [30%] Loss: -0.9516
Training [35%] Loss: -0.9703
Training [40%] Loss: -0.9635
Training [45%] Loss: -0.9707
Training [50%] Loss: -0.9702
Training [55%] Loss: -0.9769
Training [60%] Loss: -0.9757
Training [65%] Loss: -0.9823
Training [70%] Loss: -0.9897
Training [75%] Loss: -0.9850
Training [80%] Loss: -0.9866
Training [85%] Loss: -0.9905
Training [90%] Loss: -0.9908
Training [95%] Loss: -0.9921
Training [100%] Loss: -0.9889
Out[19]: Text(0, 0.5, 'Neg Log Likelihood Loss')
10. In [20]: model.eval()
with torch.no_grad():
    correct = 0
    for batch_idx, (data, target) in enumerate(test_loader):
        # test_loader is defined in a cell not shown in this extract,
        # mirroring train_loader on the MNIST test split
        output = model(data)
        pred = output.argmax(dim=1, keepdim=True)
        correct += pred.eq(target.view_as(pred)).sum().item()
        loss = loss_func(output, target)
        # note: total_loss still holds losses from the last training epoch
        total_loss.append(loss.item())
    print('Performance on test data:\n\tLoss: {:.4f}\n\tAccuracy: {:.1f}%'.format(
        sum(total_loss) / len(total_loss),
        correct / len(test_loader) * 100))
In [21]: n_samples_show = 6
count = 0
fig, axes = plt.subplots(nrows=1, ncols=n_samples_show, figsize=(10, 3))

model.eval()
with torch.no_grad():
    for batch_idx, (data, target) in enumerate(test_loader):
        if count == n_samples_show:
            break
        output = model(data)
        pred = output.argmax(dim=1, keepdim=True)
        axes[count].imshow(data[0].numpy().squeeze(), cmap='gray')
        axes[count].set_xticks([])
        axes[count].set_yticks([])
        axes[count].set_title('Predicted {}'.format(pred.item()))
        count += 1
Performance on test data:
Loss: -0.9862
Accuracy: 100.0%
11. In [ ]: # Hybrid quantum-classical Neural Networks with PyTorch and Qiskit, executed by Bhadale IT