This document provides an overview of quantum computing and its implications for cryptography. It discusses how quantum computers could break popular asymmetric cryptographic algorithms like RSA by efficiently solving problems such as integer factorization that are intractable on classical computers. The document explains Shor's algorithm, which uses the quantum Fourier transform to find the period of the modular exponentiation function and derive prime factors in polynomial time, posing a threat to RSA. It also discusses quantum computing concepts like superposition and entanglement that enable this speedup. Overall, the document serves as an introduction to how quantum computers may impact cryptography by breaking algorithms like RSA.
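To make the period-finding connection concrete, here is a minimal sketch of the classical half of Shor's algorithm: given a period r with a^r ≡ 1 (mod n), the factors fall out via greatest common divisors. The quantum computer's job is only to find r; the brute-force `find_period` below is an illustrative classical stand-in, not part of the real algorithm.

```python
from math import gcd

def factors_from_period(a: int, r: int, n: int):
    """Given a period r with a**r == 1 (mod n), try to recover
    nontrivial factors of n (the classical half of Shor's algorithm)."""
    if r % 2 != 0:
        return None          # need an even period; retry with another a
    x = pow(a, r // 2, n)
    if x == n - 1:
        return None          # trivial square root of 1; retry with another a
    p = gcd(x - 1, n)
    if 1 < p < n:
        return p, n // p
    q = gcd(x + 1, n)
    if 1 < q < n:
        return q, n // q
    return None

def find_period(a: int, n: int) -> int:
    """Classical stand-in for the quantum subroutine: find the smallest
    r with a**r == 1 (mod n) by exhaustive search."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

n, a = 15, 7
r = find_period(a, n)                  # r = 4 for a = 7, n = 15
print(factors_from_period(a, r, n))    # a nontrivial factorization of 15
```

On a quantum computer, `find_period` is replaced by the quantum Fourier transform step, which finds r in polynomial time even for the enormous moduli used in RSA.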
This document provides an introduction to quantum programming languages. It begins with basic concepts in quantum mechanics like state superposition and entanglement. It then discusses popular quantum algorithms like Deutsch, Shor, and Grover algorithms. The document reviews several quantum programming languages including quantum pseudocode, Quipper which is embedded in Haskell, and the Python toolbox QuTiP. It also mentions Mathematica packages for quantum computation. Finally, it introduces the IBM Quantum Experience platform for designing and running quantum circuits in a quantum processor or simulator.
This presentation discusses the history and concepts of quantum computing. It introduces quantum computers, which perform calculations using quantum bits that can represent more than one value at a time. Operations on quantum computers use quantum gates rather than classical logic gates. One example covered is Shor's algorithm, which can factor large numbers efficiently. Some challenges with quantum computing are decoherence issues and the difficulty of measuring quantum states without destroying superposition. While still in early research stages, quantum computers may one day be able to solve problems exponentially faster than classical computers.
This document discusses post-quantum cryptography and code-based cryptosystems as an alternative that is secure against quantum computers. It describes the McEliece cryptosystem, which uses error correcting codes, and introduces staircase generator codes and randomly split staircase generator codes to improve efficiency and security. The randomly split staircase generator codes cryptosystem allows for both encryption and digital signatures using efficient procedures while providing 80-bit security levels against quantum attacks, though it has large key sizes of around 10 megabytes.
An overview of how 'quantum' will affect cybersecurity: from cryptography to quantum computing algorithms, a look at how quantum technology will change what we do in information security.
Quantum computing harnesses the principles of quantum mechanics to perform calculations in parallel using quantum bits (qubits) that can exist in superposition. The first proposals for quantum computing date back to the 1980s. Current quantum computers have around 50-100 qubits and are being developed by companies like IBM, Google, and Intel to potentially solve problems like cryptography and AI that are intractable on classical computers. However, building reliable quantum computers remains challenging due to issues like qubit decoherence. Potential applications include optimization, simulation, and machine learning.
Quantum Computing with Amazon Braket
In this talk, I describe some fundamental principles of quantum computing, including qubits, superposition, and entanglement. I will demonstrate how to perform secure quantum computing tasks across many Quantum Processing Units (QPUs) using Amazon Braket, IAM, and S3.
AI and Machine Learning, Quantum Computing, Amazon Braket, QPU
The document describes a system for combining real-time and batch processing using Apache Storm and Hadoop. Streaming data is captured and processed in real-time using Storm topologies, while periodic snapshots of the real-time data are taken and processed using Hadoop for long-term aggregation and analysis. The system aims to provide a single solution for both real-time and historical processing without the limitations of using either Storm or Hadoop alone.
This document discusses quantum computing and its applications in machine learning. It begins by explaining the basics of quantum theory and how quantum computing works by exploiting quantum mechanical phenomena like superposition and entanglement. It then discusses how quantum computing could be applied to machine learning tasks like optimization problems, database searches, and pattern recognition. Specific quantum algorithms like Grover's algorithm and Shor's algorithm are presented as examples. The document also outlines some benefits of quantum computing like faster processing and better machine learning methods. However, it notes that quantum computing still faces challenges in design, implementation costs, and lack of practical applications. Overall, the document examines the potential for quantum computing to revolutionize machine learning by solving currently intractable problems much more quickly.
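Grover's quadratic speedup for database search, mentioned above, can be illustrated with a small classical simulation of the quantum statevector. This is a sketch using plain NumPy, not any particular quantum framework; the oracle simply phase-flips an assumed target index.

```python
import numpy as np

def grover_search(n_qubits: int, target: int) -> int:
    """Simulate Grover's algorithm on a classical statevector."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))        # uniform superposition over N items
    iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))  # ~sqrt(N) steps suffice
    for _ in range(iterations):
        state[target] *= -1                   # oracle: phase-flip the marked item
        state = 2 * state.mean() - state      # diffusion: reflect about the mean
    return int(np.argmax(state ** 2))         # most probable measurement outcome

print(grover_search(6, target=42))            # finds 42 among 64 items in 6 iterations
```

A classical search over N unsorted items needs O(N) lookups on average; the simulation above concentrates nearly all probability on the target after only about (π/4)·√N oracle calls, which is the source of the quadratic speedup.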
Detailed design for a robust counter as well as design for a completely on-line multi-armed bandit implementation that uses the new Bayesian Bandit algorithm.
Predictive Maintenance with Deep Learning and Apache Flink (Dongwon Kim)
Flink can be used to build a predictive maintenance system using deep learning models on time-series sensor data. A Flink data stream processing pipeline is designed to handle joining streams, applying convolutional LSTM models through an ensemble, and monitoring outputs. Docker and Prometheus are used to package and monitor the solution.
The document describes a terascale learning algorithm for training linear models on large datasets using distributed computing. It discusses using a hashing trick to reduce input complexity, an adaptive online gradient descent algorithm to warm-start L-BFGS batch optimization, and a custom all-reduce implementation to synchronize model parameters across nodes. The approach leverages Hadoop for fault tolerance while training linear models on up to 2.1 trillion features and 17 billion examples in 70 minutes, significantly faster than single machine algorithms.
Vowpal Wabbit is an open source machine learning library that achieves high speed through parallel processing, caching, and hashing. It offers a wide range of machine learning algorithms including linear regression, logistic regression, SVMs, neural networks, and matrix factorization. It supports L1 and L2 regularization and uses online gradient descent, conjugate gradient descent, and L-BFGS for optimization. Online gradient descent calculates error independently for each data point over multiple passes, while conjugate gradient descent finds directions orthogonal to previous steps to avoid getting stuck in local optima. L-BFGS approximates the Hessian matrix to enable faster Newton-style convergence without storing the entire matrix due to memory constraints.
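The online gradient descent described above can be sketched in a few lines. This is a toy illustration of the idea, not Vowpal Wabbit's actual implementation: each example updates the weights once and is then discarded, so memory use is constant regardless of dataset size.

```python
import random

def online_sgd(stream, lr=0.01, n_features=3):
    """Online gradient descent for least-squares linear regression:
    each (x, y) example updates the weights once and is then discarded."""
    w = [0.0] * n_features
    for x, y in stream:
        pred = sum(wi * xi for wi, xi in zip(w, x))
        err = pred - y
        for i in range(n_features):
            w[i] -= lr * err * x[i]   # gradient of 0.5 * (pred - y)**2 w.r.t. w[i]
    return w

# Synthetic stream with true weights [2, -1, 0.5]
random.seed(0)
stream = []
for _ in range(5000):
    x = [random.uniform(-1, 1) for _ in range(3)]
    stream.append((x, 2 * x[0] - 1 * x[1] + 0.5 * x[2]))

w = online_sgd(stream)
print(w)   # weights approach [2, -1, 0.5]
```

Batch methods like L-BFGS revisit the whole dataset each iteration; the appeal of the online update is that one pass over a stream is often enough to get close to the optimum, which is why it suits the terascale settings described above.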
Vowpal Wabbit is a machine learning system that has four main goals: scalable and efficient machine learning, supporting new algorithm research, simplicity with few dependencies, and usability with minimal setup requirements. It uses several "tricks" like feature hashing and caching, online learning, and importance weighting to achieve scalability. It also supports newer algorithms like adaptive learning rates and dimensional correction. Vowpal Wabbit can be run in parallel on large clusters to handle terascale problems with billions of examples.
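Feature hashing, one of the "tricks" mentioned above, maps arbitrary string features into a fixed-size vector without ever storing a feature dictionary. A minimal sketch (bucket count and hash choice are illustrative, not VW's internals):

```python
import hashlib

def hash_features(tokens, n_buckets=2 ** 10):
    """The hashing trick: map string features into a fixed-size vector
    without storing a feature dictionary."""
    vec = [0.0] * n_buckets
    for tok in tokens:
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        idx = h % n_buckets                      # bucket index
        sign = 1 if (h >> 1) % 2 == 0 else -1    # signed hashing reduces collision bias
        vec[idx] += sign
    return vec

v = hash_features("the quick brown fox".split())
print(len(v))   # vector length is fixed at 1024, however many features arrive
```

Collisions merge unrelated features into one weight, but with enough buckets the impact on accuracy is small, and the memory footprint stays constant even with billions of raw feature strings.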
Aran Khanna, Software Engineer, Amazon Web Services at MLconf ATL 2017 (MLconf)
High Performance Deep Learning on Edge Devices With Apache MXNet:
Deep network based models are marked by an asymmetry between the large amount of compute power needed to train a model and the relatively small amount of compute power needed to deploy a trained model for inference. This is particularly true in computer vision tasks such as object detection or image classification, where millions of labeled images and large numbers of GPUs are needed to produce an accurate model that can be deployed for inference on low-powered devices with a single CPU. The challenge when deploying vision models on these low-powered devices, though, is getting inference to run efficiently enough to allow for near real-time processing of a video stream. Fortunately, Apache MXNet provides the tools to solve this issue, allowing users to create highly performant models with techniques like separable convolutions, quantized weights, and sparsity exploitation, as well as providing custom hardware kernels to ensure inference calculations are accelerated to the maximum extent allowed by the hardware the model is deployed on. This is demonstrated through a state-of-the-art MXNet-based vision network running in near real time on a low-powered Raspberry Pi device. We finally discuss how running inference at the edge, together with MXNet's efficient modeling tools, can massively drive down compute costs for deploying deep networks in a production system at scale.
This document discusses prospects for using quantum computing to accelerate genomics research. It outlines several areas where quantum algorithms could provide speedups for genome analysis, sequencing, and related tasks. These include using quantum computing for whole genome sequencing, reducing the time from 18 hours to 2 hours. It also presents several quantum algorithms that have been proposed for genomic applications such as read alignment, de novo assembly, and algorithmic feature learning from DNA sequences. The document argues that quantum acceleration could help address the exponentially growing data from genomics that classical computers may not be able to handle with Moore's law ending. It promotes developing quantum hardware, software, and cross-disciplinary expertise to realize these potential applications.
Web-app realization of Shor's quantum factoring algorithm and Grover's quantum search algorithm (TELKOMNIKA JOURNAL)
This document describes the web-app realization of Shor's quantum factoring algorithm and Grover's quantum search algorithm. It discusses:
1) The design and implementation of the web-apps using the ProjectQ and Rigetti Forest quantum frameworks.
2) Test scenarios and results showing the web-apps can successfully find factors of integers using Shor's algorithm and search datasets to find targets using Grover's algorithm.
3) Code snippets and simulation outputs demonstrating the initialization, computation, and measurement steps of the two algorithms through the web-app interfaces.
Streaming data presents new challenges for statistics and machine learning on extremely large data sets. Tools such as Apache Storm, a stream processing framework, can power a range of data analytics but lack advanced statistical capabilities. These slides are from the ApacheCon talk, which discussed developing streaming algorithms with the flexibility of both Storm and R, a statistical programming language.
At the talk I discussed why and how to use Storm and R to develop streaming algorithms; in particular I focused on:
• Streaming algorithms
• Online machine learning algorithms
• Use cases showing how to process hundreds of millions of events a day in (near) real time
See: https://apacheconna2015.sched.org/event/09f5a1cc372860b008bce09e15a034c4#.VUf7wxOUd5o
Online learning, Vowpal Wabbit and Hadoop (Héloïse Nonne)
Online learning, Vowpal Wabbit and Hadoop
Online learning has recently attracted a lot of attention, following some competitions, and especially after Criteo released an 11 GB training set for a Kaggle contest.
Online learning makes it possible to process massive data: the learner consumes examples sequentially, using a low amount of memory and limited CPU resources. It is also particularly suited to handling time-evolving data.
Vowpal Wabbit has become quite popular: it is a handy, light, and efficient command-line tool for online learning on gigabytes of data, even on a standard laptop with standard memory. After a reminder of the principles of online learning, we present how to run Vowpal Wabbit on Hadoop in a distributed fashion.
Real-time driving score service using Flink (Dongwon Kim)
Dongwon Kim presented on migrating T map's driving score service from a batch processing architecture to a real-time streaming architecture using Apache Flink. The new system calculates driving scores for each session in real-time as GPS data is received, allowing users to see their scores sooner. It utilizes Flink's event time processing, windowing, and a custom trigger to handle out-of-order data. Metrics are collected using Prometheus to monitor performance and latency.
Quantum Computing and Blockchain: Facts and Myths (Ahmed Banafa)
The biggest danger to blockchain networks from quantum computing is its ability to break traditional encryption. Google sent shock waves around the internet when it claimed to have built a quantum computer able to solve formerly impossible mathematical calculations, with some fearing the crypto industry could be at risk. Google states that its experiment is the first experimental challenge to the extended Church-Turing thesis, also known as the computability thesis, which claims that traditional computers can effectively carry out any "reasonable" model of computation.
This presentation highlights the project I am currently working on.
Secure 2-party AES:
AES is one of the most widely used block ciphers. It takes a secret key and a message block as input and generates the ciphertext corresponding to the message, without disclosing anything about the key or the message.
Typically the key and the message to be encrypted are available with a single entity.
Now consider a scenario where we have two parties, one holding the secret key and the other holding the message to be encrypted.
We want to design a protocol such that at the end of the protocol, the second party learns the encryption of the message (and no information about the key) while the first party learns nothing about the encrypted message.
The goal of this project will be to implement such a protocol.
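The security goal above can be phrased as an "ideal functionality": a trusted third party that sees both inputs but reveals only the ciphertext to the second party. The toy sketch below makes that explicit, using a hash-based stand-in for AES (not real AES); an actual secure 2-party protocol, e.g. one based on garbled circuits, emulates this trusted party without any middleman.

```python
import hashlib

def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    """Hash-based stand-in for AES, for illustration only.
    The real protocol would evaluate the actual AES circuit under MPC."""
    return hashlib.sha256(key + block).digest()[:16]

class TrustedParty:
    """Ideal functionality: sees both inputs, reveals only the ciphertext
    to party B. A secure 2-party protocol must emulate this behavior
    without any trusted middleman."""
    def encrypt_for_b(self, key_from_a: bytes, msg_from_b: bytes) -> bytes:
        return toy_block_cipher(key_from_a, msg_from_b)

a_key = b"secret-key-of-A!"   # held only by party A
b_msg = b"message-from-B.."   # held only by party B
ct = TrustedParty().encrypt_for_b(a_key, b_msg)
print(ct.hex())               # B learns only this; A learns nothing about it
```

The project's task is exactly to replace `TrustedParty` with an interactive protocol between A and B that computes the same output while leaking nothing else to either side.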
Sergei Vassilvitskii, Research Scientist, Google at MLconf NYC - 4/15/16 (MLconf)
The document discusses new techniques for improving the k-means clustering algorithm. It begins by describing the standard k-means algorithm and Lloyd's method. It then discusses issues with random initialization for k-means. It proposes using furthest point initialization (k-means++) as an improvement. The document also discusses parallelizing k-means initialization (k-means||) and using nearest neighbor data structures to speed up assigning points to clusters, which allows k-means to scale to many clusters. Experimental results show these techniques provide faster and higher quality clustering compared to standard k-means.
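The k-means++ seeding step mentioned above is short enough to sketch directly: each new center is sampled with probability proportional to its squared distance from the nearest center chosen so far. A minimal 1-D sketch (the data and RNG seed are illustrative):

```python
import random

def kmeans_pp_init(points, k, rng=None):
    """k-means++ seeding: sample each new center with probability
    proportional to its squared distance from the nearest chosen center."""
    rng = rng or random.Random(0)
    centers = [rng.choice(points)]
    while len(centers) < k:
        d2 = [min((p - c) ** 2 for c in centers) for p in points]
        r = rng.uniform(0, sum(d2))
        acc = 0.0
        for p, w in zip(points, d2):          # weighted sampling by d2
            acc += w
            if acc >= r:
                centers.append(p)
                break
    return centers

# Three well-separated 1-D clusters; seeding tends to pick one from each.
points = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2, 20.0, 20.1, 20.2]
print(sorted(kmeans_pp_init(points, k=3)))
```

Compared with uniform random initialization, this spreads the starting centers across the data, which is what yields the improved clustering quality and convergence the document reports; k-means|| parallelizes the same idea by oversampling candidates in a few rounds.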
Wapid and wobust active online machine leawning with Vowpal Wabbit (Antti Haapala)
Vowpal Wabbit is a machine learning library that provides fast, scalable, and online learning algorithms. It can handle large datasets with millions of features efficiently using hashing and sparse representations. Unlike other libraries, Vowpal Wabbit is designed for online and active learning, allowing the model to be updated continuously as new data is processed. It performs linear learning rapidly using stochastic gradient descent and has been shown to scale to billions of examples and trillions of features.
Quantum computers use principles of quantum mechanics rather than classical binary logic. They have qubits that can represent superpositions of 0 and 1, allowing massive parallelism. Key effects like superposition, entanglement, and tunneling give them advantages over classical computers for problems like factoring and searching. Early quantum computers have been built with up to a few hundred qubits, and algorithms like Shor's show promise for cryptography applications. However, challenges remain around error correction and controlling quantum states as quantum computers scale up. D-Wave has produced commercial quantum annealing systems with over 1000 qubits, but debate continues on whether these demonstrate quantum advantage. Overall, quantum computing could transform fields like AI, simulation, and optimization if the challenges around building reliable large-scale quantum computers can be overcome.
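Superposition and entanglement can both be seen in a few lines of linear algebra: a qubit is a unit vector in C^2 and gates are unitary matrices. A minimal statevector sketch (plain NumPy, no quantum framework assumed):

```python
import numpy as np

# A qubit is a unit vector in C^2; gates are unitary matrices acting on it.
ket0 = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

plus = H @ ket0                         # equal superposition of |0> and |1>
print(np.abs(plus) ** 2)                # [0.5, 0.5]: either outcome on measurement

# Entanglement: Hadamard on qubit 0, then CNOT, yields a Bell state.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
bell = CNOT @ np.kron(plus, ket0)       # (|00> + |11>) / sqrt(2)
print(np.round(np.abs(bell) ** 2, 3))   # only 00 and 11 are ever observed
```

Measuring the first qubit of the Bell state instantly fixes the second, the correlation that has no classical counterpart; and because an n-qubit state is a vector of 2^n amplitudes, this classical simulation becomes intractable quickly, which is exactly the headroom quantum hardware aims to exploit.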
Deep Recurrent Neural Networks for Sequence Learning in Spark by Yves Mabiala (Spark Summit)
Deep recurrent neural networks are well-suited for sequence learning tasks like text classification and generation. The author discusses implementing recurrent neural networks in Spark for distributed deep learning on big data. Two use cases are described: predictive maintenance using sensor data to detect failures, and sentiment analysis of tweets using RNNs which achieve better accuracy than traditional classifiers.
The second quantum revolution: the world beyond binary 0 and 1 (Bruno Fedrici, PhD)
Our active application of quantum mechanics has previously been constrained by our ability to engineer and control systems at the small scales where quantum effects predominate. This has now changed. Scientists have reached first base on a set of enabling technologies that allow us to routinely manipulate atoms of matter and photons of light at the individual level. This has unlocked our ability to create a new generation of devices that deliver unique capabilities directly tied to properties of quantum mechanics such as superposition and entanglement.
We discuss the emerging threat and implications of quantum computing technology on the security of cryptosystems currently deployed in applications, and why system designers should consider addressing this risk already in the near term. We then discuss an overview of the current approaches for building quantum safe cryptosystems and their security and performance aspects. We conclude with a glimpse at the state of the art and research challenges in the area of quantum-safe cryptography, including the design of more advanced quantum-safe cryptographic protocols, such as privacy-preserving cryptocurrencies.
This talk is an introduction to quantum cryptography and cryptanalysis: the physics and mathematics behind how quantum computers provide unique opportunities and threats to traditional cryptographic systems. We will review the basics behind quantum mechanics and quantum computers, why quantum computers pose a unique threat to cryptographic systems and what secure infrastructure systems must do to protect secrets in a post-quantum world.
Detailed design for a robust counter as well as design for a completely on-line multi-armed bandit implementation that uses the new Bayesian Bandit algorithm.
Predictive Maintenance with Deep Learning and Apache FlinkDongwon Kim
Flink can be used to build a predictive maintenance system using deep learning models on time-series sensor data. A Flink data stream processing pipeline is designed to handle joining streams, applying convolutional LSTM models through an ensemble, and monitoring outputs. Docker and Prometheus are used to package and monitor the solution.
The document describes a terascale learning algorithm for training linear models on large datasets using distributed computing. It discusses using a hashing trick to reduce input complexity, an adaptive online gradient descent algorithm to warm-start L-BFGS batch optimization, and a custom all-reduce implementation to synchronize model parameters across nodes. The approach leverages Hadoop for fault tolerance while training linear models on up to 2.1 trillion features and 17 billion examples in 70 minutes, significantly faster than single machine algorithms.
Vowpal Wabbit is an open source machine learning library that achieves high speed through parallel processing, caching, and hashing. It offers a wide range of machine learning algorithms including linear regression, logistic regression, SVMs, neural networks, and matrix factorization. It supports L1 and L2 regularization and uses online gradient descent, conjugate gradient descent, and L-BFGS for optimization. Online gradient descent calculates error independently for each data point over multiple passes, while conjugate gradient descent finds directions orthogonal to previous steps to avoid getting stuck in local optima. L-BFGS approximates the Hessian matrix to enable faster Newton-style convergence without storing the entire matrix due to memory constraints.
Vowpal Wabbit is a machine learning system that has four main goals: scalable and efficient machine learning, supporting new algorithm research, simplicity with few dependencies, and usability with minimal setup requirements. It uses several "tricks" like feature hashing and caching, online learning, and importance weighting to achieve scalability. It also supports newer algorithms like adaptive learning rates and dimensional correction. Vowpal Wabbit can be run in parallel on large clusters to handle terascale problems with billions of examples.
Aran Khanna, Software Engineer, Amazon Web Services at MLconf ATL 2017MLconf
High Performance Deep Learning on Edge Devices With Apache MXNet:
Deep network based models are marked by an asymmetry between the large amount of compute power needed to train a model, and the relatively small amount of compute power needed to deploy a trained model for inference. This is particularly true in computer vision tasks such as object detection or image classification, where millions of labeled images and large numbers of GPUs are needed to produce an accurate model that can be deployed for inference on low powered devices with a single CPU. The challenge when deploying vision models on these low powered devices though, is getting inference to run efficiently enough to allow for near real time processing of a video stream. Fortunately Apache MXNet provides the tools to solve this issues, allowing users to create highly performant models with tools like separable convolutions, quantized weights and sparsity exploitation as well as providing custom hardware kernels to ensure inference calculations are accelerated to the maximum amount allowed by the hardware the model is being deployed on. This is demonstrated though a state of the art MXNet based vision network running in near real time on a low powered Raspberry Pi device. We finally discuss how running inference at the edge as well as leveraging MXNet’s efficient modeling tools can be used to massively drive down compute costs for deploying deep networks in a production system at scale.
This document discusses prospects for using quantum computing to accelerate genomics research. It outlines several areas where quantum algorithms could provide speedups for genome analysis, sequencing, and related tasks. These include using quantum computing for whole genome sequencing, reducing the time from 18 hours to 2 hours. It also presents several quantum algorithms that have been proposed for genomic applications such as read alignment, de novo assembly, and algorithmic feature learning from DNA sequences. The document argues that quantum acceleration could help address the exponentially growing data from genomics that classical computers may not be able to handle with moore's law ending. It promotes developing quantum hardware, software, and cross-disciplinary expertise to realize these potential applications.
Web-app realization of Shor’s quantum factoring algorithm and Grover’s quantu...TELKOMNIKA JOURNAL
This document describes the web-app realization of Shor's quantum factoring algorithm and Grover's quantum search algorithm. It discusses:
1) The design and implementation of the web-apps using the ProjectQ and Rigetti Forest quantum frameworks.
2) Test scenarios and results showing the web-apps can successfully find factors of integers using Shor's algorithm and search datasets to find targets using Grover's algorithm.
3) Code snippets and simulations outputs demonstrating the initialization, computation, and measurement steps of the two algorithms through the web-app interfaces.
Streaming data presents new challenges for statistics and machine learning on extremely large data sets. Tools such as Apache Storm, a stream processing framework, can power range of data analytics but lack advanced statistical capabilities. These slides are from the Apache.con talk, which discussed developing streaming algorithms with the flexibility of both Storm and R, a statistical programming language.
At the talk I dicsussed issues of why and how to use Storm and R to develop streaming algorithms; in particular I focused on:
• Streaming algorithms
• Online machine learning algorithms
• Use cases showing how to process hundreds of millions of events a day in (near) real time
See: https://apacheconna2015.sched.org/event/09f5a1cc372860b008bce09e15a034c4#.VUf7wxOUd5o
Online learning, Vowpal Wabbit and HadoopHéloïse Nonne
Online learning, Vowpal Wabbit and Hadoop
Online learning has recently caught a lot of attention, following some competitions, and especially after Criteo released 11GB for the training set of a Kaggle contest.
Online learning allows to process massive data as the learner processes data in a sequential way using up a low amount of memory and limited CPU ressources. It is also particularly suited for handling time-evolving date.
Vowpal Wabbit has become quite popular: it is a handy, light and efficient command line tool allowing to do online learning on GB of data, even on a standard laptop with standard memory. After a reminder of the online learning principles, we present how to run Vowpal Wabbit on Hadoop in a distributed fashion.
Real-time driving score service using FlinkDongwon Kim
Dongwon Kim presented on migrating T map's driving score service from a batch processing architecture to a real-time streaming architecture using Apache Flink. The new system calculates driving scores for each session in real-time as GPS data is received, allowing users to see their scores sooner. It utilizes Flink's event time processing, windowing, and a custom trigger to handle out-of-order data. Metrics are collected using Prometheus to monitor performance and latency.
Quantum Computing and Blockchain: Facts and Myths Ahmed Banafa
The biggest danger to Blockchain networks from quantum computing is its ability to break traditional encryption . Google sent shock waves around the internet when it was claimed, had built a quantum computer able to solve formerly impossible mathematical calculations–with some fearing crypto industry could be at risk . Google states that its experiment is the first experimental challenge against the extended Church-Turing thesis — also known as computability thesis — which claims that traditional computers can effectively carry out any “reasonable” model of computation
This Presentation highlights the project in which i am currently working on.
Secure 2-party AES:
AES is one of the most widely used block cipher.It takes a secret key as input and a message block to be encrypted and generates the ciphertext corresponding to the message, without disclosing anything about the key or the message.
Typically the key and the message to be encrypted are available with a single entity.
Now consider a scenario where we have two parties, one holding the secret key and the other holding the message to be encrypted.
We want to design a protocol such that at the end of the protocol, the second party learns the encryption of the message (and no information about the key) while the first party learns nothing about the encrypted message.
The goal of this project will be to implement such a protocol.
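The contract the protocol must satisfy can be made concrete with a toy "ideal functionality" sketch. Everything here is illustrative: XOR stands in for AES (a real protocol would securely evaluate the AES circuit, e.g. with garbled circuits, and no trusted party would exist), and the class names are invented.

```python
import secrets

# Toy illustration of the *ideal functionality* of the 2-party protocol
# described above. XOR stands in for AES; a real protocol would evaluate
# the AES circuit cryptographically, never revealing either input.
class KeyHolder:
    def __init__(self):
        self.key = secrets.token_bytes(16)   # party 1's secret key

class MessageHolder:
    def __init__(self, message: bytes):
        self.message = message               # party 2's secret message

def ideal_functionality(p1: KeyHolder, p2: MessageHolder) -> bytes:
    # A trusted party sees both inputs and returns the ciphertext to party 2
    # only; party 1 learns nothing. The real protocol emulates this behaviour
    # without any trusted party.
    return bytes(k ^ m for k, m in zip(p1.key, p2.message))

p1, p2 = KeyHolder(), MessageHolder(b"sixteen byte msg")
ciphertext = ideal_functionality(p1, p2)     # delivered to party 2 only
assert len(ciphertext) == 16
```

The security goal is exactly that the real protocol's transcript reveals no more to either party than this ideal interaction would.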
Sergei Vassilvitskii, Research Scientist, Google, at MLconf NYC - 4/15/16 - MLconf
The document discusses new techniques for improving the k-means clustering algorithm. It begins by describing the standard k-means algorithm and Lloyd's method. It then discusses issues with random initialization for k-means. It proposes using furthest point initialization (k-means++) as an improvement. The document also discusses parallelizing k-means initialization (k-means||) and using nearest neighbor data structures to speed up assigning points to clusters, which allows k-means to scale to many clusters. Experimental results show these techniques provide faster and higher quality clustering compared to standard k-means.
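The k-means++ seeding described above (a randomised relative of furthest-point initialization: each new centre is drawn with probability proportional to its squared distance to the nearest centre chosen so far) can be sketched for 1-D data; the data set and random seed are illustrative.

```python
import random

def kmeans_pp_init(points, k, rng=random.Random(0)):
    """k-means++ seeding: each new centre is drawn with probability
    proportional to its squared distance to the nearest centre so far."""
    centres = [rng.choice(points)]
    while len(centres) < k:
        d2 = [min((p - c) ** 2 for c in centres) for p in points]
        total = sum(d2)
        r, acc = rng.uniform(0, total), 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:          # weighted sampling by squared distance
                centres.append(p)
                break
    return centres

# 1-D toy data with two well-separated clumps: the two seeds land one per
# clump, which random initialization frequently fails to do.
data = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]
print(kmeans_pp_init(data, 2))
```

k-means|| parallelises exactly this sampling step by drawing several candidates per round instead of one.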
Wapid and wobust active online machine leawning with Vowpal Wabbit - Antti Haapala
Vowpal Wabbit is a machine learning library that provides fast, scalable, and online learning algorithms. It can handle large datasets with millions of features efficiently using hashing and sparse representations. Unlike other libraries, Vowpal Wabbit is designed for online and active learning, allowing the model to be updated continuously as new data is processed. It performs linear learning rapidly using stochastic gradient descent and has been shown to scale to billions of examples and trillions of features.
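The hashing trick mentioned above can be sketched in a few lines. Note this is only an illustration of the idea: Vowpal Wabbit itself uses MurmurHash rather than MD5, and the bucket count here is arbitrarily small.

```python
import hashlib

def hash_features(tokens, num_buckets=2**6):
    """Hashing trick (as popularised by Vowpal Wabbit, sketched here): map
    arbitrary feature names into a fixed-size sparse vector, with no
    feature dictionary to build or store."""
    vec = {}
    for tok in tokens:
        # MD5 is a stand-in; VW uses MurmurHash for speed.
        idx = int(hashlib.md5(tok.encode()).hexdigest(), 16) % num_buckets
        vec[idx] = vec.get(idx, 0) + 1
    return vec  # sparse representation: bucket index -> count

print(hash_features(["the", "quick", "brown", "fox", "the"]))
```

Because the vector size is fixed up front, memory use is bounded no matter how many distinct features the stream contains, which is what lets VW scale to trillions of features.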
Quantum computers use principles of quantum mechanics rather than classical binary logic. They have qubits that can represent superpositions of 0 and 1, allowing massive parallelism. Key effects like superposition, entanglement, and tunneling give them advantages over classical computers for problems like factoring and searching. Early quantum computers have been built with up to a few hundred qubits, and algorithms like Shor's show promise for cryptography applications. However, challenges remain around error correction and controlling quantum states as quantum computers scale up. D-Wave has produced commercial quantum annealing systems with over 1000 qubits, but debate continues on whether these demonstrate quantum advantage. Overall, quantum computing could transform fields like AI, simulation, and optimization if the challenges around building reliable large-scale quantum computers can be overcome.
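Superposition and entanglement can be illustrated with a minimal state-vector simulation in plain Python (this is a pedagogical sketch, not a quantum SDK): a 2-qubit state is a vector of four amplitudes, a Hadamard gate creates a superposition, and a CNOT entangles the qubits into a Bell state.

```python
import math

# Minimal 2-qubit state-vector sketch: amplitudes indexed by basis state
# b1b0. Hadamard on qubit 0, then CNOT, yields the entangled Bell state
# (|00> + |11>)/sqrt(2).
H = 1 / math.sqrt(2)

def hadamard_q0(s):
    out = [0.0] * 4
    for i, a in enumerate(s):
        base, bit = i & 0b10, i & 0b01
        out[base] += H * a                                 # |...0> part
        out[base | 1] += H * a * (1 if bit == 0 else -1)   # |...1> part
    return out

def cnot_q0_q1(s):
    # Control = qubit 0, target = qubit 1: swap amplitudes of |01> and |11>.
    s = s[:]
    s[0b01], s[0b11] = s[0b11], s[0b01]
    return s

state = cnot_q0_q1(hadamard_q0([1.0, 0.0, 0.0, 0.0]))      # start in |00>
probs = [round(a * a, 3) for a in state]
print(probs)   # measurement yields 00 or 11, each with probability 0.5
```

The vector doubles in length with every added qubit, which is exactly why classical simulation breaks down and real quantum hardware becomes interesting.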
Deep Recurrent Neural Networks for Sequence Learning in Spark by Yves Mabiala - Spark Summit
Deep recurrent neural networks are well-suited for sequence learning tasks like text classification and generation. The author discusses implementing recurrent neural networks in Spark for distributed deep learning on big data. Two use cases are described: predictive maintenance using sensor data to detect failures, and sentiment analysis of tweets using RNNs which achieve better accuracy than traditional classifiers.
The second quantum revolution: the world beyond binary 0 and 1 - Bruno Fedrici, PhD
Our active application of quantum mechanics has previously been constrained by our ability to engineer and control systems at the small scales where quantum effects predominate. This has now changed. Scientists have reached first base on a set of enabling technologies that allow us to routinely manipulate atoms of matter and photons of light at the individual level. This has unlocked our ability to create a new generation of devices that deliver unique capabilities directly tied to properties of quantum mechanics such as superposition and entanglement.
We discuss the emerging threat and implications of quantum computing technology on the security of cryptosystems currently deployed in applications, and why system designers should consider addressing this risk already in the near term. We then discuss an overview of the current approaches for building quantum safe cryptosystems and their security and performance aspects. We conclude with a glimpse at the state of the art and research challenges in the area of quantum-safe cryptography, including the design of more advanced quantum-safe cryptographic protocols, such as privacy-preserving cryptocurrencies.
This talk is an introduction to quantum cryptography and cryptanalysis: the physics and mathematics behind how quantum computers provide unique opportunities and threats to traditional cryptographic systems. We will review the basics behind quantum mechanics and quantum computers, why quantum computers pose a unique threat to cryptographic systems and what secure infrastructure systems must do to protect secrets in a post-quantum world.
Quantum Computers new Generation of Computers part 7 - Professor Lili Saghafi
- Quantum algorithms
- Algorithms for factoring; the general number field sieve
- Optimization algorithms
- The Deutsch-Jozsa deterministic quantum algorithm
- Entanglement
- Enigma
- Quantum teleportation
The document discusses new directions for the Mahout machine learning library. It describes plans to remove unused and poorly maintained code in the next release to reduce bloat. It outlines work to improve the integration of core collections functionality and speed up k-nearest neighbor searches using techniques like projection search and fast k-means clustering algorithms. It also introduces a Pig Vector module to enable machine learning tasks like text vectorization and classification from Pig queries.
Quantum Computing and its security implications - InnoTech
Quantum computers work with qubits that can exist in superposition and be entangled. They have enormous computational power compared to digital computers and could solve problems like prime factorization rapidly. This poses risks to current encryption methods and allows for perfectly secure quantum communication. Several types of quantum computers are being developed, from quantum annealers to analog and universal models, with the latter offering exponential speedups but being the hardest to build. Significant progress is being made, with quantum computers in the tens of qubits now and the need to transition encryption to post-quantum algorithms within the next decade.
Resource Management in (Embedded) Real-Time Systems - jeronimored
Rate monotonic analysis provides techniques for analyzing real-time systems with periodic tasks. It focuses on ensuring tasks meet deadlines through priority-based scheduling, where the highest priority is given to the task with the shortest period. The utilization bound test determines if a set of periodic tasks will always meet deadlines based on the total utilization being below a limit that depends on the number of tasks.
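The utilization bound test described above can be written out directly: a task set of n periodic tasks is guaranteed schedulable under rate-monotonic priorities if total utilization stays below n(2^(1/n) - 1). The task parameters below are illustrative.

```python
import math

def rma_schedulable(tasks):
    """Liu & Layland utilization bound test for rate-monotonic scheduling.
    tasks: list of (execution_time, period) pairs.
    The test is sufficient but not necessary: task sets that fail it may
    still be schedulable (exact response-time analysis would decide)."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)   # -> ln 2 (about 0.693) as n grows
    return utilization <= bound, utilization, bound

# Three illustrative tasks: (worst-case execution time, period).
ok, u, bound = rma_schedulable([(1, 4), (1, 5), (2, 10)])
print(f"U={u:.3f}, bound={bound:.3f}, schedulable={ok}")
```

Here U = 0.65 is under the n = 3 bound of about 0.780, so all deadlines are guaranteed with the shortest-period task given the highest priority.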
Building an Event-oriented Data Platform with Kafka - Eric Sammer, Confluent
While we frequently talk about how to build interesting products on top of machine and event data, the reality is that collecting, organizing, providing access to, and managing this data is where most people get stuck. Many organizations understand the use cases around their data — fraud detection, quality of service and technical operations, or user behavior analysis, for example — but are not necessarily data infrastructure experts. In this session, we'll follow the flow of data through an end-to-end system built to handle tens of terabytes an hour of event-oriented data, providing real time streaming, in-memory, SQL, and batch access to this data. We'll go into detail on how open source systems such as Hadoop, Kafka, Solr, and Impala/Hive are actually stitched together; describe how and where to perform data transformation and aggregation; provide a simple and pragmatic way of managing event metadata; and talk about how applications built on top of this platform get access to data and extend its functionality.
Attendees will leave this session knowing not just which open source projects go into a system such as this, but how they work together, what tradeoffs and decisions need to be addressed, and how to present a single general purpose data platform to multiple applications. This session should be attended by data infrastructure engineers and architects planning, building, or maintaining similar systems.
Quantum computers are rapidly evolving and are promising significant advantages in domains like machine learning or optimization, to name but a few areas. In this keynote we sketch the underpinnings of quantum computing, show some of the inherent advantages, highlight some application areas, and show how quantum applications are built.
[DSC Europe 23] Ales Gros - Quantum and Today's security with Quantum - DataScienceConferenc1
Quantum computing poses risks to modern cryptography. By 2026, there is a 1 in 7 chance that quantum computers will be able to break fundamental public-key cryptography. By 2031, there is a 1 in 2 chance. Cryptography is used everywhere in the digital world, including internet protocols, digital signatures, critical infrastructure, financial systems, and blockchains. If quantum computers are able to break current cryptography, cyber criminals could gain access to critical infrastructure, forge digital signatures to manipulate legal records, decrypt historical data, and create fraudulent transactions. This poses serious risks that must be addressed as quantum computing advances.
This document provides an overview of quantum computing trends and directions. It introduces Francisco Gálvez as the presenter and covers the following topics: IBM's quantum computers including the IBM Quantum Experience platform, basic concepts in quantum computing, quantum architecture focusing on superconducting qubits, quantum algorithms like Shor's and Grover's algorithms, applications of quantum computing, and the IBM Quantum Experience platform which allows users to design and run quantum circuits on real quantum processors.
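The classical half of Shor's algorithm (everything except quantum order finding) can be sketched in a few lines; here the order r is found by brute force, which is exactly the exponential step a quantum computer replaces with the quantum Fourier transform. The choice of N = 15 and base a = 7 is the standard textbook example.

```python
import math

def factor_via_order(N, a=7):
    """Classical sketch of Shor's reduction: given the order r of a mod N
    (found here by brute force; quantumly by phase estimation), if r is even
    and a^(r/2) != -1 mod N, then gcd(a^(r/2) +/- 1, N) are nontrivial
    factors of N."""
    r = 1
    while pow(a, r, N) != 1:   # exponential classically; polynomial quantumly
        r += 1
    assert r % 2 == 0, "odd order: retry with another base a"
    x = pow(a, r // 2, N)
    return math.gcd(x - 1, N), math.gcd(x + 1, N)

print(factor_via_order(15, a=7))   # order of 7 mod 15 is 4, giving (3, 5)
```

Everything around the order-finding loop is cheap classical arithmetic, which is why efficient quantum order finding is enough to break RSA-style factoring assumptions.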
The JVM memory model describes how threads in the Java ecosystem interact through memory. While the memory model's impact on developing for the JVM may not be obvious, it is the cause of a certain number of "anomalies" that are, well, by design.
In this presentation we will explore the aspects of the memory model, including things like reordering of instructions, volatile members, monitors, atomics and JIT.
This document discusses using quantum-safe cryptography to protect against future quantum computers. It proposes a "hybrid" approach where a FIPS-approved classical algorithm is used for conformance while a quantum-safe algorithm is also used to provide long-term security. Specifically, it examines using the "OtherInfo" field when deriving keys to include a quantum-safe symmetric key as part of the key derivation process. This would allow quantum-safe encryption of data even when using a FIPS-approved scheme for key establishment and compliance. However, it is unclear if including symmetric keys in "OtherInfo" is permitted by standards.
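The "hybrid" idea can be sketched as follows. This is a hedged illustration, not the NIST SP 800-56A construction: the function names are invented and a single hash stands in for the standard's full KDF, but it shows the point that the derived data key depends on both the classically-established secret and a quantum-safe symmetric key carried in the OtherInfo input, so breaking the classical key exchange alone reveals nothing.

```python
import hashlib
import secrets

def derive_key(classical_secret: bytes, qs_key: bytes, length: int = 32) -> bytes:
    """Hybrid key derivation sketch: mix a quantum-safe symmetric key into
    the KDF's OtherInfo alongside the classical (e.g. ECDH) shared secret.
    One SHA-256 step stands in for the SP 800-56A KDF."""
    other_info = b"hybrid-kdf-demo|" + qs_key   # label is illustrative
    return hashlib.sha256(classical_secret + other_info).digest()[:length]

classical = secrets.token_bytes(32)   # from a FIPS-approved key establishment
qs = secrets.token_bytes(32)          # pre-shared quantum-safe symmetric key
data_key = derive_key(classical, qs)

# An attacker recovering only the classical secret cannot derive data_key:
assert data_key != derive_key(classical, secrets.token_bytes(32))
```

The compliance question the document raises remains: whether standards permit placing symmetric key material in OtherInfo is exactly the open issue, and this sketch takes no position on it.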
There are many modern techniques for identifying anomalies in datasets. There are fewer that work as online algorithms suitable for application to real-time streaming data. What’s worse? Most of these methodologies require a deep understanding of the data itself. In this talk, we tour what the options are for identifying anomalies in real-time data and discuss how much we really need to know before hand to guess at the ever-useful question: is this normal?
Building a system for machine and event-oriented data - Velocity, Santa Clara... - Eric Sammer
This talk was presented at O'Reilly's Velocity conference in Santa Clara, May 28 2015.
Abstract: http://velocityconf.com/devops-web-performance-2015/public/schedule/detail/42284
DEF CON 24 - Sean Metcalf - beyond the mcse red teaming active directory - Felipe Prado
The document discusses strategies for red teaming Active Directory security. It begins with an overview of Active Directory components and how they can be exploited by attackers. It then covers offensive PowerShell techniques and how PowerShell security can be bypassed. The document also provides methods for effective Active Directory reconnaissance, including discovering administrator accounts and network assets without port scanning. Finally, it discusses some Active Directory defenses that can be deployed and potential bypass techniques.
DEF CON 24 - Bertin Bervis and James Jara - exploiting and attacking seismolo... - Felipe Prado
The document discusses vulnerabilities found in seismological network devices that could allow remote exploitation. It begins with a disclaimer and agenda. The speakers are then introduced as researchers from Costa Rica who were interested in these networks due to potential attack scenarios. Through a search engine, they discovered vulnerabilities in devices from multiple vendors. The talk demonstrates taking control of a device and outlines impacts such as sabotage. Recommendations are made to vendors to improve security of these critical scientific instruments.
DEF CON 24 - Tamas Szakaly - help i got ants - Felipe Prado
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive function. Exercise causes chemical changes in the brain that may help protect against mental illness and improve symptoms for those who already suffer from conditions like anxiety and depression.
DEF CON 24 - Ladar Levison - compelled decryption - Felipe Prado
This document appears to be a preview of slides for a presentation at DEF CON 24 about compelled decryption. The slides discuss the difference between first and third parties in communications and the Communications Assistance for Law Enforcement Act. It also lists several third parties like technology companies and individuals that could potentially be compelled to decrypt communications. The document indicates that it is a preliminary version and the slides may be altered before the actual presentation.
Deep learning systems are susceptible to adversarial manipulation through techniques like generating adversarial samples and substitute models. By making small, targeted perturbations to inputs, an attacker can cause misclassifications or reduce a model's confidence without affecting human perception of the inputs. This is possible due to blind spots in how models learn representations that are different from human concepts. Defending against such attacks requires training models with adversarial techniques to make them more robust.
DEF CON 24 - Chris Rock - how to overthrow a government - Felipe Prado
This document outlines various strategies and tactics for overthrowing a government through covert and clandestine means, including cyber espionage, propaganda, agitation of public unrest, sabotage of critical infrastructure, and manipulation of financial systems. It discusses using mercenaries and private intelligence agencies to carry out these activities at an arm's length from sponsorship by nation states. Specific examples from Kuwait in 2011 are referenced to illustrate techniques for fomenting revolution through cyber and information operations.
DEF CON 24 - Fitzpatrick and Grand - 101 ways to brick your hardware - Felipe Prado
This document discusses various ways that hardware can become "bricked", or rendered unusable, through both software and hardware issues. It covers 101 different bricking scenarios across firmware, printed circuit boards, connectors, integrated circuits, and unexpected situations. Examples include wiping firmware, damaging traces on PCBs, breaking solder joints on connectors, applying too much voltage to ICs, and devices being bricked by environmental factors. The document provides tips for both bricking and avoiding bricking hardware, as well as techniques for potentially unbricking devices.
DEF CON 24 - Rogan Dawes and Dominic White - universal serial aBUSe remote at... - Felipe Prado
This document provides an overview of a talk on novel USB attacks that can provide remote command and control of even air-gapped machines with minimal forensic footprint. It describes building an open-source toolset using freely available hardware that implements a stealthy bi-directional communication channel over USB using a keyboard/mouse and generic HID profiles to deploy payloads and proxy traffic without touching the network. The talk will demonstrate attacking Windows systems by staging payloads in memory to avoid disk artifacts and establishing a VNC session without user interaction or malware deployment. Source code and documentation for the toolset, called USaBUSe, will be released on GitHub at Defcon.
DEF CON 24 - Jay Beale and Larry Pesce - phishing without frustration - Felipe Prado
The document discusses common challenges and failures that can occur when conducting phishing campaigns professionally. It outlines eleven stories of phishing failures caused by issues like poor scheduling, spam filters blocking emails, not having enough target email addresses, domain name choices being too obvious, and lack of communication. For each failure, it provides recommendations on how to avoid the problem by improving collaboration, communication, planning and negotiation with the client organization. The overall message is that success requires treating phishing engagements as multi-party negotiations and managing expectations through clear communication and involvement of all stakeholders.
The document discusses vulnerabilities found in human-machine interface (HMI) solutions used for industrial control systems. It details a case study of multiple stack-based buffer overflow vulnerabilities found in Advantech WebAccess through an RPC service that could allow remote code execution. The vulnerabilities were caused by improper validation of user-supplied input to functions like sprintf and strcpy. While patches were released, analysis showed the fixes did not fully address the underlying problems with input handling.
DEF CON 24 - Allan Cecil and DwangoAC - tasbot the perfectionist - Felipe Prado
This document summarizes tool-assisted speedruns (TAS) of video games. It discusses how emulators and tools are used to play games faster than humanly possible by deterministically recording every input. Advanced techniques like memory searching and scripting push games to their limits. Console verification devices were developed to play back TAS movies on original hardware. The document argues that TAS tools are like penetration testing tools and can be used to find and exploit vulnerabilities in games. It demonstrates an arbitrary code execution in Pokemon Red using an unintended opcode execution.
DEF CON 24 - Rose and Ramsey - picking bluetooth low energy locks - Felipe Prado
This document shows a graph with distance on the x-axis ranging from 0 to 35 meters and Received Signal Strength (RSS) in dBm on the y-axis ranging from -100 to -40 dBm. The graph contains a line for a model with a path loss exponent of 2.0 as well as scattered data points representing collected RSS measurements.
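The model line in the described graph follows the standard log-distance path-loss formula RSS(d) = RSS(d0) - 10·n·log10(d/d0) with path-loss exponent n = 2.0. The reference value of -40 dBm at 1 m below is an assumption for illustration; the slide does not state it.

```python
import math

def rss_model(d, rss0=-40.0, d0=1.0, n=2.0):
    """Log-distance path-loss model matching the described graph:
    RSS(d) = RSS(d0) - 10*n*log10(d/d0), path-loss exponent n = 2.0.
    rss0 = -40 dBm at d0 = 1 m is an assumed reference point."""
    return rss0 - 10 * n * math.log10(d / d0)

for d in (1, 5, 10, 35):
    print(f"{d:>2} m: {rss_model(d):.1f} dBm")
```

With n = 2.0 the signal drops 20 dB per decade of distance, which is why RSS-based distance estimates (as used in the lock-picking attack's range analysis) get coarse quickly: the scattered measurements in the graph deviate from this smooth line.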
This document provides an overview of a hands-on, turbocharged pragmatic cloud security training that covers topics in a fraction of the normal time by slimming down four days of material into four hours. It will cover configuring production-quality AWS accounts, building deployment pipelines, and automating security controls. Most examples will use Ruby and Python. Nearly all labs will be in AWS. There will be minimal slides and hand-holding. Participants are responsible for their own AWS bills after the training.
DEF CON 24 - Grant Bugher - Bypassing captive portals - Felipe Prado
The document discusses the results of a study on the impact of COVID-19 lockdowns on air pollution. Researchers analyzed data from dozens of countries and found that lockdowns led to an average decline of nearly 30% in nitrogen dioxide levels over cities. However, they also observed that this improvement was temporary and air pollution rebounded once lockdowns were lifted as vehicle traffic increased again. The short-term reductions could help policymakers design better emission control strategies in the future.
DEF CON 24 - Patrick Wardle - 99 problems little snitch - Felipe Prado
Little Snitch is a host-based firewall for macOS that intercepts connection attempts and allows the user to approve or deny them. The document discusses understanding, bypassing, and reversing Little Snitch. It provides an overview of Little Snitch's components and architecture, describes several methods for bypassing its network filtering, and examines techniques for interacting with and disabling Little Snitch's kernel extension through the I/O Kit framework.
DEF CON 24 - Plore - side-channel attacks on high security electronic safe l... - Felipe Prado
The document summarizes two presentations on cracking electronic safe locks. It discusses cracking a Sargent & Greenleaf 6120 safe lock by using a power analysis side-channel attack to recover the keycode stored in clear in the lock's EEPROM. It also discusses cracking a Sargent & Greenleaf Titan PivotBolt safe lock by using a timing side-channel attack to recover the keycode, and defeating the lock's incorrect code lockout feature.
DEF CON 24 - Six Volts and Haystack - cheap tools for hacking heavy trucks - Felipe Prado
This document discusses hacking heavy trucks and related networking protocols. It describes building a "Truck-in-a-Box" simulator to experiment with truck electronics and protocols like J1939 and J1708. The document outlines adventures in truck hacking, including modifying engine parameters, impersonating an engine control module, and exploiting bad cryptography. Details are provided on new hardware tools like the Truck Duck for analyzing truck communication networks.
DEF CON 24 - Dinesh and Shetty - practical android application exploitation - Felipe Prado
The document provides an overview of a workshop on practical Android application exploitation. The workshop aims to teach skills for performing reverse engineering, static and dynamic testing, and binary analysis of Android applications. It will use demonstrations and hands-on exercises with custom applications like InsecureBankv2. The workshop focuses on discovery and remediation, targeting intermediate to advanced skill levels. It will cover tools, techniques, and common vulnerabilities to exploit Android applications.
DEF CON 24 - Klijnsma and Tentler - stargate pivoting through vnc - Felipe Prado
The document discusses vulnerabilities in VNC implementations that allow unauthenticated access. It notes that a scan of the internet found over 335,000 VNC servers, with around 8,000 having no authentication. This lack of authentication allows attackers to access and "pivot" into internal networks. The document provides statistics on different VNC protocol versions found and describes exploits that could allow compromising devices to access additional internal systems through insecure VNC implementations and proxies.
DEF CON 24 - Antonio Joseph - fuzzing android devices - Felipe Prado
Droid-FF is an Android fuzzing framework that aims to automate the fuzzing process on Android devices. It uses Python scripts and integrates fuzzing tools like Peach and radamsa to generate test case data. The framework runs fuzzing campaigns on Android devices, processes the logs to identify crashes, verifies the crashes are unique, maps crashes to source code locations, and analyzes crashes for exploitability using a GDB plugin. The goal of Droid-FF is to make fuzzing easier on mobile devices and help find more crashes and potential vulnerabilities in Android applications and frameworks.
Fueling AI with Great Data with Airbyte Webinar - Zilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of these features provide convenience and capability at the cost of security. This best practices guide outlines steps users can take to better protect personal devices and information.
Webinar: Designing a schema for a Data Warehouse - Federico Razzoli
Are you new to data warehouses (DWH)? Do you need to check whether your data warehouse follows the best practices for a good design? In both cases, this webinar is for you.
A data warehouse is a central relational database that contains all measurements about a business or an organisation. This data comes from a variety of heterogeneous data sources, which include databases of any type backing the applications used by the company, data files exported by some applications, and APIs provided by internal or external services.
But designing a data warehouse correctly is a hard task, which requires first gathering information about the business processes that need to be analysed. These processes must then be translated into so-called star schemas: denormalised databases where each table represents either a dimension or facts.
We will discuss these topics:
- How to gather information about a business;
- Understanding dictionaries and how to identify business entities;
- Dimensions and facts;
- Setting a table granularity;
- Types of facts;
- Types of dimensions;
- Snowflakes and how to avoid them;
- Expanding existing dimensions and facts.
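A minimal star schema along the lines discussed above can be sketched with SQLite; all table and column names here are illustrative, not taken from the webinar.

```python
import sqlite3

# Minimal star schema sketch: one fact table referencing two denormalised
# dimension tables. Names and data are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date (
    date_id INTEGER PRIMARY KEY, day TEXT, month TEXT, year INTEGER);
CREATE TABLE dim_product (
    product_id INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE fact_sales (          -- grain: one row per product per day
    date_id INTEGER REFERENCES dim_date,
    product_id INTEGER REFERENCES dim_product,
    units_sold INTEGER, revenue REAL);
""")
conn.execute("INSERT INTO dim_date VALUES (1, '2024-06-01', 'June', 2024)")
conn.execute("INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware')")
conn.execute("INSERT INTO fact_sales VALUES (1, 1, 10, 99.9)")

# Typical analytical query: measures from the fact table, grouped by a
# dimension attribute.
total = conn.execute("""
    SELECT p.category, SUM(f.revenue) FROM fact_sales f
    JOIN dim_product p ON p.product_id = f.product_id
    GROUP BY p.category""").fetchone()
print(total)
```

Note how the dimensions stay flat (denormalised): month and year live directly in `dim_date` rather than in separate tables, which is exactly the snowflaking the webinar advises avoiding.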
Building Production Ready Search Pipelines with Spark and Milvus - Zilliz
Spark is a widely used ETL tool for processing, indexing, and ingesting data into the serving stack for search. Milvus is a production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data, extract vector representations, and push the vectors to the Milvus vector database for search serving.
OpenID AuthZEN Interop Read Out - Authorization - David Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
GraphRAG for Life Science to increase LLM accuracy - Tomaz Bratanic
GraphRAG for the life science domain, where you retrieve information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers.
What do a Lego brick and the XZ backdoor have in common? - Speck&Tech
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might seem to have in common only that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the XZ backdoor case have much more in common than that.
Join the presentation to dive into a story of interoperability, standards, and open formats, and then discuss the important role contributors play in a sustainable open source community.
BIO: Advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several LibreOffice-related events, migrations, and training courses. She previously worked on LibreOffice migrations and training for several public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when not pursuing her passion for computers and for Geeko she cultivates her curiosity about astronomy (hence her nickname, deneb_alpha).
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers - akankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Unlock the Future of Search with MongoDB Atlas: Vector Search Unleashed - Malak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Monitoring and Managing Anomaly Detection on OpenShift - Tosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.