Reservoir Computing Overview (with emphasis on Liquid State Machines)
1. Reservoir Computing with Emphasis on Liquid State Machines
Alex Klibisz
University of Tennessee
aklibisz@gmail.com
November 28, 2016
2. Context and Motivation
• Traditional ANNs are useful for non-linear problems, but struggle with temporal problems.
• Recurrent Neural Networks show promise for temporal problems, but the models are very complex and both difficult and expensive to train.
• Reservoir computing provides a neural network/microcircuit model that solves temporal problems with much simpler training.
3. Artificial Neural Networks [1]
• Useful for learning a non-linear mapping f(x_i) = y_i.
• The input layer takes vectorized input.
• Hidden layers transform the input.
• The output layer indicates something meaningful (e.g. a binary class, a distribution over classes).
• Trained by feeding in many examples to minimize some objective function.
[1] Image source: Wikimedia.
4. Feed-forward Neural Networks
• Information is passed in one direction from input to output.
• Each neuron has a weight w for each of its inputs and a single bias b.
• The weights, bias, and input are used to compute the neuron's output (sketched in code below): output = 1 / (1 + exp(−Σ_j w_j x_j − b)).
• Outputs are evaluated by an objective function (e.g. classification accuracy).
• The backpropagation algorithm adjusts w and b to minimize the objective function.
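A minimal sketch of that output computation (my illustration, not code from the deck), assuming a single sigmoid neuron:

```python
import numpy as np

def sigmoid_neuron(x, w, b):
    # Slide formula: output = 1 / (1 + exp(-sum_j w_j x_j - b))
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

# Example with three inputs and arbitrary weights/bias.
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.1, 0.4, -0.3])
b = 0.2
print(sigmoid_neuron(x, w, b))
```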
5. Recurrent Neural Networks
Figure: The network state and resulting output change with time. [2]
• Some data have temporal dependencies across inputs (e.g. time series, video, text, speech, movement).
• FFNNs assume inputs are independent and fail to capture this.
• Recurrent neural nets capture temporal dependencies by:
1. Allowing cyclic connections in the hidden layer.
2. Preserving internal state between inputs.
• Training is expensive; backpropagation-through-time is used to unroll all cycles and adjust neuron parameters.
[2] http://colah.github.io/posts/2015-08-Understanding-LSTMs/
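Those two mechanisms fit in a few lines; this is my sketch of a generic tanh recurrence, not a model from the slides:

```python
import numpy as np

def rnn_step(h, x, W_h, W_x, b):
    # Cyclic connection: the new state depends on the previous state h.
    return np.tanh(W_h @ h + W_x @ x + b)

# Example: 4-dimensional state, 3-dimensional inputs.
rng = np.random.default_rng(0)
W_h, W_x, b = rng.normal(size=(4, 4)), rng.normal(size=(4, 3)), np.zeros(4)
h = np.zeros(4)
for x in rng.normal(size=(10, 3)):   # a sequence of 10 inputs
    h = rnn_step(h, x, W_h, W_x, b)  # internal state is preserved between inputs
```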
6. Continuous Activation vs. Spiking Neurons
How does a neuron produce its output?
• Continuous activation neurons:
1. Compute an activation function using the inputs, weights, and bias.
2. Pass the result to all connected neurons.
• Spiking neurons (sketched after this list):
1. Accumulate and store inputs.
2. Only pass the result on if a threshold is exceeded.
• Advantages:
• It has been proven that spiking neurons can compute any function computed by sigmoidal neurons, with fewer neurons (Maass, 1997).
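A toy leaky integrate-and-fire neuron illustrating the accumulate-then-spike behavior; this is my sketch with illustrative parameters, not a model specified in the deck:

```python
def lif_run(inputs, leak=0.9, threshold=1.0):
    v, spikes = 0.0, []
    for i in inputs:
        v = leak * v + i      # accumulate and store input (with leak)
        if v >= threshold:    # only pass a result when the threshold is exceeded
            spikes.append(1)
            v = 0.0           # reset after spiking
        else:
            spikes.append(0)
    return spikes

print(lif_run([0.3, 0.4, 0.5, 0.1, 0.9]))  # -> [0, 0, 1, 0, 0]
```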
7. Conceptual Introduction
Figure: Reservoir Computing: construct a reservoir of random recurrent neurons and train a single readout layer.
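To make that concrete, here is a minimal echo-state-style sketch of the idea (my own illustration under common ESN assumptions, not code from the deck): the random recurrent reservoir stays fixed, and only a linear readout is fit.

```python
import numpy as np

rng = np.random.default_rng(42)
n_res, n_in = 100, 1

# Fixed random reservoir and input weights -- never trained.
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # common heuristic: spectral radius < 1
W_in = rng.normal(size=(n_res, n_in))

def run_reservoir(u):
    # Drive the reservoir with the input sequence u and collect its states.
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
u = np.sin(np.linspace(0, 8 * np.pi, 400))
X, y = run_reservoir(u[:-1]), u[1:]
W_out, *_ = np.linalg.lstsq(X, y, rcond=None)  # train only the readout
print(np.mean((X @ W_out - y) ** 2))           # training error
```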
8. History
Random networks with a trained readout layer
• Frank Rosenblatt, 1962; Geoffrey Hinton, 1981; Buonomano & Merzenich, 1995
Echo-State Networks
• Herbert Jaeger, 2001
Liquid State Machines
• Wolfgang Maass, 2002
Backpropagation-Decorrelation [3]
• Jochen Steil, 2004
Unifying as Reservoir Computing
• Verstraeten, 2007
[3] Applying concepts from RC to train RNNs.
9. Successful Applications
Broad Topics
• Robotics controls, object tracking, motion prediction, event detection, pattern classification, signal processing, noise modeling, time series prediction.
Specific Examples
• Venayagamoorthy, 2007 used an ESN as a wide-area power system controller with on-line learning.
• Jaeger, 2004 improved noisy time series prediction accuracy 2400x over previous techniques.
• Salehi, 2016 simulated a nanophotonic reservoir computing system with 100% speech recognition on the TIDIGITS dataset.
10. Liquid State Machines vs. Echo State Networks
Primary difference: neuron implementation.
• ESN: neurons do not hold charge; state is maintained using recurrent loops.
• LSM: neurons can hold charge and maintain internal state.
• The LSM formulation is general enough to encompass ESNs.
11. LSM Formal Definition [4][5]
• A filter maps between two functions of time, u(·) → y(·).
• A Liquid State Machine M is defined as M = ⟨L^M, f^M⟩: a filter L^M and a readout function f^M.
• The state at time t is defined as x^M(t) = (L^M u)(t). Read: the filter L^M applied to the input function u(·), evaluated at time t.
• The output at time t is defined as y(t) = (M u)(t) = f^M(x^M(t)). Read: the readout function f^M applied to the current state x^M(t).
[4] Maass, 2002
[5] Joshi, Maass 2004
12. LSM Requirements (Th. 3.1, Maass 2004)
Filters in L^M satisfy the point-wise separation property (PWSP)
A class CB of filters has the PWSP with regard to input functions from U^n if for any two functions u(·), v(·) ∈ U^n with u(s) ≠ v(s) for some s ≤ 0, there exists some filter B ∈ CB such that (Bu)(0) ≠ (Bv)(0).
Intuition: there exists a filter that can distinguish two different input functions from one another at the same time step.
The readout f^M satisfies the universal approximation property (UAP)
A class CF of functions has the UAP if for any m ∈ ℕ, any compact set X ⊆ ℝ^m, any continuous function h : X → ℝ, and any given ρ > 0, there exists some f ∈ CF such that |h(x) − f(x)| ≤ ρ for all x ∈ X.
Intuition: any continuous function on a compact domain can be uniformly approximated by functions from CF.
13. Examples of Filters and Readout Functions
Filters satisfying the point-wise separation property
• Linear filters with impulse response h(t) = e^(−at), a > 0
• All delay filters u(·) → u^(t_0)(·)
• Leaky integrate-and-fire neurons
• Threshold logic gates
Readout functions satisfying the universal approximation property
• Simple linear regression (see the sketch after this list)
• Simple perceptrons
• Support vector machines
14. Building and Training LSMs
In general
• Take inspiration from known characteristics of the brain.
• Perform search/optimization to find a configuration.
Example: simulated robotic arm movement (Joshi, Maass 2004)
• Inputs: the x, y target, 2 angles, and 2 prior torque magnitudes.
• Output: 2 torque magnitudes to move the arm.
• 600 neurons in a 20 × 5 × 6 grid.
• Connections chosen from a distribution favoring local connections.
• Neuron and connection parameters (e.g. firing threshold) chosen based on knowledge of rat brains.
• Readout trained to deliver torque values using linear regression.
15. Figure: Reservoir architecture and control loop from Joshi, Maass 2004.
16. Research Trends
1. Hardware implementations: laser optics and other novel hardware to implement the reservoir.
2. Optimizing reservoirs: analytical insight and techniques to optimize a reservoir for a specific task. (The current standard is intuition and manual search.)
3. Interconnecting modular reservoirs for more complex tasks.
17. Summary
• A randomly initialized reservoir and a simple trained readout mapping.
• Neurons in the reservoir and the readout map should satisfy two properties for universal computation.
• Particularly useful for tasks with temporal data/signals.
• More work remains to be done on optimization and hardware implementation.