This document provides an introduction to spiking neural networks (SNNs) through a presentation given by Jason Tsai. It begins with an overview of the characteristics and advantages of SNNs. It then covers relevant neuroscience concepts like neurons, synapses, action potentials, Hebb's rule, and spike-timing dependent plasticity. Learning algorithms like backpropagation and STDP are introduced. Common neuron models and coding schemes are described. Finally, several neuromorphic computing platforms are discussed.
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf - Asst. Prof. M. Gokilavani
UNIT I INTRODUCTION
Neural Networks-Application Scope of Neural Networks-Artificial Neural Network: An IntroductionEvolution of Neural Networks-Basic Models of Artificial Neural Network- Important Terminologies of
ANNs-Supervised Learning Network.
Introduction to Spiking Neural Networks: From a Computational Neuroscience pe... - Jason Tsai
Abstract:
As the third generation of neural network models, spiking neural networks are an interdisciplinary field of study spanning brain science, theoretical neuroscience, and artificial neural network research. The field has recently gained attention and momentum, especially in neuromorphic device design for real-time machine learning. Some of you might have heard of it, but its underlying principles probably remain unknown to most of you. In this talk, I will briefly illustrate the basic building blocks of this emerging architecture and technology.
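The basic building block the abstract refers to can be illustrated with a minimal sketch (my own toy example with made-up parameter values, not code from the talk): a leaky integrate-and-fire neuron accumulates input current, leaks back toward its resting potential, and emits a spike when the membrane potential crosses a threshold.

```python
# Minimal leaky integrate-and-fire (LIF) neuron. The constants are
# illustrative, not taken from the presentation.
def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0, leak=0.9):
    """Return the time steps at which the neuron spikes.

    Each step: the potential decays toward rest (leak), integrates the
    input, and resets to rest after crossing the threshold.
    """
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        v = v_rest + leak * (v - v_rest) + i_in  # leaky integration
        if v >= v_thresh:                        # threshold crossing
            spikes.append(t)
            v = v_rest                           # reset after the spike
    return spikes

# A constant sub-threshold input still produces spikes, because charge
# accumulates faster than it leaks away.
print(simulate_lif([0.3] * 20))
```

Note how information is carried by spike *timing* rather than by a continuous activation value, which is what separates this model from the units used in conventional ANNs.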
Big Data & Text Mining: Finding Nuggets in Mountains of Textual Data
A large amount of information is available in textual form in databases or online sources, and for many enterprise functions (marketing, maintenance, finance, etc.) it represents a huge opportunity to improve business knowledge. For example, text mining is starting to be used in marketing, more specifically in analytical customer relationship management, in order to achieve the holy grail of a 360° view of the customer (integrating elements from inbound mails, web comments, surveys, internal notes, etc.).
Facing this new domain, I did some personal research and produced a synthesis, which helped me clarify some ideas. The presentation below does not intend to be exhaustive on the subject, but may bring you some useful insights.
Artificial Intelligence, Machine Learning, Deep Learning
The 5 myths of AI
Deep Learning in action
Basics of Deep Learning
NVIDIA Volta V100 and AWS P3
This is a presentation I gave as a short overview of LSTMs. The slides are accompanied by two examples which apply LSTMs to Time Series data. Examples were implemented using Keras. See links in slide pack.
This Edureka Recurrent Neural Networks tutorial will help you understand why we need Recurrent Neural Networks (RNNs) and what exactly they are. It also explains a few issues with training a recurrent neural network and how to overcome those challenges using LSTMs. The last section includes a use case of an LSTM to predict the next word in a sample short story.
Below are the topics covered in this tutorial:
1. Why Not Feedforward Networks?
2. What Are Recurrent Neural Networks?
3. Training A Recurrent Neural Network
4. Issues With Recurrent Neural Networks - Vanishing And Exploding Gradient
5. Long Short-Term Memory Networks (LSTMs)
6. LSTM Use-Case
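The vanishing and exploding gradients of item 4 come down to repeated multiplication: backpropagation through T time steps multiplies T per-step derivative factors, so factors below 1 shrink the gradient geometrically while factors above 1 blow it up. A toy sketch with illustrative numbers (not from the tutorial):

```python
# Backpropagation through time multiplies one derivative factor per step.
# A factor below 1 makes the gradient vanish; above 1, it explodes.
def gradient_after(steps, factor):
    grad = 1.0
    for _ in range(steps):
        grad *= factor
    return grad

print(gradient_after(50, 0.9))  # shrinks toward 0 (vanishing)
print(gradient_after(50, 1.1))  # grows without bound (exploding)
```

LSTMs mitigate this by routing the cell state through an additive path whose gating keeps the effective per-step factor close to 1.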
The talk will focus on AGI (Artificial General Intelligence). Peter will give his thoughts and impressions on the next steps in this field and on the direction in which it should go.
Peter is an entrepreneur, AI Community Leader & Author of various Reports on AI.
This is an article about Generative AI. It discusses what Generative AI is, the different techniques used to create it, and its potential uses. Among the important points: Generative AI is still in its early stages but has already shown promising results, and it can be used to create fake data that is indistinguishable from real data.
https://www.ltimindtree.com/wp-content/uploads/2023/01/DeepPoV-Generative-AI.pdf
Basics of RNNs and their applications, with the following papers:
- Generating Sequences With Recurrent Neural Networks, 2013
- Show and Tell: A Neural Image Caption Generator, 2014
- Show, Attend and Tell: Neural Image Caption Generation with Visual Attention, 2015
- DenseCap: Fully Convolutional Localization Networks for Dense Captioning, 2015
- Deep Tracking- Seeing Beyond Seeing Using Recurrent Neural Networks, 2016
- Robust Modeling and Prediction in Dynamic Environments Using Recurrent Flow Networks, 2016
- Social LSTM- Human Trajectory Prediction in Crowded Spaces, 2016
- DESIRE- Distant Future Prediction in Dynamic Scenes with Interacting Agents, 2017
- Predictive State Recurrent Neural Networks, 2017
Recurrent Neural Networks (RNNs) are the reference class of deep learning models for learning from sequential data. Despite their widespread success, a major downside of RNNs and their commonly derived ‘gating’ variants (LSTM, GRU) is the high cost of the training algorithms involved. In this context, an increasingly popular alternative is the Reservoir Computing (RC) approach, which limits the training algorithm to operate only on a restricted set of (output) parameters. RC is appealing for several reasons, including its amenability to implementation on low-power edge devices, enabling adaptation and personalization in IoT and cyber-physical systems applications.
This webinar will introduce Reservoir Computing from scratch, covering all the fundamental design topics as well as good practices. It is targeted at both researchers and practitioners interested in setting up quickly trained deep learning models for sequential data.
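The RC idea described above can be sketched in a few lines (a toy echo state network with made-up sizes and a simple delta-rule readout for brevity, not the webinar's code; practical ESNs usually fit the readout with ridge regression): the recurrent reservoir weights are random and fixed, and only the linear output weights are trained.

```python
import math
import random

random.seed(0)

N = 20  # reservoir size (arbitrary for this sketch)

# Fixed random input and recurrent weights: never trained in RC.
w_in = [random.uniform(-0.5, 0.5) for _ in range(N)]
w_res = [[random.uniform(-0.2, 0.2) for _ in range(N)] for _ in range(N)]

def reservoir_states(inputs):
    """Drive the fixed reservoir and record its state at each step."""
    x = [0.0] * N
    states = []
    for u in inputs:
        x = [math.tanh(w_in[i] * u
                       + sum(w_res[i][j] * x[j] for j in range(N)))
             for i in range(N)]
        states.append(x)
    return states

# Toy task: reproduce the previous input (requires short-term memory).
seq = [random.uniform(-1.0, 1.0) for _ in range(150)]
states = reservoir_states(seq)
targets = [0.0] + seq[:-1]

def mse(w_out):
    errs = [(sum(w_out[i] * x[i] for i in range(N)) - y) ** 2
            for x, y in zip(states, targets)]
    return sum(errs) / len(errs)

# Training touches ONLY the linear readout weights.
w_out = [0.0] * N
before = mse(w_out)
for _ in range(50):
    for x, y in zip(states, targets):
        err = sum(w_out[i] * x[i] for i in range(N)) - y
        for i in range(N):
            w_out[i] -= 0.1 * err * x[i]
print("readout error reduced:", mse(w_out) < before)
```

Because the expensive recurrent part is never trained, fitting reduces to a linear problem over the recorded states, which is what makes RC cheap enough for edge devices.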
PyCon Korea 2020: An artificial brain simulator based on neuron cells, implemented in Python - Seonghyun Kim
* Presentation slides from PyCon Korea 2020.
Brain science is the root of modern artificial neural networks!
This talk presents a brain-science perspective on artificial neural networks and shares a case study of a Python-based neuromorphic network model that mimics the firing of brain cells.
A neuromorphic network is not simply conventional deep learning with only the cell structure changed.
Brain simulation makes it possible to overcome biological limitations that make real experiments difficult, and it can play a crucial role in uncovering the brain's information-processing mechanisms and in identifying drug targets for various brain diseases.
I hope this talk inspires new ideas for the many researchers working on machine learning.
Elective Neural Networks. I. The boolean brain. On a Heuristic Point of V... - ABINClaude
This two-part article proposes a new approach to understanding neuronal mechanisms, still unexplained despite the immense progress in neuroscience since the 1940s.
The first part ("The boolean brain") first presents a brief history of the steps leading to the Convolutional Networks that now rival the performance of the human visual system. The biological plausibility of these networks is examined, leading to the paradoxical conclusion that McCulloch and Pitts' logical model was a correct approach and that it has been underestimated.
A new model of neural networks, the Elective Neural Networks (ENN), is proposed on this basis, inspired by the Theory of Epigenesis by selective stabilization of synapses (Changeux et al., 1973) [1], and equipped with a logical learning mechanism by synapse elimination. Its capacity to form large-sized networks is examined, taking into account connectivity constraints, and its biological plausibility is defended, including the issue of the binary synapse.
The second part ("The orthogonal brain") proposes a neuronal mechanism with an explanation of the learning curve in a classical conditioning: the proboscis extension reflex in the Apis Mellifera bee. A reinforcement learning mechanism is added to the ENN model, applying to both classical and operant conditioning. A general hypothesis on the implementation of effector control in a brain is deduced, in which no individual synapse is genetically programmed.
The slide format was chosen for this paper because of its ability to represent complex dynamic phenomena.
PowerPoint slides from a 2015 Guest Lecture in PSYCH-268A: Computational Neuroscience, Prof. Jeff Krichmar, University of California, Irvine (UCI).
Corresponding publication:
Beyeler*, M., Carlson*, K. D. , Chou*, T-S., Dutt, N., Krichmar, J. L. (2015). CARLsim 3: A user-friendly and highly optimized library for the creation of neurobiologically detailed spiking neural networks. Proceedings of IEEE International Joint Conference on Neural Networks (IJCNN), Killarney, Ireland. (*equal contribution)
ANALYSIS ON MACHINE CELL RECOGNITION AND DETACHING FROM NEURAL SYSTEMS - IAEME Publication
One of the major components of a production system is the arrangement, which can considerably affect the cost of internal material handling as well as the flexibility, efficiency, and supervision of the plant. To cut the cost of warehouse management and setup time, cellular manufacturing is a technique that organizes the equipment needed to produce similar products into unit cells. In conjunction with traditional nonlinear regression or cluster analysis techniques, neural networks are widely used for quantitative analysis and information modeling. They are typically applied in this way to problems that can be stated in terms of categorization or measurement. This work examines three different ANN algorithms: the BP network, the KSOM network, and the ART1 network. These nonlinear ANN methods are applied to MPIM cell formation and proportionate cellular development, taking manufacturing considerations into account.
Robust Feature Learning with Deep Neural Networks
http://snu-primo.hosted.exlibrisgroup.com/primo_library/libweb/action/display.do?tabs=viewOnlineTab&doc=82SNU_INST21557911060002591
Artificial Neural Network and its Applications - shritosh kumar
Abstract
This report is an introduction to Artificial Neural Networks. The various types of neural networks are explained and demonstrated, applications of neural networks like ANNs in medicine are described, and a detailed historical background is provided. The connection between the artificial and the real thing is also investigated and explained. Finally, the mathematical models involved are presented and demonstrated.
Artificial neural networks are a fundamental means of attempting to model the information-processing capabilities of the nervous system, and they play an important role in the field of cognitive science. This paper focuses on the features of artificial neural networks by reviewing existing research; these features were then assessed, evaluated, and compared. Metrics such as the functional capabilities of neurons, learning capabilities, style of computation, processing elements, processing speed, connections, strength, information storage, information transmission, communication media selection, signal transduction, and fault tolerance were used as the basis for comparison. A major finding of this paper is that artificial neural networks serve as the platform for neuron-computing technology in the field of cognitive science.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
A short talk for a forum held by Taiwan Association for Human Rights: https://www.tahr.org.tw/event/2670
Video: https://youtu.be/-hYQRHqyR9g (28:10 - 50:35)
Lecture for Neural Networks study group held on February 8, 2020.
Reference book: http://hagan.okstate.edu/nnd.html
Video: https://youtu.be/TyyoPU13ME0
Python demo codes: https://bit.ly/3893GHB
Initiated by Taiwan AI Group (https://www.facebook.com/groups/Taiwan.AI.Group/permalink/2017771298545301/)
Lecture for Neural Networks study group held on January 11, 2020.
Reference book: http://hagan.okstate.edu/nnd.html
Video: https://youtu.be/H4NKgliTFUw
Initiated by Taiwan AI Group (https://www.facebook.com/groups/Taiwan.AI.Group/permalink/2017771298545301/)
Lecture for Reinforcement Learning study group held on August 19th, 2017.
Reference book: http://incompleteideas.net/book/the-book.html
Video: https://youtu.be/xv5ZsOSf6ZQ
Initiated by Taiwan AI Group (https://www.facebook.com/groups/Taiwan.AI.Group/permalink/1796526840669749/)
Deep Learning: Chapter 11 Practical Methodology - Jason Tsai
Lecture for Deep Learning 101 study group to be held on June 9th, 2017.
Reference book: https://www.deeplearningbook.org/
Past video archives: https://goo.gl/hxermB
Initiated by Taiwan AI Group (https://www.facebook.com/groups/Taiwan.AI.Group/)
Deep Learning: Introduction & Chapter 5 Machine Learning Basics - Jason Tsai
Given lecture for Deep Learning 101 study group with Frank Wu on Dec. 9th, 2016.
Reference: https://www.deeplearningbook.org/
Initiated by Taiwan AI Group (https://www.facebook.com/groups/Taiwan.AI.Group/)
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf - Paige Cruz
Monitoring and observability aren’t traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company’s observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to ops, infra, and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf - Peter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
A tale of scale & speed: How the US Navy is enabling software delivery from l... - sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 4. In this session, we will cover a Test Manager overview along with the SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Removing Uninteresting Bytes in Software Fuzzing - Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speed up fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
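As a toy illustration of the general idea only (this is not DIAR's actual algorithm, and `toy_program` is a hypothetical stand-in for a fuzzing target): a byte position can be flagged as uninteresting if mutating it never changes the program's observable behavior, so effort spent mutating it is wasted.

```python
import random

# Toy sketch of flagging "uninteresting" seed bytes: positions whose
# mutation never changes the target's observable behavior. This only
# illustrates the general idea; it is not the DIAR algorithm itself.
def toy_program(data):
    """Hypothetical target whose behavior depends only on bytes 0 and 3."""
    return (data[0] % 2, data[3] > 100)

def uninteresting_bytes(program, seed, trials=32):
    """Return positions where random byte mutations never change behavior."""
    random.seed(1)  # deterministic for the demonstration
    baseline = program(seed)
    dull = []
    for pos in range(len(seed)):
        changed = False
        for _ in range(trials):
            mutated = bytearray(seed)
            mutated[pos] = random.randrange(256)
            if program(mutated) != baseline:
                changed = True
                break
        if not changed:
            dull.append(pos)
    return dull

seed = bytearray([7, 0, 0, 200, 0, 0])
print(uninteresting_bytes(toy_program, seed))
```

A fuzzer that skips the flagged positions spends its mutation budget only on bytes that can actually steer execution, which is the intuition behind starting campaigns from lean seeds.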
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Securing your Kubernetes cluster_ a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We also held a lovely workshop with the participants, trying to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
Introduction to Spiking Neural Networks: From a Computational Neuroscience perspective
1. Jason Tsai (蔡志順)
Oct. 19, 2019 @Mozilla Community Space Taipei
*Picture adopted from
https://bit.ly/2ts8xCk
Introduction to Spiking Neural Networks
2. *Copyright Notice:
All figures in this presentation are taken from
the quoted sources as mentioned in the
respective slides and their copyright belongs
to the owners. This presentation itself adopts
Creative Commons license.
3. Neural Networks 3D Simulation
(Video demo)
*Video from https://youtu.be/3JQ3hYko51Y
4. Questions
What are the advantages of spiking
neural networks and neuromorphic
computing?
What are current challenges of spiking
neural networks (SNNs)?
10. Neuron’s Spike: Action Potential
*Figure adopted from https://en.wikipedia.org/wiki/Action_potential &
The front cover of “Spikes: Exploring the Neural Code (1999)”
12. The Effect of Presynaptic Spikes on
Postsynaptic Neuron
*Figure adopted from Wulfram Gerstner & Werner M. Kistler. Spiking Neuron Models:
Single Neurons, Populations, Plasticity. Cambridge University Press. 2002. Page 5.
13. Hebb’s Learning Postulate
"When an axon of cell A is near enough to excite a cell B and
repeatedly or persistently takes part in firing it, some growth
process or metabolic change takes place in one or both cells such
that A's efficiency, as one of the cells firing B, is increased."*
* Refer to Donald O. Hebb, The Organization of Behavior: A Neuropsychological Theory. 1949 & 2002. Page 62.
Causality
Repetition
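Hebb's postulate is often paraphrased as "cells that fire together wire together." A minimal rate-based sketch of the idea, in which a weight grows only when presynaptic and postsynaptic activity coincide (the learning rate and variable names here are illustrative, not from the slides):

```python
# Rate-based Hebbian update: dw = eta * pre * post.
# The weight strengthens only when presynaptic and postsynaptic
# activity occur together, capturing "repetition" and "causality".

def hebbian_update(w, pre, post, eta=0.1):
    """One Hebbian step for a single synapse with rate activities."""
    return w + eta * pre * post

w = hebbian_update(0.5, pre=1.0, post=1.0)  # both cells active
assert w > 0.5                              # synapse strengthened
assert hebbian_update(0.5, pre=1.0, post=0.0) == 0.5  # no change
```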
14. Long-Term Potentiation (LTP) / Long-
Term Depression (LTD)
LTP is a long-lasting, activity-dependent increase in synaptic
strength that is a leading candidate as a cellular mechanism
contributing to memory formation in mammals in a very
broadly applicable sense.*
* Refer to J. David Sweatt. Mechanisms of Memory, Second Edition. Academic Press. 2010. Page 112.
15. Synaptic Plasticity
*Figure adopted from Wulfram Gerstner & Werner M. Kistler. Spiking Neuron Models:
Single Neurons, Populations, Plasticity. Cambridge University Press. 2002. Page 353.
16. Back-propagating Action Potential (bAP)
*Further reading: https://en.wikipedia.org/wiki/Neural_backpropagation
Induction of tLTP requires activation of the presynaptic
input milliseconds before the bAP in the postsynaptic
dendrite.
17. *Figure adopted from https://doi.org/10.3389/fnsyn.2011.00004
Spike-Timing-Dependent Plasticity
(STDP)
18. Experimental Evidence of STDP
From Wikipedia:
“Henry Markram, when he was in Bert Sakmann's lab and published their
work in 1997, used dual patch clamping techniques to repetitively
activate pre-synaptic neurons 10 milliseconds before activating the post-
synaptic target neurons, and found the strength of the synapse
increased. When the activation order was reversed so that the pre-
synaptic neuron was activated 10 milliseconds after its post-synaptic
target neuron, the strength of the pre-to-post synaptic connection
decreased.
Further work, by Guoqiang Bi, Li Zhang, and Huizhong Tao in Mu-Ming
Poo's lab in 1998, continued the mapping of the entire time course
relating pre- and post-synaptic activity and synaptic change, to show that
in their preparation synapses that are activated within 5-20 ms before a
postsynaptic spike are strengthened, and those that are activated within a
similar time window after the spike are weakened.”
*Further reading: https://en.wikipedia.org/wiki/Spike-timing-dependent_plasticity
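The pairing results above are commonly summarized by an exponential STDP window: causal pre-before-post pairings potentiate, the reverse order depresses, with the effect decaying as the spikes move apart in time. A minimal sketch; the amplitudes and time constants are illustrative values, not the experimental fits:

```python
import math

# Pair-based STDP window (the exponential form commonly fit to
# pairing data such as Bi & Poo 1998). Parameters are illustrative.
A_PLUS, A_MINUS = 0.01, 0.012
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants in ms

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post: causal pairing -> LTP
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    elif dt < 0:  # post fires before pre: acausal pairing -> LTD
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

# Pre 10 ms before post strengthens; the reversed order weakens,
# matching the Markram 1997 protocol quoted above.
assert stdp_dw(0.0, 10.0) > 0
assert stdp_dw(10.0, 0.0) < 0
```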
20. Lateral Inhibition
Lateral inhibition is a central nervous system process whereby a
stimulus applied to the center of a neuron's receptive field excites
the neuron, while a stimulus applied near the edge inhibits it.
*Figure adopted from https://bit.ly/2yaat37
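In its extreme form, lateral inhibition implements winner-take-all competition: the most strongly driven neuron suppresses its neighbors. A toy sketch of that limiting case (the function name and input values are illustrative):

```python
# Winner-take-all as an idealized limit of lateral inhibition:
# the most excited neuron stays active, all neighbors are silenced.

def winner_take_all(drives):
    """Return a 0/1 activity vector with only the strongest unit on."""
    winner = max(range(len(drives)), key=lambda i: drives[i])
    return [1 if i == winner else 0 for i in range(len(drives))]

assert winner_take_all([0.2, 0.9, 0.4]) == [0, 1, 0]
```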
23. Dopamine: Essential for Reward
Processing in Mammalian Brain
*Figure adopted from http://www.jneurosci.org/content/29/2/444
Dopamine neurons form a huge number of synaptic contacts with their targets!
25. Two Hot Approaches
Supervised: Stochastic Gradient Descent
based Backpropagation learning rule
(Treat the membrane potentials of spiking neurons as
differentiable signals, where discontinuities at spike
times are considered as noise.*)
Unsupervised: STDP (Spike-Timing-
Dependent Plasticity) based learning rule
*Refer to Jun Haeng Lee, et al., Training Deep Spiking Neural Networks Using Backpropagation.
Frontiers in Neuroscience, 08 November 2016. https://doi.org/10.3389/fnins.2016.00508
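The supervised approach above hinges on differentiating through the spike nonlinearity: the forward pass keeps the hard threshold, while the backward pass substitutes a smooth stand-in derivative (a "surrogate gradient"). A minimal sketch; the fast-sigmoid surrogate and its steepness `beta` are illustrative choices, not prescribed by the cited paper:

```python
# Surrogate-gradient idea: hard threshold forward, smooth
# derivative backward, so gradient descent can pass through
# the otherwise non-differentiable spike.

def spike_forward(v, threshold=1.0):
    """Hard threshold: emit 1 if the membrane potential crosses it."""
    return 1.0 if v >= threshold else 0.0

def spike_surrogate_grad(v, threshold=1.0, beta=5.0):
    """Smooth stand-in for the Heaviside derivative (fast sigmoid)."""
    x = beta * (v - threshold)
    return beta / (1.0 + abs(x)) ** 2

# The true derivative is zero almost everywhere; the surrogate
# peaks at the threshold, where weight changes matter most.
assert spike_forward(1.2) == 1.0
assert spike_surrogate_grad(1.0) > spike_surrogate_grad(2.0)
```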
26. *Refer to Yu, Q., Tang, H., Hu, J., Tan, K.C., Neuromorphic Cognitive Systems: A Learning and Memory
Centered Approach. Springer International Publishing. 2017. Page 9.
STDP Learning Rule
27. STDP Learning Rule (1-to-1)
*Figure adopted from http://dx.doi.org/10.7551/978-0-262-33027-5-ch037
28. STDP Learning Rule (2-to-1)
N0 is stimulated until N1 fires, then e0 is stopped for 30 ms.
N2 is stimulated by e2 during those 30 ms.
*Figure adopted from http://dx.doi.org/10.7551/978-0-262-33027-5-ch037
29. STDP Finds Spike Patterns
*Figure adopted from https://doi.org/10.1371/journal.pone.0001377
34. 1st Generation of Neuron Models
(McCulloch–Pitts Neuron Model)
*Figure adopted from http://wwwold.ece.utep.edu/research/webfuzzy/docs/kk-thesis/kk-thesis-html/node12.html
35. 2nd Generation of Neuron Models
*Figure adopted from http://cs231n.github.io/neural-networks-1/
36. 3rd Generation of Neuron Models
(Spiking Neuron Models)
*Figure adopted from http://kzyjc.cnjournals.com/html/2018/5/20180512.htm
37. Spiking Neuron Models
Miscellaneous models (integrators / resonators):
Hodgkin-Huxley model
Izhikevich model
Leaky Integrate-and-Fire (LIF) model
Resonate-and-Fire model
Spike Response model (SRM)
……
*Further reading: https://en.wikipedia.org/wiki/Biological_neuron_model
& http://www.scholarpedia.org/article/Spike-response_model
38. Hodgkin-Huxley Model
*Figure adopted from Wulfram Gerstner & Werner M. Kistler. Spiking Neuron Models: Single Neurons,
Populations, Plasticity. Cambridge University Press. 2002. Page 34.
42. Leaky Integrate-and-Fire Model
*Figure adopted from Wulfram Gerstner, Werner M. Kistler, Richard Naud and Liam Paninski “Neuronal Dynamics:
From Single Neurons to Networks and Models of Cognition” Cambridge University Press. 2014. Page 11.
43. The Firing of a Leaky Integrate-and-
Fire Model Neuron
*Figure adopted from https://doi.org/10.1371/journal.pone.0001377
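The LIF dynamics can be integrated with a simple Euler scheme: the membrane potential leaks toward rest, integrates the input current, and is reset after crossing threshold. A minimal sketch with illustrative constants (not taken from the quoted figures):

```python
# Euler integration of a leaky integrate-and-fire neuron:
#   tau * dV/dt = -(V - V_rest) + R * I
# with a reset to V_rest whenever V crosses the threshold.

def simulate_lif(current, tau=10.0, v_rest=0.0, v_th=1.0,
                 r=1.0, dt=1.0, steps=100):
    """Return spike times (in steps) for a constant input current."""
    v, spikes = v_rest, []
    for t in range(steps):
        v += dt / tau * (-(v - v_rest) + r * current)  # leak + drive
        if v >= v_th:        # threshold crossed: emit a spike...
            spikes.append(t)
            v = v_rest       # ...and reset the membrane potential
    return spikes

# A sub-threshold current (R*I < V_th) never makes the neuron fire;
# a supra-threshold current produces regular spiking.
assert simulate_lif(0.5) == []
assert len(simulate_lif(2.0)) > 1
```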
50. Sparse Coding
*Figure adopted from http://brainworkshow.sparsey.com/measuring-similarity-in-localist-vs-distributed-representations/
51. Sparse Coding with Inhibitory Neurons
Population sparseness: Few neurons are
active at any given time
Lifetime sparseness: Individual neurons
are responsive to few specific stimuli
*Figure adopted from https://doi.org/10.1523/JNEUROSCI.4188-12.2013
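The two notions above can be made concrete on a toy activity matrix, here using simple active-fraction measures as a simplified stand-in for standard sparseness indices such as Treves-Rolls:

```python
# Toy activity matrix: rows = stimuli, columns = neurons; 1 = active.
activity = [
    [1, 0, 0, 0],  # stimulus A activates neuron 0 only
    [0, 1, 0, 0],  # stimulus B activates neuron 1 only
    [0, 0, 1, 0],  # stimulus C activates neuron 2 only
]

def population_sparseness(row):
    """Fraction of neurons active for one stimulus (lower = sparser)."""
    return sum(row) / len(row)

def lifetime_sparseness(matrix, neuron):
    """Fraction of stimuli that activate one neuron (lower = sparser)."""
    col = [row[neuron] for row in matrix]
    return sum(col) / len(col)

# Each stimulus recruits 1 of 4 neurons; each active neuron answers
# 1 of 3 stimuli -- sparse in both the population and lifetime sense.
assert population_sparseness(activity[0]) == 0.25
assert lifetime_sparseness(activity, 0) == 1 / 3
```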
59. ANN-to-SNN Conversion
Train ANNs with standard supervised techniques such as
backpropagation to leverage the superior performance of trained
ANNs, then convert them to event-driven SNNs for inference on a
neuromorphic platform.
The rates of the rate-encoded spike trains are approximately
proportional to the magnitudes of the original ANN inputs.
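The rate encoding used in such conversions is often implemented as Poisson-like sampling: each normalized ANN activation becomes a spike train whose firing probability per time step equals the activation value. A minimal sketch; the window length and seed are illustrative:

```python
import random

# Poisson-style rate encoding: an input value in [0, 1] becomes a
# 0/1 spike train whose firing rate is approximately proportional
# to that value.

def rate_encode(value, n_steps=1000, rng=random.Random(0)):
    """Spike train with per-step firing probability `value`."""
    return [1 if rng.random() < value else 0 for _ in range(n_steps)]

low = sum(rate_encode(0.1)) / 1000   # roughly 0.1 spikes per step
high = sum(rate_encode(0.8)) / 1000  # roughly 0.8 spikes per step
assert low < high  # larger ANN activation -> higher spike rate
```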
62. Further Reading
Wulfram Gerstner & Werner M. Kistler, “Spiking Neuron Models:
Single Neurons, Populations, Plasticity”. Cambridge University
Press (2002)
Wulfram Gerstner, Werner M. Kistler, Richard Naud and Liam
Paninski, “Neuronal Dynamics: From Single Neurons to Networks
and Models of Cognition”. Cambridge University Press (2014)
Eugene M. Izhikevich, "Dynamical Systems in Neuroscience: The
Geometry of Excitability and Bursting". The MIT Press (2007)
Nikola K. Kasabov, “Time-Space, Spiking Neural Networks and
Brain-Inspired Artificial Intelligence”. Springer International
Publishing (2018)
蔺想红 (Lin Xianghong) & 王向文 (Wang Xiangwen), "脉冲神经网络原理及应用" (Principles and Applications of Spiking Neural Networks). 科学出版社 (Science Press) (2018)