Deep Learning for Information Retrieval: Models, Progress, & Opportunities, by Matthew Lease
Talk given at the 8th Forum for Information Retrieval Evaluation (FIRE, http://fire.irsi.res.in/fire/2016/), December 10, 2016, and at the Qatar Computing Research Institute (QCRI), December 15, 2016.
https://www.youtube.com/watch?v=5ZUlVlumIQo&list=PLqJzTtkUiq54DDEEZvzisPlSGp_BadhNJ&index=10
In recent years, deep learning has advanced rapidly, with impressive results in several areas including computer vision, machine translation, and speech recognition. Deep learning attempts to learn complex functions through hierarchical representations of data. A deep learning model is composed of non-linear modules, each of which transforms the representation at a lower layer into a higher, more abstract one. Very complex functions can be learned by composing enough of these non-linear modules. Furthermore, the need for manual feature engineering can be obviated by learning the features themselves through representation learning. In this talk, we first explain how deep learning architectures in particular, and neural networks in general, are loosely inspired by the mammalian visual cortex and nervous system, respectively. We also discuss the reasons for the big and successful comeback of neural networks in the form of deep learning models. Finally, we give a brief introduction to various deep architectures and their applications across several domains.
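The core idea of stacking non-linear modules can be sketched in a few lines (a toy illustration only; the layer sizes, tanh non-linearity, and random weights are assumptions, not from the talk):

```python
import numpy as np

def layer(x, W, b):
    # One non-linear module: affine transform followed by a non-linearity.
    return np.tanh(W @ x + b)

rng = np.random.default_rng(0)
x = rng.normal(size=4)                     # raw input representation
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

h = layer(x, W1, b1)                       # lower-layer representation
y = layer(h, W2, b2)                       # higher, more abstract representation
print(y.shape)                             # (3,)
```

Each call to `layer` is one of the composed non-linear modules; deeper models simply chain more of them.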
References:
LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. "Deep learning." Nature 521.7553 (2015): 436-444.
Socher, Richard, Yoshua Bengio, and Chris Manning. "Deep learning for NLP." Tutorial at the Association for Computational Linguistics (ACL), 2012, and the North American Chapter of the Association for Computational Linguistics (NAACL), 2013.
Lee, Honglak. "Tutorial on deep learning and applications." NIPS 2010 Workshop on Deep Learning and Unsupervised Feature Learning. 2010.
LeCun, Yann, and M. Ranzato. "Deep learning tutorial." Tutorials in International Conference on Machine Learning (ICML’13). 2013.
Socher, Richard, et al. "Recursive deep models for semantic compositionality over a sentiment treebank." Proceedings of the conference on empirical methods in natural language processing (EMNLP). Vol. 1631. 2013.
https://www.youtube.com/channel/UC9OeZkIwhzfv-_Cb7fCikLQ
https://www.udacity.com/course/deep-learning--ud730
http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/
Handwritten Recognition using Deep Learning with R, by Poo Kuan Hoong
R User Group Malaysia Meet Up - Handwritten Recognition using Deep Learning with R
Source code available at: https://github.com/kuanhoong/myRUG_DeepLearning
Zero-Shot Learning Through Cross-Modal Transfer, by Roelof Pieters
A review of the paper "Zero-Shot Learning Through Cross-Modal Transfer" by Richard Socher, Milind Ganjoo, Hamsa Sridhar, Osbert Bastani, Christopher D. Manning, and Andrew Y. Ng, presented at KTH's Deep Learning reading group:
www.csc.kth.se/cvap/cvg/rg/
Topics we will discuss:
1. Introduction to machine learning and deep learning.
2. Applications of ML and DL.
3. Various learning algorithms in ML and DL.
4. A quick introduction to open-source solutions across programming languages.
5. Finally, a broad picture of what you can do with deep learning in the tech world.
Deep Learning: Evolution of ML from Statistical to Brain-like Computing - Data..., by Impetus Technologies
Presentation on 'Deep Learning: Evolution of ML from Statistical to Brain-like Computing'
Speaker: Dr. Vijay Srinivas Agneeswaran, Director, Big Data Labs, Impetus
The main objective of the presentation is to give an overview of our cutting-edge work on realizing distributed deep learning networks over GraphLab. The objectives can be summarized as follows:
- First-hand experience and insights into implementation of distributed deep learning networks.
- Thorough view of GraphLab (including descriptions of code) and the extensions required to implement these networks.
- Details of how the extensions were realized/implemented in GraphLab source – they have been submitted to the community for evaluation.
- An arrhythmia detection use case as an application of the large-scale distributed deep learning network.
Deep learning: the future of recommendations, by Balázs Hidasi
An informative talk about deep learning and its potential uses in recommender systems. Presented at the Budapest Startup Safary, 21 April, 2016.
The breakthroughs of the last decade in neural network research, together with the rapid increase in computational power, resulted in the revival of deep neural networks and of the field focused on their training: deep learning. Deep learning methods have succeeded in complex tasks where other machine learning methods have failed, such as computer vision and natural language processing. Recently, deep learning has begun to gain ground in recommender systems as well. This talk introduces deep learning and its applications, with emphasis on how deep learning methods can solve long-standing recommendation problems.
AILABS - Lecture Series - Is AI the New Electricity? - Advances In Machine Le..., by AILABS Academy
Prof. Garain briefly discusses the background of learning algorithms and the major breakthroughs made in the field of machine perception over the last 50 years. He also discusses the role of statistical algorithms such as artificial neural networks and support vector machines, and other concepts related to deep learning algorithms.
Along with the above, Prof. Garain touched upon the basics of CNNs and RNNs, Long Short-Term Memory (LSTM) networks, and attention networks, and illustrated all of these using real-life problems. Several state-of-the-art problems, such as image captioning, visual question answering, and medical image analysis, were discussed to convey the potential of deep learning algorithms.
Prof. Utpal Garain is one of the leading minds in Kolkata in the field of neural networks and artificial intelligence. His research is now focused on AI, especially exploring deep learning methods for language, image, and video analysis, including NLP tools, OCR, handwriting analysis, computational forensics, and the like.
Feature Selection using Deep Neural Networks - March 18, 2016, CSI 991, Kevin Ham
2. Neural Networks Basics
• Perceptron: performance equivalent to the least mean squares algorithm (linear regression)
• Activation function: sigmoid, hyperbolic tangent
• Multi-layer perceptrons: chains of perceptrons that perform feature extraction
• Training the network: training set, validation set, generalization set; back-propagation
3. Perceptron and Activation Function
The basic building block of neural networks (1):
• Summation of weighted inputs
• The bias shifts the y-intercept
• Output is produced when the activation threshold is overcome
• The activation function must be differentiable
(Diagram: Inputs 1-3, weighted by w1, w2, w3, plus a bias, feed an activation function that produces the output.)
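The perceptron unit described on this slide can be sketched as follows (a minimal illustration; the specific weights, inputs, and choice of sigmoid are assumptions for the example):

```python
import math

def perceptron(inputs, weights, bias):
    # Summation of weighted inputs plus bias, passed through a
    # differentiable activation function (here, the sigmoid).
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))

# Three inputs with weights w1, w2, w3, as in the slide's diagram.
out = perceptron([1.0, 0.5, -1.0], [0.2, 0.4, 0.1], bias=-0.1)
print(round(out, 3))
```

With a zero weighted sum the sigmoid outputs exactly 0.5, i.e. the unit sits on its activation threshold.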
4. Multi-Layer Perceptrons and Training
• Classification with a 20-node MLP NN (4)
• Feature extraction with a 5-layer convolutional neural network (2)
• Feature extraction with an MLP NN (4)
5. Article Objectives
"… we propose a supervised approach for task-aware selection of features using Deep Neural Networks (DNN) in the context of action recognition (e.g. walking, running, jumping)." (1)
"… selected features are found to give better classification performance than the original high-dimensional features." (1)
"It is also shown that the classification performance of the proposed feature selection technique is superior to the low-dimensional representation obtained by principal component analysis (PCA)." (1)
6. Methodology
"… analyze the contribution of each of the input dimensions to identify the features (inputs) important for classification." (1)
"… to correctly analyze the contribution of an input feature, we study its activation potential (averaged over all training values of the input and hidden neurons) relative to the total activation potential." (1)
"The higher the activation potential contribution of an input dimension, the more likely is its participation in hidden neuronal activity and consequently, classification." (1)
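A rough sketch of this activation-potential idea (a toy reconstruction, not the authors' code; the network shape, random data, and the use of absolute weighted inputs as the contribution measure are assumptions):

```python
import numpy as np

def input_contributions(X, W1):
    # For each input dimension i, average the magnitude of its weighted
    # contribution |w_ji * x_i| over training samples and hidden units,
    # then normalize by the total contribution across all inputs.
    contrib = np.abs(X[:, None, :] * W1[None, :, :]).mean(axis=(0, 1))
    return contrib / contrib.sum()

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))    # 100 training samples, 5 input features
W1 = rng.normal(size=(8, 5))     # first-layer weights (8 hidden units)
W1[:, 4] = 0.0                   # feature 4 never drives any hidden unit
scores = input_contributions(X, W1)
print(scores.argmin())           # feature 4 has the lowest contribution
```

Features with low relative contribution rarely participate in hidden-unit activity and are candidates for removal.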
Presented at OECD Workshop on Systematic Reviews in the Scope of the Endocrine Disrupter Testing and Assessment (EDTA) Conceptual Framework Level 1 in Paris, France
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features provide convenience and capability while sacrificing security. This best-practices guide outlines steps users can take to better protect personal devices and information.
Generative AI Deep Dive: Advancing from Proof of Concept to Production, by Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
More Related Content
Similar to Deep Learning Big Data Meetup @ Trondheim
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Full-RAG: A modern architecture for hyper-personalization, by Zilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems that aims to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyper-personalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations when deploying cutting-edge AI solutions.
A tale of scale & speed: How the US Navy is enabling software delivery from l..., by sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo..., by James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Communications Mining Series - Zero to Hero - Session 1, by DianaGray10
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Building RAG with self-deployed Milvus vector database and Snowpark Container..., by Zilliz
This talk will give hands-on advice on building RAG applications with an open-source Milvus database deployed as a docker container. We will also introduce the integration of Milvus with Snowpark Container Services.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor..., by SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs, by Alex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
DevOps and Testing slides at DASA Connect, by Kari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We closed with a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
24. Learning to solve a problem
http://neuralnetworksanddeeplearning.com/chap1.html
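The chapter linked above introduces learning as gradient descent on a cost function. A minimal runnable sketch in that spirit (a toy example of my own, not from the slides): a single sigmoid neuron learning the OR function.

```python
import numpy as np

# Toy example in the spirit of Nielsen's chapter 1 (illustrative, not from
# the slides): one sigmoid neuron learns OR by gradient descent.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)

rng = np.random.default_rng(0)
w = rng.normal(size=2)
b = 0.0
lr = 1.0

for _ in range(5000):
    a = sigmoid(X @ w + b)
    # Gradient of the quadratic cost 0.5 * sum((a - y)^2)
    delta = (a - y) * a * (1 - a)
    w -= lr * (X.T @ delta)
    b -= lr * delta.sum()

print(np.round(sigmoid(X @ w + b)))  # learned OR: [0. 1. 1. 1.]
```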
25. Forward and Backward passes
http://caffe.berkeleyvision.org/tutorial/forward_backward.html
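The forward/backward structure described in the Caffe tutorial, where each layer computes outputs bottom-to-top and gradients top-to-bottom, can be sketched with a hypothetical minimal layer API (my own illustration, not Caffe's actual classes):

```python
import numpy as np

# Illustrative sketch of layer-wise forward/backward passes
# (hypothetical minimal API, not Caffe's real interface).
class Linear:
    def __init__(self, n_in, n_out, rng):
        self.W = rng.normal(scale=0.1, size=(n_in, n_out))
        self.b = np.zeros(n_out)
    def forward(self, x):
        self.x = x                     # cache the input for backward
        return x @ self.W + self.b
    def backward(self, grad_out):
        self.dW = self.x.T @ grad_out  # gradient w.r.t. parameters
        self.db = grad_out.sum(axis=0)
        return grad_out @ self.W.T     # gradient w.r.t. input, passed down

class ReLU:
    def forward(self, x):
        self.mask = x > 0
        return x * self.mask
    def backward(self, grad_out):
        return grad_out * self.mask

rng = np.random.default_rng(0)
layers = [Linear(3, 4, rng), ReLU(), Linear(4, 1, rng)]

x = rng.normal(size=(5, 3))
out = x
for layer in layers:               # forward pass: bottom to top
    out = layer.forward(out)

grad = np.ones_like(out)           # pretend dLoss/dOut = 1
for layer in reversed(layers):     # backward pass: top to bottom
    grad = layer.backward(grad)

print(grad.shape)  # (5, 3): gradient w.r.t. the original input
```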
26. The Unstable Gradient Problem
Why it is difficult to train an RNN
Why are deep neural networks hard to train?
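The instability is easy to see numerically: each layer of a chain of sigmoid units contributes one factor of w · σ′(z) to the backpropagated gradient, and since σ′(z) ≤ 0.25, the product shrinks geometrically with depth (the weight and pre-activation below are illustrative values, not from the slides):

```python
import numpy as np

# Numeric sketch of the vanishing-gradient effect in a deep sigmoid chain:
# one factor of w * sigma'(z) per layer, and sigma'(z) <= 0.25.
def sigmoid_prime(z):
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1 - s)

w = 0.8          # typical |weight| < 4 makes |w * sigma'(z)| < 1
z = 0.5          # pre-activation at each unit (illustrative)
factor = w * sigmoid_prime(z)

for depth in (1, 5, 10, 20):
    print(depth, factor ** depth)   # shrinks geometrically with depth
```

With large weights the same product can instead blow up, which is the exploding-gradient side of the same problem.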
27. Practical advice for training neural networks
(by Ilya Sutskever)
• Get good data
• Preprocessing
• Minibatches
• Gradient normalization
• Learning rate schedule
• Learning rate
• Weight Initialization
• Data augmentation
• Dropout
• Ensembling
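Several of the tips above can be tied together in one short sketch (a hypothetical toy problem of my own, not Sutskever's code): minibatches, gradient-norm clipping, a decaying learning rate schedule, and random weight initialization, applied to linear regression fit by SGD.

```python
import numpy as np

# Toy illustration of a few of the training tips: minibatches,
# gradient-norm clipping, and a learning-rate schedule (synthetic data).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
true_w = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
y = X @ true_w + 0.01 * rng.normal(size=1000)

w = rng.normal(size=5)           # weight initialization
batch_size, clip_norm = 32, 5.0

for epoch in range(20):
    lr = 0.1 * 0.9 ** epoch      # learning rate schedule (decay)
    order = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        xb, yb = X[idx], y[idx]
        grad = 2 * xb.T @ (xb @ w - yb) / len(idx)
        norm = np.linalg.norm(grad)
        if norm > clip_norm:     # gradient (norm) clipping
            grad *= clip_norm / norm
        w -= lr * grad

print(np.round(w, 2))  # close to true_w
```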
A Brief Overview of Deep Learning
28. Outline
• Intro to DL (A. Ng)
• Intro to Neural Nets
• Training NN
• Conv Nets
• Autoencoders
• Word Embeddings
• DL@TRD
• Bonus
41. Word embeddings and shared representations
Deep Learning, NLP, and Representations
Deep Visual-Semantic Alignments for Generating Image Descriptions
43. Word embeddings and reversible sentence representations
Deep Learning, NLP, and Representations
Rick Rashid in Tianjin, October 25, 2012
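The point of these slides is that embeddings turn word similarity into geometry: related words get nearby vectors, so shared structure can be read off with simple vector arithmetic. A toy illustration (the 4-d vectors below are hand-made for the example, not learned embeddings):

```python
import numpy as np

# Toy illustration of embeddings as geometry. These vectors are hand-made
# for the example, not learned; real embeddings come from models like
# word2vec or GloVe.
emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.1, 0.8, 0.0]),
    "man":   np.array([0.1, 0.9, 0.0, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9, 0.1]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(emb["king"], emb["queen"]))   # high: related words
print(cosine(emb["king"], emb["woman"]))   # lower

# The classic analogy check: king - man + woman lands near queen.
target = emb["king"] - emb["man"] + emb["woman"]
best = max(emb, key=lambda w: cosine(emb[w], target))
print(best)
```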
44. Telenor Norway Network topology
Word embeddings applied to Network operations
Use cases:
• Predict failures of network components.
• Predict congestion levels on network links.
• Detect malfunctioning devices.
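One plausible reading of this idea (my own hypothetical sketch, since the slide lists only use cases): treat event codes emitted by network components like "words", embed them from their co-occurrence statistics, and use distances in the embedding space to spot components that behave unlike the others. The data below is synthetic.

```python
import numpy as np

# Hypothetical sketch of embedding network event codes from co-occurrence
# counts (here via a truncated SVD) and flagging the odd one out.
# All data is synthetic; this is not Telenor's actual method.
rng = np.random.default_rng(0)
n_events = 6

# Synthetic co-occurrence counts: events 0-4 co-occur often, event 5 rarely.
C = rng.integers(20, 50, size=(n_events, n_events)).astype(float)
rare = rng.integers(0, 3, size=n_events).astype(float)
C[5, :] = rare
C[:, 5] = rare
C = (C + C.T) / 2                      # symmetrize

# Embed each event as a row of U * S from a 2-d truncated SVD.
U, S, _ = np.linalg.svd(np.log1p(C))
emb = U[:, :2] * S[:2]

# The "odd one out" is the event farthest from the centroid.
dists = np.linalg.norm(emb - emb.mean(axis=0), axis=1)
print(int(dists.argmax()))  # event 5 behaves unlike the others
```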
45. Outline
• Intro to DL (A. Ng)
• Intro to Neural Nets
• Training NN
• Conv Nets
• Autoencoders
• Word Embeddings
• DL@TRD
• Bonus
46. DL@TRD - Motivations
Personal observations:
– DL is hot (hyped?)
– DL supremacy seems ineluctable
– DL can solve a whole bunch of problems
– DL is frontier technology (difficult)
– Little DL competence @ Telenor Research
Personal implications:
– Career development
– Network with partners to get momentum
– Great if this happens in Trondheim
47. DL@TRD - Vision
Establish a strong DL competence center in Trondheim
– A place where
• competence is gathered
• experiences are exchanged
• collaborations are fostered
– Benefits
• Share passion with others near you
• Get momentum for your work
• Funding (SFI, EU money)
– Ideally
• Collaborate across companies on problems
• Common publications
Next workshop: 27th March
49. Stuff we could discuss at DL@TRD
• Training Recurrent Neural Networks
• Long Short-Term Memory Networks
• Echo State Networks
• Neural Turing Machines
• Hopfield Nets
• Restricted Boltzmann Machines
• Deep Belief Networks
• Teacher – Student Nets
• Momentum
• Dropout
• Full Bayesian learning
• Hessian-free optimization
• Stuff I don't know I don't know
50. Conclusion & Forecast
• DL techniques can be applied to all sorts of data:
– Could you apply some of these techniques to your data?
• DL models are better than humans at some tasks if fed with enough data & trained properly
• Within 5-10 years, “information work” tasks will be augmented or even fully automated
– See Peter Norvig's talk at InfoQ: Machine Learning for Programming
– Models can make decisions based on millions of records while removing human biases
• Big data + Deep Learning = unemployment
– New policies and economic measures will be needed to manage the adverse effects of job computerization
– Schooling will need reforms: routine tasks → non-routine tasks