This was presented at the London Artificial Intelligence & Deep Learning Meetup.
https://www.meetup.com/London-Artificial-Intelligence-Deep-Learning/events/245251725/
Enjoy the recording: https://youtu.be/CY3t11vuuOM.
- - -
Kasia discussed the complexities of interpreting black-box algorithms and how these may affect some industries. She presented the most popular methods of interpreting machine learning classifiers, such as feature importance, partial dependence plots, and Bayesian networks. Finally, she introduced the Local Interpretable Model-Agnostic Explanations (LIME) framework for explaining the predictions of black-box learners, including text- and image-based models, using breast cancer data as a specific case scenario.
Kasia Kulma is a Data Scientist at Aviva with a soft spot for R. She obtained a PhD in evolutionary biology from Uppsala University, Sweden, in 2013 and has been working on all things data ever since. For example, she has built recommender systems, customer segmentations, and predictive models, and she is now leading an NLP project at the UK’s leading insurer. In her spare time she tries to relax by hiking and camping, but if that doesn’t work ;) she co-organizes R-Ladies meetups and writes the data science blog R-tastic (https://kkulma.github.io/).
https://www.linkedin.com/in/kasia-kulma-phd-7695b923/
Explainable AI (XAI) is becoming a must-have non-functional requirement (NFR) for most AI-enabled product or solution deployments. Keen to hear viewpoints and explore collaboration opportunities.
Explainable AI makes algorithms transparent: their behaviour can be interpreted, visualized, and explained, supporting fair, secure, and trustworthy AI applications.
In this talk, Dmitry shares the approach to feature engineering that he used successfully in various Kaggle competitions. He covers common techniques for converting your features into the numeric representations used by ML algorithms.
An introductory presentation on Explainable AI, defending its main motivations and importance. We briefly describe the main techniques available as of March 2020 and share many references to allow the reader to continue their studies.
An introduction to the theory behind Bayesian Deep Learning, a topic of growing recent interest, and its latest applications. We briefly explain the theory of Bayesian inference and then introduce the theory and applications of Yarin Gal’s Monte Carlo Dropout.
Spark 2019: Equifax's SVP Data & Analytics, Peter Maynard, discusses the notion (and importance) of explainable AI in the financial services sector. He looks at the work Equifax have done to crack open the black box by creating patented AI technology that helps companies make smarter, explainable decisions using AI.
Neural Language Generation Head to Toe - Hady Elsahar
This is a gentle, intuitive introduction to Natural Language Generation (NLG) using deep learning, aimed at computer science practitioners with basic knowledge of machine learning. It takes you on a journey from the basic intuitions behind modeling language and the probabilities of sequences, to recurrent neural networks, to the large Transformer models you have seen in the news, like GPT2/GPT3. The tutorial wraps up with a summary of the ethical implications of training such large language models on uncurated text from the internet.
In machine learning, support vector machines (SVMs, also called support vector networks) are supervised learning models with associated learning algorithms that analyze data and recognize patterns, used for classification and regression analysis. The basic SVM takes a set of input data and predicts, for each given input, which of two possible classes forms the output, making it a non-probabilistic binary linear classifier.
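To make the definition concrete, here is a minimal sketch (my example, not part of the original abstract) of a linear SVM used as a binary classifier with scikit-learn:

```python
# A toy binary classification with a linear SVM.
# Data and labels are invented for illustration.
from sklearn import svm

# Two tiny, linearly separable clusters of 2-D points
X = [[0, 0], [0, 1], [1, 0], [4, 4], [4, 5], [5, 4]]
y = [0, 0, 0, 1, 1, 1]

clf = svm.SVC(kernel="linear")  # non-probabilistic by default
clf.fit(X, y)

# For each given input, the SVM predicts one of the two classes
print(clf.predict([[0.5, 0.5], [4.5, 4.5]]))  # -> [0 1]
```

The decision is made by which side of the learned separating hyperplane each input falls on, which is why the basic SVM outputs a class label rather than a probability.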
Slides explaining the distinction between bagging and boosting through the lens of the bias-variance trade-off, followed by some lesser-known aspects of supervised learning: the effect of the tree-split metric on feature importance, the effect of the decision threshold on classification accuracy, and how to adjust the model threshold for classification in supervised learning.
Note: the limitations of the accuracy metric (baseline accuracy), alternative metrics, their use cases, and their advantages and limitations are briefly discussed.
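As a toy illustration of the threshold-adjustment idea (the probabilities and labels below are invented for this example):

```python
# Adjusting the classification threshold instead of using the default 0.5.
import numpy as np

# Hypothetical predicted probabilities for the positive class
proba = np.array([0.2, 0.4, 0.55, 0.7, 0.9])
y_true = np.array([0, 0, 1, 1, 1])

default_pred = (proba >= 0.5).astype(int)  # standard 0.5 threshold
tuned_pred = (proba >= 0.6).astype(int)    # stricter threshold: fewer positives

print(default_pred.tolist())  # [0, 0, 1, 1, 1]
print(tuned_pred.tolist())    # [0, 0, 0, 1, 1]
```

Raising the threshold trades recall for precision; the right setting depends on the costs of false positives versus false negatives, not on accuracy alone.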
Federated Learning makes it possible to build machine learning systems without direct access to training data. The data remains in its original location, which helps to ensure privacy, reduces network communication costs, and taps edge device computing resources. The principles of data minimization established by the GDPR, and the growing prevalence of smart sensors make the advantages of federated learning more compelling. Federated learning is a great fit for smartphones, industrial and consumer IoT, healthcare and other privacy-sensitive use cases, and industrial sensor applications.
We’ll present the Fast Forward Labs team’s research on this topic and the accompanying prototype application, “Turbofan Tycoon”: a simplified working example of federated learning applied to a predictive maintenance problem. In this demo scenario, customers of an industrial turbofan manufacturer are not willing to share the details of how their components failed with the manufacturer, but want the manufacturer to provide them with a strategy to maintain the part. Federated learning allows us to satisfy the customers’ privacy concerns while providing them with a model that leads to fewer costly failures and less maintenance downtime.
We’ll discuss the advantages and tradeoffs of taking the federated approach. We’ll assess the state of tooling for federated learning, circumstances in which you might want to consider applying it, and the challenges you’d face along the way.
Speaker
Chris Wallace
Data Scientist
Cloudera
An Introduction to XAI! Towards Trusting Your ML Models! - Mansour Saffar
Machine learning (ML) is currently disrupting almost every industry and is being used as the core component in many systems. The decisions made by these systems may have a great impact on society and specific individuals and thus the decision-making process has to be clear and explainable so humans can trust it. Explainable AI (XAI) is a rather new field in ML in which researchers try to develop models that are able to explain the decision-making process behind ML models. In this talk, we'll learn about the fundamentals of XAI and discuss why we need to start to integrate XAI with our ML models!
Presented in Edmonton DataScience Meetup on October 2nd, 2019. Learn more: https://youtu.be/gEkPXOsDt_w
Winning data science competitions, presented by Owen Zhang - Vivian S. Zhang
Meetup event hosted by NYC Open Data Meetup, NYC Data Science Academy. Speaker: Owen Zhang. Event info: http://www.meetup.com/NYC-Open-Data/events/219370251/
This slide deck gives a brief overview of supervised, unsupervised, and reinforcement learning. Algorithms discussed include Naive Bayes, k-nearest neighbours, SVMs, decision trees, and Markov models.
It also covers the difference between regression and classification, the difference between supervised and reinforcement learning, the iterative functioning of Markov models, and machine learning applications.
Unexpected Challenges in Large Scale Machine Learning by Charles Parker - BigMine
Talk by Charles Parker (BigML) at BigMine12 at KDD12.
In machine learning, scale adds complexity. The most obvious consequence of scale is that data takes longer to process. At certain points, however, scale makes trivial operations costly, thus forcing us to re-evaluate algorithms in light of the complexity of those operations. Here, we will discuss one important way a general large-scale machine learning setting may differ from the standard supervised classification setting and show the results of some preliminary experiments highlighting this difference. The results suggest that there is potential for significant improvement beyond obvious solutions.
Machine Learning Interpretability - Mateusz Dymczyk - H2O AI World London 2018 - Sri Ambati
This talk was recorded in London on Oct 30, 2018 and can be viewed here: https://youtu.be/p4iAnxwC_Eg
The good news is that building fair, accountable, and transparent machine learning systems is possible. The bad news is that it’s harder than many blogs and software package docs would have you believe. The truth is that nearly all interpretable machine learning techniques generate approximate explanations, that the fields of eXplainable AI (XAI) and Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) are very new, and that few best practices have been widely agreed upon. This combination can lead to some ugly outcomes!
This talk aims to make your interpretable machine learning project a success by describing the fundamental technical challenges you will face in building an interpretable machine learning system, defining the real-world value proposition of approximate explanations for exact models, and then outlining viable techniques for debugging, explaining, and testing machine learning models.
Mateusz is a software developer who loves all things distributed and machine learning, and hates buzzwords. His favourite hobby is data juggling.
He obtained his M.Sc. in Computer Science from AGH UST in Krakow, Poland, during which he did an exchange at L’ECE Paris in France and worked on distributed flight booking systems. After graduation he moved to Tokyo to work as a researcher at Fujitsu Laboratories on machine learning and NLP projects, where he is still currently based.
AI professionals use top machine learning algorithms to automate models that analyze larger and more complex data than was possible with older machine learning algorithms.
This is an introductory workshop on machine learning. It introduces machine learning tasks such as supervised learning, unsupervised learning, and reinforcement learning.
A lot of people talk about Data Mining, Machine Learning and Big Data. It clearly must be important, right?
A lot of people are also trying to sell you snake oil - sometimes half-arsed and overpriced products or solutions promising a world of insight into your customers or users if you hand over your data to them. Instead, trying to understand your own data and what you could do with it should be the first thing you look at.
In this talk, we’ll introduce some basic terminology about Data and Text Mining as well as Machine Learning, and will have a look at what you can do on your own to understand more about your data and discover patterns in it.
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... - UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio’s cyber threat intelligence farming facilities, spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
A tale of scale & speed: How the US Navy is enabling software delivery from l... - sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what Testing in DevOps is. We wrapped up with a lovely workshop in which the participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
Enhancing Performance with Globus and the Science DMZ - Globus
ESnet has led the way in helping national facilities—and many other institutions in the research community—configure Science DMZs and troubleshoot network issues to maximize data transfer performance. In this talk we will present a summary of approaches and tips for getting the most out of your network infrastructure using Globus Connect Server.
Generative AI Deep Dive: Advancing from Proof of Concept to Production - Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Essentials of Automations: The Art of Triggers and Actions in FME - Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf - Peter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Accelerate your Kubernetes clusters with Varnish Caching - Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
2. About Kaggle
Biggest platform for competitive data science in the world
Currently 500k+ competitors
Great platform to learn about the latest techniques and avoid overfitting
Great platform to share and meet up with other data freaks
3. Approach
Get a good score as fast as possible
Use versatile libraries
Model ensembling
4. Get a good score as fast as possible
Get the raw data into a universal format like SVMlight or Numpy arrays.
Failing fast and failing often / Agile sprint / Iteration
Sub-linear debugging: “output enough intermediate information as a calculation is progressing to determine before it finishes whether you've injected a major defect or a significant improvement.” - Paul Mineiro
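The “universal format” step can be sketched as follows, assuming scikit-learn is available (my example, not from the slides): dump a NumPy array to SVMlight and load it back.

```python
# Round-trip a small dataset through the SVMlight text format.
import os
import tempfile

import numpy as np
from sklearn.datasets import dump_svmlight_file, load_svmlight_file

X = np.array([[1.0, 0.0, 3.0], [0.0, 2.0, 0.0]])
y = np.array([1, 0])

path = os.path.join(tempfile.mkdtemp(), "train.svmlight")
dump_svmlight_file(X, y, path)          # write sparse text format
X_loaded, y_loaded = load_svmlight_file(path)  # X_loaded comes back sparse

print(y_loaded.tolist())  # [1.0, 0.0]
```

Once every competition's data is in the same X_train / y / X_test shape, the same pipeline code can be reused from one contest to the next.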
7. General Strategy
Try to create “machine learning”-learning algorithms with optimized pipelines that are:
Data agnostic (sparse, dense, missing values, larger than memory)
Problem agnostic (classification, regression, clustering)
Solution agnostic (production-ready, PoC, latency)
Automated (turn on and go to bed)
Memory-friendly (don’t want to pay for AWS)
Robust (good generalization, concept drift, consistent)
8. First Overview I
Classification? Regression?
Evaluation metric
Description
Benchmark code
“Predict human activities based on their smartphone usage. Predict if a user is sitting, walking etc.” - Smartphone User Activity Prediction
“Given the HTML of ~337k websites served to users of StumbleUpon, identify the paid content disguised as real content.” - Dato Truly Native?
10. First Overview III
Data size?
Dimensionality?
Number of train samples & test samples?
Online or offline learning?
Linear problem or non-linear problem?
Previous competitions that were similar?
11. Branch
If: Issues with the data -> Tedious clean-up
Join JSON tables, impute missing values, curse Kaggle and join another competition
Else: Get data into NumPy arrays, we want: X_train, y, X_test
12. Local Evaluation
Set up local evaluation according to competition metric
Create a simple benchmark (useful for exploration and discarding models)
5-fold stratified cross-validation usually does the trick
Very important step for fast iteration and saving submissions, yet it is easy to get lazy and rely on the leaderboard.
Example metrics: Area Under the Curve, Multi-Class Classification Accuracy
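A minimal local-evaluation setup along these lines might look as follows (synthetic data; the metric and model are illustrative, not from the slides):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

rng = np.random.RandomState(0)
X = rng.randn(100, 5)
y = (X[:, 0] + 0.5 * rng.randn(100) > 0).astype(int)

# 5-fold stratified CV: score each held-out fold with the competition metric
scores = []
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, valid_idx in skf.split(X, y):
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    preds = model.predict_proba(X[valid_idx])[:, 1]
    scores.append(roc_auc_score(y[valid_idx], preds))

cv_score = float(np.mean(scores))  # compare candidate models on this number
```

The local CV score becomes the yardstick for every experiment, so leaderboard submissions are saved for real progress.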
13. Data Exploration
Min, Max, Mean, Percentiles, Std, Plotting
Can detect: leakage, golden features, feature engineering tricks, data health issues.
Caveat: At least one top-50 Kaggler used to not look at the data at all:
“It’s called machine learning for a reason.”
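A quick first pass over the min/max/mean/percentile/std statistics listed above can be one call on a DataFrame (toy columns here):

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(0)
df = pd.DataFrame({
    "f1": rng.randn(200),            # hypothetical numeric feature
    "f2": rng.randint(0, 5, 200),    # hypothetical count feature
})

# count, mean, std, min, percentiles and max for every numeric column
summary = df.describe()
```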
14. Feature Engineering I
Log-transform count features, tf-idf transform text features
Unsupervised transforms / dimensionality reduction
Manual inspection of data
Dates -> day of month, is_holiday, season, etc.
Create histograms and cluster similar features
Using VW-varinfo or XGBfi to check 2-3-way interactions
Row stats: mean, max, min, number of NA’s.
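Two of the tricks above, log-transforming a count feature and expanding a date, can be sketched like this (the `clicks` and `date` columns are made-up examples):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "clicks": [0, 3, 120, 4500],  # hypothetical count feature
    "date": pd.to_datetime(
        ["2015-01-01", "2015-03-15", "2015-07-04", "2015-12-25"]
    ),
})

# log(1 + x) compresses heavy-tailed counts and keeps zeros valid
df["clicks_log"] = np.log1p(df["clicks"])

# Dates -> day of month, month, season, as the slide suggests
df["day_of_month"] = df["date"].dt.day
df["month"] = df["date"].dt.month
df["season"] = (df["month"] % 12) // 3  # 0=winter, 1=spring, 2=summer, 3=autumn
```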
15. Feature Engineering II
Bin numerical features to categorical features
Bayesian encoding of categorical features to likelihood
Genetic programming
Random-swap feature elimination
Time binning (customer bought in last week, last month, last year …)
Expand data (Coates & Ng, Random Bit Regression)
Automate all of this
16. Feature Engineering III
Categorical features need some special treatment
Onehot-encode for linear models (sparsity)
Colhot-encode for tree-based models (density)
Counthot-encode for large cardinality features
Likelihood-encode for experts…
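The one-hot, count and likelihood encodings above can be sketched on a toy series; note the likelihood encoding is shown in-sample for brevity, whereas in practice it must be computed out-of-fold to avoid leakage:

```python
import pandas as pd

colors = pd.Series(["red", "blue", "red", "green", "red"])
y = pd.Series([1, 0, 1, 0, 0])  # toy binary target

# One-hot: one binary column per level, sparse-friendly for linear models
onehot = pd.get_dummies(colors, prefix="color")

# Count encoding: replace each level by its frequency,
# useful when cardinality is too large for one-hot
counts = colors.map(colors.value_counts())

# Likelihood (target-mean) encoding: level -> mean of the target
likelihood = colors.map(y.groupby(colors).mean())
```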
18. Algorithms II
There is No Free Lunch in statistical inference
“We show that all algorithms that search for an extremum of a cost function perform exactly the same, when averaged over all possible cost functions.” – Wolpert & Macready, No Free Lunch Theorems for Search
Practical solution for low-bias, low-variance models:
Use prior knowledge / experience to limit the search (let algorithms play to their known strengths for particular problems)
Remove or avoid their weaknesses
Combine / bag their predictions
19. Random Forests I
A Random Forest is an ensemble of decision trees.
“Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. […] More robust to noise.” - ‘Random Forests’, Breiman
20. Random Forests II
Strengths
Fast
Easy to tune
Easy to inspect
Easy to explore data with
Good Benchmark
Very wide applicability
Can introduce randomness / Diversity
Weaknesses
Memory Hungry
Popular
Slower for test time
21. GBM I
A GBM trains weak models on samples that previous models got wrong
“A method is described for converting a weak learning algorithm [the learner can produce an hypothesis that performs only slightly better than random guessing] into one that achieves arbitrarily high accuracy.” - ‘The Strength of Weak Learnability’, Schapire
22. GBM II
Strengths
Can achieve very good results
Can model complex problems
Works on a wide variety of problems
Use custom loss functions
No need to scale data
Weaknesses
Slower to train
Easier to overfit than RF
Weak learner assumption is broken along the way
Tricky to tune
Popular
23. SVM I
Classification and Regression using Support Vectors
“Nothing is more practical than a good theory.” - ‘The Nature of Statistical Learning Theory’, Vapnik
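The "tedious grid-search" the next slide mentions can be sketched as follows (ring-shaped toy data that a linear model cannot separate; parameter grid is illustrative):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.randn(120, 2)
y = ((X ** 2).sum(axis=1) > 1.5).astype(int)  # not linearly separable

# Kernel trick: the RBF kernel turns a linear decision rule non-linear;
# C and gamma are tuned by grid-search with 3-fold CV
grid = GridSearchCV(
    SVC(kernel="rbf"),
    {"C": [0.1, 1, 10], "gamma": [0.1, 1]},
    cv=3,
).fit(X, y)
```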
24. SVM II
Strengths
Strong theoretical guarantees
Tuning the regularization parameter helps prevent overfit
Kernel trick: use custom kernels, turn a linear kernel into a non-linear kernel
Achieve state-of-the-art on select problems
Weaknesses
Slower to train
Memory heavy
Requires a tedious grid-search for best performance
Will probably time-out on large datasets
25. Nearest Neighbours I
Look at the distance to other samples
“The nearest neighbor decision rule assigns to an unclassified sample point the classification of the nearest of a set of previously classified points.” - ‘Nearest Neighbor Pattern Classification’, Cover & Hart
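The nearest-neighbor decision rule quoted above, in a few lines on toy points:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Four previously classified points in two well-separated clusters
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
y_train = np.array([0, 0, 1, 1])

# 1-NN: an unclassified point gets the label of its nearest neighbor
knn = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
pred = knn.predict([[4.8, 5.1]])  # nearest stored point is class 1
```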
27. Perceptron I
Update weights on a wrong prediction, else do nothing
“The embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.” - ‘New York Times’, on Rosenblatt’s perceptron
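The "update weights when wrong, else do nothing" rule is small enough to write out in plain NumPy (toy linearly separable data):

```python
import numpy as np

def perceptron_train(X, y, epochs=10):
    """Classic perceptron: y in {-1, +1}; update only on a mistake."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:  # misclassified -> sparse update
                w += yi * xi
                b += yi
    return w, b

X = np.array([[2.0, 1.0], [1.0, 3.0], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1, 1, -1, -1])

w, b = perceptron_train(X, y)
preds = np.sign(X @ w + b)
```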
28. Perceptron II
Strengths
Cool / Street Cred
Extremely Simple
Fast / Sparse updates
Online Learning
Works well with text
Weaknesses
Other linear algorithms usually beat it
Does not work well on average
No regularization
29. Neural Networks I
Inspired by biological systems (connected neurons firing when a threshold is reached)
“Because of the ‘all-or-none’ character of nervous activity, neural events and the relations among them can be treated by means of propositional logic. […] for any logical expression satisfying certain conditions, one can find a net behaving in the fashion it describes.” - ‘A Logical Calculus of the Ideas Immanent in Nervous Activity’, McCulloch & Pitts
30. Neural Networks II
Strengths
The best for images
Can model any function
End-to-end training
Amortizes feature representation
Weaknesses
Can be difficult to set up
Not very interpretable
Requires specialized hardware
Underfit / Overfit
31. Vowpal Wabbit I
Online learning while optimizing a loss function
“We present a system and a set of techniques for learning linear predictors with convex losses on terascale datasets, with trillions of features, billions of training examples and millions of parameters in an hour using a cluster of 1000 machines.” - ‘A Reliable Effective Terascale Linear Learning System’, Agarwal et al.
32. Vowpal Wabbit II
Strengths
Fixed memory constraint
Extremely fast
Feature expansion
Difficult to overfit
Versatile
Weaknesses
Different API
Manual feature engineering
Loses against boosting
Requires practice
Hashing can obscure feature meaning
34. Ensembles I
Combine models in a way that outperforms the individual models.
“That’s how almost all ML competitions are won” - ‘Dark Knowledge’, Hinton et al.
Ensembles reduce the chance of overfit.
Bagging / Averaging -> Lower variance, slightly lower bias
Blending / Stacking -> Remove biases of base models
36. Stacked Generalization I
Train one model on the predictions of another model
“A scheme for minimizing the generalization error rate of one or more generalizers. Stacked generalization works by deducing the biases of the generalizer(s) with respect to a provided learning set. This deduction proceeds by generalizing in a second space whose inputs are (for example) the guesses of the original generalizers when taught with part of the learning set and trying to guess the rest of it, and whose output is (for example) the correct guess.” - ‘Stacked Generalization’, Wolpert
38. Stacked Generalization III
Using weak base models vs. using strong base models
Using an average of out-of-fold predictors vs. one model for testing
One can also stack features when these are not available in the test set.
Can share train-set predictions based on different folds
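The out-of-fold scheme behind these slides can be sketched as follows (synthetic data; the base model and level-1 generalizer are illustrative choices, not prescribed by the deck):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

rng = np.random.RandomState(0)
X = rng.randn(200, 5)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Level 0: out-of-fold predictions, so the level-1 model never sees
# predictions a base model made on its own training rows
oof = np.zeros(len(X))
for train_idx, valid_idx in KFold(n_splits=5, shuffle=True,
                                  random_state=0).split(X):
    base = RandomForestClassifier(n_estimators=50, random_state=0)
    base.fit(X[train_idx], y[train_idx])
    oof[valid_idx] = base.predict_proba(X[valid_idx])[:, 1]

# Level 1: a generalizer trained on the level-0 guesses
stacker = LogisticRegression().fit(oof.reshape(-1, 1), y)
```

With more base models, each contributes one out-of-fold column and the level-1 model learns how to weigh and de-bias them.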
39. StackNet
We need to go deeper:
Splitting node: x1 > 5? 1 else 0
Decision tree: x1 > 5 AND x2 < 12?
Random forest: avg ( x1 > 5 AND x2 < 12?, x3 > 2? )
Stacking-1: avg ( RF1_pred > 0.9?, RF2_pred > 0.92? )
Stacking-2: avg ( S1_pred > 0.93?, S2_pred < 0.77? )
Stacking-3: avg ( SS1_pred > 0.98?, SS2_pred > 0.97? )
40. Bagging Predictors I
Averaging submissions to reduce variance
“Bagging predictors is a method for generating multiple versions of a predictor and using these to get an aggregated predictor.” - ‘Bagging Predictors’, Breiman
41. Bagging Predictors II
Train models with:
Different data sets
Different algorithms
Different feature subsets
Different sample subsets
Then average / vote aggregate these
42. Bagging Predictors III
One can average with:
Plain average
Geometric mean
Rank mean
Harmonic mean
KazAnova’s brute-force weighted averaging
Caruana’s forward greedy model selection
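The simple averaging variants in this list can be written out on two toy prediction vectors; rank averaging works on orderings, which helps when the models are miscalibrated relative to each other:

```python
import numpy as np

p1 = np.array([0.1, 0.6, 0.9, 0.4])  # predictions of hypothetical model 1
p2 = np.array([0.2, 0.8, 0.7, 0.3])  # predictions of hypothetical model 2

plain = (p1 + p2) / 2
geometric = np.sqrt(p1 * p2)
harmonic = 2 / (1 / p1 + 1 / p2)

def ranks(p):
    """Normalize predictions to ranks in [0, 1] (assumes distinct values)."""
    return p.argsort().argsort() / (len(p) - 1)

rank_mean = (ranks(p1) + ranks(p2)) / 2
```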
43. Brute-Force Weighted Average
Create out-of-fold predictions on the train set for n models
Pick a stepsize s, and set n weights
Try every possible weight combination with stepsize s
Look which set of n weights improves the train set score the most
Can do this in a cross-validation-style manner for extra robustness.
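For n = 2 models the brute-force search reduces to one weight; the data and the metric below are toy stand-ins for real out-of-fold predictions and the competition metric:

```python
import numpy as np

# Out-of-fold predictions of two hypothetical models, plus true labels
oof1 = np.array([0.9, 0.2, 0.8, 0.3])
oof2 = np.array([0.6, 0.4, 0.9, 0.1])
y = np.array([1, 0, 1, 0])

def score(pred):
    """Toy metric (negative squared error); higher is better."""
    return -np.mean((pred - y) ** 2)

# Try every weight with stepsize s = 0.1 and keep the best blend
best_w, best_score = None, -np.inf
for w in np.arange(0.0, 1.01, 0.1):
    s = score(w * oof1 + (1 - w) * oof2)
    if s > best_score:
        best_w, best_score = w, s
```

With n models this becomes a grid over n weights, which is why a coarse stepsize is picked first.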
44. Greedy Forward Model Selection (Caruana)
Create out-of-fold predictions for the train set
Start with a base ensemble of the 3 best models
Loop: Add every model from the library to the ensemble and pick the 4 models that give the best train score performance
With place-back of models, models can be picked multiple times (weighing them)
Using random subset selection from the library in the loop avoids overfitting to a single best model.
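A compact sketch of the greedy loop with place-back (the model library, labels and metric are all toy stand-ins; a real run starts from the 3 best models and adds random-subset selection):

```python
import numpy as np

y = np.array([1, 0, 1, 0, 1, 0])
# A small library of hypothetical out-of-fold prediction vectors
library = {
    "rf":  np.array([0.8, 0.3, 0.7, 0.4, 0.6, 0.2]),
    "gbm": np.array([0.9, 0.1, 0.6, 0.3, 0.9, 0.4]),
    "svm": np.array([0.6, 0.4, 0.8, 0.2, 0.7, 0.3]),
}

def score(pred):
    """Toy metric (negative squared error); higher is better."""
    return -np.mean((pred - y) ** 2)

ensemble, best_pred = [], None
for _ in range(5):
    # Place-back: every model stays a candidate, so one model can be
    # added repeatedly, which effectively weighs it in the average
    name = max(library, key=lambda m: score(
        np.mean([library[k] for k in ensemble + [m]], axis=0)))
    ensemble.append(name)
    pred = np.mean([library[m] for m in ensemble], axis=0)
    if best_pred is None or score(pred) > score(best_pred):
        best_pred = pred  # keep the best ensemble seen so far
```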
45. Automated Stack ’n Bag I
Automatically train 1000s of models and 100s of
stackers, then average everything.
“Hodor!” - Hodor
46. Automated Stack ’n Bag II
Generalization
Train random models, random parameters, random data set transforms, random feature sets, random sample sets.
Stacking
Train random models, random parameters, random base models, with and without original features, random feature sets, random sample sets.
Bagging
Average a random selection of Stackers and Generalizers. Either pick the best model, or create more random bags and keep averaging until there is no increase.
47. Automated Stack ’n Bag III
Strengths
Wins Kaggle competitions
Best generalization
No tuning
No selection
No human bias
Weaknesses
Extremely slow
Redundant
Inelegant
Very complex
Bad for environment
48. Leakage I
“The introduction of information about the data mining target, which should not be legitimately available to mine from.” - ‘Leakage in Data Mining: Formulation, Detection, and Avoidance’, Kaufman et al.
“one of the top ten data mining mistakes” - ‘Handbook of Statistical Analysis and Data Mining Applications’, Nisbet et al.
49. Leakage II
Exploiting Leakage:
In predictive modeling competitions: Allowed and beneficial for results
In Science and Business: A very big NO NO!
In both: Accidental (complex algorithms find leakage automatically, or KNN finds duplicates)
50. Leakage III
Dato Truly Native?
This task suffered from data collection leakage:
Dates and certain keywords (Trump) were indicative, and generalized to the private LB (but would not generalize to future data).
Smartphone Activity Prediction
This task did not have enough randomization (the order of samples in the train and test set was indicative).
Could manually change predictions, because classes were clustered.
51. Winning Dato Truly Native? I
Invented StackNet
“Data science is a team sport”: it helps to join up with the #1 Kaggler :)
We used basic NLP: cleaning, lowercasing, stemming, ngrams, chargrams, tf-idf, SVD.
Trained a lot of different models on different datasets.
Started ensembling in the last 2 weeks.
Did research and fun stuff while waiting for models to complete.
XGBoost was the big winner (somewhat rare to use boosting for sparse text).
53. Winning Smartphone Activity Prediction I
Prototyped Automated Stack ’n Bag (Kaggle Killer).
Let the computer run for two days
Automatically inferred feature types
Did not look at the data
Beat very stiff competition
55. General strategy
Being #1 during a competition sucks.
Team up
Go crazy with ensembling
Do not worry so much about replication that it freezes progress
Check previous competitions
Be patient and persistent (don’t run out of steam)
Automate a lot
Stay up-to-date with state-of-the-art algorithms and tools
56. Complexity vs. Practicality I
Most Kaggle-winning models are useless for production. It’s about hyper-optimization. Top 10% is probably good enough for business.
But what if we could use some Top 1% principles from Kaggle models for business?
A 1-5% increase in accuracy can matter a lot!
Batch jobs allow us to overcome latency constraints
Ensembles are technically brittle, but give good generalization.
Leave no model behind!
58. Future
Use re-usable holdout set
Use contextual bandits for training the ensemble
Find more models to add to library
Ensemble pruning / compression
Interpretable black box models