This document summarizes a presentation on strategies and techniques for winning data science competitions. It covers the structure of competitions, sources of competitive advantage such as feature engineering and the right tools, and validation approaches. It also walks through three case studies (Amazon, Allstate, Liberty Mutual) where the speaker applied these lessons, including encoding categorical variables and building diverse blended models. The key lessons: focus on proper validation, leverage domain knowledge through features, and apply what is learned to real-world problems.
Winning Data Science Competitions, presented by Owen Zhang
Meetup event hosted by NYC Open Data Meetup and NYC Data Science Academy. Speaker: Owen Zhang. Event info: http://www.meetup.com/NYC-Open-Data/events/219370251/
2. A plug for myself
Current
● Chief Product Officer
Previous
● VP, Science
3. A plug for myself
Current
● Chief Product Officer
Previous
● VP, Science
● Kaggle ranking: 1st / 330,336 (176,181 points)
4. Agenda
● Structure of a Data Science Competition
● Philosophical considerations
● Sources of competitive advantage
● Some tools/techniques
● Three cases -- Amazon, Allstate, Liberty Mutual (LM)
● Apply what we learn out of competitions
(Slide graphic: Technique / Strategy / Philosophy)
5. Structure of a Data Science Competition
Data Science Competitions remind us that the purpose of a predictive model is to predict on data that we have NOT seen.
Data is split into Training, Public LB (validation), and Private LB (holdout) sets.
Build model using Training Data to predict outcomes on Private LB Data.
The Public LB gives quick but sometimes misleading feedback.
6. A little “philosophy”
● There are many ways to overfit
● Beware of “multiple comparison fallacy”
○ There is a cost in “peeking at the answer”
○ Usually the first idea (if it works) is the best
“Think” more, “try” less
7. Sources of Competitive Advantage (the Secret Sauce)
● Luck
● Discipline (once bitten twice shy)
○ Proper validation framework
● Effort
● (Some) Domain knowledge
● Feature engineering
● The “right” model structure
● Machine/statistical learning packages
● Coding/data manipulation efficiency
The right tool is very important.
Be Disciplined + Work Hard + Learn from Everyone + Luck
8. Good Validation is MORE IMPORTANT than Good Model
● Simple Training/Validation split is NOT enough
○ When you have looked at your validation result for the Nth time, you are training models on it
● If possible, keep a “holdout” dataset that you do not touch at all during the model building process
○ This includes feature extraction, etc.
9. A Typical Modeling Project
● What if holdout result is bad?
○ Be brave and scrap the project
Pipeline: Identify Opportunity → Find/Prep Data → Split Data and Hide Holdout → Build Model → Validate Model → Test Model with Holdout → Implement Model
10. Make Validation Dataset as Realistic as Possible
● Usually this means “out-of-time” validation.
○ You are free to use “in-time” random splits to build models, tune parameters, etc.
○ But holdout data should be out-of-time
● Exception to the rule: cross-validation when data is extremely small
○ But keep in mind that your model won’t perform as well in reality
○ The more times you “tweak” your model, the bigger the gap.
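A minimal sketch of an out-of-time split in pandas (the file and column names here are hypothetical, not from the talk): tuning happens freely inside the in-time portion, while the later slice is held out and scored only once.

    import pandas as pd

    df = pd.read_csv("policies.csv", parse_dates=["quote_date"])  # hypothetical data
    cutoff = df["quote_date"].quantile(0.8)       # last ~20% of time as holdout

    train = df[df["quote_date"] < cutoff]         # free to split/tune within this
    holdout = df[df["quote_date"] >= cutoff]      # out-of-time; touch once at the end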
11. Kaggle Competitions -- Typical Data Partitioning
(The slide diagrams four typical ways the Training, Public LB, and Private LB sets can be partitioned over features/target (X, Y) and time -- random splits vs. out-of-time splits.)
● When should we use Public LB feedback to tune our models?
12. Kaggle Competitions -- Use PLB as Training?
(The slide repeats the four partitioning diagrams, labeling each with whether the Public LB data should be folded into training: YES, YES, MUST, NO.)
13. Tools/techniques -- GBM
● My confession: I (over)use GBM
○ When in doubt, use GBM
● GBM automatically approximates
○ Non-linear transformations
○ Subtle and deep interactions
● GBM gracefully handles missing values
● GBM is invariant to monotonic transformations of features
14. GBDT Hyper Parameter Tuning
Hyper Parameter  | Tuning Approach    | Range                  | Note
# of Trees       | Fixed value        | 100-1000               | Depending on data size
Learning Rate    | Fixed => Fine Tune | [2 - 10] / # of Trees  | Depending on # of trees
Row Sampling     | Grid Search        | [.5, .75, 1.0]         |
Column Sampling  | Grid Search        | [.4, .6, .8, 1.0]      |
Min Leaf Weight  | Fixed => Fine Tune | 3 / (% of rare events) | Rule of thumb
Max Tree Depth   | Grid Search        | [4, 6, 8, 10]          |
Min Split Gain   | Fixed              | 0                      | Keep it 0
Best GBDT implementation today: https://github.com/tqchen/xgboost by Tianqi Chen (U of Washington)
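As a sketch, the table's rules of thumb map onto xgboost parameter names roughly as follows (the concrete numbers, and the 1% rare-event rate used for min_child_weight, are hypothetical):

    import xgboost as xgb

    n_trees = 500
    params = {
        "eta": 4.0 / n_trees,          # learning rate ~ [2 - 10] / # of trees
        "max_depth": 6,                # grid search over [4, 6, 8, 10]
        "subsample": 0.75,             # row sampling, grid [.5, .75, 1.0]
        "colsample_bytree": 0.8,       # column sampling, grid [.4, .6, .8, 1.0]
        "min_child_weight": 3 / 0.01,  # one reading of 3 / (% of rare events)
        "gamma": 0,                    # min split gain -- keep it 0
        "objective": "binary:logistic",
    }
    # dtrain = xgb.DMatrix(X, label=y)  # X, y assumed to exist
    # model = xgb.train(params, dtrain, num_boost_round=n_trees)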
15. Tools/techniques -- data preprocessing for GBDT
● High cardinality features
○ These are very commonly encountered -- zip code, injury type, ICD9, text, etc.
○ Convert into numerical with preprocessing -- out-of-fold average, counts, etc.
○ Use Ridge regression (or similar) and
■ use out-of-fold prediction as input to GBM
■ or blend
○ Be brave, use N-way interactions
■ I used 7-way interactions in the Amazon competition.
● GBM with out-of-fold treatment of high-cardinality features performs very well
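A minimal sketch of the count-based conversion (pandas; the column name is a stand-in). Counts never look at the target, so they can be computed over training and holdout data together:

    import pandas as pd

    def count_encode(train, test, col):
        counts = pd.concat([train[col], test[col]]).value_counts()
        train[col + "_count"] = train[col].map(counts)
        test[col + "_count"] = test[col].map(counts)
        return train, test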
16. Technical Tricks -- Stacking
● Basic idea -- use one model’s output as the next model’s input
● It is NOT a good idea to use in-sample predictions for stacking
○ The problem is over-fitting
○ The more “over-fit” Prediction 1 is, the more weight it will get in Model 2
(Diagram: Text Features → Model 1 (Ridge Regression) → Prediction 1; Prediction 1 + Num Features → Model 2 (GBM) → Final Prediction)
17. Technical Tricks -- Stacking -- OOS / CV
● Use out-of-sample predictions
○ Take half of the training data to build Model 1
○ Apply Model 1 to the rest of the training data; use the output as input to Model 2
● Use cross-validation partitioning when data is limited
○ Partition training data into K partitions
○ For each of the K partitions, compute “Prediction 1” by building a model on the OTHER partitions
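A sketch of the cross-validation variant in scikit-learn terms (X_text, X_num, and y are assumed to exist; the model choices follow the Ridge-into-GBM diagram above). Each row's "Prediction 1" comes from a model that never saw that row:

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import cross_val_predict

    # out-of-fold predictions from Model 1 (K=5 partitions)
    pred1 = cross_val_predict(Ridge(alpha=1.0), X_text, y, cv=5)

    # Model 2 takes Prediction 1 alongside the numeric features
    X_stage2 = np.column_stack([X_num, pred1])
    model2 = GradientBoostingRegressor().fit(X_stage2, y)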
18. Technical Tricks -- feature engineering in GBM
● GBM only APPROXIMATES interactions and non-linear transformations
● Strong interactions benefit from being explicitly defined
○ Especially ratios/sums/differences among features
● GBM cannot capture complex features such as “average sales in the previous period for this type of product”
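A small illustration of such explicit features (pandas; every column name here is hypothetical):

    # ratios and differences that a tree ensemble only approximates
    df["loss_ratio"] = df["claim_amount"] / (df["premium"] + 1e-9)
    df["age_minus_tenure"] = df["cust_age"] - df["tenure_years"]

    # an aggregate GBM cannot derive on its own, in the spirit of
    # "average sales in the previous period for this type of product"
    df["avg_prev_sales_by_type"] = (
        df.groupby("product_type")["prev_sales"].transform("mean")
    )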
19. Technical Tricks -- Glmnet
● From a methodology perspective, the opposite of GBM
● Captures (log/logistic) linear relationships
● Works with very small # of rows (a few hundred or even less)
● Complements GBM very well in a blend
● Needs a lot more work
○ missing values, outliers, transformations (log?), interactions
● The sparsity assumption -- L1 vs L2
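In scikit-learn terms, a glmnet-style fit might look like the sketch below (X and y are assumed to exist and to be already cleaned: imputed, transformed, interactions added); l1_ratio sweeps the L1-vs-L2 sparsity assumption:

    from sklearn.linear_model import ElasticNetCV

    enet = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9, 1.0], cv=5)  # alpha path chosen by CV
    enet.fit(X, y)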
20. Technical Tricks -- Text mining
● tau package in R
● Python’s sklearn
● L2 penalty is a must
● N-grams work well
● Don’t forget the “trivial features”: length of text, number of words, etc.
● Many “text-mining” competitions on Kaggle are actually dominated by structured fields -- KDD2014
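A sketch of that recipe with sklearn (texts and labels are assumed to exist; the numbers are illustrative): word n-grams, an L2-penalized linear model, and the "trivial" length features stacked in:

    import scipy.sparse as sp
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    vec = TfidfVectorizer(ngram_range=(1, 2), min_df=3)    # uni- and bi-grams
    X_text = vec.fit_transform(texts)
    trivial = [[len(t), len(t.split())] for t in texts]    # length, word count
    X = sp.hstack([X_text, sp.csr_matrix(trivial)])
    clf = LogisticRegression(penalty="l2", C=1.0).fit(X, labels)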
21. Technical Tricks -- Blending
● All models are wrong, but some are useful (George Box)
○ The hope is that they are wrong in different ways
● When in doubt, use an average blender
● Beware of the temptation to overfit the public leaderboard
○ Use public LB + training CV
● The strongest individual model does not necessarily make the best blend
○ Sometimes intentionally built weak models are good blending candidates -- Liberty Mutual Competition
22. Technical Tricks -- blending continued
● Try to build “diverse” models
○ Different tools -- GBM, Glmnet, RF, SVM, etc.
○ Different model specifications -- linear, lognormal, Poisson, 2-stage, etc.
○ Different subsets of features
○ Subsampled observations
○ Weighted/unweighted
○ …
● But do not “peek at answers” (at least not too much)
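A minimal average blender, in the "when in doubt" spirit (p_gbm, p_enet, p_rf are assumed prediction vectors from the diverse models above):

    import numpy as np

    blend = np.mean([p_gbm, p_enet, p_rf], axis=0)
    # a weighted variant would tune the weights on training CV, not the public LB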
23. Apply what we learn outside of competitions
● Competitions give us really good models, but we also need to
○ Select the right problem and structure it correctly
○ Find good (at least useful) data
○ Make sure models are used the right way
Competitions help us
● Understand how much “signal” exists in the data
● Identify flaws in data or the data creation process
● Build generalizable models
● Broaden our technical horizon
● …
24. Case 1 -- Amazon User Access competition
● One of the most popular competitions on Kaggle to date
○ 1687 teams
● Use anonymized features to predict whether an employee access request would be granted or denied
● All categorical features
○ Resource ID / Mgr ID / User ID / Dept ID …
○ Many features have high cardinality
● But I want to use GBM
25. Case 1 -- Amazon User Access competition
● Encode categorical features using observation counts
○ This is even available for holdout data!
● Encode categorical features using average response
○ Average all but one (example on next slide)
○ Add noise to the training features
● Build different kinds of trees + ENET
○ GBM + ERT + ENET + RF + GBM2 + ERT2
● I didn’t know VW (or similar) at the time, otherwise I might have gotten better results.
● https://github.com/owenzhang/Kaggle-AmazonChallenge2013
26. Case 1 -- Amazon User Access competition
“Leave-one-out” encoding of categorical features:
Split    | User ID | Y | mean(Y) | random | Exp_UID
Training | A1      | 0 | .667    | 1.05   | 0.70035
Training | A1      | 1 | .333    | .97    | 0.32301
Training | A1      | 1 | .333    | .98    | 0.32634
Training | A1      | 0 | .667    | 1.02   | 0.68034
Test     | A1      | - | .5      | 1      | .5
Test     | A1      | - | .5      | 1      | .5
Training | A2      | 0 |         |        |
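A sketch of the table's computation in pandas (the noise factors would normally be drawn at random; singleton categories need separate handling, since n - 1 = 0 for them):

    def loo_encode(train, test, col, target, noise):
        grp = train.groupby(col)[target]
        s, n = grp.transform("sum"), grp.transform("count")
        # training rows: mean of all OTHER rows sharing the key, times noise
        train[col + "_exp"] = (s - train[target]) / (n - 1) * noise
        # test rows: plain mean over all training rows sharing the key
        test[col + "_exp"] = test[col].map(train.groupby(col)[target].mean())
        return train, test

For the A1 rows above: (2 - 0) / 3 = .667 and (2 - 1) / 3 = .333, matching the mean(Y) column.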
27. Case 2 -- Allstate User Purchase Option Prediction
● Predict final purchased product options based on earlier transactions
○ 7 correlated targets
● This turns out to be very difficult because:
○ The evaluation criterion is all-or-nothing: all 7 predictions need to be correct
○ The baseline “last quoted” is very hard to beat
■ Last quoted: 53.269%
■ #3 (me): 53.713% (+0.444%)
■ #1 solution: 53.743% (+0.474%)
● Key challenges -- capture correlation, and do not lose to the baseline
28. Case 2 -- Allstate User Purchase Option Prediction
● Dependency -- chained models
○ First build a stand-alone model for F
○ Then a model for G, given F
○ F => G => B => A => C => E => D
○ “Free” models first, “dependent” models later
○ At training time, use actual values
○ At prediction time, use the most likely predicted value
● Not losing to the baseline -- 2-stage models
○ One model to predict which to use: chained prediction, or baseline
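A sketch of the chaining loop (the estimator choice and data structures are hypothetical; X_train/X_test are assumed DataFrames, y_train holds the 7 targets):

    from sklearn.ensemble import GradientBoostingClassifier

    order = ["F", "G", "B", "A", "C", "E", "D"]   # "free" models first
    models, X_tr, X_te = {}, X_train.copy(), X_test.copy()
    for t in order:
        models[t] = GradientBoostingClassifier().fit(X_tr, y_train[t])
        X_tr[t] = y_train[t]                  # training time: actual values
        X_te[t] = models[t].predict(X_te)     # prediction time: most likely value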
29. Case 3 -- Liberty Mutual fire loss prediction -- DATA OVERVIEW
● ~1 million insurance records
● 300 variables:
○ target: the transformed ratio of loss to total insured value
○ id: a unique identifier of the data set
○ dummy: nuisance variable used to control the model, but not a predictor
○ var1 – var17: a set of normalized variables representing policy characteristics
○ crimeVar1 – crimeVar9: normalized crime rate variables
○ geodemVar1 – geodemVar37: normalized geodemographic variables
○ weatherVar1 – weatherVar236: normalized weather station variables
30. Case 3 -- Liberty Mutual fire loss prediction -- FEATURE ENGINEERING
● Broke the feature set into 4 components
● Created a surrogate ID based on identical crime, geodemographic, and weather variables
32 features in total:
● Policy Characteristics -- 30 features:
○ All policy characteristics features (17)
○ Split V4 into 2 levels (8)
○ Computed ratios of certain features
○ Combined surrogate ID and subsets of policy vars
● Geodemographics -- 1 feature:
○ Derived from PCA trained on scaled vars
● Weather -- 1 feature:
○ Derived from elastic net trained on scaled variables
● Crime Rate -- 0 features
31. Case 3 -- Liberty Mutual fire loss prediction -- FINAL SOLUTION SUMMARY
(Reconstructed from the slide’s flow diagram:)
● Raw data → feature pipeline: split var4; 25 policy features; 1 weather feature = Enet(weather vars); 1 geo-demo feature = PCA(geodem vars); 4 count features = Count(surrogate ID × 4 subsets of policy features) → 31 features + ratio
● R (glmnet) elastic net on the 31 features + ratio
● DataRobot (RF, ExtraTrees, GLM): select 28 features + CrimeVar3; one-hot encoded categoricals + scaled numericals; downsample 20K obs (y==0); capped target y2 = min(y, cap)
● R (gbm) LambdaMART: downsample 10K obs (y==0); y2 = min(y, cap); also a full-sample variant
● Final prediction: weighted average blend of the above