© DataRobot, Inc. All rights reserved.
Kaggle and Data Science
Japan, 2018
Sergey Yurgenson
Director, Advanced Data Science Services
Kaggle Grandmaster
Kaggle
● Kaggle is a platform for data science competitions
● It was created by Anthony Goldbloom in Australia in 2010 and later moved to San Francisco
● In March 2017 it was acquired by Google
● Many other start-ups are now trying to replicate the idea, but Kaggle is still the best-known name in the data science community
● To date, Kaggle has hosted more than 280 competitions and has more than 1 million members from more than 190 countries
Kaggle competitions
● Most Kaggle competitions are predictive modeling competitions
● Participants are provided with training data to train their models, and test data with unknown targets
● Participants compute predictions for the test data and submit those predictions to the Kaggle platform
● Prediction accuracy is evaluated using a predefined objective metric, and the result is returned to the participants
● The model performance of all participants is publicly available, so participants can compare the quality of their models with those of other participants
● Many competitions have monetary prizes for top finishers
Kaggle competitions
Kaggle ranking
● Based on competition performance, Kaggle ranks members using points and awards titles for top finishes in competitions
● For example, to get the title of Master, a member needs to earn one gold medal and two silver medals. For competitions with 1,000 participants, that means finishing once in the top 10 places and twice in the top 50.
Kaggle ranking
Kaggle and Data Science
Why do you dislike Kaggle?
● Kaggle competitions do not have much in common with real data science
○ The problems are already well formulated, with metrics predefined. In an industry setting there is ambiguity, and knowing what to solve is one of the key steps towards a solution.
○ Data in most cases is already provided and is relatively clean.
○ The goal is leaderboard driven rather than understanding driven. Winning a competition, rather than understanding why an approach works, is the top priority. Results may not be trustworthy.
○ There is a chance of overfitting to the test data with repeated submissions.
○ In most cases the solution is an ensemble of algorithms and not “productionizable”.
https://www.quora.com/Why-do-you-dislike-Kaggle
True or False?
● “The problems are already well formulated with metrics predefined. In an
industry setting there is ambiguity, and knowing what to solve is one of the
key steps towards a solution.”
https://www.quora.com/Why-do-you-dislike-Kaggle
Problem is well formulated
Mostly true, however...
● The need for criteria is an inherent property of any competition.
● In the real world, not all data scientists are free to select and reformulate the problem. Many problems are already defined, with specific success criteria assigned.
● We learn many subjects and skills by solving predefined problems and doing predefined exercises: we learn math and physics by solving textbook problems that are already formulated for us. By solving problems, we also learn how to formulate problems and what approach suits a particular data science situation.
● We also have to admit that evaluating the business value of solving a problem is completely out of scope for Kaggle competitions, while business value analysis and problem prioritization are an important part of many real-life data science projects.
True or False?
● “Data in most cases is already provided and is relatively clean.”
https://www.quora.com/Why-do-you-dislike-Kaggle
Data is clean
Half true
● In many competitions, the datasets
○ Are very big
○ Span multiple tables
○ Contain duplicated and mislabeled records
○ Combine structured and unstructured data
● Some competitions encourage searching for additional sources of data
● There are many data leaks
● Often feature names and meanings are not provided, making the problem even more difficult than in the real world
● Data may be intentionally distorted to conform to data privacy laws
Data is clean
● Complex data structure
● Big datasets
● No meaningful feature names
Data is clean
● Kaggle competitions teach unique data manipulation skills:
○ Dealing with data under hardware limitations: efficient code, smart sampling, clever encoding...
○ Using EDA to uncover the meaning of data without relying on labels or other provided information
○ Discovering data leaks through data analysis
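One of those skills, fitting a large table into limited memory, can be sketched with pandas dtype downcasting. This is a minimal illustration, not a method from the talk; the column names and sizes are invented.

```python
import numpy as np
import pandas as pd

# Stand-in for a large competition table; column names are made up.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "row_id": np.arange(100_000, dtype=np.int64),
    "count_feature": rng.integers(0, 200, 100_000).astype(np.int64),
    "ratio_feature": rng.random(100_000),  # float64 by default
})

def shrink(frame: pd.DataFrame) -> pd.DataFrame:
    """Downcast each numeric column to the smallest dtype that fits its values."""
    out = frame.copy()
    for col in out.select_dtypes(include="integer").columns:
        out[col] = pd.to_numeric(out[col], downcast="unsigned")
    for col in out.select_dtypes(include="float").columns:
        out[col] = pd.to_numeric(out[col], downcast="float")
    return out

small = shrink(df)
before = df.memory_usage(deep=True).sum()
after = small.memory_usage(deep=True).sum()
print(before, after)  # the downcast frame is substantially smaller
```

Here `count_feature` (values 0-199) shrinks to `uint8` and `ratio_feature` to `float32`, which is the kind of trick that lets a laptop hold datasets that would not fit at full precision.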
True or False?
● The goal is leaderboard driven rather than understanding driven. Winning a competition, rather than understanding why an approach works, is the top priority. Results may not be trustworthy.
https://www.quora.com/Why-do-you-dislike-Kaggle
No understanding
True, but maybe not that important
● This assumes that a model we cannot understand is less valuable than a model we can understand
○ A model is not necessarily used for knowledge discovery
○ In real life we often use, and rely on, things we do not completely understand
○ If something we do not understand cannot be trustworthy, then how do we ever trust other people?
○ Even a complex machine learning model may be a simplification of an even more complex real system
No understanding
● This ignores all the recent research on model interpretability
○ Feature importance
○ Reason codes
○ Partial dependence plots
○ Surrogate models
○ Neuron activation visualization
○ ...
● These methods allow us to analyze and understand the behaviour of models as complicated as GBMs and neural networks
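As an illustration of the first item, scikit-learn's permutation importance quantifies which inputs a black-box model actually relies on. A minimal sketch on synthetic data, constructed so that only the first column carries signal:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)  # target depends only on column 0

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Shuffle each column in turn and measure how much the score drops.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print(result.importances_mean)  # column 0 dominates; columns 1-2 stay near zero
```

The same diagnostic applies unchanged to a GBM or a neural network wrapped in the scikit-learn estimator interface, which is the point of the slide: opacity of the model does not prevent this kind of analysis.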
No understanding?
True or False?
● There is a chance of overfitting to the test data with repeated submissions.
https://www.quora.com/Why-do-you-dislike-Kaggle
Overfitting
False
● This is a complete misunderstanding of how Kaggle works
○ Test data in a Kaggle competition is split into two parts: public and private
○ During the competition, models are evaluated only on the public part of the test set
○ Final results are based only on the private part of the test set
○ Thus the final model evaluation is based on completely new data
● One of the first lessons all competition participants learn very fast:
○ Do not overfit the leaderboard
○ Create a training/validation partition that reflects the test data as closely as possible, including seasonality effects and data drift
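That last lesson, making validation mirror the hidden test set, is often implemented as a time-ordered split rather than a random one. A minimal sketch with invented column names:

```python
import numpy as np
import pandas as pd

# Toy training table with a timestamp; in many competitions the hidden
# test set covers the period *after* the training data.
df = pd.DataFrame({
    "ts": pd.date_range("2017-01-01", periods=365, freq="D"),
    "target": np.arange(365),
}).sort_values("ts")

# Validate on the most recent 20% rather than a random sample, so the
# validation score is exposed to the same seasonality and drift that the
# private leaderboard will measure.
split = int(len(df) * 0.8)
train, valid = df.iloc[:split], df.iloc[split:]
print(len(train), len(valid))  # 292 73
```

A random split here would leak future days into training and produce an optimistic score that the private leaderboard would not confirm.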
True or False?
● In most cases the solution is an ensemble of algorithms and not
“productionizable”.
https://www.quora.com/Why-do-you-dislike-Kaggle
Difficult to put in production
Half true, half false
● Yes, in most cases the top models are complicated ensembles
● They are difficult to put into production if each model is deployed separately, one by one
● It is easy with an appropriately developed platform that can handle many models and blenders
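A blender in its simplest form is just an average of the member models' predictions, so the deployable artifact is one function rather than a re-engineering of each member. A minimal sketch with scikit-learn (member choice and data are illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Two quite different members; the ensemble blends their probabilities.
members = [
    LogisticRegression(max_iter=1000).fit(X_tr, y_tr),
    RandomForestClassifier(random_state=0).fit(X_tr, y_tr),
]

def blend(X_new):
    """The whole deployable surface: average the members' probabilities."""
    return np.mean([m.predict_proba(X_new)[:, 1] for m in members], axis=0)

preds = blend(X_te)
print(preds.shape)
```

Serving `blend` behind one endpoint is straightforward; what a platform adds is doing this uniformly for many models at once, with versioning and monitoring.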
True or False?
● Sometimes, a 0.01 difference in AUC can be the difference between 1st place and 294th place (out of 626). Those marginal gains take significant time and effort that may not be worthwhile in the face of other projects and priorities.
https://www.quora.com/How-similar-are-Kaggle-competitions-to-what-data-scientists-do
Marginal gain is not valuable
Not always true
● We ourselves often advise clients on the balance between time spent and model performance
● However, in the investment world a 0.01 AUC difference can mean millions of dollars of gain or loss
● The competitive aspect of a data science problem with small margins drives innovation
○ New preprocessing steps
○ New feature engineering ideas
○ Continuous testing of new algorithms and implementations (GBM, XGBoost, LightGBM, CatBoost)
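To make the AUC arithmetic concrete: on a set with P positives and N negatives, AUC is the fraction of positive/negative pairs ranked correctly, so fixing a single mis-ranked pair moves the score by exactly 1/(P*N). A tiny hand-made example (the numbers are invented, not from the quoted competition):

```python
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 0, 1, 1, 1, 0, 1]                    # 4 positives, 4 negatives
pred_a = [0.1, 0.2, 0.4, 0.35, 0.8, 0.9, 0.3, 0.7]   # one pos/neg pair mis-ranked
pred_b = [0.1, 0.2, 0.3, 0.40, 0.8, 0.9, 0.35, 0.7]  # that pair fixed

auc_a = roc_auc_score(y_true, pred_a)  # 15/16 = 0.9375
auc_b = roc_auc_score(y_true, pred_b)  # 16/16 = 1.0
print(auc_b - auc_a)  # 1/(4*4) = 0.0625
```

On a large test set 1/(P*N) is tiny, which is why late-competition gains of 0.01 or less require genuinely better ranking of many pairs, not luck.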
Kaggle and Data Science
● “Kaggle competitions cover a decent amount of what a data scientist does.
The two big missing pieces are:
○ 1. taking a business problem and specifying it as a data science problem
(which includes pulling the data and structuring it so that it addresses that
business problem).
○ 2. putting models into production.”
Anthony Goldbloom
Kaggle and Data Science
● Kaggle is a competition
● “Real” Data Science is... also a competition
Kaggle to “real life” Data Science
● DataRobot - created by top Kagglers
○ Owen Zhang, Product Advisor - Highest: 1st
○ Xavier Conort, Chief Data Scientist - Highest: 1st
○ Sergey Yurgenson, Director, AI Services - Highest: 1st
○ Jeremy Achin, CEO & Co-Founder - Highest: 20th
○ Tom de Godoy, CTO & Co-Founder - Highest: 20th
○ Amanda Schierz, Data Scientist - Highest: 24th
DataRobot automatically replicates the steps seasoned data scientists take. This allows
non-technical business users to create accurate predictive models and data scientists to add
to their existing tool set.
Kaggle and Data Science
一比一原版(Adelaide毕业证书)阿德莱德大学毕业证如何办理
 
一比一原版(Deakin毕业证书)迪肯大学毕业证如何办理
一比一原版(Deakin毕业证书)迪肯大学毕业证如何办理一比一原版(Deakin毕业证书)迪肯大学毕业证如何办理
一比一原版(Deakin毕业证书)迪肯大学毕业证如何办理
 
Predicting Product Ad Campaign Performance: A Data Analysis Project Presentation
Predicting Product Ad Campaign Performance: A Data Analysis Project PresentationPredicting Product Ad Campaign Performance: A Data Analysis Project Presentation
Predicting Product Ad Campaign Performance: A Data Analysis Project Presentation
 
一比一原版(UniSA毕业证书)南澳大学毕业证如何办理
一比一原版(UniSA毕业证书)南澳大学毕业证如何办理一比一原版(UniSA毕业证书)南澳大学毕业证如何办理
一比一原版(UniSA毕业证书)南澳大学毕业证如何办理
 
FP Growth Algorithm and its Applications
FP Growth Algorithm and its ApplicationsFP Growth Algorithm and its Applications
FP Growth Algorithm and its Applications
 
一比一原版(UPenn毕业证)宾夕法尼亚大学毕业证成绩单
一比一原版(UPenn毕业证)宾夕法尼亚大学毕业证成绩单一比一原版(UPenn毕业证)宾夕法尼亚大学毕业证成绩单
一比一原版(UPenn毕业证)宾夕法尼亚大学毕业证成绩单
 
哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样
哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样
哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样
 
1.Seydhcuxhxyxhccuuxuxyxyxmisolids 2019.pptx
1.Seydhcuxhxyxhccuuxuxyxyxmisolids 2019.pptx1.Seydhcuxhxyxhccuuxuxyxyxmisolids 2019.pptx
1.Seydhcuxhxyxhccuuxuxyxyxmisolids 2019.pptx
 
Adjusting primitives for graph : SHORT REPORT / NOTES
Adjusting primitives for graph : SHORT REPORT / NOTESAdjusting primitives for graph : SHORT REPORT / NOTES
Adjusting primitives for graph : SHORT REPORT / NOTES
 
一比一原版(UofS毕业证书)萨省大学毕业证如何办理
一比一原版(UofS毕业证书)萨省大学毕业证如何办理一比一原版(UofS毕业证书)萨省大学毕业证如何办理
一比一原版(UofS毕业证书)萨省大学毕业证如何办理
 
一比一原版(QU毕业证)皇后大学毕业证成绩单
一比一原版(QU毕业证)皇后大学毕业证成绩单一比一原版(QU毕业证)皇后大学毕业证成绩单
一比一原版(QU毕业证)皇后大学毕业证成绩单
 
Criminal IP - Threat Hunting Webinar.pdf
Criminal IP - Threat Hunting Webinar.pdfCriminal IP - Threat Hunting Webinar.pdf
Criminal IP - Threat Hunting Webinar.pdf
 
一比一原版(UIUC毕业证)伊利诺伊大学|厄巴纳-香槟分校毕业证如何办理
一比一原版(UIUC毕业证)伊利诺伊大学|厄巴纳-香槟分校毕业证如何办理一比一原版(UIUC毕业证)伊利诺伊大学|厄巴纳-香槟分校毕业证如何办理
一比一原版(UIUC毕业证)伊利诺伊大学|厄巴纳-香槟分校毕业证如何办理
 
一比一原版(UVic毕业证)维多利亚大学毕业证成绩单
一比一原版(UVic毕业证)维多利亚大学毕业证成绩单一比一原版(UVic毕业证)维多利亚大学毕业证成绩单
一比一原版(UVic毕业证)维多利亚大学毕业证成绩单
 
一比一原版(UofM毕业证)明尼苏达大学毕业证成绩单
一比一原版(UofM毕业证)明尼苏达大学毕业证成绩单一比一原版(UofM毕业证)明尼苏达大学毕业证成绩单
一比一原版(UofM毕业证)明尼苏达大学毕业证成绩单
 
Machine learning and optimization techniques for electrical drives.pptx
Machine learning and optimization techniques for electrical drives.pptxMachine learning and optimization techniques for electrical drives.pptx
Machine learning and optimization techniques for electrical drives.pptx
 
一比一原版(CBU毕业证)卡普顿大学毕业证如何办理
一比一原版(CBU毕业证)卡普顿大学毕业证如何办理一比一原版(CBU毕业证)卡普顿大学毕业证如何办理
一比一原版(CBU毕业证)卡普顿大学毕业证如何办理
 

Kaggle and data science

  • 1. © DataRobot, Inc. All rights reserved. Kaggle and Data Science Japan, 2018
  • 2. Sergey Yurgenson Director, Advanced Data Science Services Kaggle Grandmaster
  • 3. Kaggle ● Kaggle is a platform for data science competitions ● It was created by Anthony Goldbloom in 2010 in Australia and later moved to San Francisco ● In March 2017 it was acquired by Google ● Many other start-ups are now trying to replicate the same idea, but Kaggle remains the best-known name in the data science community ● To date Kaggle has hosted more than 280 competitions and has more than 1 million members from more than 190 countries
  • 4. Kaggle competitions ● Most Kaggle competitions are predictive modeling competitions ● Participants are provided with training data to train their models and test data with unknown targets ● Participants calculate predictions for the test data and submit those predictions to the Kaggle platform ● Accuracy of the predictions is evaluated using a predefined objective metric, and the result is reported back to participants ● Model performance of all participants is publicly available, so participants can compare the quality of their models with those of others ● Many competitions have monetary prizes for top finishers
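To illustrate how a predefined objective metric scores a submission, here is a minimal pure-Python sketch of AUC, a metric used in many Kaggle competitions. The hidden targets and submitted probabilities are made-up toy data, not from any real competition:

```python
def auc(y_true, y_score):
    """AUC: probability that a randomly chosen positive example
    is scored higher than a randomly chosen negative one."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    # Count concordant pairs; ties count as half.
    pairs = [(p > n) + 0.5 * (p == n) for p in pos for n in neg]
    return sum(pairs) / len(pairs)

hidden = [0, 1, 1, 0, 1]                # targets known only to the platform
submission = [0.1, 0.9, 0.2, 0.3, 0.6]  # participant's predicted probabilities
print(f"Leaderboard score (AUC): {auc(hidden, submission):.4f}")  # → 0.8333
```

In a real competition the platform computes this on targets the participant never sees, which is what makes the leaderboard an honest comparison.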
  • 5. Kaggle competitions
  • 6. Kaggle ranking ● Based on competition performance, Kaggle ranks members using points and awards titles for top finishes in competitions ● For example, to earn the title of Master a member needs one gold medal and two silver medals. For competitions with 1000 participants, that means finishing once in the top 10 and twice in the top 50.
  • 7. Kaggle ranking
  • 8. Kaggle and Data Science
  • 9. Why do you dislike Kaggle? ● Kaggle competitions do not have much in common with real Data Science ○ The problems are already well formulated with metrics predefined. In an industry setting there is ambiguity, and knowing what to solve is one of the key steps towards a solution. ○ Data in most cases is already provided and is relatively clean. ○ The goal is leaderboard driven rather than understanding driven. Winning a competition, not understanding why an approach works, is the top priority. Results may not be trustworthy. ○ There are chances of overfitting to test data with repeated submissions. ○ In most cases the solution is an ensemble of algorithms and not “productionizable”. https://www.quora.com/Why-do-you-dislike-Kaggle
  • 10. True or False? ● “The problems are already well formulated with metrics predefined. In an industry setting there is ambiguity, and knowing what to solve is one of the key steps towards a solution.” https://www.quora.com/Why-do-you-dislike-Kaggle
  • 11. Problem is well formulated Mostly true, however... ● The need for criteria is an inherent property of any competition. ● In the real world, not all data scientists are free to select and reformulate the problem. Many problems arrive already defined, with specific success criteria assigned. ● We learn many subjects and skills by solving predefined problems and doing predefined exercises: we learn math and physics by solving textbook problems that are already formulated. By solving problems we also learn how to formulate them and what approach suits a particular data science situation. ● We also have to admit that evaluating the business value of solving a problem is completely out of scope of Kaggle competitions, while business value analysis and problem prioritization are an important part of many real-life data science projects.
  • 12. True or False? ● “Data in most cases is already provided and is relatively clean.” https://www.quora.com/Why-do-you-dislike-Kaggle
  • 13. Data is clean Half true ● In many competitions datasets are ○ Very big ○ Spread across multiple tables ○ Partly duplicated or mislabeled ○ A combination of structured and unstructured data ● Some competitions encourage a search for additional sources of data ● Many data leaks ● Often feature names and meanings are not provided, making the problem even more difficult than in the real world ● Data may be intentionally distorted to conform to data privacy laws
  • 14. Data is clean ● Complex data structure ● Big datasets ● No meaningful feature names
  • 15. Data is clean ● Kaggle competitions teach unique data manipulation skills: ○ Dealing with data under hardware limitations: efficient code, smart sampling, clever encoding… ○ Using EDA to uncover the meaning of data without relying on labels or other provided information ○ Discovering data leaks through data analysis
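One of those skills — smart sampling under memory limits — can be sketched with reservoir sampling, which draws a uniform random sample from a stream in a single pass. The function name and toy data below are illustrative, not from the slides:

```python
import random

def reservoir_sample(rows, k, seed=0):
    """Keep a uniform random sample of k items from a stream,
    using O(k) memory regardless of the stream's length (Algorithm R)."""
    rng = random.Random(seed)
    sample = []
    for i, row in enumerate(rows):
        if i < k:
            sample.append(row)
        else:
            j = rng.randrange(i + 1)  # item i replaces a kept one with prob k/(i+1)
            if j < k:
                sample[j] = row
    return sample

# Works on any iterator, e.g. a multi-gigabyte CSV read line by line.
print(reservoir_sample(range(1_000_000), k=5))
```

This is the kind of trick that lets a laptop explore a dataset far larger than its RAM before committing to a modeling approach.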
  • 16. True or False? ● The goal is more leaderboard driven rather than understanding driven. Winning a competition versus why an approach works is a top priority. Results may not be trustworthy. https://www.quora.com/Why-do-you-dislike-Kaggle
  • 17. No understanding True, but maybe not that important ● Assumes that a model we cannot understand is less valuable than one we can understand ○ A model is not necessarily used for knowledge discovery ○ In real life we often use and rely on things we do not completely understand ○ If something we do not understand cannot be trustworthy, then how do we ever trust other people? ○ Even a complex machine learning model may be a simplification of an even more complex real system
  • 18. No understanding ● Ignores all the recent research on model interpretability ○ Feature importance ○ Reason codes ○ Partial dependence plots ○ Surrogate models ○ Neuron activation visualization ○ ... ● These methods allow us to analyze and understand the behaviour of models as complicated as GBMs and neural networks
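One of these methods, permutation feature importance, is model-agnostic and easy to sketch: shuffle one feature's column and measure how much a chosen metric degrades. The toy black-box model and data below are illustrative assumptions, not anything from the slides:

```python
import random

def permutation_importance(model, X, y, metric, feature, n_repeats=10, seed=0):
    """Average drop in the metric when one feature's column is shuffled.
    Works on any black-box predictor: GBMs, neural networks, ensembles."""
    rng = random.Random(seed)
    base = metric(y, [model(row) for row in X])
    drops = []
    for _ in range(n_repeats):
        col = [row[feature] for row in X]
        rng.shuffle(col)
        perturbed = [dict(row, **{feature: v}) for row, v in zip(X, col)]
        drops.append(base - metric(y, [model(row) for row in perturbed]))
    return sum(drops) / len(drops)

# Toy black-box model that only looks at feature "a"; accuracy as the metric.
rng = random.Random(1)
X = [{"a": rng.random(), "b": rng.random()} for _ in range(100)]
y = [1 if row["a"] > 0.5 else 0 for row in X]
model = lambda row: 1 if row["a"] > 0.5 else 0
accuracy = lambda true, pred: sum(t == p for t, p in zip(true, pred)) / len(true)

print(permutation_importance(model, X, y, accuracy, "a"))  # large drop
print(permutation_importance(model, X, y, accuracy, "b"))  # 0.0: model ignores "b"
```

Because the procedure only needs predictions, it applies unchanged to a winning Kaggle ensemble: no access to the model's internals is required.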
  • 19. No understanding?
  • 20. True or False? ● There are chances of overfitting to test data with repeated submissions. https://www.quora.com/Why-do-you-dislike-Kaggle
  • 21. Overfitting False ● Complete misunderstanding of how Kaggle works ○ Test data in a Kaggle competition is split into two parts: public and private ○ During the competition, models are evaluated only on the public part of the test set ○ Final results are based only on the private part of the test dataset ○ Thus the final model evaluation is based on completely new data ● One of the first lessons all competition participants learn very fast: ○ Do not overfit the leaderboard ○ Create a training/validation partition that reflects the test data as closely as possible, including seasonality effects and data drift
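The public/private mechanism can be sketched as a random split of test-row indices; note that the 30% public fraction and the seed below are illustrative assumptions, not Kaggle's actual values:

```python
import random

def split_public_private(n_test, public_frac=0.3, seed=42):
    """Split test-row indices into a public part (scored during the
    competition) and a private part (used only for the final ranking)."""
    rng = random.Random(seed)
    idx = list(range(n_test))
    rng.shuffle(idx)
    cut = int(n_test * public_frac)
    return set(idx[:cut]), set(idx[cut:])

public, private = split_public_private(1000)
print(len(public), len(private))  # → 300 700
```

Because scores shown during the competition never touch the private rows, a model tuned submission-by-submission to the public leaderboard gains nothing on the final evaluation.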
  • 22. True or False? ● In most cases the solution is an ensemble of algorithms and not “productionizable”. https://www.quora.com/Why-do-you-dislike-Kaggle
  • 23. Difficult to put in production Half true, half false ● Yes, in most cases top models are complicated ensembles ● Difficult to put in production if one deploys each model separately, one by one ● Easy if one uses an appropriately developed platform that can handle many models and blenders
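A minimal sketch of such a blender: a weighted average of several models' per-row predictions, which is itself a single function a platform can deploy. The model outputs below are toy numbers, not real predictions:

```python
def blend(predictions, weights=None):
    """Element-wise weighted average of several models' prediction lists."""
    if weights is None:
        weights = [1 / len(predictions)] * len(predictions)  # equal weights
    return [sum(w * preds[i] for w, preds in zip(weights, predictions))
            for i in range(len(predictions[0]))]

model_a = [0.2, 0.8, 0.6]  # toy probabilities from two models
model_b = [0.4, 0.6, 0.9]
print(blend([model_a, model_b]))               # simple average
print(blend([model_a, model_b], [0.7, 0.3]))   # weighted toward model_a
```

Serving an ensemble then reduces to serving each base model plus this one deterministic combination step, which is why a platform built for many models makes "not productionizable" much less true.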
  • 24. True or False? ● Sometimes a 0.01 difference in AUC can be the difference between 1st place and 294th place (out of 626). Those marginal gains take significant time and effort that may not be worthwhile in the face of other projects and priorities. https://www.quora.com/How-similar-are-Kaggle-competitions-to-what-data-scientists-do
  • 25. Marginal gain is not valuable Not always true ● Often we ourselves advise clients on the balance between time spent and model performance ● However, in the investment world a 0.01 AUC difference can mean millions of dollars of gain or loss ● The competitive aspect of a data science problem with small margins drives innovation ○ New preprocessing steps ○ New feature engineering ideas ○ Continuous testing of new algorithms and implementations (GBM → XGBoost → LightGBM → CatBoost)
  • 26. Kaggle and Data Science ● “Kaggle competitions cover a decent amount of what a data scientist does. The two big missing pieces are: ○ 1. taking a business problem and specifying it as a data science problem (which includes pulling the data and structuring it so that it addresses that business problem). ○ 2. putting models into production.” Anthony Goldbloom
  • 27. Kaggle and Data Science ● Kaggle is a competition ● “Real” Data Science is ... also competition
  • 28. Kaggle to “real life” Data Science ● DataRobot - created by top Kagglers Owen Zhang Product Advisor Highest: #1 Xavier Conort Chief Data Scientist Highest: 1st Sergey Yurgenson Director, AI Services Highest: 1st Jeremy Achin CEO & Co-Founder Highest: 20th Tom de Godoy CTO & Co-Founder Highest: 20th Amanda Schierz Data Scientist Highest: 24th DataRobot automatically replicates the steps seasoned data scientists take. This allows non-technical business users to create accurate predictive models and data scientists to add to their existing tool set.
  • 29. Kaggle and Data Science