Our fall 12-Week Data Science Bootcamp starts on Sept 21st, 2015. Apply now to get a spot!
If you are hiring data scientists, call us at (1)888-752-7585 or email info@nycdatascience.com to share your openings and set up interviews with our excellent students.
---------------------------------------------------------------
Come join our meetup and learn how easily you can use R for advanced machine learning. In this meetup, we will demonstrate how to understand and use XGBoost for Kaggle competitions. Tong is in Canada and will run the session remotely through Google Hangouts.
---------------------------------------------------------------
Speaker Bio:
Tong is a data scientist at Supstat Inc and a master's student in Data Mining. He has been an active R programmer and developer for 5 years. He is the author of the R package XGBoost, one of the most popular and contest-winning tools on kaggle.com today.
Prerequisites (if any): R / Calculus
Preparation: A laptop with R installed. Windows users might need to have RTools installed as well.
Agenda:
Introduction to XGBoost
Real World Application
Model Specification
Parameter Introduction
Advanced Features
Kaggle Winning Solution
Event arrangement:
6:45pm Doors open. Come early to network, grab a beer and settle in.
7:00-9:00pm XGBoost Demo
Reference:
https://github.com/dmlc/xgboost
4. Introduction
Nowadays we have plenty of machine learning models. The most well-known are:
- Linear/Logistic Regression
- k-Nearest Neighbours
- Support Vector Machines
- Tree-based Models
  - Decision Tree
  - Random Forest
  - Gradient Boosting Machine
- Neural Networks
5. Introduction
XGBoost is short for eXtreme Gradient Boosting. It is:
- An open-sourced tool
  - Computation in C++
  - R/Python/Julia interfaces provided
- A variant of the gradient boosting machine
  - Tree-based model
- The winning model for several Kaggle competitions
6. Introduction
- XGBoost is currently hosted on GitHub.
- The primary author of the model and the C++ implementation is Tianqi Chen.
- The author of the R package is Tong He.
7. Introduction
XGBoost is widely used in Kaggle competitions. The reasons to choose XGBoost include:
- Easy to use
  - Easy to install.
  - Highly developed R/Python interface for users.
- Efficiency
  - Automatic parallel computation on a single machine.
  - Can be run on a cluster.
- Accuracy
  - Good results for most data sets.
- Feasibility
  - Customized objective and evaluation.
  - Tunable parameters.
8. Basic Walkthrough
We introduce the R package for XGBoost. To install, please run
devtools::install_github('dmlc/xgboost', subdir = 'R-package')
This command downloads the package from GitHub and compiles it automatically on your machine. Therefore we need RTools installed on Windows.
9. Basic Walkthrough
XGBoost provides a data set to demonstrate its usage.
This data set includes information on some kinds of mushrooms. The features are binary, indicating whether the mushroom has a given characteristic. The target variable is whether they are poisonous.
require(xgboost)
## Loading required package: xgboost
data(agaricus.train, package='xgboost')
data(agaricus.test, package='xgboost')
train = agaricus.train
test = agaricus.test
10. Basic Walkthrough
Let's investigate the data first.
We can see that the data is a dgCMatrix class object. This is a sparse matrix class from the package Matrix. A sparse matrix is more memory-efficient for certain kinds of data.
str(train$data)
## Formal class 'dgCMatrix' [package "Matrix"] with 6 slots
## ..@ i : int [1:143286] 2 6 8 11 18 20 21 24 28 32 ...
## ..@ p : int [1:127] 0 369 372 3306 5845 6489 6513 8380 8384 10991 ...
## ..@ Dim : int [1:2] 6513 126
## ..@ Dimnames:List of 2
## .. ..$ : NULL
## .. ..$ : chr [1:126] "cap-shape=bell" "cap-shape=conical" "cap-shape=convex" "cap-shape=f
## ..@ x : num [1:143286] 1 1 1 1 1 1 1 1 1 1 ...
## ..@ factors : list()
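For intuition, here is a minimal sketch (with made-up values, not part of the original deck) of building such a sparse matrix by hand using the Matrix package; an object like this can be passed directly as the data argument of xgboost:
library(Matrix)
# A 3 x 3 binary feature matrix with only three non-zero entries stored
m = sparseMatrix(i = c(1, 2, 3), j = c(1, 3, 2), x = 1, dims = c(3, 3))
class(m)  # "dgCMatrix", the same class as train$data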
11. Basic Walkthrough
To use XGBoost to classify poisonous mushrooms, the minimum information we need to provide is:
1. Input features
   - XGBoost allows dense and sparse matrices as the input.
2. Target variable
   - A numeric vector. Use integers starting from 0 for classification, or real values for regression.
3. Objective
   - For regression use 'reg:linear'.
   - For binary classification use 'binary:logistic'.
4. Number of iterations
   - The number of trees added to the model.
12. Basic Walkthrough
To run XGBoost, we can use the following command:
bst = xgboost(data = train$data, label = train$label,
nround = 2, objective = "binary:logistic")
## [0] train-error:0.000614
## [1] train-error:0.001228
The output is the classification error on the training data set.
13. Basic Walkthrough
Sometimes we might want to measure the classification performance by the 'Area Under the Curve' metric:
bst = xgboost(data = train$data, label = train$label, nround = 2,
objective = "binary:logistic", eval_metric = "auc")
## [0] train-auc:0.999238
## [1] train-auc:0.999238
14. Basic Walkthrough
To predict, you can simply write
pred = predict(bst, test$data)
head(pred)
## [1] 0.2582498 0.7433221 0.2582498 0.2582498 0.2576509 0.2750908
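The outputs are predicted probabilities rather than class labels. As a small sketch (using the conventional 0.5 cutoff, our choice rather than a package default), we can convert them into 0/1 labels and compute the test error:
pred_label = as.numeric(pred > 0.5)
# proportion of mislabelled mushrooms in the test set
mean(pred_label != test$label)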
15. Basic Walkthrough
Cross validation is an important method to measure the model's predictive power, as well as the degree of overfitting. XGBoost provides a convenient function to do cross validation in one line of code.
Notice that the difference in arguments between xgb.cv and xgboost is the additional nfold parameter. To perform cross validation on a certain set of parameters, we just need to copy them to the xgb.cv function and add the number of folds.
cv.res = xgb.cv(data = train$data, nfold = 5, label = train$label, nround = 2,
objective = "binary:logistic", eval_metric = "auc")
## [0] train-auc:0.998668+0.000354 test-auc:0.998497+0.001380
## [1] train-auc:0.999187+0.000785 test-auc:0.998700+0.001536
16. Basic Walkthrough
xgb.cv returns a data.table object containing the cross validation results. This is helpful for
choosing the correct number of iterations.
cv.res
## train.auc.mean train.auc.std test.auc.mean test.auc.std
## 1: 0.998668 0.000354 0.998497 0.001380
## 2: 0.999187 0.000785 0.998700 0.001536
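As a sketch, one way to use this table is to pick the iteration with the best mean test AUC (column names as printed above):
best_nround = which.max(cv.res$test.auc.mean)
best_nround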
18. Higgs Boson Competition
The debut of XGBoost was in the Higgs Boson competition.
Tianqi introduced the tool along with benchmark code which achieved the top 10% at the beginning of the competition.
By the end of the competition, it was already the most widely used tool there.
19. Higgs Boson Competition
XGBoost offers the script on GitHub.
To run the script, prepare a data directory and download the competition data into this directory.
21. Higgs Boson Competition
Then we can read in the data:
dtrain = read.csv("data/training.csv", header=TRUE)
dtrain[33] = dtrain[33] == "s"
label = as.numeric(dtrain[[33]])
data = as.matrix(dtrain[2:31])
# testsize (the size of the test set) is defined earlier in the full script
weight = as.numeric(dtrain[[32]]) * testsize / length(label)
sumwpos <- sum(weight * (label==1.0))
sumwneg <- sum(weight * (label==0.0))
22. Higgs Boson Competition
The data contains missing values and they are marked as -999. We can construct an
xgb.DMatrix object containing the information of weight and missing.
xgmat = xgb.DMatrix(data, label = label, weight = weight, missing = -999.0)
23. Higgs Boson Competition
The next step is to set the basic parameters:
param = list("objective" = "binary:logitraw",
"scale_pos_weight" = sumwneg / sumwpos,
"bst:eta" = 0.1,
"bst:max_depth" = 6,
"eval_metric" = "auc",
"eval_metric" = "ams@0.15",
"silent" = 1,
"nthread" = 16)
24. Higgs Boson Competition
We then start the training step
bst = xgboost(params = param, data = xgmat, nround = 120)
25. Higgs Boson Competition
Then we read in the test data
dtest = read.csv("data/test.csv", header=TRUE)
data = as.matrix(dtest[2:31])
xgmat = xgb.DMatrix(data, missing = -999.0)
26. Higgs Boson Competition
We can now make predictions on the test data set.
ypred = predict(bst, xgmat)
27. Higgs Boson Competition
Finally we output the prediction according to the required format.
Please submit the result to see your performance :)
rorder = rank(ypred, ties.method="first")
threshold = 0.15
ntop = length(rorder) - as.integer(threshold*length(rorder))
plabel = ifelse(rorder > ntop, "s", "b")
# idx (the EventId column of the test set) is extracted earlier in the full script
outdata = list("EventId" = idx,
               "RankOrder" = rorder,
               "Class" = plabel)
write.csv(outdata, file = "submission.csv", quote=FALSE, row.names=FALSE)
28. Higgs Boson Competition
Besides the good performance, efficiency is also a highlight of XGBoost.
A plot of the running time on the Higgs Boson data set accompanied this slide (figure not reproduced here).
29. Higgs Boson Competition
After some feature engineering and parameter tuning, one can reach around 25th place on the leaderboard with a single model. This is an article written by a former physicist introducing his solution with a single XGBoost model:
https://no2147483647.wordpress.com/2014/09/17/winning-solution-of-kaggle-higgs-competition-what-a-single-model-can-do/
In our post-competition attempts, we achieved 11th on the leaderboard with a single XGBoost model.
31. Training Objective
To understand the other parameters, one needs a basic understanding of the model behind them.
Suppose we have $K$ trees; the model is
$$\sum_{k=1}^{K} f_k$$
where each $f_k$ is the prediction from a decision tree. The model is a collection of decision trees.
32. Training Objective
Having all the decision trees, we make a prediction by
$$\hat{y}_i = \sum_{k=1}^{K} f_k(x_i)$$
where $x_i$ is the feature vector for the $i$-th data point.
Similarly, the prediction at the $t$-th step can be defined as
$$\hat{y}_i^{(t)} = \sum_{k=1}^{t} f_k(x_i)$$
33. Training Objective
To train the model, we need to optimize a loss function.
Typically, we use
- Root Mean Squared Error for regression:
  $$L = \frac{1}{N} \sum_{i=1}^{N} (y_i - \hat{y}_i)^2$$
- LogLoss for binary classification:
  $$L = -\frac{1}{N} \sum_{i=1}^{N} \left( y_i \log(p_i) + (1 - y_i) \log(1 - p_i) \right)$$
- mlogloss for multi-classification:
  $$L = -\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{M} y_{i,j} \log(p_{i,j})$$
34. Training Objective
Regularization is another important part of the model. A good regularization term controls the complexity of the model, which prevents overfitting.
Define
$$\Omega = \gamma T + \frac{1}{2} \lambda \sum_{j=1}^{T} w_j^2$$
where $T$ is the number of leaves, and $w_j$ is the score on the $j$-th leaf.
35. Training Objective
Putting the loss function and regularization together, we have the objective of the model:
$$Obj = L + \Omega$$
where the loss function controls the predictive power, and regularization controls the simplicity.
36. Training Objective
In XGBoost, we use gradient descent to optimize the objective.
Given an objective $Obj(y, \hat{y})$ to optimize, gradient descent is an iterative technique which calculates
$$\partial_{\hat{y}} Obj(y, \hat{y})$$
at each iteration. Then we improve $\hat{y}$ along the direction of the gradient to minimize the objective.
37. Training Objective
Recall the definition of the objective $Obj = L + \Omega$. For an iterative algorithm we can re-define the objective function as
$$Obj^{(t)} = \sum_{i=1}^{N} L(y_i, \hat{y}_i^{(t)}) + \sum_{i=1}^{t} \Omega(f_i) = \sum_{i=1}^{N} L\left(y_i, \hat{y}_i^{(t-1)} + f_t(x_i)\right) + \sum_{i=1}^{t} \Omega(f_i)$$
To optimize it by gradient descent, we need to calculate the gradient. The performance can also be improved by considering both the first and the second order gradients:
$$\partial_{\hat{y}_i^{(t)}} Obj^{(t)}, \qquad \partial^2_{\hat{y}_i^{(t)}} Obj^{(t)}$$
38. Training Objective
Since we don't have derivatives for every objective function, we calculate the second order Taylor approximation of it:
$$Obj^{(t)} \simeq \sum_{i=1}^{N} \left[ L(y_i, \hat{y}^{(t-1)}) + g_i f_t(x_i) + \frac{1}{2} h_i f_t^2(x_i) \right] + \sum_{i=1}^{t} \Omega(f_i)$$
where
- $g_i = \partial_{\hat{y}^{(t-1)}} l(y_i, \hat{y}^{(t-1)})$
- $h_i = \partial^2_{\hat{y}^{(t-1)}} l(y_i, \hat{y}^{(t-1)})$
39. Training Objective
Removing the constant terms, we get
$$Obj^{(t)} = \sum_{i=1}^{n} \left[ g_i f_t(x_i) + \frac{1}{2} h_i f_t^2(x_i) \right] + \Omega(f_t)$$
This is the objective at the $t$-th step. Our goal is to find an $f_t$ to optimize it.
40. Tree Building Algorithm
The tree structure in XGBoost leads to the core problem:
how can we find a tree that improves the prediction along the gradient?
41. Tree Building Algorithm
Every decision tree looks like this (tree diagram omitted in this text version):
Each data point flows to one of the leaves, following the direction on each node.
42. Tree Building Algorithm
The core concepts are:
- Internal nodes
  - Each internal node splits the flow of data points by one of the features.
  - The condition on the edge specifies what data can flow through.
- Leaves
  - Data points that reach a leaf are assigned a weight.
  - The weight is the prediction.
43. Tree Building Algorithm
Two key questions for building a decision tree are
1. How to find a good structure?
2. How to assign prediction score?
We want to solve these two problems with the idea of gradient descent.
44. Tree Building Algorithm
Let us assume that we already have the solution to question 1.
We can mathematically define a tree as
$$f_t(x) = w_{q(x)}$$
where $q(x)$ is a "directing" function which assigns every data point to one of the leaves.
This definition describes the prediction process on a tree as:
- Assign the data point $x$ to a leaf by $q$.
- Assign the corresponding score $w_{q(x)}$ on the $q(x)$-th leaf to the data point.
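As a toy sketch of this definition (all values made up), $q$ can be represented as a vector of leaf assignments, and the tree's prediction is then a simple lookup:
w = c(0.5, -0.2, 0.1)   # scores of the T = 3 leaves
q = c(1, 3, 2, 1)       # leaf index assigned to each of 4 data points
f_t = w[q]              # the tree's prediction for every data point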
45. Tree Building Algorithm
Define the index set
$$I_j = \{ i \mid q(x_i) = j \}$$
This set contains the indices of the data points that are assigned to the $j$-th leaf.
46. Tree Building Algorithm
Then we rewrite the objective as
$$Obj^{(t)} = \sum_{i=1}^{n} \left[ g_i f_t(x_i) + \frac{1}{2} h_i f_t^2(x_i) \right] + \gamma T + \frac{1}{2} \lambda \sum_{j=1}^{T} w_j^2 = \sum_{j=1}^{T} \left[ \left( \sum_{i \in I_j} g_i \right) w_j + \frac{1}{2} \left( \sum_{i \in I_j} h_i + \lambda \right) w_j^2 \right] + \gamma T$$
Since all the data points on the same leaf share the same prediction, this form sums the predictions by leaves.
47. Tree Building Algorithm
It is a quadratic problem in $w_j$, so it is easy to find the best $w_j$ to optimize $Obj$:
$$w_j^* = -\frac{\sum_{i \in I_j} g_i}{\sum_{i \in I_j} h_i + \lambda}$$
The corresponding value of $Obj$ is
$$Obj^{(t)} = -\frac{1}{2} \sum_{j=1}^{T} \frac{\left( \sum_{i \in I_j} g_i \right)^2}{\sum_{i \in I_j} h_i + \lambda} + \gamma T$$
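To make the formulas concrete, here is a toy sketch computing the optimal weight of a single leaf and its contribution to the objective (the gradient values, lambda and gamma are made up):
g = c(0.5, -0.3, 0.2)    # first order gradients of the points in one leaf
h = c(0.10, 0.20, 0.15)  # second order gradients
lambda = 1; gamma = 0
w_star = -sum(g) / (sum(h) + lambda)                    # optimal leaf weight
obj_leaf = -0.5 * sum(g)^2 / (sum(h) + lambda) + gamma  # its objective value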
48. Tree Building Algorithm
The leaf score
$$w_j = -\frac{\sum_{i \in I_j} g_i}{\sum_{i \in I_j} h_i + \lambda}$$
relates to:
- the first and second order gradients of the loss function, $g$ and $h$
- the regularization parameter $\lambda$
49. Tree Building Algorithm
Now we come back to the first question: How to find a good structure?
We can further split it into two sub-questions:
1. How to choose the feature to split?
2. When to stop the split?
50. Tree Building Algorithm
In each split, we want to greedily find the best splitting point that optimizes the objective.
For each feature:
1. Sort the values.
2. Scan for the best splitting point.
3. Choose the best feature.
51. Tree Building Algorithm
Now we give a definition of "the best split" in terms of the objective.
Every time we do a split, we are changing a leaf into an internal node.
52. Tree Building Algorithm
Let
- $I$ be the set of indices of data points assigned to this node,
- $I_L$ and $I_R$ be the sets of indices of data points assigned to the two new leaves.
Recall that the best value of the objective on the $j$-th leaf is
$$Obj^{(t)} = -\frac{1}{2} \frac{\left( \sum_{i \in I_j} g_i \right)^2}{\sum_{i \in I_j} h_i + \lambda} + \gamma$$
53. Tree Building Algorithm
The gain of the split is
$$gain = \frac{1}{2} \left[ \frac{\left( \sum_{i \in I_L} g_i \right)^2}{\sum_{i \in I_L} h_i + \lambda} + \frac{\left( \sum_{i \in I_R} g_i \right)^2}{\sum_{i \in I_R} h_i + \lambda} - \frac{\left( \sum_{i \in I} g_i \right)^2}{\sum_{i \in I} h_i + \lambda} \right] - \gamma$$
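Putting the last few slides together, here is a minimal R sketch (our own illustration, not the actual C++ implementation) of the greedy scan over one numeric feature, returning the candidate split with the largest gain; the lambda and gamma defaults are illustrative:
best_split_1d = function(x, g, h, lambda = 1, gamma = 0) {
  ord = order(x)                  # 1. sort by feature value
  g = g[ord]; h = h[ord]
  G = sum(g); H = sum(h)
  GL = cumsum(g); HL = cumsum(h)  # left-child sums for every candidate split
  GR = G - GL; HR = H - HL
  # 2. the gain formula above, evaluated at every candidate split point
  gain = 0.5 * (GL^2 / (HL + lambda) + GR^2 / (HR + lambda) -
                G^2 / (H + lambda)) - gamma
  gain = gain[-length(gain)]      # splitting after the last point is no split
  list(position = which.max(gain), gain = max(gain))
}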
54. Tree Building Algorithm
To build a tree, we find the best splitting point recursively until we reach the maximum depth.
Then we prune out the nodes with a negative gain in a bottom-up order.
55. Tree Building Algorithm
XGBoost can handle missing values in the data.
For each node, we guide all the data points with a missing value
- to the left subnode, and calculate the maximum gain;
- to the right subnode, and calculate the maximum gain;
- and choose the direction with the larger gain.
Finally, every node has a "default direction" for missing values.
56. Tree Building Algorithm
To sum up, the outline of the algorithm is:
- Iterate for nround times
  - Grow the tree to the maximum depth
    - Find the best splitting point
    - Assign weights to the two new leaves
  - Prune the tree to delete nodes with negative gain
58. Parameter Introduction
XGBoost has plenty of parameters. We can group them into:
1. General parameters
   - Number of threads
2. Booster parameters
   - Step size
   - Regularization
3. Task parameters
   - Objective
   - Evaluation metric
59. Parameter Introduction
After the introduction of the model, we can understand the parameters provided in XGBoost.
To check the parameter list, one can look into:
- The documentation of xgb.train.
- The documentation in the repository.
61. Parameter Introduction
Parameters for the Tree Booster:
- eta
  - Step size shrinkage used in updates to prevent overfitting.
  - Range [0, 1], default 0.3.
- gamma
  - Minimum loss reduction required to make a split.
  - Range [0, ∞], default 0.
62. Parameter Introduction
Parameters for the Tree Booster:
- max_depth
  - Maximum depth of a tree.
  - Range [1, ∞], default 6.
- min_child_weight
  - Minimum sum of instance weight needed in a child.
  - Range [0, ∞], default 1.
- max_delta_step
  - Maximum delta step we allow each tree's weight estimation to be.
  - Range [0, ∞], default 0.
63. Parameter Introduction
Parameters for the Tree Booster:
- subsample
  - Subsample ratio of the training instances.
  - Range (0, 1], default 1.
- colsample_bytree
  - Subsample ratio of columns when constructing each tree.
  - Range (0, 1], default 1.
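As a hedged sketch of how these tree booster parameters are supplied in R, they can be passed directly to xgboost (or inside the params list of xgb.train); the values here are illustrative, not tuned recommendations:
bst = xgboost(data = train$data, label = train$label, nround = 10,
              objective = "binary:logistic",
              eta = 0.1, max_depth = 4, min_child_weight = 1,
              subsample = 0.8, colsample_bytree = 0.8)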
64. Parameter Introduction
Parameters for the Linear Booster:
- lambda
  - L2 regularization term on weights.
  - Default 0.
- alpha
  - L1 regularization term on weights.
  - Default 0.
- lambda_bias
  - L2 regularization term on bias.
  - Default 0.
65. Parameter Introduction
Objectives:
- "reg:linear": linear regression, the default option.
- "binary:logistic": logistic regression for binary classification; outputs a probability.
- "multi:softmax": multiclass classification using the softmax objective; requires num_class to be specified.
- User-specified objectives.
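A minimal multiclass sketch on R's built-in iris data (our own example; the labels must be integers starting from 0, and num_class must be given):
data(iris)
bst_multi = xgboost(data = as.matrix(iris[, 1:4]),
                    label = as.numeric(iris$Species) - 1,
                    nround = 5, objective = "multi:softmax", num_class = 3)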
67. Guide on Parameter Tuning
It is nearly impossible to give a set of universally optimal parameters, or a global algorithm that achieves it.
The key points of parameter tuning are
· Control overfitting
· Deal with imbalanced data
· Trust the cross validation
68. Guide on Parameter Tuning
The "Bias-Variance Tradeoff", or the "Accuracy-Simplicity Tradeoff" is the main idea for
controlling overfitting.
For the booster specific parameters, we can group them as
Controlling the model complexity
Robust to noise
·
max_depth, min_child_weight and gamma-
·
subsample, colsample_bytree-
69. Guide on Parameter Tuning
Sometimes the data is imbalanced among classes.
· If you only care about the ranking order:
  - Balance the positive and negative weights, by scale_pos_weight (see the sketch below)
  - Use "auc" as the evaluation metric
· If you care about predicting the right probability:
  - You cannot re-balance the dataset
  - Setting the parameter max_delta_step to a finite number (say 1) will help convergence
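For the ranking case, a common recipe (a sketch; x and y are assumed to be an imbalanced binary data set) is to set scale_pos_weight to the negative/positive ratio:

# Weight the positive class so both classes contribute comparably to the loss
ratio = sum(y == 0) / sum(y == 1)
bst = xgboost(data = x, label = y, nround = 10,
              objective = "binary:logistic", eval_metric = "auc",
              scale_pos_weight = ratio)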
70. Guide on Parameter Tuning
To select ideal parameters, use the result from xgb.cv.
· Trust the score on the test set
· Use early.stop.round to detect when performance on the test set keeps getting worse
· If overfitting is observed, reduce the stepsize eta and increase nround at the same time
72. Advanced Features
There are plenty of highlights in XGBoost:
· Customized objective and evaluation metric
· Prediction from cross validation
· Continue training on an existing model
· Calculate and plot variable importance
73. Customization
According to the algorithm, we can define our own loss function, as long as we can calculate its first and second order gradients.

Define $\text{grad} = \partial_{\hat{y}^{(t-1)}}\, l$ and $\text{hess} = \partial^2_{\hat{y}^{(t-1)}}\, l$. We can optimize the loss function if we can calculate these two values.
74. Customization
We can rewrite the logloss (as a loss to be minimized) for the $i$-th data point as

$$L = -\,y_i \log(p_i) - (1 - y_i)\log(1 - p_i)$$

Here $p_i$ is obtained by applying the logistic transformation to our prediction $\hat{y}_i$. The logloss then becomes

$$L = -\,y_i \log\frac{1}{1 + e^{-\hat{y}_i}} - (1 - y_i)\log\frac{e^{-\hat{y}_i}}{1 + e^{-\hat{y}_i}}$$
75. Customization
We can see that

$$\text{grad} = \frac{1}{1 + e^{-\hat{y}_i}} - y_i = p_i - y_i$$

$$\text{hess} = \frac{e^{-\hat{y}_i}}{(1 + e^{-\hat{y}_i})^2} = p_i\,(1 - p_i)$$

Next we translate them into code.
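As a quick numerical sanity check of these two formulas (our own sketch, with an arbitrary test point), the analytic gradient can be compared against a finite difference of the logloss:

# Logloss as a function of the raw prediction yhat (before the logistic transform)
logloss = function(yhat, y) {
    p = 1 / (1 + exp(-yhat))
    -(y * log(p) + (1 - y) * log(1 - p))
}
yhat = 0.7; y = 1; eps = 1e-5
p = 1 / (1 + exp(-yhat))
# The two numbers below should agree to several decimal places
c(analytic = p - y,
  numeric = (logloss(yhat + eps, y) - logloss(yhat - eps, y)) / (2 * eps))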
81. Customization
The complete code:
logregobj = function(preds, dtrain) {
    # Extract the true label from the second argument
    labels = getinfo(dtrain, "label")
    # Apply the logistic transformation to the output
    preds = 1/(1 + exp(-preds))
    # Calculate the 1st order gradient
    grad = preds - labels
    # Calculate the 2nd order gradient
    hess = preds * (1 - preds)
    # Return the result
    return(list(grad = grad, hess = hess))
}
85. Customization
We can also customize the evaluation metric.
evalerror = function(preds, dtrain) {
    # Extract the true label from the second argument
    labels = getinfo(dtrain, "label")
    # Calculate the error
    err = as.numeric(sum(labels != (preds > 0)))/length(labels)
    # Return the name of this metric and the value
    return(list(metric = "error", value = err))
}
86. Customization
To utilize the customized objective and evaluation metric, we simply pass them as arguments:
param = list(max.depth=2,eta=1,nthread = 2, silent=1,
objective=logregobj, eval_metric=evalerror)
bst = xgboost(params = param, data = train$data, label = train$label, nround = 2)
## [0] train-error:0.0465223399355136
## [1] train-error:0.0222631659757408
87. Prediction in Cross Validation
"Stacking" is an ensemble learning technique which takes the prediction from several models. It
is widely used in many scenarios.
One of the main concern is avoid overfitting. The common way is use the prediction value from
cross validation.
XGBoost provides a convenient argument to calculate the prediction during the cross validation.
88. Prediction in Cross Validation
res = xgb.cv(params = param, data = train$data, label = train$label, nround = 2,
nfold=5, prediction = TRUE)
## [0] train-error:0.046522+0.001347 test-error:0.046522+0.005387
## [1] train-error:0.022263+0.000637 test-error:0.022263+0.002545
str(res)
## List of 2
## $ dt :Classes 'data.table' and 'data.frame': 2 obs. of 4 variables:
## ..$ train.error.mean: num [1:2] 0.0465 0.0223
## ..$ train.error.std : num [1:2] 0.001347 0.000637
## ..$ test.error.mean : num [1:2] 0.0465 0.0223
## ..$ test.error.std : num [1:2] 0.00539 0.00254
## ..- attr(*, ".internal.selfref")=<externalptr>
## $ pred: num [1:6513] 2.58 -1.07 -1.03 2.59 -3.03 ...
89. xgb.DMatrix
XGBoost has its own class for input data: xgb.DMatrix. It is the data structure used by the XGBoost algorithm: XGBoost preprocesses the input data and label into an xgb.DMatrix object before feeding them to the training algorithm.
If one needs to repeat the training process on the same big data set, it is good to reuse the xgb.DMatrix object to save preprocessing time.
One can convert the usual data set into it by
dtrain = xgb.DMatrix(data = train$data, label = train$label)
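The object can also be saved to disk and loaded back, which avoids repeating the preprocessing across sessions (a sketch using the package's xgb.DMatrix.save helper):

# Save the preprocessed data to a binary buffer file...
xgb.DMatrix.save(dtrain, "dtrain.buffer")
# ...and reload it later without redoing the preprocessing
dtrain2 = xgb.DMatrix("dtrain.buffer")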
91. Continue Training
Training the model for 5000 rounds at once is sometimes useful, but we are also taking the risk of overfitting.
A better strategy is to train the model with fewer rounds and repeat that many times. This enables us to observe the outcome after each step.
96. Continue Training
# Train with only one round
bst = xgboost(params = param, data = dtrain, nround = 1)
## [0] train-error:0.00706279748195916
# margin means the baseline of the prediction
ptrain = predict(bst, dtrain, outputmargin = TRUE)
# Set the margin information to the xgb.DMatrix object
setinfo(dtrain, "base_margin", ptrain)
## [1] TRUE
# Train based on the previous result
bst = xgboost(params = param, data = dtrain, nround = 1)
## [0] train-error:0.00122831260555811
97. Importance and Tree plotting
The result of XGBoost contains many trees. We can count the number of appearances of each variable in all the trees, and use this number as the importance score.
bst = xgboost(data = train$data, label = train$label, max.depth = 2, verbose = FALSE,
              eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
xgb.importance(train$data@Dimnames[[2]], model = bst)
## Feature Gain Cover Frequence
## 1: 28 0.67615484 0.4978746 0.4
## 2: 55 0.17135352 0.1920543 0.2
## 3: 59 0.12317241 0.1638750 0.2
## 4: 108 0.02931922 0.1461960 0.2
98. Importance and Tree plotting
We can also plot the trees in the model by xgb.plot.tree.
xgb.plot.tree(agaricus.train$data@Dimnames[[2]], model = bst)
100. Early Stopping
When doing cross validation, it is common to encounter overfitting at an early stage of iteration. Sometimes the predictions get consistently worse from, say, round 300 while the total number of iterations is 1000. To stop the cross validation process early, one can use the early.stop.round argument in xgb.cv.
bst = xgb.cv(params = param, data = train$data, label = train$label,
nround = 20, nfold = 5,
maximize = FALSE, early.stop.round = 3)
103. Kaggle Winning Solution
To get a higher rank, one needs to push the limits of
1. Feature Engineering
2. Parameter Tuning
3. Model Ensemble
The winning solution in the recent Otto Competition is an excellent example.
104. Kaggle Winning Solution
They used a 3-layer ensemble learning model, including
· 33 models on top of the original data
· XGBoost, neural network and adaboost on the 33 predictions from those models plus 8 engineered features
· A weighted average of the 3 predictions from the second step
105. Kaggle Winning Solution
The data for this competition is special: the meanings of the features are hidden.
For feature engineering, they generated 8 new features:
· Distances to the nearest neighbours of each class
· Sum of distances to the 2 nearest neighbours of each class
· Sum of distances to the 4 nearest neighbours of each class
· Distances to the nearest neighbours of each class in TF-IDF space
· Distances to the nearest neighbours of each class in t-SNE space (3 dimensions)
· Clustering features of the original dataset
· Number of non-zero elements in each row
· X (that feature was used only in the NN 2nd-level training)
106. Kaggle Winning Solution
This means a lot of work. It also implies that they needed to try many other models, although some of them turned out not to be helpful in this competition. Their attempts include:
· A lot of training algorithms at the first level, such as
  - Vowpal Wabbit (many configurations)
  - R glm, glmnet, scikit SVC, SVR, Ridge, SGD, etc...
· Some preprocessing like PCA, ICA and FFT
· Feature selection
· Semi-supervised learning
107. Influencers in Social Networks
Let's learn to use a single XGBoost model to achieve a high rank in an old competition!
The competition we chose is the Influencers in Social Networks competition.
It was a hackathon in 2013, so the data set is small enough that we can train the model in seconds.
108. Influencers in Social Networks
First let's download the data and load it into R:
train = read.csv('train.csv',header = TRUE)
test = read.csv('test.csv',header = TRUE)
y = train[,1]
train = as.matrix(train[,-1])
test = as.matrix(test)
111. Influencers in Social Networks
The data contains information about two users in a social network service. Our mission is to determine which of the two is more influential.
This type of data gives us some room for feature engineering.
112. Influencers in Social Networks
The first trick is to increase the information in the data.
Every data point can be expressed as <y, A, B>, which implicitly contains <1-y, B, A> as well. We can simply extract this extra information from the training set.
new.train = cbind(train[,12:22],train[,1:11])
train = rbind(train,new.train)
y = c(y,1-y)
113. Influencers in Social Networks
The following feature engineering steps are done on both the training and test sets, therefore we combine them together.
x = rbind(train,test)
114. Influencers in Social Networks
The next step could be calculating the ratios between the features of A and B separately:
· followers/following
· mentions received/sent
· retweets received/sent
· followers/posts
· retweets received/posts
· mentions received/posts
115. Influencers in Social Networks
Considering there might be zeroes, we need to smooth the ratio with a constant.
We can then calculate the ratios with this helper function:
calcRatio = function(dat, i, j, lambda = 1) (dat[,i] + lambda)/(dat[,j] + lambda)
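As an illustration of how the features combined on the next slide could be built (the column indices below are hypothetical; the real ones depend on the column order of the competition data):

# Hypothetical layout: column 1 = A's follower count, column 2 = A's following
# count; columns 12 and 13 are the same quantities for B
A.follow.ratio = calcRatio(x, 1, 2)
B.follow.ratio = calcRatio(x, 12, 13)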
117. Influencers in Social Networks
Combine the features into the data set.
x = cbind(x[,1:11],
A.follow.ratio,A.mention.ratio,A.retweet.ratio,
A.follow.post,A.mention.post,A.retweet.post,
x[,12:22],
B.follow.ratio,B.mention.ratio,B.retweet.ratio,
B.follow.post,B.mention.post,B.retweet.post)
118. Influencers in Social Networks
Then we can compare the difference between A and B. Because XGBoost is scale-invariant, subtraction and division are essentially the same here.
AB.diff = x[,1:17]-x[,18:34]
x = cbind(x,AB.diff)
train = x[1:nrow(train),]
test = x[-(1:nrow(train)),]
119. Influencers in Social Networks
Now we come to the modeling part. We investigate how far we can go with a single model.
Parameter tuning is very important at this stage. We can see the performance from cross validation.
120. Influencers in Social Networks
Here's xgb.cv with the default parameters.
set.seed(1024)
cv.res = xgb.cv(data = train, nfold = 3, label = y, nrounds = 100, verbose = FALSE,
objective='binary:logistic', eval_metric = 'auc')
121. Influencers in Social Networks
We can see the trend of AUC on training and test sets.
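The original slide shows the two curves as a figure; a minimal sketch to draw them from cv.res (using the column names visible on slide 125):

# Plot the mean train/test AUC per round from the cross validation result
cv = as.matrix(cv.res)
plot(cv[, "train.auc.mean"], type = "l", col = "blue", xlab = "round", ylab = "AUC",
     ylim = range(cv[, c("train.auc.mean", "test.auc.mean")]))
lines(cv[, "test.auc.mean"], col = "red")
legend("bottomright", c("train", "test"), col = c("blue", "red"), lty = 1)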
122. Influencers in Social Networks
It is obvious that our model severely overfits. The direct reason is simple: the default value of eta is 0.3, which is too large for this task.
Recalling the parameter tuning guide, we need to decrease eta and increase nrounds based on the result of cross validation.
123. Influencers in Social Networks
After some trials, we get the following set of parameters:
set.seed(1024)
cv.res = xgb.cv(data = train, nfold = 3, label = y, nrounds = 3000,
objective='binary:logistic', eval_metric = 'auc',
eta = 0.005, gamma = 1,lambda = 3, nthread = 8,
max_depth = 4, min_child_weight = 1, verbose = F,
subsample = 0.8,colsample_bytree = 0.8)
124. Influencers in Social Networks
We can see the trend of AUC on training and test sets.
125. Influencers in Social Networks
Next we extract the best number of iterations: we calculate the mean test AUC minus its standard deviation, and choose the iteration with the largest value.
bestRound = which.max(as.matrix(cv.res)[,3]-as.matrix(cv.res)[,4])
bestRound
## [1] 2442
cv.res[bestRound,]
## train.auc.mean train.auc.std test.auc.mean test.auc.std
## 1: 0.934967 0.00125 0.876629 0.002073
126. Influencers in Social Networks
Then we train the model with the same set of parameters:
set.seed(1024)
bst = xgboost(data = train, label = y, nrounds = 3000,
objective='binary:logistic', eval_metric = 'auc',
eta = 0.005, gamma = 1,lambda = 3, nthread = 8,
max_depth = 4, min_child_weight = 1,
subsample = 0.8,colsample_bytree = 0.8)
preds = predict(bst,test,ntreelimit = bestRound)
127. Influencers in Social Networks
Finally we submit our solution. It wins us a top-10 place on the leaderboard!
result = data.frame(Id = 1:nrow(test),
Choice = preds)
write.csv(result,'submission.csv',quote=FALSE,row.names=FALSE)