While reading up to understand xgboost, I collected the questions I was curious about, though I still don't understand the detailed math.
What is XGBoost, the algorithm that is so popular on Kaggle these days?
Is it Boosting, one of the Ensemble techniques?
What is the difference between Bagging and Boosting, the two types of Ensemble?
Why is an Ensemble a low-bias, high-variance model?
What is the relationship between Bias and Variance?
What Boosting techniques are there?
What is the CART algorithm that XGBoost uses?
A brief presentation on the basics of Ensemble Methods, given as a 'Lightning Talk' during the 7th Cohort of General Assembly's Data Science Immersive Course.
Abstract: This PDSG workshop introduces basic concepts of ensemble methods in machine learning. Concepts covered are the Condorcet Jury Theorem, Weak Learners, Decision Stumps, Bagging, and Majority Voting.
Level: Fundamental
Requirements: No prior programming or statistics knowledge required.
Bagging, also known as Bootstrap aggregating, is an ensemble learning technique that helps improve the performance and accuracy of machine learning algorithms. It addresses the bias-variance trade-off by reducing the variance of a prediction model, which helps avoid overfitting. Bagging is used for both regression and classification models, and is especially common with decision tree algorithms.
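As a concrete illustration, here is a minimal bagging sketch in Python with scikit-learn (the dataset and hyperparameters are illustrative, not from the original text; scikit-learn's default base estimator for bagging is a decision tree):

```python
# A minimal bagging sketch with scikit-learn (toy data, illustrative settings).
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each of the 100 base learners (decision trees by default) is trained on a
# bootstrap sample; predictions are combined by voting, which reduces variance.
bag = BaggingClassifier(n_estimators=100, bootstrap=True, random_state=0)
bag.fit(X_train, y_train)
print("test accuracy:", bag.score(X_test, y_test))
```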
PR-231: A Simple Framework for Contrastive Learning of Visual Representations (Jinwon Lee)
This is a review of paper #231 from the TensorFlow Korea paper-reading group PR12.
This paper, A Simple Framework for Contrastive Learning of Visual Representations, comes from Google Brain. It has drawn extra attention recently because Geoffrey Hinton is the last author.
It belongs to the currently very hot area of self-supervised learning via contrastive learning, and proposes an unsupervised pre-training method that matches the performance of a ResNet-50 trained with supervised learning. Using data augmentation, a non-linear projection head, large batch sizes, longer training, and the NT-Xent loss, it shows that excellent representation learning is possible, and it also delivers strong results in semi-supervised learning and transfer learning. Please see the video for details.
Paper link: https://arxiv.org/abs/2002.05709
Video link: https://youtu.be/FWhM3juUM6s
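Since NT-Xent is the loss that drives SimCLR, here is a minimal NumPy sketch of it (my own illustration, not code from the paper; `tau` is the temperature, and rows i and i+N are assumed to hold the two augmented views of the same image):

```python
import numpy as np
from scipy.special import logsumexp

def nt_xent(z, tau=0.5):
    """NT-Xent over 2N embeddings, where rows i and i+N are a positive pair."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize
    two_n = z.shape[0]
    sim = z @ z.T / tau                                # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    pos = (np.arange(two_n) + two_n // 2) % two_n      # index of each positive
    # cross-entropy of the positive pair against all other 2N-1 candidates
    log_prob = sim[np.arange(two_n), pos] - logsumexp(sim, axis=1)
    return -log_prob.mean()

# toy usage: 4 images -> 8 augmented-view embeddings of dimension 128
rng = np.random.default_rng(0)
print(nt_xent(rng.normal(size=(8, 128))))
```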
Random Forest Classifier in Machine Learning | Palin Analytics
Random Forest is a supervised ensemble learning algorithm. Ensemble algorithms are those that combine more than one algorithm, of the same or different kinds, for classifying objects…
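For reference, a minimal Random Forest sketch with scikit-learn (the dataset and settings are illustrative, not from Palin Analytics' post):

```python
# A minimal Random Forest sketch (toy data, illustrative hyperparameters).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# Each tree sees a bootstrap sample and a random feature subset per split;
# the forest classifies by majority vote over its trees.
forest = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(forest, X, y, cv=5).mean())
```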
Our fall 12-Week Data Science bootcamp starts on Sept 21st, 2015. Apply now to get a spot!
If you are hiring Data Scientists, call us at (1)888-752-7585 or reach us at info@nycdatascience.com to share your openings and set up interviews with our excellent students.
---------------------------------------------------------------
Come join our meet-up and learn how easily you can use R for advanced machine learning. In this meet-up, we will demonstrate how to understand and use XGBoost for Kaggle competitions. Tong is in Canada and will join the session remotely through Google Hangouts.
---------------------------------------------------------------
Speaker Bio:
Tong is a data scientist at Supstat Inc and a master's student in Data Mining. He has been an active R programmer and developer for 5 years. He is the author of the XGBoost R package, one of the most popular and contest-winning tools on kaggle.com nowadays.
Pre-requisites (if any): R / Calculus
Preparation: A laptop with R installed. Windows users might need to have RTools installed as well.
Agenda:
Introduction to XGBoost
Real World Application
Model Specification
Parameter Introduction
Advanced Features
Kaggle Winning Solution
Event arrangement:
6:45pm Doors open. Come early to network, grab a beer and settle in.
7:00-9:00pm XGBoost Demo
Reference:
https://github.com/dmlc/xgboost
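For readers who want to experiment before the session, here is a minimal sketch with the xgboost Python package (the meetup itself uses the R package; the data and parameters below are illustrative):

```python
# Minimal XGBoost sketch in Python (toy data, arbitrary illustrative settings).
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)

params = {"objective": "binary:logistic", "max_depth": 4, "eta": 0.1}
# Boost 100 rounds, monitoring log-loss on the held-out set every 25 rounds.
booster = xgb.train(params, dtrain, num_boost_round=100,
                    evals=[(dtest, "test")], verbose_eval=25)
```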
9. Bagging, Boosting, and Stacking
An Ensemble combines multiple models, built with the same learning algorithm or with different ones, to create a new model.
10. Bagging
Randomly sample the training data into several bags, train a model on each bag,
then combine the individual results to produce the final prediction (a sketch of this scheme follows the definitions below).
n: total number of training examples
n′: number of examples in each bag, sampled from the full data
m: number of bags, i.e. one sampled data set per model to be trained
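A minimal NumPy sketch under the same n, n′, m notation (toy regression data; averaging is the regression analogue of majority voting):

```python
# Bagging by hand, following the slide's notation (toy data, illustrative only).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=500)

n = len(X)          # n : total number of training examples
n_prime = n         # n': examples per bag (sampled with replacement)
m = 25              # m : number of bags / models

models = []
for _ in range(m):
    idx = rng.integers(0, n, size=n_prime)      # bootstrap sample for one bag
    models.append(DecisionTreeRegressor().fit(X[idx], y[idx]))

# Final prediction: average the m models' outputs (vote for classification).
X_new = np.linspace(-3, 3, 5).reshape(-1, 1)
print(np.mean([t.predict(X_new) for t in models], axis=0))
```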
19. Boosting
Boosting algorithms and their characteristics:
AdaBoost: classifies by majority vote and assigns extra weight to misclassified examples
GBM: weights errors via the gradient of the loss function (illustrated in the sketch after this table)
XGBoost: improves on GBM's performance; uses system resources (CPU, memory) efficiently; performance validated on Kaggle (used by many top rankers); released in 2014
Light GBM: improves on XGBoost's performance while minimizing resource consumption; can train on data too large for XGBoost to handle; gains speed by approximating the split; released in 2016
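To make "weights errors via the gradient of the loss" concrete, this sketch implements a plain gradient-boosting loop with squared loss, where each new tree fits the negative gradient (the residuals). It is my own illustration, not any library's internals:

```python
# Gradient boosting by hand with squared loss: each tree fits the negative
# gradient of 1/2*(y - f)^2, which is simply the residual y - f(x).
# (Toy data; learning rate and tree depth are illustrative.)
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=500)

f = np.full_like(y, y.mean())   # start from a constant prediction
trees, lr = [], 0.1
for _ in range(100):
    residual = y - f                              # negative gradient of the loss
    tree = DecisionTreeRegressor(max_depth=3).fit(X, residual)
    f += lr * tree.predict(X)                     # take a small step
    trees.append(tree)

print("train MSE:", np.mean((y - f) ** 2))
```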
28. Light GBM
• A GBM framework based on decision-tree algorithms (fast, with high performance)
• Used for problems such as ranking and classification
Differences
• Grows trees leaf-wise (vertically), unlike other algorithms, which grow level-wise
• Grows the leaf with the maximum delta loss
• When growing the same number of leaves, leaf-wise growth can reduce the loss further
Why Light GBM is popular
• Trains on large volumes of data quickly and in parallel (low memory use; GPU support available)
• Higher prediction accuracy (the strength of leaf-wise trees → but sensitive to overfitting)
Speed
• 2-10x faster than XGBoost (with the same parameter settings)
Adoption
• Not many tools ship with Light GBM yet: XGBoost (2014), Light GBM (2016)
Usage
• Because leaf-wise trees are sensitive to overfitting, Light GBM is best suited to training on large datasets
• At least 10,000 rows or so
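A minimal Python sketch with the lightgbm package (illustrative data and parameters; num_leaves is the main knob for leaf-wise growth and the usual overfitting culprit on small datasets):

```python
# Minimal LightGBM sketch (toy data, illustrative settings).
import lightgbm as lgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

train_set = lgb.Dataset(X_train, label=y_train)
valid_set = lgb.Dataset(X_test, label=y_test, reference=train_set)

# num_leaves bounds leaf-wise tree growth; larger values fit more aggressively.
params = {"objective": "binary", "num_leaves": 31, "learning_rate": 0.1}
booster = lgb.train(params, train_set, num_boost_round=200,
                    valid_sets=[valid_set])
```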