Presentation of a real use case at the Taj law firm (Deloitte Paris): applying machine learning to accounting data to help clients prepare for their tax audit.
Open Source Tools & Data Science Competitions odsc
This talk shares the presenter's experience with open source tools in data science competitions. In the past several years, Kaggle and other competitions have created a large online community of data scientists. In addition to competing with each other for fame and glory, members of this community also generously share knowledge and insights through forums and open source code. This open competition and sharing have led to rapid progress in the sophistication of the entire community. The presentation briefly covers this journey from a competitor's perspective and shares hands-on tips on open source tools that have proven popular and useful in recent competitions.
Our fall 12-week Data Science bootcamp starts on Sept 21st, 2015. Apply now to get a spot!
If you are hiring Data Scientists, call us at (1)888-752-7585 or reach info@nycdatascience.com to share your openings and set up interviews with our excellent students.
---------------------------------------------------------------
Come join our meetup and learn how easily you can use R for advanced machine learning. In this meetup, we will demonstrate how to understand and use XGBoost for Kaggle competitions. Tong is in Canada and will join us remotely via Google Hangouts.
---------------------------------------------------------------
Speaker Bio:
Tong is a data scientist at Supstat Inc. and a master's student in data mining. He has been an active R programmer and developer for five years. He is the author of the XGBoost R package, one of the most popular and contest-winning tools on kaggle.com today.
Prerequisites (if any): R, calculus
Preparation: A laptop with R installed. Windows users might need to have RTools installed as well.
Agenda:
Introduction to XGBoost
Real World Application
Model Specification
Parameter Introduction
Advanced Features
Kaggle Winning Solution
Event arrangement:
6:45pm Doors open. Come early to network, grab a beer and settle in.
7:00-9:00pm XGBoost Demo
Reference:
https://github.com/dmlc/xgboost
Winning Kaggle 101: Introduction to Stacking - Ted Xiao
An Introduction to Stacking by Erin LeDell, from H2O.ai
Presented as part of the "Winning Kaggle 101" event, hosted by Machine Learning at Berkeley and Data Science Society at Berkeley. Special thanks to the Berkeley Institute of Data Science for the venue!
H2O.ai: http://www.h2o.ai/
ML@B: ml.berkeley.edu
DSSB: http://dssberkeley.org
BIDS: http://bids.berkeley.edu/
Feature Engineering in H2O Driverless AI - Dmitry Larko - H2O AI World London... - Sri Ambati
This talk was recorded in London on October 30th, 2018 and can be viewed here: https://youtu.be/d6UMEmeXB6o
In his talk, Dmitry covers common feature engineering techniques used to build robust machine learning models, as well as some less widely known or used approaches.
Bio: Dmitry Larko is a Senior Data Scientist at H2O.ai and a Kaggle Grandmaster (formerly ranked #25) who loves to use his machine learning and data science skills in Kaggle competitions and predictive analytics software development. He has more than 15 years of experience in information technology. After earning his master's in computer information systems from Krasnoyarsk State Technical University (KSTU), he started his career in data warehousing and business intelligence and gradually moved to big data and data science. He has extensive experience in predictive analytics across a wide array of domains and tasks. Prior to H2O.ai, Dmitry held positions as a SAP BW Developer at Chevron, a Data Scientist at EPAM, and a Lead Software Engineer in the Russian Federation.
These slides are from a presentation on deep learning given to the AI subcommittee of the Business Data Study Group of the Japan Users Association of Information Systems (JUAS). The material is intended for a general audience, not for specialists.
This is a revised version of the material presented at JUAS in December 2015.
Winning data science competitions, presented by Owen Zhang - Vivian S. Zhang
Featured meetup event hosted by NYC Open Data Meetup and NYC Data Science Academy. Speaker: Owen Zhang. Event info: http://www.meetup.com/NYC-Open-Data/events/219370251/
Feature Engineering - Getting most out of data for predictive models - Gabriel Moreira
How should data be preprocessed for use in machine learning algorithms? How do you identify the most predictive attributes of a dataset? Which features can be generated to improve the accuracy of a model?
Feature Engineering is the process of extracting and selecting, from raw data, features that can be used effectively in predictive models. As the quality of the features greatly influences the quality of the results, knowing the main techniques and pitfalls will help you to succeed in the use of machine learning in your projects.
In this talk, we will present methods and techniques that allow us to extract the maximum potential of the features of a dataset, increasing the flexibility, simplicity, and accuracy of the models. We will cover the analysis of feature distributions and their correlations, and the transformation of numeric attributes (scaling, normalization, log-based transformations, binning), categorical attributes (one-hot encoding, feature hashing), temporal attributes (date/time), and free-text attributes (text vectorization, topic modeling).
Python, scikit-learn, and Spark SQL examples will be presented, along with how to use domain knowledge and intuition to select and generate features relevant to predictive models.
Slides explaining the distinction between bagging and boosting in light of the bias-variance trade-off, followed by some lesser-known aspects of supervised learning: the effect of the tree split metric on feature importance, the effect of the decision threshold on classification accuracy, and how to adjust the model threshold for classification.
Note: the limitations of the accuracy metric (baseline accuracy), alternative metrics, their use cases, and their advantages and limitations are briefly discussed.
A decision tree is a type of supervised learning algorithm (with a pre-defined target variable) that is mostly used for classification problems. It is a tree in which each branch node represents a choice between a number of alternatives, and each leaf node represents a decision.
Top contenders in the 2015 KDD Cup include the team from DataRobot comprising Owen Zhang, the #1-ranked Kaggler, and top Kagglers Xavier Conort and Sergey Yurgenson. Get an in-depth look as Xavier describes their approach. DataRobot allowed the team to focus on feature engineering by automating model training, hyperparameter tuning, and model blending, giving the team a firm advantage.
Overview of tree algorithms from decision tree to XGBoost - Takami Sato
To deepen my own understanding, I surveyed popular tree algorithms in machine learning and their evolution. This is the first presentation I have written in English, so I would be happy to receive feedback.
Random Forest Tutorial | Random Forest in R | Machine Learning | Data Science... - Edureka!
This Edureka Random Forest tutorial will help you understand the basics of the Random Forest machine learning algorithm. The tutorial is ideal both for beginners and for professionals who want to learn or brush up on their data science concepts, with random forest analysis explained through examples. The topics covered in this tutorial are:
1) Introduction to Classification
2) Why Random Forest?
3) What is Random Forest?
4) Random Forest Use Cases
5) How Does Random Forest Work?
6) Demo in R: Diabetes Prevention Use Case
You can also take a complete structured training, check out the details here: https://goo.gl/AfxwBc
Python is the language of choice for data analysis.
The aim of these slides is to provide a comprehensive learning path for people new to Python for data analysis, covering the steps you need to learn to use Python for data analysis.
Big Data & Machine Learning - TDC2013 Sao Paulo - OCTO Technology
Big Data and Machine Learning: usage and opportunities for your IT department
Talk presented at The Developer Conference in São Paulo, 12/07/13
Mathieu DESPRIEE
Building a performing Machine Learning model from A to Z - Charles Vestur
A 1-hour read to become highly knowledgeable about Machine learning and the machinery underneath, from scratch!
A presentation introducing to all fundamental concepts of Machine Learning step by step, following a classical approach to build a performing model. Simple examples and illustrations are used all along the presentation to make the concepts easier to grasp.
Feature Importance Analysis with XGBoost in Tax Audit
1. Preparation of a tax audit with Machine Learning
"Feature Importance" analysis applied to accounting using the XGBoost R package
Meetup Paris Machine Learning Applications Group – Paris – May 13th, 2015
2. Who am I?
Michaël Benesty
@pommedeterre33 @pommedeterresautee fr.linkedin.com/in/mbenesty
• CPA (Paris): 4 years
• Financial auditor (NYC): 2 years
• Tax law associate @ Taj (Deloitte - Paris) since 2013
• Department TMC (Computerized tax audit)
• Co-author of the XGBoost R package with Tianqi Chen (main author) & Tong He (package maintainer)
3. WARNING
Everything that will be presented tonight is exclusively based on open source software.
Please try the same at home.
4. Plan
1. Accounting & tax audit context
2. Machine learning application
3. Gradient boosting theory
5. Accounting crash course 101 (1/2)
Accounting is a way to transcribe economic operations.
• My company buys €10 worth of potatoes to cook delicious French fries.

Account number | Account name | Debit | Credit
601            | Purchase     | 10.00 |
512            | Bank         |       | 10.00
Description: Buy €10 of potatoes from XYZ
6. Accounting crash course 101 (2/2)
French tax law requires much more information in my accounting:
• Who?
• Name of the potatoes provider
• Account of the potatoes provider
• When?
• When the accounting entry is posted
• Date of the invoice from the potatoes seller
• Payment date
• …
• What?
• Invoice ref
• Item description
• …
• How Much?
• Foreign currency
• …
• …
7. Tax audit context
Since 2014, companies audited by the French tax administration shall provide their entire accounting as a CSV / XML file.
Simplified* example:
EcritureDate|CompteNum|CompteLib|PieceDate|EcritureLib|Debit|Credit
20110805|601|Purchase|20110701|Buy potatoes|10|0
20110805|512|Bank|20110701|Buy potatoes|0|10
*: usually there are 18 columns
8. Example of a trivial apparent anomaly
Article 39 of the French tax code states that (simplified):
"For FY 2011, an expense is deductible from the 2011 P&L when its operative event happens in 2011."
In our audit software (ACL), we add a new Boolean feature to the dataset: True if the invoice date is outside 2011, False otherwise (a sketch of this step follows below).
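A minimal sketch of this step in R (the deck's language), not the firm's actual ACL routine: it assumes a hypothetical file name, a pipe-delimited file with the simplified columns of slide 7, and flags entries whose PieceDate falls outside the fiscal year.

```r
# Sketch only: hypothetical file name; columns follow the simplified example of slide 7
# (a real FEC export usually has 18 columns). Dates are in YYYYMMDD format.
library(data.table)

fec <- fread("accounting_fy2011.csv", sep = "|", colClasses = "character")

fy_start <- as.Date("2011-01-01")   # adjust to the company's fiscal year
fy_end   <- as.Date("2011-12-31")

fec[, PieceDate := as.Date(PieceDate, format = "%Y%m%d")]
fec[, OutOfFY := PieceDate < fy_start | PieceDate > fy_end]  # TRUE = apparent anomaly

fec[OutOfFY == TRUE]   # entries to review
```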
9. Boring tasks to perform by a human
Find a pattern that predicts whether an accounting entry will be tagged as an anomaly, based on the way its fields are populated.
1. Take time to display the lines marked as out of FY
   demo dataset (1 500 000 lines) ≈ 100 000 lines marked as having an invoice out of FY
2. Take time to analyze the 18 columns of the accounting
   from 200 to >> 100 000 different values per column
3. Take time to find a pattern/rule by hand. Use filters. Iterate.
4. Take time to check that the pattern found in the selection is not present in the remaining data
10. What can Machine Learning do to help?
1. Look at whole dataset without human help
2. Analyze each value in each column without human help
3. Find a pattern without human help
4. Generate a (R-Markdown) report without human help
Requirements:
• Interpretable
• Scalable
• Works (almost) out of the box
11. 2 tries for a success
1st try: Subgroup mining (Failed)
Find feature values common to a group of observations that are different from the rest of the dataset.
2nd try: Feature importance on a decision-tree-based algorithm (Success)
Use a predictive algorithm to describe the existing data.
12. 1st try: Subgroup mining algorithm
Find feature values common to a group of observations that are different from the rest of the dataset.
1. Find an existing open source project
2. Check that it gives interpretable results in reasonable time
3. Help the project's main author with:
• reducing the memory footprint by 50% and fixing many small bugs (2 months)
• the R interface (1 month)
• finding and fixing a huge bug in the core algorithm just before going into production (1 week)
After the last bug fix, the algorithm was too slow to be used on real accounting…
13. 2nd try: XGBoost
Available in R, Python, Julia, and via the CLI
Fast and memory efficient
• Can be more than 10 times faster than GBM in scikit-learn and R (benchmark in the GitHub repository)
• New external-memory learning implementation (based on the distributed computation implementation)
Distributed and portable
• The distributed version runs on Hadoop (YARN), MPI, SGE, etc.
• Scales to billions of examples (tested on 4 billion observations / 20 machines)
XGBoost won many Kaggle competitions, for example:
• WWW2015 Microsoft Malware Classification Challenge (BIG 2015)
• Tradeshift Text Classification
• HEP meets ML Award in the Higgs Boson Challenge
• XGBoost is by far the most discussed tool in the ongoing Otto competition
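To make the workflow concrete before the importance slides, here is a minimal, hypothetical training sketch in R (not the code used at Taj). It reuses the fec table and OutOfFY flag from the sketch above, one-hot encodes a few illustrative categorical fields into a sparse matrix, and fits a small XGBoost model with the settings recommended on slide 23. Keeping PieceDate in the features deliberately reproduces the "label leakage" discussed on the next slide.

```r
# Sketch: feature names are illustrative; a real 18-column FEC would add JournalCode, etc.
library(xgboost)
library(Matrix)

feats <- data.frame(CompteNum    = factor(fec$CompteNum),
                    EcritureDate = factor(fec$EcritureDate),
                    PieceDate    = factor(format(fec$PieceDate, "%Y%m%d")))
X <- sparse.model.matrix(~ . - 1, data = feats)   # one-hot encoding, kept sparse
y <- as.numeric(fec$OutOfFY)

bst <- xgboost(data = X, label = y,
               objective = "binary:logistic",
               nrounds = 10, max.depth = 5, eta = 0.3,   # small model, per slide 23
               verbose = 0)
```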
14. Iterative feature importance with XGBoost (1/3)
Shows which features are the most important for predicting whether an entry has its PieceDate field (invoice date) out of the fiscal year.
In this example, the FY runs from 2010/12/01 to 2011/11/30.
It is not surprising to find PieceDate among the most important features, because the label is based on this feature! But the distribution of the important invoice dates is interesting here.
Most entries out of the FY have the same invoice date: 20111201
15. Iterative feature importance with XGBoost (2/3)
Since one feature represented > 99% of the gain in the previous slide, we remove it from the dataset and run a new analysis.
Most entries are related to the same JournalCode (nature of operation).
16. Iterative feature importance with XGBoost (3/3)
Entries marked as out of FY have the same invoice date, and are related to the same JournalCode. We run a new analysis without JournalCode:
Most of the entries with an invoice date issue are related to Inventory accounts!
That's the kind of pattern we were looking for.
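The importance table itself comes from xgb.importance in the xgboost R package; a short sketch, reusing the X and bst objects from the hypothetical training sketch above:

```r
library(xgboost)   # already loaded above; repeated so this snippet stands alone

imp <- xgb.importance(feature_names = colnames(X), model = bst)
head(imp)                          # columns: Feature, Gain, Cover, Frequency
xgb.plot.importance(head(imp, 10)) # bar chart of the top features by Gain
```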
17. XGBoost explained in 2 pics (1/2)
Classification And Regression Tree (CART)
Decision tree is about learning a set of rules:
if $X_1 \le t_1$ and $X_2 \le t_2$ then $R_1$
if $X_1 \le t_1$ and $X_2 > t_2$ then $R_2$
…
Advantages:
• Interpretable
• Robust
• Non linear link
Drawbacks:
• Weak Learner
• High variance
18. XGBoost explained in 2 pics (2/2)
Gradient boosting on CART
• One more tree = loss mean decreases = more data explained
• Each tree captures some parts of the model
• Original data points in tree 1 are replaced by the loss points for tree 2 and 3
19. Learning a model ≃ Minimizing the loss function
Given a prediction $\hat{y}$ and a label $y$, a loss function $\ell$ measures the discrepancy between the algorithm's prediction and the desired output.
• Loss on training data:
$$L = \sum_{i=1}^{n} \ell(\hat{y}_i, y_i)$$
• Logistic loss for binary classification:
$$\ell(\hat{y}_i, y_i) = -\frac{1}{n} \sum_{i=1}^{n} \left[ y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i) \right]$$
Logistic loss punishes a falsely certain prediction (close to 0 or 1) by infinity*.
*: $\lim_{x \to 0^+} \log x = -\infty$
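A tiny numeric illustration (not from the deck) of why a confidently wrong prediction is punished so heavily:

```r
# Logistic loss for a single prediction, with clipping to avoid log(0)
logloss <- function(y, y_hat, eps = 1e-15) {
  y_hat <- pmin(pmax(y_hat, eps), 1 - eps)
  -mean(y * log(y_hat) + (1 - y) * log(1 - y_hat))
}
logloss(y = 1, y_hat = 0.6)     # ~0.51: mildly wrong, mild penalty
logloss(y = 1, y_hat = 0.001)   # ~6.9: near-certain and wrong, huge penalty
```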
20. Growing a tree
In practice, we grow the tree greedily:
• Start from a tree with depth 0
• For each leaf node of the tree, try to add a split. The change of objective after adding the split is:
$$\text{Gain} = \underbrace{\frac{G_L^2}{H_L + \lambda}}_{\text{score of left child}} + \underbrace{\frac{G_R^2}{H_R + \lambda}}_{\text{score of right child}} - \underbrace{\frac{(G_L + G_R)^2}{H_L + H_R + \lambda}}_{\text{score if we don't split}} - \underbrace{\gamma}_{\substack{\text{complexity cost of the}\\ \text{additional leaf}}}$$
$G$ is the sum of residuals, which gives the general direction of the residual we want to fit.
$H$ corresponds to the sum of the weights of all the instances.
$\gamma$ and $\lambda$ are two regularization parameters.
Tianqi Chen (Oct. 2014), Learning about the model: Introduction to Boosted Trees
21. Gradient Boosting
Iteratively learning weak classifiers with respect to a distribution and adding them to a final strong classifier.
• Each round we learn a new tree to approximate the negative gradient and minimize the loss:
$$\hat{y}_i^{(t)} = \underbrace{\hat{y}_i^{(t-1)}}_{\text{whole model prediction}} + \underbrace{f_t(x_i)}_{\text{tree } t \text{ prediction}}$$
• Loss:
$$\mathrm{Obj}^{(t)} = \sum_{i=1}^{n} \ell\!\left(y_i,\; \hat{y}_i^{(t-1)} + f_t(x_i)\right) + \underbrace{\Omega(f_t)}_{\substack{\text{complexity cost of the}\\ \text{additional tree}}}$$
Friedman, J. H. (March 1999), Stochastic Gradient Boosting.
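For the curious reader (this step is not shown in the deck): the $G$ and $H$ of the gain formula on slide 20 come from a second-order Taylor expansion of this objective, as described in the Chen reference listed on slide 25. A sketch:
$$\mathrm{Obj}^{(t)} \simeq \sum_{i=1}^{n}\left[ g_i\, f_t(x_i) + \tfrac{1}{2}\, h_i\, f_t(x_i)^2 \right] + \Omega(f_t) + \text{const}, \qquad g_i = \partial_{\hat{y}^{(t-1)}}\, \ell\!\left(y_i, \hat{y}_i^{(t-1)}\right), \quad h_i = \partial^2_{\hat{y}^{(t-1)}}\, \ell\!\left(y_i, \hat{y}_i^{(t-1)}\right)$$
For a leaf $j$ containing the instance set $I_j$, $G_j = \sum_{i \in I_j} g_i$ and $H_j = \sum_{i \in I_j} h_i$, which is exactly what the split gain of slide 20 compares.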
22. Gradient descent
“Gradient Boosting is a special case of the functional gradient descent view of boosting.”
Mason, L.; Baxter, J.; Bartlett, P. L.; Frean, Marcus (May 1999). Boosting Algorithms as Gradient Descent in Function Space.
[Figure: 2D view of the loss surface during gradient descent. Sometimes you are lucky (global minimum); usually you finish in a local minimum.]
23. Building a good model for feature importance
For feature importance analysis, in the simplicity vs. accuracy trade-off, choose simplicity. A few empirical rules of thumb (see the sketch below):
• nrounds: number of trees. Keep it low (< 20 trees)
• max.depth: depth of each tree. Keep it low (< 7)
• Run the feature importance analysis iteratively and remove the most important features until the 3 most important features represent less than 70% of the whole gain.
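A hypothetical sketch of this iterative procedure in R, reusing X, y and the model settings from the earlier sketches; it is not the deck's production code, and in practice you would drop every one-hot column derived from the dominant feature rather than a single column.

```r
library(xgboost)   # repeated so this snippet stands alone

X_cur <- X
repeat {
  bst <- xgboost(data = X_cur, label = y, objective = "binary:logistic",
                 nrounds = 10, max.depth = 5, eta = 0.3, verbose = 0)
  imp <- xgb.importance(feature_names = colnames(X_cur), model = bst)
  print(head(imp, 3))
  if (sum(head(imp$Gain, 3)) < 0.70) break        # top-3 share of gain below 70%: stop
  X_cur <- X_cur[, colnames(X_cur) != imp$Feature[1], drop = FALSE]  # drop dominant feature, re-run
}
```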
24. Love XGBoost? Vote XGBoost!
Otto challenge
Help the XGBoost open source project spread knowledge by voting for our script explaining how to use the tool (there is no prize to win):
https://www.kaggle.com/users/32300/tianqi-chen/otto-group-product-classification-challenge/understanding-xgboost-model-on-otto-data
25. Too much time in your life?
• General papers about gradient boosting:
• Greedy Function Approximation: A Gradient Boosting Machine. J. H. Friedman
• Stochastic Gradient Boosting. J. H. Friedman
• Tricks used by XGBoost:
• Additive Logistic Regression: A Statistical View of Boosting. J. H. Friedman, T. Hastie, R. Tibshirani (for the second-order statistics for tree splitting)
• Learning Nonlinear Functions Using Regularized Greedy Forest. R. Johnson and T. Zhang (proposes a fully corrective step, as well as regularizing the tree complexity)
• Learning about the Model: Introduction to Boosted Trees. Tianqi Chen (from the author of XGBoost)