In our 3rd applied machine learning online course, we'll dive into different methods for data preparation, including handling missing values, dummification and rescaling.
Applied Data Science Course Part 1: Concepts & your first ML model (Dataiku)
In this first course of our Applied Data Science online course series, you'll learn about the mindset shift of going from small to big data, basic definitions and concepts, and an overview of the data science workflow.
Ethics in Data Science and Machine Learning (HJ van Veen)
Introduction to and overview of ethics in data science and machine learning, variations and examples of algorithmic bias, and a call to action for self-regulation. Given by Thierry Silbermann as part of the Sao Paulo Machine Learning Meetup, theme: "Ethics".
https://www.linkedin.com/in/thierrysilbermann
https://twitter.com/silbermannt
https://github.com/thierry-silbermann
Ethical hacking is the practice of legally exploiting security weaknesses in an individual's or organization's network to show how confidential or personal information could be stolen.
My presentation at The Richmond Data Science Community (Jan 2018). The slides are slightly different than what I had presented last year at The Data Intelligence Conference.
Data Science Training | Data Science For Beginners | Data Science With Python... (Simplilearn)
This Data Science presentation will help you understand what Data Science is, who a Data Scientist is, what a Data Scientist does, and how Python is used for Data Science. Data science is an interdisciplinary field of scientific methods, processes, algorithms, and systems for extracting knowledge or insights from data in various forms, structured or unstructured, similar to data mining. This tutorial will help you build your skills in analytical techniques using Python. You'll learn the essential concepts of Data Science with Python programming and understand how data acquisition, data preparation, data mining, model building and testing, and data visualization are done. It is ideal for beginners who aspire to become Data Scientists.
This Data Science presentation will cover the following topics:
1. What is Data Science?
2. Who is a Data Scientist?
3. What does a Data Scientist do?
This Data Science with Python course will establish your mastery of data science and analytics techniques using Python. With this Python for Data Science Course, you’ll learn the essential concepts of Python programming and become an expert in data analytics, machine learning, data visualization, web scraping and natural language processing. Python is a required skill for many data science positions, so jumpstart your career with this interactive, hands-on course.
Why learn Data Science?
Data Scientists are being deployed in all kinds of industries, creating a huge demand for skilled professionals. Data scientist is the pinnacle rank in an analytics organization. Glassdoor ranked data scientist first in its 25 Best Jobs for 2016, and good data scientists are scarce and in great demand. As a data scientist, you will be required to understand the business problem, design the analysis, collect and format the required data, apply algorithms or techniques using the correct tools, and finally make recommendations backed by data.
You can gain in-depth knowledge of Data Science by taking our Data Science with Python certification training course. With Simplilearn's Data Science certification training course, you will prepare for a career as a Data Scientist as you master all the concepts and techniques. Those who complete the course will be able to:
1. Gain an in-depth understanding of data science processes, data wrangling, data exploration, data visualization, hypothesis building, and testing, as well as the basics of statistics.
2. Install the required Python environment and other auxiliary tools and libraries.
3. Understand the essential concepts of Python programming such as data types, tuples, lists, dicts, basic operators, and functions.
4. Perform high-level mathematical computing using the NumPy package and its large library of mathematical functions.
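As a small illustration of the kind of NumPy computing mentioned above (a sketch with made-up numbers, not material from the course):

```python
import numpy as np

# Vectorized math on whole arrays instead of Python loops
temps_c = np.array([12.0, 18.5, 21.0, 9.5])
temps_f = temps_c * 9 / 5 + 32          # elementwise arithmetic

# Summary statistics from NumPy's mathematical library
mean_f = temps_f.mean()
print(temps_f)        # converted temperatures
print(round(mean_f, 2))
```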
Learn more at: https://www.simplilearn.com
Data Science is a wonderful technology that has applications in almost every field. Let's learn the basics of this domain on 16th March at (time).
Agenda
1. What is Data Science? How is it different from ML, DL, and AI?
2. Why is this skill in demand?
3. What are some popular applications of Data Science?
4. Popular tools and frameworks used in Data Science
Provides a brief overview of what machine learning is, how it works (theory), how to prepare data for a machine learning problem, an example case study, and additional resources.
If there is one crucial step in building ML models, it is data preparation: the process of transforming raw data into a state where machine learning algorithms can be run to uncover insights and make predictions. Data preparation involves analysis and depends on the nature of the problem and the particular algorithms. Because knowledge and experience are involved, it cannot be fully automated, which makes the role of the data scientist key to success.
ML is trendy, and Microsoft already has more than 10 services to support it. We will focus on tools like Azure ML Workbench and Python for data preparation, review some common tricks for approaching data, and experiment in Azure ML Studio.
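The three preparation steps mentioned across these talks (handling missing values, dummification, and rescaling) can be sketched in a few lines of pandas; the column names and values here are made up for illustration:

```python
import pandas as pd

# Toy dataset with a missing value, a categorical column, and a numeric column
df = pd.DataFrame({
    "age": [25, None, 40, 31],
    "city": ["Paris", "Nice", "Paris", "Lyon"],
})

# 1. Handle missing values: impute the median age
df["age"] = df["age"].fillna(df["age"].median())

# 2. Dummification: one-hot encode the categorical column
df = pd.get_dummies(df, columns=["city"])

# 3. Rescaling: min-max scale age into [0, 1]
df["age"] = (df["age"] - df["age"].min()) / (df["age"].max() - df["age"].min())

print(df)
```

In practice the choice of imputation strategy and scaler depends on the problem and the algorithm, which is exactly the point the talk makes about why this step resists full automation.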
Supervised and Unsupervised Learning In Machine Learning | Machine Learning T... (Simplilearn)
This presentation on "Supervised and Unsupervised Learning" will help you understand what machine learning is, what the types of machine learning are, what supervised machine learning is and its types, what unsupervised learning is and its types, and the differences between supervised and unsupervised machine learning. In supervised learning, the model learns from labeled data, whereas in unsupervised learning, the model trains itself on unlabeled data. Now, let us get started and understand supervised and unsupervised learning and how they differ from each other.
Below are the topics explained in this supervised and unsupervised learning in Machine Learning presentation-
1. What is Machine Learning
- Types of Machine Learning
- Supervised Learning
- Unsupervised Learning
2. Supervised Learning
- Types of Supervised Learning
3. Unsupervised Learning
- Types of Unsupervised Learning
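The labeled-versus-unlabeled distinction above can be made concrete with scikit-learn (a sketch with synthetic data, not from the presentation):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

X = np.array([[1.0], [1.2], [0.9], [8.0], [8.3], [7.9]])
y = np.array([0, 0, 0, 1, 1, 1])  # labels exist -> supervised setting

# Supervised: learn a mapping from features to the known labels
clf = LogisticRegression().fit(X, y)
pred = clf.predict([[1.1], [8.1]])

# Unsupervised: no labels; the model finds structure on its own
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(pred, km.labels_)
```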
About Simplilearn Machine Learning course:
A form of artificial intelligence, Machine Learning is revolutionizing the world of computing as well as all people’s digital interactions. Machine Learning powers such innovative automated technologies as recommendation engines, facial recognition, fraud protection and even self-driving cars. This Machine Learning course prepares engineers, data scientists and other professionals with the knowledge and hands-on skills required for certification and job competency in Machine Learning.
Why learn Machine Learning?
Machine Learning is taking over the world, and with that there is a growing need among companies for professionals who know the ins and outs of Machine Learning.
The Machine Learning market size is expected to grow from USD 1.03 Billion in 2016 to USD 8.81 Billion by 2022, at a Compound Annual Growth Rate (CAGR) of 44.1% during the forecast period.
By the end of this Machine Learning course, you will be able to:
1. Master the concepts and modeling of supervised, unsupervised, and reinforcement learning.
2. Gain practical mastery over principles, algorithms, and applications of Machine Learning through a hands-on approach which includes working on 28 projects and one capstone project.
3. Acquire a thorough knowledge of the mathematical and heuristic aspects of Machine Learning.
4. Understand the concepts and operation of support vector machines, kernel SVM, naive Bayes, decision tree classifier, random forest classifier, logistic regression, K-nearest neighbors, K-means clustering and more.
5. Be able to model a wide variety of robust Machine Learning algorithms, including deep learning, clustering, and recommendation systems.
Learn more at: https://www.simplilearn.com/
Presentation at Data ScienceTech Institute campuses, Paris and Nice, May 2016, including Intro, Data Science History and Terms; 10 Real-World Data Science Lessons; Data Science Now: Polls & Trends; Data Science Roles; Data Science Job Trends; and Data Science Future
Artificial Intelligence Introduction & Business Use Cases (Vikas Jain)
Vikas Jain is a leading keynote speaker on artificial intelligence.
Develop an AI-solution mindset to help business leaders and professionals from IT and non-IT industries use it to solve complex problems and grow their business.
Online fraud costs the global economy more than $400 billion, with more than 800 million personal records stolen in 2013 alone. Increasingly, fraud has diversified across digital channels, including mobile and online payments, creating new challenges as innovative fraud patterns emerge. Finding effective methods to mitigate fraud therefore remains a challenge. Existing solutions include simple if-then rules and classical machine learning algorithms.
From an academic perspective, credit card fraud detection is a standard classification problem, in which historical transaction data is used to predict future frauds. However, practical aspects make the problem more complex. Indeed, existing evaluation measures lack a realistic representation of monetary gains and losses, which is necessary for effective fraud detection. Moreover, there is an enormous number of transactions of which only a tiny fraction are frauds, which implies a huge class imbalance. Additionally, a real fraud detection system is required to respond within milliseconds; this criterion needs to be taken into account in the modeling process for the system to be successfully deployed. To address these problems, this presentation compares two recently proposed algorithms: Bayes minimum risk and the example-dependent cost-sensitive decision tree. These methods are compared with state-of-the-art algorithms and show significant improvements as measured by financial savings.
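The idea of measuring financial savings rather than plain accuracy can be sketched as follows. This is an illustrative simplification, not the talk's exact formulation: here a missed fraud costs the transaction amount and any flagged transaction incurs a made-up fixed investigation fee `C_a`:

```python
C_a = 3.0  # hypothetical administrative cost of investigating one flagged transaction

def total_cost(y_true, y_pred, amounts):
    """A missed fraud costs the transaction amount;
    any flagged transaction costs the investigation fee C_a."""
    cost = 0.0
    for truth, pred, amt in zip(y_true, y_pred, amounts):
        if pred == 1:
            cost += C_a      # investigated (true or false positive)
        elif truth == 1:
            cost += amt      # missed fraud: lose the full amount
    return cost

def savings(y_true, y_pred, amounts):
    """Financial savings relative to never flagging anything."""
    base = total_cost(y_true, [0] * len(y_true), amounts)
    return 1 - total_cost(y_true, y_pred, amounts) / base

y_true  = [0, 1, 0, 1]
amounts = [20.0, 500.0, 15.0, 80.0]
print(savings(y_true, [0, 1, 0, 0], amounts))  # model that catches the big fraud
```

Because costs are example-dependent (each fraud has its own amount), a model that catches a few large frauds can save more money than one with higher accuracy that misses them, which is why savings is the metric the presentation compares on.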
Feature Engineering in Machine Learning (Knoldus Inc.)
In this Knolx we are going to explore data preprocessing and feature engineering techniques. We will also cover what feature engineering is, its importance in machine learning, and how it can help in getting the best results from the algorithms.
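A minimal example of the kind of feature engineering such talks cover (the column and dates are hypothetical): a raw timestamp is opaque to most algorithms, but features derived from it expose usable structure.

```python
import pandas as pd

# Hypothetical raw column: a timestamp by itself is opaque to most models
df = pd.DataFrame({"order_time": pd.to_datetime([
    "2024-01-06 09:15", "2024-01-07 22:40", "2024-01-08 13:05"])})

# Engineered features expose structure the model can actually use
df["hour"] = df["order_time"].dt.hour
df["day_of_week"] = df["order_time"].dt.dayofweek      # Monday = 0
df["is_weekend"] = (df["day_of_week"] >= 5).astype(int)

print(df[["hour", "day_of_week", "is_weekend"]])
```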
In this presentation, two different datasets were collected in order to apply the machine learning classification techniques introduced in the Introduction to Data Mining and Machine Learning coursework. Both datasets were chosen by analyzing their outputs and the team members' interests. The datasets are the simulated Electrical Grid Stability dataset and the Olivetti dataset for face recognition.
Webinar - The Science of Segmentation: What Questions You Should be Asking Yo... (VMware Tanzu)
Enterprise companies starting the transformation into a data-driven organization often wonder where to start. Companies have traditionally collected large amounts of data from sources such as operational systems. With the rise of big data, big data technologies and the Internet of Things (IoT), additional sources – such as sensor readings and social media posts – are rapidly becoming available. In order to effectively utilize both traditional sources and new ones, companies first need to join and view the data in a holistic context. After establishing a data lake to bring all data sources together in a single analytics environment, one of the first data science projects worth exploring is segmentation, which automatically identifies patterns.
In this DSC webinar, two Pivotal data scientists will discuss:
· What segmentation is
· Traditional approaches to segmentation
· How big data technologies are enabling advances in this field
They will also share some stories from past data science engagements, outline best practices and discuss the kinds of insights that can be derived from a big data approach to segmentation using both internal and external data sources.
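The core of a segmentation project like the one described above is unsupervised clustering on customer features. A minimal sketch, assuming made-up features of annual spend and visits per month:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical customer features: [annual spend, visits per month]
customers = np.array([
    [200,  1], [250,  2], [220,  1],     # low-spend, infrequent
    [5000, 12], [5200, 15], [4800, 11],  # high-spend, frequent
])

# Scale first so both features contribute, then let k-means find segments
X = StandardScaler().fit_transform(customers)
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(segments)
```

Scaling matters here: without it, the spend column would dominate the distance computation and the visit frequency would barely influence the segments.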
Panelists:
Grace Gee, Data Scientist -- Pivotal
Jarrod Vawdrey, Data Scientist -- Pivotal
Hosted by:
Tim Matteson, Co-Founder -- Data Science Central
To learn more about data at Pivotal, visit http://www.pivotal.io/big-data
To view video, visit https://www.youtube.com/watch?v=svKLdMWusGA
Data as a Profit Driver – Emerging Techniques to Monetize Data as a Strategic... (DATAVERSITY)
The Digital Economy is changing the way organizations do business across the globe, and is set to transform the economy on an unprecedented scale. Business optimization and entirely new business models are emerging as data-driven technology provides unprecedented opportunities for innovation and change. In many organizations, data not only supports business profitability; data itself has become the critical business asset.
What does it mean to leverage data as a business asset? And how can today’s data-centric technologies support the data-driven revolution? Join our expert panelists as they discuss the latest innovations in the data landscape.
Big Data Developer Career Path: Job & Interview Preparation (Intellipaat)
YouTube link: https://www.youtube.com/watch?v=iggl879a0s8
Intellipaat Big Data Hadoop Training: https://intellipaat.com/big-data-hadoop-training/
Read complete Big Data Hadoop tutorial here: https://intellipaat.com/blog/tutorial/hadoop-tutorial/
Building New Data Ecosystem for Customer Analytics, Strata + Hadoop World, 2016 (Caserta)
Caserta Concepts Founder and President, Joe Caserta, gave this presentation at Strata + Hadoop World 2016 in New York, NY. His session covers path-to-purchase analytics using a data lake and Spark.
For more information, visit http://casertaconcepts.com/
We'll start by considering how Nonprofits use data - toward what ends. Then we'll talk about how "data-people" do the things they do, i.e. what are the basic processes and tools common to any data project? Finally, we'll touch on some relevant technology trends, and consider one of today's hottest job titles: "Data Scientist"
Visuals deliver better and quicker insights when forecasting sales. At a glance, business strategies can be planned: choose time periods and geographic locations, pick variables that highlight what works or doesn't and where it scores or doesn't, join two or more variables that work in specific geographic locations, and so on. All of this put together makes data visualization a very nifty tool for projecting what can make or break your sales predictions!
Testing a Big Data application is more a verification of its data processing than testing of its individual features. It demands a high level of testing skill because the processing is very fast.
Leap to Next Generation Data Management with Denodo 7.0 (Denodo)
Watch Mike's keynote presentation from Fast Data Strategy Virtual Summit here: https://goo.gl/cC3bCq
Mike Ferguson is an independent analyst and the Managing Director for Intelligent Business Strategies. In this session, he will be discussing how Denodo Platform 7.0 enables and redefines data management for the next generation.
Attend this session to discover:
• Perspectives from independent industry analyst Mike Ferguson
• Why data virtualization is gaining momentum
• How Denodo Platform 7.0 enables next generation data management
Data Architecture Strategies: Building an Enterprise Data Strategy – Where to... (DATAVERSITY)
The majority of successful organizations in today’s economy are data-driven, and innovative companies are looking at new ways to leverage data and information for strategic advantage. While the opportunities are vast, and the value has clearly been shown across a number of industries in using data to strategic advantage, the choices in technology can be overwhelming. From Big Data to Artificial Intelligence to Data Lakes and Warehouses, the industry is continually evolving to provide new and exciting technological solutions.
This webinar will help make sense of the various data architectures and technologies available, and how to leverage them for business value and success. A practical framework will be provided to generate "quick wins" for your organization while building toward a longer-term sustainable architecture. Case studies will also show how organizations have successfully built data strategies to support their business goals.
Building a New Platform for Customer Analytics (Caserta)
Caserta Concepts and Databricks partner up to bring you this insightful webinar on how a business can choose from all of the emerging big data technologies to figure out which one best fits their needs.
In this advanced business analysis training session, you will learn Data Analytics and Business Intelligence. Topics covered in this session are:
• What is Business Intelligence?
• Data / information / knowledge
• What is Data Analytics?
• What is Business Analytics?
• What is Big Data?
• Types of Data
• Types of Analytics
For more information, click here: https://www.mindsmapped.com/courses/business-analysis/advanced-business-analyst-training/
Using data mining techniques, Loginworks offers web data mining solutions. It is one of the leading data mining companies delivering data mining services.
https://www.loginworks.com/data-mining/
Applied Data Science Course Part 2: the data science workflow and basic model... (Dataiku)
In the second part of our applied machine learning online course, you'll get an overview of the different steps in the data science workflow as well as a deep dive in 3 basic types of models: linear, tree-based and clustering.
The Rise of the DataOps - Dataiku - J On the Beach 2016 (Dataiku)
Many organisations are creating groups dedicated to data. These groups have many names: Data Team, Data Lab, Analytics Team...
But whatever the name, the success of those teams depends a lot on the quality of the data infrastructure and their ability to actually deploy data science applications in production.
In that regard, a new role of "DataOps" is emerging. Similar to DevOps for (web) development, the DataOps role is a merge between a data engineer and a platform administrator. Well versed in cluster administration and optimisation, a DataOps would also have a perspective on data quality and the relevance of predictive models.
Do you want to be a DataOps? We'll discuss the role and its challenges during this talk.
How to Build a Successful Data Team - Florian Douetteau (@Dataiku) (Dataiku)
As you walk into your office on Monday morning, before you've even had a chance to grab a cup of coffee, your CEO asks to see you. He's worried: both customer churn and fraudulent transactions have increased over the past 6 months. As Data Manager, you have 6 months to solve this problem.
As Data Manager, you know the challenges ahead:
- Multitudes of technology choices to make
- Building a team and solving the skill-set disconnect
- Data can be deceiving...
- Figuring out what the successful data product must be
Florian has worked in the "data" field since '01, back when it was not yet big. He has worked at successful startups in the search engine, advertising, and gaming industries, holding various data and CTO roles. He started Dataiku in 2013, his first venture as CEO, with the goal of alleviating the daily pains encountered by data teams all around.
The 3 Key Barriers Keeping Companies from Deploying Data Products Dataiku
Getting from raw data to deploying data-driven solutions requires technology, data, and people. All of which exist. So why aren’t we seeing more truly data-driven companies: what's missing and why? During Strata Hadoop World Singapore 2015, Pauline Brown, Director of Marketing at Dataiku, explains how lack of collaboration is what is keeping companies from building and deploying data products effectively. Learn more about Dataiku and Data Science Studio: www.dataiku.com
Before Kaggle : from a business goal to a Machine Learning problem Dataiku
Many think that a data science project is like a Kaggle competition. There are, however, big differences in approach. This presentation is about carefully designing your evaluation scheme to avoid overfitting and unexpected performance in production.
This is a presentation by Pierre Gutierrez (Dataiku’s data scientist).
Find the full joint presentation by Dataiku and Coyote on data valorization ("Valorisation des données").
This presentation was given at the Symposium of June 4, 2015, organized by the Club Urba-EA and the Club Pilotes de Processus.
More information at www.dataiku.com
Dataiku productive application to production - pap is may 2015 Dataiku
Beyond Predictive Analytics: Deploying apps to production and keeping them improving
Some smart companies have been putting predictive applications in production for decades. Still, either because of a lack of sharing or a lack of generality, there is still no single, obvious way to put a predictive application in production today.
As a consequence, for most companies, transitioning analytics from development to production is still “the next frontier”.
Behind the single word “production” lies a great number of questions, like: What exactly do you put in production: data, model, code, or all three? Who is responsible for maintenance and quality checks over time: business, tech, or both? How can I make my predictive app continuously improve, and check that it delivers the promised business value over time? What are the best practices for maintenance and updates, by the way? Will my data scientists keep working after the first development, or should I lay half of them off? Etc.
Let's make a small analogy with the development of web sites in the '90s and early '00s:
Back then, the winners were not necessarily the web sites with an amazing design, but a winner had clearly made the necessary efforts and had a robust way to put their web site reliably in production.
Today, every web developer can enjoy the comfort of Heroku, Amazon, GitHub, Docker, Angular, Bootstrap… and so we forget. How much time before we get the same comfort for the predictive world?
Dataiku - Big data paris 2015 - A Hybrid Platform, a Hybrid Team Dataiku
Between traditional Business Intelligence and "Big Data" approaches, many companies need to innovate and work in a hybrid manner. How and with what tools can business and technical profiles collaborate productively? Florian Douetteau, Dataiku's CEO, answers these questions.
Dataiku at SF DataMining Meetup - Kaggle Yandex ChallengeDataiku
This is a presentation made on 13 August 2014 at the SF Data Mining Meetup at Trulia. It's about Dataiku and the Kaggle Personalized Web Search Ranking challenge sponsored by Yandex.
Dataiku big data paris - the rise of the hadoop ecosystemDataiku
Snapshot of the Hadoop ecosystem at the beginning of 2014, with the rise of real-time and in-memory distributed processing frameworks that complement and supplant the MapReduce paradigm.
BreizhJUG - Janvier 2014 - Big Data - Dataiku - Pages JaunesDataiku
Presentation made in Rennes in January for the handsome BreizhJUG. This is a mixed presentation on big data technologies, covering topics such as: Why Hadoop? What next? Machine learning for big data in practice.
Techniques to optimize the pagerank algorithm usually fall into two categories. One tries to reduce the work per iteration, and the other tries to reduce the number of iterations. These goals are often at odds with one another. Skipping computation on vertices which have already converged has the potential to save iteration time. Skipping in-identical vertices, i.e., vertices with the same in-links, helps avoid duplicate computations and thus could also reduce iteration time. Road networks often have chains which can be short-circuited before pagerank computation to improve performance; the final ranks of chain nodes can be calculated easily afterwards. This could reduce both the iteration time and the number of iterations. If a graph has no dangling nodes, the pagerank of each strongly connected component can be computed in topological order. This could help reduce the iteration time and the number of iterations, and also enable multi-iteration concurrency in pagerank computation. The combination of all of the above methods is the STICD algorithm [sticd]. For dynamic graphs, unchanged components whose ranks are unaffected can be skipped altogether.
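As a concrete illustration, the "skip converged vertices" idea above can be sketched as follows. This is a simplified, hypothetical implementation (not the STICD code): the names are illustrative, dangling nodes are assumed absent, and the skip is the usual heuristic, since a "converged" vertex's rank can in principle still drift if its in-neighbours change.

```cpp
#include <cmath>
#include <vector>

// Power-iteration pagerank that skips vertices whose rank has already
// converged, saving per-iteration work (illustrative sketch only).
// Assumes no dangling nodes (every vertex has outDegree >= 1).
std::vector<double> pagerankSkipConverged(
    const std::vector<std::vector<int>>& inEdges,  // in-neighbours per vertex
    const std::vector<int>& outDegree,
    double damping = 0.85, double tol = 1e-10, int maxIter = 100) {
  int n = (int)inEdges.size();
  std::vector<double> rank(n, 1.0 / n), next(n);
  std::vector<bool> converged(n, false);
  for (int it = 0; it < maxIter; ++it) {
    double change = 0;
    for (int v = 0; v < n; ++v) {
      if (converged[v]) { next[v] = rank[v]; continue; }  // work saved here
      double sum = 0;
      for (int u : inEdges[v]) sum += rank[u] / outDegree[u];
      next[v] = (1 - damping) / n + damping * sum;
      double dv = std::fabs(next[v] - rank[v]);
      if (dv < tol) converged[v] = true;  // heuristic: rank may still drift
      change += dv;
    }
    rank.swap(next);
    if (change < tol * n) break;
  }
  return rank;
}
```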
Adjusting primitives for graph : SHORT REPORT / NOTESSubhajit Sahu
Graph algorithms like PageRank commonly operate on Compressed Sparse Row (CSR), an adjacency-list based graph representation.
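A minimal sketch of the CSR layout mentioned above, assuming a directed graph given as an edge list (the names are illustrative, not from the report): the graph is stored as two flat arrays, an offsets array of size n+1 and a packed edges array, so the out-neighbours of vertex v occupy edges[offsets[v] .. offsets[v+1]).

```cpp
#include <utility>
#include <vector>

// Compressed Sparse Row graph: offsets (size n+1) plus a flat edges array.
struct CsrGraph {
  std::vector<int> offsets, edges;
  int order() const { return (int)offsets.size() - 1; }
};

// Build CSR from an edge list via a counting pass, prefix sums, and a scatter.
CsrGraph buildCsr(int n, const std::vector<std::pair<int, int>>& edgeList) {
  CsrGraph g;
  g.offsets.assign(n + 1, 0);
  for (auto [u, v] : edgeList) ++g.offsets[u + 1];                // out-degrees
  for (int i = 0; i < n; ++i) g.offsets[i + 1] += g.offsets[i];   // prefix sums
  g.edges.resize(edgeList.size());
  std::vector<int> pos(g.offsets.begin(), g.offsets.end() - 1);
  for (auto [u, v] : edgeList) g.edges[pos[u]++] = v;             // scatter
  return g;
}
```

Compared to a vector-of-vectors adjacency list, this layout keeps each vertex's neighbours contiguous in one allocation, which is why it is the usual choice for pagerank-style traversals.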
Multiply with different modes (map)
1. Performance of sequential execution based vs OpenMP based vector multiply.
2. Comparing various launch configs for CUDA based vector multiply.
Sum with different storage types (reduce)
1. Performance of vector element sum using float vs bfloat16 as the storage type.
Sum with different modes (reduce)
1. Performance of sequential execution based vs OpenMP based vector element sum.
2. Performance of memcpy vs in-place based CUDA based vector element sum.
3. Comparing various launch configs for CUDA based vector element sum (memcpy).
4. Comparing various launch configs for CUDA based vector element sum (in-place).
Sum with in-place strategies of CUDA mode (reduce)
1. Comparing various launch configs for CUDA based vector element sum (in-place).
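The sequential vs OpenMP vector element sum compared in these experiments can be sketched roughly as below. This is an illustrative version, not the benchmarked code; compiled without -fopenmp the pragma is ignored, so the very same loop serves as the sequential baseline.

```cpp
#include <vector>

// Vector element sum. With -fopenmp the loop is parallelized and each thread's
// partial sum is combined by the reduction clause; without it, the pragma is
// ignored and the loop runs sequentially.
double vectorSum(const std::vector<double>& x) {
  double sum = 0;
  #pragma omp parallel for reduction(+:sum)
  for (long i = 0; i < (long)x.size(); ++i) sum += x[i];
  return sum;
}
```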
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...John Andrews
SlideShare Description for "Chatty Kathy - UNC Bootcamp Final Project Presentation"
Title: Chatty Kathy: Enhancing Physical Activity Among Older Adults
Description:
Discover how Chatty Kathy, an innovative project developed at the UNC Bootcamp, aims to tackle the challenge of low physical activity among older adults. Our AI-driven solution uses peer interaction to boost and sustain exercise levels, significantly improving health outcomes. This presentation covers our problem statement, the rationale behind Chatty Kathy, synthetic data and persona creation, model performance metrics, a visual demonstration of the project, and potential future developments. Join us for an insightful Q&A session to explore the potential of this groundbreaking project.
Project Team: Jay Requarth, Jana Avery, John Andrews, Dr. Dick Davis II, Nee Buntoum, Nam Yeongjin & Mat Nicholas
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
Round table discussion of vector databases, unstructured data, ai, big data, real-time, robots and Milvus.
A lively discussion with the NJ Gen AI Meetup lead, Prasad, and Procure.FYI's co-founder.
Quantitative Data AnalysisReliability Analysis (Cronbach Alpha) Common Method...2023240532
Quantitative Data Analysis
Overview
Reliability Analysis (Cronbach Alpha)
Common Method Bias (Harman Single Factor Test)
Frequency Analysis (Demographic)
Descriptive Analysis
Data Centers - Striving Within A Narrow Range - Research Report - MCG - May 2...pchutichetpong
M Capital Group (“MCG”) expects demand to grow and supply to evolve, facilitated by institutional investment rotating out of offices and into work from home (“WFH”), while the need for data storage keeps expanding as global internet usage grows, with experts predicting 5.3 billion users by 2023. These market factors will be underpinned by technological changes, such as progressing cloud services and edge sites, allowing the industry to expect strong annual growth of 13% over the next 4 years.
Whilst competitive headwinds remain, represented through the recent second bankruptcy filing of Sungard, which blames “COVID-19 and other macroeconomic trends including delayed customer spending decisions, insourcing and reductions in IT spending, energy inflation and reduction in demand for certain services”, the industry has seen key adjustments, where MCG believes that engineering cost management and technological innovation will be paramount to success.
MCG reports that the more favorable market conditions expected over the next few years, helped by the winding down of pandemic restrictions and a hybrid working environment will be driving market momentum forward. The continuous injection of capital by alternative investment firms, as well as the growing infrastructural investment from cloud service providers and social media companies, whose revenues are expected to grow over 3.6x larger by value in 2026, will likely help propel center provision and innovation. These factors paint a promising picture for the industry players that offset rising input costs and adapt to new technologies.
According to M Capital Group: “Specifically, the long-term cost-saving opportunities available from the rise of remote managing will likely aid value growth for the industry. Through margin optimization and further availability of capital for reinvestment, strong players will maintain their competitive foothold, while weaker players exit the market to balance supply and demand.”
The Building Blocks of QuestDB, a Time Series Databasejavier ramirez
Talk Delivered at Valencia Codes Meetup 2024-06.
Traditionally, databases have treated timestamps just as another data type. However, when performing real-time analytics, timestamps should be first class citizens and we need rich time semantics to get the most out of our data. We also need to deal with ever growing datasets while keeping performant, which is as fun as it sounds.
It is no wonder time-series databases are now more popular than ever before. Join me in this session to learn about the internal architecture and building blocks of QuestDB, an open source time-series database designed for speed. We will also review some of the changes we have made over the past two years to deal with late and unordered data, non-blocking writes, read replicas, and faster batch ingestion.
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
Discussion on Vector Databases, Unstructured Data and AI
https://www.meetup.com/unstructured-data-meetup-new-york/
This meetup is for people working in unstructured data. Speakers will present related topics such as vector databases, LLMs, and managing data at scale. The intended audience includes roles like machine learning engineers, data scientists, data engineers, software engineers, and PMs. This meetup was formerly the Milvus Meetup, and is sponsored by Zilliz, maintainers of Milvus.
Adjusting OpenMP PageRank : SHORT REPORT / NOTESSubhajit Sahu
For massive graphs that fit in RAM, but not in GPU memory, it is possible to take advantage of a shared memory system with multiple CPUs, each with multiple cores, to accelerate pagerank computation. If the NUMA architecture of the system is properly taken into account with good vertex partitioning, the speedup can be significant. To take steps in this direction, experiments are conducted to implement pagerank in OpenMP using two different approaches, uniform and hybrid. The uniform approach runs all primitives required for pagerank in OpenMP mode (with multiple threads). On the other hand, the hybrid approach runs certain primitives (i.e., sumAt, multiply) in sequential mode.