This slide deck builds an understanding of the intuition and the mathematics/statistics behind association rule mining. The presentation starts by highlighting the difference between causation and correlation. This is followed by the Apriori algorithm and the metrics used with it; each metric is discussed in detail. Finally, a formulation is developed in a classification setting that can be used to generate rules, i.e. rule mining.
Other Reference: https://www.slideshare.net/JustinCletus/mining-frequent-patterns-association-and-correlations
3. Association Rule Mining
Association rule mining is a procedure meant to find frequent patterns, correlations, associations, or causal structures in datasets found in various kinds of databases, such as relational databases, transactional databases, and other forms of data repositories.
Simply: when this, then also this.
4. Association Rule Mining
Used to identify -
● Frequent Patterns
● Correlations
● Associations
● Causal Structures
Where these are applied → movie recommendations, grocery item placements, product recommendations, etc.
5. Algorithm - Apriori - Metrics
The following three metrics are generally used:
Support: the percentage of transactions that contain all of the items in an itemset.
● The higher the support, the more frequently the itemset occurs.
● Rules with high support are preferred, since they are likely to apply to a large number of future transactions.
Confidence: the probability that a transaction containing the items on the left-hand side of the rule also contains the item on the right-hand side.
● The higher the confidence, the greater the likelihood that the item on the right-hand side will be purchased or, in other words, the greater the return rate we can expect for a given rule.
Lift: the probability of all of the items in a rule occurring together, divided by the product of the probabilities of the items on the left- and right-hand sides occurring as if there were no association between them.
● Overall, lift summarizes the strength of association between the products on the left- and right-hand sides of the rule; the larger the lift, the greater the link between the two products.
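To make the three metrics concrete, here is a minimal Python sketch (not from the deck; the toy transactions are invented for illustration) that computes support, confidence and lift for a single rule:

# Toy transactions (invented for illustration).
transactions = [
    {"milk", "bread"},
    {"milk", "bread", "butter"},
    {"bread", "butter"},
    {"milk", "butter"},
    {"milk", "bread", "eggs"},
]

def support(itemset, txns):
    # Fraction of transactions that contain every item in `itemset`.
    return sum(itemset <= t for t in txns) / len(txns)

X, Y = {"milk"}, {"bread"}           # rule: {milk} => {bread}
supp = support(X | Y, transactions)  # joint (relative) support
conf = supp / support(X, transactions)
lift = supp / (support(X, transactions) * support(Y, transactions))
print(f"support={supp:.2f}, confidence={conf:.2f}, lift={lift:.2f}")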
15. Algorithm
Step 1: Set a minimum support and confidence.
Step 2: Take all the itemsets in the transactions having support higher than the minimum support.
Step 3: Take all the rules over these itemsets having confidence higher than the minimum confidence.
Step 4: Generate the other rule assessment measures for the rules.
Step 5: Sort the rules using an appropriate filter.
Cons → a slow algorithm: it is a bottom-up approach that builds pairs from all available items and computes the related statistics. A code sketch of these steps follows.
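As a sketch of these steps in code, the mlxtend library can wire them together (the library choice is an assumption for illustration; the deck does not name an implementation, and the thresholds below are placeholders):

import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

transactions = [["milk", "bread"], ["milk", "bread", "butter"],
                ["bread", "butter"], ["milk", "butter"]]

# One-hot encode the transactions into a boolean DataFrame.
te = TransactionEncoder()
df = pd.DataFrame(te.fit(transactions).transform(transactions),
                  columns=te.columns_)

# Steps 1-2: frequent itemsets above the minimum support.
frequent = apriori(df, min_support=0.5, use_colnames=True)

# Step 3: rules above the minimum confidence.
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)

# Steps 4-5: the result also carries lift, leverage, conviction, etc.
print(rules.sort_values("lift", ascending=False))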
17. Other Rule Assessment Measures
● Added Value
● All-confidence
● Causal Confidence
● Causal Support
● Certainty Factor
● Chi-Squared
● Cross-Support Ratio
● Collective Strength
● Confidence
● Conviction
● Cosine
● Coverage
● Descriptive Confirmed Confidence
● Difference of Confidence
● Example & Counter-Example Rate
● Fisher's Exact Test
● Gini Index
● Hyper-Confidence
● Hyper-Lift
● Imbalance Ratio
● Improvement
● Jaccard Coefficient
● J-Measure
● Kappa
● Klosgen
● Kulczynski
● Goodman-Kruskal Lambda
● Laplace Corrected Confidence
● Least Contradiction
● Lerman Similarity
● Leverage
● Lift
● MaxConf
● Mutual Information
● Odds Ratio
● Phi Correlation Coefficient
● Ralambrodrainy Measure
● Relative Linkage Disequilibrium
● Relative Support
● Rule Power Factor
● Sebag-Schoenauer Measure
● Support
● Varying Rates Liaison
● Yule's Q
● Yule's Y
18. Support, Relative Support
Support:
● The support of a rule is the number of transactions that contain both X and Y.
● Used as a measure of the significance of a rule.
Symmetric Measure
Range: [0, INF)
Formula: supp(X ⇒ Y) = count(X, Y), the number of transactions t with X ∪ Y ⊆ t
Relative Support:
● Relative support is the fraction of transactions that contain both X and Y.
● ⇒ the empirical joint probability of the items comprising the rule.
● Used as a measure of the significance of a rule.
Symmetric Measure
Range: [0, 1]
Formula: rsupp(X ⇒ Y) = count(X, Y) / N = P(X, Y), where N is the total number of transactions
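A minimal sketch of the two variants, assuming transactions are represented as Python sets as in the earlier example:

def abs_support(itemset, txns):
    # Number of transactions containing the itemset -- range [0, INF).
    return sum(itemset <= t for t in txns)

def rel_support(itemset, txns):
    # Fraction of transactions containing the itemset -- range [0, 1].
    return abs_support(itemset, txns) / len(txns)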
20. Confidence (a.k.a. Strength)
Confidence:
● The confidence of a rule is the conditional probability that a transaction contains the consequent Y given that it contains the antecedent X.
● A problem with confidence is that it is sensitive to the frequency of the consequent Y in the database.
● Because of the way confidence is calculated, consequents with higher support automatically produce higher confidence values even if there is no association between the items.
Asymmetric Measure
Range: [0, 1]
Formula: conf(X ⇒ Y) = P(X, Y) / P(X) = P(Y | X)
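That sensitivity can be checked numerically: with hypothetical counts where Y sits in 90% of transactions and X is independent of it, confidence still reads 0.9 while lift correctly reads 1.0:

# 100 hypothetical transactions: Y in 90, X in 10, overlap 9 (independent).
n, n_x, n_y, n_xy = 100, 10, 90, 9
confidence = n_xy / n_x                      # 0.9, despite no association
lift = (n_xy / n) / ((n_x / n) * (n_y / n))  # 1.0, exposing independence
print(confidence, lift)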
22. Lift (a.k.a. Interest)
Lift:
● Lift is defined as the ratio of the observed joint probability of X and Y to the expected joint probability if they were statistically independent.
● Lift is susceptible to noise in small databases.
● Because of the way lift is calculated, rare itemsets with low counts (low probability) that by chance occur a few times (or only once) together will produce enormous lift values.
Symmetric Measure
Range: [0, INF) (1 means independence)
Formula: lift(X ⇒ Y) = P(X, Y) / (P(X) · P(Y)) = conf(X ⇒ Y) / P(Y)
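A worked example of that noise, with invented counts: a single chance co-occurrence in a small database already produces a lift of 1000:

# 1,000 hypothetical transactions; X and Y each occur once -- together.
n, n_x, n_y, n_xy = 1000, 1, 1, 1
lift = (n_xy / n) / ((n_x / n) * (n_y / n))
print(lift)  # 1000.0 -- an enormous lift from a single coincidence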
24. Coverage (a.k.a. Antecedent Support or LHS Support)
Coverage:
● Coverage is defined as the relative support of the antecedent X, i.e. the fraction of transactions that contain X.
● ⇒ the empirical probability of the itemset X.
● Used as a measure of the significance of a rule.
Asymmetric Measure
Range: [0, 1]
Formula: cover(X ⇒ Y) = rsupp(X) = P(X)
26. Certainty Factor (a.k.a. Loevinger)
Certainty Factor:
● A measure of the variation of the probability that Y is in a transaction when only transactions containing X are considered.
● An increasing CF means a decreasing probability that Y is not in a transaction that X is in. Negative CFs have a similar interpretation.
Asymmetric Measure
Range: [-1, 1] (0 means independence)
Formula: CF(X ⇒ Y) = (conf(X ⇒ Y) − P(Y)) / (1 − P(Y))
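A small sketch of this usual Loevinger form of the certainty factor (the values are illustrative):

def certainty_factor(conf_xy, supp_y):
    # CF = (conf - supp(Y)) / (1 - supp(Y)); 0 means independence,
    # positive values mean X makes Y more likely (negative branch analogous).
    return (conf_xy - supp_y) / (1 - supp_y)

print(certainty_factor(0.9, 0.6))  # 0.75 -> strong positive certainty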
27. Leverage
Leverage:
● Leverage measures the difference between the observed and the expected joint probability of X and Y, assuming X and Y are independent.
● Leverage gives an absolute measure of how surprising a rule is and should be used together with lift.
● Can be interpreted as the gap to independence.
Symmetric Measure
Range: [-1, 1] (0 means independence)
Formula: lev(X ⇒ Y) = P(X, Y) − P(X) · P(Y)
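A worked example of leverage as the gap to independence (numbers invented):

def leverage(p_xy, p_x, p_y):
    # Observed joint probability minus the value expected under independence.
    return p_xy - p_x * p_y

# X and Y co-occur in 30% of transactions, but only 20% (0.5 * 0.4)
# would be expected under independence: a 10-point "gap to independence".
print(leverage(0.30, 0.50, 0.40))  # 0.10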
28. Leverage
Rule A → E may be preferable over the first two because it is simpler and has higher leverage.
29. Jaccard Coefficient (a.k.a. Coherence)
Jaccard Coefficient:
● This coefficient measures the similarity between two sets: here, the set of transactions containing X and the set containing Y.
Symmetric Measure
Range: [0, 1]
Formula: J(X, Y) = P(X, Y) / (P(X) + P(Y) − P(X, Y))
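A sketch using hypothetical transaction-ID sets, which makes the set-similarity reading explicit:

# Hypothetical transaction-ID sets for X and Y.
tids_x = {1, 2, 3, 5}
tids_y = {2, 3, 4, 5}
jaccard = len(tids_x & tids_y) / len(tids_x | tids_y)
print(jaccard)  # 3 shared of 5 total -> 0.6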
32. Conviction
Conviction:
● Conviction measures the expected error of the rule, i.e. how often X occurs in a transaction where Y does not.
● It can thus be seen as a measure of the strength of the rule with respect to the complement of the consequent.
● If the joint probability of X and !Y is less than that expected under independence of X and !Y, conviction is high, and vice versa.
● An alternative to confidence, which was found not to capture the direction of associations adequately.
Asymmetric Measure
Range: [0, INF) (1 means independence; rules that always hold have INF)
Formula: conviction(X ⇒ Y) = (1 − P(Y)) / (1 − conf(X ⇒ Y))
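A sketch that also shows the boundary behaviour noted above (1 at independence; INF for a rule that never fails):

import math

def conviction(supp_y, conf_xy):
    # (1 - supp(Y)) / (1 - conf(X => Y)); INF when the rule never fails.
    if conf_xy == 1.0:
        return math.inf
    return (1 - supp_y) / (1 - conf_xy)

print(conviction(0.4, 0.4))  # 1.0 -> independence (conf equals supp(Y))
print(conviction(0.4, 1.0))  # inf -> a rule that always holds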
34. Odds Ratio
Odds Ratio:
● Defined as the odds of finding X in transactions which contain Y, divided by the odds of finding X in transactions which do not contain Y.
● Like lift, the odds ratio is susceptible to noise in small databases.
● Odds ratios greater than 1 imply higher odds of Y occurring in the presence of X as opposed to its complement !X, whereas odds ratios smaller than 1 imply higher odds of Y occurring with !X.
Symmetric Measure
Range: [0, INF) (1 means independence)
Formula: OR(X ⇒ Y) = (count(X, Y) · count(!X, !Y)) / (count(X, !Y) · count(!X, Y))
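From the 2×2 contingency table of X and Y this is a one-liner (counts invented for illustration):

def odds_ratio(n_xy, n_x_noty, n_notx_y, n_notx_noty):
    # (n_XY * n_!X!Y) / (n_X!Y * n_!XY), from the 2x2 contingency table.
    return (n_xy * n_notx_noty) / (n_x_noty * n_notx_y)

# Hypothetical counts: X&Y = 40, X&!Y = 10, !X&Y = 20, !X&!Y = 30.
print(odds_ratio(40, 10, 20, 30))  # 6.0 -> Y much more likely given X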
37. Filter Used
SELECT
  (CASE
     WHEN itemset present ONLY on the CRITICALCLASS side THEN FLOAT(CriticalClass_oddsRatio) - 0
     WHEN itemset present on BOTH sides THEN FLOAT(CriticalClass_oddsRatio) - FLOAT(Gen_oddsRatio)
     WHEN itemset present ONLY on the GENERAL side THEN 0 - FLOAT(Gen_oddsRatio)
   END) AS Diff_CriticalClassGen_OddsRatio,
  Diff_CriticalClassGen_Conviction,
  Diff_CriticalClassGen_Supp,
  Diff_CriticalClassGen_Certainty,
  *
FROM (
  # Subquery elided on the slide: handles INFINITY values and keeps
  # entries present ONLY on the GEN side OR ONLY on the CriticalClass side.
  SELECT ... FROM Table WHERE ...
)
ORDER BY
  # rule_rhs DESC,
  # rule_lhs DESC,
  Diff_CriticalClassGen_OddsRatio DESC,
  Diff_CriticalClassGen_Conviction DESC,
  Diff_CriticalClassGen_Supp DESC
  # Diff_CriticalClassGen_Certainty DESC
38. Mining Pattern
Step 1: Run the query.
Step 2: Be creative and, with some intuition, select an item.
Step 3: Modify the query so that it returns pairs with the selected item, and again, with some intuition, select an item.
Using the discovered pair to further grow the pattern:
Step a: Use the discovered pair as the LHS part and run the query on the table with an increased rule length.
Step b: Be creative and, with some intuition, select the next item.
Using the discovered pair for further analysis:
Step a: Use the existing pair to fetch the raw data and analyze it.
Step b: Use the existing pair to fetch the derived-parameter data and analyze it (also check for an existing critical-class signature + location).
Step c: If the discovered pair is indeed adequate and finds some critical class, use this signature:
- Test for false positives (FP)
- If adequate, use it for blocking
→ Rule developed
40. Issues and Fine-Tuning
● Issues due to data inconsistency in the streaming data
● Modifying the data preprocessing for the itemset
● Versioning of the derived parameters