Mining Transactional Data
        Ted Dunning - 2004
Outline
●   What are LLR tests?
    –   What value have they shown?
●   What are transactional values?
    –   How can we define LLR tests for them?
●   How can these methods be applied?
    –   Modeling architecture examples
●   How new is this?
Log-likelihood Ratio Tests
●   A theorem due to Chernoff showed that the
    generalized log-likelihood ratio is asymptotically
    χ² distributed in many useful cases
●   Most well-known statistical tests are either
    approximately or exactly LLR tests
    –   Includes the z-test, F-test, t-test, Pearson's χ²
●   Pearson's χ² is an approximation valid for large
    expected counts ... G² is the exact form for
    multinomial contingency tables
Mathematical Definition
●   Ratio of maximum likelihood under the null
    hypothesis to the unrestricted maximum
    likelihood
    $$\lambda = \frac{\max_{\theta \in \Omega_0} l(X \mid \theta)}
                     {\max_{\theta \in \Omega} l(X \mid \theta)}
      \qquad \text{d.o.f.} = \dim\Omega - \dim\Omega_0$$

●   −2 log λ is asymptotically χ² distributed
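As a concrete illustration of this recipe, the sketch below computes −2 log λ for the simplest case: a binomial observation tested against a fixed null hypothesis p = p₀. The function name and the example counts are invented for illustration; this is a minimal sketch, not part of the original deck.

```python
import math

def binomial_llr(k, n, p0):
    """-2 log(lambda) for H0: p = p0 against an unrestricted p,
    using a binomial likelihood; asymptotically chi^2 with 1 d.o.f."""
    p_hat = k / n                       # unrestricted maximum likelihood estimate
    def log_lik(p):
        # the log binomial coefficient cancels in the ratio, so it is omitted
        return k * math.log(p) + (n - k) * math.log(1 - p)
    return -2.0 * (log_lik(p0) - log_lik(p_hat))

# Example: 60 heads in 100 flips tested against p0 = 0.5
print(binomial_llr(60, 100, 0.5))       # ~4.03, versus the 5% chi^2(1) cutoff of 3.84
```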
Comparison of Two Observations
●   Two independent observations, X₁ and X₂, can be
    compared to determine whether they are drawn from
    the same distribution (null: a shared θ ∈ Ω;
    alternative: separate (θ₁, θ₂) ∈ Ω × Ω)

    $$\lambda = \frac{\max_{\theta \in \Omega} l(X_1 \mid \theta)\, l(X_2 \mid \theta)}
                     {\max_{\theta_1 \in \Omega,\ \theta_2 \in \Omega} l(X_1 \mid \theta_1)\, l(X_2 \mid \theta_2)}
      \qquad \text{d.o.f.} = \dim\Omega$$
History of LLR Tests for “Text”
●   Statistics of Surprise and Coincidence
●   Genomic QA tools
●   Luduan
●   HNC text-mining, preference mining
●   MusicMatch recommendation engine
How Useful is LLR?
●   A test in 1997 showed that a query construction
    system using LLR (Luduan) decreased the error
    rate of the best document routing system
    (Inquery) by approximately 5x at 10% recall and
    nearly 2x at 20% recall
●   Language and species ID programs showed
    similar improvements versus state of the art
●   Previously unsuspected structure around intron
    splice sites was discovered using LLR tests
TREC Document Routing Results
[Precision–recall plot, "Luduan vs Inquery": precision on the vertical axis, recall on the horizontal axis, with curves for Inquery, Luduan, and Convectis]
What are Transactional Variables?
●   A transactional sequence is a sequence of
    transactions.
●   Transactions are instances of a symbol and
    (optionally) a time and an amount:
                 Z = z 1 ... z N 
                 z i = i , t i , x i 
                  i ∈ , an alphabet of symbols
                 t i , x i ∈ℝ
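A minimal sketch of how such a sequence might be represented in code; the class and field names are illustrative and not part of the original definition.

```python
from dataclasses import dataclass
from typing import Optional, Sequence

@dataclass(frozen=True)
class Transaction:
    symbol: str                      # sigma_i, drawn from some alphabet Sigma
    time: Optional[float] = None     # t_i (optional)
    amount: Optional[float] = None   # x_i (optional)

# Z = <z_1 ... z_N> is then simply a sequence of Transaction records
TransactionSequence = Sequence[Transaction]
```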
Example - Text
●   A textual document is a transactional sequence
    without times or amounts

    $$Z = \langle \sigma_1 \ldots \sigma_N \rangle, \quad \sigma_i \in \Sigma$$
Example – Traffic Violation History
●   A history of traffic violations is a (hopefully
    empty) sequence of violation types and
    associated dates (times)

              Z = z 1 ... z N 
              z i = i , t i 
               i ∈{stop-sign , speeding , DUI ,...}
              t i ∈ℝ
Example – Speech Transcript
●   A conversation between a and b can be rendered
    as a sequence of transactions containing words spoken
    by either a or b at particular times:

    $$Z = \langle z_1 \ldots z_N \rangle, \quad
      z_i = \langle \sigma_i, t_i \rangle, \quad
      \sigma_i \in \{a, b\} \times \Sigma, \quad
      t_i \in \mathbb{R}$$
Example – Financial History
●   A credit card history can be viewed as a
    transactional sequence with merchant code, date
    (=time) and amount:

     Z = z 1 ... z N          9/03/03
                                9/04/03
                                          Cash Advance
                                          Groceries
                                                             $300
                                                               79
                                9/07/03   Fuel                 21
     z i =〈 i , t i , x i 〉    9/10/03   Groceries            42
                                9/23/03   Department Store    173
      i ∈                    10/03/03   Payment            -600
                               10/09/03   Hotel & Motel       104
     t i ∈ℝ                    10/17/03   Rental Cars         201
                               10/24/03   Lufthansa           838
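As a usage sketch, the statement above maps directly onto ⟨σ, t, x⟩ triples; the representation below (plain tuples, with date ordinals standing in for real-valued times) is illustrative only.

```python
from datetime import date

# Each statement line becomes one (symbol, time, amount) transaction;
# date.toordinal() stands in for a real-valued time axis.
history = [
    ("Cash Advance",     float(date(2003,  9,  3).toordinal()),  300.0),
    ("Groceries",        float(date(2003,  9,  4).toordinal()),   79.0),
    ("Fuel",             float(date(2003,  9,  7).toordinal()),   21.0),
    ("Groceries",        float(date(2003,  9, 10).toordinal()),   42.0),
    ("Department Store", float(date(2003,  9, 23).toordinal()),  173.0),
    ("Payment",          float(date(2003, 10,  3).toordinal()), -600.0),
    ("Hotel & Motel",    float(date(2003, 10,  9).toordinal()),  104.0),
    ("Rental Cars",      float(date(2003, 10, 17).toordinal()),  201.0),
    ("Lufthansa",        float(date(2003, 10, 24).toordinal()),  838.0),
]
```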
Proposed Evolution
[Diagram: the existing text path (Text → LLR tests → Luduan, etc.) and the proposed transactional path (Transactional Data → Data Augmentation → LLR tests → Transaction Mining → Augmented Data)]
LLR for Transaction Sequence
●   Assuming reasonable interactions between
    timing, symbol selection, and amount distribution,
    the LLR test can be decomposed
●   Two major terms remain, one for symbols and
    timing together, one for amounts

    $$\mathrm{LLR} = \mathrm{LLR}_{\text{symbols \& timing}} + \mathrm{LLR}_{\text{amounts}}$$
Anecdotal Observations
●   Symbol selection often looks multinomial, or
    (rarely) Markov
●   Timing is often nearly Poisson (but rate depends
    on which symbol)
●   Distribution of amount appears to depend on
    symbol, but generally not on inter-transaction
    timing. Mixed discrete/continuous distributions
    are common in financial settings
Transaction Sequence Distributions
●   Mixed Poisson distributions give desired
    symbol/timing behavior
●   Amount distribution depends on symbol
    $$p(Z) = \prod_{\sigma \in \Sigma}
             \frac{(\lambda_\sigma T)^{k_\sigma} e^{-\lambda_\sigma T}}{k_\sigma!}
             \prod_{i = 1 \ldots N} p(x_i \mid \sigma_i)$$

    $$p(Z) = \left[\, N! \prod_{\sigma \in \Sigma}
                   \frac{\pi_\sigma^{k_\sigma}}{k_\sigma!} \,\right]
             \left[\, \frac{(\lambda T)^N e^{-\lambda T}}{N!} \,\right]
             \prod_{i = 1 \ldots N} p(x_i \mid \sigma_i)$$

    $$\lambda = \sum_{\sigma \in \Sigma} \lambda_\sigma, \qquad
      \pi_\sigma = \lambda_\sigma / \lambda, \qquad
      \sum_{\sigma \in \Sigma} \pi_\sigma = 1$$
LLR for Multinomial
●   Easily expressed as entropy of contingency table



    $$\begin{bmatrix}
      k_{11} & k_{12} & \cdots & k_{1n} & k_{1*} \\
      k_{21} & k_{22} & \cdots & k_{2n} & k_{2*} \\
      \vdots & \vdots & \ddots & \vdots & \vdots \\
      k_{m1} & k_{m2} & \cdots & k_{mn} & k_{m*} \\
      k_{*1} & k_{*2} & \cdots & k_{*n} & k_{**}
      \end{bmatrix}$$

    $$-2 \log\lambda = 2N \left( \sum_{ij} \pi_{ij} \log \pi_{ij}
        - \sum_i \pi_{i*} \log \pi_{i*}
        - \sum_j \pi_{*j} \log \pi_{*j} \right)$$

    $$-\log\lambda = \sum_{ij} k_{ij} \log \frac{k_{ij}\, k_{**}}{k_{i*}\, k_{*j}}
                   = \sum_{ij} k_{ij} \log \frac{\pi_{ij}}{\pi_{i*}\, \pi_{*j}}
      \qquad \text{d.o.f.} = (m-1)(n-1)$$

    (with $N = k_{**}$ and $\pi_{ij} = k_{ij}/N$, $\pi_{i*} = k_{i*}/N$, $\pi_{*j} = k_{*j}/N$)
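A minimal sketch of the count form above, computing G² (that is, −2 log λ) directly from an m × n table of counts; the function name and the toy counts are illustrative.

```python
import math

def g2(table):
    """G^2 = -2 log(lambda) for an m x n contingency table of counts:
    2 * sum_ij k_ij * log(k_ij * k_** / (k_i* * k_*j)); zero cells add nothing.
    Asymptotically chi^2 with (m-1)(n-1) d.o.f."""
    row_sums = [sum(r) for r in table]
    col_sums = [sum(c) for c in zip(*table)]
    total = sum(row_sums)
    llr = 0.0
    for i, r in enumerate(table):
        for j, k in enumerate(r):
            if k > 0:
                llr += k * math.log(k * total / (row_sums[i] * col_sums[j]))
    return 2.0 * llr

# Toy example: counts of one word versus all other words in two corpora
print(g2([[110, 2442],
          [  9, 2560]]))
```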
LLR for Poisson Mixture
●   Easily expressed using timed contingency table



    $$\left[\begin{array}{cccc|c}
      k_{11} & k_{12} & \cdots & k_{1n} & t_1 \\
      k_{21} & k_{22} & \cdots & k_{2n} & t_2 \\
      \vdots & \vdots & \ddots & \vdots & \vdots \\
      k_{m1} & k_{m2} & \cdots & k_{mn} & t_m \\
      \hline
      k_{*1} & k_{*2} & \cdots & k_{*n} & t_*
      \end{array}\right]$$

    $$-\log\lambda = \sum_{ij} k_{ij} \log \frac{k_{ij}\, t_*}{t_i\, k_{*j}}
                   = \sum_{ij} k_{ij} \log \frac{\mu_{ij}}{\mu_{*j}}
      \qquad \text{d.o.f.} = (m-1)\, n$$

    (where $\mu_{ij} = k_{ij}/t_i$ and $\mu_{*j} = k_{*j}/t_*$ are the observed and pooled rates)
LLR for Normal Distribution
●   Assume X1 and X2 are normally distributed
●   Null hypothesis of identical mean and variance


                                                     
    $$p(x \mid \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}}\,
        e^{-\frac{(x-\mu)^2}{2\sigma^2}}
      \qquad
      \hat\mu = \frac{\sum_i x_i}{N}, \quad
      \hat\sigma^2 = \frac{\sum_i (x_i - \hat\mu)^2}{N}$$

    $$-2\log\lambda = 2\left( N_1 \log\frac{\hat\sigma}{\hat\sigma_1}
                            + N_2 \log\frac{\hat\sigma}{\hat\sigma_2} \right)
      \qquad \text{d.o.f.} = 2$$
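A minimal sketch of this two-sample normal test, using the biased MLE standard deviations as in the formula above; the sample values are invented.

```python
import math

def normal_llr(x1, x2):
    """-2 log(lambda) for H0: x1 and x2 share one normal mean and variance:
    2 * (N1 * log(s/s1) + N2 * log(s/s2)), asymptotically chi^2 with 2 d.o.f."""
    def mle_sd(xs):
        m = sum(xs) / len(xs)
        return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    s = mle_sd(list(x1) + list(x2))      # pooled MLE standard deviation
    s1, s2 = mle_sd(x1), mle_sd(x2)
    return 2.0 * (len(x1) * math.log(s / s1) + len(x2) * math.log(s / s2))

# Toy example: two small samples with clearly different means
print(normal_llr([1.1, 0.9, 1.3, 1.0, 0.8],
                 [2.0, 2.2, 1.8, 2.1, 2.4]))
```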
Calculations
●   Assume X1 and X2 are normally distributed
●   Null hypothesis of identical mean and variance

           p  x∣ ,=
                             1
                          2 
                                e
                                       − x−2
                                          2 2
                                             = i
                                             
                                                N
                                                      ∑ xi
                                                           = i
                                                           
                                                                      N
                                                                          ∑  x−2

           log p X 1∣ ,  log p X 1∣ , −log p X 1∣1,  1 −log p X 2∣2,  2 =

           −     ∑ [
               i=1. . N 1
                            log  2 log 
                                             x 1i −2
                                                2 2      ] [
                                                        − ∑ log  2 log 
                                                         i=1. . N
                                                              2
                                                                             x 2 i −2
                                                                                 2 2        ]
                 ∑ [                                       ] ∑[                                      ]
                                                          2                                      2
                                                x −                               x − 
                           log  2 log  1  1i 2 1           log  2 log  2 2i 2 2
               i=1. . N 1                          2 1  i=1. . N 2
                                                                                        2 2

          −2 log =2 N 1 log
                                    
                                     1
                                        N 2 log
                                                 
                                                 2   
           d.o.f.=2
Transactional Data in Context
    Real-world input often consists of one or more bags of
    transactional values combined with an assortment of
    conventional numerical or categorical values
    (e.g. 1.2, 34 years, male).

    Extracting information from the transactional data can be
    difficult and is often, therefore, not done.
Real World Target Variables
    [Diagram: scatter of instances labeled as red, showing (a) mislabeled
    instances and (b) secondary labels]
Luduan Modeling Methodology
●   Use LLR tests to find exemplars (query terms)
    from secondary label sets
●   Create positive and negative secondary label
    models for each class of transactional data
●   Cluster using output of all secondary label
    models and all conventional data
●   Test clusters for stability
●   Use distances to cluster centroids and/or secondary
    label models as derived input variables
Example #1 – Auto Insurance
●   Predict probability of attrition and loss for auto
    insurance customers
●   Transactional variables include
    –   Claim history
    –   Traffic violation history
    –   Geographical code of residence(s)
    –   Vehicles owned
●   Observed attrition and loss define past behavior
Derived Variables
●   Split training data according to observable classes
    –   These include attrition and loss > 0
●   Define LLR variables for each class/variable
    combination
●   These 2·m·v derived variables can be used for
    clustering (spectral, k-means, neural gas ...)
●   Proximities in LLR space to the cluster centroids are
    the new modeling variables
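One plausible reading of "an LLR variable per class/variable combination" is sketched below: a customer's symbol counts for one transactional variable are compared against the pooled counts for one observed class, reusing the g2() function sketched earlier. All names and the commented call are hypothetical.

```python
from collections import Counter

def llr_feature(customer_symbols, class_symbols):
    """Derived LLR variable: G^2 comparing one customer's symbol counts for a
    single transactional variable against the pooled counts for one class
    (a 2 x n contingency table); relies on g2() from the earlier sketch."""
    alphabet = sorted(set(customer_symbols) | set(class_symbols))
    cust, cls = Counter(customer_symbols), Counter(class_symbols)
    table = [[cust[s] for s in alphabet],
             [cls[s] for s in alphabet]]
    return g2(table)

# Hypothetical usage, one derived variable per (class, variable) pair:
# feature = llr_feature(customer_violations, pooled_attrition_violations)
```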
Results
●   Conventional NN modeling by a competent analyst
    was able to explain 2% of variance
    –   No significant difference on training/test data
●   Models built using Luduan-based cluster
    proximity variables were able to explain 70% of
    variance (KS approximately 0.4)
    –   No significant difference on training/test data
Example #2 – Fraud Detection
●   Predict the probability that an account will result
    in charge-off due to payment fraud
●   Transactional variables include
    –   Zip code
    –   Recent payments and charges
    –   Recent non-monetary transactions
●   Bad payments, charge-off, delinquency are
    observable behavioral outcomes
Derived Variables
●   Split training data according to observable classes
    (charge-off, NSF payment, delinquency)
●   Define LLR variables for each class/variable
    combination
●   These 2·m·v derived variables can be used
    directly as model variables
●   No results available for publication
Example #3 – E-commerce monitor
●   Detect malfunctions or changes in the behavior of an
    e-commerce system due to fraud or system failure
●   Transaction variables include (time, SKU,
    amount)
●   Desired output is alarm for operational staff
Derived Variables
●   Time warp derived as product of smoothed daily
    and weekly sales rates
●   Time warp updated monthly to account for
    seasonal variations
●   Warped time used in transactions
●   Warped time since last transaction ≈ LLR in
    single product/single price case
●   Full LLR allows testing for significant differences in
    a Champion/Challenger e-commerce optimizer
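A loose sketch of the warped-gap idea in the single-product case: the expected sales rate (the product of smoothed daily and weekly profiles) is integrated to give warped time, and an alarm fires when the warped gap since the last sale is improbably long for a unit-rate Poisson process. The profile, threshold, and all names are illustrative assumptions, not the deployed system.

```python
import math

def warped_gap(last_sale_time, now, rate_profile, step=1.0 / 60):
    """Integrate the expected sales rate (sales per hour) from the last sale
    to now; for a healthy system this warped gap is roughly Exponential(1)."""
    gap, t = 0.0, last_sale_time
    while t < now:
        dt = min(step, now - t)
        gap += rate_profile(t) * dt
        t += dt
    return gap

def alarm(gap, threshold=7.0):
    """exp(-gap) is the tail probability of the gap under a unit-rate Poisson
    assumption; a threshold of about 7 corresponds to p of about 1e-3."""
    return gap > threshold

def rate_profile(t_hours):
    """Illustrative hourly rate: a smoothed daily shape times a weekly shape."""
    hour, day = t_hours % 24, (t_hours // 24) % 7
    daily = 2.0 + 1.5 * math.sin(2 * math.pi * (hour - 8) / 24)  # peak mid-afternoon
    weekly = 0.7 if day >= 5 else 1.0                            # quieter weekends
    return daily * weekly

# No sales for 3.5 hours starting at hour 100 of the observation clock:
g = warped_gap(100.0, 103.5, rate_profile)
print(g, alarm(g))
```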
Transductive Derived Variables
●   All objective segmentations of data provide new
    LLR variables
●   The cross product of model outputs versus objective
    segmentations provides additional LLR variables
    for second-level model derivation
●   Comparable to Luduan query construction
    technique – TREC pooled evaluation technique
    provided cross product of relevance versus
    perceived relevance
Relationship To Risk Tables
●   Risk tables are estimates of the relative risk for each
    value of a single symbolic variable
    –   Useful with variables such as post-code of primary
        residence
    –   Ad hoc smoothing used to deal with small counts
●   Not usually applied to symbol sequences
●   Risk tables ignore time entirely
●   Risk tables require considerable analyst finesse
Relationship to Known Techniques
●   Clock-tick symbols
    –   Time-embedded symbols viewed as sequences of
        symbols along with “ticks” that occur at fixed time
        intervals
    –   Allows multinomial LLR as a poor man's mixed
        Poisson LLR
●   Not a well known technique, not used in
    production models
●   Difficulties in choosing time resolution and
    counting period
Conclusions
●   Theoretical properties of transaction variables are
    well defined
●   Similarities to known techniques indicate a low
    probability of gross failure
●   Similarity to Luduan techniques suggests high
    probability of superlative performance
●   Transactional LLR statistics define similarity
    metrics useful for clustering
