This document summarizes a research paper that proposes IHGAT (Intention-aware Heterogeneous Graph Attention Networks), a method for detecting fraudulent transactions. IHGAT models user intentions and leverages transaction-level interactions by constructing a heterogeneous transaction-intention network: transactions and intentions are represented as graph nodes, and attention mechanisms aggregate information from their neighbors. Experiments on a real-world e-commerce dataset show that IHGAT significantly outperforms the baselines while providing good interpretability of its predictions.
CS598 Data Mining Capstone Paper Review Presentation - sbrama2.pdf
1. CS598 Data Mining Capstone, Summer 2022
Paper Review by Sathish Rama (sbrama2)
Paper :
Review of “Intention-aware Heterogeneous Graph Attention Networks for Fraud Transactions Detection”
From KDD ’21, August 14–18, 2021, Virtual Event, Singapore.
Link to Paper : https://dl.acm.org/doi/10.1145/3447548.3467142
2. Background & Motivation
o Fraud transactions – major threat to e-commerce platforms
o Increase in organized fraud
o Complex scenarios
Current/Related Methods
o Traditional methods are based on statistical features, which do not capture user behavior.
o Various deep learning models based on user behavioral data have been proposed; sequence-based and tree-based methods have been studied extensively.
o Despite remarkable success, existing methods treat each transaction as an independent data instance, without considering transaction-level interactions or the intent behind a set of transactions, and thus ignore rich information.
Motivation for the proposed method
Leveraging the rich interactions among transactions and behavior sequences for fraud detection.
3. Paper Method - IHGAT (Intention-aware Heterogeneous Graph Attention Networks)
o A transaction-intention network is devised using cross-interaction information over transactions and intentions.
o On top of this network, a graph neural network method is built, coined IHGAT (Intention-aware Heterogeneous Graph Attention Networks).
o The IHGAT model is used to detect whether a transaction is fraudulent.
o Experiments on the real-world Alibaba platform report results for both the offline and online model.
Problem
o Detect fraud transactions with the proposed method, labeling each transaction as 0 or 1, where 1 denotes a fraud transaction and 0 otherwise.
4. Concepts Used in the Proposed Method
o Behavior sequence: chronologically ordered behaviors. A behavior sequence of a user is shown in figure (a) below.
o Behavior tree: a tree-like data structure consisting of behavior nodes. A behavior node uniquely identifies a behavior by a name and an id. The figure below shows a behavior tree with intentions.
o User intentions: every branch in a behavior tree denotes a user intention. For example, in figure (b) below, four different user intentions are marked with different colors. The first, "Intention 1", is {Home, Search, Product List}, which corresponds to the leftmost branch of the behavior tree.
o Heterogeneous transaction-intention network (HTIN): an HTIN is denoted as G = {V, E}, where V and E are the nodes and edges, respectively. The node set V consists of transaction nodes and user-intention nodes. The edge set E contains two types of edges: transaction-transaction edges and transaction-intention edges.
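The HTIN definition above can be made concrete with a small sketch. The following Python snippet uses a hypothetical data layout (the paper does not publish code): intentions are behavior-tree branches represented as tuples, each transaction is linked to the intentions in its behavior sequence, and transaction-transaction edges come from an external signal such as a shared remark.

```python
# Minimal sketch of building a heterogeneous transaction-intention network (HTIN).
# Data layout and names are hypothetical; the paper does not publish an implementation.

def build_htin(transactions, shared_remarks):
    """transactions: list of (txn_id, [intention, ...]) pairs, where each
    intention is a tuple of behavior names (one branch of the behavior tree).
    shared_remarks: (txn_id, txn_id) pairs sharing a remark, one possible
    source of transaction-transaction edges."""
    intention_ids = {}            # intention tuple -> intention node id
    tt_edges, ti_edges = [], []
    for txn, intentions in transactions:
        for intent in intentions:
            node = intention_ids.setdefault(intent, "I%d" % len(intention_ids))
            ti_edges.append((txn, node))          # transaction-intention edge
    tt_edges.extend(shared_remarks)               # transaction-transaction edges
    nodes = [t for t, _ in transactions] + list(intention_ids.values())
    return nodes, tt_edges, ti_edges

nodes, tt, ti = build_htin(
    [("T0", [("Home", "Search", "ProductList")]),
     ("T1", [("Home", "Search", "ProductList"), ("Cart", "Checkout")])],
    [("T0", "T1")])
# T0 and T1 share intention node "I0", since they contain the same tree branch.
```

Note how the shared branch yields a single intention node linked to both transactions; this is the cross-interaction the network is meant to capture.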
5. Architecture of the Proposed IHGAT Method
The overall process has two stages. First, the intention neighbors are aggregated by a sequence-based model with an attention mechanism. Then a multi-head graph attention layer is applied to aggregate the transaction neighbors.
1. User intentions are modeled by an embedding layer and sequence encoding.
2. Intention neighbors of a transaction node are aggregated by an LSTM attention mechanism. (An LSTM is a long short-term memory network, used in deep learning especially for sequence prediction problems.)
3. A multi-head graph attention layer is used to aggregate interactions among transactions.
4. After aggregating the intention and transaction neighbors, the obtained representation is fed into multiple fully connected neural networks and a regression layer with a sigmoid unit, from which the predicted fraud probability (p) of the transaction is derived.
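The aggregate-then-predict flow in steps 1-4 can be illustrated numerically. This is a deliberately simplified sketch with toy vectors and plain dot-product attention standing in for the paper's LSTM and multi-head GAT layers; it shows the shape of the computation, not the actual IHGAT implementation.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(query, neighbors):
    """Aggregate neighbor embeddings with dot-product attention weights."""
    scores = [sum(q * n for q, n in zip(query, nb)) for nb in neighbors]
    weights = softmax(scores)
    dim = len(query)
    agg = [sum(w * nb[d] for w, nb in zip(weights, neighbors)) for d in range(dim)]
    return agg, weights

txn = [1.0, 0.0]                           # toy transaction embedding
intent_nbrs = [[0.9, 0.1], [0.0, 1.0]]     # intention-neighbor embeddings
txn_nbrs = [[1.0, 0.2]]                    # transaction-neighbor embeddings

intent_repr, w_i = attend(txn, intent_nbrs)    # stage 1: intention attention
txn_repr, _ = attend(txn, txn_nbrs)            # stage 2: transaction attention
h = [t + i + f for t, i, f in zip(txn, intent_repr, txn_repr)]
p = 1.0 / (1.0 + math.exp(-sum(h)))            # sigmoid fraud probability
```

The first intention neighbor is more similar to the transaction embedding, so it receives the larger attention weight; the sigmoid at the end maps the combined representation to a probability in (0, 1).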
6. Experiments
• Extensive experiments are conducted on a large-scale, real-world industrial dataset.
• First, performance on the task of fraud transaction detection is verified, and ablation tests demonstrate the effectiveness of every component of the model.
• Then the major hyper-parameters are analyzed closely.
• Results are visualized to demonstrate the interpretability of the method.
Dataset
• A large-scale industrial dataset from the Alibaba Group (an online e-commerce platform) is used.
• 1.27 million transactions (from 2020/05/01 to 2020/05/31) were randomly sampled for training and 0.31 million transactions (from 2020/06/01 to 2020/06/07) for testing, as shown in Table 1.
• For each transaction, the last 24 hours of user behavior are back-tracked; the behavior sequence and behavior tree are generated, and then the HTIN is constructed:
• 1.76 million transaction and intention nodes
• 21.93 million transaction-intention and transaction-transaction edges, as shown in Table 2.
7. Experiments
Baselines: to demonstrate the effectiveness of the proposed method, sequence-based models, tree-based models, graph-based models, and variants of the proposed model are compared as baselines.
• Sequence-based methods: LSTM, BiLSTM, GRU, CNN, and Transformer.
• Tree-based methods: CS Tree-LSTM and LIC Tree-LSTM.
• Graph-based methods: GraphSAGE and GAT.
• Ablation test: multiple variants of IHGAT are derived to analyze performance:
o One variant without edges among transactions
o One variant without the transaction attention mechanism
o One variant without the intention attention mechanism
o One variant without the order information of intentions
Evaluation metrics: two widely used metrics, AUC and R@P𝑁, measure the performance of fraud transaction detection.
• AUC is defined as the area under the ROC curve; the ROC curve depicts the true-positive rate with respect to the false-positive rate.
• R@P𝑁 indicates the recall rate when the precision rate equals 𝑁 (a high precision rate is needed for fraud detection problems).
Higher AUC and R@P𝑁 indicate better performance.
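R@P𝑁 can be computed by scanning score thresholds from the highest score downward and keeping the best recall among operating points whose precision is at least N. A small illustrative sketch with toy scores (not the paper's evaluation code):

```python
def recall_at_precision(scores, labels, target_p):
    """Best recall over operating points whose precision is at least target_p,
    scanning thresholds from the highest score downward."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp, fp, pos = 0, 0, sum(labels)
    best_recall = 0.0
    for i in order:
        tp += labels[i]          # label 1 = fraud (positive class)
        fp += 1 - labels[i]
        precision = tp / (tp + fp)
        if precision >= target_p:
            best_recall = max(best_recall, tp / pos)
    return best_recall

scores = [0.95, 0.90, 0.80, 0.60, 0.30]   # model scores, highest first
labels = [1,    1,    0,    1,    0]       # ground truth
r_at_p09 = recall_at_precision(scores, labels, 0.9)
# Only the top-2 threshold keeps precision >= 0.9, catching 2 of 3 frauds.
```

This also shows why R@P0.9 is a demanding metric: once a single false positive enters the top of the ranking, precision drops quickly, so recall gains must come from correctly ranked frauds.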
8. Results
Comparison across methods
• The proposed IHGAT is significantly better than all the baselines.
• Compared with sequence-based methods: AUC is at least 3.79% higher and R@P0.9 is 64.21% higher.
• Compared with tree-based methods: AUC is higher by 1.82% and R@P0.9 by 23.16%.
• Compared with graph-based methods: AUC is higher by 1.05% and R@P0.9 by 8.93%.
• Within the proposed method, the variant without transaction-transaction interactions (IHGAT𝑇−𝑇) obtains the worst performance among all the variants, with AUC decreased by 2.62% and R@P0.9 by 25.77%.
• From the results of IHGAT𝐼𝐴𝑡𝑡 and IHGAT𝐼𝐿𝑆𝑇𝑀, the attention mechanism on user intentions captures the key user intention, and the order information among user intentions is useful in the task of fraud transaction detection.
The main reason IHGAT scores better is that it captures both transaction-intention and transaction-transaction interactions.
9. Results
Effects of Behavior Sequence Length
• The testing set is divided into 5 groups to analyze the effects of different behavior sequence lengths.
• Overall, both tree-based and graph-based models are better than the sequence-based approaches at all sequence lengths.
• The graph-based models GraphSAGE and GAT achieve better performance than LIC Tree-LSTM when the sequence length is less than 120, but worse performance when it is greater than 120; IHGAT is the exception.
• Elaborate user-intention modeling appears to play an important role in the longer-sequence groups.
• As behavior sequence length increases, the performance of most models improves noticeably at first and then flattens to some extent.
Benefiting from the construction of user intentions and the heterogeneous transaction-intention network, the proposed model obtains the best results across sequence lengths and achieves a significant improvement on longer sequences.
10. Results
Other Major Hyper-parameters
The paper investigated the effects of two major parameters.
• Sliding window (l):
o An important component in building transaction-transaction interactions.
o For both AUC and R@P0.9, performance improves as the sliding-window size increases, and l = 3 gives the best performance.
o Too small a window size cannot build complete transaction-transaction edges, while too large a window size may introduce interference edges between transactions that are not closely related.
• Embedding dimensions:
o Lower dimensions may not fully represent user behavior, while higher dimensions do not improve classification performance and cost more training time.
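One plausible reading of the sliding window, sketched below with hypothetical names, is that each transaction in a user's chronologically ordered sequence is linked to the l transactions immediately preceding it; a larger l produces a denser transaction-transaction edge set, which matches the trade-off described above.

```python
def sliding_window_edges(txn_sequence, l):
    """Link each transaction to the previous l transactions of the same user.
    txn_sequence is assumed to be in chronological order."""
    edges = []
    for i in range(len(txn_sequence)):
        for j in range(max(0, i - l), i):
            edges.append((txn_sequence[j], txn_sequence[i]))
    return edges

seq = ["T0", "T1", "T2", "T3"]
edges_l1 = sliding_window_edges(seq, 1)   # a simple chain of 3 edges
edges_l3 = sliding_window_edges(seq, 3)   # denser: 6 edges, incl. (T0, T3)
```

With l = 1 only adjacent transactions are connected; with l = 3 distant pairs such as (T0, T3) also get an edge, illustrating how an overly large window can link transactions that are no longer closely related.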
11. Results Visualization
• The paper visualizes the attention weights of a fraud transaction (𝑇0), as shown below. The behavior sequence of 𝑇0 is segmented into five intentions, 𝐼1 to 𝐼5, shown in figure (a).
• Figure (b) shows that I2 and I4 get higher weights. I4 is an intuitive pattern of potential fraudsters, as they tend to switch accounts frequently to evade the identification rules of platforms.
• Among the transaction neighbors, T0 itself has the highest weight and T2 the second highest. The edge between them was established by a shared transaction remark; fraudsters sometimes use such common remarks (or secret codes) to communicate with their accomplices.
12. Conclusion
• The paper investigates the detection of fraud transactions by elaborately modeling user intentions and leveraging transaction-level interactions.
• It devises a heterogeneous transaction-intention network and a graph-based neural model (IHGAT) to detect fraud transactions.
• Experiments conducted on a real-world dataset show that the proposed model is effective for fraud transaction detection and provides good interpretability of its results.
• I found this method very interesting, and it clearly shows better results compared to sequence-based methods.
• I am curious how real-world data challenges might impact the performance of this method, for example:
o Some transaction data in a sequence may be missing due to network/system failures; how would the method perform?
o There may be benign transaction patterns with similar comments, such as frequent purchases of gifts for family members or fund transfers with friends.
o The amount of compute needed to actively detect fraud with low-latency response times could be significant, since building a large IHGAT network with many embeddings and a large sliding window is compute-intensive.