Kohonen networks are a type of self-organizing map (SOM) neural network that is effective for clustering analysis. SOMs reduce the dimensionality of input data while preserving the topological properties of the input. Kohonen networks learn through competitive learning where output nodes compete to be activated by an input observation. The weights of the winning node and its neighbors are adjusted to be more similar to the input values. This allows the network to cluster similar input patterns together on the output map. The document provides a detailed example of how Kohonen networks work through competition, cooperation, and adaptation steps to cluster a sample dataset.
2. Kohonen Networks – Self Organizing Map
• Kohonen networks were introduced in 1982 by Finnish researcher Teuvo Kohonen
• Although applied initially to image and sound analyses, Kohonen networks are
nevertheless an effective mechanism for clustering analysis
• Kohonen networks represent a type of self-organizing map (SOM), which itself
represents a special class of neural networks
• The goal of SOMs is to convert a complex high-dimensional input signal into a
simpler low-dimensional discrete map, which makes them nicely appropriate for cluster
analysis, where underlying hidden patterns among records and fields are sought
• SOMs structure the output nodes into clusters of nodes, where nodes in closer
proximity are more similar to each other than to other nodes that are farther apart
• Some researchers have shown that SOMs represent a nonlinear generalization of
principal components analysis, another dimension-reduction technique
• SOMs are based on competitive learning, where the output nodes compete among
themselves to be the winning node (or neuron), the only node to be activated by a
particular input observation
3. Clustering Task
• A typical SOM architecture is shown in the following figure
• The input layer is shown at the bottom of the figure, with one input node for each
field. Just as with neural networks, these input nodes perform no processing
themselves, but simply pass the field input values along downstream
4. Prerequisites of Clustering
• Like neural networks, SOMs are feedforward and completely connected
• Feedforward networks do not allow looping or cycling.
• Completely connected means that every node in a given layer is connected to every
node in the next layer, although not to other nodes in the same layer
• Like neural networks, each connection between nodes has a weight associated with
it, which at initialization is assigned randomly to a value between 0 and 1.
Adjusting these weights represents the key to the learning mechanism in both
neural networks and SOMs
• Variable values need to be normalized or standardized, just as for neural networks,
so that certain variables do not overwhelm others in the learning algorithm
• Unlike most neural networks, SOMs have no hidden layer
• The data from the input layer is passed along directly to the output layer
• The output layer is represented in the form of a lattice, usually in one or two
dimensions, and typically in the shape of a rectangle, although other shapes, such as
hexagons, may be used
• The output layer shown in the earlier figure is a 3 × 3 square
5. Working of SOM
• For a given record (instance), a particular field value is forwarded from a particular
input node to every node in the output layer
• For example, suppose that the normalized age and income values for the first record
in the data set are 0.69 and 0.88, respectively.
• The 0.69 value would enter the SOM through the input node associated with age,
and this node would pass this value of 0.69 to every node in the output layer
• Similarly, the 0.88 value would be distributed through the income input node to
every node in the output layer
• These values, together with the weights assigned to each of the connections, would
determine the values of a scoring function (such as Euclidean distance) for each
output node
• The output node with the “best” outcome from the scoring function would then
be designated as the winning node
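The competition just described can be sketched for the (0.69, 0.88) record. The weight values below are hypothetical stand-ins for the random initialization (the actual weights are not given at this point), so only the mechanics, not the particular winner, should be read from this:

```python
import numpy as np

# Competition step: the normalized record (0.69, 0.88) is scored against
# every node of a 3 x 3 output lattice. The weights here are illustrative
# random values standing in for the network's initialization.
rng = np.random.default_rng(42)
weights = rng.random((9, 2))       # one (age, income) weight vector per output node
x = np.array([0.69, 0.88])         # normalized (age, income) for the first record

distances = np.sqrt(((weights - x) ** 2).sum(axis=1))   # Euclidean scoring function
winner = int(distances.argmin())   # the node with the smallest distance wins
print("winning node:", winner)
```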
6. Working of SOM
• SOMs exhibit three characteristic processes:
1. Competition.
• As mentioned above, the output nodes compete with each other to produce the best
value for a particular scoring function, most commonly the Euclidean distance
• In this case, the output node that has the smallest Euclidean distance between the
field inputs and the connection weights would be declared the winner
2. Cooperation.
• The winning node therefore becomes the center of a neighborhood of excited neurons.
This emulates the behavior of human neurons, which are sensitive to the output of other
neurons in their immediate neighborhood.
• In SOMs, all the nodes in this neighborhood share in the “excitement” or “reward”
earned by the winning node, that of adaptation. Thus, even though the nodes in the
output layer are not connected directly, they tend to share common features, due to
this neighborliness parameter
3. Adaptation.
• The nodes in the neighborhood of the winning node participate in adaptation, that is,
learning. The weights of these nodes are adjusted so as to further improve the score
function
• In other words, these nodes will thereby have an increased chance of winning the
competition once again, for a similar set of field values
7. Kohonen Networks
• Kohonen networks are SOMs that exhibit Kohonen learning
• Suppose that we consider the set of m field values for the nth record to be an input
vector and the current set of m weights for a particular
output node j to be a weight vector
• In Kohonen learning, the nodes in the neighborhood of the winning node adjust
their weights using a linear combination of the input vector and the current weight
vector: wij,new = wij,current + 𝜂(xni − wij,current), where 𝜂 is the learning rate
• Kohonen indicates that the learning rate 𝜂 should be a decreasing function of
training epochs (runs through the data set) and that a linearly or geometrically
decreasing 𝜂 is satisfactory for most purposes
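The linear-combination update can be sketched directly; the numbers below reproduce the kind of adjustment used in the worked example later in these slides (w = (0.9, 0.8), x = (0.8, 0.8), 𝜂 = 0.5), with an illustrative function name of my own:

```python
import numpy as np

def kohonen_update(w, x, eta):
    """Nudge weight vector w a fraction eta of the way toward input x.

    Equivalent to the linear combination (1 - eta) * w + eta * x.
    """
    return w + eta * (x - w)

w = np.array([0.9, 0.8])              # current weights of the winning node
x = np.array([0.8, 0.8])              # input vector (age, income)
print(kohonen_update(w, x, eta=0.5))  # halves the distance to x: (0.85, 0.8)
```

With 𝜂 = 0.5, every update moves the weight vector exactly halfway toward the input, which is why the worked example's adjustments are always half the current gap.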
8. Algorithm for Kohonen Networks
• The algorithm for Kohonen Networks is shown below:
For each input vector x, perform the following steps:
• Competition : For each output node j, calculate the value of the
scoring function; for example, for Euclidean distance, D(wj, xn) = √Σi(wij − xni)².
Find the winning node J that minimizes D(wj, xn) over all output nodes
• Cooperation : Identify all output nodes j within the topological neighborhood of
J defined by the neighborhood size R.
• Adaptation : Adjust the weights of all nodes in the neighborhood of J:
wij,new = wij,current + 𝜂(xni − wij,current)
• Adjust the learning rate and neighborhood size, as needed
• Stop when the termination criteria are met
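The steps above can be sketched end to end. This is an illustrative reconstruction, not code from the slides: the lattice neighborhood is taken as Chebyshev distance ≤ R, the learning rate decays linearly across epochs, and the function and parameter names are my own:

```python
import numpy as np

def train_som(X, grid_shape=(3, 3), epochs=20, eta0=0.3, R=1, seed=0):
    """Kohonen SOM: competition, cooperation, adaptation over several epochs."""
    rows, cols = grid_shape
    rng = np.random.default_rng(seed)
    weights = rng.random((rows * cols, X.shape[1]))   # random init between 0 and 1
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)])
    for epoch in range(epochs):
        eta = eta0 * (1 - epoch / epochs)             # linearly decaying learning rate
        for x in X:
            # Competition: winner J minimizes Euclidean distance to x
            J = int(np.linalg.norm(weights - x, axis=1).argmin())
            # Cooperation: output nodes within lattice distance R of J
            hood = np.abs(coords - coords[J]).max(axis=1) <= R
            # Adaptation: nudge the neighborhood's weights toward the input
            weights[hood] += eta * (x - weights[hood])
    return weights
```

Calling `train_som` with `R=0` restricts the reward to the winning node alone, as in the worked example that follows.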
9. A Worked Example
• Suppose that we have a data set with two attributes, age and income, which have
already been normalized, and suppose that we would like to use a 2 × 2 Kohonen
network to uncover hidden clusters in the data set. The Network is shown below:
10. A Worked Example
• With such a small network, we set the neighborhood size to be R = 0, so that only
the winning node will be awarded the opportunity to adjust its weight
• Also, we set the learning rate 𝜂 to be 0.5
• Assume that the weights have been randomly initialized as follows
• For the first input vector, x1 = (0.8, 0.8), we perform the following competition,
cooperation, and adaptation sequence
• Competition Step : Compute the Euclidean distances between the input and the
weights for each of the four output nodes
• So, node 1 is the winner here. Why? Because the input vector (0.8, 0.8) is most
similar to node 1's weight vector (0.9, 0.8)
• In other words, we may expect node 1 to uncover a cluster of older, high-income
persons
11. A Worked Example
• Cooperation. In this simple example, we have set the neighborhood size R = 0 so that
the level of cooperation among output nodes is nil! Therefore, only the winning
node, node 1, will be rewarded with a weight adjustment. (We omit this step in the
remainder of the example.)
• Adaptation. For the winning node, node 1, the weights are adjusted as follows: for j
= 1 (node 1), n = 1 (the first record) and learning rate 𝜂 = 0.5, for each of the fields
this becomes
• For age: w11,new = 0.9 + 0.5(0.8 − 0.9) = 0.85
• For income: w21,new = 0.8 + 0.5(0.8 − 0.8) = 0.8
• Note the type of adjustment that takes place. The weights are nudged in the
direction of the fields’ values of the input record. That is, w11, the weight on the age
connection for the winning node, was originally 0.9, but was adjusted in the
direction of the normalized value for age in the first record, 0.8. As the learning rate
𝜂 = 0.5, this adjustment is half (0.5) of the distance between the current weight and
the field value
• This adjustment will help node 1 to become even more proficient at capturing the
records of older, high-income persons
12. A Worked Example
• Let's apply this process to three more input points x2 = (0.8, 0.1), x3 = (0.2, 0.9) and x4 =
(0.1, 0.1), starting from the same initial weights and carrying each weight update
forward to the next input
• For input 2, the distances are computed as 0.70, 0.14, 0.99 and 0.71, and output
node 2 is the winner; the updated weights are 0.85 and 0.15
• For input 3, the distances are computed as 0.66, 0.99, 0.14 and 0.71, and output
node 3 is the winner; the updated weights are 0.15 and 0.85
• For input 4, the distances are computed as 1.03, 0.75, 0.75 and 0.10, and output
node 4 is the winner; the updated weights are 0.10 and 0.15
• Consolidated outcome is as follows
Cluster   Associated with   Description
1         Node 1            Older person with high income
2         Node 2            Older person with low income
3         Node 3            Younger person with high income
4         Node 4            Younger person with low income
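The worked example can be checked numerically. The initial weights below are a reconstruction implied by the winning distances and updates quoted above (node 1 = (0.9, 0.8), node 2 = (0.9, 0.2), node 3 = (0.1, 0.8), node 4 = (0.1, 0.2)), since the original initialization figure is not reproduced here; updates are applied sequentially, one input at a time:

```python
import numpy as np

# Initial weights reconstructed from the updates quoted in the example
weights = np.array([[0.9, 0.8],   # node 1
                    [0.9, 0.2],   # node 2
                    [0.1, 0.8],   # node 3
                    [0.1, 0.2]])  # node 4
inputs = np.array([[0.8, 0.8], [0.8, 0.1], [0.2, 0.9], [0.1, 0.1]])
eta = 0.5                                      # learning rate; neighborhood size R = 0

for x in inputs:
    d = np.linalg.norm(weights - x, axis=1)    # competition: Euclidean distances
    J = int(d.argmin())                        # winning node
    weights[J] += eta * (x - weights[J])       # adaptation: winner only (R = 0)
    print(f"winner: node {J + 1}, distances: {np.round(d, 2)}")
```

Running this reproduces the sequence of winners (nodes 1, 2, 3, 4) and the updated weight vectors (0.85, 0.8), (0.85, 0.15), (0.15, 0.85) and (0.10, 0.15).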
13. A Bigger Example
• Let’s apply the Kohonen Net to the Churn dataset
• Recall that the data set contains 20 variables worth of information about 3333
customers, along with an indication of whether that customer churned (left the
company) or not
• A total of 9 variables/attributes are used for clustering the customers – 2 Categorical
(International plan and Voice Mail plan) and 7 continuous (Account length, voice
mail messages, day minutes, evening minutes, night minutes, international minutes,
and customer service calls)
• Z-score standardization has been adopted for standardizing the continuous variables
• A 3×3 Kohonen network architecture has been adopted
• The learning parameters were set as follows:
• For the first 20 cycles (passes through the data set), the neighborhood size was
set at R = 2, and the learning rate was set to decay linearly starting at 𝜂 = 0.3
• Then, for the next 150 cycles, the neighborhood size was reset to R = 1, while
the learning rate was allowed to decay linearly from 𝜂 = 0.3 to 𝜂 = 0
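The Z-score standardization adopted above replaces each continuous value by its distance from the variable's mean, in standard-deviation units. A minimal sketch (the sample values are made up, not the actual churn data):

```python
import numpy as np

def z_score(column):
    """Standardize a 1-D array: subtract the mean, divide by the standard deviation."""
    column = np.asarray(column, dtype=float)
    return (column - column.mean()) / column.std()

# e.g. account lengths in days (illustrative values only)
account_length = [141, 65, 100, 128, 61]
z = z_score(account_length)
print(np.round(z, 2))   # standardized values: mean 0, standard deviation 1
```

Standardizing this way keeps variables with large raw scales (such as day minutes) from overwhelming small-scale variables (such as customer service calls) in the distance computation.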
15. A Bigger Example
• As it turned out, the Kohonen algorithm used only six of the nine available output
nodes, as shown in the following figure with output nodes 01, 11, and 21 being
pruned
16. A Bigger Example - Interpretation
• Can we develop the Cluster profiles?
• Consider International Plan = Yes vs. International Plan = No
• International Plan = Yes customers reside exclusively in Nodes 12 and 22
17. A Bigger Example - Interpretation
• Can we develop the Cluster profiles?
• Consider Voice Mail = Yes vs. Voice Mail = No
• The three clusters along the bottom row (i.e., Cluster 00, Cluster 10, and Cluster 20)
contain only non-adopters of the VoiceMail Plan
• Clusters 02 and 12 contain only adopters of the VoiceMail Plan
• Cluster 22 contains mostly non-adopters but also some adopters of the VoiceMail Plan
18. A Bigger Example - Interpretation
• Recall that because of the neighborliness parameter, clusters that are closer
together should be more similar than clusters that are farther apart.
• Note that all International Plan adopters reside in contiguous (neighboring) clusters,
as do all non-adopters. The same holds for the VoiceMail Plan in the later figure,
except that Cluster 22 contains a mixture
• We see that Cluster 12 represents a special subset of customers: those who have
adopted both the International Plan and the VoiceMail Plan. Although this subset
represents only 2.4% of the customers, it is a well-defined one, and the Kohonen
network uncovered it
• Next we interpret the clusters according to the continuous predictors
19. A Bigger Example - Interpretation
• The following figure provides information about how the values of all the variables
are distributed among the clusters, with one column per cluster and one row per
variable
20. A Bigger Example - Interpretation
• The darker rows indicate the more important variables, that is, the variables that
proved more useful for discriminating among the clusters
• Consider Account Length_Z. Cluster 00 contains customers who tend to have been
with the company for a long time, that is, their account lengths tend to be on the
large side. Contrast this with Cluster 20, whose customers tend to be fairly new
• For the continuous variables, the data analyst should report the means for each
variable, for each cluster, along with an assessment of whether the difference in
means across the clusters is significant
• It is important that the means reported to the client appear on the original
(untransformed) scale, and not on the Z scale or min–max scale, so that the client
may better understand the clusters
21. A Bigger Example - Interpretation
• The following table provides these means, along with the results of an analysis of
variance for assessing whether the difference in means across clusters is significant
22. A Bigger Example - Interpretation
• Each row contains the information for one numerical variable, with one analysis of
variance for each row. Each cell contains the cluster mean, standard deviation,
standard error (standard deviation∕√cluster count), and cluster count
• The degrees of freedom are df1 = k - 1 = 6 - 1 = 5 and df2 = N - k = 3333 - 6 = 3327
• The F-test statistic is the value of F = MSTR∕MSE for the analysis of variance for that
particular variable, and the Importance statistic is simply 1 − p-value, where
p-value = P(F > F-test statistic)
• Account length and the number of voice mail messages emerge as the two most
important numerical variables for discriminating among clusters
• Next, graphically it is seen that the account length for Cluster 00 is greater than that
of Cluster 20, which is also supported by the statistics in the last table, which show
a mean account length of 141.508 days for Cluster 00 versus 61.707 days for
Cluster 20
• Also, tiny Cluster 12 has the highest mean number of voice mail messages (31.662),
with Cluster 02 also having a large amount (29.229)
• Finally, note that the neighborliness of Kohonen clusters tends to make neighboring
clusters similar. It would have been surprising, for example, to find a cluster with a
141.508-day mean account length right next to a cluster with a 61.707-day mean
account length. In fact, this did not happen
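The per-variable analysis of variance described above can be sketched directly from the definitions F = MSTR∕MSE, df1 = k − 1, df2 = N − k, and Importance = 1 − p-value. The cluster samples below are small made-up numbers, not the churn data:

```python
import numpy as np
from scipy.stats import f as f_dist

def anova_importance(groups):
    """Return (F-statistic, Importance) for one variable across clusters."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    k = len(groups)                          # number of clusters
    N = sum(len(g) for g in groups)          # total record count
    grand_mean = np.concatenate(groups).mean()
    # Between-cluster (treatment) and within-cluster (error) sums of squares
    sstr = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    sse = sum(((g - g.mean()) ** 2).sum() for g in groups)
    mstr, mse = sstr / (k - 1), sse / (N - k)   # df1 = k - 1, df2 = N - k
    F = mstr / mse
    p_value = f_dist.sf(F, k - 1, N - k)        # p-value = P(F > observed F)
    return F, 1 - p_value                       # Importance = 1 - p-value

clusters = [[1.2, 0.9, 1.1, 1.4], [0.1, -0.2, 0.0, 0.3], [-1.0, -1.3, -0.9, -1.1]]
F, importance = anova_importance(clusters)
print(round(F, 1), round(importance, 4))
```

An Importance close to 1 flags the variable as strongly discriminating among clusters, which is how account length and voice mail messages rose to the top here.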
23. A Bigger Example – Cluster Profiles
• Cluster 00: Loyal Non-Adopters. Belonging to neither the VoiceMail Plan nor the
International Plan, customers in large Cluster 00 have nevertheless been with the
company the longest, with by far the largest mean account length, which may be
related to the largest number of calls to customer service. This cluster exhibits the
lowest average minutes usage for day minutes and international minutes, and the
second lowest evening minutes and night minutes
• Cluster 02: Voice Mail Users. This large cluster contains members of the VoiceMail
Plan, with therefore a high mean number of voice mail messages, and no members
of the International Plan. Otherwise, the cluster tends toward the middle of the
pack for the other variables
• Cluster 10: Average Customers. Customers in this medium-sized cluster belong to
neither the Voice Mail Plan nor the International Plan. Except for the second-
largest mean number of calls to customer service, this cluster otherwise tends
toward the average values for the other variables
• Cluster 12: Power Customers. This smallest cluster contains customers who belong
to both the VoiceMail Plan and the International Plan. These sophisticated
customers also lead the pack in usage minutes across three categories and are in
second place in the other category. They also have the fewest average calls to
customer service. The company should keep a watchful eye on this cluster, as they
may represent a highly profitable group.
24. A Bigger Example – Cluster Profiles
• Cluster 20: Newbie Non-Adopters. Belonging to neither the VoiceMail
Plan nor the International Plan, customers in large Cluster 20 represent the
company’s newest customers, on average, with easily the shortest mean account
length. These customers set the pace with the highest mean night minutes
usage
• Cluster 22: International Plan Users. This small cluster contains members
of the International Plan and only a few members of the VoiceMail Plan.
The number of calls to customer service is second lowest, which may
mean that they need a minimum of hand-holding. Besides the lowest mean
night minutes usage, this cluster tends toward average values for the other
variables.
• Cluster profiles may of themselves be of actionable benefit to companies and
researchers; for example, they may suggest market segmentation strategies in an
era of shrinking budgets
• Rather than targeting the entire customer base for a mass mailing, for example,
perhaps only the most profitable customers may be targeted
• Another strategy is to identify those customers whose potential loss would be of
greater harm to the company, such as the customers in Cluster 12 above
• Finally, customer clusters could be identified that exhibit behavior predictive of
churning; intervention with these customers could save them for the company