NEW MARKET SEGMENTATION METHODS USING ENHANCED (RFM), CLV, MODIFIED REGRESSIO... (ijcsit)
A widely used approach for gaining insight into the heterogeneity of consumers' buying behavior is market segmentation. Conventional market segmentation models often ignore the fact that consumers' behavior may evolve over time; as a result, retailers expend their limited resources attempting to service unprofitable consumers. This study investigates the integration of enhanced Recency, Frequency, Monetary (RFM) scores with a Consumer Lifetime Value (CLV) matrix for a medium-sized retailer in the State of Kuwait. A modified regression algorithm investigates the consumer purchase trend, gaining knowledge from a point-of-sale data warehouse. In addition, this study applies an enhanced normal-distribution formula to remove outliers, followed by the soft-clustering Fuzzy C-Means and hard-clustering Expectation Maximization (EM) algorithms, to analyze consumer buying behavior. Cluster-quality assessment shows that the EM algorithm scales much better than the Fuzzy C-Means algorithm, owing to its ability to assign good initial points in the smaller dataset.
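The RFM scores this abstract builds on are typically derived from a transaction log. As a minimal illustration (not the paper's enhanced formulation), the raw R, F and M values per customer can be computed like this; the field names and sample data are hypothetical:

```python
from collections import defaultdict
from datetime import date

def rfm_scores(transactions, today):
    """Compute raw RFM values per customer from (customer_id, date, amount) rows.

    Recency   = days since last purchase (lower is better)
    Frequency = number of purchases
    Monetary  = total amount spent
    """
    last = {}
    freq, money = defaultdict(int), defaultdict(float)
    for cust, d, amount in transactions:
        freq[cust] += 1
        money[cust] += amount
        if cust not in last or d > last[cust]:
            last[cust] = d
    return {c: ((today - last[c]).days, freq[c], money[c]) for c in freq}

# Hypothetical sample: two customers, three purchases
txns = [
    ("a", date(2024, 1, 10), 20.0),
    ("a", date(2024, 3, 1), 35.0),
    ("b", date(2023, 11, 5), 500.0),
]
print(rfm_scores(txns, date(2024, 3, 11)))
# {'a': (10, 2, 55.0), 'b': (127, 1, 500.0)}
```

Real RFM segmentation would then bin these raw values into scores (e.g. quintiles 1-5) before clustering; the paper's "enhanced" scores presumably add further adjustments not shown here.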
Customer Clustering Based on Customer Purchasing Sequence Data (IJERA Editor)
Customer clustering has become a priority for enterprises because of the importance of customer relationship management. Customer clustering can improve understanding of the composition and characteristics of customers, thereby enabling the creation of appropriate marketing strategies for each customer group. Previously, different customer clustering approaches have been proposed according to data type, namely customer profile data, customer value data, customer transaction data, and customer purchasing sequence data. This paper considers the customer clustering problem in the context of customer purchasing sequence data. Two major aspects distinguish this paper from past research: (1) in our model, a customer sequence contains itemsets, which is a more realistic configuration than previous models, which assume a customer sequence consists merely of items; and (2) in our model, a customer may belong to multiple clusters or to no cluster, whereas in existing models a customer is limited to exactly one cluster. The second difference implies that each cluster discovered by our model represents a crucial type of customer behavior, and that a customer can exhibit several types of behavior simultaneously. Finally, extensive experiments are conducted on a retail data set, and the results show that the clusters obtained by our model provide more accurate descriptions of customer purchasing behaviors.
INTEGRATION OF MACHINE LEARNING TECHNIQUES TO EVALUATE DYNAMIC CUSTOMER SEGME... (IJDKP)
The telecommunications industry is highly competitive, which means that mobile providers need a business intelligence model that can achieve an optimal level of churners as well as a minimal level of cost in marketing activities. Machine learning applications can provide guidance on marketing strategies, and data mining techniques can be used in the process of customer segmentation. The purpose of this paper is to provide a detailed analysis of the C.5 algorithm, combined with naive Bayesian modelling, for the task of segmenting telecommunication customers' behavioural profiles according to their billing and socio-demographic attributes. The results have been implemented and evaluated experimentally.
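The naive Bayes component mentioned in this abstract can be illustrated generically (this is not the authors' exact model). A categorical naive Bayes classifier over hypothetical billing and demographic attributes, with Laplace smoothing, might look like:

```python
from collections import Counter, defaultdict

def train_nb(samples):
    """Fit a categorical naive Bayes model from (features_dict, label) pairs."""
    label_counts = Counter()
    feat_counts = defaultdict(Counter)  # (label, feature) -> value counts
    for feats, label in samples:
        label_counts[label] += 1
        for f, v in feats.items():
            feat_counts[(label, f)][v] += 1
    return label_counts, feat_counts

def predict_nb(model, feats):
    """Return the label with the highest add-one-smoothed posterior."""
    label_counts, feat_counts = model
    total = sum(label_counts.values())
    best, best_p = None, -1.0
    for label, lc in label_counts.items():
        p = lc / total  # class prior
        for f, v in feats.items():
            c = feat_counts[(label, f)]
            # Laplace smoothing so unseen values do not zero out the posterior
            p *= (c[v] + 1) / (sum(c.values()) + len(c) + 1)
        if p > best_p:
            best, best_p = label, p
    return best

# Hypothetical segment labels and attributes
data = [
    ({"plan": "prepaid", "region": "urban"}, "low-value"),
    ({"plan": "postpaid", "region": "urban"}, "high-value"),
    ({"plan": "postpaid", "region": "rural"}, "high-value"),
]
model = train_nb(data)
print(predict_nb(model, {"plan": "postpaid", "region": "urban"}))  # high-value
```

The paper's pipeline would feed segments produced by the decision-tree stage into such a probabilistic step; the attribute names here are purely illustrative.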
Activity Based Profitability Management (Miguel Garcia)
Activity Based Profitability Management (ABPM) and Activity Based Budgeting (ABB) offer organizations a complete tool for gaining a competitive advantage and provide crucial information to support strategic and operational decision-making in the current business environment. It might seem that this type of information on profits, costs and budgets is unnecessary when implementing Digital Transformation solutions, on the assumption that new technologies must be adopted regardless of what they imply and at any cost. This is an error: such initiatives always involve business processes, products or services, customers or users, service channels and so on, all of which must be evaluated from a financial and business-process point of view, then implemented, measured and improved within a competitive market environment. In this sense, the profitability, cost and budget information provided by the ABPM and ABB approach will lead to better business decisions that significantly increase companies' performance and profits.
A study on the chain restaurants dynamic negotiation games of the optimizatio... (ijcsit)
In an era of meager profits, production costs often become an important factor affecting SMEs' operating conditions, and how to reduce them effectively has become an issue of in-depth consideration for business owners. In particular, the food and beverage (F&B) industry cannot accurately predict demand, which may cause demand forecasts to fail and create pressure from excess or insufficient inventory. F&B companies may even be unable to meet immediate customer needs, facing great challenges in quick response and inventory pressure. This study analyzed a product inventory model using the most recent year's sales data for fresh food materials of chain restaurants in a supply chain region with raw material suppliers and demanders. It adopted a multi-agent dynamic strategy game to establish a negotiation algorithm for the joint procurement decision model, verified through simulation cases, to achieve a dynamic negotiation optimization mechanism for the joint procurement of food materials. Coupled with the 3C theory of supply chain management for food material inventory management, we developed an optimization method for determining the chain restaurants' order quantities. For product demand forecasting, we applied a commonality model, a production and delivery capacity model, and a model of consumption and replenishment based on market demand changes. Moreover, using the dependencies between product demands as the forecasting basis, we determined the appropriate inventory model accordingly.
Over- or underestimating sales is detrimental to marketing and sales efforts, as well as to inventory and cash-flow management. Thus, the purpose of this investigation is to evaluate the forecasting accuracy of three competing multivariate time-series models that take into account existing
Connecting B2C to B2B: a Top Down Approach for Industrial Distributors (Stephane Bratu)
In this article, the author discusses the importance for pricers in many industries, particularly the distribution industry, of engaging in both business-to-business and business-to-consumer markets. The article proposes a new way to execute pricing strategies and operations for industrial distribution companies that sell both to other businesses (B2B) and directly to consumers (B2C). This industry-specific example provides pricing strategies and analytic approaches that pricers in multiple businesses and markets can apply. Dr. Stephane Bratu is Director of Pricing and Analytics at Arrow Electronics.
International Journal of Business and Management Invention (IJBMI) (inventionjournals)
International Journal of Business and Management Invention (IJBMI) is an international journal intended for professionals and researchers in all fields of Business and Management. IJBMI publishes research articles and reviews across the whole field of Business and Management, including new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in the journal can be accessed online.
The Journal will bring together leading researchers, engineers and scientists in the domain of interest from around the world. Topics of interest for submission include, but are not limited to
Factors Affecting Purchasing Effectiveness in the Public Sugar Sector: A Case ... (paperpublications3)
Abstract: In the recent past, procurement performance has attracted great attention from practitioners, academicians and researchers due to poor performance resulting from non-adherence to proper processes and procedures. Many studies have devoted their content to financial factors as measures of effectiveness, giving dismally little consideration to non-financial factors. This study investigated selected non-financial factors that influence the effectiveness of the purchasing function in the public sugar sector, guided by four specific objectives: to find out how purchasing interaction with other departments, purchasing delegated authority, purchasing activity execution, and supplier relationship management practices each impact purchasing effectiveness. All four variables were found to affect the effectiveness of the purchasing function in the public sugar sector. The study adopted a descriptive case research design, and the study population comprised 118 management staff of Nzoia Sugar Company Ltd. A purposive sampling technique was employed to select a sample of 57 respondents. Questionnaires were used as the main data collection instruments. Descriptive statistics were applied to analyze numerical data gathered using closed-ended questions, aided by the Statistical Package for the Social Sciences (SPSS). From the findings, level of task execution explained 43.1% of the purchasing department's effectiveness, level of supplier relationship explained 20.9%, and interaction level explained 2.2%, while the level of purchasing delegated authority had a negative relationship with effectiveness at -4.1%, meaning that the more autonomous the purchasing department becomes, the less effective it will be.
The study recommends applying supplier collaboration strategies, integrating supply chain management tasks with IT to help speed up decision-making between SCM partners, signing service level agreements (SLAs), and increasing the purchasing function's effectiveness through training and membership in professional bodies such as CIPS and KISM.
Keywords: Assessment, delegated authority, effectiveness, efficiency, inventory, non-financial measures, purchasing interaction.
Customer Churn Management For Profit Maximization PowerPoint Presentation Slides (SlideTeam)
This PowerPoint template helps a firm prevent its customers from reducing their purchases of products and services by presenting various ways to manage customer churn. It covers key statistics associated with customer churn, including the present state of customer attrition and the monthly churn rate. Customer churn is a critical issue that affects overall firm performance, causing heavy losses in terms of abandoned purchases and lower revenues; notably, retaining a customer is more profitable than acquiring a new one. The template covers the various types of customer churn, such as a customer who stops spending, churn due to product quality, or complete loss of the customer account. It details how a firm can handle attrition through four stages of churn management: acquiring churned customers, delighting customers, preventing customer attrition, and saving customers through various campaigns. It also covers the churn propensity model, which helps prevent churn through predictive analytics using statistical techniques such as machine learning. https://bit.ly/36qQZKg
Predicting future sales is intended to control existing stock so that shortages or excess stock can be minimized. When sales volumes can be accurately predicted, consumer demand can be fulfilled in a timely manner and cooperation with the supplier company can be maintained properly, so the company can avoid losing sales and customers. This study proposes a model to predict sales quantities (multi-product) by adopting the Recency-Frequency-Monetary (RFM) concept and the Fuzzy Analytic Hierarchy Process (FAHP) method. Prediction accuracy is measured using the Mean Absolute Percentage Error (MAPE), the most important criterion in analyzing prediction accuracy. The results indicate that the model's accuracy was high (an average MAPE of 3.22%), so it can serve as a sales prediction model.
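The MAPE metric this abstract reports is a standard, easily reproduced formula; a minimal sketch (with made-up sample series, not the study's data) is:

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent.

    MAPE = (100 / n) * sum(|a_i - p_i| / |a_i|); undefined when any actual value is 0.
    """
    if len(actual) != len(predicted) or not actual:
        raise ValueError("series must be non-empty and of equal length")
    return 100.0 / len(actual) * sum(
        abs(a - p) / abs(a) for a, p in zip(actual, predicted)
    )

# Hypothetical actual sales vs. model predictions
print(mape([100, 200, 400], [110, 190, 400]))  # 5.0
```

A MAPE of 3.22%, as reported, thus means predictions deviated from actual sales by about 3% on average, which is why it indicates high accuracy.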
This research deals with supply chain management (SCM), which provides a highly practical, rapid flow of high-quality, significant information to help suppliers deliver a constant and precisely timed flow of resources to customers. However, unplanned demand oscillations, including those caused by stock-outs, produce distortions in supply chain performance. Numerous causes, often in combination, trigger these supply chain distortions, which have become known as the "Bullwhip Effect". While the devil is generally hidden in the details, as is the case here, the most common drivers of these demand distortions are customers, promotions, sales, manufacturing policies, processes, systems and suppliers. The "Bullwhip Effect" has in the past been accepted as normal and, in fact, thought to be a predictable part of the order-to-delivery cycle. In this paper we propose a novel, effective approach to finding the MSE (mean squared error) with the help of Mamdani fuzzy logic.
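The MSE this abstract optimizes is the standard mean squared error between observed demand and upstream orders; a plain computation (without the Mamdani inference stage, and with hypothetical series) looks like:

```python
def mse(actual, forecast):
    """Mean squared error between observed demand and forecast/ordered quantities."""
    if len(actual) != len(forecast) or not actual:
        raise ValueError("series must be non-empty and of equal length")
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical: steady retail demand vs. amplified orders placed upstream,
# the kind of oscillation the Bullwhip Effect describes
demand = [100, 102, 98, 101]
orders = [100, 110, 90, 105]
print(mse(demand, orders))  # 36.0
```

In the paper's setting, a Mamdani fuzzy system would map linguistic inputs (e.g. demand variability) to an adjustment that reduces this error; that inference stage is not reproduced here.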
AN OPTIMIZING INTEGRATED INVENTORY MODEL WITH INVESTMENT FOR QUALITY IMPROVEM... (IJITCA Journal)
This paper presents a vendor-buyer integrated inventory model. It considers an integrated production-inventory optimization model for the vendor and the buyer under quality-improvement investment and setup cost reduction in the production system, such that total profit is maximized. The relationship between demand and price is assumed to be linear. Total profit is the supply chain performance measure, calculated as the difference between sales revenue and total cost, where the latter is the sum of the vendor's and buyer's setup/order and inventory holding costs, the setup-cost-reduction investment and the quality-improvement investment. The paper seeks the optimal production run time and the capital investments in setup cost reduction and process quality improvement that maximize total profit; these two investments are its main focus. The proposed model is based on the integrated total profit for both buyer and vendor, from which the optimal order quantity, quality-improvement investment and setup cost reduction are determined. A solution procedure is developed to maximize the total profit of the vendor and the buyer, and a numerical example is given to demonstrate it.
Pricing Maturity Assessment: results and findings per industry (Chemicals, Postal and Logistics, Telecommunications, High Tech, Automotive, Life Sciences, Machinery and Equipment, FMCG and Retail).
Reliability, quality and a competitive price are table stakes in
the business of maintaining and repairing industrial facilities
and equipment, commonly known as Maintenance, Repair
and Operations (MRO). Given the nature of MRO, urgency can
often catapult to the top of the list of requirements. Sellers
who cannot consistently come through will almost certainly
be dropped from future consideration.
Telecommunication Analysis (3 use-cases) with IBM Watson Analytics (sheetal sharma)
The purpose of this study is to use Watson Analytics to examine why customers do not use the connections of the Bits Telecom Company and which factors influence churn. It also looks at cross-selling and up-selling, focuses on profitability and investment, and seeks ways to achieve better results.
This project concerns organisational buying behavior for Tata Teleservices Maharashtra Limited. Nowadays the consumer is the king of the market, and organisational buying behavior is a key concept from which many companies formulate their marketing strategies. The project was conducted in Pune city, Maharashtra state; Pune is one of the main IT hubs in India.
The Indian Internet Market Dynamics and Forecast (2008-2014) report analyses the latest developments in India's fast-growing Internet market by the main players and provides a market forecast through 2014. ROA Holdings and Optimum forecast that India's Internet market will reach as many as 868.47 million users by 2014, an estimated compound annual growth rate (CAGR) of 20.45%. During 2008, more than 112 million subscribers were added, increasing penetration from 20.31% to 29.76%.
In today's market, every firm sells nearly identical products differentiated only by brand. The sole aim of every company is to promote its brands using different strategies in order to earn revenue and occupy a major market share. According to "consumer reports", a person encounters approximately 247 images per day but probably does not notice even half of them, nor gets exposed to them. This means that mere proximity or visibility of the message is not sufficient for the customer to notice it. Since it is not possible for the human brain to process so many messages at once, the viewer often finds it difficult to decode the message communicated by the company, and thus the purpose of the whole communication goes in vain and is wasted.
With foreign companies, new technologies and consumers' inclination towards innovative products, electronics products increasingly come with Internet facilities, and the trend towards Internet usage keeps rising; organisations, too, are taking the help of the Internet to decrease their workload. They are far heavier users of Internet services than individual consumers. There is therefore a need for Internet Service Providers (ISPs) to capture this market.
Clustering Prediction Techniques in Defining and Predicting Customers Defecti... (IJECEIAES)
With the growth of the e-commerce sector, customers have more choices, which encourages them to divide their purchases among several e-commerce sites and compare competitors' products, raising the risk of churn. A review of the literature on customer churn models reveals that no prior research has considered both partial and total defection in non-contractual online environments; studies have focused on either total or partial defection. This study proposes a customer churn prediction model in an e-commerce context, in which a clustering phase based on the integration of the k-means method and the Length-Recency-Frequency-Monetary (LRFM) model is employed to define churn, followed by a multi-class prediction phase based on three classification techniques: simple decision tree, artificial neural networks and decision tree ensemble. The dependent variable classifies a customer as continuing loyal buying patterns (non-churned), a partial defector (partially churned), or a total defector (totally churned). Macro-averaging measures, including average accuracy and the macro-averages of precision, recall, and F1, are used to evaluate classifier performance under 10-fold cross-validation. Using real data from an online store, the results show that the decision tree ensemble outperforms the other models in identifying both future partial and total defection.
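The LRFM features that feed this paper's clustering phase extend RFM with a Length dimension. As a generic sketch (not the authors' exact pre-processing; field names and sample rows are hypothetical), the four values per customer can be derived from a transaction log like this:

```python
from collections import defaultdict
from datetime import date

def lrfm(transactions, today):
    """Length, Recency, Frequency, Monetary per customer from (id, date, amount) rows.

    Length  = days between first and last purchase (relationship duration)
    Recency = days since last purchase
    """
    first, last = {}, {}
    freq, money = defaultdict(int), defaultdict(float)
    for cust, d, amount in transactions:
        freq[cust] += 1
        money[cust] += amount
        first[cust] = min(first.get(cust, d), d)
        last[cust] = max(last.get(cust, d), d)
    return {c: ((last[c] - first[c]).days, (today - last[c]).days,
                freq[c], money[c]) for c in freq}

# Hypothetical sample: one customer with two purchases
txns = [("a", date(2024, 1, 1), 10.0), ("a", date(2024, 2, 1), 15.0)]
print(lrfm(txns, date(2024, 2, 11)))
# {'a': (31, 10, 2, 25.0)}
```

These per-customer vectors would then be standardized and passed to k-means to define the churn segments the paper describes.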
International Journal of Business and Management Invention (IJBMI)inventionjournals
International Journal of Business and Management Invention (IJBMI) is an international journal intended for professionals and researchers in all fields of Business and Management. IJBMI publishes research articles and reviews within the whole field Business and Management, new teaching methods, assessment, validation and the impact of new technologies and it will continue to provide information on the latest trends and developments in this ever-expanding subject. The publications of papers are selected through double peer reviewed to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
The Journal will bring together leading researchers, engineers and scientists in the domain of interest from around the world. Topics of interest for submission include, but are not limited to
Factors Affecting Purchasing Effectiveness in the Public Sugar Sector:A Case ...paperpublications3
Abstract:In the recent past, procurement performance has been attracting great attention from practitioners, academicians and researchers due to poor performance resulting from non adherence to proper processes and procedures. Many of the studies have devoted their content to financial factors as measures of effectiveness dismally giving consideration to non financial factors. This study aimed at investigating selected non financial factors that influence the effectiveness of purchasing function in the public sugar sector guided by four specific objectives; to find out how purchasing interaction with other departments impacts on its effectiveness, to find out how Purchasing delegated authority impacts on its effectiveness, to find out how Purchasing activity Execution impacts on its effectiveness and to find out how supplier relationship management practices impacts on purchasing function effectiveness. The four variables were found to have an effect on effectiveness of purchasing function in the public sugar sector. The study adopted a descriptive case research design and the study population comprised of 118 management staff Nzoia Sugar Company Ltd. A purposive sampling technique was employed to select a sample size of 57 respondents. Questionnaires were used as the main data collection instruments. Descriptive statistics data analysis method was applied to analyze numerical data gathered using closed ended questions aided by Statistical Package for Social Sciences (SPSS). From the findings, level of task execution explained 43.1% of purchasing department’s effectiveness, level of supplier relationship explained 20.9% and interaction level explained 2.2% while the level of purchasing delegated authority had a negative relationship with its effectiveness at -4.1% which means that the more autonomous purchasing department becomes the less effective it will be. 
The study recommends application of supplier collaboration strategies, integration of supply chain management tasks with IT to help speed up decision making process between the SCM partners, signing service level agreements (SLA),purchasing function to increase effectiveness by training and being members of professional bodies such as CIPS and KISM.
Keywords: Assessment, delegated authority, effectiveness, efficiency, inventory, non-financial measures, purchasing interaction.
Customer Churn Management For Profit Maximization PowerPoint Presentation SlidesSlideTeam
The PowerPoint template helps a firm prevent its customers from reducing their purchases of products and services. It provides various ways a firm can manage customer churn, and covers the churn propensity model and key statistics associated with customer churn, including the present state of customer attrition and the monthly customer churn rate. Customer churn is a critical issue that affects overall firm performance, causing heavy losses through abandoned purchases and lower revenues. It is worth noting that retaining a customer is more profitable than acquiring a new one. The template covers the various types of customer churn, such as when a customer stops spending, churns due to product quality, or closes an account entirely. It details how a firm can handle customer attrition by focusing on four stages of churn management: acquiring churned customers, delighting customers, preventing customer attrition, and saving customers through various campaigns. It also covers the churn propensity model, which helps prevent customer churn through predictive analytics using statistical techniques such as machine learning. https://bit.ly/36qQZKg
Predicting future sales is intended to control existing stock levels, so that shortages or excess stock can be minimized. When sales volumes can be accurately predicted, fulfillment of consumer demand can be prepared in a timely manner and cooperation with the supplier company can be maintained, so the company can avoid losing sales and customers. This study proposes a model to predict sales quantity (multi-product) by adopting the Recency-Frequency-Monetary (RFM) concept and the Fuzzy Analytic Hierarchy Process (FAHP) method. Prediction accuracy is measured with the standard Mean Absolute Percentage Error (MAPE), the most important criterion in analyzing prediction accuracy. The results indicate that the average MAPE value of the model was low (3.22%), so the model can serve as a sales prediction model.
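The MAPE criterion cited in this abstract can be sketched in a few lines; the figures below are illustrative, not the study's data:

```python
# Mean Absolute Percentage Error (MAPE): the average of |actual - predicted|
# relative to the actual value, expressed as a percentage. A minimal sketch;
# the variable names and sample values are assumptions for illustration.

def mape(actual, predicted):
    """Return MAPE as a percentage; actual values must be non-zero."""
    return 100.0 * sum(
        abs(a - p) / abs(a) for a, p in zip(actual, predicted)
    ) / len(actual)

# Predictions within a few percent of the actual sales quantities yield a
# correspondingly low MAPE, as in the 3.22% reported above.
actual = [100, 200, 150, 80]
predicted = [97, 206, 147, 82]
print(mape(actual, predicted))
```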
This research deals with supply chain management (SCM), which provides a highly practical, rapid flow of high-quality, significant information to help suppliers deliver a constant and precisely timed flow of resources to customers. However, unplanned demand oscillations, including those caused by stock-outs, produce distortions in supply chain performance. There are numerous causes, often in combination, that trigger these supply chain distortions and start what has become known as the “Bullwhip Effect”.
While the devil is generally hidden in the details, as is the case here, the most common drivers of these demand distortions are customers, promotions, sales, manufacturing policies, processes, systems and suppliers. The “Bullwhip Effect” has in the past been accepted as normal and, in fact, thought to be a predictable part of the order-to-delivery cycle. In this paper we propose a novel, effective approach to find the MSE (mean square error) with the help of Mamdani fuzzy logic.
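The MSE quantity this abstract refers to can be sketched directly (the Mamdani fuzzy-inference step the paper builds on is omitted here; the order figures are invented for illustration):

```python
# Mean square error between downstream demand and upstream orders; large
# MSE with amplified order variance is the bullwhip-effect signature.

def mse(actual, forecast):
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

# Retail demand is fairly steady; factory orders over-react to it.
retail_demand = [10, 12, 11, 13]
factory_orders = [8, 16, 9, 18]
print(mse(retail_demand, factory_orders))  # prints 12.25
```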
AN OPTIMIZING INTEGRATED INVENTORY MODEL WITH INVESTMENT FOR QUALITY IMPROVEM...IJITCA Journal
This paper presents a vendor-buyer integrated inventory model. It considers a vendor-buyer integrated production-inventory optimization model under quality improvement investment and setup cost reduction in the production system, such that total profit is maximized. The relationship between demand and price is assumed to be linear. Total profit is the supply chain performance measure; it is calculated as the difference between revenue from sales and total cost, where the latter is the sum of the vendor’s and buyer’s setup/order and inventory holding costs, the opportunity cost of setup reduction and the opportunity cost of quality investment. The paper aims to determine the optimal production run time and the capital investments in setup cost reduction and process quality improvement for the production system such that total profit is maximized. The main focus of this paper is setup cost reduction and investment in quality improvement. The proposed model is based on the integrated total profit of both buyer and vendor, from which the optimal order quantity, opportunity investment cost for quality improvement and setup cost reduction are determined. A solution procedure is developed to find and maximize the total profit of the vendor and buyer. To conclude, a numerical example is given to demonstrate the solution procedure.
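The profit structure described above (linear price-demand, revenue minus summed costs) can be sketched as follows; every parameter name and number here is an assumption for illustration, not the paper's exact model:

```python
# Illustrative total-profit structure: revenue from a linear price-demand
# relation minus the sum of setup/order, holding, and investment costs.

def demand(price, a=1000.0, b=20.0):
    """Linear demand: D(p) = a - b*p, floored at zero."""
    return max(a - b * price, 0.0)

def total_profit(price, setup_cost, holding_cost, investment):
    d = demand(price)
    revenue = price * d
    return revenue - (setup_cost + holding_cost + investment)

# Profit at two candidate prices; the paper optimizes such an expression
# jointly over run time and the two investment levels.
print(total_profit(20.0, 500.0, 300.0, 200.0))
print(total_profit(25.0, 500.0, 300.0, 200.0))
```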
Pricing Maturity Assessment, the results and finding per industry: Chemicals, Postal and Logistics, Telecommunications, HighTech, Automotive, LifeSciences, Machinery and Equipment, FMCG and Retail.
Reliability, quality and a competitive price are table stakes in
the business of maintaining and repairing industrial facilities
and equipment, commonly known as Maintenance, Repair
and Operations (MRO). Given the nature of MRO, urgency can
often catapult to the top of the list of requirements. Sellers
who cannot consistently come through will almost certainly
be dropped from future consideration.
Telecommunication Analysis (3 use-cases) with IBM watson analyticssheetal sharma
The purpose of this study is to examine, with the help of Watson Analytics, why customers are not using the connection of Bits Telecom Company and which factors influence churn. It also examines cross-selling and up-selling, focuses on profitability and investment, and seeks ways to achieve better results.
This project relates to organisational buying behavior for Tata Teleservices Maharashtra Limited. Nowadays the consumer is the king of the market, and organisational buying behavior is a key concept from which many companies formulate their marketing strategies. The project was conducted in Pune, Maharashtra, one of the main IT hubs in India.
The Indian Internet Market Dynamics and Forecast (2008-2014) report analyses the latest developments by the main players in the fast-growing Internet market of India and provides a market forecast to 2014. ROA Holdings and Optimum forecast that India's internet market will reach as many as 868.47 million users by 2014, an estimated compound annual growth rate (CAGR) of 20.45%. During 2008, more than 112 million subscribers were added, increasing penetration from 20.31% to 29.76%.
In today’s market each and every firm sells identical products differentiated by brands. The sole aim of every company is to promote its brands using different strategies in order to earn revenue and occupy a major market share. According to the “consumer reports”, a person encounters approximately 247 images per day but probably does not notice even half of them, nor gets exposed to them. This means that mere proximity or visibility of the message is not sufficient for the customer to notice it. Since it is not possible for the human brain to process so many messages at once, the viewer often finds it difficult to decode the message communicated by the company, and thus the purpose of the whole communication goes in vain and is wasted.
With foreign companies, new technologies and consumers’ preference for innovative products, electronic products increasingly come with internet facilities, and internet usage keeps rising; organisations are therefore also using the internet to reduce their workload. Organisations are far heavier users of internet services than individual consumers, so there is a need for Internet Service Providers (ISPs) to capture this market.
Clustering Prediction Techniques in Defining and Predicting Customers Defecti...IJECEIAES
With the growth of the e-commerce sector, customers have more choices, which encourages them to divide their purchases among several e-commerce sites and compare competitors’ products, yet this sharply raises the risk of churn. A review of the literature on customer churn models reveals that no prior research had considered both partial and total defection in non-contractual online environments; instead, studies focused on either total or partial defection. This study proposes a customer churn prediction model in an e-commerce context, wherein a clustering phase is based on integrating the k-means method with the Length-Recency-Frequency-Monetary (LRFM) model. This phase is employed to define churn, followed by a multi-class prediction phase based on three classification techniques: simple decision tree, artificial neural networks and decision tree ensemble, in which the dependent variable classifies a particular customer as continuing loyal buying patterns (non-churned), a partial defector (partially churned), or a total defector (totally churned). Macro-averaging measures, including average accuracy and macro-averages of precision, recall, and F1, are used to evaluate classifier performance under 10-fold cross-validation. Using real data from an online store, the results show the efficiency of the decision tree ensemble model over the other models in identifying both future partial and total defection.
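The clustering phase of such a model groups customers by LRFM features. A minimal k-means (Lloyd's algorithm) sketch over toy LRFM rows; the data and the choice of two clusters are assumptions, and a real study would standardize features and select k with a validity index:

```python
# Minimal k-means over LRFM-style vectors (Length, Recency, Frequency,
# Monetary). Assignment by squared Euclidean distance, update by mean.

def kmeans(points, centroids, iters=10):
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        # Update step: move each centroid to its cluster mean.
        centroids = [
            tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else c
            for cl, c in zip(clusters, centroids)
        ]
    return centroids, clusters

# Toy LRFM rows: (length, recency, frequency, monetary) — two rough groups,
# loyal high-spenders vs. near-defectors.
customers = [(24, 5, 40, 900), (30, 3, 50, 1100), (2, 60, 2, 30), (3, 55, 1, 20)]
cents, groups = kmeans(customers, centroids=[customers[0], customers[2]])
print(len(groups[0]), len(groups[1]))  # prints 2 2
```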
This document proposes advanced data analytics as the key solution for building intimate knowledge about our customers’ behaviour, preferences and aspirations; an essential requirement for maximizing revenue in our current competitive environment.
Churn in the Telecommunications Industryskewdlogix
Strategic Business Analysis Capstone Project Telecommunications Churn Management
Churn is a significant problem that costs telecommunications companies billions of dollars in lost revenue. Now that the market is mature, the only way for a company to grow is to take its competitors’ customers. This issue, combined with the greater choice consumers have gained, means that any adverse touch point with a consumer can result in a lost customer.
How can you apply this process to a company, product, and target market you are aware of?
1.2 The Marketing Management Process
Marketing management is a process that is intended to facilitate transactions by bringing buyers
and sellers together. Consistent with the marketing concept, the ultimate goal of the process is to
create exchanges that satisfy both company and customer.
As illustrated in Figure 1.2, the process of marketing management from the seller's perspective can
be characterized as a series of four stages of decision making: situation analysis, marketing
strategy, marketing mix decisions, and implementation and control.
Figure 1.2: Marketing management process
Each of these stages is described in greater detail in the sections that follow. Before proceeding,
however, it is important to keep two features of the model in mind. The purpose of the model is to
provide a measure of discipline to the process of marketing management to improve the quality of
managers' decisions. Its value lies in making sure that the decision maker is deliberate, thorough,
and systematic in the planning and execution of marketing strategy. An important consideration
when evaluating the model is that it is not simply a linear recipe card for decision making. It is
intended to provide an aid to assessing the goodness of fit between marketing problems and
alternative solutions. As such, it is not a substitute for thinking. The model can only be as useful,
flexible, and dynamic as the user makes it.
Stage I: Situation Analysis
In many instances, corporate, division, and business unit level goals and strategic priorities will
shape and direct the process of marketing management from the outset. Given those constraints,
the first step of the process is to undertake a thorough analysis of the current situation and
environment confronting the organization. Situation analysis is at the heart of marketing's endeavor
to identify new opportunities to satisfy unmet customer wants and needs. Opportunities typically
stem either from finding new ways to serve the needs of existing customers or uncovering new
markets for existing product or service lines. Many new opportunities incorporate elements of both
new products and new markets. Product-related opportunities for a regional hospital, for example,
might include the addition of alternative therapies (e.g., acupuncture) or creating satellite wellness
or express-care centers in local shopping centers and malls. The addition of a new service line in
sports medicine and rehabilitation care might be one way to reach a new segment of the market.
CVS and many other companies are meeting shoppers'
needs conv.
Prepaid customer segmentation in telecommunications: An overview of common pr...Exacaster
There are a number of frustrating factors for marketers who work with prepaid customers in
telecommunications. This Exacaster white paper summarizes the pros and cons of common segmentation strategies in prepaid markets.
Industrial Engineering - a healthcare supply chain model built with operations research techniques. Unfortunately, the world is not deterministic enough to support these methods; an actual model trial is planned for the next stage.
AHP Based Data Mining for Customer Segmentation Based on Customer Lifetime ValueIIRindia
Data mining techniques are widely used in various areas of marketing management for extracting useful information. Particularly in a business-to-customer (B2C) setting, they play an important role in customer segmentation. With this information, a retailer not only tries to improve its relationship with its customers, but also enhances its business in a manufacturer-retailer-consumer chain. Although there are various approaches to customer segmentation, we use an analytic-hierarchy-process-based data mining technique. Customers are segmented into six clusters based on the Davies-Bouldin (DB) index and the K-Means algorithm. Customer lifetime value (CLV) along four dimensions, viz. Length (L), Recency (R), Frequency (F) and Monetary value (M), is considered for these clusters. Then we apply Saaty’s analytical hierarchy process (AHP) to determine the weights of these criteria, which in turn helps compute the CLV value for each cluster and their individual rankings. This information is quite important for a retailer designing promotional strategies to improve the relationship between the retailer and its customers. To demonstrate the effectiveness of this methodology, we implemented the model on a real-life customer database of an organization in the Indian retail industry.
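The AHP weighting step referred to above can be sketched with the common column-normalization approximation of Saaty's principal-eigenvector method; the pairwise judgments below for (L, R, F, M) are purely illustrative assumptions:

```python
# Approximate AHP criterion weights: normalize each column of the pairwise
# comparison matrix, then average across each row. The resulting weights
# always sum to 1 and can score CLV per cluster as a weighted LRFM sum.

def ahp_weights(pairwise):
    n = len(pairwise)
    col_sums = [sum(row[j] for row in pairwise) for j in range(n)]
    return [sum(pairwise[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]

# Illustrative judgments for (L, R, F, M): e.g. M judged 3x as important as L.
matrix = [
    [1,   2,   1/2, 1/3],
    [1/2, 1,   1/3, 1/4],
    [2,   3,   1,   1/2],
    [3,   4,   2,   1  ],
]
w = ahp_weights(matrix)
print([round(x, 3) for x in w])  # Monetary gets the largest weight here
```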
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
Customer churn occurs when customers or subscribers stop doing business with a company or service.
Also known as customer attrition, customer churn is a critical metric because it is much less expensive to retain existing customers than to acquire new ones: earning business from new customers means working leads all the way through the sales funnel, consuming marketing and sales resources throughout the process.
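The metric itself is a simple ratio, sketched below with invented figures:

```python
# Churn (attrition) rate over a period: customers lost during the period
# divided by customers at the start of the period.

def churn_rate(customers_start, customers_lost):
    return customers_lost / customers_start

# 50 of 1,000 subscribers cancelled this month -> 5.0% monthly churn.
print(f"{churn_rate(1000, 50):.1%}")  # prints 5.0%
```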
Canback and D'Agnese - Where in the World Is the Market?Tellusant, Inc.
This article by our executive chairman, Staffan Canback, describes how to analyze global markets.
Finding, measuring and capturing market opportunities in emerging countries are critical tasks for multinational consumer goods companies. Central to these tasks is the need to collect and analyze income distribution data within a globally coherent framework and to move beyond income metrics based on national averages.
The article describes a new framework and dataset that achieves this goal and demonstrates how income distribution data, combined with consumer and marketing data, can be incorporated into simple demand models such as the Bass diffusion model or the Golder-Tellis affordability model to understand market dynamics. Our analytical effort is the first example of income distribution data being used to assess market opportunities in emerging countries.
We find that demand models based on the number of people within various income brackets at national or local levels are superior to models based on average income. We further find that combining income distribution data with pricing,
marketing spending, consumer behavior and distribution coverage data makes it possible to measure which factors drive demand at the brand level — even in hard-to-analyze countries.
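The Bass diffusion model mentioned above can be sketched in discrete time, with market potential m restricted to the population inside a target income bracket as the article proposes; the coefficients p (innovation) and q (imitation) and the bracket size are illustrative assumptions:

```python
# Discrete-time Bass diffusion: new adopters in each period are
# (p + q * N/m) * (m - N), where N is cumulative adoption so far.

def bass_adoption(m, p, q, periods):
    cumulative, path = 0.0, []
    for _ in range(periods):
        new = (p + q * cumulative / m) * (m - cumulative)
        cumulative += new
        path.append(cumulative)
    return path

# Market potential: 1 million people in the target income bracket.
path = bass_adoption(m=1_000_000, p=0.03, q=0.38, periods=10)
print([round(x) for x in path[:3]])
```

Cumulative adoption rises toward, but never exceeds, the income-bracket population m; this is why bracket-level population counts matter more than average income.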
Making Analytics Actionable for Financial Institutions (Part I of III)Cognizant
To maximize ROI from their analytics platforms, financial institutions must build solutions that explicitly, visibly and sustainably enable real-time translation of data into meaningful and continuous improvements in their products, services, operating models and supporting infrastructures.
Making Analytics Actionable for Financial Institutions (Part II of III)Cognizant
To identify meaningful use cases for analytics-driven banking and financial services solutions, organizations need a thorough understanding of how customer interactions align with context and anticipate needs, while simplifying the decision-making process.
9Assessing Market Opportunities and Targeting Market Seg.docxfredharris32
9 Assessing Market Opportunities and Targeting Market Segments
The aim of marketing is to know and understand the customer so well
the product or service fits him and sells itself.
—Peter Drucker
Learning Objectives
After reading this chapter, you should be able to do the following:
• Delineate the importance of performing a market opportunity analysis, and explain the process of assessing market opportunities.
• Identify the four activities involved in completing a market demand analysis, and discuss commonly used
bases for market segmentation.
• Explain the use of three methods for measuring market potential.
• Discuss the substeps of the market segmentation and target marketing phases and the steps involved in the
market segmentation process.
Section 9.1Market Opportunity Analysis
Introduction
This chapter focuses on the details of identifying market opportunities, evaluating these opportunities, and then deciding whether to pursue an opportunity. The careful analysis of a marketing opportunity not only helps the organization grow by pursuing feasible opportunities, it also helps the organization avoid the costly mistake of pursuing an opportunity that is not really viable, or one for which the internal resources are insufficient for its sustainment over the long run.
9.1 Market Opportunity Analysis
Market opportunity analysis is the process of defining the exact nature of the opportunities
available in an organization’s operating environment in terms of external, financial, and internal
considerations. Figure 9.1 presents an overview of this process in terms of the steps involved
in the analysis.
As this diagram depicts, opportunity analysis is a comprehensive analysis of all aspects of
an alternative before decisions are made to pursue it. The results of such an analysis put the
decision-maker in a position of having a strong database from which to choose among the
various alternatives present in the environment in line with financial and internal considerations
that are specified by management.
The analysis begins with a detailed study of the environment in which the proposed business would operate. This includes not only the political, legal, economic, social, cultural, and technological environments, but also market size, growth trends, and consumers’ attitudes and behavior. It also involves a study of current and potential competitors who may be going after the same customers the organization proposes to attract. These factors are external to the organization or person contemplating the new venture; therefore, a thorough analysis of these factors requires a great deal of diligence. The analysis usually involves a substantial commitment of time and money to collect the necessary information. The tools used in the opportunity assessment process are illustrated in Figure 9.1.
If the environmental analysis indicates that these factors are favorable to the potential bus.
Similar to New Market Segmentation Methods Using Enhanced (RFM), CLV, Modified Regression and Clustering Methods (20)
In the era of data-driven warfare, the integration of big data and machine learning (ML) techniques has
become paramount for enhancing defence capabilities. This research report delves into the applications of
big data and ML in the defence sector, exploring their potential to revolutionize intelligence gathering,
strategic decision-making, and operational efficiency. By leveraging vast amounts of data and advanced
algorithms, these technologies offer unprecedented opportunities for threat detection, predictive analysis,
and optimized resource allocation. However, their adoption also raises critical concerns regarding data
privacy, ethical implications, and the potential for misuse. This report aims to provide a comprehensive
understanding of the current state of big data and ML in defence, while examining the challenges and
ethical considerations that must be addressed to ensure responsible and effective implementation.
Cloud Computing, being one of the most recent innovative developments of the IT world, has been
instrumental not just to the success of SMEs but, through their productivity and innovative contribution to
the economy, has even made a remarkable contribution to the economic growth of the United States. To
this end, the study focuses on how cloud computing technology has impacted economic growth through
SMEs in the United States. Relevant literature connected to the variables of interest in this study was
reviewed, and secondary data was generated and utilized in the analysis section of this paper. The findings
of this paper revealed that there have been meaningful contributions that the usage of virtualization has
made in the commercial dealings of small firms in the United States, and this has also been reflected in the
economic growth of the country. This paper further revealed that as important as cloud-based software is,
some SMEs are still skeptical about how it can help improve their business and increase their bottom line
and hence have failed to adopt it. Apart from the SMEs, some notable large firms in different industries,
including information and educational services, have adopted cloud computing technology and hence
contributed to the economic growth of the United States. Lastly, findings from our inferential statistics
revealed that no discernible change has occurred in innovation between small and big businesses in the
adoption of cloud computing. Both categories of businesses adopt cloud computing in the same way, and
their contribution to the American economy has no significant difference in the usage of virtualization.
Energy-constrained Wireless Sensor Networks (WSNs) have garnered significant research interest in
recent years. Multiple-Input Multiple-Output (MIMO), or Cooperative MIMO, represents a specialized
application of MIMO technology within WSNs. This approach operates effectively, especially in
challenging and resource-constrained environments. By facilitating collaboration among sensor nodes,
Cooperative MIMO enhances reliability, coverage, and energy efficiency in WSN deployments.
Consequently, MIMO finds application in diverse WSN scenarios, spanning environmental monitoring,
industrial automation, and healthcare applications.
The AIRCC's International Journal of Computer Science and Information Technology (IJCSIT) is devoted to the fields of Computer Science and Information Systems. IJCSIT is an open-access, peer-reviewed scientific journal published in electronic as well as print form. The mission of the journal is to publish original contributions in its field in order to propagate knowledge amongst its readers and to be a reference publication. IJCSIT publishes original research papers and review papers, as well as auxiliary material such as case studies and technical reports.
As the number of car users grows, so does demand for car parking. With the increased use of smartphones and their applications, users prefer mobile-phone-based solutions. This paper proposes a Smart Parking Management System (SPMS) based on Arduino parts, Android applications, and IoT. It gives the client the ability to check available parking spaces and reserve a parking spot. IR sensors are used to detect whether a parking space is occupied. The occupancy data are transmitted via a Wi-Fi module to the server and retrieved by the mobile application, which offers many options attractively and at no cost to users, and lets the user check reservation details. With IoT technology, the smart parking system can be connected wirelessly to easily track available locations.
Welcome to AIRCC's International Journal of Computer Science and Information Technology (IJCSIT), your gateway to the latest advancements in the dynamic fields of Computer Science and Information Systems.
Computer-Assisted Language Learning (CALL) systems are computer-based tutoring systems that deal with linguistic skills. Adding intelligence to such systems mainly relies on Natural Language Processing (NLP) tools to diagnose student errors, especially in grammar. However, most such systems do not model student competence in linguistic skills, especially for the Arabic language. In this paper, we deal with basic grammar concepts of the Arabic language taught in the fourth grade of elementary school in Egypt, through the Arabic Grammar Trainer (AGTrainer), an intelligent CALL system. The implemented system trains students through questions that address different concepts at different difficulty levels. Constraint-based student modeling (CBSM) is used as a short-term student model; it defines the different grammar skills at a fine-grained level through the defined skill structures. The main contribution of this paper is the hierarchical representation of the system's basic grammar skills as domain knowledge. That representation is used as a mechanism for efficiently checking constraints to model student knowledge, diagnose student errors and identify their causes. In addition, satisfied constraints, the number of trials the student takes to answer each question, and a fuzzy-logic decision system are used to determine the student's learning level for each lesson as a long-term model. The evaluation results showed the system's effectiveness for learning, as well as the satisfaction of students and teachers with its features and abilities.
In the realm of computer security, the importance of efficient and reliable user authentication methods has
become increasingly critical. This paper examines the potential of mouse movement dynamics as a
consistent metric for continuous authentication. By analysing user mouse movement patterns in two
contrasting gaming scenarios, "Team Fortress" and "Poly Bridge," we investigate the distinctive
behavioral patterns inherent in high-intensity and low-intensity UI interactions. The study extends beyond
conventional methodologies by employing a range of machine learning models. These models are carefully
selected to assess their effectiveness in capturing and interpreting the subtleties of user behavior as
reflected in their mouse movements. This multifaceted approach allows for a more nuanced and
comprehensive understanding of user interaction patterns. Our findings reveal that mouse movement
dynamics can serve as a reliable indicator for continuous user authentication. The diverse machine
learning models employed in this study demonstrate competent performance in user verification, marking
an improvement over previous methods used in this field. This research contributes to the ongoing efforts to
enhance computer security and highlights the potential of leveraging user behavior, specifically mouse
dynamics, in developing robust authentication systems.
Image segmentation and classification tasks in computer vision have proven to be highly effective using neural networks, specifically Convolutional Neural Networks (CNNs). These tasks have numerous practical applications, such as medical imaging, autonomous driving, and surveillance. CNNs can learn complex features directly from images and achieve outstanding performance across several datasets. In this work, we used three different datasets to investigate the efficacy of various pre-processing and classification techniques in accurately segmenting and classifying structures within MRI and natural images. We used both sample-gradient and Canny edge detection methods for pre-processing, and K-means clustering was applied to segment the images. Image augmentation improves the size and diversity of datasets for training the models for image classification.
This research aims to further understanding in the field of continuous authentication using behavioural
biometrics. We are contributing a novel dataset that encompasses the gesture data of 15 users playing
Minecraft with a Samsung Tablet, each for a duration of 15 minutes. Utilizing this dataset, we employed
machine learning (ML) binary classifiers, being Random Forest (RF), K-Nearest Neighbors (KNN), and
Support Vector Classifier (SVC), to determine the authenticity of specific user actions. Our most robust
model was SVC, which achieved an average accuracy of approximately 90%, demonstrating that touch
dynamics can effectively distinguish users. However, further studies are needed to make it a viable option
for authentication systems. You can access our dataset at the following
link: https://github.com/AuthenTech2023/authentech-repo
This paper discusses the capabilities and limitations of GPT-3 (0), a state-of-the-art language model, in the
context of text understanding. We begin by describing the architecture and training process of GPT-3, and
provide an overview of its impressive performance across a wide range of natural language processing
tasks, such as language translation, question-answering, and text completion. Throughout this research
project, a summarizing tool was also created to help us retrieve content from any types of document,
specifically IELTS (0) Reading Test data in this project. We also aimed to improve the accuracy of the
summarizing, as well as question-answering capabilities of GPT-3 (0) via long text
In the realm of computer security, the importance of efficient and reliable user authentication methods has
become increasingly critical. This paper examines the potential of mouse movement dynamics as a
consistent metric for continuous authentication. By analysing user mouse movement patterns in two
contrasting gaming scenarios, "Team Fortress" and "Poly Bridge," we investigate the distinctive
behavioral patterns inherent in high-intensity and low-intensity UI interactions. The study extends beyond
conventional methodologies by employing a range of machine learning models. These models are carefully
selected to assess their effectiveness in capturing and interpreting the subtleties of user behavior as
reflected in their mouse movements. This multifaceted approach allows for a more nuanced and
comprehensive understanding of user interaction patterns. Our findings reveal that mouse movement
dynamics can serve as a reliable indicator for continuous user authentication. The diverse machine
learning models employed in this study demonstrate competent performance in user verification, marking
an improvement over previous methods used in this field. This research contributes to the ongoing efforts to
enhance computer security and highlights the potential of leveraging user behavior, specifically mouse
dynamics, in developing robust authentication systems.
Image segmentation and classification tasks in computer vision have proven to be highly effective using neural networks, specifically Convolutional Neural Networks (CNNs). These tasks have numerous
practical applications, such as in medical imaging, autonomous driving, and surveillance. CNNs are capable
of learning complex features directly from images and achieving outstanding performance across several
datasets. In this work, we have utilized three different datasets to investigate the efficacy of various preprocessing and classification techniques in accurssedately segmenting and classifying different structures
within the MRI and natural images. We have utilized both sample gradient and Canny Edge Detection
methods for pre-processing, and K-means clustering have been applied to segment the images. Image
augmentation improves the size and diversity of datasets for training the models for image classification.
This work highlights transfer learning’s effectiveness in image classification using CNNs and VGG 16 that
provides insights into the selection of pre-trained models and hyper parameters for optimal performance.
We have proposed a comprehensive approach for image segmentation and classification, incorporating preprocessing techniques, the K-means algorithm for segmentation, and employing deep learning models such
as CNN and VGG 16 for classification.
The security of Electric Vehicle (EV) charging has gained momentum after the increase in the EV adoption
in the past few years. Mobile applications have been integrated into EV charging systems that mainly use a
cloud-based platform to host their services and data. Like many complex systems, cloud systems are
susceptible to cyberattacks if proper measures are not taken by the organization to secure them. In this
paper, we explore the security of key components in the EV charging infrastructure, including the mobile
application and its cloud service. We conducted an experiment that initiated a Man in the Middle attack
between an EV app and its cloud services. Our results showed that it is possible to launch attacks against
the connected infrastructure by taking advantage of vulnerabilities that may have substantial economic and
operational ramifications on the EV charging ecosystem. We conclude by providing mitigation suggestions
and future research directions.
The AIRCC's International Journal of Computer Science and Information Technology (IJCSIT) is devoted to fields of Computer Science and Information Systems. The IJCSIT is a open access peer-reviewed scientific journal published in electronic form as well as print form. The mission of this journal is to publish original contributions in its field in order to propagate knowledge amongst its readers and to be a reference publication.
The AIRCC's International Journal of Computer Science and Information Technology (IJCSIT) is devoted to fields of Computer Science and Information Systems. The IJCSIT is a open access peer-reviewed scientific journal published in electronic form as well as print form. The mission of this journal is to publish original contributions in its field in order to propagate knowledge amongst its readers and to be a reference publication.
This paper describes the outcome of an attempt to implement the same transitive closure (TC) algorithm
for Apache MapReduce running on different Apache Hadoop distributions. Apache MapReduce is a
software framework used with Apache Hadoop, which has become the de facto standard platform for
processing and storing large amounts of data in a distributed computing environment. The research
presented here focuses on the variations observed among the results of an efficient iterative transitive
closure algorithm when run against different distributed environments. The results from these comparisons
were validated against the benchmark results from OYSTER, an open source Entity Resolution system. The
experiment results highlighted the inconsistencies that can occur when using the same codebase with
different implementations of Map Reduce.
Unit 8 - Information and Communication Technology (Paper I).pdfThiyagu K
This slides describes the basic concepts of ICT, basics of Email, Emerging Technology and Digital Initiatives in Education. This presentations aligns with the UGC Paper I syllabus.
2024.06.01 Introducing a competency framework for languag learning materials ...Sandy Millin
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
Delivering Micro-Credentials in Technical and Vocational Education and TrainingAG2 Design
Explore how micro-credentials are transforming Technical and Vocational Education and Training (TVET) with this comprehensive slide deck. Discover what micro-credentials are, their importance in TVET, the advantages they offer, and the insights from industry experts. Additionally, learn about the top software applications available for creating and managing micro-credentials. This presentation also includes valuable resources and a discussion on the future of these specialised certifications.
For more detailed information on delivering micro-credentials in TVET, visit this https://tvettrainer.com/delivering-micro-credentials-in-tvet/
Biological screening of herbal drugs: Introduction and Need for
Phyto-Pharmacological Screening, New Strategies for evaluating
Natural Products, In vitro evaluation techniques for Antioxidants, Antimicrobial and Anticancer drugs. In vivo evaluation techniques
for Anti-inflammatory, Antiulcer, Anticancer, Wound healing, Antidiabetic, Hepatoprotective, Cardio protective, Diuretics and
Antifertility, Toxicity studies as per OECD guidelines
A workshop hosted by the South African Journal of Science aimed at postgraduate students and early career researchers with little or no experience in writing and publishing journal articles.
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...Levi Shapiro
Letter from the Congress of the United States regarding Anti-Semitism sent June 3rd to MIT President Sally Kornbluth, MIT Corp Chair, Mark Gorenberg
Dear Dr. Kornbluth and Mr. Gorenberg,
The US House of Representatives is deeply concerned by ongoing and pervasive acts of antisemitic
harassment and intimidation at the Massachusetts Institute of Technology (MIT). Failing to act decisively to ensure a safe learning environment for all students would be a grave dereliction of your responsibilities as President of MIT and Chair of the MIT Corporation.
This Congress will not stand idly by and allow an environment hostile to Jewish students to persist. The House believes that your institution is in violation of Title VI of the Civil Rights Act, and the inability or
unwillingness to rectify this violation through action requires accountability.
Postsecondary education is a unique opportunity for students to learn and have their ideas and beliefs challenged. However, universities receiving hundreds of millions of federal funds annually have denied
students that opportunity and have been hijacked to become venues for the promotion of terrorism, antisemitic harassment and intimidation, unlawful encampments, and in some cases, assaults and riots.
The House of Representatives will not countenance the use of federal funds to indoctrinate students into hateful, antisemitic, anti-American supporters of terrorism. Investigations into campus antisemitism by the Committee on Education and the Workforce and the Committee on Ways and Means have been expanded into a Congress-wide probe across all relevant jurisdictions to address this national crisis. The undersigned Committees will conduct oversight into the use of federal funds at MIT and its learning environment under authorities granted to each Committee.
• The Committee on Education and the Workforce has been investigating your institution since December 7, 2023. The Committee has broad jurisdiction over postsecondary education, including its compliance with Title VI of the Civil Rights Act, campus safety concerns over disruptions to the learning environment, and the awarding of federal student aid under the Higher Education Act.
• The Committee on Oversight and Accountability is investigating the sources of funding and other support flowing to groups espousing pro-Hamas propaganda and engaged in antisemitic harassment and intimidation of students. The Committee on Oversight and Accountability is the principal oversight committee of the US House of Representatives and has broad authority to investigate “any matter” at “any time” under House Rule X.
• The Committee on Ways and Means has been investigating several universities since November 15, 2023, when the Committee held a hearing entitled From Ivory Towers to Dark Corners: Investigating the Nexus Between Antisemitism, Tax-Exempt Universities, and Terror Financing. The Committee followed the hearing with letters to those institutions on January 10, 202
Executive Directors Chat Leveraging AI for Diversity, Equity, and InclusionTechSoup
Let’s explore the intersection of technology and equity in the final session of our DEI series. Discover how AI tools, like ChatGPT, can be used to support and enhance your nonprofit's DEI initiatives. Participants will gain insights into practical AI applications and get tips for leveraging technology to advance their DEI goals.
Acetabularia Information For Class 9 .docxvaibhavrinwa19
Acetabularia acetabulum is a single-celled green alga that in its vegetative state is morphologically differentiated into a basal rhizoid and an axially elongated stalk, which bears whorls of branching hairs. The single diploid nucleus resides in the rhizoid.
International Journal of Computer Science & Information Technology (IJCSIT) Vol 11, No 1, February 2019
DOI: 10.5121/ijcsit.2019.11104
NEW MARKET SEGMENTATION METHODS USING
ENHANCED (RFM), CLV, MODIFIED REGRESSION
AND CLUSTERING METHODS
Fahed Yoseph1
and Mohammad AlMalaily2
1
School of Business and Economics, Åbo Akademi University
2
Faculty of Engineering, Mansoura University
ABSTRACT
A widely used approach for gaining insight into the heterogeneity of consumers' buying behavior is market
segmentation. Conventional market segmentation models often ignore the fact that consumer behavior
may evolve over time; as a result, retailers spend their limited resources attempting to serve unprofitable
consumers. This study looks into the integration of enhanced Recency, Frequency, Monetary (RFM)
scores with the Consumer Lifetime Value (CLV) matrix for a medium-sized retailer in the State of Kuwait. A
modified regression algorithm investigates consumer purchase trends, gaining knowledge from a point-
of-sales data warehouse. In addition, this study applies an enhanced normal distribution formula to remove
outliers, followed by the soft clustering Fuzzy C-Means and hard clustering Expectation Maximization (EM)
algorithms, to analyze consumer buying behavior. Cluster quality assessment shows that the EM
algorithm scales much better than the Fuzzy C-Means algorithm, owing to its ability to assign good initial
points in the smaller dataset.
KEYWORDS
Segmentation, Clustering, RFM model, Retailing
1. INTRODUCTION
Mass marketing strategies are mainly based on the opinions of marketing experts and sales
managers [17]. For example, Gholamian [22] states that the retail industry is highly
competitive, with the number of products often overwhelming. Consumers are faced with a wide
variety of products, causing demand to become higher and more complex. In light of this trend,
modern marketing has moved from mass marketing (product focus) to target marketing (consumer
focus). In response, small and medium-sized retailers (SMR) segment their markets to be more
strategic in their planning, and to design and implement successful marketing strategies and retention
policies. In the fast-changing retail industry, there is a clear need for advanced methods to
discover market segments from sales and other data. Market segmentation empowers
retailers to reach consumers with specific needs and wants precisely, by dividing the market into
similar and identifiable segments, so as to focus on individuals with similar preferences, choices, needs
and interests on a common platform [24], [27]. Segmentation evaluates consumers indirectly, as a
segment, rather than individually or directly. It enables retailers to make full use of their limited
resources to serve consumers effectively as consumer sub-groups [10]. Proper mechanisms for
treating point-of-sales (POS) events convert ever-increasing transaction data into knowledge [34].
To take on this challenge, this study introduces an enhanced normal distribution formula to
eliminate outliers in the dataset, followed by two of the most popular techniques of
market segmentation: Recency, Frequency, Monetary (RFM) scores combined with Consumer Lifetime
Value (CLV). RFM, according to Birant [4], distinguishes important consumers and their purchase behavior
through three variables: the consumer's most recent purchase (R), the frequency of purchases (F), and the
money spent (M). These variables and appropriate feature weights are used to calculate an RFM score,
which is a key figure in market segmentation. CLV is quite different in that it is a quantitative measurement
of the amount of sales a consumer is expected to commit to a retailer over their total time [14]. CLV is
paired with RFM to predict the future cash flows attributed to the consumer during his or her entire
lifetime with the retailer, before he or she churns [35].
Table 1. Market Segmentation Bases
Fig.1: The customer value matrix
2. EMPIRICAL CASE STUDY
A Kuwaiti medium-sized retail chain with 25 domestic branches has grown a changing set of
consumers. The consumer base is now diverse, and owing to the demographic changes inside
Kuwait it is no longer possible to define consumer profiles based on the known, previous
consumption patterns and a traditional marketing strategy.
POS data from this medium-sized retailer in Kuwait has the benefit that it is usually generated
and stored in a structured way and is relatively easy to aggregate to the consumer level. This
characteristic of POS data makes it useful for analyses that require (more or less) complete data,
such as the Fuzzy C-Means and EM clustering algorithms.
Our objective is to design an advanced segmentation method, construct a system utilizing the
method, and test this system (the artefact) by analyzing the results it produces. The
idea is to show how the market segmentation process can be improved with a hybrid approach that
utilizes both regression and clustering as steps in the analysis of a POS data warehouse, and to
show that with an appropriate design more usable results can be produced. More specifically, we
answer the following study questions:
1. Can a method with RFM, CLV, normal distribution, regression and clustering steps help
discover hidden patterns in consumer purchase behavior and segment the SMR
customer base?
2. Can the proposed method help map the entire consumer journey?
The sections that follow elaborate on this by reviewing the appropriate literature in section 2,
developing the modeling system in section 3, and analyzing the results in
section 4. Conclusions follow in section 5 with answers to the aforementioned study
questions.
2.1 MARKET SEGMENTATION
The US Small Business Administration has traditionally defined small to medium-sized retailers
as businesses employing fewer than 500 employees [34]. According to the European Commission,
the SMR industry forms the backbone of the economy, and its members are key players in the creation of
new jobs and economic growth [19]. Historically, small retailers have had the privilege of
developing close and mutually beneficial relationships with their consumers; even so, keeping existing
consumers and reaching new markets is a major challenge for the retailer [16], [20]. These
relationships were possible because consumers' buying behavior did not change much, and
price was less of an issue due to less competition [6]. However, recent economic and social
changes have transformed the retail industry; in particular, the relationship between the retailer and
consumers has changed significantly. As a result, retailers have been forced to seek new marketing
strategies to identify the profitable segments of consumers, to develop marketing mixes that appeal
to those potential segments, and to focus on providing value to the key segments of
consumers [16], [20].
2.1.1 ENHANCED NORMAL DISTRIBUTION
To answer the second question in this study, the dataset has to be normally distributed. We
observed some potential outliers in our dataset. After close examination, we discovered that some
consumers made very low purchases (5 KD) or extremely high purchases (10,000 KD), whereas
the average purchase power is around 330 KD. Such points deviate from the true consumer average
purchase value. Outliers are observation points that lie far from the normal range. To make
the scales of our graphs more realistic, introducing a new method to remove the outliers in the
dataset is essential. The standard deviation is a common means of identifying outliers, and its
symmetry makes it an attractive choice for our model. The normal distribution is considered the most
important probability distribution in statistics because it fits many natural phenomena, and most
of the data values in a normal distribution tend to cluster around the mean. The further a data
point is from the mean, the less likely it is to occur (Schafer, J. L. 1997). The normal distribution
has two parameters, usually denoted by µ and σ², which are its mean and variance (Altman,
1995).
The proposed enhanced normal distribution formula allows the client either to fully eliminate
outliers or to keep a percentage of outliers in the retrieved dataset. The formula is built on
variables that can be changed according to the client's wish to keep or remove outliers, as
outliers can indicate either genuine variance in the dataset or a mistake made during the data
collection phase. Our approach is to eliminate outlier points by removing any points far above the mean;
however, we are not in a position to decide whether they are important or not. To be more practical, in
this study we introduce an enhanced normal distribution formula with an independent variable in the range
0 to 1 that allows the client to change it and compare outputs with different percentages of outliers
(a value of 1 gives the normal distribution, keeping the points closest to the mean with a small
percentage of outliers; reducing the value below 1 increases the probability of outliers in the extracted
data). After calculating the mean and standard deviation, the probability density
function Y1 of a normally distributed variable is given by
Y_1 = f(x) = \frac{1}{\sigma\sqrt{2\pi}} \, e^{-\frac{(x-\mu)^2}{2\sigma^2}} \quad \text{for } -\infty < x < \infty
The total area under the normal curve Y1 is equal to 1.
The coefficient \frac{1}{\sigma\sqrt{2\pi}} keeps the area under the curve equal to 1, but it also makes the
value at the mean point vary between 0 and 1, whereas the objective is to keep the value at the mean
point always equal to 1. Therefore, we remove the effect of the coefficient by multiplying the equation (Y_1)
by the coefficient to the power of -1.
Y_2 = \sigma\sqrt{2\pi} \, Y_1 = \sigma\sqrt{2\pi} \cdot \frac{1}{\sigma\sqrt{2\pi}} \, e^{-\frac{(x-\mu)^2}{2\sigma^2}} = e^{-\frac{(x-\mu)^2}{2\sigma^2}}
Y_2 is continuous for all values of x between -\infty and \infty, so that each conceivable interval of real
numbers has a probability other than zero.
As a final step, we further modify the equation by adding a variable v (where 0 < v < \infty) to
stretch Y_2 horizontally, so that most of the values close to the mean will be close to 1 and the
values far from the mean point will have a lower value:
Y_3 = Y_2^{1/v} = e^{-\frac{(x-\mu)^2}{2\sigma^2 v}}
Fig 2: Shows the 3 Phases in the Normal Distribution
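The three phases above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the score threshold below which a point is discarded, and the sample purchase amounts, are our own assumptions standing in for the client-tunable outlier percentage.

```python
import numpy as np

def y3_scores(x, v=1.0):
    """Y3 = exp(-(x - mu)^2 / (2 * sigma^2 * v)): points near the mean
    score close to 1, distant points score near 0. Larger v widens the
    curve, so fewer points are treated as outliers."""
    mu, sigma = np.mean(x), np.std(x)
    return np.exp(-((x - mu) ** 2) / (2 * sigma ** 2 * v))

def remove_outliers(x, v=1.0, threshold=0.1):
    """Keep points whose Y3 score exceeds the threshold (the threshold
    value is an assumption, not part of the paper's formula)."""
    x = np.asarray(x, dtype=float)
    return x[y3_scores(x, v) > threshold]

# Purchase amounts in KD, mimicking the 5 KD / 10,000 KD extremes
# mentioned in the paper (illustrative values only).
purchases = np.array([5, 250, 300, 330, 360, 410, 10000])
kept = remove_outliers(purchases)
```

With these values the 10,000 KD purchase scores far below the threshold and is removed, while the remaining points survive; lowering v or raising the threshold removes more points, mirroring the tunable variable described above.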
The literature has traditionally defined RFM analysis as the standard approach to assessing and
understanding consumer lifetime value, and it is quite popular, especially in the retail industry.
According to Tsiptsis and Chorianopoulos [33], RFM involves the calculation and examination
of three variables: Recency, Frequency, and Monetary (RFM). Recency refers to the inverse of
the interval from the time the latest consuming behavior happened to the present
moment. Frequency is the number of purchase events by the consumer in a period. Monetary is
simply the amount of money spent during the period. The RFM score is computed as the weighted
average of its individual components and is then scaled as

RFMPQ\ score_{scaled} = \frac{RFMPQ\ score - \min(RFMPQ\ score)}{\max(RFMPQ\ score) - \min(RFMPQ\ score)} \quad (1)

where rs = recency score and rw = recency weight, fs = frequency score and fw = frequency
weight, ms = monetary score and mw = monetary weight, ps = average purchase power (per
consumer) score and pw = average purchase power weight, and qs = average purchase power (per
product) score and qw = average purchase power weight.
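A small Python sketch of this scoring step. The weighted-sum combination and the default weights below are assumptions for illustration (the paper describes the score as a weighted average of its components but does not spell out the combination formula at this point); the min-max scaling follows equation (1):

```python
def rfmpq_score(rs, fs, ms, ps, qs,
                rw=1.0, fw=1.0, mw=1.0, pw=1.0, qw=1.0):
    """Weighted combination of the five component scores. The weighted-sum
    form and the default weights are illustrative assumptions."""
    return rs * rw + fs * fw + ms * mw + ps * pw + qs * qw

def min_max_scale(scores):
    """Equation (1): rescale raw RFMPQ scores to the [0, 1] range."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

# Raw component scores for three hypothetical consumers.
raw = [rfmpq_score(5, 3, 4, 2, 1),
       rfmpq_score(1, 1, 2, 1, 1),
       rfmpq_score(4, 5, 5, 3, 2)]
scaled = min_max_scale(raw)
```

The scaled scores place the lowest-scoring consumer at 0 and the highest at 1, which makes segments comparable regardless of the chosen weights.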
One limitation of RFM analysis in market segmentation is that the features are assumed static, and
they ignore behavioral changes. The recency parameter, however, indicates a momentary change,
but it only shows one static, transient event and cannot properly capture long-term dynamic
changes in consumer behavior. This is the reason for our proposal to apply a new dynamic variable
(C) to show the quantity and sign of change in consumer purchase behavior. Dwyer [14] defines
consumer lifetime value (CLV) as a quantitative measurement of the amount of sales the consumer
is expected to spend with a retailer over their lifetime. Furthermore, Safari [32] considers CLV as
the present value of all future profits obtained from a consumer over his or her lifetime relationship
with the retailer. To better utilize CLV in everyday decision making, Marcus [29] introduced the CLV
matrix as a variant of RFM analysis for small-business retailers. In the CLV matrix, F, the
frequency of purchase, and M, the average purchase amount, are used for segmenting
consumers. The ease of understanding the quadrant identifiers is considered its main advantage. In
Marcus' approach, the averages of the number of purchases and the amount spent
per consumer are calculated. After identifying these, each consumer is assigned to one of the
four resulting categories (quadrants) based on whether the consumer is above or below the axis
averages (Fig. 1).
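Marcus' quadrant assignment is straightforward to express in code. A sketch, assuming the four labels commonly used with the customer value matrix (best, spender, frequent, uncertain), which are not spelled out at this point of the paper:

```python
def clv_quadrant(freq, monetary, avg_freq, avg_monetary):
    """Assign a consumer to one of the four CLV-matrix quadrants by
    comparing purchase frequency and average purchase amount against
    the customer-base averages (the axes of Fig. 1). Labels are
    illustrative assumptions."""
    if freq >= avg_freq:
        return "best" if monetary >= avg_monetary else "frequent"
    return "spender" if monetary >= avg_monetary else "uncertain"

# Averages and consumer values are illustrative only.
segment = clv_quadrant(freq=12, monetary=480, avg_freq=6, avg_monetary=330)
```

A consumer who buys often and spends above average lands in the top-right quadrant; swapping either comparison moves the consumer to the corresponding neighboring quadrant.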
Table 2. Encoding Of The Age
2.1.2 DATA MINING
Modern information and communication technology generates massive amounts of data into
databases, data warehouses, and other repositories. Transforming insights about (big) data into
knowledge can help retailers make better business decisions [9]. Tufféry [34] sees data mining
as a powerful analytical tool for gaining insight into the retail industry. According to Azevedo [2],
data mining provides analyses of product sales and consumer buying habits, and identifies
naturally occurring clusters of behavior, which then form the basis of segments. Ramageri and
Desai [31] say that in the retail sector data mining offers insightful measures, taking into account
all the factors that affect the value of the consumer to the retailer over the entire course of the
consumer relationship [34].
Gunaseelan and Uma [21] state that the main aim of data mining is to discover valuable patterns
in a large collection of data. It can identify patterns, and apply data analysis and
discovery algorithms to produce a data mining model: a hypothesis about the data that key
executives can use to make better-informed decisions [40].
There are two primary data mining process goals: verification and discovery.
Verification means verifying the user's hypothesis about the data, while discovery is the automated
finding of unknown patterns [28].
2.1.3 CLUSTERING ALGORITHMS
According to Lefait and Kechadi [28], clustering consists of "creating groups of objects based on
their features in such a way that the objects belonging to the same groups are similar, and those
belonging to different groups are dissimilar." Clustering analysis is one of the most important and
prominent market segmentation techniques, and it has long been the dominant and preferred
method for market segmentation [38], [24]. For example, D'Urso [37] used the FCM method to
cluster potential Chinese travelers; the FCM method combines partitioning and hierarchical
clustering procedures. Khavand and Tarokh [24] proposed a data mining tool to prepare a
framework for segmenting consumers based on their estimated future CLV in an Iranian
private bank, and the method was implemented in a health and beauty company as well [26]. In
retail sales, clustering methods have been applied at least to groceries [30] and online retail [8], and
for identifying strategies for new ventures [7].
Mirkin [36] proposed a framework for partitional fuzzy clustering which suggests a model of how
the data are generated from the cluster structure to be identified. R. Suganya [41] extends the FCMP
framework to a number of clustering criteria and studies the FCMP properties when fitting the
underlying proposed model from which data are generated. Consumer segmentation has also been
used to help analyze transaction data, with Fuzzy C-Means for clustering and Fuzzy RFM to
identify the consumers who have high and low loyalty.
The Fuzzy C-Means algorithm (FCM) is one of the best known and most widely used fuzzy
clustering algorithms [41]. FCM allows each point to have a degree of membership with every cluster
center: each data point is given a value between 0 and 1 for each cluster, determining its
degree of belonging to that group. The performance of FCM clustering depends on the selection
of the initial cluster centers and the initial membership values [89]. FCM has a wide domain of
applications, such as agricultural engineering, astronomy, chemistry, geology, image analysis,
medical diagnosis, and target recognition [44].
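The membership and center updates just described can be sketched as follows. This is a textbook FCM iteration (random membership initialization, Euclidean distance, fixed iteration count), not necessarily the exact variant used in the study:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Minimal Fuzzy C-Means: every point holds a membership degree in
    [0, 1] for each of the c cluster centers; m > 1 is the fuzzifier."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-10)                      # guard against zero distance
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)   # standard FCM membership update
    return centers, U

# Two well-separated 1-D groups (illustrative feature values).
X = np.array([[1.0], [1.2], [0.9], [8.0], [8.3], [7.9]])
centers, U = fuzzy_c_means(X, c=2)
```

Each row of U sums to 1, and points between the two groups would receive intermediate memberships rather than a hard assignment, which is exactly what distinguishes FCM from K-Means.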
The Expectation-Maximization (EM) clustering algorithm [11] is closely related to the K-Means
algorithm. In this algorithm, two subsequent steps are iterated until there are no more changes in the
current hypothesis [23]. In the Expectation step (E-step), the probability that each observation is a
member of each of the chosen classes is calculated. The Maximization step (M-step) alters the
parameters of each class with the objective of maximizing those probabilities. The iteration is then
repeated until it converges to a (local) optimum.
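For a one-dimensional Gaussian mixture the two steps look like this. The sketch uses a deterministic quantile initialization and a fixed iteration count, both of which are our own simplifications rather than the paper's setup:

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=200):
    """Minimal EM for a 1-D Gaussian mixture. E-step: compute each
    point's responsibility for every component. M-step: re-estimate
    weights, means and variances from those responsibilities."""
    x = np.asarray(x, dtype=float)
    mu = np.quantile(x, np.linspace(0, 1, k))   # deterministic initial means
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities r[i, j] = P(component j | x_i)
        dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
               / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        var = np.maximum(var, 1e-6)             # avoid variance collapse
    return pi, mu, var

x = np.array([1.0, 1.1, 0.9, 1.2, 8.0, 8.1, 7.9, 8.2])
pi, mu, var = em_gmm_1d(x)
```

On this toy data the component means settle near the two group centers; in practice the quality of such initial points is what the cluster quality assessment later in the paper contrasts between EM and FCM.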
3. APPLIED METHODOLOGIES
This section presents the data mining techniques used in this study. The analysis process takes
four phases. The first phase focuses on data preprocessing, i.e., data cleansing, feature selection,
and data transformation. Regression and clustering algorithms are applied in the second and third
phases, respectively. The RFM model is enhanced to comprise six different variables (R, F, M, P, Q and C).
Among the proposed variables, P represents the average purchase power per consumer
over all transactions, and Q represents the average purchase power over all products
purchased per consumer. C is the Consumer Change Rate (a trend) that shows the quantity and
sign of the change in consumer purchase behavior; the enhanced regression algorithm finds this
trend. The output is then fed into the clustering algorithms together with the RFMPQ data. The Fuzzy
C-Means (FCM) and EM clustering algorithms are used for the segmentation. In the final phase, the
accuracy of the resulting partitions is measured using the cluster quality assessment introduced by
Drăghici [13].
The point-of-sales database consists of all product sales and shows that the client sells diverse
products like clothing, shoes, school supplies, and accessories. Transaction data from three years
are retrieved and extracted. Each transaction represents a purchase event, with each line consisting
of a transaction cashier, store code, item code, brand code, products (quantity) sold, unit price and
the total (quantity multiplied by the unit price), date and time of the purchase and information
about the consumer. Both active and inactive consumers are included in the study. This is vital to
enable the marketing team to develop appropriate marketing and retention campaigns.
International Journal of Computer Science & Information Technology (IJCSIT) Vol 11, No 1, February 2019
3.1 PHASE 1: DATA PRE-PROCESSING
To prepare for this stage, several interviews are conducted with marketing experts, sales
directors, the IT manager, in-store employees, and POS engineers. The interviews intend to maximize
variation in responses to gain a deep understanding of the challenges experienced at the client
company in general and, more specifically, to find information and company insights about the
retail industry, the market, and the consumer base.
3.1.1 DATA TRANSFORMATION USING ENHANCED RFM MODEL
Based on the client information, RFM values are assigned. The latest purchase date of the
consumer, R, is found from a set of 1095 days (records from 2013 to 2016); the number of
transactions during this 1095-day period, F, comes from a total of 11550 transactions; and the total
amount purchased, M, comes from total sales of 1,300,000 KD. The RFM attributes are
weighted using 10 categories. The data is extracted from a POS database and fed into a unified,
generic POS data warehouse in a common format with consistent definitions for keys and fields.
In this phase, the string variables must be converted to numeric variables. The missing values are
checked and deleted or replaced by default or mean values of each parameter.
The rest of the data preprocessing phase handles noisy data and missing values and performs
attribute reduction and transformation. In this step, the data must be transformed into an appropriate format
to make the discovery of patterns easier. Continuous consumer-related attributes are encoded by
discretizing the original values into a small number of value ranges. The age of the consumer is
encoded into six categories.
The Gender attribute is encoded as 1 for Male, 2 for Female and 3 for Companies. Furthermore,
demographic concepts like cities and countries are replaced by the higher-level concept nationality.
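To make the encoding step concrete, a small sketch is given below. The gender codes follow the text exactly, while the six age bin boundaries are hypothetical, since the paper does not list them.

```python
def encode_gender(gender):
    """Encode Gender as described in the text: 1 = Male, 2 = Female, 3 = Company."""
    return {"Male": 1, "Female": 2, "Company": 3}[gender]

def encode_age(age):
    """Discretize age into six categories. The bin edges below are
    hypothetical; the paper only states that six categories are used."""
    upper_bounds = [25, 35, 45, 55, 65]            # assumed boundaries
    for category, bound in enumerate(upper_bounds, start=1):
        if age < bound:
            return category
    return 6                                        # 65 and older
```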
For normalizing the RFMPQ score, the following rescaling method is used:

RFMPQ score_scaled = (RFMPQ score - min(RFMPQ score)) / (max(RFMPQ score) - min(RFMPQ score)). (2)
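A minimal sketch of the rescaling in Equation (2), assuming the RFMPQ scores are held in a NumPy array; the constant-score guard is an added safety check, not part of the paper's formula.

```python
import numpy as np

def min_max_scale(scores):
    """Rescale an array of RFMPQ scores to [0, 1] as in Equation (2)."""
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(), scores.max()
    if hi == lo:                       # all values equal: avoid division by zero
        return np.zeros_like(scores)
    return (scores - lo) / (hi - lo)

# Example: monetary scores for five consumers (illustrative values)
m = min_max_scale([120, 540, 80, 980, 310])
```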
3.1.2 PHASE 2: STEPS OF DATA TRANSFORMATION TO RFMC
As usual, the client company has limited resources and is not able to serve all consumers
with the same intensity. To manage the available resources more efficiently, the client wants to
distinguish between relatively low- and high-value shoppers, choose only profitable market segments, and
concentrate all efforts on the strategic value of these segments to increase profitability and
consumer retention. This study decided to incorporate RFMPQ scores with the CLV matrix before
running the modified regression algorithm to generate the C data set. The CLV matrix divides
the RFMPQ data set into four categories.
The CLV matrix calculates the average number of purchases by taking the total number of purchases
and dividing it by the total number of consumers in the consumer database. The
average purchase amount is derived by taking the total revenue and dividing it by the total
number of purchases. The next step compares each consumer's average number of purchases, F, and
average purchase amount, M, with the total average values. M and F are used to
classify each consumer into one of the four fields: Best, Spender, Frequent and Uncertain.
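The quadrant assignment described above can be sketched as follows; the handling of values exactly equal to the average is an assumption, since the paper does not specify tie-breaking.

```python
def clv_quadrant(f, m, f_avg, m_avg):
    """Classify a consumer into one of the four CLV matrix quadrants by
    comparing frequency F and monetary M against the overall averages.
    Values equal to the average are counted as 'high' (an assumption)."""
    if f >= f_avg and m >= m_avg:
        return "Best"
    if f < f_avg and m >= m_avg:
        return "Spender"
    if f >= f_avg and m < m_avg:
        return "Frequent"
    return "Uncertain"
```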
3.1.3 ENHANCED REGRESSION ALGORITHM TO GENERATE CONSUMER PURCHASE
CURVE (C DATA SET)
This study proposes a combination of two analytical data mining steps. For finding C, the change
in consumer purchase behavior, it uses a supervised linear regression method. The C dataset is then
fed into the unsupervised clustering algorithms to split the consumers into different groups based
on pattern dissimilarities. The variable C should answer a very important question: whether the consumer
is at high risk of shifting his or her service to another retailer. One of the most common indicators
of high-risk consumers is a drop in purchase power and a decrease in visits.
A major limitation of RFM and market segmentation models is that they ignore behavioral
changes of consumers during the period of analysis. Although the recency parameter is one of the
indicators of such behavior, it suffers from the transient behavior of the consumer. Therefore,
introducing a new analysis parameter is essential for retailers to narrow down a group of
consumers at high risk or consumers that should be prompted to a higher category. From a retailer
perspective, purchase power differs from one consumer to another. If the purchase
power decreases continuously, then the consumer is on the verge of shifting his or her services to
another retailer or falling from the beneficial segment to the non-beneficial segment. Similarly, a
consumer with an increasing purchase power during the period of analysis shows the potential that
could be harnessed with appropriate marketing actions.
To capture this change of behavior, we first calculate the purchase amounts of all consumers in each
selected period of analysis. The parameter C for time k is defined as

C_k = (pa_k - pa_{k-1}) / pa_{k-1}   if pa_{k-1} ≠ 0,
C_k = 1                               otherwise,        (3)

where pa_k indicates the total amount purchased by a consumer at time k.
If the data is divided into n+1 analysis periods, n changes are calculated. If the change rate values
of the latest analysis periods have the same sign (negative or positive), then the average of these
values is used as the final change rate parameter C for each consumer. Consumers with
changing signs are assigned a neutral C = 0. The final set consists of positive, negative and neutral
change rate values.
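A sketch of the per-period change rate (Equation 3) together with the sign rule described above; the function names are illustrative.

```python
def change_rates(purchases):
    """Per-period change rate C_k (Equation 3): relative change in the
    purchase amount pa_k versus the previous period; 1 when the previous
    period amount is zero."""
    rates = []
    for prev, cur in zip(purchases, purchases[1:]):
        rates.append((cur - prev) / prev if prev != 0 else 1.0)
    return rates

def final_change_rate(purchases):
    """Final C: the average of the change rates when they all share the
    same sign, otherwise neutral (C = 0), as described in the text."""
    rates = change_rates(purchases)
    if all(r > 0 for r in rates) or all(r < 0 for r in rates):
        return sum(rates) / len(rates)
    return 0.0
```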
Furthermore, since the C dataset consists of numbers with six or more digits, the logarithm of C is
used to obtain normalized C data for the analysis database:

C_scaled = log_b(C), b > 0, b ≠ 1. (4)
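A possible sketch of Equation (4); preserving the sign of negative C values is an assumption, since the logarithm is undefined for non-positive numbers and the paper does not say how they are handled.

```python
import math

def log_scale(c, base=10):
    """Equation (4): logarithmic rescaling of a change-rate value C.
    The sign of C is preserved (an assumption), and C = 0 maps to 0."""
    if c == 0:
        return 0.0
    return math.copysign(math.log(abs(c), base), c)
```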
The next step is to apply the modified best-fit regression algorithm on each segment separately by
using the demographic variables Age, Gender and Nationality. Then the purchase behavior change
rate C values are prepared for market segmentation by applying clustering algorithms for every
segment.
The solution starts with the calculation of the purchase amount trend by the regression algorithm on
the time series of the known k and the known M_k, the purchases at time k. The result is m1, the slope of the
linear regression curve
M_k = m0 + m1·k + ε_k, (5)

where m0 is the intercept and ε_k is the random error term.
The m1 rate determines the expected value and sign of the change rate C, based on past purchases.
To obtain the regression curve, the purchase slope discount rate w_k modifies the effect of the
purchase M_k for each period k; more recent spending gets a higher rate. For instance, for a
consumer with 4 time periods in the analysis and a purchase slope discount rate of 0.7, the latest
purchase M_{k-1} is obtained by multiplying the actual purchase amount by w_{k-1} = 0.7^1 = 0.7,
M_{k-2} by w_{k-2} = 0.7^2 = 0.49, and so on. This method of computing the total purchase amount
curve (slope) leads to a decreasing effect from older purchases and captures the alternative cost of consumer capital.
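The discounting scheme above can be sketched as follows, assuming purchase amounts are ordered from oldest to most recent.

```python
def discounted_purchases(amounts, w=0.7):
    """Apply the purchase slope discount rate: the most recent period gets
    weight w**1, the one before w**2, and so on, so older purchases
    contribute less to the slope."""
    n = len(amounts)
    # amounts[0] is the oldest period, amounts[-1] the most recent
    return [a * w ** (n - i) for i, a in enumerate(amounts)]
```

With four equal purchases of 100 and w = 0.7, the most recent is weighted to 70 and the one before it to 49, matching the worked example in the text.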
Next, we introduce a new computed parameter named the Consumer Purchase Curve (CPC). The CPC
parameter combines M_k and k to obtain the calculated purchase curve (slope).
The final formula is shown in Equation (6), where k is the selected time period for consumer
purchases in the time series of purchases, and M_k represents the total amount of a single transaction
per consumer multiplied by w_k. CPC stands for the consumer purchase amount curve (slope), and n is
the number of time segments.
CPC = ( n Σ(Δk'·ΔM_k') - (Σ Δk')(Σ ΔM_k') ) / ( n Σ(Δk')^2 - (Σ Δk')^2 ). (6)
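Equation (6) is the ordinary least-squares slope formula; a direct sketch is shown below (the Y3 scaling of the Δ terms, described next in the text, is assumed to have been applied beforehand).

```python
def cpc_slope(k, m):
    """Least-squares slope of purchase amounts m over time indices k
    (Equation 6): the Consumer Purchase Curve."""
    n = len(k)
    sk, sm = sum(k), sum(m)
    skm = sum(ki * mi for ki, mi in zip(k, m))
    skk = sum(ki * ki for ki in k)
    return (n * skm - sk * sm) / (n * skk - sk * sk)
```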
Next, the normal distribution value Y3 is applied to remove the outliers extracted from the dataset.
Here, Y3 represents the Normal Distribution in the newly modified equation,
where Δk' = Δk × Y3 and ΔM_k' = ΔM_k × Y3.
Fig. 3. Sample of one consumer's purchase history using different normal distribution values.
As shown in Fig. 3, the different shapes indicate the percentage of outliers in the dataset when a
different variable is used for each consumer.
3.2 PHASE 4: MARKET SEGMENTATION USING CLUSTERING ALGORITHMS
3.2.1 FUZZY C-MEANS SOFT CLUSTERING
Fuzzy c-means (FCM) is an unsupervised clustering method for numerical data developed by Dunn in
1973 [41]. The algorithm allows one piece of data to correspond to two or more clusters, and it uses
concepts from the field of fuzzy set theory and fuzzy logic [41]. FCM is frequently used in pattern
recognition and employs fuzzy partitioning such that a data point can belong to all groups with
different membership grades between 0 and 1 [42]. The Fuzzy c-means algorithm is composed of the
following steps.
Step 1. Initialize the membership matrix U = [u_ij], U(0). (7)

Step 2. At step k, calculate the center vectors C(k) = [c_j] with U(k):

c_j = ( Σ_{i=1}^{N} u_ij^m · x_i ) / ( Σ_{i=1}^{N} u_ij^m ). (8)

Step 3. Update U(k) to U(k+1). (9)

Step 4. u_ij = 1 / Σ_{l=1}^{C} ( ||x_i - c_j|| / ||x_i - c_l|| )^{2/(m-1)}. (10)

Step 5. If || U(k+1) - U(k) || < ε then STOP; otherwise return to Step 2. (11)
Here m is any real number greater than 1, u_ij is the degree of membership of x_i in cluster j, x_i is
the i-th item of d-dimensional measured data, and c_j is the d-dimensional center of the cluster. The
algorithm works by assigning a membership to each data point for each cluster center
on the basis of the distance between the cluster center and the data point. The nearer a data point is to a
cluster center, the higher its membership towards that particular cluster center. Clearly, the sum of the
memberships of each data point should be equal to one. After each iteration, the memberships and
cluster centers are updated according to these formulas.
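The steps above can be sketched as a compact NumPy implementation; the random initialization, seed, and tolerance value are illustrative choices, not the paper's settings.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, eps=1e-5, max_iter=300, seed=0):
    """Minimal Fuzzy C-Means sketch following the steps above.
    X: (N, d) data array. Returns cluster centers and membership matrix U."""
    rng = np.random.default_rng(seed)
    N = len(X)
    U = rng.random((N, n_clusters))
    U /= U.sum(axis=1, keepdims=True)       # Step 1: memberships summing to 1
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]       # Step 2: Eq. (8)
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        dist = np.fmax(dist, 1e-12)         # guard against zero distance
        inv = dist ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)         # Steps 3-4: Eq. (10)
        if np.linalg.norm(U_new - U) < eps: # Step 5: stopping criterion (11)
            U = U_new
            break
        U = U_new
    return centers, U
```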
3.2.2 EXPECTATION-MAXIMIZATION (EM) HARD CLUSTERING
The Expectation-Maximization (EM) algorithm is an iterative estimation algorithm developed by Dempster,
Laird, and Rubin [11]. EM is a historically very important algorithm in market segmentation and
data mining. The EM algorithm has also proven its efficiency through good performance, decreased
sensitivity to noise, and its handling of estimation problems involving unlabeled data.
Step 1. Expectation-Maximization (EM) Clustering Initialization, the E-step

Every class j, of M classes (or clusters), is described by a parameter vector θ, composed of the
mean (μ_j) and the covariance matrix (Σ_j), which define the Gaussian (Normal) probability distribution
used to characterize the observed and unobserved entities of the data set, as shown in Equation (12):

θ(t) = { μ_j(t), Σ_j(t) }, j = 1, ..., M. (12)

At the initial instant (t = 0) the implementation can randomly generate the initial values of the mean
(μ_j) and of the covariance matrix (Σ_j). The EM algorithm aims to approximate the parameter vector
θ of the real distribution.
Fraley and Raftery [18] suggest an alternative: initializing EM with the clusters obtained by
a hierarchical clustering technique. The relevance degree of the points of each cluster is given by
the likelihood of each element's attributes in comparison with the attributes of the other elements of
cluster C_j, as shown in Equation (13), the E-step:

P(C_j | x) = ( |Σ_j(t)|^(-1/2) exp( -(1/2)(x - μ_j(t))^T Σ_j(t)^(-1) (x - μ_j(t)) ) P_j(t) )
           / ( Σ_{k=1}^{M} |Σ_k(t)|^(-1/2) exp( -(1/2)(x - μ_k(t))^T Σ_k(t)^(-1) (x - μ_k(t)) ) P_k(t) ). (13)
Step 2. M-Step

First, the mean (μ) of class j is computed through the mean of all points, weighted by the
relevance degree of each point, as shown in Equation (14):

μ_j(t+1) = ( Σ_{k=1}^{N} P(C_j | x_k) · x_k ) / ( Σ_{k=1}^{N} P(C_j | x_k) ). (14)
To compute the covariance matrix for the next iteration, the conditional probabilities of the class
occurrence are calculated with Bayes' theorem, P(A|B) = P(B|A)·P(A)/P(B), as shown in
Equation (15):

Σ_j(t+1) = ( Σ_{k=1}^{N} P(C_j | x_k)(x_k - μ_j(t))(x_k - μ_j(t))^T ) / ( Σ_{k=1}^{N} P(C_j | x_k) ). (15)
The probability of occurrence of each class is computed through the mean of the probabilities P(C_j | x_k):

P_j(t+1) = (1/N) Σ_{k=1}^{N} P(C_j | x_k). (16)
Step 3. Cluster Convergence

After each iteration, a convergence inspection verifies whether the difference between the
attribute vector of one iteration and that of the previous iteration is smaller than an acceptable error
tolerance, given by a parameter.
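The E- and M-steps above can be sketched for one-dimensional data as follows; the quantile-based initialization is an assumption made for determinism, in place of the random initialization mentioned in the text.

```python
import numpy as np

def em_gmm(X, M=2, iters=100):
    """Minimal EM sketch for a Gaussian mixture (Equations 12-16),
    shown for 1-D data to keep the covariance handling simple."""
    x = np.asarray(X, dtype=float)
    N = len(x)
    # deterministic initialization: spread the means over the data quantiles
    mu = np.quantile(x, np.linspace(0.05, 0.95, M))
    var = np.full(M, x.var() + 1e-6)
    pi = np.full(M, 1.0 / M)
    for _ in range(iters):
        # E-step (Eq. 13): responsibility of each class for each point
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step (Eqs. 14-16): update means, variances, and class priors
        Nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / Nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / Nk + 1e-9
        pi = Nk / N
    return mu, var, pi
```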
4. ANALYSIS RESULTS AND DISCUSSION
A sample of the data for the generated segment Best is shown in Table 2, which includes information
such as the number of samples for each cluster, the average purchase change rate of the cluster, as
well as the average age, gender and nationality of the cluster.
The Fuzzy c-means (FCM) analysis shows cluster 4 is the best cluster with positive purchase slope
(37.12), while cluster 3 (-44.41) has the worst negative purchase slope.
Table 3 shows the four clusters generated by EM. The analysis shows that: 1. cluster 2 is the only
cluster with a positive purchase slope (+3.82) and is the most beneficial segment because it is superior
to the other clusters; 2. consumers between the ages of (18-24) and (25-34) dominate this cluster;
3. the average Gender (1.74) indicates Female consumers and a very small percentage of Male
consumers; and 4. the average Nationality (9.11) indicates that citizens of Kuwait and Saudi Arabia
are the best consumers in this cluster. The analysis also shows cluster 1 (-150.98) has the worst
negative purchase slope.
4.1 ACCURACY AND EFFECTIVENESS DETERMINATION (INTER-CLUSTER DISTANCE)
A consumer dataset can be clustered in many ways, but how can one know the clusters are accurate
and meaningful? A way to answer this is suggested here. Clustering has become a key technique
in analyzing quality assessment in a variety of recent studies. There are several studies supporting
suggestions for measuring the similarity between clustering algorithms. Those measures are used
to compare how accurate different clustering algorithms are on a dataset. Accuracy is usually tied
to the type of benchmark being considered. The approach of Draghici [13] is to compare the size
of the clusters with the distance to the nearest cluster (the inter-cluster distance vs. the size
(diameter) of the cluster), that is, the distance between the members of a cluster and the cluster's
center, and the diameter of the smallest sphere containing the cluster. If the inter-cluster distance is
much larger than the size of the clusters, then the clustering method is trustworthy. Table 4 shows
that the quality can be assessed simply by looking at the cluster diameter. A cluster is created by
means of a heuristic even when there is no similarity between clustered patterns; this occurs
because the algorithm forces K clusters to be created. Comparing both algorithms using cluster
quality assessment on the Best segment shows that EM clustering, with a cluster size of
(2.45967329), is more accurate than the Fuzzy c-means (FCM) algorithm, with a cluster size of
(0.001661999).
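A sketch of this quality check; approximating the diameter of the smallest enclosing sphere by twice the maximum distance to the centroid is a simplification of the measure described above.

```python
import numpy as np

def cluster_quality(points, other_center):
    """Draghici-style quality check sketch: compare a cluster's diameter
    (approximated here as twice the maximum distance to the centroid)
    with the distance to the nearest other cluster center.
    A ratio much greater than 1 suggests a trustworthy cluster."""
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    diameter = 2 * np.linalg.norm(pts - center, axis=1).max()
    inter = np.linalg.norm(center - np.asarray(other_center, dtype=float))
    return inter / max(diameter, 1e-12)
```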
4.1.1 RFMC DATASET SUMMARY
According to the EM analysis results for the Best segment, all four clusters in the Best segment
have a negative purchase slope, with a total average purchase slope of (-147.82). This ranks the
Best segment as having the highest negative purchase curve slope. In the Spender segment, the EM
analysis results show the effectiveness of the newly adapted target marketing strategy. Despite
showing a positive purchase curve (slope), the total average purchase slope is (-16.10), and this is the
segment with the biggest churn rate. The Frequent segment EM analysis results show cluster 2 has a
positive purchase slope (7.17), but the total average purchase slope is (-6.01). This ranks the
Frequent segment slightly above the Spender segment with respect to the purchase curve (slope).
In the Fuzzy c-means (FCM) analysis, the Uncertain segment has the best positive purchase slope,
with a total average purchase slope of (1.78). The positive purchase slope is based on smaller
purchases. Young females aged 20-40 and citizens of the GCC region are the biggest
consumers with negative purchase slope. Consumers at high risk of canceling their services or
falling to a less beneficial segment are easily identified based on the analysis from the C dataset.
The retailer can develop a more targeted retail service model to retain such profitable consumers.
These valuable insights are derived only from this RFMC method identifying consumer purchase
trends.
Data analysis, though often rushed, is the most important stage in the market segmentation
solution. The analysis is conducted here under the supervision of the internal marketing and sales
managers to identify which variables and segments make the most sense to focus all efforts on. Such
a framework identifies the model's strengths and weaknesses, with special attention paid to all
implications stemming from each. The internal marketing and sales managers possess an
understanding of the client's capabilities and resources. This helps the client to focus on the consumer
and develop marketing mixes for very specific market segments.
Detailed descriptions and specifications of all segments in our case study are presented, as well as
useful strategies proposed by the marketing experts.
Table 3 Four Clusters (Cl) Based On The Best Segment Generated By Fuzzy C-Means (FCM) Algorithm
Table 4 Four Clusters (Cl) Based On The Best Segment Generated By EM Algorithm.
Table 5 Cluster Quality Assessment For The Best Segment Using The Em Algorithm
Table 6 Summary Analysis Of CLV Segments Based On The C Dataset
4.2 SUMMARY
This study uses two validation methods:
1. The Draghici approach, comparing the size of the clusters with the distance to the nearest cluster.
2. Industry marketing experts as human judgment to validate the accuracy and intelligence of
the results.
The analysis and the experts agree that classifying consumer purchase behavior using a CLV matrix
against the normally distributed enhanced RFMPQ dataset gives highly accurate information about the
entire consumer journey. The C variant takes into consideration the changes in the consumer's
average purchase power over time, compared to the standard RFM model. The age and
gender variables show an estimated accuracy of 83%. However, the nationality variable gives low
accuracy, possibly because of null data removed by the normal distribution formula.
5. CONCLUSION, CONTRIBUTION, AND ANSWERS TO THE STUDY
QUESTIONS
In this study, we proposed new RFM variants (PQC) extending the standard RFM model to segment a
POS dataset using demographic variables. A novel step-by-step approach is suggested, with CLV
and RFMPQC analysis in two data mining tasks; regression and clustering methods are applied
separately for every RFMPQ variation. Treating consumers with different reflected purchase
behavior differently is the objective of this study. To make this behavior a computable parameter, a
new modified regression algorithm is also proposed in this study.
To help map the entire consumer journey, the existing consumer purchase profiles of
demographic variables like age, gender and nationality are segmented and presented accordingly.
The study has shown that integrating CLV and enhanced RFM methods provides a credible basis for
capturing the consumer purchase trend and understanding market segmentation with different values
of Frequency, Monetary, average purchase power per consumer over all transactions, average
purchase power over all products purchased per consumer, and Purchase Change Rate. The model
using the proposed techniques can identify VIP consumers who have already shifted or are at high risk
of shifting their business to another competitor, or consumers falling from a higher segment
to a lower segment.
Two clustering algorithms were tested in this study: hard clustering EM and soft clustering
Fuzzy c-means. The experiment showed that Fuzzy c-means has difficulty in selecting the initial cluster
centers and requires more computation time, as it requires more iterations than the EM algorithm.
However, it shows lower clustering errors and is more efficient for large-scale clustering, while
EM is more suitable for small-scale clustering. The model provides simplicity. However, the
enhanced regression algorithm failed to accurately measure consumers' final purchase trends
(slope) over the selected periods of time. For example, if a consumer has increasing (positive)
purchases, and the most recent purchase made by the consumer is lower than the previous
purchase, then the final curve (slope) is negative, regardless of the consumer's increasing purchase
history. This indicates that the most recent purchase has the final effect on the consumer purchase
curve (slope). The enhanced regression does not strengthen the effect of the most recent purchase
in the computation of the total purchase amount slope while justifying the importance of previous
purchase slopes. The second contribution is a novel market segmentation method using CLV to
classify consumers into four quadrants (Best, Spender, Frequent, Uncertain), an enhanced
normal distribution formula to remove outliers in the data, and enhanced regression based on the
enhanced RFMPQC variables, followed by clustering on demographic data. The study provides
key attributes describing consumer purchase behavior across different demographic characteristics
such as gender, age, and nationality. The analytical information gained from the extracted data is
useful for adapting more targeted marketing strategies and for decision making. We can confirm that the
proposed market segmentation using demographic variables can contribute to the body of
literature on consumer purchase behavior and assist SMR retailers in meeting the needs and
preferences of consumers.
REFERENCES
[1] APS Meeting Abstracts (Vol. 1, p. 11002).
[2] Azevedo, A. (Ed.). (2014). Integration of Data Mining in Business Intelligence Systems. IGI Global.
[3] Bernard, H. R. (2011). Study methods in anthropology: Qualitative and quantitative approaches.
Rowman Altamira.
[4] Birant, D. (2011). Data Mining Using RFM Analysis, INTECH Open Access Publisher.
[5] Broderick, A., & Pickton, D. (2005). Integrated marketing communications. Pearson Education UK.
[6] concept to implementation. Prentice-Hall, Inc.
[7] Carter, N. M., Stearns, T. M., Reynolds, P. D., & Miller, B. A. (1994). New venture strategies:
Theory development with an empirical base. Strategic Management Journal, 15(1), 21-41.
[8] Chen, D., Sain, S. L., & Guo, K. (2012). Data mining for the online retail industry: A case study of
RFM model-based consumer segmentation using data mining. Journal of Database Marketing &
Consumer Strategy Management, 19(3), 197-208.
[9] Chen, H., Chiang, R. H., & Storey, V. C. (2012). Business intelligence and analytics: from big data to
a big impact. MIS Quarterly, 1165-1188.
[10] Cleveland, M., Papadopoulos, N., & Laroche, M. (2011). Identity, demographics, and consumer
behaviors: International market segmentation across product categories. International Marketing
Review, 28(3), 244-266.
[11] Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Maximum likelihood from incomplete data via
the EM algorithm. Journal of the royal statistical society. Series B (methodological), 1-38.
[12] Dipanjan, D., Satish, G., & Goutam, C. (2011). Comparison of Probabilistic-D and k-Means
Clustering in Segment Profiles for B2B Markets. SAS Global Forum.
[13] Drăghici, S. (2003). Data analysis tools for DNA microarrays. CRC Press.
[14] Dwyer, F. R. (1997). Consumer lifetime valuation to support marketing decision making. Journal of
interactive marketing, 11(4), 6-13.
[15] Elby, A. (2015, April). The new AP Physics exams: Integrating qualitative and quantitative reasoning.
In APS Meeting Abstracts (Vol. 1, p. 11002).
[16] Fayyad, U., Piatetsky-Shapiro, G., & Smyth, P. (1996). From data mining to knowledge discovery in
databases. AI magazine, 17(3), 37.
[17] Fayyad, U., Piatetsky-Shapiro, G., & Smyth, P. (1996). The KDD process for extracting useful
knowledge from volumes of data. Communications of the ACM, 39(11), 27-34.
[18] Fraley, C., & Raftery, A. E. (2006). MCLUST version 3: an R package for normal mixture modeling
and model-based clustering. WASHINGTON UNIV SEATTLE DEPT OF STATISTICS.
[19] Gal, A. (2010). Competitiveness of small and medium sized enterprises-a possible analytical
framework. HEJ: ECO-100115-A.
[20] Goyat, S. (2011). The basis of market segmentation: a critical review of literature. European Journal
of Business and Management, 3(9), 45-54.
[21] Gunaseelan, D., & Uma, P. (2012). An improved frequent pattern algorithm for mining association
rules. International Journal of Information and Communication Technology Study, 2(5).
[22] Hossein Javaheri, S., (2008), Response Modeling in Direct Marketing: a data mining based approach
for target selection, Masters thesis, epubl.luth.se/1653-0187/2008/014/LTU-PB-EX-08014-SE.pdf.
[23] Kashwan, K. R. & C. Velu (2013). Consumer Segmentation Using Clustering and Data Mining
Techniques. International Journal of Computer Theory & Engineering 5(6): 856-861.
[24] Khajvand, M., & Tarokh, M. J. (2011). Estimating consumer future value of different consumer
segments based on adapted RFM model in retail banking context. Procedia Computer Science, 3,
1327-1332.
[25] Khajvand, M., Zolfaghar, K., Ashoori, S., & Alizadeh, S. (2011). Estimating consumer lifetime value
based on RFM analysis of consumer purchase behavior: Case study. Procedia Computer Science, 3,
57-63.
[26] Kolyshkina, I., Nankani, E., Simoff, S., & Denize, S. (2010). Retail analytics in the context of
“Segmentation, Targeting, Optimization” of the operations of convenience store franchises. Anzmac.
[27] Lefait, G., & Kechadi, T. (2010, February). Consumer segmentation architecture based on clustering
techniques. In Digital Society, 2010. ICDS'10. Fourth International Conference on (pp. 243-248).
IEEE.
[29] Marcus, C. (1998). A practical yet meaningful approach to consumer segmentation. Journal of
consumer marketing, 15(5), 494-504.
[30] McCarty JA, Hastak M (2007). Segmentation approaches in data-mining: A comparison of RFM,
CHAID, and logistic regression. J. Bus. Res., 60: 656-662.
[31] Ramageri, B. M., & Desai, B. L. (2013). Role of data mining in retail sector. International Journal on
Computer Science and Engineering, 5(1), 47.
[32] Safari, M. (2015). Consumer Lifetime Value to managing marketing strategies in the financial
services. International Letters of Social and Humanistic Sciences, 1(2), 164-173.
[33] Tsiptsis, K. K., & Chorianopoulos, A. (2011). Data mining techniques in CRM: inside consumer
segmentation. John Wiley & Sons.
[34] Tufféry, S. (2011). Data mining and statistics for decision making, John Wiley & Sons.
[35] Ziafat, H., & Shakeri, M. (2014). Using Data Mining Techniques in Consumer Segmentation.
International Journal of engineering Study and Applications, 1(4), 70-79.
[36] Nascimento, S., Mirkin, B., & Moura-Pires, F. (2000). A fuzzy clustering model of data and fuzzy c-
means. In Fuzzy Systems, 2000. FUZZ IEEE 2000. The Ninth IEEE International Conference on
(Vol. 1, pp. 302-307). IEEE.
[37] D'Urso, P., Prayag, G., Disegna, M., & Massari, R. (2013). Market segmentation using bagged fuzzy
c–means (BFCM): Destination image of Western Europe among Chinese travelers.
[38] K.M. Bataineh, M.Naji, M.Saqer “A Comparison Study between Various Fuzzy Clustering
Algorithms” vol.5 no.4 Aug. 2011
[39] Schafer, J. L. (1997). Analysis of incomplete multivariate data. Chapman and Hall/CRC.
[40] Altman, D. G., & Bland, J. M. (1995). Statistics notes: the normal distribution. Bmj, 310(6975), 298.
AUTHORS
Fahed Yoseph Business Intelligence and Big Data Consultant, Bachelor in Management information
system. Master in computer Science in the area of artificial intelligence. Current doctoral student at Åbo
Akademi University in Finland. Research interest in Computer Science, database and artificial intelligence.
Mohammad AlMalaily Financial, Marketing and BI Consultant, Bachelor in Electronic
Engineering. Research interest in Computer Science, Marketing and Finance.