In today’s dynamic marketplace, telecommunication organizations, both private and public, are increasingly abandoning antiquated marketing philosophies and strategies in favour of customer-driven initiatives that seek to understand, attract, retain and build intimate long-term relationships with profitable customers. This paradigm shift has undoubtedly led to growing interest in Customer Relationship Management (CRM) initiatives that aim to ensure customer identification and interaction. The urgent market requirement is to identify automated methods that can assist businesses in the complex task of predicting customer churn.
The immediate requirement of the market is for systems that can perform accurate:
(i) identification of loyal customers, so that companies can offer more services to retain them;
(ii) prediction of churners, so that only the customers who are planning to switch their service providers are targeted for retention.
Decision support systems, Supplier selection, Information systems, Boolean al...ijmpict
For organizations dealing with many products/services and many suppliers, selecting the right supplier that meets all their requirements is a challenging job. Such organizations need a good decision support system to evaluate suppliers effectively. Several decision support systems have been reported that deal with the complex selection process of deciding on the right supplier, and many mathematical models have also been developed. This paper presents a new method, named the Bit Decision Making (BDM) method, which treats such a complex decision-making system as a collection and sequence of a reasonable number of meaningful and manageable sub-systems, identifying and processing the relevant decision criteria in each sub-system. Boolean logic and Boolean algebra are used to assign binary-digit values to the selection criteria and to generate mathematical equations that correlate the inputs to the output at each stage of decision making. Each sub-system, with its own mathematical model, is treated as a standardized decision sub-system for that phase of supplier evaluation. The sequence and connectivity of the sub-systems, along with their outputs, finally lead to selection of the best supplier. A real-world case of evaluating information technology (IT) tenders is used to demonstrate the proposed method. The paper discusses in detail the theory, methodology, application and features of the new method.
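The binary-encoding idea behind such a method can be illustrated with a minimal sketch. The criteria, stage mask and supplier names below are invented for illustration; this is not the paper's actual BDM model:

```python
# Illustrative sketch of binary-encoded supplier screening: each selection
# criterion becomes one bit, and a supplier passes a decision stage only if
# all mandatory bits are set (Boolean AND against a stage mask).

MANDATORY = 0b111  # hypothetical stage mask: ISO-certified, on-time, in-budget

def criteria_bits(iso_certified, on_time, in_budget):
    """Pack three yes/no criteria into a single bit pattern."""
    return (iso_certified << 2) | (on_time << 1) | in_budget

def passes_stage(bits, mask=MANDATORY):
    """Boolean AND of a supplier's criteria bits against the stage mask."""
    return (bits & mask) == mask

suppliers = {
    "A": criteria_bits(1, 1, 1),  # meets all mandatory criteria
    "B": criteria_bits(1, 0, 1),  # fails the delivery criterion
}
shortlist = [name for name, b in suppliers.items() if passes_stage(b)]
print(shortlist)  # ['A']
```

Chaining several such masked stages, each feeding its shortlist into the next, mirrors the paper's sequence of decision sub-systems.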
Projection pursuit Random Forest using discriminant feature analysis model fo...IJECEIAES
A major and demanding issue in the telecommunications industry is the prediction of customer churn. Churn describes a customer who defects from the current provider to a competitor in search of better service offers. Companies in the telecom sector frequently maintain customer relationship management offices whose main objective is winning back defecting clients, because preserving long-term customers can be much more beneficial than gaining newly recruited ones. Researchers and practitioners are paying great attention to developing robust customer churn prediction models, especially in the telecommunication business, and have proposed numerous machine learning approaches. Many classification approaches have been established, but the most effective in recent times are tree-based methods. The main contribution of this research is to predict churners/non-churners in the telecom sector using a projection pursuit Random Forest (PPForest) that applies discriminant feature analysis as a novel extension of the conventional Random Forest for learning oblique projection pursuit trees (PPtree). The proposed methodology leverages two discriminant analysis methods to calculate the projection index used in the construction of the PPtree: the first uses Support Vector Machines (SVM), while the second uses Linear Discriminant Analysis (LDA) to achieve linear splitting of variables during oblique PPtree construction, producing individual classifiers that are more robust and diverse than those of the classical Random Forest. The proposed methods achieve the best performance on measurements such as accuracy, hit rate, ROC curve, lift, H-measure and AUC. Moreover, PPForest based on LDA delivers effective evaluators in the prediction model.
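The principle behind an LDA-based oblique split can be sketched as follows: project the data onto the Fisher discriminant direction, then threshold the one-dimensional projection instead of splitting on a single raw feature. This is only an illustration of the idea on synthetic data, not the paper's PPForest implementation:

```python
import numpy as np

# Two synthetic, well-separated classes (e.g. non-churners vs. churners)
rng = np.random.default_rng(0)
X0 = rng.normal([0, 0], 0.5, size=(50, 2))
X1 = rng.normal([2, 2], 0.5, size=(50, 2))

# Fisher/LDA direction: w = Sw^{-1} (mu1 - mu0)
mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
w = np.linalg.solve(Sw, mu1 - mu0)

# Oblique split: threshold at the midpoint of the projected class means
threshold = (X0 @ w).mean() / 2 + (X1 @ w).mean() / 2
pred = np.concatenate([X0, X1]) @ w > threshold
labels = np.concatenate([np.zeros(50), np.ones(50)])
print("split accuracy:", (pred == labels).mean())
```

A single oblique split like this separates classes that no axis-aligned split could, which is the diversity advantage the abstract claims for PPtree-based forests.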
Developing Sales Information System Application using Prototyping ModelEditor IJCATR
This research aimed to develop a system able to manage sales transactions so that transaction services become quicker and more efficient. The system was developed using the prototyping model, whose steps include: 1) communication and initial data collection, 2) quick design, 3) formation of the prototype, 4) evaluation of the prototype, 5) refinement of the prototype, and 6) finally, producing the software properly so it can be used by the user. The prototyping model is intended to adjust the system to its later use; it is built in stages so that problems that arise can be addressed immediately. The result of this research is software providing consumer transaction services, including purchasing, sales, inventory management, and reporting for management purposes. Based on questionnaires given to 18 respondents, the evaluation of the system built showed, among other results: 1) 88% strongly agree and 11% agree that the application can increase the effectiveness and efficiency of organizations/enterprises; 2) 33% strongly agree, 62% agree, and 5% disagree that the application can meet the needs of the organization/enterprise.
Pattern recognition using context dependent memory model (cdmm) in multimodal...ijfcstjournal
Pattern recognition is one of the prime concepts in current technologies in both the private and public sectors. The analysis and recognition of two or more patterns is a complex task due to several factors: considering two or more patterns requires a large amount of storage space as well as computational effort. Vector logic offers a very good strategy for pattern recognition. This paper proposes pattern recognition in a multimodal authentication system using vector logic, making the computational model hard to break and lowering its error rate. Using PCA, two to three biometric patterns are fused, and then keys of various sizes are extracted using an LU factorization approach. The selected keys are combined using vector logic, which introduces a memory model called the Context Dependent Memory Model (CDMM) as the computational model in the multimodal authentication system, giving very accurate and effective outcomes for both authentication and verification. In the verification step, Mean Square Error (MSE) and Normalized Correlation (NC) are used as metrics to minimize the error rate of the proposed model, and a performance analysis is presented.
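The two verification metrics named above have standard definitions; a minimal version, assuming flattened real-valued pattern vectors, might look like:

```python
import numpy as np

def mse(a, b):
    """Mean Square Error between two patterns."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.mean((a - b) ** 2))

def normalized_correlation(a, b):
    """Normalized Correlation: inner product scaled by the vector norms."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

x = np.array([1.0, 2.0, 3.0])
print(mse(x, x))                     # 0.0 -- identical patterns
print(normalized_correlation(x, x))  # ~1.0 -- perfect match
```

In a verification step, a candidate pattern would be accepted when its MSE against the stored template is low and its NC is close to 1.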
LABELING CUSTOMERS USING DISCOVERED KNOWLEDGE CASE STUDY: AUTOMOBILE INSURAN...ijmvsc
In this paper, we used knowledge discovery in databases and data mining, one of the data-based decision support techniques, to help label customers in the automobile insurance industry. In most data mining applications, major tasks including data preparation, data preprocessing, data transformation, data mining, interpretation, application and evaluation are required. The results of a case study are presented in which knowledge discovery in databases and data mining are used to explore decision rules for an automobile insurance company. The decision rules can be used to label customers as “bad” or “good” for insurance policies.
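A discovered decision rule of this kind can be applied as simple if-then logic; the attributes and thresholds below are invented for illustration and are not taken from the case study:

```python
# Hypothetical discovered rules for labeling automobile-insurance customers.
def label_customer(claims_last_year, years_insured, premium_paid_on_time):
    # Rule 1: frequent recent claims -> "bad" risk
    if claims_last_year >= 3:
        return "bad"
    # Rule 2: long-standing, punctual customers -> "good" risk
    if years_insured >= 5 and premium_paid_on_time:
        return "good"
    # Default: remaining customers treated as "good"
    return "good"

print(label_customer(claims_last_year=4, years_insured=1, premium_paid_on_time=False))  # bad
print(label_customer(claims_last_year=0, years_insured=6, premium_paid_on_time=True))   # good
```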
Enhancing Segmentation Approaches from Oaam to Fuzzy K-C-MeansChristo Ananth
Christo Ananth et al. discussed a strategy combining Live Wire with the Active Appearance Model (AAM), called the Oriented Active Appearance Model (OAAM). The geodesic graph-cut algorithm produces much better segmentation results than other fully automatic strategies identified in the literature, in terms of accuracy and processing time. The strategy effectively combines the Active Appearance Model, Live Wire and Graph Cut techniques to exploit their complementary advantages. It consists of a few fundamental parts: model building, initialization, and delineation. For the initialization (recognition) part, a pseudo strategy is typically used and the organs are segmented slice by slice via the OAAM strategy. The purpose of initialization is to provide rough object localization and shape constraints for a final graph-cut step, which produces a refined delineation. The proposed Fuzzy K-C-Means procedure offers greater effectiveness and requires fewer iterations compared with other strategies. Image segmentation performance is assessed by measuring capability in terms of the number of units and the time the image takes for one iteration. Several other systems are also reviewed, with the advantages and disadvantages specific to each, and terms related to image segmentation are described alongside other clustering strategies.
Selecting Experts Using Data Quality Conceptsijdms
Personal networks are not always diverse or large enough to reach those with the right information. This
problem increases when assembling a group of experts from around the world, something which is a
challenge in Future-oriented Technology Analysis (FTA). In this work, we address the formation of a panel
of experts, specifically how to select a group of experts from a huge group of people. We propose an
approach which uses data quality dimensions to improve expert selection quality and provide quality
metrics to the forecaster. We performed a case study and successfully showed that it is possible to use data
quality methods to support the expert search process.
Thriving information system through business intelligence knowledge managemen...IJECEIAES
In the current digitalization dilemma of organizations, business intelligence and knowledge management elements are needed to enhance the perspectives of learning and strategic management. These elements comprise a significant evolution of learning, insights gained, experiences and knowledge, with a compelling theoretical impact for practitioners, academicians, and scholars in the pertinent fields of interest. This phenomenon arises from the digital transformation towards Industry 5.0 and organizational excellence in the information systems area. This research focuses on the characteristics of a comprehensive performance-measurement perspective in an organization, covering information assessment and the key challenges of business intelligence and knowledge management in arriving at a relevant organizational excellence framework. The research concentrates on the decision-making process and on leveraging better knowledge creation. The future of organizational excellence appears to converge on determining a holistic performance-measurement perspective and its factors towards Industry 5.0. The research ends with a basic excellence framework that combines several characteristics for designing an organizational strategic performance framework. The output is a conceptual performance-measurement framework for a typical decision-making application in organizational strategic performance management dashboarding.
Design and implement a new secure prototype structure of e-commerce system IJECEIAES
The huge development of internet technologies and the widespread availability of modern, advanced devices have led to an increase in the size and diversity of e-commerce systems. These developments increase the number of people who navigate these sites asking for their services and products, which in turn increases competition in this field. Moreover, the expansion in the volume of currency traded makes transaction protection an essential issue. Providing security for each online client, especially for a huge number of clients at the same time, causes an overload on the system server. This problem may lead to server deadlock, especially at rush times, which reduces system performance. To solve these security and performance problems, this research proposes a prototype design for agent software. The agent plays the role of a broker between the clients and the electronic marketplace: it provides security inside the client device and converts the client’s order into a special form, called a record form, to be sent to the commercial website. Experimental results showed that this method increases system performance in terms of page-loading time and transaction processing, and improves the utilization of system resources.
Linking Behavioral Patterns to Personal Attributes through Data Re-Miningertekg
Download Link >https://ertekprojects.com/gurdal-ertek-publications/blog/linking-behavioral-patterns-to-personal-attributes-through-data-re-mining/
A fundamental challenge in behavioral informatics is the development of methodologies and systems that can achieve its goals and tasks, including behavior pattern analysis. This study presents such a methodology, which can be converted into a decision support system, through the appropriate integration of existing tools for association mining and graph visualization. The methodology enables the linking of behavioral patterns to personal attributes through the re-mining of colored association graphs that represent item associations. The methodology is described and mathematically formalized, and is demonstrated in a case study from the retail industry.
Paper Explained: Deep learning framework for measuring the digital strategy o...Devansh16
Companies today are racing to leverage the latest digital technologies, such as artificial intelligence, blockchain, and cloud computing. However, many companies report that their strategies did not achieve the anticipated business results. This study is the first to apply state-of-the-art NLP models to unstructured data to understand the different clusters of digital strategy patterns that companies are adopting. We achieve this by analyzing earnings calls of Fortune Global 500 companies between 2015 and 2019. We use a Transformer-based architecture for text classification, which shows a better understanding of conversation context. We then investigate digital strategy patterns by applying clustering analysis. Our findings suggest that Fortune 500 companies use four distinct strategies: product-led, customer-experience-led, service-led, and efficiency-led. This work provides an empirical baseline for companies and researchers to enhance our understanding of the field.
Customer relationship management (CRM) is an important element in all forms of industry. This process involves ensuring that the customers of a business are satisfied with the products or services they are paying for. Since most businesses collect and store large volumes of data about their customers, it is easy for data analysts to use that data to perform predictive analysis. One aspect of this is customer retention and customer churn. Customer churn is the problem of predicting whether or not a customer of the company will stop using the product or service in the future. In this paper, a supervised machine learning algorithm has been implemented in Python to perform customer churn analysis on a dataset from Telco, a mobile telecommunication company. This is achieved by building a decision tree model based on historical data provided by the company on the Kaggle platform. This report also investigates the utility of the extreme gradient boosting (XGBoost) library in Python for its portable and flexible functionality, which can be used to solve many data science problems highly efficiently. The implementation results show that accuracy is comparatively higher with XGBoost than with the other learning models.
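The boosting idea behind XGBoost can be illustrated in a few lines of plain Python: repeatedly fit a tiny "tree" (here a one-split decision stump) to the residuals of the ensemble so far. This is a minimal pure-Python sketch, not the XGBoost library itself, and the toy churn data (tenure in months, monthly charge) is hypothetical; real work would use the Kaggle Telco dataset with the `xgboost` package.

```python
def stump_predict(x, feature, threshold):
    """A one-split 'tree': +1 (churn) if the feature exceeds the threshold."""
    return 1.0 if x[feature] > threshold else -1.0

def fit_stump(X, residuals):
    """Pick the (feature, threshold) split minimizing squared error."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            preds = [stump_predict(row, f, t) for row in X]
            err = sum((r - p) ** 2 for r, p in zip(residuals, preds))
            if best is None or err < best[0]:
                best = (err, f, t)
    return best[1], best[2]

def fit_boosted(X, y, rounds=5, lr=0.5):
    """Each round fits a stump to the residuals of the ensemble so far."""
    ensemble, preds = [], [0.0] * len(X)
    for _ in range(rounds):
        residuals = [yi - pi for yi, pi in zip(y, preds)]
        if max(abs(r) for r in residuals) < 1e-9:
            break  # residuals fully explained; stop early
        f, t = fit_stump(X, residuals)
        ensemble.append((f, t))
        preds = [p + lr * stump_predict(row, f, t)
                 for p, row in zip(preds, X)]
    return ensemble

def predict(ensemble, x, lr=0.5):
    score = sum(lr * stump_predict(x, f, t) for f, t in ensemble)
    return 1 if score > 0 else -1

# Toy churn data: short-tenure, high-charge customers churn (y = 1).
X = [[1, 80], [2, 90], [30, 20], [40, 30], [3, 85], [50, 25]]
y = [1, 1, -1, -1, 1, -1]
model = fit_boosted(X, y)
```

The real library adds regularization, second-order gradients and efficient tree construction, but the residual-fitting loop is the same idea.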
Data Mining in Telecommunication Industryijsrd.com
Telecommunication companies today operate in a highly competitive and challenging environment. Vast volumes of data are generated from various operational systems and are used to solve many business problems that require urgent handling. These data include call detail data, customer data and network data. Data mining methods and business intelligence technology are widely used to handle business problems in this industry. The goal of this paper is to provide a broad review of data mining concepts.
Customer churn classification using machine learning techniquesSindhujanDhayalan
Advanced data mining project on classifying customer churn using machine learning algorithms such as random forest, C5.0, decision tree, KNN, ANN and SVM. The CRISP-DM approach was followed in developing the project. Accuracy rate, error rate, precision, recall, F1 and the ROC curve were generated using R, and the most efficient model was found by comparing these values.
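The evaluation metrics used above (accuracy, error rate, precision, recall, F1) all derive from the confusion matrix and are easy to compute by hand. A minimal sketch in plain Python (the project itself used R; the toy labels below are illustrative only):

```python
def confusion_metrics(y_true, y_pred, positive=1):
    """Accuracy, error rate, precision, recall and F1 from two label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = len(y_true) - tp - fp - fn
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "error": 1 - accuracy,
            "precision": precision, "recall": recall, "f1": f1}

# Example: 1 = churner, 0 = non-churner.
m = confusion_metrics([1, 1, 1, 0, 0, 0, 1, 0],
                      [1, 0, 1, 0, 0, 1, 1, 0])
```

Comparing models on several of these metrics at once, as the project does, guards against a classifier that scores well on accuracy alone by ignoring the minority churn class.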
Data Mining on Customer Churn ClassificationKaushik Rajan
Implemented multiple classifiers to predict whether a customer will leave or stay with the company based on multiple independent variables.
Tools used:
> RStudio for exploratory data analysis, data pre-processing and building the models
> Tableau and RStudio for visualization
> LaTeX for documentation
Machine learning models used:
> Random Forest
> C5.0
> Decision tree
> Neural Network
> K-Nearest Neighbour
> Naive Bayes
> Support Vector Machine
Methodology: CRISP-DM
An efficient data pre processing frame work for loan credibility prediction s...eSAT Journals
Abstract
In today's world, data mining has become very popular across applications, especially in the banking industry. We have too much data and too much technology, but not enough useful information; this is why we need the data mining process. The importance of data mining is increasing, and studies have been carried out in many domains to solve numerous problems using various data mining techniques. Preparing data for data mining is the most important and time-consuming phase. In developing countries like India, bankers should be vigilant about fraudsters, because they create serious problems for banking organizations. By applying data mining techniques, it is possible to build a successful predictive model that helps bankers take the proper decisions. This paper covers the set of techniques under the umbrella of data preprocessing, based on a case study of bank loan transaction data. The proposed model helps distinguish borrowers who repay loans promptly from those who do not. The framework helps organizations implement better CRM through better predictive ability.
Keywords: Data preprocessing, Customer behavior, Input columns, Outlier columns, Target column, Dataset, CRM
Predicting churn with filter-based techniques and deep learningIJECEIAES
Customer churn prediction is of utmost importance in the telecommunications industry. Retaining customers through effective churn prevention strategies proves to be more cost-efficient. In this study, attribute selection analysis and deep learning are integrated to develop a customer churn prediction model to improve performance while reducing feature dimensions. The study includes the analysis of customer data attributes, exploratory data analysis, and data preprocessing for data quality enhancement. Next, significant features are selected using two attribute selection techniques, which are chi-square and analysis of variance (ANOVA). The selected features are fed into an artificial neural network (ANN) model for analysis and prediction. To enhance prediction performance and stability, a learning rate scheduler is deployed. Implementing the learning rate scheduler in the model can help prevent overfitting and enhance convergence speed. By dynamically adjusting the learning rate during the training process, the scheduler ensures that the model optimally adapts to the data while avoiding overfitting. The proposed model is evaluated using the Cell2Cell telecom database, and the results demonstrate that the proposed model exhibits a promising performance, showcasing its potential as an effective churn prediction solution in the telecommunications industry.
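The two ingredients highlighted above, ANOVA-based feature scoring and a learning-rate scheduler, can both be sketched in a few lines of plain Python. This is a toy illustration, not the paper's pipeline: a real implementation would use scikit-learn's `f_classif` for feature selection and a framework callback (e.g. a Keras `LearningRateScheduler`) during ANN training.

```python
import math

def anova_f(groups):
    """One-way ANOVA F statistic: between-group vs. within-group variance.
    `groups` is a list of value lists, one per class; a high F means the
    feature separates the classes well and is worth keeping."""
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def step_decay(initial_lr, drop=0.5, epochs_per_drop=10):
    """A simple step-decay schedule: multiply the learning rate by `drop`
    every `epochs_per_drop` epochs, slowing updates as training converges."""
    def schedule(epoch):
        return initial_lr * drop ** math.floor(epoch / epochs_per_drop)
    return schedule

# Feature values for churners vs. non-churners (hypothetical numbers):
f_score = anova_f([[1, 2, 3], [7, 8, 9]])
sched = step_decay(0.1)
```

Features with high F scores are retained and fed to the ANN; the schedule is what lets the model take large steps early and fine-grained steps later, which is the overfitting/convergence trade-off the abstract describes.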
Content-Based Image Retrieval (CBIR) systems have been used for searching relevant images in various research areas. In CBIR systems, features such as shape, texture and color are used, and the extraction of these features is the main step on which the retrieval results depend. Color features in CBIR are represented by the color histogram, color moments and the conventional color correlogram; color space selection is used to represent the color information of the pixels of the query image. Shape is the basic characteristic of the segmented regions of an image. Different methods have been introduced for better retrieval using different shape representation techniques; earlier, global shape representations were used, but over time the field has moved towards local shape representations. Local shape features may be derived from texture properties and color derivatives. Texture features have been used for document images, segmentation-based recognition, and satellite images, and are used in different CBIR systems along with color, shape, geometrical structure and SIFT features.
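The color histogram mentioned above is the simplest of these features: quantize each RGB channel into a few levels, count pixels per bucket, and compare two images by histogram intersection. A minimal sketch (pixels are given as plain RGB tuples; a real system would read images with a library such as Pillow):

```python
def color_histogram(pixels, bins=4):
    """Normalized color histogram: each 0-255 channel quantized into
    `bins` levels, giving bins**3 buckets."""
    hist = [0] * bins ** 3
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    total = len(pixels)
    return [h / total for h in hist]

def intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]; 1 means identical."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# A mostly-red and a mostly-blue toy "image":
hr = color_histogram([(250, 10, 10)] * 4)
hb = color_histogram([(10, 10, 250)] * 4)
```

Retrieval then ranks database images by similarity to the query histogram; moments and correlograms refine this by capturing distribution shape and spatial color co-occurrence.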
Cyber attacks have become most prevalent in the past few years. During this time, attackers have discovered new vulnerabilities to carry out malicious activities on the internet, and both clients and servers have been victimized. Clickjacking is one of the attacks adopted by attackers to deceive innocuous internet users into initiating some action. The clickjacking attack exploits a vulnerability existing in web applications: it uses a technique that allows cross-domain attacks with the help of user-initiated clicks and performs unintended actions. This paper traces the vulnerabilities that make a website susceptible to clickjacking and proposes a solution for the same.
Performance Analysis of Audio and Video Synchronization using Spreaded Code D...Eswar Publications
Audio and video synchronization plays an important role in speech recognition and multimedia communication. Audio-video sync is a significant problem in live video conferencing, caused by the various hardware components and software environments that introduce variable delay. The objective of synchronization is to preserve the temporal alignment between the audio and video signals. This paper proposes audio-video synchronization using a spreading-codes delay measurement technique. The proposed method, evaluated on a home database, achieves 99% synchronization efficiency. The audio-visual signature technique provides a significant reduction in audio-video sync problems and enables effective performance analysis of audio and video synchronization. This paper also implements an audio-video synchronizer and analyses its performance in terms of synchronization efficiency, audio-video time drift and audio-video delay. The simulation is carried out using MATLAB and Simulink; the system automatically estimates and corrects the timing relationship between the audio and video signals, maintaining quality of service.
Given the complicated devices available in industry, models are being developed that give consumers lower resource costs. Home automation systems have been developed by several researchers, but their limitations include architectural complexity, high equipment cost and inflexible interfaces. This paper proposes a home automation system based on the PIC16F72 microcontroller that is secure, cost-efficient and flexible. The system can control various home appliances such as fans, bulbs and tube lights. The paper describes the components used and how they are connected. The home automation system uses an Android app entitled "Home App", which gives flexibility and an easy-to-use GUI.
Semantically Enhanced Personalised Adaptive E-Learning for General and Dysle...Eswar Publications
E-learning plays an important role in providing required and well-formed knowledge to a learner. The medium of e-learning has achieved advances in various fields, such as adaptive e-learning systems. Enhancing e-learning semantically can improve the retrieval and adaptability of the learning curriculum. This paper provides a semantically enhanced, module-based e-learning system for a computer science programme from a learner-centric perspective. Learners are categorized by proficiency to provide a personalized learning environment. Learning disorders on e-learning platforms still require much research; therefore, this paper also provides a personalized assessment theoretical model for alphabet learning with learning objects for children who face dyslexia.
Agriculture plays an important role in the economy of our country. Over 58 percent of rural households depend on the agriculture sector as their means of livelihood, and agriculture is one of the major contributors to Gross Domestic Product (GDP). Seeds are the soul of agriculture. This application reduces the time for researchers as well as farmers to determine seedling parameters. It helps farmers know the percentage of seedlings that will grow, which is essential in estimating the yield of a particular crop. Manual calculation may lead to errors; to minimize them, the developed app is used. Scientists and farmers need the app to determine physiological seed quality parameters and to take decisions regarding their farming activities. In this article, a desktop app for calculating seed germination percentage and vigour index is developed in the PHP scripting language.
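The two quantities the app computes follow standard formulas: germination percentage is germinated seeds over seeds sown, and the commonly used seedling vigour index (Abdul-Baki and Anderson) multiplies germination percentage by mean seedling length. A sketch in Python rather than the app's PHP, with illustrative numbers:

```python
def germination_percentage(germinated, total):
    """Percentage of sown seeds that germinated."""
    return 100.0 * germinated / total

def vigour_index(germ_pct, mean_seedling_length_cm):
    """Vigour Index I: germination percentage x mean seedling length (cm)."""
    return germ_pct * mean_seedling_length_cm

# Example: 80 of 100 seeds germinate, mean seedling length 12.5 cm.
g = germination_percentage(80, 100)
v = vigour_index(g, 12.5)
```

Automating exactly this arithmetic is what removes the manual-calculation errors the article mentions.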
What happens when adaptive video streaming players compete in time-varying ba...Eswar Publications
Competition among adaptive video streaming players severely diminishes user-QoE. When players compete at a bottleneck link many do not obtain adequate resources. This imbalance eventually causes ill effects such as screen flickering and video stalling. There have been many attempts in recent years to overcome some of these problems. However, added to the competition at the bottleneck link there is also the possibility of varying network bandwidth which can make the situation even worse. This work focuses on such a situation. It evaluates current heuristic adaptive video players at a bottleneck link with time-varying bandwidth conditions. Experimental setup includes the TAPAS player and emulated network conditions. The results show PANDA outperforms FESTIVE, ELASTIC and the Conventional players.
WLI-FCM and Artificial Neural Network Based Cloud Intrusion Detection SystemEswar Publications
Security and performance are the major issues that have to be addressed in cloud computing, and intrusion is one basic and imperative security problem. Consequently, it is essential to create an Intrusion Detection System (IDS) that detects both inside and outside attacks with high precision in a cloud environment. In this paper, a cloud intrusion detection system at the hypervisor layer is developed and assessed to detect malicious activities in the cloud computing environment. The system uses a hybrid algorithm, a fusion of the WLI-FCM clustering algorithm and a back-propagation artificial neural network, to improve detection accuracy. The proposed system is implemented and compared with k-means and classic FCM, using DARPA's KDD Cup 1999 dataset for simulation. The detailed performance analysis shows that the proposed system detects anomalies with high detection accuracy and a low false alarm rate.
Spreading Trade Union Activities through Cyberspace: A Case StudyEswar Publications
This report presents the outcome of an investigative study conducted to examine the modus operandi of the Academic Staff Union of Polytechnics (ASUP), YabaTech chapter. The investigation covered the logistics and cost implications of spreading union activities among members. It was discovered that the cost of managing and disseminating information to members was high, and that logistics problems caused loss of information in transit, cutting some members off from union activities. To curtail the problems identified, we proposed the design of a secure and dynamic website for spreading union activities among members and the public. The proposed system was implemented using HTML5 technology and interface frameworks such as Bootstrap and jQuery, which enable the responsive features of the application interface. The backend was designed using PHP and MySQL. Evaluation of the new system showed that the cost of managing information has been reduced considerably, and the logistics problems identified in the old system are no longer an issue.
Identifying an Appropriate Model for Information Systems Integration in the O...Eswar Publications
Nowadays organizations use information systems to optimize processes in order to increase coordination and interoperability across the organization. Since the oil and gas industry is one of the largest industries in the world, its information systems (IS), consisting of three categories (field IS, plant IS and enterprise IS), need to be compatible in order to create interoperability and, as a result, optimized processes. In this paper we introduce different models of information systems integration, identify the types of information systems used in the upstream and downstream sectors of the petroleum industry, and finally, based on experts' opinions, identify a suitable model for information systems integration in this industry.
Link-and Node-Disjoint Evaluation of the Ad Hoc on Demand Multi-path Distance...Eswar Publications
This work illustrates the AOMDV routing protocol; its ancestor, the AODV routing protocol, is also described. This tutorial demonstrates how forward and reverse paths are created by the AOMDV routing protocol. Loop-free path formation is described, together with node- and link-disjoint paths. Finally, the performance of the AOMDV routing protocol is investigated along link- and node-disjoint paths. In terms of energy consumption, a WSN running AOMDV with link-disjoint paths performs better than one using node-disjoint paths.
Bridging Centrality: Identifying Bridging Nodes in Transportation NetworkEswar Publications
To identify the importance of a node in a network, several centralities are used. The majority of these centrality measures are dominated by component degree, owing to their focus on network topology. We propose a centrality measure, bridging centrality, based on both information flow and topological aspects. We apply bridging centrality to real-world networks, including a transportation network, and show that the nodes distinguished by bridging centrality are well located at the connecting positions between highly connected regions. Bridging centrality can discriminate bridging nodes, i.e. nodes with more information flowing through them that lie between highly connected regions, while other centrality measures cannot.
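A common formulation of bridging centrality combines the two aspects the abstract names: information flow (betweenness) multiplied by a topological bridging coefficient (a node's inverse degree divided by the sum of its neighbours' inverse degrees). A self-contained sketch under that assumption, using a naive all-pairs BFS for betweenness on a small unweighted graph:

```python
from collections import deque

def bfs_counts(adj, s):
    """Distances and shortest-path counts from s (unweighted BFS)."""
    dist, sigma, q = {s: 0}, {s: 1}, deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                sigma[w] = 0
                q.append(w)
            if dist[w] == dist[u] + 1:
                sigma[w] += sigma[u]
    return dist, sigma

def bridging_centrality(adj):
    """Betweenness x bridging coefficient for each node of an undirected graph."""
    nodes = list(adj)
    info = {v: bfs_counts(adj, v) for v in nodes}
    between = {v: 0.0 for v in nodes}
    for i, s in enumerate(nodes):
        d_s, sig_s = info[s]
        for t in nodes[i + 1:]:
            total = sig_s.get(t, 0)
            if not total:
                continue
            for v in nodes:
                if v in (s, t) or v not in d_s:
                    continue
                d_v, sig_v = info[v]
                # v lies on an s-t shortest path iff distances add up
                if t in d_v and d_s[v] + d_v[t] == d_s[t]:
                    between[v] += sig_s[v] * sig_v[t] / total
    result = {}
    for v in nodes:
        coeff = (1.0 / len(adj[v])) / sum(1.0 / len(adj[u]) for u in adj[v])
        result[v] = between[v] * coeff
    return result

# Two triangles joined by a single bridge node 'x':
adj = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b', 'x'},
       'x': {'c', 'd'},
       'd': {'x', 'e', 'f'}, 'e': {'d', 'f'}, 'f': {'d', 'e'}}
scores = bridging_centrality(adj)
```

On this graph the bridge node `x` scores highest, matching the claim that the measure picks out connecting positions between highly connected regions.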
Nowadays we are living in an era of information technology in which each and every person becomes an IT user, intentionally or unintentionally. Technology has played a vital role in our day-to-day life for the last few decades, and we all depend on it to obtain maximum benefit and comfort. This new era, equipped with the latest advances in technology, is enlightening the world in the form of the Internet of Things (IoT). The Internet of Things is a domain in which each object can perform some task while communicating with other objects: a world full of devices, sensors and other objects that communicate and make human life far better and easier than ever. This paper provides an overview of current research work on IoT in terms of architecture, technologies used and applications, and highlights the issues related to the technologies used for IoT after a literature review of research work. The main purpose of this survey is to present the latest technologies and their corresponding trends in the field of IoT in a systematic manner; it should be helpful for further research.
Automatic Monitoring of Soil Moisture and Controlling of Irrigation SystemEswar Publications
In the past couple of decades there has been rapid growth in the field of agricultural technology. Utilization of a proper drip irrigation method is very reasonable and proficient. Various drip irrigation methods have been proposed, but they have been found to be expensive and cumbersome to use. In a conventional drip irrigation system, the farmer has to keep watch on the irrigation schedule, which differs for different types of crops. Remotely monitored embedded systems for irrigation have become essential for farmers to save energy, time and money, irrigating only when water is required. In this approach, soil data on chemical constituents, water content, salinity and fertilizer requirements are collected wirelessly and processed for a better drip irrigation plan. This paper reviews different monitoring systems and proposes an automatic monitoring system model using a Wireless Sensor Network (WSN) that helps the farmer improve yield.
Multi- Level Data Security Model for Big Data on Public Cloud: A New ModelEswar Publications
With the advent of cloud computing, big data has emerged as a very crucial technology. Certain types of cloud provide consumers with free services such as storage and computational power. This paper makes use of infrastructure as a service, where the storage service from public cloud providers is leveraged by an individual or organization. The paper emphasizes a model that can be used by anyone without any cost: confidential data can be stored without security issues, as the data is altered in such a way that it cannot be understood by an intruder, while the user can still retrieve the original data in no time. The proposed security model effectively and efficiently provides robust security both while data is on the cloud infrastructure and while data is being migrated to or from it.
Impact of Technology on E-Banking; Cameroon PerspectivesEswar Publications
The financial services industry is experiencing rapid changes in service delivery and channel usage. Financial companies and users of financial services are watching new technologies as they emerge and deciding whether or not to embrace them and the opportunities they offer to save and manage enormous amounts of time, cost and stress.
There is no doubt about the favourable and manifold impact of technology on e-banking, as pictured in this review paper: almost all banks have at least basic e-banking technological equipment such as ATMs and cards. On the other hand, cheap and readily available technology has opened favourable competition in the e-banking services business, with a wide range of competitors challenging commercial banks in Cameroon in providing digital financial services.
Classification Algorithms with Attribute Selection: an evaluation study using...Eswar Publications
Attribute or feature selection plays an important role in the process of data mining. In general, a data set contains many attributes, but not all of them are relevant for effective classification. Attribute selection is a technique used to rank attributes. This paper therefore presents a comparative evaluation study of classification algorithms before and after attribute selection using the Waikato Environment for Knowledge Analysis (WEKA). The study concludes that the performance metrics of the classification algorithms improve after attribute selection, which also reduces the work of processing irrelevant attributes.
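One of the standard attribute-ranking criteria behind such studies (used, for example, by WEKA's InfoGainAttributeEval) is information gain: how much an attribute reduces the entropy of the class label. A minimal sketch in Python with a hypothetical toy data set, where attribute `A` perfectly determines the class and attribute `B` is constant and therefore irrelevant:

```python
import math
from collections import Counter, defaultdict

def entropy(labels):
    """Shannon entropy of a class-label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy reduction achieved by splitting on one attribute."""
    base = entropy(labels)
    groups = defaultdict(list)
    for row, y in zip(rows, labels):
        groups[row[attr]].append(y)
    remainder = sum(len(g) / len(labels) * entropy(g)
                    for g in groups.values())
    return base - remainder

rows = [{'A': 1, 'B': 0}, {'A': 1, 'B': 0},
        {'A': 0, 'B': 0}, {'A': 0, 'B': 0}]
labels = [1, 1, 0, 0]
```

Ranking attributes by this score and keeping only the top ones is exactly what lets the downstream classifier skip irrelevant attributes, which is the improvement the evaluation study reports.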
Mining Frequent Patterns and Associations from the Smart meters using Bayesia...Eswar Publications
In today's world, migration of people from rural to urban areas is quite common. Health care services are among the most challenging requirements for people in poor health. Advances in technology have led to smart homes, which contain various sensors and smart meter devices to automate other electronic devices. Additionally, these smart meters can capture the daily activities of patients and monitor their health conditions through the frequent patterns and association rules mined from the meter data. In this work we propose a model that can monitor the activities of patients at home and send the daily activities to the corresponding doctor. We extract frequent patterns and association rules from the log data, predict the health conditions of the patients, and give suggestions according to the prediction. Our work is divided into three stages. First, we record the daily activities of the patient over a specific time period at three regular intervals. Second, we apply frequent pattern growth to extract association rules from the log file. Finally, we apply k-means clustering to the input and a Bayesian network model to predict the health behaviour of the patient, and precautions are given accordingly.
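The core of the second stage, finding activity sets that recur in many daily logs, can be illustrated with brute-force support counting. This is a stand-in for the FP-growth algorithm the paper actually uses (FP-growth avoids enumerating all candidate sets), and the activity names below are hypothetical:

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support, max_size=3):
    """Return every itemset (up to max_size) whose support, i.e. the
    fraction of transactions containing it, is at least min_support."""
    n = len(transactions)
    freq = {}
    for size in range(1, max_size + 1):
        counts = Counter()
        for t in transactions:
            for combo in combinations(sorted(t), size):
                counts[combo] += 1
        for items, c in counts.items():
            if c / n >= min_support:
                freq[items] = c / n
    return freq

# Four days of (hypothetical) activity logs from smart-meter events:
days = [{'wake', 'medicine', 'breakfast'},
        {'wake', 'breakfast'},
        {'wake', 'medicine'},
        {'wake', 'breakfast', 'walk'}]
freq = frequent_itemsets(days, min_support=0.75)
```

A sudden drop in the support of a normally frequent pattern (say, the patient stops taking breakfast after waking) is the kind of deviation the model would flag to the doctor.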
Network as a Service Model in Cloud Authentication by HMAC AlgorithmEswar Publications
The cloud is the pay-as-you-use, internet-accessed, resource-pooling technology that now dominates the IT field. At present every organization trusts the web; however, information must flow without being held, so all customers have to use the cloud while it processes information through security protocols. Third-party observation, and in certain circumstances direct interception, threatens packets in flow and at rest in the virtual private cloud. Global security statistics for the year 2017 put hacking of sensitive information in the cloud at approximately 75.35%, and security analysts say this figure may reach 100%. For this reason, this research work concentrates on an authentication message-digest key for authenticated routing of Network-as-a-Service packets in OSPF (Open Shortest Path First), implemented in a cloud with GNS3 and tested for protection against attackers.
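The message-digest authentication described here works by attaching a keyed digest to each routing packet so a receiver sharing the key can detect tampering or forgery. OSPF's classic message-digest authentication uses keyed MD5; the sketch below uses Python's standard `hmac` module with SHA-256 as a modern stand-in, and the key and payload are illustrative only:

```python
import hmac
import hashlib

TAG_LEN = 32  # SHA-256 digest length in bytes

def sign_packet(key: bytes, payload: bytes) -> bytes:
    """Append a keyed digest so the receiver can verify integrity
    and authenticity of the routing payload."""
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return payload + tag

def verify_packet(key: bytes, packet: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    payload, tag = packet[:-TAG_LEN], packet[-TAG_LEN:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

key = b'shared-router-key'           # hypothetical pre-shared key
pkt = sign_packet(key, b'LSA:10.0.0.0/24 via R2')
```

A packet altered in transit, or signed under the wrong key, fails verification, which is what stops an attacker from injecting forged routing updates.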
Microstrip patch antennas are widely used in wireless detection applications due to their low power consumption, low cost, versatility, field excitation and ease of fabrication. Microstrip patch antennas, also called printed antennas, suffer from narrow bandwidth, particularly in array configurations. To overcome this drawback, flame-retardant FR-4 material is used as the substrate; a rectangular microstrip patch antenna (RMPA) with an FR-4 substrate is well suited to explosive detection applications. The proposed printed antenna was designed with dimensions of 60 x 60 mm2. The FR-4 material has a dielectric constant of 4.3, thickness of 1.56 mm, and length and width of 60 mm each. One side of the substrate carries the copper ground plane of dimensions 60 x 60 mm2; the other side carries the copper patch of dimensions 34 x 29 mm2 and thickness 0.03 mm. RMPA structures without a slot and with a vertical slot, double horizontal slot and centre slot were designed, and the performance of the antennas was analysed in terms of gain, directivity, E-field, VSWR and return loss. From the performance analysis, the double-horizontal-slot RMPA provides the best result, with maximum gain (8.61 dB) and minimum return loss (-33.918 dB). Based on the E-field excitation value, the SEMTEX explosive material is detected; the design was simulated using CST software.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
Optimized Feature Extraction and Actionable Knowledge Discovery for Customer Relationship Management

Int. J. Advanced Networking and Applications, Volume: 08, Issue: 04, Pages: 3161-3168 (2017), ISSN: 0975-0290

Dr P. Senthil Vadivu
Associate Professor & Head, Dept. of Computer Applications (UG), Hindusthan College of Arts and Science, Coimbatore-641028
Email: sowjupavan@gmail.com
-------------------------------------------------------------------ABSTRACT---------------------------------------------------------------
In today's dynamic marketplace, telecommunication organizations, both private and public, are increasingly moving away from antiquated marketing philosophies and strategies toward more customer-driven initiatives that seek to understand, attract, retain and build intimate long-term relationships with profitable customers. This paradigm shift has undoubtedly led to growing interest in Customer Relationship Management (CRM) initiatives that aim at ensuring customer identification and interaction. The urgent market requirement is to identify automated methods that can assist businesses in the complex task of predicting customer churn. The immediate requirement of the market is to have systems that can perform accurate
(i) identification of loyal customers (so that companies can offer more services to retain them), and
(ii) prediction of churners, ensuring that only the customers who are planning to switch their service providers are targeted for retention.

Keywords: CRM, CLA-AKD, LOF, ACO, UDA
--------------------------------------------------------------------------------------------------------------------------------------------------
Date of Submission: Feb 09, 2017          Date of Acceptance: Feb 17, 2017
--------------------------------------------------------------------------------------------------------------------------------------------------
1. INTRODUCTION
To design and develop such systems, this work combines data mining algorithms with decision making by formulating the decision-making problems directly on top of the data mining results in a post-processing step. These systems can discover loyal customers and churning customers, and can produce a set of actions that can be applied to transform churning customers into loyal customers. Such systems can effectively provide intelligent CRM for the telecommunication industry. This introduction presents the various topics related to this research.
2. CUSTOMER RELATIONSHIP MANAGEMENT (CRM)
The main goal of CRM is to aid organizations in better understanding each customer's value to the company, while improving the efficiency and effectiveness of communication. It is one of the most helpful analysis tasks used by companies to maintain loyal customers.

Figure 2: Work Flow of Customer Relationship Management (CRM) (Source: http://www.zoho.com/crm/how-crm-works.html)

There are three aspects of CRM that can be used: operational CRM, analytical CRM and collaborative CRM. Operational CRM is concerned with analyzing and improving operations such as (i) marketing automation, (ii) sales force automation and (iii) customer service and self-service. Analytical CRM uses data warehousing and data mining techniques to improve company-customer relationships. Collaborative CRM, as the name suggests, integrates operational CRM with analytical CRM.
3. CRM AND TELECOM INDUSTRY
Telecom is a huge and varied bastion of technologies, companies, services and politics that is truly global in nature. In the telecom industry, customer service is key to sales and loyalty, and customer service has become the differentiator between competitors.
Telecom industries use data mining techniques with the aim of constructing customer profiles, which predict the characteristics of customers of certain classes. An example of such a class is: what kind of customers (described by their attributes such as age, income, etc.) are likely attritors (who will go to competitors), and what kind are loyal customers?
4. CUSTOMER CHURNING
Customer churn, also known as customer attrition, customer turnover, or customer defection, is the loss of clients or customers (http://en.wikipedia.org/wiki/Customer_attrition).

Figure 4: Causes for Churn (Source: Chu et al., 2007)
5. ACTIONABLE KNOWLEDGE
Actionable knowledge discovery is critical in promoting and realizing the productivity of data mining and knowledge discovery for smart business operations and decision making. In general, telecom industries use algorithms and tools which focus only on discovering patterns that identify loyalty and churners. In most companies, the processes of identifying loyal customers and predicting churners are performed by human experts, who use visualization results and interestingness rankings to discover knowledge manually.
5.1 METHODOLOGY
The main challenge in this work is to find techniques that bridge the two fields, data mining and telecommunication, for efficient and successful knowledge discovery. The eventual goal of this data mining effort is to identify factors that will improve the quality and cost effectiveness of customer care (Yu and Ying, 2009).
Integration of decision support with computer-based CRM can reduce errors, enhance customer care, decrease customer churn and improve customer loyalty (Wu et al., 2002). The main goal of using data mining techniques in CLA-AKD is to identify those customers who share common attributes, which can be interpreted to group them as either loyal customers or churners.
In this work, two types of operations are performed on the customer database:
(i) Customer Loyalty Assessment: to recognize and classify customers with multivariate attributes as loyal customers (non-churners) and churners.
(ii) Actionable Knowledge Discovery: to predict a set of actions that can be used to convert churners into valuable loyal customers.
Both operations can be efficiently performed using cluster analysis and classification.

Figure 5.1: Architecture of CLA-AKD Model

The proposed CLA-AKD model performs customer loyalty assessment and actionable knowledge discovery in three steps, namely,
(i) Preprocessing
(ii) Customer Loyalty Assessment
(iii) Actionable Knowledge Discovery.
5.2 PHASE I: DATA CLEANING OPERATIONS
Data cleaning routines attempt to smooth out noise while identifying outliers, fill in missing values and correct inconsistencies in the data. The amount of time and cost spent on preprocessing has a direct impact on the accuracy (Figure 1.6.1).
Figure 1.6.1: Preprocessing and Accuracy Tradeoff
5.2.1 OUTLIER DETECTION AND REMOVAL
Outlier detection and removal is a process considered very challenging and important in all data mining applications. The first task of this phase concentrates on detecting noise or outliers in the telecom dataset and removing them to obtain a clean dataset. For this purpose, an enhanced density-based outlier detection algorithm using the Local Outlier Factor (LOF) is proposed.
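The paper's enhanced LOF (ELOF) is not specified here, but the baseline density-based detection it builds on can be sketched with scikit-learn's `LocalOutlierFactor`. The two-attribute "customer" data below is a synthetic stand-in, not the paper's telecom dataset:

```python
# Baseline LOF outlier removal sketch (not the paper's ELOF variant).
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(42)
# Synthetic customer records: two numeric attributes (e.g. usage, bill)
inliers = rng.normal(loc=50.0, scale=5.0, size=(200, 2))
outliers = rng.uniform(low=0.0, high=100.0, size=(5, 2))
X = np.vstack([inliers, outliers])

lof = LocalOutlierFactor(n_neighbors=20)   # density estimated from 20 neighbours
labels = lof.fit_predict(X)                # -1 = outlier, 1 = inlier
clean = X[labels == 1]                     # records kept for the later phases
print(len(X) - len(clean), "records flagged as outliers")
```

Records whose local density is much lower than that of their neighbours receive a high LOF score and are dropped before clustering and classification.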
5.3 PHASE II: CUSTOMER LOYALTY ASSESSMENT MODELS
Frameworks for predicting customers' future behavior have captured the attention of several researchers and academicians in recent years, as early detection of future customer churn is one of the most wanted CRM strategies. The future of a company depends on the stable income provided by loyal customers. Moreover, attracting new customers is a difficult and costly task involving market research, advertisement and promotional expenses.
5.3.1 Step 1: Clustering
Typical pattern clustering activity involves the following steps (Jain and Dubes, 1988):
(i) pattern representation (optionally including feature extraction and/or selection),
(ii) definition of a pattern proximity measure appropriate to the data domain,
(iii) clustering or grouping,
(iv) data abstraction (if needed), and
(v) assessment of output (if needed).
Figure 1.6.2.1: Data Clustering
5.3.2 Step 2: Classification
Classification is the process of finding a set of models (or
functions) that describe and distinguish data classes or
concepts, for the purpose of being able to use the model to
predict the class of objects whose class label is unknown.
The derived model is based on the analysis of a set of
training data.
Figure 1.6.2.2: General Process of Classification
5.4 PHASE III: ACTIONABLE KNOWLEDGE DISCOVERY MODELS
The proposed AKD model consists of four steps. The first step implements one of two dimensionality reduction algorithms, namely ACO (Ant Colony Optimization) or UDA (Uncorrelated Discriminant Analysis), to reduce the dataset size. The second step builds customer profiles from the training data, using two classifiers, a decision tree
learning algorithm and a Bayesian Network classifier. These two algorithms were selected because of the efficiency they have shown in churn prediction. The third step searches for optimal actions for each customer; this is a key component of the system's proactive solution. The fourth step produces reports for domain experts to review the actions and selectively deploy them.
6. CUSTOMER LOYALTY ASSESSMENT MODEL
In the telecommunication industry, 'customer churn' denotes customer movement from one service provider to another. Customer Churn Management (CCM) is the study of algorithms that process customer data and find methods to identify loyal customers (Berson et al., 2000) and take actions to secure these customers to the same company (Kentrias, 2001). CCM implements techniques that have the ability to forecast a customer's decision to shift from one service provider to another. These techniques use segmentation operations to segment customers in terms of their profitability and then apply retention management that takes appropriate actions to move churners into the loyal customer segment.
6.1 CLA MODEL
The steps involved in the proposed customer loyalty assessment model are presented in Figure 6.1. The proposed model follows a two-step procedure. In the first step, a SOM-(KMeans + DBSCAN) method is proposed. This method combines SOM (Self-Organizing Map) with a hybrid clustering algorithm that combines the partition-based clustering algorithm (KMeans) with a density-based clustering algorithm (DBSCAN). This algorithm analyzes the customer database and segments the customers into two distinct groups with similar characteristics and behaviors: loyal customers (non-churners) and churners.
Figure 6.1 : Customer Loyalty Prediction Model
6.2 SELF-ORGANIZING MAPS
The Self-Organizing Map (SOM) is a widely used unsupervised neural network that has found several applications involving clustering (Han and Kamber, 2000). Example applications include industry applications (Kaski et al., 1998) and market segmentation (Oja et al., 2002). It has the advantage of projecting the relationships between high-dimensional data onto a two-dimensional display, where similar input records are located close to each other. By adopting an unsupervised learning paradigm, the SOM conducts clustering tasks in a completely data-driven way (Kohonen et al., 1996); that is, it works with little a priori information or assumptions concerning the input data. In addition, the SOM's capability of preserving the topological relationships of the input data and its excellent visualization features motivated this research work to adopt SOM in the design of CLA-AKD.
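The SOM training loop described above (best-matching unit, neighbourhood function, decaying learning rate) can be sketched in plain NumPy. This is an illustrative toy, not the SOM_PAK package cited in the text; the grid size, decay schedule and random data are all assumptions:

```python
# Minimal self-organizing map sketch in NumPy (illustrative only).
import numpy as np

def train_som(data, grid=(4, 4), epochs=30, lr0=0.5, sigma0=1.5, seed=0):
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))   # codebook vectors
    # Grid coordinates of each node, used by the neighbourhood function
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Decay learning rate and neighbourhood radius over time
            frac = step / n_steps
            lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 1e-3
            # Best-matching unit: node whose weight vector is closest to x
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), (h, w))
            # Gaussian neighbourhood pulls nearby nodes toward x
            g = np.exp(-np.sum((coords - bmu) ** 2, axis=-1) / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
            step += 1
    return weights

data = np.random.default_rng(1).random((100, 3))
w = train_som(data)
print(w.shape)   # a 2-D map of 3-dimensional codebook vectors
```

After training, each customer record maps to its best-matching unit, so similar records land on nearby map nodes, which is the topology-preserving projection the text describes.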
6.3 CLUSTERING ALGORITHM
The proposed CLA model uses two algorithms, namely KMeans and DBSCAN, to form a hybrid clustering model. In this study, DBSCAN is enhanced to automatically estimate its parameters. This section presents the working of the KMeans algorithm, the enhanced DBSCAN algorithm and the proposed hybrid algorithm.
6.4 KMeans Algorithm
KMeans clustering generates a specified number of disjoint, flat (non-hierarchical) clusters. It is well suited for generating globular clusters. The KMeans method is an unsupervised, non-deterministic and iterative method with the following properties:
(i) There are always K clusters.
(ii) The clusters are non-hierarchical and they do not overlap.
(iii) Every member of a cluster is closer to its own cluster center than to the center of any other cluster.
6.5 DBSCAN Algorithm
The second algorithm of the hybrid clustering approach is DBSCAN (Density Based Spatial Clustering of Applications with Noise). Density-based clustering, introduced by Ester et al. (1996), locates regions of high density that are separated from one another by regions of low density. DBSCAN is a simple and effective density-based clustering algorithm that has been proven to work well in practice.
6.5.1 HYBRID KMEANS-DBSCAN ALGORITHM
The KMeans clustering algorithm is simple and fast, but its performance degrades in the presence of noise. Conversely, the DBSCAN algorithm performs more than satisfactorily in the presence of noise, but it has high time complexity and is slow. A hybrid model is proposed to take advantage of both the KMeans and DBSCAN algorithms, improving speed while remaining robust against noise. For this purpose, this study proposes a method to combine the KMeans and DBSCAN algorithms. This work enhances the work of El-Sonbaty et al. (2004) and consists of the following three steps:
(i) Use a preprocessing step that reduces the dimensionality of the dataset and partitions it into k parts
(ii) Apply DBSCAN to each partition
(iii) Merge dense regions until the number of clusters is K
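The three steps above can be sketched with scikit-learn. This is a rough approximation under stated assumptions: the partitioning is done with KMeans, the merge step simply clusters region centroids, and all parameter values (k = 8, eps, min_samples, K = 4) and the blob data are illustrative, not the paper's:

```python
# Sketch of the partition -> DBSCAN -> merge idea (illustrative parameters).
import numpy as np
from sklearn.cluster import KMeans, DBSCAN
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=600, centers=4, cluster_std=0.6, random_state=0)

# Step (i): coarse partition of the data into k parts (here via KMeans)
parts = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)

# Step (ii): run DBSCAN inside each partition; noise points get label -1
dense_regions = []
for p in np.unique(parts):
    idx = np.where(parts == p)[0]
    labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X[idx])
    for c in set(labels) - {-1}:
        dense_regions.append(idx[labels == c])   # indices of one dense region

# Step (iii): merge dense regions (here by clustering their centroids) until K remain
K = 4
centroids = np.array([X[r].mean(axis=0) for r in dense_regions])
merged = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(centroids)
final = np.full(len(X), -1)                       # -1 marks DBSCAN noise
for region, m in zip(dense_regions, merged):
    final[region] = m
print(len(dense_regions), "dense regions merged into", K, "clusters")
```

Running DBSCAN on small partitions rather than the whole dataset is what recovers speed, while the density step inside each partition is what filters out noise.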
6.6 CLASSIFICATION ALGORITHMS
This section presents the three classification algorithms,
namely, SVM, BPNN and Decision Trees, that are used to
classify churners as low, medium and high retention
customers.
6.6.1 Support Vector Machine
A key feature of the proposed ensemble model is the integration of Support Vector Machines (SVMs) with clustering. Among the various classifiers available, SVMs are well known for their generalization performance and ability to handle high-dimensional data; a brief general explanation is provided in this section.
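As a generic sketch of SVM-based churn classification (the features, labels and parameters below are synthetic stand-ins, not the paper's dataset or settings):

```python
# Generic SVM churn-classification sketch with scikit-learn (illustrative data).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
# Hypothetical attributes: [monthly_minutes, complaints, tenure_months]
X = rng.normal(size=(400, 3))
y = (X[:, 1] - X[:, 2] + rng.normal(scale=0.3, size=400) > 0).astype(int)  # 1 = churner

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=7)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))  # scaling matters for SVMs
model.fit(X_tr, y_tr)
acc = model.score(X_te, y_te)
print(f"hold-out accuracy: {acc:.2f}")
```

The RBF kernel lets the SVM draw a non-linear boundary between churners and non-churners; standardizing the attributes first keeps any one attribute from dominating the kernel distance.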
6.6.2 Back Propagation Neural Networks
A Back Propagation Neural Network (BPNN) is an artificial neural network in which connections between the units do not form a directed cycle. It is the simplest type of artificial neural network devised: information moves in only one direction, forward, from the input nodes, through the hidden nodes (if any), to the output nodes. There are no cycles or loops in the network. The back propagation algorithm is a common way of training artificial neural networks to perform a given task. It requires a training set for which the desired output for any given input is known or can be calculated. The backpropagation algorithm learns the weights for a multilayer network, given a network with a fixed set of units and interconnections. It employs the gradient descent rule to attempt to minimize the squared error between the network output values and the target values for these outputs.
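A feed-forward network trained by backpropagation can be sketched with scikit-learn's `MLPClassifier`; the one-hidden-layer architecture and synthetic data here are assumptions for illustration, not the paper's network:

```python
# Backpropagation-trained feed-forward network sketch (illustrative data).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # synthetic binary target

# One hidden layer of 8 units; weights learned by gradient-based backpropagation
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=3)
net.fit(X, y)
print(f"training accuracy: {net.score(X, y):.2f}")
```

Each `fit` iteration propagates inputs forward through the layers, measures the error at the output, and propagates gradients backward to adjust the weights, which is exactly the loop the paragraph describes.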
6.6.3 Decision Tree Classifier
Data classification, one of the important tasks of data mining, is the process of finding the common properties among a set of objects in a database and classifying them into different classes (Han et al., 1993). One of the well-accepted classification methods is the induction of decision trees. A decision tree is a flow-chart-like tree structure, where each internal node denotes a test on an attribute, each branch represents an outcome of the test, and the leaf nodes represent classes or class distributions. A typical decision tree consists of three types of nodes:
a. A root node
b. Internal nodes
c. Leaf or terminal nodes
A root node is a node that has no incoming edges and zero or more outgoing edges. Each of the internal nodes has exactly one incoming edge and two or more outgoing edges. Nodes having exactly one incoming edge and no outgoing edges are termed leaf or terminal nodes. An example decision tree is shown in Figure 6.6.3.
In a decision tree, each leaf node is assigned a class label. The non-terminal nodes, which include the root and other internal nodes, contain attribute test conditions to separate records that have different characteristics. Classifying a test record is straightforward once a decision tree has been constructed. Starting from the root node, the test condition is applied to the record and, based on the outcome, the appropriate branch is followed. This leads to either another node, for which a new test condition is applied, or to a leaf node. The class label associated with the leaf node is then assigned to the record.
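The node structure above (attribute tests at the root and internal nodes, class labels at the leaves) can be made concrete with a small scikit-learn tree. The feature names and the churn rule are hypothetical, not drawn from the paper's IDEA dataset:

```python
# Decision tree induction sketch (hypothetical churn features and rule).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(5)
# Hypothetical attributes: [tenure_months, monthly_bill, complaints]
X = rng.uniform(low=[0, 10, 0], high=[72, 120, 10], size=(500, 3))
y = ((X[:, 0] < 12) & (X[:, 2] > 5)).astype(int)   # short tenure + many complaints -> churn

tree = DecisionTreeClassifier(max_depth=3, random_state=5).fit(X, y)
# Root and internal nodes hold attribute tests; leaves hold class labels
print(export_text(tree, feature_names=["tenure", "bill", "complaints"]))
```

The printed tree shows exactly the traversal the text describes: a record is routed left or right at each attribute test until it reaches a leaf, whose class label becomes the prediction.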
Fig. 6.6.3: Example of a general decision tree
7. RESULTS AND DISCUSSION
Churn represents the loss of an existing customer to a competitor. It is a prevalent problem faced by providers of subscription or recurring-purchase services in the telecommunication sector. It is a well-known fact that the cost of customer acquisition and win-back is very high when compared to retaining an existing customer. Thus, customer loyalty assessment for loyal-customer and churn-customer identification and prediction is very important in the telecommunication industry. Another problem faced after identifying churners is deciding what actions to take to convert them into loyal customers. This work has proposed enhanced techniques to perform customer loyalty assessment for identifying loyal customers and churners, and also to perform actionable knowledge discovery that presents a set of actions that can be used to improve customer loyalty.
7.1 EVALUATION OF OUTLIER DETECTION
The results show that the enhanced LOF algorithm is efficient in detecting outliers, both with respect to outlier detection rate and speed. The outlier detection rate decreased marginally with the amount of outliers present. The outlier detection rate is consistent across different levels of randomness, proving that the outlier removal or data cleaning process is efficient. On average, a 1.89% gain in detection rate was observed when using ELOF compared to LOF.
Figure 7.1: NRMSE of Missing Value Handling
Algorithms
Figure7.2: Speed of Missing Value Handling Algorithms
Figure 7.3 : Outlier Detection Rate
Figure 7.4: Speed (Seconds) of Outlier Detection
8. EVALUATION OF ACTIONABLE KNOWLEDGE DISCOVERY MODELS
The bounded segmentation problem for actionable knowledge discovery, along with the proposed AKD systems, was applied to calculate the pre-specified k action sets with maximum profit. Figures 8 and 8.1 show the net profit obtained by the existing and proposed AKD models.
(Figures 7.1-7.4 compare the KI and EKI algorithms on NRMSE and speed, and the LOF and ELOF algorithms on detection rate and speed, as the percentage of outliers introduced varies from 1% to 5%.)
Figure 8 : Speed (Seconds) of Loyalty Assessment Models
Figure8.1: Net Profit Obtained by Actionable Knowledge
Discovery Systems
9. CONCLUSIONS
The performance evaluation was conducted in four stages, each analyzing the performance of one component: the missing value handling algorithm, the outlier detection algorithm, the customer loyalty assessment model and the actionable knowledge discovery system. A telecom customer dataset from IDEA customer care was used during the experiments. The missing value handling algorithm was evaluated using two performance metrics, namely Normalized Root Mean Square Error and speed. Similarly, the outlier detection algorithm was evaluated using outlier detection rate and speed. The customer loyalty assessment procedure used accuracy, error rate and speed to analyze the algorithms. The actionable knowledge discovery system analysis was based on net profit achieved and speed of discovery.
The various experiments conducted proved that the proposed algorithms and the proposed CRM system for customer loyalty assessment and actionable knowledge discovery are efficient. Experimental results showed that the system is effective, in terms of analysis accuracy and speed, in identifying common customer behaviour patterns and predicting future churn. The developed system has promising value in the constantly changing telecommunication industry and can be used effectively by companies to improve customer relationships and business opportunities.
10. FUTURE RESEARCH DIRECTIONS
The following can be considered to improve the proposed customer loyalty assessment model and actionable knowledge discovery system:
(i) The proposed models can be further enhanced if the processes can be parallelized. This is feasible by identifying operations that are independent of each other and proposing a parallel architecture to improve performance.
(ii) The amount of memory used by loyalty assessment and action discovery is another area that can be analyzed in future.
(iii) The classification process can be improved by using advanced techniques like ensemble clustering or ensemble classification.
REFERENCES
1. Berson, A., Smith, S. and Thearling, K. (2000) Building Data Mining Applications for CRM, New York, McGraw-Hill.
2. Chu, B., Tsai, M. and Ho, C. (2007) Toward a hybrid data mining model for customer retention, Knowledge-Based Systems, Vol. 20, No. 8, Pp. 703-718.
3. Ester, M., Kriegel, H-P., Sander, J. and Xu, X. (1996) A density-based algorithm for discovering clusters in large spatial databases with noise, Proceedings of the 2nd ACM SIGKDD, Portland, Oregon, Pp. 226-231.
4. El-Sonbaty, Y., Ismail, M.A. and Farouk, M. (2004) An Efficient Density Based Clustering Algorithm for Large Databases, 16th IEEE International Conference on Tools with Artificial Intelligence, Pp. 673-677.
5. Han, J. and Kamber, M. (2000) Data Mining: Concepts and Techniques, The Morgan Kaufmann Series in Data Management Systems, Morgan Kaufmann.
6. Han, J., Cai, Y. and Cercone, N. (1993) Data driven discovery of quantitative rules in relational databases, IEEE Trans. Knowledge and Data Engineering, Vol. 5, Pp. 29-40.
7. http://en.wikipedia.org/wiki/Customer_attrition, Last Accessed during March 2014.
8. Jain, A.K. and Dubes, R.C. (1988) Algorithms for Clustering Data, Prentice-Hall.
9. Kaski, S., Kangas, J. and Kohonen, T. (1998) Bibliography of Self-Organizing Map (SOM) Papers 1981-1997, Neural Computing Surveys, Vol. 1, Pp. 102-350.
10. Kentrias, S. (2001) Customer relationship management: The SAS perspective, www.cm2day.com, Last Accessed during September 2013.
11. Kohonen, T., Hynninen, J., Kangas, J. and Laaksonen, V. (1996) SOM_PAK: The Self-Organizing Map program package, Helsinki University of Technology, Finland, Report A31, Pp. 1-27.
12. Oja, M., Kaski, S. and Kohonen, T. (2002) Bibliography of Self-Organizing Map (SOM) Papers: 1998-2001 Addendum, Neural Computing Surveys, Vol. 1, Pp. 1-176.
13. Wu, R., Peters, W. and Morgan, M.W. (2002) The Next Generation Clinical Decision Support: Linking Evidence to Best Practice, Journal of Healthcare Information Management, Vol. 16, No. 4, Pp. 50-55.
14. Yu, C. and Ying, X. (2009) Application of Data Mining Technology in E-Commerce, International Forum on Computer Science-Technology and Applications, Vol. 1, Pp. 291-293.
Author Biographies
P. Senthil Vadivu received her Ph.D. (Computer Science) from Avinashilingam University for Women in 2015 and completed her M.Phil. from Bharathiar University in 2006. She is currently working as the Head, Department of Computer Applications, Hindusthan College of Arts and Science, Coimbatore-28. Her research area is decision trees with neural networks.