The document provides an informal technical review of the FCC's planned broadband measurement regime. It notes several issues with the FCC's approach, including that it:
1) Focuses more on political goals of "neutrality" than user experience
2) Does not adequately capture key technical aspects like network variability, burstiness, or what constitutes a "speed"
3) Sets up unrealistic expectations about network performance that may not match actual user experience
4) Creates challenges around accurately measuring performance across different technologies like DSL and cable.
Overall, the review finds that while the FCC addresses some important issues, its approach lacks technical rigor and could yield measurements that provide neither useful nor actionable information.
Essential science for broadband regulation - Martin Geddes
Is 'net neutrality' an objectively measurable thing? The scientific report recently commissioned by Ofcom (the UK telecoms regulator) on Traffic Management Detection says 'no'. Furthermore, 'neutrality' isn't even what we want! This presentation is an annotated version from a webinar that summarises the report and suggests a way out of the 'neutrality' quagmire.
Broadband is a relatively new technology, and its underlying science is still being developed. We have long understood the 'right' units in other engineering disciplines: mass, length, hardness, etc. What is the 'right' unit for supply and demand for broadband?
This presentation discusses the need for having the right metric. This means solving two problems: the 'abstraction' gap, and the 'inference' gap. ∆Q is the ideal metric because it fills both gaps.
Introduction to ΔQ and Network Performance Science (extracts) - Martin Geddes
Introduction and summary sections from a long slide deck (165 slides) on network performance science and the associated mathematical breakthrough that makes it possible.
The document provides an overview of network performance science. It discusses how networks are essentially large distributed computing systems and how operators manufacture performance for distributed applications. It outlines the key activities of measuring, modeling, and managing performance hazards to understand customer experience risks. It notes issues like poor specifications that don't capture the stochastic nature of packet networks, metrics that hide important traffic patterns, non-composable contracts between vendors, and protocols like TCP attempting arbitrage on quality variations in networks.
Addicted to speed: Why broadband service providers need a ‘healthier lifestyle’ - Martin Geddes
Broadband service providers are trapped in a vicious circle of network upgrades where they try to use capacity to fix scheduling problems. To escape this cycle, they need to construct their networks differently to schedule traffic appropriately. The benefits are enormous.
The Ladder: How money and multiplexing are connected - Martin Geddes
This document discusses how network operator costs and revenues are connected through a "ladder" of causal relationships. It explains that revenue comes from delivering good quality experiences to users, which requires sufficient network capacity and flows without excessive packet loss or delay. Costs arise from the physical infrastructure and active network mechanisms needed to support these flows. Multiplexing plays a key role by matching variable demand to fixed network resources, but introduces risks of packet loss and delay if not performed effectively. Predicting and controlling these multiplexing effects is important for maximizing profits while managing costs and risks.
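To make that multiplexing trade-off concrete, here is a minimal single-link simulation sketch in Python (not taken from the deck; the arrival model, buffer size, and load values are invented for illustration). It shows the queueing delay and loss risks that the "ladder" connects to cost and revenue growing non-linearly as offered load approaches the link's capacity:

```python
import random

def simulate_mux(load, service_time=1.0, buffer_packets=20,
                 n_packets=100_000, seed=1):
    """Toy single-server multiplexer: Poisson arrivals share one fixed-rate
    link with a finite buffer. Returns (mean queueing delay, loss fraction)."""
    rng = random.Random(seed)
    clock = 0.0      # time of the current arrival
    free_at = 0.0    # time the link finishes its current backlog
    delays, lost = [], 0
    for _ in range(n_packets):
        clock += rng.expovariate(load / service_time)   # next arrival
        backlog = max(0.0, free_at - clock)             # work queued ahead
        if backlog > buffer_packets * service_time:     # buffer full: drop
            lost += 1
            continue
        delays.append(backlog)                          # wait before service
        free_at = max(free_at, clock) + rng.expovariate(1.0 / service_time)
    return sum(delays) / len(delays), lost / n_packets

for load in (0.5, 0.8, 0.95, 1.1):
    delay, loss = simulate_mux(load)
    print(f"offered load {load:.2f}: mean delay {delay:6.2f}, loss {loss:.2%}")
```

Even this toy model reproduces the qualitative point: near saturation, small increases in demand produce large increases in delay and loss, which is why predicting and controlling multiplexing effects matters commercially.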
Fundamentals of network performance engineering - Martin Geddes
This document introduces network performance engineering and discusses three key concepts: 1) loss and delay accumulate along a network path, 2) the distribution of loss and delay is important, not just averages, and 3) loss and delay can be decomposed into geographic (G), serialisation (S), and variable contention (V) components. It argues this framework provides insights into broadband, LTE, SDN, and NFV that current approaches overlook by focusing on throughput over end-to-end quality of experience. Predictable Network Solutions and Martin Geddes Consulting aim to advance the practice of network performance engineering.
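To make the decomposition concrete: in the ∆Q literature these components are treated as random variables composed along the path, so the notation below is a hedged reconstruction for illustration rather than a quotation from the document.

```latex
% Illustrative decomposition of quality attenuation (notation assumed,
% not quoted from the document):
%   G - fixed geographic/propagation delay of the path
%   S - serialisation delay, a function of packet size s and link rates
%   V - variable delay and loss arising from contention with other traffic
\[
  \Delta Q(s) \;=\; \Delta Q|_{G} \;+\; \Delta Q|_{S}(s) \;+\; \Delta Q|_{V}
\]
% Per-hop contributions accumulate along the path; since each term is a
% distribution, composition is convolution rather than simple addition:
\[
  \Delta Q_{\mathrm{path}} \;=\; \Delta Q_{1} \ast \Delta Q_{2} \ast \cdots \ast \Delta Q_{n}
\]
```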
Network performance optimisation using high-fidelity measures - Martin Geddes
Communications service providers are seeking to increase their profitability and return on assets. Predictable Network Solutions Ltd has the capability to support optimisation beyond traditional approaches to network data analytics. This capability is built around a robust scientific method. CSPs can benefit greatly from enhancing the fidelity of their measurements of critical aspects of network performance; standard techniques fail to capture enough resolution. We have the missing leading-edge measurement capabilities that all CSPs need.
The ISP industry has been selling the public and government on the benefits of 'superfast' broadband. This presentation argues that the goal should instead be 'superfit' broadband.
The Properties and Mathematics of Data Transport Quality - Martin Geddes
A Brief Introduction to ’Quality’ in Data Networks; its Interaction with End User Experience, its Conservation, Propagation, and how it can be Traded, Costed and Managed.
The issue of quality in networks has long been troublesome, resulting in endless deferral. ‘Quality’ and ‘QoS’ were hard issues for the pioneers to deal with, as the underlying mathematics was insufficient to support their ambitions. We have now filled in a significant part of the missing mathematical foundations. The culmination of that work is the ∆Q framework.
As a by-product of this framework, a new approach to sharing quality has become possible: a polyservice network. We believe that this is a significant conceptual and practical advance. However, we have (until now) lacked industry standard terminology to describe it.
This short presentation introduces the idea of a polyservice network, and contrasts it with pre-existing approaches to ‘priority QoS’.
Sample proposal summary for quality arbitrage business unit - Martin Geddes
The telecoms industry is getting to grips with quality and performance. The current system has weak control over quality and many pricing mismatches. As a result, there are arbitrage opportunities everywhere. This presentation for a global telco proposed a new business unit to take advantage of them.
Performance and Supply Chain Management for the Software Telco - Martin Geddes
Many network operators are currently engaged in the transformation to become a ‘software telco’. Programmable networks deliver more efficiency and flexibility from the underlying fixed physical network assets. However, this also introduces new business and technical risks. We look at how to manage the technical issues of the SDN/NFV world.
Navigating the Uncertain World Facing Service Providers - Juniper's Perspective - Juniper Networks
Service providers are facing more and more pressure as customers demand immediacy. Learn how adopting a carrier-grade, open network platform closes the innovation gap to create value for your network. http://juni.pr/1JQZYOl
A SESERV methodology for tussle analysis in Future Internet technologies - In... - ictseserv
This document introduces a methodology for analyzing "tussles" that may occur between stakeholders with differing interests when new internet technologies are introduced. It defines tussles as conflicts that can arise at each stage of a technology's adoption and use. The methodology involves: 1) Identifying stakeholder roles and interests for a given functionality, 2) Identifying potential tussles between stakeholders, and 3) Assessing the impact of each tussle on stakeholders and the risk of spillover effects on other functionalities. The methodology aims to help understand how new technologies may affect stakeholders and to design technologies that allow for varying outcomes while avoiding instability and spillovers.
Discovering Influential User by Coupling Multiplex Heterogeneous OSN’S - IRJET Journal
This document proposes a framework for modeling and analyzing influence diffusion in multiplex online social networks (OSNs). It introduces coupling plans to represent how data spreads across overlapping users in multiple OSNs. Specifically, it proposes both lossless and lossy coupling plans to map multiple networks into a single network. Extensive tests on real and synthetic datasets show the coupling plans can effectively identify influential users by considering their roles across multiple OSNs. The framework provides insights into influence propagation in multiplex networks and can solve the minimum cost influence problem by exploiting algorithms for single networks.
The UNIX Evolution: An Innovative History reaches a 20-Year Milestone - Dana Gardner
Transcript of a sponsored discussion on how UNIX has evolved in the 20-year history of UNIX and the role of The Open Group in maintaining and updating the standard.
This document provides information about an organization called SBGC that offers IEEE project assistance to students. It describes project categories based on whether students bring their own ideas or select from SBGC's list, the technologies, domains, and departments it supports, and the project deliverables and support it provides.
BigData Republic teamed up with VodafoneZiggo and hosted a meetup on churn prediction.
Telecom companies like VodafoneZiggo have long benefited from the fine art/science of predicting churn. In the booming age of subscription-based business models (e.g. Netflix, Spotify, HelloFresh), the importance of predicting churn has become widespread. During this event, VodafoneZiggo shared some of its wisdom with the public, after which BDR Data Scientist Tom de Ruijter presented an overview of the modeling tools at hand, both classical and novel. Finally, the participants engaged in a hands-on session showcasing the implementation of different approaches.
PART 1 — Churn Prediction in Practice by Florian Maas
At VodafoneZiggo we are incredibly excited about Advanced Analytics and the enormous potential for progress and innovation. In our state-of-the-art open-source platform we store the tremendous amount of data that is generated every single second in our mobile and fixed networks. This means that we have a vast body of rich information which, if unlocked, can lead to something very special. As a company with a primarily subscription-based service model, churn plays a vital role in the daily business. Not only is the churn rate a good indicator of customer (dis)satisfaction, it is also one of the two factors that determine the steady-state level of active customers. During this talk, we will show how data science adds value in the process of churn prevention at VodafoneZiggo. We will talk about the data and the modeling approach we use, and the pitfalls and shortcomings that we have encountered while building the model. We will also briefly discuss potential improvements to the current approach, which brings us to talk #2.
PART 2 — The Churn Prediction Toolbox by Tom de Ruijter
The second talk will show you the fine intricacies of predicting churn through different approaches. We’ll start off with an overview of different modeling strategies for describing the problem of churn, both in terms of a classification problem as well as a regression problem. Secondly, Tom will give you insights in how you evaluate a churn model in a way such that business stakeholders know how to act upon the model results. Finally, we’ll work towards the hands-on session demonstrating different model approaches for churn prediction, ranging from classical time series prediction to recurrent neural networks.
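As a flavour of the classification framing discussed in the talk, here is a minimal sketch; the data file, feature names, and model choice are illustrative assumptions, not VodafoneZiggo's actual pipeline:

```python
# Illustrative churn-as-classification sketch; the CSV, feature names and
# model choice are assumptions for the example, not the talk's real setup.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("subscribers.csv")          # hypothetical extract
features = ["tenure_months", "monthly_spend", "support_calls_90d"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["churned"], test_size=0.2, stratify=df["churned"])

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
churn_prob = model.predict_proba(X_test)[:, 1]   # per-customer churn risk
print("hold-out AUC:", roc_auc_score(y_test, churn_prob))
```

Ranking customers by the predicted probability yields a retention target list; the regression framing mentioned above would instead predict a quantity such as time-to-churn.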
This document describes a service-oriented architecture for data acquisition and control in the electric utility industry. The key challenges addressed are bridging operational and information technologies, avoiding brittle architectures, removing isolated systems, and managing growing remote sensor data and workforce changes. The proposed architecture uses a message-oriented middleware with AMQP and protocol buffers. It supports a RESTful design with core services for measurements, commands, events, and alarm management to integrate grid operations.
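As an illustration of the messaging pattern described (AMQP transport carrying protocol-buffer payloads), here is a hedged sketch; the broker address, exchange name, and Measurement message type are invented for the example:

```python
# Hedged sketch of publishing a grid measurement over AMQP with a protobuf
# payload; broker, exchange and the Measurement message are invented here.
import pika                                  # AMQP client for RabbitMQ
from measurements_pb2 import Measurement     # generated by protoc (assumed)

msg = Measurement(point_id="feeder-12/voltage", value=7.2, unit="kV")

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = conn.channel()
channel.exchange_declare(exchange="grid.measurements", exchange_type="topic")
channel.basic_publish(exchange="grid.measurements",
                      routing_key="measurement.feeder-12",
                      body=msg.SerializeToString())   # protobuf wire format
conn.close()
```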
Geographic Analytics - How HP Visualises its Supply Chain - NUS-ISS
This is an article by Dr Jozo Acksteiner and Ms Claudia Trautmann on Geographical Analytics. Dr Acksteiner was a speaker at ISS Seminar: Supply Chain Analytics on 20 Nov 2013
Red Hat, Green Energy Corp & Magpie - Open Source Smart Grid Plataform - ... - impodgirl
The Pacific Northwest smart grid demonstration project led by Battelle Memorial Institute aims to validate the costs and benefits of smart grid technology. The $88.8 million project involves 12 utilities across 5 northwest states and will test technologies like dynamic pricing signals and demand response. It seeks to better integrate renewable energy and improve system efficiency over its 5-year duration. Red Hat is also entering the smart grid industry through a partnership with Grid Exchange Corporation to develop an open-source smart grid software integration platform applying standards like ICCP.
Disruptive technologies can be forecast using the SAW (Steps and Waits) model rather than exponential models like Moore's Law. The SAW model predicts that technologies improve in steps of big improvements separated by waits with no growth, a pattern seen across 26 technologies. This model could have helped Sony invest in LCDs over CRTs in a timely way. To develop disruptive technologies, organizations should build on best practices, consult trusted advisors, think non-incrementally, connect with customers in new ways, and focus on creating connected ecosystems.
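A toy contrast between the two growth models makes the difference visible; the step factor, wait length, and growth rate below are invented for illustration:

```python
# Toy contrast of SAW (steps-and-waits) vs exponential growth; the step
# factor, wait length and growth rate are invented for illustration.
def exponential(year, start=1.0, rate=0.4):
    return start * (1 + rate) ** year

def saw(year, start=1.0, step_factor=3.0, wait_years=4):
    # capability jumps by step_factor after each wait, flat in between
    return start * step_factor ** (year // wait_years)

for year in range(0, 13, 2):
    print(f"year {year:2d}: exp {exponential(year):7.1f}  saw {saw(year):6.1f}")
```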
This presentation was given at ITU Telecom World in December 2015. It gives a viewpoint on key telecoms regulatory issues from the viewpoint of being a network performance expert.
Lessons for interoperability remedies from UK Open Banking - blogzilla
The UK’s Open Banking programme is a world-leading experiment in requiring banks to open up customer accounts (with their explicit consent) to third-party providers. What lessons can be learnt from this case for legislation that would require dominant platforms to provide similar functionality?
The Value of Network Neutrality to European Consumers - René C.G. Arnold
The document provides an executive summary of a study on the value of network neutrality to European consumers. Some key findings of the study include:
- Consumers care most about having unrestricted access to online content and applications. Their awareness of network neutrality is tied to how traffic management may affect their quality of experience, not technical terms.
- Consumers are generally open to some prioritization of data but don't want it to negatively impact others' access. They value fairness in traffic management.
- Network neutrality attributes were found to be important factors in consumers' decisions about internet access purchases, unlike some previous studies.
- Providing consumers with information about how the internet works and traffic
TRPC director Dr. John Ure presented on "Preparing for tomorrow: Regulation in a data-driven connected world" at Session 2: "The changing rules of the game" at the Inaugural ICT Regulators' Leadership Retreat, which took place in Singapore from 18 to 20 March 2015, organized by the Telecommunication Development Bureau (BDT) and the Infocomm Development Authority of Singapore (IDA).
CAN MACHINE-TO-MACHINE COMMUNICATIONS BE USED TO IMPROVE CUSTOMER EXPERIENCE ... - Shaun West
This document discusses how machine-to-machine (M2M) communications can be used to improve the customer experience in a service environment. The authors conducted a literature review and interviews with stakeholders to understand how M2M data collection could be used to develop improved customer value propositions. While M2M has the potential to enhance services, the authors found there are also risks if customer needs are not properly understood and different customer segments are not accounted for. Firms must develop clear value propositions for each customer persona and be transparent in their use of data.
Effective performance engineering is a critical factor in delivering meaningful results. The implementation must be built into every aspect of the business, from IT and business management to internal and external customers and all other stakeholders. Convetit brought together ten experts in the field of performance engineering to delve into the trends and drivers that are defining the space. This Foresights discussion will directly influence Business and Technology Leaders that are looking to stay ahead of the challenges they face with delivering high performing systems to their end users, today and in the next 2-5 years.
State of application performance management in the Indian BFSI sector - ValueNotes
Almost every participant in the BFSI sector identifies application uptime as a critical metric of application performance and recognises the need for those applications to function optimally, i.e. increase productivity while reducing costs. But this study showed that organisations did not have defined standards of measurement and did not consider industry benchmarks as relevant indicators.
IBM, NetCracker, and SAS are identified as market leaders for customer analytics solutions for telcos. They scored highly across technology capabilities, execution of strategy, and market impact. IBM offers a comprehensive analytics portfolio serving multiple telco business units. NetCracker provides analytics tailored for telcos and leverages its telco expertise. SAS generates significant revenues from its mature marketing analytics applications used by many large telcos.
The document discusses strategies for optimizing supply chain management (SCM) costs. It argues that the most effective strategy combines process design, analytics, and technology. First, organizations should analyze their logistics processes in detail to identify cost leakages and opportunities for improvement. Second, analytics tools can help analyze network design, routes, and costs to simulate different scenarios. Third, technology should be selected based on its ability to simplify processes and expedite exception handling. The document contends that only an integrated approach combining process redesign, analytics, and technology can consistently reduce costs and risks for complex global supply chains.
[CompTIA] 4th Annual Trends in Cloud Computing - Full Report - Assespro Nacional
This document summarizes key findings from CompTIA's 4th Annual Trends in Cloud Computing study. The study surveyed 501 IT professionals and 400 IT firms to assess cloud adoption trends. It found that cloud computing is becoming mainstream, with most companies now relying on cloud services for storage, disaster recovery, and security. However, some confusion remains regarding cloud models and terminology. While adoption is high, only 46% of IT firms described their cloud business as fully mature. The impact of cloud computing continues to drive both end users and IT firms to better understand cloud and move toward more strategic cloud integration.
EVALUTION OF CHURN PREDICTING PROCESS USING CUSTOMER BEHAVIOUR PATTERN - IRJET Journal
This document summarizes research on predicting customer churn in the telecommunications industry. It first defines customer churn as the rate at which customers stop doing business with a company. It then reviews several past studies that have used techniques like decision trees, neural networks, and data mining to predict churn. The proposed research aims to develop a new churn prediction model using natural language processing (NLP) and machine learning approaches to improve accuracy. It will identify customer behavior patterns and evaluate factors that influence prediction accuracy. The model will be trained and tested on a telecommunications data set to calculate churn rates on both monthly and daily bases. This will help enhance customer service. Gaps in past research identified include issues with imbalanced data, high error rates, and
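For the churn-rate bookkeeping the paper mentions (monthly and daily bases), a minimal sketch with invented numbers:

```python
# Minimal churn-rate bookkeeping; all numbers are invented for illustration.
def churn_rate(active_at_start, churned_in_period):
    return churned_in_period / active_at_start

monthly = churn_rate(active_at_start=120_000, churned_in_period=2_400)
daily = 1 - (1 - monthly) ** (1 / 30)    # daily rate consistent with monthly
print(f"monthly churn {monthly:.2%}, equivalent daily churn {daily:.3%}")
```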
PLM 2018 - Is Openness really free? A critical analysis of switching costs fo... - Karan Menon
Paper Presentation in PLM 2018
Authors:
Karan Menon, Hannu Kärkkäinen, Thorsten Wuest & Timo Seppälä
Tampere University of Technology; West Virginia University; ETLA, Finland.
Case Study - Francisco Leon - Grantham University.docx - robert345678
Case Study
Francisco Leon
Grantham University
LOG456 Emerging Trend Supply Chain
Instructor:
Due Date: 12/20/2022
CASE QUESTIONS
1. What factors help to explain why J&J historically had as many as 12 distribution centers in Europe?
· In the past, Johnson & Johnson had as many as 12 distribution centers in Europe. This was because they focused on meeting their European customers' needs and service expectations. The company emphasizes keeping a high level of service by giving customers one-day and two-day delivery. It also cuts down on the time it takes to place an order and get a shipment to its destination.
2. What steps in the supply chain network design process discussed in this chapter would have been most relevant to the task faced by J&J in Europe?
These steps would have helped J&J make a good design for its supply chain network.
1. Business development and resource allocation: They can look at business data and determine what resources will be needed, how to get them, and how to use them on time. This includes finding out what customers want and taking environmental factors into account. So, to grow their business, they need to hire more people, analyze data, and set goals. Once this is done, they can start building a team and figuring out their plans.
2. Network optimization software can help them reduce the number of distribution centers. They can also plan an audit of their supply chain, which will help them find places to cut costs.
3. Model baseline scenario
As-is: simulate transportation in and out, build and simulate business scenarios, create an econometric financial model, and develop assumptions and constraints for the infrastructure.
4. Coming up with a plan
Define the main scenario to be evaluated, simulate inventory assets by plan, represent operating, capital, and one-time expenses, develop a financial model by design, and address IT, tax, incentive, legal, and infrastructure issues. Then develop a plan for transition and implementation, including a timeline, resources, funds, structure, limitations, partners, stakeholders, and a communication strategy.
3. Are there other factors that the network optimization study should have considered?
· The essential factors to consider are proximity to customers and the cost of reaching them from current locations. Because the frameworks have already been established, every phase still to come may already have been planned out. The corporation has significant data about the costs associated with the land and the utilities, as well as about the labor market and the supplier network. The company will only need to make modifications to the components of the logistics network that are the mos.
This document summarizes an article from the International Journal of Mechanical Engineering and Technology. The article discusses the need for an integrated predictive collaboration performance evaluation framework between business partners. It notes that most competitiveness improvements have focused on individual performance measurement systems rather than collaborative value. The proposed framework would forecast performance trends based on supply chain experts' experiences rather than just historical data comparisons. It would also include additional aspects like trust between partners and information sharing. The framework aims to provide a more holistic evaluation and guidance on future performance and decision making for business partnerships.
This document discusses how big data can enable the travel and tourism industries. It defines big data as large datasets characterized by their volume, velocity, variety, and veracity. Big data comes from a variety of sources as people leave digital traces online and through mobile technologies. The benefits of big data for businesses include improved customer experience personalization, optimized marketing and products, predictive analytics, and risk management. The big data market is expected to double from 2014 to 2018. Future developments include improvements in data processing, centralized data repositories, and analytics solutions in the public cloud to reduce costs and security risks. Big data can deliver business insights, innovation, better customer relationships, and continuously improved experiences for the tourism industry.
Demograft for telecoms - benefits from location-based analytics - Reach-U
Overview of benefits Telecom's can obtain applying location-based analysis to their data.
Benefits include:
‒ improved customer service experience,
‒ knowledge based investment decision,
‒ targeted service offerings to the customers based on their needs,
‒ sophisticated decisions in introducing new services.
Connected Shipping: Riding the Wave of E-Commerce - Cognizant
Digital platforms, applications and processes are rapidly changing how shipping and transportation companies operate. Our primary research study confirmed that while acknowledging the importance of a Web-based business model, many shipping companies are proceeding cautiously. Based on our analysis of the e-commerce market and the approaches that some companies are taking, we have defined a maturity framework to help shippers better assess their current capabilities and plan ahead.
Similar to FCC Open Internet Transparency - a review by Martin Geddes
When we get water, electricity, or gas delivered to our home or place of work we expect it to have predictable quality. Why isn't this also true of broadband? The answer is we don't (yet) have the "glue" to integrate performance in digital supply chains.
The document provides a summary of two new discussion groups and includes links to various articles and websites about topics such as the future of the internet, blockchain technology, privacy policies, and QAnon. It ends by thanking the audience and noting there will be another livestream in April.
Digital supply chain quality management - Martin Geddes
We've figured out how to send physical goods around the world: aggregate them into containers. We're still struggling with how to do the same for digital goods, which we disaggregate into packets. Here's the answer.
The goal of this presentation is to share exemplars of important broadband Internet access performance phenomena. In particular, we highlight the critical role of stationarity.
When networks exhibit non-stationarity, they are useless for most applications. We show real-world examples of both stationarity and non-stationarity, and discuss the implications for broadband stakeholders.
These phenomena are only visible when using state-of-the-art high-fidelity metrics and measures that capture instantaneous flow.
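As a rough illustration of a stationarity check (a drastic simplification of the high-fidelity measures the presentation relies on), one can compare statistics across chunks of a latency series:

```python
# Crude stationarity check on a latency series; a drastic simplification of
# the high-fidelity measures described above, for illustration only.
import numpy as np

def looks_stationary(samples, n_chunks=10, tolerance=0.25):
    """Compare per-chunk means and standard deviations; large drift
    between chunks suggests the series is non-stationary."""
    chunks = np.array_split(np.asarray(samples, dtype=float), n_chunks)
    means = np.array([c.mean() for c in chunks])
    stds = np.array([c.std() for c in chunks])
    drift = np.ptp(means) / means.mean()
    wobble = np.ptp(stds) / (stds.mean() + 1e-9)
    return drift < tolerance and wobble < tolerance

rng = np.random.default_rng(0)
steady = rng.gamma(2.0, 5.0, 5000)               # stable queueing delay (ms)
ramping = steady + np.linspace(0, 40, 5000)      # load steadily increasing
print(looks_stationary(steady), looks_stationary(ramping))   # True False
```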
Superfast or superfit? The case for UK broadband policy reform - Martin Geddes
This is a critical moment for UK digital infrastructure policy. The context is one of rapid political, market and technological change. As a nation, we face important decisions over topics like post-Brexit regulation, universal service delivery, Openreach independence, TETRA replacement and 5G readiness. The imperative is to reflect on whether our historic approaches will meet our future needs. Where we anticipate a shortfall, we must act to protect our long-term national interest.
This paper aims to educate policymakers about one specific shortfall: the growing ‘capability gap’ between broadband demand and supply.
It makes two recommendations.
This unwanted situation is avoidable by two readily attainable changes in our policy approach.
Firstly, our policy metrics need to reflect the readiness of broadband infrastructure to support both present and future demand.
Secondly, the money needs to move to incentivise the right market behaviours to create a correspondingly fit-for-purpose supply.
When these reforms are enacted together, this will help to position the UK with a world-class infrastructure ready to attract capital and talent on a global scale.
Broadband service quality - rationing or markets? - Martin Geddes
"Net neutrality" is implicitly framed as a debate over how to deliver an equitable ration of quality to each broadband user and application. This is the wrong debate to have, since it is both technically impossible and economically unfair. We should instead be discussing how to create a transparent market for quality that is both achievable and fair.
Introduction to network quality arbitrage - Martin Geddes
Many large operators have expressed a desire to undertake disruptive change, and we have often proposed an agenda for such change. What typically happens is that, after several rounds of engagement, we observe that there is little mainstream organisational appetite to engage in disruption. Why so?
The main reason is a perception gap between the current state of the art (which any leading operator delivers) and our understanding of the state of the possible (which most operators are very far from). This gap exaggerates the risks of engaging in disruption, and underestimates the potential rewards.
Another reason is that our industry as a whole implicitly believes that network service quality is a matter of detecting and rectifying ‘faults’. This framing inhibits the consideration of the alternative paradigm of networks as resource trading spaces. As a result, the significant ‘quality arbitrage’ that exists in all IP networks is not visible.
Operators face the risk that others will exploit the arbitrage opportunity, to their serious commercial disadvantage. This has happened before, e.g. with TDM and the rise of ISPs, and is happening now with SD-WAN. We propose that larger multinational operators need to proactively initiate the disruption via a new business unit.
The End of Information Technology: Introducing Hypersense & Human Technology - Martin Geddes
If we were to climb into a time machine and set the dial for ten years into the future, what might personal communications look like? Might you inhabit a soothing virtual reality where your conference call takes place in a simulated lakeside villa? Might you consult with a virtual doctor? Employ a “Guardian Avatar” to act autonomously on your behalf eliminating online drudgery and security concerns? Although no particular future is certain, the seeds of what is to come can always be found within the present reality, albeit often only in retrospect.
The future of computing is a symbiosis of machines and people. To achieve this we need an "operating system" upgrade for digital technology. We all need a Guardian Avatar to help us to navigate the "metaverse", and to care for us and protect us.
Evaluating the internet end-user experience in the Russian Federation - Martin Geddes
This document discusses initial findings from research commissioned by Euraisa:Peering to evaluate the internet end-user experience in the Russian Federation. It describes a new peering point at the IXcellerate Moscow One data centre, which offers private peering connections. The research measures the quality attenuation (ΔQ) between various locations to understand how network topology, link speeds, and traffic loads impact the user experience for different applications. Initial data was gathered between Moscow, Chelyabinsk, London, Dublin, Frankfurt, and Singapore to analyze delay and how it affects users.
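As a hedged illustration of gathering delay samples between vantage points (the study itself used high-fidelity ΔQ instrumentation, not this method; the target host below is a placeholder), TCP connect times give a crude proxy:

```python
# Hedged sketch of collecting delay samples between vantage points using
# TCP connect times; the real study used high-fidelity dQ instrumentation.
import socket, statistics, time

def connect_delays_ms(host, port=443, n=20):
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            samples.append((time.perf_counter() - t0) * 1000.0)
    return samples

# Placeholder target; replace with probes in Moscow, London, Singapore, etc.
samples = connect_delays_ms("example.com")
print(f"median {statistics.median(samples):.1f} ms, "
      f"p95 {sorted(samples)[int(0.95 * (len(samples) - 1))]:.1f} ms")
```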
This document outlines a journey from being beasts to becoming superheroes to gods using technology. It argues that as technology allows us to be present anywhere and anytime through things like telephones and computers, we are building "superconductors for our minds" that transcend biological limitations. However, privacy issues arise when sensual data is converted to symbols that computers can understand and share. The document suggests we must resolve this tension between privacy erosion and enhanced presence. It speculates that the trajectory of technology development may lead humanity to become "Homo evolutis" that deliberately directs its own and other species' evolution, achieving a god-like state of being everywhere through advanced communication technologies.
Beyond 'neutrality' - how to reconnect regulation to reality? - Martin Geddes
This document discusses the lack of engagement between broadband policy literature and technical realities regarding the stochastic nature of network traffic management. It analyzes the mentions of relevant scientific terms in books on net neutrality policy and finds little exploration of concepts like stochasticity, emergence and probabilistic modeling. It argues that the focus on detecting and regulating "discriminatory" traffic has been misguided, and that policy should instead define quality of service floors and use objective measurement methods to evaluate user experience. The document promotes socializing technical knowledge with policymakers and shifting the regulatory perspective away from traffic management and towards ensuring a minimum quality of broadband service.
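A minimal sketch of the proposed shift from policing traffic management to checking a quality floor; the threshold values here are invented for illustration, not a proposed standard:

```python
# Sketch of testing measurements against a quality-of-service floor; the
# threshold values are invented for illustration, not a proposed standard.
def meets_floor(delays_ms, delivered_flags,
                p99_delay_ms=50.0, max_loss_rate=0.001):
    delays = sorted(delays_ms)
    p99 = delays[int(0.99 * (len(delays) - 1))]
    loss_rate = 1.0 - sum(delivered_flags) / len(delivered_flags)
    return p99 <= p99_delay_ms and loss_rate <= max_loss_rate

# Example: 1000 delay samples and per-packet delivery outcomes (1 = arrived)
print(meets_floor([12.0] * 990 + [80.0] * 10, [1] * 999 + [0]))
```

A floor framed this way is objectively testable from measurements of the user experience, without needing to detect how the operator managed traffic internally.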
The perception gap: the barrier to disruptive innovation in telecoms - Martin Geddes
The 'state of the possible' in telecoms is a long way ahead of the 'state of the art'. The new science of network performance enables a large leap in customer experience and cost. However, the perception among operators is that only relatively small, incremental improvements are possible.
This presentation explores the reasons for this 'perception gap' between what is seen to be possible, and what actually is. It draws on our work at senior levels for tier 1 operators, as well as examples from outside the telecoms industry.
Overcoming this gap opens the possibility to disruptive innovation. Who will seize the opportunity? Incumbents, challengers or new entrants?
The document summarizes the opposition to a proposed 25-meter mobile telecommunications mast over the village of Lastingham in the North York Moors National Park in England. It provides three alternative proposals that would provide mobile coverage while preserving the landscape and being more cost effective. The alternatives include a shared tree mast camouflaged in the valley near existing infrastructure, a BT Openreach mobile infill solution using existing poles, and an EE micro network of small discrete antennas within the village. The document argues these alternatives address concerns about resilience, environmental impact, and value for money better than the proposed mast.
This paper is a bibliography of articles on the key technology trends of today: Mass personalisation, Inclusive and accessible design, Data-driven decision making, Generational change, Portfolio careers, Virtual workplace solution, Privacy, Resurgence in voice, On-the-go communications, Future of email, Virtual reality, Gaming and gamification, Sensor revolution, Sensual interfaces, Soundscaping, Wearables, Social robotics, Sentiment analysis, Anticipatory computing, Virtual assistants, Wireless and mobility, Distributed trust systems, Batteries and power, ‘Glomad’ workers, Home teleworking, Data destruction, Cybermeetings, Security as a service, Human productivity, Simplified security
A forecast of the needs of future business communications users, based on research by Martin Geddes and Dean Bubley. We address the questions: What are the future communications needs of workers? How and where do people work?
A Study of Traffic Management Detection Methods & Tools - Martin Geddes
This scientific report was commissioned by the UK telecoms regulator, Ofcom, from Predictable Network Solutions Ltd. It evaluates the suitability of different traffic management techniques for regulatory use. The conclusions are very significant for the "net neutrality" debate, since it points out many common misconceptions about how broadband actually works.
Hypertext to Hypervoice - The next stage in collaboration on the Web - Martin Geddes
Imagine a world where computers enrich our voices with superhuman powers; where voice is integrated into our social media just as text and images currently are; where our voice can be used as a communication tool at its full capacity: simple, powerful and rich. This is the world of hypervoice, where voice on the Web is as native and natural as hypertext.
Network performance - skilled craft to hard science - Martin Geddes
This document describes the technical and business journey for network operators wanting to turn network performance from a skilled craft into hard science.
HijackLoader Evolution: Interactive Process Hollowing - Donato Onofri
CrowdStrike researchers have identified a HijackLoader (aka IDAT Loader) sample that employs sophisticated evasion techniques to enhance the complexity of the threat. HijackLoader, an increasingly popular tool among adversaries for deploying additional payloads and tooling, continues to evolve as its developers experiment and enhance its capabilities.
In their analysis of a recent HijackLoader sample, CrowdStrike researchers discovered new techniques designed to increase the defense evasion capabilities of the loader. The malware developer used a standard process hollowing technique coupled with an additional trigger that was activated by the parent process writing to a pipe. This new approach, called "Interactive Process Hollowing", has the potential to make defense evasion stealthier.
Securing BGP: Operational Strategies and Best Practices for Network Defenders... - APNIC
Md. Zobair Khan, Network Analyst and Technical Trainer at APNIC, presented 'Securing BGP: Operational Strategies and Best Practices for Network Defenders' at the Phoenix Summit held in Dhaka, Bangladesh from 23 to 24 May 2024.
Honeypots Unveiled: Proactive Defense Tactics for Cyber Security, Phoenix Sum... - APNIC
Adli Wahid, Senior Internet Security Specialist at APNIC, delivered a presentation titled 'Honeypots Unveiled: Proactive Defense Tactics for Cyber Security' at the Phoenix Summit held in Dhaka, Bangladesh from 23 to 24 May 2024.