This document describes conducting a "Reconnaissance Check" of a company's telecommunications infrastructure to identify opportunities to improve operations and lower costs. It states that thorough reconnaissance can uncover multiple factors representing potential savings of over 30% in monthly recurring costs. The document advocates applying techniques from military intelligence, surveillance, and reconnaissance (ISR) to understand network traffic flows, bottlenecks, carrier services, and market prices in order to optimize the network configuration and carrier contracts. Quantitative metrics should be established to measure the network's performance and to identify outliers that may indicate inefficiencies.
1. The Reconnaissance Check
Assessing the operation and co$ts of your Telecommunications Infrastructure
How maintaining Situational Awareness will improve operations and lower co$ts
2. Benefits of the "Reconnaissance Check"
• There are numerous benefits of conducting a thorough Reconnaissance Check of your Telecommunications Environment.
• We will uncover multiple factors representing opportunities to wage a successful engagement to make your telecommunications environment "work better and cost less."
7. Both Have …
• High Degree of Risk (casualties vs. service interruptions and huge $ums of money)
• Extreme Number of Variables
• Highly trained specialists
• Coordination of many highly complex tasks and people
8. Our specialists in Intelligence, Surveillance, and Reconnaissance (ISR) know the procedures and how to connect the dots to understand your voice and data environment.
9. Understanding the "Field of Battle"
• Important details about your network behaviors allow our analysts to determine the actual:
– Traffic Flows
– Bottlenecks
– Carrier Services in place
– Available Market Prices for carrier services
10. Understanding the "Field of Battle"
• Even though every network is unique, certain well-defined constants prevail:
– Traffic can be modeled
– Trouble spots can be detected
– Carrier Services vary
– Carrier Prices vary
• Converting Network Surveillance and Reconnaissance into Intelligence is our specialty.
11. It all starts with assessing the current environment.
12. And the military has evolved powerful methods to bring Battlefield Command and Control down to a measurable science.
13. "You can never have too much reconnaissance." (General George S. Patton Jr.)
• Reconnaissance operations are those operations undertaken to obtain, by visual observation or other detection methods, information about the activities and resources of an enemy or potential enemy, or to secure data concerning the meteorological, hydrographical or geographical characteristics and the indigenous population of a particular area.
14. We use similar Intelligence, Surveillance, Reconnaissance (ISR) techniques in an activity that synchronizes and integrates the planning and operation of sensors, assets, and processing, exploitation, and dissemination systems in direct support of current and future operations. This is an integrated intelligence and operations function.
15. ISR as applied to Voice & Data Infrastructures
• Surveillance is the systematic observation of network elements by various means.
• Reconnaissance is a mission undertaken to obtain, by surveillance, information about the activities and resources of a network, or to secure data concerning the dynamic characteristics of a particular network.
• Intelligence is (1) the product resulting from the collection, processing, integration, analysis, evaluation, and interpretation of available information concerning a voice and data network; (2) information and knowledge about a problem obtained through observation, investigation, analysis, or understanding.
16. You Can't Manage What You Don't Measure
• Qualitative vs. Quantitative Data Collection
• Like snapshots of the Field of Battle, we must collect information on every asset (qualitative):
– Location
– Purpose
– Configuration
– Operational Status
• We must also collect information on its performance levels (quantitative):
– Usage Levels
– Efficiency
– Capabilities
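The qualitative/quantitative split above maps naturally onto a single inventory record per asset. A minimal sketch in Python; the field names, sites, and figures are illustrative assumptions, not data from the deck:

```python
from dataclasses import dataclass

@dataclass
class NetworkAsset:
    """One inventory record: qualitative identity plus quantitative performance."""
    # Qualitative: what the asset is
    location: str
    purpose: str
    configuration: dict
    operational_status: str
    # Quantitative: how the asset performs
    usage_pct: float = 0.0      # average utilization, percent of capacity
    capacity_mbps: float = 0.0

def underused(assets, threshold_pct=10.0):
    """Flag assets whose utilization falls below a threshold (hypothetical cutoff)."""
    return [a for a in assets if a.usage_pct < threshold_pct]

# Illustrative inventory
inventory = [
    NetworkAsset("St. Louis Park", "WAN access", {"circuit": "T1"},
                 "active", usage_pct=2.0, capacity_mbps=1.544),
    NetworkAsset("West Lake Village", "WAN access", {"circuit": "T1"},
                 "active", usage_pct=61.0, capacity_mbps=1.544),
]
print([a.location for a in underused(inventory)])  # → ['St. Louis Park']
```

Keeping both kinds of data in one record is what lets the later analysis join "what is this circuit for" with "how hard is it actually working."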
17. Typical Roles and Responsibilities
• Network Support groups focus their attention on day-to-day operations (tactical).
• This needs to be augmented by an Intelligence, Surveillance and Reconnaissance (ISR) effort to provide the strategic component necessary for long-term efficiency and co$t management.
• This relies heavily on quantitative as well as qualitative "Intelligence" gathering.
18. Mission Importance
• Questions to consider:
– "Do I have the best network configuration to carry out the mission of the business?"
– "Am I using the most appropriate products from the carriers?"
– "Am I getting the best pricing available in the marketplace?"
20. No Risk to You!
• Pressures on IT and Telecom Managers will only increase.
• These managers will look to analytics to improve the operational and financial aspects of their networks.
• Some managers are looking for management methods to rein in costs and add budget predictability.
• It is still important to ensure the underlying assumptions have a solid foundation of efficient spend.
• In the end, Analytics Matter, and our ISR methods have a proven track record of success with no risk on your part.
• We only collect a fee on the savings; if we can't find anything, you don't pay anything!
21. Establishing the proper metrics
• All networks exhibit behaviors which can be measured, observed and analyzed for particular "signatures" of significant events.
• It comes down to knowing what to look for and how to recognize it when it is present.
• That is, what should be monitored and analyzed as part of the ISR process?
• It is not possible to "manage" or control a network without specific quantitative goals expressed in realistic metrics.
• Further, it is not possible to manage the operations of a network without meaningful quantitative feedback which can be interpreted in terms of the "goals" mentioned above.
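One way to make those quantitative goals concrete is to express each metric as a target range and flag any observation that falls outside it. A sketch in Python; the metric names and bounds below are illustrative assumptions, not figures from this deck:

```python
# Each metric gets an explicit quantitative goal: a target range.
# Too low can signal waste; too high can signal congestion or faults.
ENVELOPE = {
    "circuit_utilization_pct": (5.0, 70.0),
    "error_rate_pct":          (0.0, 0.1),
    "round_trip_latency_ms":   (0.0, 150.0),
}

def check_envelope(observations):
    """Return the metrics whose observed value falls outside its goal range."""
    out = {}
    for name, value in observations.items():
        low, high = ENVELOPE[name]  # assumes every observation has a defined goal
        if not (low <= value <= high):
            out[name] = value
    return out

print(check_envelope({"circuit_utilization_pct": 2.1,
                      "error_rate_pct": 0.02,
                      "round_trip_latency_ms": 310.0}))
# → {'circuit_utilization_pct': 2.1, 'round_trip_latency_ms': 310.0}
```

The point is less the code than the discipline: a metric without a stated range is feedback that cannot be interpreted.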
22. The ISR Process
• Our engagements include a review of the architecture, technology, business goals, performance levels, and carrier services.
• We determine whether the environment is "optimized" for the mission and whether there are alternatives in the architecture, technology or carrier services which meet the requirements of the business and leverage the changing marketplace of technologies and services available.
• The resulting quantitative model of the environment produces the metrics (e.g., telecom costs, traffic analysis, utilization levels, error rates) used to identify problem areas as well as the current market price/performance levels.
• We can then move forward with alternative solutions to achieve better price/performance levels.
23. What to expect!
• You get a complete "Intelligence" report based on the Surveillance and Reconnaissance actions.
• This report will provide tremendous insight into the inner workings of your telecommunications environment from a qualitative and quantitative perspective, including financial analysis presented in graphical form.
24. Important Insights
• With quantitative analysis, the key is to identify an acceptable operational level and then highlight the extreme outliers.
• Some outliers can be reasonably explained, while others will be clear inefficiencies.
• Focus on the unjustified outliers and eliminate them from the system.
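Highlighting extreme outliers can be as simple as a Tukey-fence test on each metric. A sketch, using monthly circuit cost as the example metric; the figures are made up for illustration:

```python
import statistics

def extreme_outliers(samples, k=3.0):
    """Flag values far outside the interquartile range (Tukey fences).
    k=1.5 would flag mild outliers; k=3.0 flags only extreme ones."""
    q1, _, q3 = statistics.quantiles(samples, n=4)
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [x for x in samples if x < low or x > high]

# Monthly cost per circuit, in dollars (illustrative figures)
costs = [410, 395, 405, 398, 402, 1890, 400]
print(extreme_outliers(costs))  # → [1890]
```

A flagged value is not automatically an inefficiency; as the slide notes, each outlier still needs a justification check before it is eliminated.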
25. How We Do It
• The secret sauce is the Intelligence (analytics) that turns Surveillance and Reconnaissance actions into useful information, presenting you with a comprehensive view of the "Field of Battle", i.e., your complete telecom environment.
• The quantitative model of your telecom environment is the key to making strategic decisions for tactical results, i.e., works better, costs less!
26. What Qualitative or Quantitative "Intelligence" should be monitored?
• Surveillance and Reconnaissance data turns into "Intelligence" when properly analyzed.
• Consider the following metrics:
– Application Performance
– Market Dynamics
– Traffic Volumes/Throughput
– Latency
– Etc.
28. Observations
• No receive-side load balancing at St. Louis Park
– Little or no traffic on the Secondary T1
• Disproportionate load balancing at West Lake Village
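An imbalance like the ones observed above is easy to flag automatically once receive rates are collected per circuit. A sketch, assuming two T1s per site; the site rates and the 0.35 review threshold are hypothetical illustrations:

```python
def balance_ratio(primary_bps, secondary_bps):
    """Share of traffic carried by the lesser-used circuit.
    0.5 means a perfect split; near 0 means one circuit sits idle."""
    total = primary_bps + secondary_bps
    if total == 0:
        return 0.0
    return min(primary_bps, secondary_bps) / total

# Average receive rates observed on each circuit pair (illustrative figures)
sites = {
    "St. Louis Park":    (1_210_000, 4_000),     # secondary nearly idle
    "West Lake Village": (1_400_000, 310_000),   # uneven split
}
for site, (primary, secondary) in sites.items():
    ratio = balance_ratio(primary, secondary)
    flag = "REVIEW" if ratio < 0.35 else "ok"
    print(f"{site}: balance={ratio:.2f} {flag}")
```

A near-zero ratio on a circuit you are paying full price for is exactly the kind of "signature" the surveillance data is meant to surface.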
31. Warning Signs
• Some signs can be indicative of a larger problem.
• Are you inside or outside of the "Operational Envelope"?
• With quantitative analysis, the key is to identify acceptable values and then highlight the extreme outliers.
33. What is the "Intelligence" telling you?
• Some of the data can be reasonably explained, while other data will point to clear inefficiencies.
• How do you tell the difference?
• Focus on the real "signatures" of inefficiencies and remove the underlying cause with extreme prejudice.
• What signatures matter?
• Again, each problem has its own signature, but an image will emerge when we take seemingly unrelated Surveillance and Reconnaissance information and look at all of it in one view, revealing the nature of the interactions.
34. Does this sound like a lot of work?
• Our Knight Vision™ methods and analytical tools remove the burden and produce tangible, measurable results.
• As your partner, we participate in the implementation of the "Best Practices" to ensure the savings are realized.
35. Best Practices
• Define Key Metrics for continued "Surveillance".
• Meet bi-annually with internal representatives to review continuous "Reconnaissance" reports of important metrics.
• Using the synthesized "Intelligence", determine the need for any strategic changes.
– Continuously monitor the "Field of Battle"
– Continuously monitor the marketplace for alternative technologies and service providers
40. Business Benefits of Performance Management
• Aligns the IT infrastructure with business processes.
• Allows business processes to become more competitive and responsive to changing customer needs.
• Reduces network expenses and provides information necessary to "manage" your providers (SLAs).
• Improves efficiency and productivity from higher network and application availability.
• Improves IT staff productivity by reducing the time required to resolve problems.
41. Business Benefits of Performance Management
• Substantial ROI and savings
• Allows you to do more with less
• IT infrastructure is a critical business asset
• Requires performance management of the infrastructure and applications
• Potential to trim Monthly Recurring Costs by over 30%, while maintaining the required level of service
43. Business Benefits of Performance Management
• C-level executives require high-level reports on how IT resources are performing over time.
• Operational managers need more detailed accounts of how all of the manageable network components are performing so problems can be identified, diagnosed and corrected.
• Converged networks (VoIP, Unified Messaging, etc.) create an even bigger challenge.
• They require real-time monitoring and management of application availability and QoS.