A concise report on the state of your environment, with prescriptive recommendations for how you can improve performance, efficiency and security in up to 14 vital IT domains.
Democratising Security: Update Your Policies or Update Your CV (ExtraHop Networks)
Security is everyone's responsibility. That’s the lesson learned as enterprises seek to improve their detection and response for cyber incidents. This session introduces a new model where InfoSec sets the policies and delegates monitoring to application teams.
The ExtraHop wire data analytics platform enables IT teams to answer questions they hadn't known to ask before, such as "Which SSL servers are receiving heartbeats?" and "Where are heartbeat messages coming from?"
EMA Presentation: Driving Business Value with Continuous Operational Intellig... (ExtraHop Networks)
In this presentation, EMA Vice President of Research Jim Frey and ExtraHop SVP Erik Giesa explain how IT organizations can derive real-time IT and business insights from their wire data, as well as the unique capabilities included in the fourth-generation ExtraHop platform that make this continuous operational intelligence possible. For more information, visit www.extrahop.com
This presentation for Inside Analysis' Briefing Room explains the ExtraHop architecture for stream analytics. This concept enables you to mine all your wire data, which is all the data in motion in your environment.
This document discusses how organizations can use big data and operational analytics to transform IT operations. It outlines how a data-driven approach that combines machine data and wire data can provide real-time visibility across networks, applications, databases and other systems, overcoming the limitations of siloed, individual monitoring tools. The document also covers key considerations for implementing IT big data solutions, such as data gravity, improving the signal-to-noise ratio, and understanding when data needs to be accessed in real time. It provides an example of how healthcare company McKesson used network traffic analysis to improve Citrix application performance and reduce IT costs.
Ransomware: Hard to Stop for Enterprises, Highly Profitable for Criminals (ExtraHop Networks)
Ransomware attacks doubled in 2015 and the trend is sure to continue. To meet this growing threat, enterprises must gain real-time visibility into anomalous behaviour. This session explains how organisations can detect and mitigate ransomware attacks using wire data.
By passively analyzing your wire data, ExtraHop provides deep visibility into HL7 messages, Citrix performance, EHR behavior, ICD-10 conversion, and more.
- Proactive monitoring and remediation
- Optimization and continuous improvement
- Pervasive security monitoring and compliance
- Clinical and operations analytics
Health IT has a Big Data opportunity with HL7 analytics. Learn about what is possible from Wes Wright, CIO at Seattle Children's Hospital, and Erik Giesa, SVP of Marketing and Business Development at ExtraHop.
What’s New: Splunk App for Stream and Splunk MINT (Splunk)
Join us to learn what is new in the Splunk App for Stream and how it can help you use wire/network data analytics to proactively resolve application and IT operations issues and to efficiently analyze security threats in real time, across your cloud and on-premises infrastructures. Additionally, you will learn about Splunk MINT, which allows you to gain operational intelligence on the availability, performance, and usage of your mobile apps. You’ll learn how to instrument your mobile apps for operational insight, and how to build the dashboards, alerts, and searches you need to gain real-time insight on your mobile apps.
Michael Ronnfeldt of NXP discusses implementing an Analytics and Automation Platform using Splunk to address NXP's challenges. Some key points:
- NXP is a large semiconductor company with many products and divisions facing growing IT needs
- The current situation involves manual, slow monitoring and resolution of issues
- The Analytics and Automation Platform (SNA2P) uses Splunk for automated monitoring, incident detection and remediation, discovery, and centralized reporting to provide faster, better service
- Benefits include incidents being resolved before users notice and automation enforcing security and compliance through change control
- Future roadmap includes expanding the CMDB, deployment automation, test automation, and continuous integration
How to Design, Build and Map IT and Business Services in Splunk (Splunk)
Your IT department supports critical business functions, processes and products. You're most effective when your technology initiatives are closely aligned and measured with specific business objectives. This session covers best practices and techniques for designing and building an effective service model, using the domain knowledge of your experts and capturing and reporting on key metrics that everyone can understand. We will design a sample service model and map it to performance indicators to track operational and business objectives. We will also show you how to make Splunk service-aware with Splunk IT Service Intelligence (ITSI).
The document discusses how Splunk can provide analytics-driven security for higher education through ingesting and analyzing machine data. It outlines how advanced threats have evolved to be more coordinated and evasive. A new approach is needed that fuses technology, human intuition, and processes like collaboration to detect attackers through contextual behavioral analysis of all available data. Examples are provided of security questions that can be answered through Splunk analytics.
Wire data provides deep insights across IT, security and business use cases by capturing the communications transmitted over the wire between machines and applications in real-time. The Splunk App for Stream enables new operational intelligence by indexing this wire data without needing instrumentation. It provides enhanced visibility, efficient cloud-ready collection, and fast time to value through interface-driven deployment. Key features include protocol decoding, attribute filtering, aggregations, and custom content extraction for analysis in Splunk.
Getting Started with IT Service Intelligence (Splunk)
This document discusses IT service intelligence (ITSI) concepts including defining services, key performance indicators (KPIs), service health scores, and service decomposition. A service can include multiple technology components and tiers that need to be monitored together from a user's perspective. KPIs are Splunk searches that monitor specific metrics like CPU or errors. Health scores from 0-100 indicate a service's status based on KPI status and importance. Entities that support services can come from CMDBs or searches. Services can be decomposed into sub-services and underlying processes to define relevant KPIs for monitoring. Adaptive thresholding and anomaly detection help determine normal vs. abnormal behavior in dynamic or patterned data.
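The idea of a 0-100 health score rolled up from KPI status and importance can be sketched in a few lines. This is an illustrative, hypothetical formula (a simple weighted average), not ITSI's actual scoring logic:

```python
# Illustrative sketch of a service health score in the ITSI style:
# each KPI contributes a severity (0.0 = normal ... 1.0 = critical) and an
# importance weight; the score maps weighted severity onto a 0-100 scale.
# The formula is a simplified stand-in, not ITSI's real algorithm.

def health_score(kpis):
    """kpis: list of (severity, importance_weight) tuples."""
    total_weight = sum(w for _, w in kpis)
    if total_weight == 0:
        return 100.0  # no KPIs defined: treat the service as healthy
    weighted_severity = sum(s * w for s, w in kpis) / total_weight
    return round((1.0 - weighted_severity) * 100, 1)

# All KPIs normal gives 100; one critical, heavily weighted KPI drags it down.
print(health_score([(0.0, 5), (0.0, 3)]))   # 100.0
print(health_score([(1.0, 5), (0.0, 3)]))   # 37.5
```

Weighting by importance is what lets a minor KPI (say, a single host's CPU) degrade the score far less than a critical one (say, transaction errors).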
Machine Data 101: Turning Data Into Insight is a presentation about using Splunk software to analyze machine data. It discusses topics such as:
- What machine data is and examples of common sources like log files, social media, call center systems
- How Splunk indexes machine data from various sources in real-time regardless of format
- Techniques for enriching data in Splunk like tags, field aliases, calculated fields, event types, and lookups from external data sources
- Examples of collecting non-traditional data sources into Splunk like network data, HTTP events, databases, and mobile app data
The presentation provides an overview of Splunk's machine data platform and techniques for analyzing and enriching machine data.
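The lookup-based enrichment described above (in Splunk, the `lookup` command joined against an external table) can be mimicked in plain Python. The field names and values here are hypothetical:

```python
# Sketch of lookup-style event enrichment, mimicking what Splunk lookup
# tables do: events carrying a code are joined against an external table
# to add human-readable fields. All field names and rows are hypothetical.

status_lookup = {  # external lookup table (e.g. loaded from a CSV)
    200: {"status_description": "OK"},
    404: {"status_description": "Not Found"},
    500: {"status_description": "Internal Server Error"},
}

events = [
    {"uri": "/index.html", "status": 200},
    {"uri": "/missing", "status": 404},
]

def enrich(event, lookup, key):
    # Merge the matching lookup row into the event; leave it unchanged
    # when the key has no entry in the table.
    return {**event, **lookup.get(event.get(key), {})}

enriched = [enrich(e, status_lookup, "status") for e in events]
print(enriched[1]["status_description"])  # Not Found
```

The same join pattern covers the other enrichment techniques listed (tags, field aliases, calculated fields): each adds derived fields to raw events at search time rather than rewriting the indexed data.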
SplunkLive! München 2016 - Splunk Enterprise 6.3 - Data Onboarding (Splunk)
This document discusses new features in Splunk Enterprise 6.3, including breakthrough performance and scale improvements that double search and indexing speed and increase capacity by 20-50%, lowering total cost of ownership by 20%+. It also describes new capabilities for advanced analysis and visualization, high-volume event collection, and an enterprise-scale platform with improved support for DevOps, IoT data analysis, and third-party integrations. A new HTTP Event Collector provides a token-based JSON API for ingesting events from various sources.
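The HTTP Event Collector mentioned above accepts JSON events at a documented endpoint (`/services/collector/event`) authenticated with an `Authorization: Splunk <token>` header. A minimal sketch of building such a request follows; the host, token, and event fields are placeholders, and the actual POST is omitted since it needs a reachable HEC endpoint:

```python
import json

# Sketch of an event payload for Splunk's HTTP Event Collector (HEC).
# The endpoint path and "Authorization: Splunk <token>" header follow the
# documented HEC convention; the URL, token, and field values below are
# placeholders, not real credentials.

HEC_URL = "https://splunk.example.com:8088/services/collector/event"
headers = {
    "Authorization": "Splunk 00000000-0000-0000-0000-000000000000",
    "Content-Type": "application/json",
}

payload = {
    "event": {"action": "login", "user": "alice"},  # the event itself
    "sourcetype": "_json",                          # how Splunk should parse it
    "source": "auth-service",                       # where it came from
}
body = json.dumps(payload)

# Sending would be a POST of `body` with `headers` (urllib.request, requests,
# etc.); omitted here because it requires a live HEC endpoint and valid token.
print(body)
```

The token-based model is what makes HEC attractive for high-volume sources: producers need only HTTPS and a token, with no forwarder installed.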
This document discusses new capabilities in Splunk's App for Stream and Splunk MINT products. It begins with an introduction and overview of each product. It then discusses key benefits like real-time insights, efficient cloud data collection, and fast time to value. Example use cases are provided for IT operations, security, and applications visibility. Supported protocols, platforms, and architecture options are also outlined. The document concludes by discussing challenges in mobile app delivery and how Splunk MINT addresses them through mobile data collection and correlation with other data sources.
Taking Splunk to the Next Level - Management Breakout Session (Splunk)
Taking Splunk to the Next Level for Management outlines how Splunk can help organizations quantify the business value of machine data. It provides benchmarks from 400+ customer engagements that show potential efficiencies in IT operations, application delivery, and security and compliance. These include reduced incident resolution times, increased developer productivity, and faster security incident response. The document also offers best practices for aligning a Splunk deployment with key objectives, qualifying issues it can address, quantifying anticipated benefits, and measuring success based on key metrics and customer stories.
Ease out the GDPR adoption with ManageEngine (ManageEngine)
Is your enterprise located in the EU, or does it collect and process personal data of EU citizens? Then it's high time for you to comply with the new GDPR regulation before 25 May 2018. Check out what GDPR is and how ManageEngine can help you comply with this new mandate.
Xerox uses Splunk to monitor its electronic payment processing systems. Some key benefits of Splunk include huge time savings over its previous Tivoli platform, increased efficiencies across the business from automated features, and improved visibility into transaction processing through Splunk dashboards. Splunk helps with IT operations monitoring, compliance activities, fraud management, and SSL certificate management. Xerox is able to track $90 billion in payments annually and monitor fraud in real-time using Splunk.
Splunk MINT for Mobile Intelligence and Splunk App for Stream for Enhanced Op... (Splunk)
Learn what is new in the Splunk App for Stream and how it can help you use wire/network data analytics to proactively resolve application and IT operations issues and to efficiently analyze security threats in real time, across your cloud and on-premises infrastructures. Additionally, you will learn about Splunk MINT, which allows you to gain operational intelligence on the availability, performance, and usage of your mobile apps. You’ll learn how to instrument your mobile apps for operational insight, and how to build the dashboards, alerts, and searches you need to gain real-time insight on your mobile apps.
Make Streaming IoT Analytics Work for You (Hortonworks)
1) Streaming analytics platforms for IoT need to focus on ingesting data from various sources, processing data in real-time, analyzing data, responding to events, and visualizing data.
2) Key areas for building such a platform include using a common abstraction layer, minimizing latency, integrating static and real-time data using lambda architecture, scaling out linearly, enabling rapid application development, and providing data visualization.
3) An example use case of a connected car generates large amounts of data that can be used for various purposes through a streaming analytics platform like predictive maintenance and customized experiences.
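The lambda-architecture point above (point 2) comes down to merging a periodically recomputed batch view with a real-time speed layer at query time. A minimal sketch, using hypothetical connected-car event tallies:

```python
# Minimal sketch of the lambda-architecture pattern: a precomputed batch
# view is merged with a real-time (speed-layer) delta at query time.
# Event types and counts are hypothetical connected-car telemetry tallies.

from collections import Counter

batch_view = Counter({"engine_warning": 120, "hard_brake": 45})  # recomputed periodically
speed_layer = Counter()                                          # updated per incoming event

def ingest_realtime(event_type):
    speed_layer[event_type] += 1

def query(event_type):
    # Serving layer: batch result plus recent events not yet in the batch view.
    return batch_view[event_type] + speed_layer[event_type]

ingest_realtime("hard_brake")
ingest_realtime("hard_brake")
print(query("hard_brake"))       # 47
print(query("engine_warning"))   # 120
```

In a real platform the batch view would be rebuilt from the full event history (and the speed layer reset) on each batch cycle; the sketch only shows the query-time merge.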
Transcend Automation is the authorized business partner for Kepware Technologies in India. We market, promote, and integrate their products for customers in India.
Best Practices for 360 Feedback projects (mrsteamdoc)
In this recorded live session, we share six best practices for successful 360 feedback. The session ends with a question and answer segment with the panel.
Spidergap - Our 360 feedback questionnaire standard template (Alexis Kingsbury)
This is Spidergap's 360-degree feedback assessment / questionnaire template. When you create a new 360 assessment in Spidergap, it will use this template by default.
We recommend you review it and amend it to meet your needs using our online designer (at no additional cost).
You can sign up for free and create your own at https://www.spidergap.com
This document discusses different network topologies. It defines topology as the layout of connected devices on a network and describes common topologies including bus, star, ring, mesh, tree and hybrid. For each topology, it provides details on the network configuration, advantages and disadvantages. It emphasizes that topology selection depends on factors like cost, flexibility and reliability. The document concludes with examples to test understanding of topology types.
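One concrete way to compare the topologies described above is by the number of links needed to connect n devices, a rough proxy for cabling cost versus redundancy. A quick sketch for three of them:

```python
# Link counts needed to connect n devices in three of the topologies
# described above: a quick way to compare cabling cost vs. redundancy.

def links(topology, n):
    if n < 2:
        return 0
    return {
        "star": n - 1,             # every device cabled to a central hub/switch
        "ring": n,                 # each device linked to its two neighbours
        "mesh": n * (n - 1) // 2,  # full mesh: every pair directly connected
    }[topology]

for topo in ("star", "ring", "mesh"):
    print(topo, links(topo, 6))   # star 5, ring 6, mesh 15
```

The star needs the least cabling but fails entirely if the hub fails; the full mesh costs the most links but tolerates any single-link failure, which mirrors the cost/flexibility/reliability trade-off the document raises.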
We, at The TEAM Approach, provide this template to clients using 20/20Insight as a 360 feedback tool. It is personalized each time with screen shots showing the actual scales, etc. used in the client project.
Digital transformation strategy focuses on continuously improving processes, people, and technology to stay ahead of customer expectations. This involves assessing business processes and functions, technology, and organizational structure to establish pain points and opportunities. Recommendations are then made to improve processes, technology, and people using strategic roadmaps, digital tools, and new practices. The goal is to realize business benefits through measurable performance improvements and value creation at the intersection of strategy, processes, and technology, enabled by governance models.
This document discusses strategies for mature and declining markets. It explains that mature markets can still provide opportunities through differentiation, cost leadership, or pursuing additional growth. Declining markets require evaluating demand trends, exit barriers, and competitive intensity to determine the best strategy of harvesting, maintaining, niche, or profitable survival. Overall success relies on sustaining customer loyalty and value through quality, service, cost management, or stimulating further volume growth.
Elevate your Splunk Deployment by Better Understanding your Value Breakfast S... (Splunk)
This document discusses how to better understand the value of a Splunk deployment through assessing data sources. It presents a data source assessment tool to map data sources to use cases and organizational groups to identify opportunities. The tool shows which data sources are indexed and overlap between groups. It aims to maximize benefits from machine data by supporting business objectives and enabling broader impact.
Real Time Business Platform by Ivan Novick from Pivotal (VMware Tanzu Korea)
This document discusses Pivotal's real time business platform for maximizing the value of data investments. It recommends identifying business problems with high ROI potential, then focusing data solutions on high-speed ingestion, consolidation, real-time queries, and analytics to drive real-time insights. The platform combines Gemfire for fast transactions with Greenplum for analytics. Use cases discussed include predictive maintenance, fraud detection, and recommendation engines. The platform provides a complete solution from data capture and analytics to application integration.
There are 250 Database products, are you running the right one? (Aerospike, Inc.)
This webinar discusses choosing the right database for organizations. It will cover industry trends driving data and database evolution, real-world use cases where speed and scale are important, and an architecture overview. Speakers from Forrester and Aerospike will discuss how new applications are challenging traditional databases and how Aerospike's in-memory database provides extremely high performance for large-scale, data-intensive workloads. The agenda includes an industry overview, tips for choosing a database, how data has evolved, examples where low latency is critical, and a question and answer session.
Many companies have discovered that there is “gold” in their server log files and machine data. Closely monitoring this data can improve security, help prevent costly outages and reduce the time it takes to recover from a problem. In this presentation, GTRI’s Micah Montgomery explains how operational intelligence can be gained from machine data, and how Splunk Enterprise can turn this data into actionable insights. Also presenting was NetApp’s Steve Fritzinger, who discussed how to manage the challenges of capturing and storing a flood of data without breaking the bank.
Presented at "Denver Big Data Analytics Day" on May 18, 2016 at GTRI.
Visualizing Your Network Health - Know your Network – DellNMS
An old adage states that you cannot manage what you don’t know. Do you know what devices are on your network, where they are located, how they are configured, what they are connected to, and how they are affected by changes and failures?
Today’s network infrastructure is becoming more and more complex, while demands on the Network Administrator to ensure network availability and performance are higher than ever. Business critical systems depend upon you managing your entire network infrastructure and delivering high-quality service 24/7, 365 days a year. So how do you keep pace?
Learn how real-time visibility into your entire network infrastructure provides the power to manage your assets with greater control.
ROI for IP Address Management (IPAM) Solutions – SolarWinds
IP Address Management is no longer an issue for large enterprises alone. Small to mid-sized enterprises also see a steady increase in the number of IP-based devices they have to manage. In this presentation, we will showcase four customers and the value and ROI they experienced once they implemented SolarWinds IPAM.
In this presentation, you'll learn how to troubleshoot bandwidth issues with NetFlow Analyzer.
Topics covered:
1. Customizing data storage
2. Customizing dashboards
3. Reporting and automation
4. Troubleshooting with forensics
5. Traffic shaping
6. Capacity planning and billing
To know more, visit www.netflowanalyzer.com
This document discusses application performance monitoring tools and describes Flopsar Suite APM. It notes that companies increasingly rely on online applications and services but face challenges like slow response times. Traditional system management tools are expensive and complex. Flopsar Suite provides fault detection, root cause analysis, and intuitive dashboards to help companies monitor application performance in real-time across heterogeneous environments with minimum implementation time and training. It visualizes requests and detects anomalies, and has helped customers reduce incidents, increase productivity, lower costs, and reduce time spent troubleshooting issues.
This document discusses securing enterprise business applications. It notes that major companies rely on applications like SAP, Oracle, and Microsoft Dynamics for critical functions. However, these applications are often vulnerable to attacks like espionage, sabotage, and fraud due to issues like outdated versions, poor patching processes, and internet accessibility. The document argues that securing these widely implemented but vulnerable applications is essential for protecting companies and their sensitive data, operations, and financials.
Visualizing Your Network Health - Driving Visibility in Increasingly Complex... – DellNMS
Dell Performance Monitoring Network Management solutions can provide your IT department with the affordable, in-depth visibility and actionable monitoring needed to manage network infrastructure complexity.
Join our webcast to learn how:
• Dynamic discovery of equipment provides the ability to map current location, configuration and interdependencies.
• Real-time visibility across network infrastructures can help ensure availability and performance.
• Actionable information about network health, faults, bandwidth hogs and performance issues reduces the mean-time-to-resolution.
• Proactive analysis can pinpoint the root cause of intermittent, hard to find problems.
Visualizing and optimizing your network is easier than you think
Modernizing Your DNS Platform with NS1 and ThousandEyes – ThousandEyes
The availability, performance and security of your DNS infrastructure is essential to offering a good digital experience to your users—yet DNS is an often overlooked aspect of architecting digital offerings.
Managed DNS provider NS1 joins ThousandEyes to provide insight into modernizing your DNS platform for improved digital experience. We'll cover ThousandEyes performance findings on managed DNS providers, techniques to improve performance for your users, and monitoring DNS infrastructure for availability and performance.
Keynote presentation from the CMG Conference explaining the challenges in management, and how the monitoring and business visibility provided by modern APM tools are critical to business execution.
Event Streaming Architecture for Industry 4.0 - Abdelkrim Hadjidj & Jan Kuni... – Flink Forward
New use cases under the Industry 4.0 umbrella are playing a key role in improving factory operations, process optimization, cost reduction and quality improvement. We propose an event streaming architecture to streamline the information flow all the way from the factory to the main data center. Building such a streaming architecture enables a manufacturer to react faster to critical operational events. However, it presents two main challenges:
Data acquisition in real time: data should be collected regardless of its location or the access challenges involved. It is commonplace to ingest data from hundreds of heterogeneous data sources (ERP, MES, sensors, maintenance systems, etc.).
Event processing in real time: events collected from different parts of the organization should be combined into actionable insights in real time. This is extremely challenging in a context where events can be lost or delayed.
In this talk, we show how Apache NiFi and MiNiFi can be used to collect data from a wide range of data sources in real time, connecting the industrial and information worlds. Then, we show how Apache Flink's unique features enable us to make sense of this data. For instance, we will explain how Flink's time-management features, such as event-time mode, late-arrival handling, and the watermark mechanism, can be used to address the challenge of processing IoT data originating from geographically distributed plants. Finally, we demonstrate an end-to-end streaming architecture for Industry 4.0 based on the Cloudera DataFlow platform.
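The event-time ideas mentioned in the abstract can be sketched outside Flink as well. The following Python toy mimics how a watermark advances with the maximum timestamp seen, closes tumbling windows, and drops events that arrive after their window has closed; the window size, lateness bound, event format, and plant names are all illustrative assumptions, not Flink APIs.

```python
from collections import defaultdict

WINDOW = 10          # tumbling-window length in event-time units (assumed)
ALLOWED_LATENESS = 2 # how far the watermark lags the max timestamp (assumed)

def window_counts(events):
    """events: iterable of (timestamp, key) pairs, possibly out of order.
    Returns counts per (window_start, key), dropping events whose
    window has already been closed by the watermark."""
    counts = defaultdict(int)
    watermark = float("-inf")
    for ts, key in events:
        # Watermark = highest timestamp seen so far, minus allowed lateness.
        watermark = max(watermark, ts - ALLOWED_LATENESS)
        window_start = (ts // WINDOW) * WINDOW
        if window_start + WINDOW <= watermark:
            continue  # too late: this window has already been finalized
        counts[(window_start, key)] += 1
    return dict(counts)

# An out-of-order stream from two hypothetical plants: the event at ts=3
# arrives after the watermark (10) has passed its window [0, 10), so it
# is dropped, while the slightly out-of-order ts=9 event still counts.
stream = [(1, "plant-a"), (5, "plant-b"), (9, "plant-b"),
          (12, "plant-a"), (3, "plant-a"), (25, "plant-b")]
print(window_counts(stream))
```

In Flink itself the same trade-off is expressed with watermark strategies and allowed lateness on windowed streams; the sketch only shows why a lateness bound is needed when plants deliver events at different delays.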
Exploding data growth doesn’t mean you have to sacrifice data security or compliance readiness. The more clarity you have into where your sensitive data is and who is accessing it, the easier it is to secure and meet compliance regulations.
Walk through this presentation to learn how to:
- Detect and block cyber security events in real-time
- Protect large and diverse data environments
- Simplify compliance enforcements and reporting
- Take control of escalating costs.
Privacy Impact Assessment Management System (PIAMS) – The Canton Group
The document discusses the Privacy Impact Assessment Management System (PIAMS) developed by The Canton Group to improve the privacy impact assessment (PIA) process for federal agencies. PIAMS automates the collection, storage, and review of PIA documents to reduce costs and improve transparency. It replaces manual PIA processes and filing with a web-based system. The Internal Revenue Service successfully implemented PIAMS, reducing the time to complete PIAs by a factor of 10 and decreasing labor hours.
Embrace IT Operations Management with OpManager to get the visibility into your network, server & storage, application, and service layers. Find the exact fault in minutes and troubleshoot quickly.
Operating a Highly Available Cloud Service – Depankar Neogi
Operating a highly available cloud service is not just about technology and architecture. It has a lot to do with people and processes. Everything fails all the time. So, how do you ensure you have the right people and the right processes in the right places to run a highly available web service? This talk covers the people, processes, technology, and tools required to run a highly available web service.
This document discusses data intensive applications and some of the challenges, tools, and best practices related to them. The key challenges with data intensive applications include large quantities of data, complex data structures, and rapidly changing data. Common tools mentioned include NoSQL databases, message queues, caches, search indexes, and batch/stream processing frameworks. The document also discusses concepts like distributed systems architectures, outage case studies, and strategies for improving reliability, scalability, and maintainability in data systems. Engineers working in this field need an accurate understanding of various tools and how to apply the right tools for different use cases while avoiding common pitfalls.
The document outlines 8 steps for organizations to address shadow IT by bringing unauthorized cloud services procured by end users under corporate IT oversight. It defines shadow IT, explains why it exists from both user and IT perspectives, and recommends that IT leaders take a balanced view to enable innovation while managing risks. Key steps include quantifying current shadow IT usage, educating on security risks, meeting with business units, establishing governance over approved cloud providers and services, and publishing a catalog of supported apps.
Staying Under These Performance Redlines Will Improve VoIP Call Quality – panagenda
Webinar Recording: https://www.panagenda.com/webinars/staying-under-these-performance-redlines-will-improve-voip-call-quality/
Please join us for this discussion with Ståle Hansen (Microsoft RD & Teams MVP). We are going to explore the maximum limits for Hardware and Networking performance that impact Teams call quality.
If your company is relying on Microsoft Teams for all calls and meetings, then you need to attend this webinar. This will be a technical discussion with Microsoft Teams UC experts on the performance redlines that cannot be crossed. We will review actual metrics from our current customers including over 1 Million endpoints. If you want to improve VoIP call quality for your employees then you must remain beneath these barriers.
During the webinar you will receive an introduction to OfficeExpert Endpoint Performance Monitoring (EPM). This new SaaS solution provides accurate data analytics for Microsoft 365 and Teams voice performance from the end-user perspective. If you want to identify the performance redlines for your end-users and proactively fix hardware limitations and network slowdowns then this solution will provide the actionable insights you need.
During the webinar you will learn about the maximum limits for device hardware and networking performance. Staying underneath these redlines will ensure acceptable Teams call quality performance for your employees.
What you will learn
- Maximum CPU and Memory Usage Limits
- Performance redline you cannot cross for Round-Trip-Times (RTT)
- How to determine bad ISP performance
- How to proactively improve VoIP call quality
- Why getting to the Microsoft Cloud Global Network quickly is so important!
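The redline idea above can be sketched as a simple threshold check over endpoint metrics. The metric names, sample values, and thresholds below are illustrative assumptions, not figures from the webinar; the thresholds follow commonly cited Microsoft guidance for Teams media quality (RTT under 100 ms, jitter under 30 ms, packet loss under 1%).

```python
# Hypothetical redlines for Teams call quality (assumed values, see above).
REDLINES = {"rtt_ms": 100, "jitter_ms": 30, "loss_pct": 1.0, "cpu_pct": 90}

def breached_redlines(metrics):
    """Return the sorted names of metrics at or above their redline.
    Metrics missing from the sample are treated as healthy."""
    return sorted(name for name, limit in REDLINES.items()
                  if metrics.get(name, 0) >= limit)

# A hypothetical endpoint sample: RTT and packet loss cross their redlines.
endpoint = {"rtt_ms": 142, "jitter_ms": 12, "loss_pct": 2.5, "cpu_pct": 55}
print(breached_redlines(endpoint))  # ['loss_pct', 'rtt_ms']
```

A monitoring agent would evaluate checks like this per call sample and alert when any redline is breached repeatedly rather than on a single spike.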
With stream analytics for your data in motion from ExtraHop, you can confidently migrate applications to virtualized environments and manage their performance.
With insight from ExtraHop, the six-person IT team at Geel has correlated, cross-tier visibility across all applications and systems, both on-premises and in the cloud.
The IT team at Zonar is leveraging wire data from ExtraHop to streamline their own operations and ensure better performance across the infrastructure. In addition to a large-scale infrastructure mapping initiative, the team is also using wire data to troubleshoot issues from code-level errors to machines throwing millions of DNS requests.
Managed Services Provider Serves Customers Better with Wire Data – ExtraHop Networks
ACS Solutions GmbH (ACS) is a managed services provider, delivering hosting, application and infrastructure, and cloud computing services. Lack of visibility into Citrix performance problems meant not only unhappy customers, but failure to satisfy SLAs. Analysis of ACS' wire data delivered critical insight into performance across the entire infrastructure, including the Citrix environment.
Conga case study: Application visibility in AWS with ExtraHop – ExtraHop Networks
Conga is a leading Salesforce application partner. They use the ExtraHop platform to gain new insights into their application performance in AWS as well as the real-time activities of their users.
Learn more at http://www.extrahop.com.
ExtraHop provides real-time insights through wire data analytics and their Operational Excellence offering helps organizations quickly apply those insights through a tailored engagement. The engagement includes analyzing business requirements, deploying ExtraHop, integrating other systems, customizing metrics and visualizations, and providing training and documentation. Examples demonstrate how the engagement helped customers track customer account activity, monitor purchasing behavior and supply chains, monitor Citrix performance, and understand application usage.
The QuickStart Deployment from ExtraHop's Solutions Architecture team provides an expertly deployed ExtraHop solution aligned with business priorities. It ensures a fast return on investment by having ExtraHop experts handle the deployment instead of burdening internal teams. The deployment includes an initial scoping meeting, network discovery, prioritizing data collection for critical applications, implementing the baseline configuration, and creating customized dashboards aligned with business needs.
This troubleshooting guide shows you how to identify and troubleshoot common web application performance problems using the ExtraHop Discovery Edition, a free virtual appliance for wire data analytics.
HCL Notes and Domino License Cost Reduction in the World of DLAU – panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX model have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new type of licensing works and what benefits it brings you. Above all, you certainly want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We will explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some practices that can lead to unnecessary expenses, for example using a person document instead of a mail-in for shared mailboxes. We will show you such cases and their solutions. And of course, we will explain the new licensing model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It will give you the tools and know-how to keep an overview. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
These topics will be covered
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
A Comprehensive Guide to DeFi Development Services in 2024 – Intelisync
DeFi represents a paradigm shift in the financial industry. Instead of relying on traditional, centralized institutions like banks, DeFi leverages blockchain technology to create a decentralized network of financial services. This means that financial transactions can occur directly between parties, without intermediaries, using smart contracts on platforms like Ethereum.
In 2024, we are witnessing an explosion of new DeFi projects and protocols, each pushing the boundaries of what’s possible in finance.
In summary, DeFi in 2024 is not just a trend; it’s a revolution that democratizes finance, enhances security and transparency, and fosters continuous innovation. As we proceed through this presentation, we'll explore the various components and services of DeFi in detail, shedding light on how they are transforming the financial landscape.
At Intelisync, we specialize in providing comprehensive DeFi development services tailored to meet the unique needs of our clients. From smart contract development to dApp creation and security audits, we ensure that your DeFi project is built with innovation, security, and scalability in mind. Trust Intelisync to guide you through the intricate landscape of decentralized finance and unlock the full potential of blockchain technology.
Ready to take your DeFi project to the next level? Partner with Intelisync for expert DeFi development services today!
Trusted Execution Environment for Decentralized Process Mining – LucaBarbaro3
Presentation of the paper "Trusted Execution Environment for Decentralized Process Mining" given during the CAiSE 2024 Conference in Cyprus on June 7, 2024.
5th LF Energy Power Grid Model Meet-up Slides – DanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
- Insightful presentations covering two practical applications of the Power Grid Model.
- An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
- An interactive brainstorming session to discuss and propose new feature requests.
- An opportunity to connect with fellow Power Grid Model enthusiasts and users.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
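One practical implication worth noting: however the XML markup is produced, AI-generated output should be machine-checked before it enters a pipeline. A minimal Python sketch of such a check (the function name is ours, not from the presentation) validates well-formedness only; full schema validation against XSD or Schematron would require an external library such as lxml.

```python
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    """Return True if xml_text parses as well-formed XML."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

# A balanced document passes; a mismatched end tag does not.
print(is_well_formed("<doc><p>ok</p></doc>"))  # True
print(is_well_formed("<doc><p>broken</doc>"))  # False
```

Gating generated markup on a check like this is cheap insurance against a model emitting unbalanced or mis-nested tags.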
Introduction of Cybersecurity with OSS at Code Europe 2024 – Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
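The vulnerability-handling step described above boils down to matching locked dependency versions against an advisory database, which is what auditing tools in the Ruby ecosystem (such as bundler-audit) automate. The Python toy below illustrates only that matching step; the advisory entries, gem names, and version tuples are invented for illustration.

```python
# Invented advisory database: gem name -> advisories with the first
# patched version. Real tools pull this from curated advisory feeds.
ADVISORIES = {
    "examplegem": [{"cve": "CVE-0000-0001", "patched": (2, 3, 1)}],
}

def vulnerable(lockfile):
    """lockfile: {gem_name: (major, minor, patch)}.
    Return the CVE ids matching any locked version older than its fix."""
    hits = []
    for name, version in lockfile.items():
        for adv in ADVISORIES.get(name, []):
            if version < adv["patched"]:  # fixed only in 'patched' and later
                hits.append(adv["cve"])
    return hits

# The locked 2.2.0 predates the 2.3.1 fix, so its CVE is reported.
print(vulnerable({"examplegem": (2, 2, 0), "othergem": (1, 0, 0)}))
```

Tuple comparison gives lexicographic version ordering here; real version schemes (pre-releases, build metadata) need a proper version parser.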
HCL Notes and Domino License Cost Reduction in the World of DLAU – panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Letter and Document Automation for Bonterra Impact Management (fka Social Sol... – Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on automated letter generation for Bonterra Impact Management using Google Workspace or Microsoft 365.
Interested in deploying letter generation automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
leewayhertz.com - AI in predictive maintenance: use cases, technologies, benefits ... – alexjohnson7307
Predictive maintenance is a proactive approach that anticipates equipment failures before they happen. At the forefront of this innovative strategy is Artificial Intelligence (AI), which brings unprecedented precision and efficiency. AI in predictive maintenance is transforming industries by reducing downtime, minimizing costs, and enhancing productivity.
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A... – Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Fueling AI with Great Data with Airbyte Webinar – Zilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Monitoring and Managing Anomaly Detection on OpenShift.pdf – Tosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
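The kind of model such a pipeline trains and ships to an edge device can be very small. Here is a minimal z-score anomaly detector in Python as a sketch; the 2-standard-deviation threshold and the sensor readings are illustrative assumptions, not material from the tutorial.

```python
import statistics

def find_anomalies(readings, threshold=2.0):
    """Flag readings more than `threshold` standard deviations
    from the mean of the batch."""
    mean = statistics.fmean(readings)
    stdev = statistics.stdev(readings)
    return [x for x in readings if abs(x - mean) > threshold * stdev]

# A hypothetical temperature trace with one obvious spike.
sensor = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2, 35.7, 20.1, 19.7]
print(find_anomalies(sensor))  # [35.7]
```

Note that a large outlier inflates the standard deviation and can mask itself at stricter thresholds; robust statistics (median and MAD) or a trained model, as in the tutorial's workflow, handle this better in production.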
Skybuffer AI: Advanced Conversational and Generative AI Solution on SAP Busin... – Tatiana Kojar
Skybuffer AI, built on the robust SAP Business Technology Platform (SAP BTP), is the latest and most advanced version of our AI development, reaffirming our commitment to delivering top-tier AI solutions. Skybuffer AI harnesses all the innovative capabilities of the SAP BTP in the AI domain, from Conversational AI to cutting-edge Generative AI and Retrieval-Augmented Generation (RAG). It also helps SAP customers safeguard their investments into SAP Conversational AI and ensure a seamless, one-click transition to SAP Business AI.
With Skybuffer AI, various AI models can be integrated into a single communication channel such as Microsoft Teams. This integration empowers business users with insights drawn from SAP backend systems, enterprise documents, and the expansive knowledge of Generative AI. And the best part of it is that it is all managed through our intuitive no-code Action Server interface, requiring no extensive coding knowledge and making the advanced AI accessible to more users.
Ocean lotus Threat actors project by John Sitima 2024 (1).pptx – SitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
Programming Foundation Models with DSPy - Meetup Slides – Zilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
1. Fast 360 Assessment Report
Real-Time Operational Intelligence Findings
Prepared by:
Joanna Smith, Director of IT, XYZ Corp
Nick Tesla, Systems Engineer, ExtraHop
Ada Lovelace, Regional Sales Manager, ExtraHop
2. ExtraHop | Fast 360 Assessment
EXECUTIVE SUMMARY OF FAST 360 ASSESSMENT FINDINGS
KEY FINDINGS FOR OBSERVED PERIOD
Cipher Suite and Encryption
5,660 weak cipher sessions were
observed over 20 hosts. This represents
a security risk.
DNS
15% of DNS requests are failing due to
IPv6 issues having a 2-4 second impact
on end-user performance.
Citrix
The longest Citrix login during the
observed period was 2.46 minutes.
Database
4,100 DB errors occurred and the
slowest query process time was over
10s.
Storage
A frequently run backup script is
slowing storage performance and
congesting the network.
Asset Discovery
Two FTP servers were discovered in
areas of the network where this protocol
is not allowed.
SMTP
There were 5,000 unencrypted SMTP
sessions, indicating a potential security
risk.
Web Optimization
Our website is returning 3.5K server
errors each hour, wasting server
resources.
Network
1.04 million TCP retransmission timeouts
were observed, adding roughly 5-second
delays for end users.
Real User Monitoring
Website responses for Safari browsers
are 39% slower than for other browsers.
VOIP
A high number of SIP errors represents
end users who cannot make calls.
Security Point Solutions
2,500 Shellshock attempts were
detected in HTTP and DHCP payloads.
Cloud Applications
3 GB of data has been sent to cloud
storage apps outside of corporate policy.
FTP
There were no FTP requests originating
outside of corporate headquarters, which
is expected.
3. ExtraHop | Fast 360 Assessment
WHAT EXTRAHOP CUSTOMERS ARE SAYING
Research by TechValidate
What has most surprised you about ExtraHop?
“The many, many insights you can gain from this platform. We haven’t even scratched the surface.”
– Brian Bohanon, IT Director, Aaron’s, Inc.
http://www.techvalidate.com/tvid/A59-E9B-B75
“In the tech business, you always hear from vendors that their solution will be easy to install, will be flexible to operate,
or will have an exceptional ROI. These promises are almost always too good to be true. ExtraHop has these stories as
well, but they consistently exceed expectations every time.”
– Todd Forgie, IT Vice President, MEDHOST
http://www.techvalidate.com/tvid/A6A-E5A-80B
TVID: 189-0A8-F83
55% of surveyed IT organizations paid
back their investment in ExtraHop
in 6–12 months or less.
81% of surveyed IT organizations
improved mean-time-to-resolution
by 2x or more with ExtraHop.
TVID: 792-E08-501
4. ExtraHop | Fast 360 Assessment
CIPHER SUITE AND ENCRYPTION MONITORING – FINDINGS
INDUSTRY FACTS
• A data breach cost U.S. companies an
average of $6.5M per incident in 2014
– Ponemon Institute
• The average Global 5000 company spends
$15 million to recover from a certificate
outage and faces another $25 million in
potential penalties
– Ponemon Institute
• Only 40% of HTTP servers support TLS
or SSL and present valid certificates
– Redhat (scan of Alexa top 1M sites)
• 20% of servers are using broken cipher
suites making encrypted data vulnerable
– Redhat
• RC4 is still used in >18% of HTTPS
servers – Redhat
KEY FINDINGS FOR CIPHER SUITE
AND ENCRYPTION MONITORING
• Sensitive information may be exposed
to malicious actors, which can directly
cause further data loss and security
breaches.
5,660
insecure sessions
• Sessions using RC4 encryption are
considered insecure and expose your
company to data theft.
64,000
sessions
• Days since the oldest SSL certificate
expired. This exposes the enterprise
and its customers to malicious
cybercrime.
1,900
days
• Number of sessions observed using
SSLv3, an insecure version vulnerable
to man-in-the-middle attacks.
1,650
insecure sessions
See the Appendix for Cipher Suite and
Encryption dashboards
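The weak-cipher and expired-certificate checks above can be approximated in a few lines of Python. This is a minimal sketch with an illustrative (not exhaustive) weak-cipher list and hypothetical session values, not ExtraHop's detection logic:

```python
from datetime import date

# Illustrative markers for weak cipher suites (RC4, single DES,
# export-grade, NULL) and insecure protocol versions.
WEAK_MARKERS = ("RC4", "DES-CBC", "EXPORT", "NULL")
INSECURE_PROTOCOLS = {"SSLv2", "SSLv3"}

def is_weak_session(protocol: str, cipher: str) -> bool:
    """Flag a session whose protocol or negotiated cipher is insecure."""
    return protocol in INSECURE_PROTOCOLS or any(m in cipher for m in WEAK_MARKERS)

def days_since_expiry(not_after: date, today: date) -> int:
    """How long a certificate has been expired (negative if still valid)."""
    return (today - not_after).days

print(is_weak_session("TLSv1.2", "ECDHE-RSA-RC4-SHA"))         # True: RC4
print(days_since_expiry(date(2010, 1, 1), date(2015, 3, 16)))  # 1900 days
```

Fed with per-session protocol, cipher, and certificate data, this yields the counts shown above.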
5. ExtraHop | Fast 360 Assessment
DNS MONITORING AND ANALYSIS – FINDINGS
• Timeouts will have an impact on
application performance and user
experience. If associated with fee-
based API driven services you may be
overcharged.
298,000
request timeouts
• Sauce Labs, a cloud-based automated
testing service, is causing 35% of
timeouts. This should be investigated
to ensure you’re not being billed for this
traffic.
35%
of request timeouts
• IPv6 (AAAA) look-ups have been
potentially causing 2 – 4 second delays
for clients and applications. This should
be fixed immediately.
1,160
AAAA look-ups
• DNS errors may be caused by
misconfiguration. Fixing these may
resolve application issues and
slowness.
15,000
DNS response errors
KEY FINDINGS FOR DNS MONITORING AND ANALYSIS
INDUSTRY FACTS
• A DNS dashboard for performance,
availability, and risk mitigation is
recommended as a best practice for any
enterprise by DHS, the ITSRA working
group, and ICANN
– U.S. Department of Homeland Security
See the Appendix for DNS Monitoring
dashboards
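A back-of-envelope way to quantify these findings from DNS logs; the record format and failure classification below are assumptions for illustration only:

```python
# Hypothetical DNS log records: (query_type, rcode, response_ms),
# where rcode None means the request timed out.
records = [
    ("A",    "NOERROR",    12),
    ("AAAA", None,       2200),  # timed-out IPv6 look-up: client waited ~2.2 s
    ("AAAA", None,       4000),
    ("A",    "NXDOMAIN",    9),
    ("A",    "SERVFAIL",   15),
]

def failure_rate(recs):
    """Share of requests that timed out or returned SERVFAIL."""
    failed = sum(1 for _, rcode, _ in recs if rcode is None or rcode == "SERVFAIL")
    return failed / len(recs)

def aaaa_timeout_delay_ms(recs):
    """Total client-visible wait caused by timed-out AAAA look-ups."""
    return sum(ms for qtype, rcode, ms in recs if qtype == "AAAA" and rcode is None)

print(f"failure rate: {failure_rate(records):.0%}")
print(f"AAAA timeout delay: {aaaa_timeout_delay_ms(records)} ms")
```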
6. ExtraHop | Fast 360 Assessment
DATABASE HEALTH AND PERFORMANCE MONITORING – FINDINGS
KEY FINDINGS FOR DATABASE HEALTH
AND PERFORMANCE MONITORING INDUSTRY FACTS
• Database profilers can impact performance
by up to 20% – Microsoft
• 25% of DBAs surveyed reported unplanned
outages of up to 1 day, while 40% reported
outages between 1-5 days – Oracle
• High error rates have a negative impact
on the health and performance of your
databases. ExtraHop shows SQL
transaction details to troubleshoot
errors.
4,100
errors
• Worst database server processing time
during the observed period. More than
100ms is generally considered to have
a negative impact on application
performance.
428
milliseconds
• Privileged user logins should be
continuously monitored in order to
identify anomalous behavior that can
indicate a data breach.
99
privileged user logins
See the Appendix for Database Health and
Performance dashboards
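Error counts and worst-case processing times like those above can be pulled from transaction records with a short script. This is a sketch over hypothetical records, using the 100 ms threshold cited above:

```python
# Hypothetical SQL transaction records: (statement, processing_ms, error).
txns = [
    ("SELECT * FROM orders",    428, None),
    ("UPDATE inventory SET ..",  40, "deadlock detected"),
    ("SELECT * FROM users",      12, None),
    ("INSERT INTO audit ..",     95, "duplicate key"),
]

SLOW_MS = 100  # >100 ms generally hurts application performance

errors = [stmt for stmt, _, err in txns if err]
worst_ms = max(ms for _, ms, _ in txns)
slow = [(stmt, ms) for stmt, ms, _ in txns if ms > SLOW_MS]

print(f"{len(errors)} errors, worst processing time {worst_ms} ms")
print("slow statements:", slow)
```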
7. ExtraHop | Fast 360 Assessment
STORAGE MONITORING – FINDINGS
KEY FINDINGS FOR
STORAGE MONITORING INDUSTRY FACTS
• PCI, HIPAA, and Sarbanes-Oxley all
require file audit access – TechNet
• In Windows Server 2008, CHKDSK requires
6 hours to identify corrupt files in a
system with 300 million files – TechNet
• Files that should be cached based on
NFS response counts. This will
improve network utilization and
experience for users in branch offices.
38
files
• Storage errors can be investigated to
identify corrupted files, access, and
performance issues.
1.42K
errors
• A scheduled backup job is causing zero
windows (extreme latency) in NAS
response and causing application
errors.
1
scheduled backup
See the Appendix for Storage Monitoring
dashboards
8. ExtraHop | Fast 360 Assessment
SMTP MONITORING – FINDINGS
KEY FINDINGS FOR SMTP
PERFORMANCE MONITORING INDUSTRY FACTS
• In a survey of over 1,000 organizations,
72% experienced unplanned email outages
in a year. Of those, 71% lasted longer than
four hours – MessageOne
• ~21 billion emails appearing to come from
well-known commercial senders did not
actually come from their legitimate IP
addresses (between October 2014 and
March 2015) – Return Path
• Email was the main channel for 8.2% of all
data leaks globally in 2014 – InfoWatch
• High SMTP error rates could indicate
email delivery failures that impact
employee productivity and business
operations.
2,000
errors
• Spikes in server processing time
should be investigated as they could be
indicators of issues like attempted
overloading of mail servers, malicious
spamming, or compromised clients.
300
milliseconds
• Encrypted sessions protect sensitive
information in flight. A large number of
unencrypted sessions could increase
potential security risks and cause non-
compliance with policy.
5,000
unencrypted sessions
See the Appendix for SMTP Monitoring
dashboards
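Finding unencrypted SMTP sessions amounts to checking which sessions never negotiated STARTTLS. A minimal sketch over hypothetical session records (the field layout is assumed):

```python
# Hypothetical SMTP session records: (client_ip, server, used_starttls).
sessions = [
    ("10.0.1.5",  "mail.example.com",  True),
    ("10.0.2.9",  "mail.example.com",  False),
    ("10.0.3.14", "relay.example.com", False),
]

def unencrypted(sess):
    """Sessions that never negotiated STARTTLS: mail crossed the wire in cleartext."""
    return [(ip, srv) for ip, srv, tls in sess if not tls]

plaintext = unencrypted(sessions)
print(f"{len(plaintext)} of {len(sessions)} sessions unencrypted:", plaintext)
```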
9. ExtraHop | Fast 360 Assessment
WEB OPTIMIZATION – FINDINGS
• 302 redirects indicate a temporary
change in URI. If the change is actually
permanent, use 301 redirects for better SEO.
38
302 redirect codes
• 500 errors occur when a server
encounters an error but can’t provide
more information. If this number is not
zero, you have a problem.
3.5k/hr
500 server errors
• 404 errors can indicate broken links
pointing to your site, or other misplaced
resources. Users seeing these may
leave your site and never return.
101k/hr
404 errors
• GIF files are notoriously large, and your
site is seeing many requests for them.
Consider a different image format to
reduce bandwidth consumption on your
most requested assets.
1.6M
Requests for .gif images
KEY FINDINGS FOR WEB OPTIMIZATION
INDUSTRY FACTS
• People will visit a website less often if it is
slower than a close competitor by more
than 250 milliseconds – New York Times
• A 1-second delay in page response
decreases customer satisfaction by 16
percent, which in turn results in a 7 percent
reduction in conversions – Trac Research
See the Appendix for Web Optimization
dashboards
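The status-code tallies above come straight out of access logs; a minimal sketch with hypothetical log data:

```python
from collections import Counter

# Hypothetical status codes from one hour of access logs.
statuses = [200, 200, 302, 404, 500, 404, 301, 200, 500, 302]

tally = Counter(statuses)
server_errors = sum(n for code, n in tally.items() if 500 <= code < 600)

print("302 redirects (candidates for 301):", tally[302])
print("5xx server errors:", server_errors)
print("404s (possible broken links):", tally[404])
```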
10. ExtraHop | Fast 360 Assessment
REAL USER MONITORING – FINDINGS
KEY FINDINGS FOR REAL USER MONITORING
INDUSTRY FACTS
• Up to a 7% increase in conversion rate
can be achieved for every 1 second of
performance improvement – KissMetrics
• Up to 1% of incremental revenue can be
earned for every 100ms of performance
improvement – Walmart Page Speed Study
• A one second delay can decrease
customer satisfaction by 16%
– Aberdeen Group
• Perceived page load time by end-
users. This is good performance but
should be monitored to ensure
revenue, conversions, and user
satisfaction.
1
second
• Server processing is the largest
contributor to performance. Pages are
usable sooner, but this should be
watched.
2.4
seconds
• Dropped data segments forced
application retransmissions impacting
end-user performance and should be
addressed immediately.
330,000
retransmissions
• Is the most common end-user platform.
Understanding platforms, browsers,
and usage focuses application,
network, and infrastructure tuning
efforts.
Microsoft
Windows
See the Appendix for Real User Monitoring
dashboards
11. ExtraHop | Fast 360 Assessment
VOIP MONITORING – FINDINGS
KEY FINDINGS FOR
VOIP MONITORING INDUSTRY FACTS
• Packet capture is the most relied upon
troubleshooting method for VoIP issues
– Cisco support forum, 2014
• Voice was ranked as the second-most
used communication method (86%,
behind email at 93%) for employees
– InformationWeek Reports
• 68% of consumers would hang up as a
result of poor call quality and call a
competitor instead
– Customer Experience Foundation
• Minimum MOS score observed for RTP
provides insight into service level
violations. MOS ranks from 1 to 5 with
1 being the worst.
2.88
mean opinion score (MOS)
• RTP jitter is acceptable, with the
maximum jitter reaching only 9ms.
Excessive jitter makes calls
unintelligible.
9
milliseconds
• Responses with the 401 status code
indicate unauthorized activity and
should be investigated.
2,800
SIP 401 status codes
• Call initiations that failed due to “bad
event from client” errors. Users could
not make calls.
402
SIP “bad event
from client” errors
See the Appendix for VOIP Monitoring
dashboards
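MOS can be estimated from measured network conditions via the E-model's R factor. The R-to-MOS mapping below follows the ITU-T G.107 formula, while the R estimate itself uses coarse delay and loss impairments as a stand-in for the full model:

```python
def mos_from_r(r: float) -> float:
    """Map an E-model R factor (0-100) to a 1-5 MOS (ITU-T G.107 mapping)."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

def r_factor(one_way_delay_ms: float, loss_pct: float) -> float:
    """Very rough R estimate: start from 93.2 and subtract simplified
    delay and loss impairments (coarse stand-ins for the full model)."""
    delay = 0.024 * one_way_delay_ms
    if one_way_delay_ms > 177.3:
        delay += 0.11 * (one_way_delay_ms - 177.3)
    return 93.2 - delay - 2.5 * loss_pct

print(round(mos_from_r(r_factor(20, 0)), 2))   # healthy network: ~4.4
print(round(mos_from_r(r_factor(300, 5)), 2))  # congested network: ~3.1
```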
12. ExtraHop | Fast 360 Assessment
CLOUD APP MONITORING – FINDINGS
KEY FINDINGS FOR
CLOUD APPLICATIONS
• Cloud application bandwidth
consumption shows the max load that
is being used. High cloud app
bandwidth could impact data center
traffic.
1 MB/s
bandwidth consumed by cloud
apps
• Shows the amount of data being stored
in the cloud, including storage
destinations that don’t match your
policies.
6.8 GB/3 GB
compliant/non-compliant
cloud storage
• High bandwidth consumption on
Facebook can indicate lost employee
productivity.
367 MB
total Facebook traffic
• Large multimedia usage can impact
network performance. This can be an
easy area to recapture bandwidth.
9.2 GB
data used by top Spotify user
INDUSTRY FACTS
• Browser-based/cloud apps were the
largest source of data leakage in 2014 at
35.1% – InfoWatch 2014 Report
• Nearly 80% of employees surveyed cited
non-work related Internet use or social
media as a top productivity killer
– CareerBuilder
• Estimated growth of 23% in datacenter
traffic and 33% in cloud traffic year over
year through 2018 is driving the need to
increase bandwidth – Cisco Global Cloud Index
See the Appendix for Cloud App Monitoring
dashboards
13. ExtraHop | Fast 360 Assessment
FTP MONITORING – FINDINGS
KEY FINDINGS FOR
FTP MONITORING INDUSTRY FACTS
• 68% of organizations use FTP as a
mainstay file transfer method
– Osterman Research
• PCI Data Security Standard 2.0 requires
monitoring data access and capturing audit
data – PCI Standards Security Council
• FTP should be monitored for both data
breaches and data stashing
• The hackers who stole millions of credit
card details from Target in 2013 used FTP
to exfiltrate the data – Krebs on Security
• During the observed period, there were
242,000 FTP errors (550 – Failed to
open file) attributed to
whoami.akamai.net.
242K
FTP errors
• There were no FTP requests
originating outside of corporate
headquarters. This is expected; FTP
requests originating elsewhere can
indicate malicious behavior.
0
FTP requests originating
outside of headquarters
• Only four files were transferred during
the observed period. ExtraHop analysis
includes file names and sizes.
4
files transferred
See the Appendix for FTP Monitoring
dashboards
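Checking FTP request origin against headquarters subnets is a simple membership test. A sketch using assumed (hypothetical) subnet ranges:

```python
import ipaddress

# Assumed headquarters subnets where FTP clients are expected.
HQ_NETWORKS = [ipaddress.ip_network("10.10.0.0/16"),
               ipaddress.ip_network("192.168.50.0/24")]

def outside_hq(client_ips):
    """FTP client IPs that fall outside every headquarters subnet."""
    return [ip for ip in client_ips
            if not any(ipaddress.ip_address(ip) in net for net in HQ_NETWORKS)]

clients = ["10.10.4.7", "192.168.50.22", "203.0.113.9"]
print(outside_hq(clients))  # ['203.0.113.9'] would warrant investigation
```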
14. ExtraHop | Fast 360 Assessment
SECURITY VULNERABILITY MONITORING – FINDINGS
KEY FINDINGS FOR SECURITY
VULNERABILITY MONITORING INDUSTRY FACTS
• Within days of the discovery of the
Shellshock vulnerability, CloudFlare
reported blocking more than 1.1M attacks –
CloudFlare
• At the time of Heartbleed’s disclosure, more
than 500,000 (17%) of the internet’s secure
web servers were believed to be vulnerable
to attack. The Community Health Systems
breach compromised 4.5M patient records –
Wikipedia
• Researchers at the University of Michigan
estimated that 36.7% of browser-trusted
sites were vulnerable to the FREAK attack
– Threatpost
• Number of Shellshock attempts
detected in HTTP and DHCP payloads.
2,500
Shellshock attempts
• Number of exploit attempts of the
HTTP.sys Range header vulnerability in
the payload of HTTP requests. This
vulnerability impacts Microsoft
Windows and Windows Server.
0
HTTP.sys attempts
• Number of SSL heartbeats, which can
be exploited by the Heartbleed bug.
Validate that the correct version of
OpenSSL is in use.
500
SSL heartbeats
See the Appendix for Security Vulnerability
Monitoring dashboards
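Shellshock attempts are recognizable by the "() {" prefix smuggled into request values. A minimal detection sketch (one simple pattern, not a complete signature set):

```python
import re

# The Shellshock trigger is a value beginning with "() {", typically
# smuggled in HTTP headers such as User-Agent, Cookie, or Referer.
SHELLSHOCK = re.compile(r"\(\)\s*\{")

def is_shellshock(value: str) -> bool:
    """True if a header or payload value carries the Shellshock pattern."""
    return bool(SHELLSHOCK.search(value))

print(is_shellshock("() { :; }; /bin/cat /etc/passwd"))  # True
print(is_shellshock("Mozilla/5.0 (Windows NT 10.0)"))    # False
```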
15. ExtraHop | Fast 360 Assessment
CITRIX MONITORING - FINDINGS
KEY FINDINGS FOR CITRIX XENAPP AND
XENDESKTOP PERFORMANCE MONITORING INDUSTRY FACTS
• Citrix admins spend over 30% of their time
troubleshooting performance issues.
(DABCC)
• Over 50% of performance issues Citrix
admins encounter are not caused by
Citrix. (DABCC)
• ~50% logon time improvement can be
achieved with profile size reduction, growth
mitigation, and appropriate profile
management tactics. (Citrix)
• ExtraHop is verified as Citrix Ready for
Citrix XenApp, XenDesktop and NetScaler.
• 95th-percentile time to log on to
mission-critical applications delivered by Citrix.
20
seconds per Citrix login (95th
percentile)
• Lost hours of productivity due to slow
Citrix login (enterprise-wide).
166
hours per month
• A high number of CIFS errors
correlated to one device indicates a
likely corrupted Citrix profile.
Troubleshoot immediately.
5%
CIFS traffic resulting in errors
• High maximum load times indicate that
some of your Citrix users are having a
bad user experience. Remediate
quickly.
2.46
minute load times
See the Appendix for Citrix Monitoring
dashboards
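The percentile and lost-hours figures above can be reproduced from raw login durations; a sketch using a nearest-rank percentile and hypothetical sample data:

```python
import math

def percentile(values, pct):
    """Nearest-rank percentile (pct in 0-100) of a sample list."""
    ordered = sorted(values)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical Citrix login durations in seconds; one outlier login.
logins = [8, 9, 10, 11, 12, 14, 15, 18, 20, 148]

print(f"95th-percentile login: {percentile(logins, 95)} s")
print(f"median login: {percentile(logins, 50)} s")
print(f"total time spent waiting: {sum(logins) / 3600:.3f} h")
```

Summing login durations across all users over a month gives the productivity-loss figure.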
16. ExtraHop | Fast 360 Assessment
ASSET CLASSIFICATION – FINDINGS
INDUSTRY FACTS
• 45% of surveyed IT pros said they manage
multiple pieces of software providing
duplicative functionality
– Information Week
• 20% of all racked IT equipment isn't being
used, and organizations could benefit from
decommissioning it
– Uptime Institute
• It takes companies 205 days on average
to detect that their environment has
been compromised
– FireEye
KEY FINDINGS FOR ASSET CLASSIFICATION
• This number shows large growth in
devices communicating using TCP and
can be a leading indicator that more
capacity is needed.
300
new active devices
communicating w/TCP
• Indicates that systems using a protocol
that shouldn’t be in use have been
detected.
2
FTP servers in use
• This is a sizable deployment of DNS
servers and could indicate an
opportunity to consolidate and save
money.
53
DNS servers in use
See the Appendix for Asset Classification
dashboards
17. ExtraHop | Fast 360 Assessment
NETWORK HEALTH AND UTILIZATION – FINDINGS
INDUSTRY FACTS
• Organizations spend an average of 11% of
their IT budget on network and telecommunications.
– ESG Research
• 39% of organizations have turned off
firewall functions to improve network
performance
– Intel
• Datacenter traffic will grow 23% CAGR
between 2013 and 2018
– Cisco Global Cloud Index Survey
KEY FINDINGS FOR NETWORK
HEALTH AND UTILIZATION
• A number of servers and clients are
using IPv6 even though internal policy
does not permit it. This can cause delays
as these lookups resolve.
3.2m
IPv6 frames
• Over the observed period, there was
4.9TB sent over TCP compared with
475GB sent over UDP. This baseline
should be monitored to track growth of
custom protocols based on UDP.
4.9 TB
sent over TCP
• TCP retransmission timeouts represent
roughly 5 second delays for the user as
the client and server attempt to
complete a transaction. Servers with
high RTOs may be overloaded.
1.04m
retransmission timeouts
See the Appendix for Network Health and
Utilization dashboards
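The cumulative cost of those retransmission timeouts is simple arithmetic; a back-of-envelope sketch using the roughly 5-second stall per RTO cited above:

```python
# Back-of-envelope cost of TCP retransmission timeouts (RTOs): each RTO
# stalls a connection for roughly 5 seconds while the sender waits.
rtos = 1_040_000        # observed retransmission timeouts
stall_s = 5             # approximate delay per timeout

total_hours = rtos * stall_s / 3600
print(f"~{total_hours:,.0f} hours of cumulative user-facing delay")
```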