Democratising Security: Update Your Policies or Update Your CV - ExtraHop Networks
Security is everyone's responsibility. That’s the lesson learned as enterprises seek to improve their detection and response for cyber incidents. This session introduces a new model where InfoSec sets the policies and delegates monitoring to application teams.
Ransomware: Hard to Stop for Enterprises, Highly Profitable for Criminals - ExtraHop Networks
Ransomware attacks doubled in 2015 and the trend is sure to continue. To meet this growing threat, enterprises must gain real-time visibility into anomalous behaviour. This session explains how organisations can detect and mitigate ransomware attacks using wire data.
A concise report on the state of your environment, with prescriptive recommendations for how you can improve performance, efficiency and security in up to 14 vital IT domains.
What’s New: Splunk App for Stream and Splunk MINT - Splunk
Join us to learn what is new in the Splunk App for Stream and how it can help you use wire/network data analytics to proactively resolve application and IT operations issues and to efficiently analyze security threats in real time, across your cloud and on-premises infrastructures. Additionally, you will learn about Splunk MINT, which allows you to gain operational intelligence on the availability, performance, and usage of your mobile apps. You’ll learn how to instrument your mobile apps for operational insight, and how to build the dashboards, alerts, and searches you need to gain real-time insight into them.
Wire data provides deep insights across IT, security and business use cases by capturing the communications transmitted over the wire between machines and applications in real-time. The Splunk App for Stream enables new operational intelligence by indexing this wire data without needing instrumentation. It provides enhanced visibility, efficient cloud-ready collection, and fast time to value through interface-driven deployment. Key features include protocol decoding, attribute filtering, aggregations, and custom content extraction for analysis in Splunk.
The ExtraHop wire data analytics platform enables IT teams to answer questions they hadn't known to ask before, such as "Which SSL servers are receiving heartbeats?" and "Where are heartbeat messages coming from?"
By passively analyzing your wire data, ExtraHop provides deep visibility into HL7 messages, Citrix performance, EHR behavior, ICD-10 conversion, and more.
Proactive monitoring and remediation
Optimization and continuous improvement
Pervasive security monitoring and compliance
Clinical and operations analytics
Splunk App for Stream for Enhanced Operational Intelligence from Wire Data - Splunk
The Splunk App for Stream enables capturing and analyzing wire data from public, private, and hybrid cloud infrastructures for real-time operational insights. It delivers rapid deployment and scalability along with efficient wire data collection. The app captures critical events not found in logs to enhance operational intelligence through wire data analysis.
This document discusses how organizations can use big data and operational analytics to transform IT operations. It outlines how a data-driven approach that combines machine data and wire data can provide real-time visibility across networks, applications, databases, and other systems, overcoming the limitations of siloed, single-purpose monitoring tools. The document also covers key considerations for implementing IT big data solutions, such as data gravity, improving the signal-to-noise ratio, and understanding when data needs to be accessed in real time. It provides an example of how healthcare company McKesson used network traffic analysis to improve Citrix application performance and reduce IT costs.
Affecto Informatica World Tour 2015: The Age of Engagement - Affecto
We are moving from an era of cost optimisation and productivity to an age of engagement: engagement with customers, partners, third parties, and machines. At the center of every winning business strategy is data. Learn how to accelerate your organisation’s results by using the Intelligent Data Platform to manage data of any type, from any source, to drive better business outcomes.
A presentation by Greg Hanson, Vice President Business Operations EMEA, Informatica
PaNDA - a platform for Network Data Analytics: an overview - Cisco DevNet
A session in the DevNet Zone at Cisco Live, Berlin. PaNDA is a platform for data aggregation and distribution that can be used for data analytics applications being developed at Cisco. PaNDA was incubated in Intercloud and is now being further developed for the Virtual Managed Services (VMS) solution and other Cisco solutions. The session details why we need a platform for OSS analytics and how we tackle that need.
A Framework for Infrastructure Visibility, Analytics & Operational Intelligence - Stephen Collins
This document proposes an open framework for infrastructure visibility, analytics, and operational intelligence. It discusses the increasing scale and complexity of modern IT infrastructure and the need for a common analytics platform to overcome operational silos. The framework would provide pervasive visibility across networks and applications through instrumentation. It would utilize big data technologies like publish/subscribe pipelines, various data storage options, and multiple analytics engines. The goal is to deliver real-time operational and business intelligence through analytics-driven automation.
The ExtraHop platform is designed to turn wire data into real-time IT and business insights. It provides visibility across teams in an organization to empower them with operational intelligence. This allows organizations to transform their operations to become more efficient, proactive, and improve performance, availability, and security on-premises and in the cloud. The ExtraHop platform can do what multiple products from different vendors could previously do, but in a non-invasive, all-in-one platform.
This document discusses Splunk's data onboarding process, which provides a systematic way to ingest new data sources into Splunk. It ensures new data is instantly usable and valuable. The process involves several steps: pre-boarding to identify the data and required configurations; building index-time configurations; creating search-time configurations like extractions and lookups; developing data models; testing; and deploying the new data source. Following this process helps get new data onboarding right the first time and makes the data immediately useful.
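The index-time step of this onboarding process is typically expressed in `props.conf`. As a hedged sketch only, the settings below are standard Splunk options, but the sourcetype name and values are invented for illustration:

```ini
# props.conf -- illustrative index-time settings for a hypothetical sourcetype
[acme:app:log]
# Treat each line as its own event rather than merging multi-line blocks
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# Timestamp extraction: e.g. "[2016-03-01 12:34:56,789] ..."
TIME_PREFIX = ^\[
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
# Guard against runaway events
TRUNCATE = 10000
```

Getting these settings right during pre-boarding is what makes the data "instantly usable" once it reaches the indexers.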
Power of Splunk Search Processing Language (SPL) - Splunk
The document discusses Splunk's Search Processing Language (SPL) for searching and analyzing machine data. It provides an overview of SPL and its commands, and gives examples of how SPL can be used for tasks like searching, charting, enriching data, identifying anomalies, transactions, and custom commands. The presentation aims to showcase the power and flexibility of SPL for tasks like searching large datasets, visualizing data, combining different data sources, and extending SPL's capabilities through custom commands.
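The pipeline idiom at the heart of SPL chains a filter into an aggregation. As a rough analogy only (this is plain Python, not Splunk's engine, and the event data is invented), the widely used `search ... | stats count by host` pattern amounts to a filter followed by a grouped count:

```python
from collections import Counter

# Sample web-access events, standing in for indexed machine data.
events = [
    {"host": "web01", "status": 200},
    {"host": "web01", "status": 500},
    {"host": "web02", "status": 200},
    {"host": "web02", "status": 200},
]

# Rough Python analogue of the SPL pipeline:
#   index=web status>=500 | stats count by host
errors = [e for e in events if e["status"] >= 500]
counts = Counter(e["host"] for e in errors)
print(dict(counts))  # {'web01': 1}
```

Each SPL command consumes the previous command's result set, which is why the language composes so naturally for ad hoc exploration.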
Make Streaming IoT Analytics Work for You - Hortonworks
1) Streaming analytics platforms for IoT need to focus on ingesting data from various sources, processing data in real-time, analyzing data, responding to events, and visualizing data.
2) Key areas for building such a platform include using a common abstraction layer, minimizing latency, integrating static and real-time data using lambda architecture, scaling out linearly, enabling rapid application development, and providing data visualization.
3) As an example use case, a connected car generates large amounts of data that a streaming analytics platform can put to varied uses, such as predictive maintenance and customized driver experiences.
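The lambda-architecture point above can be sketched in a few lines: a batch view precomputed over historical data, a speed layer of recent increments, and a serving layer that merges the two at query time. This is a minimal illustration with invented names and numbers, not any vendor's implementation:

```python
# Batch layer: totals precomputed over historical data (e.g. nightly job).
batch_view = {"vehicle_42": 1200}

# Speed layer: increments from events that arrived after the last batch run.
realtime_increments = {"vehicle_42": 7, "vehicle_99": 3}

# Serving layer: merge both views at query time.
def query(key):
    return batch_view.get(key, 0) + realtime_increments.get(key, 0)

print(query("vehicle_42"))  # 1207
print(query("vehicle_99"))  # 3
```

The design trade-off is operating two code paths in exchange for results that are both complete (batch) and fresh (speed).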
The document provides an overview of new features in Splunk Enterprise 6, including more powerful analytics capabilities for both technical and non-technical users. Key updates include an intuitive pivot interface that allows drag-and-drop report building without knowledge of the search language, defined data models to represent relationships in machine data, and an analytics store that can accelerate searches and reports up to 1000 times faster than previous versions. The release also includes simplified cluster management for large enterprise deployments and enhanced developer tools.
Splunk is a powerful platform that can harness your machine data and turn it into valuable information, enabling your business to make informed decisions and taking your organization from reactive to proactive. Like any other platform, Splunk is only as powerful as the data it has access to, so in this session we will walk through how to successfully onboard data, with samples ranging from simple to complex. We will also look at how to use common TAs to bring valuable data into Splunk. This session is designed to give you a better understanding of how to onboard data into Splunk, enabling you to unlock the power of your data.
This document provides an overview of how Garmin International uses Splunk to monitor and analyze machine data. It introduces Tyler Rutschman, a Linux systems administrator at Garmin, and describes how Garmin started using Splunk in 2009 to help with Sarbanes-Oxley compliance. Splunk has provided benefits like reduced mean time to resolution, better reporting capabilities, cost savings, and improved compliance. The implementation collects up to 150 GB of data per day from sources like servers, databases, and load balancers. Future plans include indexer upgrades and adding more Garmin application data to Splunk.
Apache NiFi is a dataflow system developed at NSA that was donated to the Apache Software Foundation in 2014. It provides real-time data routing, transformation, and system mediation capabilities with an intuitive visual interface. Key features include flow-based programming, provenance tracking, security controls, and clustering support. The system aims to automate dataflows from any source to systems that analyze or store the data.
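The flow-based routing idea behind NiFi can be caricatured in a few lines of Python. This is a toy sketch of attribute-based routing only (NiFi processors such as RouteOnAttribute are configured in its visual interface, and the flowfile fields here are invented):

```python
def route(flowfile, routes, default="unmatched"):
    """Return the name of the first route whose predicate matches."""
    for name, predicate in routes.items():
        if predicate(flowfile):
            return name
    return default

# Hypothetical routing table keyed on flowfile attributes.
routes = {
    "syslog": lambda f: f["source"].startswith("udp://514"),
    "json":   lambda f: f["mime"] == "application/json",
}

print(route({"source": "udp://514/host1", "mime": "text/plain"}, routes))  # syslog
print(route({"source": "file:///tmp/x", "mime": "text/plain"}, routes))    # unmatched
```

In real NiFi, each routed flowfile also carries provenance metadata so its path through the flow can be audited end to end.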
Building a future-proof cyber security platform with Apache Metron - DataWorks Summit
QSight IT gives you insight into how we use Metron to secure our customers by continuously analyzing and monitoring users, applications, data, and networks. We show you how we implemented Metron as a replacement for our former, rule-based security platform. Since we are dealing with a non-conventional use case, serving many customers with one platform, we developed a business classification module that enables us to score threats according to each customer’s input.
To be future ready, we are extending this rule-based detection with machine learning models for web defacement, suspicious URLs, UEBA, and more to come.
In order to provide all the necessary information to the SOC analysts at a glance, we are developing a custom SOC application from where they can handle security alarms, analyze captured data, and have historical data at hand. We regard our new Metron based Security Platform as an emerging giant—a future-proof cyber security platform!
Speakers
Bas van de Lustgraaf, Big Data Engineer, QSight IT
Machiel van Tilborg, BI Engineer, QSight IT
What is Splunk? At the end of this session you’ll have a high-level understanding of the pieces that make up the Splunk Platform, how it works, and how it fits in the landscape of Big Data. You’ll see practical examples that differentiate Splunk while demonstrating how to gain quick time to value.
How USCIS Powered a Digital Transition to eProcessing with Kafka (Rob Brown &... - confluent
USCIS is modernizing its data systems to more efficiently process immigration benefit requests. It uses Kafka streaming for intake, case management, payments, scheduling and other processes. USCIS follows standards for common naming, security architecture, and reusable Kafka integration code. The agency aims to streamline forms intake, case processing, and move to microservices with domain-driven design. Security is ensured through regular vulnerability scans, asset management, and achieving an authority to operate.
This document provides an overview of data enrichment techniques in Splunk including tags, field aliases, calculated fields, event types, and lookups. It describes how tags can add context and categorize data, field aliases can simplify searches by normalizing field labels, and lookups can augment data with additional external fields. The document also discusses various data sources that Splunk can index such as network data, HTTP events, alerts, scripts, databases, and modular inputs for custom data collection.
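The lookup technique described above, augmenting events with fields from an external table, can be illustrated outside Splunk as well. This is a plain-Python analogy with invented sample data, not Splunk's lookup implementation:

```python
# External enrichment table, like a lookup CSV mapping codes to labels.
status_lookup = {200: "OK", 404: "Not Found", 500: "Server Error"}

events = [
    {"clientip": "10.0.0.5", "status": 500},
    {"clientip": "10.0.0.9", "status": 200},
]

# Analogous to: ... | lookup http_status status OUTPUT status_description
for e in events:
    e["status_description"] = status_lookup.get(e["status"], "Unknown")

print(events[0]["status_description"])  # Server Error
```

Field aliases play a complementary role: rather than adding new fields, they normalize differing field names (say, `src`, `clientip`, `source_ip`) so one search works across data sources.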
This document discusses how Staples uses Splunk to gain insights from machine data across their organization. It provides details on:
- Staples' Splunk infrastructure consisting of 8 index servers and 9 search heads that can handle 1TB of data per day.
- The key use cases of operational support, application insights, and business intelligence.
- How Splunk provides a single pane of glass for visibility across their web apps, servers, monitoring tools, and more.
- Examples of how Splunk has helped identify issues, reduced resolution times, and optimized website searches to improve the customer experience.
Starting Your DevOps Journey – Practical Tips for Ops - Dynatrace
To watch, please see:
https://info.dynatrace.com/apm_wc_getting_started_with_devops_na_registration.html
Starting Your DevOps Journey: Practical Tips for Ops
In this webinar, Andreas Grabner, Chief DevOps Activist at Dynatrace, shares practical tips that all IT groups from Dev to Ops can use to start their DevOps journey quickly. With experience from hundreds of DevOps deployments, Andi provides insights it would take your team months or years to learn firsthand.
- Learn how everyone on your Ops team can use APM to better understand and monitor SLAs, Performance and End User Impact of their applications.
- Foster better collaboration between Ops and architects by extending basic system monitoring to monolith and microservices architectures.
- Shift-left your testing and QA by working with metrics that you and the architects agreed on up front, resulting in early relevant feedback and faster code deployments.
- Hear why changing the cultural mindset from “fear of change” to “Continuous Innovation and Optimization” is critical for success.
Andi is joined by guest speaker Brian Chandler, Systems Engineer at Raymond James, who shares commonly used Ops dashboards that increase collaboration across IT teams and proactively break down silos.
This troubleshooting guide shows you how to identify and troubleshoot common web application performance problems using the ExtraHop Discovery Edition, a free virtual appliance for wire data analytics.
Top Java Performance Problems and Metrics To Check in Your Pipeline - Andreas Grabner
Why is performance important? What are the most common reasons applications don’t scale and perform well? Which technical metrics should you look at, and how can you check them automatically in the pipeline?
The document provides details of a proposed network solution for ACME Inc. that will allow 70 users to work productively from the company's 3-story office. Key aspects include:
- Implementing Active Directory, file/print services, and a company intranet to centralize management and sharing of files and communications.
- Dividing the network into subnets for different floors/departments and assigning IP addresses and devices.
- Specifying the required hardware, software, and licenses including laptops, desktops, servers, networking equipment, and applications.
- Outlining the conceptual network design with remote and on-site clients connecting through a firewall, VPN server, and other servers.
IWSM 2014: Performance measurement for cloud computing applications using ISO... - Nesma
This document discusses measuring performance of cloud computing applications using ISO 25010 standard characteristics. It presents a case study of a private cloud hosting a Microsoft Exchange application. The study collected performance log data from nodes over one week. It analyzed the data focusing on the time behavior characteristic. It calculated statistics on measures like transmission rate and created a performance index to identify peaks and valleys in system performance over time. The study demonstrated mapping measures to ISO characteristics but noted challenges in data collection, processing and representation for large cloud infrastructures.
Architecting and Tuning IIB/eXtreme Scale for Maximum Performance and Reliabi... - Prolifics
Abstract: Recent projects have stressed the "need for speed" while handling large amounts of data, with near zero downtime. An analysis of multiple environments has identified optimizations and architectures that improve both performance and reliability. The session covers data gathering and analysis, discussing everything from the network (multiple NICs, nearby catalogs, high speed Ethernet), to the latest features of extreme scale. Performance analysis helps pinpoint where time is spent (bottlenecks) and we discuss optimization techniques (MQ tuning, IIB performance best practices) as well as helpful IBM support pacs. Log Analysis pinpoints system stress points (e.g. CPU starvation) and steps on the path to near zero downtime.
Andreas Grabner maintains that most performance and scalability problems don’t need a large or long running performance test or the expertise of a performance engineering guru. Don’t let anybody tell you that performance is too hard to practice because it actually is not. You can take the initiative and find these often serious defects. Andreas analyzed and spotted the performance and scalability issues in more than 200 applications last year. He shares his performance testing approaches and explores the top problem patterns that you can learn to spot in your apps. By looking at key metrics found in log files and performance monitoring data, you will learn to identify most problems with a single functional test and a simple five-user load test. The problem patterns Andreas explains are applicable to any type of technology and platform. Try out your new skills in your current testing project and take the first step toward becoming a performance diagnostic hero.
This document discusses best practices for inter-process communication in microservices architectures. It covers various options for synchronous and asynchronous communication between services including RPC, publish/subscribe, and request/response patterns. It also discusses service discovery, load balancing, serialization formats, transport protocols, failure handling techniques like circuit breakers and bulkheads, monitoring, and debugging distributed requests across microservices.
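Failure-handling patterns like the circuit breaker mentioned above are small enough to sketch. The following is a minimal, illustrative Python version — the thresholds, timeout, and class shape are assumptions for demonstration, not an implementation from the talk.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: open after N consecutive failures,
    fail fast while open, allow a trial call after a cooldown."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            # Cooldown elapsed: fall through and allow one trial call.
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        else:
            # Any success closes the circuit and resets the failure count.
            self.failures = 0
            self.opened_at = None
            return result
```

Wrapping each remote call in `breaker.call(...)` keeps one failing dependency from tying up threads across the whole service, which is the point of the pattern.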
The packet capture from the user's PC provided some insights but did not conclusively identify the cause of the application launch issues. It showed high packet retransmission rates between the client and server. While one capture showed a "good" launch taking over 90 seconds, a "bad" launch saw a transaction terminated by the server after 245 seconds, with increasing delays between server responses. Packet timing analysis suggested a network device was buffering and releasing packets in bursts, but retransmissions and delay increases pointed to TCP retransmission timeouts. However, without additional capture points, the root cause remained unclear.
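The retransmission analysis described above can be approximated offline: within one TCP flow direction, a segment that repeats an already-seen sequence number is a likely retransmission. A minimal sketch over pre-extracted packet tuples — the tuple fields are illustrative, not taken from the original capture:

```python
from collections import defaultdict

def find_retransmissions(packets):
    """packets: iterable of (timestamp, flow, seq) tuples, where `flow`
    identifies one TCP direction (e.g. a (src_ip, src_port, dst_ip,
    dst_port) tuple). Returns packets whose (flow, seq) was seen
    earlier, i.e. likely retransmissions."""
    seen = defaultdict(set)
    retrans = []
    for ts, flow, seq in packets:
        if seq in seen[flow]:
            retrans.append((ts, flow, seq))
        else:
            seen[flow].add(seq)
    return retrans

def retransmission_rate(packets):
    packets = list(packets)
    if not packets:
        return 0.0
    return len(find_retransmissions(packets)) / len(packets)
```

A high rate from this kind of tally, combined with growing gaps between retries, is what pointed the analysis above toward TCP retransmission timeouts rather than simple buffering.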
Start Up Austin 2017: Production Preview - How to Stop Bad Things From HappeningAmazon Web Services
The document discusses key areas to review for a production readiness review:
1. Architecture design, monitoring, logging, documentation, alerting, service level agreements, expected throughput, and testing are identified as important areas to review.
2. Specific topics within each area are discussed like defining system behavior for monitoring, using consistent logging formats, and implementing canary deployments.
3. The importance of automation, understanding performance baselines, and implementing dark launches are emphasized for production readiness.
Yazid Boutejder: AWS San Francisco Startup Day, 9/7/17
Operations: Production Readiness Review – how to stop bad things from happening - There is more to deploying code than pushing the deploy button. A good practice that many companies follow is a Production Readiness Review (PRR) which is essentially a pre-flight check list before a service launches. This helps ensure new services are properly architected, monitored, secured, and more. We’ll walk through an example PRR and discuss the value of ensuring each of these is properly taken care of before your service launches.
The document summarizes a new overload control mechanism called Additive Increase and Probabilistic Change (AIPC) for SIP servers. AIPC works by synchronizing the sending rate between senders and receivers through a probabilistic change factor. When overload is detected at the receiver, it will reject incoming requests with a probability based on the current load level. The sender then adjusts its transmission rate accordingly. Simulation results showed AIPC is effective at reducing overload while maintaining reliability and fairness compared to existing mechanisms. AIPC allows servers to react faster to overload conditions while avoiding drastic throughput variations.
This document summarizes an article from the International Journal of Computer Engineering and Technology. The article proposes a new overload control mechanism called Additive Increase and Probabilistic Change (AIPC) to address overload situations for SIP servers. AIPC works by synchronizing probabilistic rate changes between the sender and receiver. When overload is detected, the receiver will decrease its transmission rate substantially, while informing the sender. The sender will then adjust its transmission strategy according to the receiver's capacity. The goal of AIPC is to reduce requests rejected by the overloaded SIP server in order to improve throughput and resource utilization. The mechanism is analyzed based on factors like effectiveness, efficiency, fairness and stability.
The document proposes a new overload control mechanism called Additive Increase and Probabilistic Change (AIPC) for SIP servers. AIPC works by having the sender probabilistically change its sending rate during overload conditions in a synchronized manner with the receiver. The mechanism aims to be effective, counter-active, reliable and fair. It analyzes factors like effectiveness, efficiency, fairness and stability through simulation. AIPC introduces probabilistic changes to the sending rate instead of the fixed increases and decreases used in traditional AIMD algorithms. This provides a more gradual adjustment of transmission rates between sender and receiver during overload situations.
The document discusses microservice architecture and data stream processing. It provides a history of these approaches and challenges they aim to address like growing application complexity and data size. Microservices are proposed as a solution, breaking applications into small, independent, communicating services. Advantages include fault tolerance, scalability, and easier development. Disadvantages include additional complexity for deployment, updates and monitoring. Examples and implementation suggestions are also provided.
This document discusses techniques for optimizing the performance of PeopleSoft applications. It covers tuning several aspects within a PeopleSoft environment, including server performance, web server performance, Tuxedo performance management, application performance, and database performance. Some key recommendations include implementing a methodology to monitor resource consumption without utilizing critical resources, ensuring load balancing strategies are sound, measuring historical patterns of server resource utilization, capturing key performance metrics for Tuxedo, and focusing on tuning high-resource consuming SQL statements and indexes.
Operations: Production Readiness Review – How to stop bad things from HappeningAmazon Web Services
The document provides an overview of key areas to review for production readiness including architecture design, monitoring, logging, documentation, alerting, service level agreements, expected throughput, testing, and deployment strategy. It summarizes best practices and considerations for each area such as using circuit breakers in monitoring, consistent logging formats, storing documentation near code, automating level 1 operations, and strategies for testing, deployments, and managing error budgets.
This presentation for Inside Analysis' Briefing Room explains the ExtraHop architecture for stream analytics. This concept enables you to mine all your wire data, which is all the data in motion in your environment.
With stream analytics for your data in motion from ExtraHop, you can confidently migrate applications to virtualized environments and manage their performance.
With insight from ExtraHop, the six-person IT team at Geel has correlated, cross-tier visibility across all applications and systems, both on-premises and in the cloud.
The IT team at Zonar is leveraging wire data from ExtraHop to streamline their own operations and ensure better performance across the infrastructure. In addition to a large-scale infrastructure mapping initiative, the team is also using wire data to troubleshoot issues from code-level errors to machines throwing millions of DNS requests.
Managed Services Provider Serves Customers Better with Wire DataExtraHop Networks
ACS Solutions GmbH (ACS) is a managed services provider, delivering hosting, application and infrastructure, and cloud computing services. Lack of visibility into Citrix performance problems meant not only unhappy customers, but failure to satisfy SLAs. Analysis of ACS' wire data delivered critical insight into performance across the entire infrastructure, including the Citrix environment.
Conga case study: Application visibility in AWS with ExtraHopExtraHop Networks
Conga is a leading Salesforce application partner. They use the ExtraHop platform to gain new insights into their application performance in AWS as well as the real-time activities of their users.
Learn more at http://www.extrahop.com.
ExtraHop provides real-time insights through wire data analytics and their Operational Excellence offering helps organizations quickly apply those insights through a tailored engagement. The engagement includes analyzing business requirements, deploying ExtraHop, integrating other systems, customizing metrics and visualizations, and providing training and documentation. Examples demonstrate how the engagement helped customers track customer account activity, monitor purchasing behavior and supply chains, monitor Citrix performance, and understand application usage.
The QuickStart Deployment from ExtraHop's Solutions Architecture team provides an expertly deployed ExtraHop solution aligned with business priorities. It ensures a fast return on investment by having ExtraHop experts handle the deployment instead of burdening internal teams. The deployment includes an initial scoping meeting, network discovery, prioritizing data collection for critical applications, implementing the baseline configuration, and creating customized dashboards aligned with business needs.
Health IT has a Big Data opportunity with HL7 analytics. Learn about what is possible from Wes Wright, CIO at Seattle Children's Hospital, and Erik Giesa, SVP of Marketing and Business Development at ExtraHop.
EMA Presentation: Driving Business Value with Continuous Operational Intellig...ExtraHop Networks
In this presentation, EMA Vice President of Research Jim Frey and ExtraHop SVP Erik Giesa explain how IT organizations can derive real-time IT and business insights from their wire data, as well as the unique capabilities included in the fourth-generation ExtraHop platform that make this continuous operational intelligence possible. For more information, visit www.extrahop.com
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Digital Marketing Trends in 2024 | Guide for Staying AheadWask
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your costs through an optimized configuration and keep them low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
A Comprehensive Guide to DeFi Development Services in 2024Intelisync
DeFi represents a paradigm shift in the financial industry. Instead of relying on traditional, centralized institutions like banks, DeFi leverages blockchain technology to create a decentralized network of financial services. This means that financial transactions can occur directly between parties, without intermediaries, using smart contracts on platforms like Ethereum.
In 2024, we are witnessing an explosion of new DeFi projects and protocols, each pushing the boundaries of what’s possible in finance.
In summary, DeFi in 2024 is not just a trend; it’s a revolution that democratizes finance, enhances security and transparency, and fosters continuous innovation. As we proceed through this presentation, we'll explore the various components and services of DeFi in detail, shedding light on how they are transforming the financial landscape.
At Intelisync, we specialize in providing comprehensive DeFi development services tailored to meet the unique needs of our clients. From smart contract development to dApp creation and security audits, we ensure that your DeFi project is built with innovation, security, and scalability in mind. Trust Intelisync to guide you through the intricate landscape of decentralized finance and unlock the full potential of blockchain technology.
Ready to take your DeFi project to the next level? Partner with Intelisync for expert DeFi development services today!
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of the presentation accompanying the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on communications and signalling systems on railways, held in the Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 online followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX model have been a hot topic in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefit it brings you. Above all, you surely want to stay within budget and save costs wherever possible. We understand that, and we want to help!
We explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some approaches that can lead to unnecessary spending, for example using a person document instead of a mail-in for shared mailboxes. We show you such cases and their solutions. And of course we explain the new licensing model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and know-how to keep on top of things. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
These topics will be covered
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
leewayhertz.com-AI in predictive maintenance Use cases technologies benefits ...alexjohnson7307
Predictive maintenance is a proactive approach that anticipates equipment failures before they happen. At the forefront of this innovative strategy is Artificial Intelligence (AI), which brings unprecedented precision and efficiency. AI in predictive maintenance is transforming industries by reducing downtime, minimizing costs, and enhancing productivity.
Overcoming the PLG Trap: Lessons from Canva's Head of Sales & Head of EMEA Da...
Atlas Services Remote Analysis Report Sample
1. This sample demonstrates the type of in-depth insight that your organization will receive from your monthly Atlas Services Remote Analysis Reports. Annotations are provided in this document that highlight the types of analysis provided.
Remote Analysis Report
Enabling Continual Service Improvement in Critical Systems
Overall Health (Aug / Sep / Oct), shown per domain: Web Application, Database, Middleware, Citrix, Storage, Supporting Application Infrastructure, Application Communication, Network
PREPARATION
Month: October 2014
Report: Sample
Prepared for: Customer
Analyst: Analyst, ExtraHop Networks
Configuration: EH8000
Firmware: 4.0
ID: XXXXX
CONFIDENTIAL 1
2. Atlas Services | Remote Analysis Report
Day 1 – Day 7
WEB APPLICATION
A review of the web application protocols including HTTP and HTTPS.
FINDINGS:
File Not Found errors (HTTP status code 404) on device1 have significantly decreased. (Trend: Resolved) ↑
Annotation: Previous finding reviews can give you confidence that performed actions are addressing the issues.
Investigate Internal Server errors (HTTP status code 500) that occurred on the AAAAA server and were associated with a single URI. Internal Server errors were not previously noted on this server. (New finding) ☀
Investigate improvements that can be made to the ZZZZZ server, which is experiencing lengthy processing time on average. Processing time on this server has become less severe since the previous analysis period. (Trend: Improvement) ↗
3. CRITICAL CONCERNS:
86.9% of HTTP responses on the AAAAA server were Internal Server errors (HTTP status code 500). Internal Server errors indicate that the HTTP server encountered an unexpected condition that prevented it from fulfilling the request.
Internal Server errors on AAAAA (indicated by the vertical red bars) appeared to correlate with the HTTP transaction rate (indicated by the green line). At peak, 3,859 Internal Server errors occurred on this device in a single hour.
100% of Internal Server errors on AAAAA occurred while attempting to access a single URI resource, xxxx.xxxxxxx/PrePayService.
Annotation: Trend graphs help determine if errors occur during acute events or if they are part of a chronic problem.
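A per-URI, per-hour error breakdown like the one above can be reproduced from ordinary access-log records once wire data has been exported. A minimal sketch — the record fields below are illustrative, not ExtraHop's data model:

```python
from collections import Counter

def error_hotspots(records, status=500):
    """records: iterable of dicts with 'status', 'uri', and 'hour' keys.
    Tallies responses with the given status per URI and per hour, so
    the dominant URI and the peak hour fall out of most_common()."""
    by_uri, by_hour = Counter(), Counter()
    for r in records:
        if r["status"] == status:
            by_uri[r["uri"]] += 1
            by_hour[r["hour"]] += 1
    return by_uri, by_hour
```

If one URI accounts for all entries in `by_uri` and one hour dominates `by_hour`, you have the same "single resource, acute peak" shape the report calls out.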
4. IMPROVEMENT OPPORTUNITIES:
Several HTTP servers are experiencing lengthy processing time on average. Notice that the ZZZZZ server accounted for 55,742 responses and experienced an average processing time of over 2 seconds.
Utilizing the ExtraHop Heatmaps feature, we see that a high concentration of transactions on ZZZZZ experienced approximately 5 seconds of processing time. A darker area on the graph below indicates a high concentration of transactions.
Note the large standard deviation tied to processing time for the xxx.xxx.xxx.xx:xxxx/EAI/OA URI. This indicates that the processing times experienced for this URI were very “dispersed” and had a large amount of variation, meaning that much larger processing times were also observed. Using these standard deviation and mean measurements, we can conclude that approximately 1,277 transactions experienced processing times of approximately 12.7 seconds.
Annotation: Heatmaps give a visual representation of processing times.
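The standard-deviation reasoning above can be made concrete: given raw processing-time samples, the "dispersed" tail can be estimated as the transactions more than k standard deviations above the mean. This sketch is an illustrative model of that reasoning, not the report's exact method, and the sample data in the usage note is invented:

```python
import statistics

def slow_tail_estimate(samples, k=1.0):
    """Estimate the dispersed tail of processing times: transactions
    more than k standard deviations above the mean, plus their
    typical latency."""
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)
    cutoff = mu + k * sigma
    tail = [s for s in samples if s > cutoff]
    return {
        "mean": mu,
        "stdev": sigma,
        "cutoff": cutoff,
        "tail_count": len(tail),
        "tail_mean": statistics.fmean(tail) if tail else None,
    }
```

Run against the full sample set for a URI, `tail_count` and `tail_mean` play the role of the "approximately 1,277 transactions at approximately 12.7 seconds" style of conclusion drawn above.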
5. DATABASE
A review of all parsed database protocol traffic, regardless of the type of database. Protocols include (if licensed): TNS (Oracle), TDS (MS SQL), DB2, Informix, Sybase, PostgreSQL, and MySQL.
FINDINGS:
Investigate database errors on the BBBBB server that occurred constantly; these errors were related to failed logins for the ZZZ_ZZZZZ database. (New finding) ☀
CRITICAL CONCERNS:
None noted.
IMPROVEMENT OPPORTUNITIES:
1.0% of all database responses were errors.
Annotation: Percentage calculations allow for quick determination of the relative impact of findings.
93.3% of all database errors were concentrated on the BBBBB server. Also note that errors amounted to approximately 200% of the responses from this server, indicating that each response sent from this server resulted in two errors.
6. Error rate on this server (indicated below by the red vertical bars) stayed in excess of 700 errors per hour for a majority of the observation period.
100% of database errors from BBBBB were returned to the YYYYYY client.
Additionally, 100% of database errors on BBBBB had one of two messages. The messages of these errors suggest that 100% of errors on BBBBB result from the YYYYYY client attempting to log on to BBBBB and open a ZZZ_ZZZZZ database. 100% of these login and open attempts are failing. Investigate scheduled tasks that may be causing these errors.
Also worth noting are the processing times observed on this database server. While a majority of transactions were non-concerning (75% of all database transactions took, at most, 3 milliseconds of processing time), note that database transactions on BBBBB experienced as much as a minute of processing time.
Annotation: Plotting transactions against errors provides insight into the behavior of error generation.
7. The ExtraHop Heatmaps feature reveals that a “concentration” of transactions experienced around 3 seconds (3,000 milliseconds) of processing time. A darker area on the graph below indicates a higher concentration of transactions, so while a large volume of transactions experienced less than 400 milliseconds of processing time, it may be worth researching what is causing some of the previously discussed failed logins to experience such lengthy processing times.
MIDDLEWARE
A review of all parsed middleware protocol traffic (if licensed): FTP, MQSeries, and Memcache.
FINDINGS:
Investigate FTP errors that occurred on the CCCCC server and appear to correlate with SITE method calls. The overall volume of FTP errors has decreased since the previous analysis period. (Trend: Improvement) ↗
CRITICAL CONCERNS:
16.8% of FTP responses resulted in an error. This is a decrease from the 25.4% FTP error rate noted
in the previous report.
38.4% of FTP errors originated on the CCCCC server.
Spikes in both FTP error rate (indicated by the vertical red bars) and transaction rate (indicated by the green line) on CCCCC occurred at the same time each day. The nightly spike is highly suggestive of an automated FTP process that is broken or otherwise misconfigured.
100% of FTP errors outbound from CCCCC were returned to a single client IP
(xxx.xxx.xxx.xxx).
100% of FTP errors on CCCCC affected the XXX_XXX user.
FTP errors on CCCCC had two error messages. The messages are available below.
Further analysis of FTP errors suggests a relationship between FTP 500 errors and the use of the FTP SITE method. An FTP 500 error indicates a syntax error: the server did not recognize the command and therefore could not carry it out.
Looking at the busiest FTP server (CCCCC), we see an almost 1:1 relationship between the
use of the SITE method and FTP error code 500.
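The "almost 1:1 relationship" can be checked by pairing hourly counts of SITE method calls with hourly counts of 500 responses and examining their ratio. A sketch with hypothetical hourly counts:

```python
# Sketch: checking the near-1:1 relationship between SITE method calls and
# FTP 500 errors per hour. All counts are hypothetical placeholders.
site_calls_per_hour = [120, 118, 125, 119, 122]
ftp_500_per_hour = [119, 118, 124, 119, 121]

ratios = [err / site for err, site in zip(ftp_500_per_hour, site_calls_per_hour)]
avg_ratio = sum(ratios) / len(ratios)

print(f"average 500-errors-per-SITE-call ratio: {avg_ratio:.2f}")
```

A ratio that hovers near 1.0 hour after hour, as here, suggests the SITE calls themselves are what the server is rejecting.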
Time trending errors can also help uncover other correlations.
IMPROVEMENT OPPORTUNITIES:
Not evaluated.
CITRIX
A review of Citrix performance.
FINDINGS:
Citrix analysis can help spot poor application performance unrelated to the Citrix ICA protocol.
Investigate lengthy session load times on the DDDDD device that primarily affected two clients and were related to a single application. Citrix load times have slightly decreased since the previous observation period. (Trend: Improvement)
CRITICAL CONCERNS:
Several ICA servers are experiencing lengthy load times in excess of 40 seconds per session launch. Lengthy load times delay the start of an ICA session and cause latency in overall application processing. A high number of ICA session launches transiting the DDDDD device had long load times.
Drilling into DDDDD, we can see that session launches transiting two Cisco devices are primarily
affecting two clients: FFFFF and GGGGGG.
The #MMMMMM application was most impacted by lengthy load times; investigate transactions for this application that may be affected.
IMPROVEMENT OPPORTUNITIES:
Not evaluated.
STORAGE
A review of all parsed storage protocol traffic. Protocols include (if licensed): CIFS, NFS, and iSCSI.
FINDINGS:
Investigate STATUS_ACCESS_DENIED CIFS errors that transited the NNNNN device and appeared to have originated at yy.yy.yy.yy. The volume of CIFS errors significantly increased since the previous observation period. (Trend: Worse) ↓
CRITICAL CONCERNS:
49.6% of CIFS responses were errors. The severity of CIFS errors ranges widely from informational to severe. High volumes of errors should be investigated to determine whether action is required to fix them or whether changes can be made to reduce unnecessary processing time.
70.7% of CIFS errors transited the NNNNN device.
CIFS errors on NNNNN were returned to 118 client IPs.
Looking client-side at some of the top contributors of CIFS errors on the NNNNN device, it
appears that a large portion of CIFS errors that transited NNNNN originated on SSSSS at
yy.yy.yy.yy.
The majority of CIFS errors on NNNNN have variations of STATUS_ACCESS_DENIED error
messages.
CIFS error rate (indicated by the vertical red bars) on NNNNN directly correlates with
transaction rate (indicated by the green line). Investigate transactions that may be impacted
by these CIFS errors. At peak, this device experienced 1,049,331 errors over the course of a
single hour, or more than 291 errors every second. Note that this server was only active
for four days during the observation period.
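The per-second figure quoted above is a direct unit conversion of the peak hourly count:

```python
# Sketch: converting the peak hourly CIFS error count into a per-second rate.
peak_errors_per_hour = 1_049_331
errors_per_second = peak_errors_per_hour / 3600  # seconds in an hour

print(f"more than {int(errors_per_second)} errors every second")
```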
IMPROVEMENT OPPORTUNITIES:
Not evaluated.
DNS analysis spots problems contributing to overall latency that can often be fixed with minimal effort.
SUPPORTING APPLICATION INFRASTRUCTURE
A review of protocol traffic related to supporting application infrastructure, including DNS, SSL, SMTP, and LDAP.
FINDINGS:
Investigate the high volume of DNS response errors concentrated on the HHHHH device that were related to reverse IP lookups. (New finding) ☀
Investigate excessive use of the ANY method by the PPPPP server; a significant volume of ANY method calls originated in Australia. The volume of ANY method calls has slightly decreased since the previous analysis period. (Trend: Improvement) ↗
CRITICAL CONCERNS:
91.4% of all DNS responses were errors. A DNS response error occurs when a client makes a DNS lookup and the DNS server responds with an error. These errors may not break an application, but they add latency to application transactions and cause unnecessary processing on the DNS server.
48.6% of DNS response errors originated on the HHHHH device. Note that 99.5% of requests
made to this device result in a DNS response error.
The DNS response error rate (indicated by the vertical red bars) on HHHHH directly correlates
with transaction rate (indicated by the green line). Investigate transactions that may be
impacted by DNS response errors.
Nearly 100% of DNS response errors outbound from HHHHH were returned to LLLLL via a
Cisco device.
DNS response errors outbound from HHHHH are related to a number of reverse IP lookups. Note that these queries fail nearly 100% of the time they are made.
Over 15,500,000 instances of the DNS "ANY" method occurred during the observation period. This is a decrease from the volume of ANY method requests noted in the previous report; however, it is still a concerning volume. Use of the ANY method returns all known information about a DNS zone in a single request and is usually indicative of a DNS Amplification Attack. More information is available here: http://www.us-cert.gov/ncas/alerts/TA13-088A.
86.3% of ANY method calls occurred on the PPPPP DNS server at xx.yy.zz.aa.
The following Geomap identifies the physical location of IPs that sent ANY requests to the
server at xx.yy.zz.aa. A denser dot indicates a higher volume of transactions. Note that
the AAA.BB.XXX.ZZ IP located in Canberra, Australia accounts for a large portion of these
ANY method requests; this may be related to malicious activity.
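The concentration of ANY queries on a few clients, which the Geomap shows geographically, can also be derived directly from parsed query records. A sketch over hypothetical (client IP, query type) tuples, with counts chosen purely for illustration:

```python
# Sketch: flagging clients that issue a disproportionate share of DNS ANY
# queries, a pattern consistent with amplification abuse. The query records
# below are hypothetical placeholders.
from collections import Counter

queries = (
    [("AAA.BB.XXX.ZZ", "ANY")] * 9_000
    + [("10.0.0.5", "A")] * 4_000
    + [("10.0.0.6", "ANY")] * 1_000
)

any_by_client = Counter(ip for ip, qtype in queries if qtype == "ANY")
total_any = sum(any_by_client.values())

for ip, count in any_by_client.most_common(1):
    print(f"{ip} sent {100 * count / total_any:.0f}% of ANY queries")
```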
Geomaps allow for a geographical visualization of devices communicating on your network.
IMPROVEMENT OPPORTUNITIES:
Not evaluated.
APPLICATION COMMUNICATION
FINDINGS:
TCP analysis provides insight into a commonly overlooked region, where the network meets the application.
Investigate Zero Windows that occurred on the RRRR device. Zero Windows occurred in spikes; these spikes have become much more severe since the previous observation period. (Trend: Worse) ↓
CRITICAL CONCERNS:
More than 77,000,000 Zero Windows were observed on the XXXXXXX network over the course of the
seven-day observation period. A Zero Window indicates that the connection between two devices has
stalled and that the device sending the Zero Window is unable to keep up with the rate of data that a
peer is sending. In effect, the device sending the Zero Window is saying, “send no data until further
notice.” 52.4% of Zero Windows were outbound from the RRRR device.
At peak, 4,620,000 Zero Windows were sent from RRRR over the course of a single hour, or more
than 1,283 Zero Windows sent each second.
60.5% of Zero Windows outbound from RRRR were sent to the TTTTT device.
100% of Zero Windows sent from RRRR were related to the CIFS protocol.
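The Zero Window concentration above can be computed from parsed TCP segments: a Zero Window is simply a segment whose advertised receive window is 0. A sketch over hypothetical (sender, window size) records, with counts chosen to mirror the 52.4% figure:

```python
# Sketch: finding the top Zero Window senders from parsed TCP segment
# records. A Zero Window is a segment with an advertised receive window
# of 0. The records below are hypothetical placeholders.
from collections import Counter

segments = (
    [("RRRR", 0)] * 524 + [("TTTTT", 0)] * 300 + [("other", 0)] * 176
    + [("RRRR", 65_535)] * 2_000  # healthy segments with a non-zero window
)

zero_windows = Counter(host for host, window in segments if window == 0)
total_zero = sum(zero_windows.values())

top, count = zero_windows.most_common(1)[0]
print(f"{top} sent {100 * count / total_zero:.1f}% of Zero Windows")
```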
IMPROVEMENT OPPORTUNITIES:
Not evaluated.
Tying TCP metrics to an L7 protocol can help diagnose underlying communication problems.
NETWORK
FINDINGS:
Investigate the high volume of IP fragments outbound from the UUUUU device. Outbound IP fragments were not previously noted on this device. (New finding) ☀
CRITICAL CONCERNS:
More than 29,300,000 IP fragments were sent onto the XXXXXXX network over the course of the seven-day observation period. IP fragmentation may be caused by an MTU mismatch between devices on the network. This results in high volumes of segments being sent across the network, which can overwhelm both the network and the devices on it.
44.4% of IP fragments were outbound from the UUUUU device at aa.bbb.ccc.dd.
100% of IP fragments from UUUUU were sent to uu.xx.yy.zz via broadcast traffic on UDP port 8156.
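To see why an MTU mismatch multiplies segment volume, consider how a single datagram larger than a link's MTU splits. A simplified sketch (IPv4 with a 20-byte header; fragment offsets must also be multiples of 8 bytes, which 1480-byte payloads already satisfy):

```python
# Sketch: how many IP fragments a datagram produces when it crosses a link
# with a smaller MTU. Simplified model: fixed 20-byte IPv4 header, no options.
import math

def fragment_count(datagram_bytes: int, mtu: int, ip_header: int = 20) -> int:
    usable = mtu - ip_header                 # payload capacity per fragment
    payload = datagram_bytes - ip_header     # payload to be split
    return math.ceil(payload / usable)

# A 9000-byte jumbo-frame datagram crossing a standard 1500-byte MTU link:
print(fragment_count(9_000, 1_500))  # → 7
```

Each oversized datagram thus arrives as several segments, which is how tens of millions of fragments can accumulate over a week.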