Eventually, every website fails. If it's a household-name site like Amazon, then news of that failure gets around faster than a rocket full of monkeys. That's because downtime hurts. As a for-instance, in 2013 Amazon suffered a 40-minute outage that allegedly cost the company $5 million in lost sales. That's a big number, and everybody loves big numbers.
But when it comes to performance-related losses, is it the biggest number?
In this presentation from the CMG Performance and Capacity 2014 conference, Radware Web Performance Expert Tammy Everts reviews real-world examples that compare the cost of site slowdowns versus outages. She also discusses how to create as much urgency around the topic of slow time as there already is around the topic of downtime.
Presented by Adrien Grand, Software Engineer, Elasticsearch
Although people usually come to Lucene and related solutions in order to make data searchable, they often realize that it can do much more for them. Indeed, its ability to handle high loads of complex queries makes Lucene a perfect fit for analytics applications and, for some use cases, even a credible replacement for a primary data store. It is important to understand the design decisions behind Lucene in order to better understand the problems it can and cannot solve. This talk will explain those design decisions, give insights into how Lucene stores data on disk, and show how it differs from traditional databases. Finally, it will highlight recent and future changes in Lucene's index file formats.
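To make the contrast with row-oriented databases concrete, here is a minimal sketch of the inverted-index idea at the heart of Lucene, in plain Python; the structures and names are illustrative and do not reflect Lucene's actual on-disk format:

```python
from collections import defaultdict

# Build an inverted index: map each term to the sorted list of ids of the
# documents that contain it (a "postings list" in search-engine terms).
def build_index(docs):
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for term in text.lower().split():
            index[term].add(doc_id)
    return {term: sorted(ids) for term, ids in index.items()}

# AND query: intersect the postings lists of all query terms.
def search(index, query):
    postings = [set(index.get(term, [])) for term in query.lower().split()]
    return sorted(set.intersection(*postings)) if postings else []

docs = ["the quick brown fox", "the lazy dog", "quick dog"]
index = build_index(docs)
print(search(index, "quick dog"))  # only doc 2 contains both terms: [2]
```

Answering a query only touches the postings lists of the query terms rather than scanning every document, which is why this layout also handles high loads of complex queries well.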
This is the first time I introduced the concept of Schema-on-Read vs Schema-on-Write to the public. It was at Berkeley EECS RAD Lab retreat Open Mic Session on May 28th, 2009 at Santa Cruz, California.
From cache to in-memory data grid. Introduction to Hazelcast. Taras Matyashovsky
This presentation:
* covers basics of caching and popular cache types
* explains evolution from simple cache to distributed, and from distributed to IMDG
* does not describe the usage of NoSQL solutions for caching
* is not intended as a product comparison or as a promotion of Hazelcast as the best solution
Espresso: LinkedIn's Distributed Data Serving Platform (Paper). Amy W. Tang
This paper, written by the LinkedIn Espresso Team, appeared at the ACM SIGMOD/PODS Conference (June 2013). To see the talk given by Swaroop Jagadish (Staff Software Engineer @ LinkedIn), go here:
http://www.slideshare.net/amywtang/li-espresso-sigmodtalk
ClickHouse Data Warehouse 101: The First Billion Rows, by Alexander Zaitsev a... Altinity Ltd
Columnar stores like ClickHouse enable users to pull insights from big data in seconds, but only if you set things up correctly. This talk will walk through how to implement a data warehouse that contains 1.3 billion rows using the famous NY Yellow Cab ride data. We'll start with basic data implementation including clustering and table definitions, then show how to load efficiently. Next, we'll discuss important features like dictionaries and materialized views, and how they improve query efficiency. We'll end by demonstrating typical queries to illustrate the kind of inferences you can draw rapidly from a well-designed data warehouse. It should be enough to get you started--the next billion rows is up to you!
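As a rough illustration of what a materialized view buys you, here is a hedged Python sketch (not ClickHouse syntax; class and field names are invented) of an aggregate maintained incrementally at insert time, so queries read a tiny pre-computed table instead of scanning 1.3 billion rows:

```python
# Incrementally maintained aggregate -- the idea behind a materialized view:
# instead of scanning all rides at query time, update (count, fare_sum) per
# pickup zone as each row is inserted.
class TripStatsView:
    def __init__(self):
        self.stats = {}  # zone -> [ride_count, total_fare]

    def insert(self, zone, fare):
        count_fare = self.stats.setdefault(zone, [0, 0.0])
        count_fare[0] += 1
        count_fare[1] += fare

    def avg_fare(self, zone):
        count, total = self.stats[zone]
        return total / count

view = TripStatsView()
for zone, fare in [("Midtown", 12.5), ("Midtown", 7.5), ("JFK", 52.0)]:
    view.insert(zone, fare)
print(view.avg_fare("Midtown"))  # (12.5 + 7.5) / 2 = 10.0
```

ClickHouse applies the same principle at scale: a materialized view's aggregation state is updated as inserts arrive, so the expensive scan happens once per row, not once per query.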
Communication between microservices is inherently unreliable. These integration points may produce cascading failures, slow responses, and service outages. We will walk through stability patterns such as timeouts, circuit breakers, and bulkheads, and discuss how they improve the stability of microservices.
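As an illustration of one of these patterns, here is a minimal circuit-breaker sketch in Python; the class and thresholds are hypothetical, not taken from any particular library:

```python
import time

# Minimal circuit breaker: after `max_failures` consecutive failures the
# circuit "opens" and calls fail fast; after `reset_timeout` seconds one
# trial call is allowed through (the "half-open" state).
class CircuitBreaker:
    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = func(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

A service client would wrap each remote call in `cb.call(...)`, so that once a downstream dependency is clearly unhealthy, callers fail fast instead of piling up slow requests and cascading the failure.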
This is the presentation I gave at JavaDay Kiev 2015 on the architecture of Apache Spark. It covers the memory model, the shuffle implementations, data frames, and some other high-level topics, and can be used as an introduction to Apache Spark.
Thrift vs Protocol Buffers vs Avro - Biased Comparison
Igor Anishchenko
Odessa Java TechTalks
Lohika - May, 2012
Let's take a step back and compare data serialization formats, of which there are plenty. What are the key differences between Apache Thrift, Google Protocol Buffers, and Apache Avro? Which is "the best"? The truth of the matter is that they are all very good, and each has its own strong points. Hence, the answer is as much a matter of personal choice as it is of understanding the historical context of each and correctly identifying your own individual requirements.
Arbitrary Stateful Aggregations using Structured Streaming in Apache Spark. Databricks
In this talk, we will introduce some of the new available APIs around stateful aggregation in Structured Streaming, namely flatMapGroupsWithState. We will show how this API can be used to power many complex real-time workflows, including stream-to-stream joins, through live demos using Databricks and Apache Kafka.
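The per-key state management that flatMapGroupsWithState exposes can be sketched in plain Python; this is an illustration of the idea, not the actual Spark API:

```python
# Per-key stateful aggregation in the style of flatMapGroupsWithState:
# for each (key, value) event, look up that key's state, update it, and
# emit zero or more output records. Here the state is a running count
# of events per user.
def stateful_counts(events):
    state = {}  # key -> running count (Spark would keep this in a state store)
    for user, _action in events:
        state[user] = state.get(user, 0) + 1
        yield (user, state[user])  # emit the updated count downstream

events = [("alice", "click"), ("bob", "click"), ("alice", "buy")]
print(list(stateful_counts(events)))
# [('alice', 1), ('bob', 1), ('alice', 2)]
```

In Structured Streaming the state survives across micro-batches and is checkpointed for fault tolerance; the user-supplied function only sees one key's state at a time, which is what makes the pattern scale.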
Designing ETL Pipelines with Structured Streaming and Delta Lake—How to Archi... Databricks
Structured Streaming has proven to be the best platform for building distributed stream processing applications. Its unified SQL/Dataset/DataFrame APIs and Spark’s built-in functions make it easy for developers to express complex computations. Delta Lake, on the other hand, is the best way to store structured data because it is an open-source storage layer that brings ACID transactions to Apache Spark and big data workloads. Together, these can make it very easy to build pipelines in many common scenarios. However, expressing the business logic is only part of the larger problem of building end-to-end streaming pipelines that interact with a complex ecosystem of storage systems and workloads. It is important for the developer to truly understand the business problem that needs to be solved. Apache Spark, being a unified analytics engine doing both batch and stream processing, often provides multiple ways to solve the same problem. So understanding the requirements carefully helps you architect a pipeline that solves your business needs in the most resource-efficient manner.
In this talk, I am going to examine a number of common streaming design patterns in the context of the following questions.
WHAT are you trying to consume? What are you trying to produce? What is the final output that the business wants? What are your throughput and latency requirements?
WHY do you really have those requirements? Would solving the requirements of the individual pipeline actually solve your end-to-end business requirements?
HOW are you going to architect the solution? And how much are you willing to pay for it?
Clarity in understanding the ‘what and why’ of any problem automatically brings much clarity on ‘how’ to architect it using Structured Streaming and, in many cases, Delta Lake.
Webinar: Deep Dive on Apache Flink State - Seth Wiesman. Ververica
Apache Flink is a world-class stateful stream processor that presents a huge variety of optional features and configuration choices to the user. Determining the optimal choice for any production environment and use case can be challenging. In this talk, we will explore and discuss the universe of Flink configuration with respect to state and state backends.
We will start with a closer look under the hood at core data structures and algorithms, to build the foundation for understanding the impact of tuning parameters and the cost-benefit trade-offs that come with certain features and options. In particular, we will focus on state backend choices (Heap vs RocksDB), tuning checkpointing (incremental checkpoints, ...), recovery (local recovery), serializers, and Apache Flink's new state migration capabilities.
Stream Processing using Apache Flink in Zalando's World of Microservices - Re... Zalando Technology
In this talk we present Zalando's microservices architecture, introduce Saiki – our next generation data integration and distribution platform on AWS and show how we employ stream processing for near-real time business intelligence.
Zalando is one of the largest online fashion retailers in Europe. In order to secure our future growth and remain competitive in this dynamic market, we are transitioning from a monolithic to a microservices architecture and from a hierarchical to an agile organization.
We first look at how business intelligence processes have worked inside Zalando over the last years and present our current approach - Saiki. It is a scalable, cloud-based data integration and distribution infrastructure that makes data from our many microservices readily available to analytical teams.
We no longer live in a world of static data sets, but are instead confronted with an endless stream of events that constantly inform us about relevant happenings from all over the enterprise. The processing of these event streams enables us to do near-real time business intelligence. In this context we have evaluated Apache Flink vs. Apache Spark in order to choose the right stream processing framework. Given our requirements, we decided to use Flink as part of our technology stack, alongside with Kafka and Elasticsearch.
With these technologies we are currently working on two use cases: a near real-time business process monitoring solution and streaming ETL.
Monitoring our business processes enables us to check whether the Zalando platform is working from a technical standpoint. It also helps us analyze data streams on the fly, e.g. order velocities and delivery velocities, and to verify service level agreements.
On the other hand, streaming ETL is used to offload our relational data warehouse, which struggles with increasingly high loads. In addition, it reduces latency and facilitates platform scalability.
Finally, we have an outlook on our future use cases, e.g. near-real time sales and price monitoring. Another aspect to be addressed is to lower the entry barrier of stream processing for our colleagues coming from a relational database background.
Traditionally, database systems were optimized either for OLAP or for OLTP workloads. Mainstream DBMSes such as Postgres and MySQL are mostly used for OLTP, while Greenplum, Vertica, ClickHouse, SparkSQL, and others are oriented toward analytic queries. But right now many companies do not want to maintain two different data stores for OLAP and OLTP, and need to perform analytic queries on the most recent data. I want to discuss which features should be added to Postgres to efficiently handle HTAP workloads.
Evening out the uneven: dealing with skew in Flink. Flink Forward
Flink Forward San Francisco 2022.
When running Flink jobs, skew is a common problem that results in wasted resources and limited scalability. In the past years, we have helped our customers and users solve various skew-related issues in their Flink jobs or clusters. In this talk, we will present the different types of skew that users often run into: data skew, key skew, event time skew, state skew, and scheduling skew, and discuss solutions for each of them. We hope this will serve as a guideline to help you reduce skew in your Flink environment.
by
Jun Qin & Karl Friedrich
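One common remedy for data and key skew, splitting a hot key across several sub-keys with a random salt and merging the partial results in a second stage, can be sketched in Python (function names and the two-stage structure are illustrative, not Flink API):

```python
import random

# Key salting: a hot key is split into `num_salts` sub-keys so its records
# spread across that many parallel subtasks; a second stage then merges the
# per-salt partial results back into one total per original key.
def salt_key(key, num_salts=4):
    return (key, random.randrange(num_salts))

def partial_counts(events, num_salts=4):
    counts = {}
    for key in events:
        salted = salt_key(key, num_salts)
        counts[salted] = counts.get(salted, 0) + 1
    return counts

def merge(partials):
    totals = {}
    for (key, _salt), count in partials.items():
        totals[key] = totals.get(key, 0) + count
    return totals

events = ["hot"] * 1000 + ["cold"] * 3
print(merge(partial_counts(events)))  # {'hot': 1000, 'cold': 3}
```

The trade-off is an extra aggregation step and more state entries in exchange for no single subtask having to process the entire hot key alone.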
Zeus: Uber’s Highly Scalable and Distributed Shuffle as a Service. Databricks
Zeus is an efficient, highly scalable, distributed shuffle-as-a-service that powers all data processing (Spark and Hive) at Uber. Uber runs some of the largest Spark and Hive clusters on top of YARN in the industry, which leads to many issues such as hardware failures (burned-out disks) and reliability and scalability challenges.
NoSQL databases are currently used in several application scenarios in place of relational databases. Several types of NoSQL databases exist. In this presentation we compare key-value, column-oriented, document-oriented, and graph databases. Using a simple case study, we evaluate the pros and cons of the NoSQL databases under consideration.
Apache Flink is a popular stream computing framework for real-time stream computing. Many stream compute algorithms require trailing data in order to compute the intended result. One example is computing the number of user logins in the last 7 days. This creates a dilemma where the results of the stream program are incomplete until the runtime of the program exceeds 7 days. The alternative is to bootstrap the program using historic data to seed the state before shifting to use real-time data.
This talk will discuss alternatives to bootstrap programs in Flink. Some alternatives rely on technologies exogenous to the stream program, such as enhancements to the pub/sub layer, that are more generally applicable to other stream compute engines. Other alternatives include enhancements to Flink source implementations. Lyft is exploring another alternative using orchestration of multiple Flink programs. The talk will cover why Lyft pursued this alternative and future directions to further enhance bootstrapping support in Flink.
Speaker
Gregory Fee, Principal Engineer, Lyft
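The trailing 7-day login count used as the example above can be sketched in Python with a deque; once the state is seeded from historic data, the same structure serves real-time events (class and method names are illustrative):

```python
from collections import deque

# Count logins in the trailing 7 days: keep login timestamps in a deque and
# evict those that have fallen out of the window before reporting the count.
WINDOW = 7 * 24 * 3600  # window length in seconds

class TrailingLoginCount:
    def __init__(self):
        self.logins = deque()  # timestamps in arrival order

    def record(self, ts):
        self.logins.append(ts)

    def count(self, now):
        while self.logins and self.logins[0] <= now - WINDOW:
            self.logins.popleft()
        return len(self.logins)

day = 24 * 3600
c = TrailingLoginCount()
for t in [0, 1 * day, 6 * day, 8 * day]:  # first three seeded from history
    c.record(t)
print(c.count(8 * day))  # logins at days 6 and 8 remain in the window: 2
```

This makes the dilemma in the abstract concrete: without bootstrapping the deque from historic data, the count is wrong until the program has run for a full window length.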
Advanced Streaming Analytics with Apache Flink and Apache Kafka, Stephan Ewen. Confluent
Flink and Kafka are popular components for building an open source stream processing infrastructure. We present how Flink integrates with Kafka to provide a platform with a unique feature set that matches the challenging requirements of advanced stream processing applications. In particular, we will dive into the following points:
Flink’s support for event-time processing, how it handles out-of-order streams, and how it can perform analytics on historical and real-time streams served from Kafka’s persistent log using the same code. We present Flink’s windowing mechanism, which supports time-, count-, and session-based windows, and the intermixing of event-time and processing-time semantics in one program.
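The essence of event-time windowing, assigning events to windows by their own timestamps rather than by arrival order, can be sketched in Python (a deliberate simplification of what Flink does; names are illustrative):

```python
from collections import defaultdict

# Assign each event to a tumbling event-time window based on its own
# timestamp, so out-of-order arrival does not change which window an
# event lands in.
def tumbling_counts(events, window_size):
    counts = defaultdict(int)
    for event_time, _value in events:
        window_start = (event_time // window_size) * window_size
        counts[window_start] += 1
    return dict(counts)

# Events arrive out of order, but are bucketed by event time, not arrival.
events = [(3, "a"), (12, "b"), (7, "c"), (11, "d")]
print(tumbling_counts(events, window_size=10))  # {0: 2, 10: 2}
```

The hard part Flink adds on top of this is deciding when a window is complete despite lateness, which is what its watermark mechanism is for.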
How Flink’s checkpointing mechanism integrates with Kafka for fault-tolerance, for consistent stateful applications with exactly-once semantics.
We will discuss "Savepoints", which allow users to save the state of a streaming program at any point in time. Together with a durable event log like Kafka, savepoints allow users to pause/resume streaming programs, go back to prior states, or switch to different versions of the program, while preserving exactly-once semantics.
We explain the techniques behind the combination of low-latency and high-throughput streaming, and how the latency/throughput trade-off can be configured.
We will give an outlook on current developments for streaming analytics, such as streaming SQL and complex event processing.
Walmart proves the obvious; devknob wonders why people still don't understand why page speed matters. This has been true, and known to be true, since the beginning of the internet. Do you think people won't get distracted and bounce when they're surfing on 2G, 3G, and even 4G connections? Page speed matters. devknob is probably the best page speed optimizer in the world, so if you need conversion optimization, you may want to visit devknob online at devknob.com
Intro to Open Source Telemetry, LinuxCon 2016. Matthew Broberg
Abstract
As part of the team delivering Snap, an open telemetry framework, I've run through dozens of use cases where gathering disparate metrics from services can roll up into meaningful diagrams for operations engineers and developers alike. We will use Snap's plugin model to collect, process and publish these measurements into meaningful graphs using open source tools. By joining this session, you can follow along and install industry-standard open source projects, deploy them and then use Snap to collect, process and visualize these metrics.
Audience
Anyone with an operations background (or one ahead of them) who wants to see the breadth of available open source tooling around telemetry. This proposal is designed for the hands-on user, who is comfortable running containers or virtual machines locally.
Experience Level
Intermediate
Benefits to the Ecosystem
By joining this session, you can follow along and install industry-standard open source projects, deploy them and then use Snap to collect, process and visualize these metrics. This empowers users within the Linux ecosystem to see their knowledge as powerful when visualized next to other layers of the datacenter.
Good design is also an act of communication between the designer and the user. Get all the important tips and techniques of user experience design here from our expert.
Mobile Web Stress: Understanding the Neurological Impact of Poor Performance. Radware
Slow pages hurt mobile user metrics, from bounce rate to online revenues and long-term user retention. At Radware, we wanted to understand the science behind this, so we engaged in the first documented study of the neurological impact of poor performance on mobile users. Your takeaway from this presentation is hard data that you can use to make a case for investing in mobile performance in your organization.
Based on similar research performed on desktop users, our study involved using a groundbreaking combination of eyetracking and electroencephalography (EEG) technologies to monitor brain wave activity in a group of mobile users who were asked to perform a series of online transactions via mobile devices.
In our study, participants were asked to complete standardized shopping tasks on four ecommerce sites while using a smartphone. We studied participants during these tasks, both at the normal speed over Wifi and also at a consistently slowed-down speed (using software that allowed us to create a 500ms network delay). The participants did not know that speed was a factor in the tests; rather, they believed that they were participating in a generic usability/brand perception study. From the data, we were able to extract measures of frustration and emotional engagement for the browsing and checkout stages of both the normal and slowed-down versions of all four sites.
This presentation, shared by Radware Web Performance Evangelist Tammy Everts at the 2014 Velocity Conference and the CMG Performance and Capacity 2014 Conference, provides a deeper understanding of the impact of performance on mobile users.
For even more on the research, you can also download it here: http://www.radware.com/mobile-eeg2013/
MeasureWorks - Why your customers don't like to wait! MeasureWorks
My presentation at the Zycko breakfast session... About why your users don't like to wait and why you should care as a site owner. This presentation covers the importance of perception of speed, navigation and how to do proper performance monitoring...
My slides from Emerce - Digital Marketing Live 2014... About why your users don't like to wait, why this is important for you as a site owner and best practices to align content and speed to create a fast user experience...
MeasureWorks - Shoppingtoday - 5 must-do's for the holiday season (MeasureWorks)
Slides from my session at Shoppingtoday 2015, a 5 step approach to get your site fast and ready for peak volumes for your customers during the holidays season...
Building Value - Understanding the TCO and ROI of Apache Kafka & Confluent (Confluent)
For a product or service to be cost effective, it must be considered to be good value, where the benefits are worth at least what is paid for them. But how do we measure this, to prove the case? Given that value can be intangible, it can be hard to quantify and may have little relationship to cost. Added to this, the open source nature of Apache Kafka means that many companies skip the requirement to build a business case for it, until it has become mission critical and demands financial and human resources.
In this presentation, Lyndon Hedderly, Team Lead of Business Value Consulting at Confluent, will cover how Confluent works with customers to measure the business value of data streaming.
Show Me the Money: Connecting Performance Engineering to Real Business Results (Correlsense)
Performance testing and optimization are often neglected parts of enterprise application roll out and upgrade initiatives.
The challenge for many IT managers is communicating the value of IT performance projects to business stakeholders who would benefit the most.
An interactive discussion with Walter Kuketz, CTO of Collaborative Consulting where he shares:
- How to align key business drivers with your performance engineering projects
- Ways to bridge the IT-business stakeholder communication gap
- A new approach to model business transactions and their IT dependencies
Host: Frank Days
Title: VP of Marketing, Correlsense
Presentation from Cars.com and Keynote Systems at Internet Retailer Conference 2011 on the business value of site speed and best practices to minimize impact of 3rd party content.
Making Your Digital Twin Come to Life (AvinashBatham)
Tredence is excited to co-innovate with Databricks to deliver the solutions required for enterprises to create digital twins from the ground up and implement them swiftly to maximize their ROI.
The Light Bulb Moment – Learning to Identify Robotic Automation Opportunities (OpenSpan)
The “Light Bulb Moment” is the point when you realize how Robotic Automation and Robotic Desktop Automation could help you boost productivity and drive superior results in the process-centric parts of the enterprise.
Blue Triangle Technologies - The Business Case For Choosing a CDN (White Paper) (Josh Carter)
Looking to evaluate your content delivery network? Thinking about switching providers? You’ve come to the right place. This white paper will show you how you can make a more informed business decision around your CDN investment, then ensure you receive the greatest ROI from your CDN choice.
Leveraging KPIs to Maximize the ROI of Support (MetricNet)
Most service and support professionals are familiar with the concept of return on investment (ROI), but very few service and support organizations measure or track it. However, support groups that understand and communicate ROI on a regular basis gain a number of important advantages, chief among which is the ability to justify investments in people and technology. In this session, attendees will learn about three critical KPIs for measuring ROI, simple calculations for ROI in service and support, techniques for communicating ROI to key stakeholders, and more!
Slides from my session for the marketing students at Windesheim College. About why performance matters to your end user, how to measure performance and what to look for when optimizing performance of your website...
State of Web Readiness: E-Commerce 2013 (Load Impact)
Website performance and stability problems directly impact the bottom line. While shoppers demand page load speeds in the milliseconds, most e-commerce sites have response times closer to 8 seconds – nearly twice the latency of other website categories studied.
This report combines the statistics of over 5000 performance tests with 500 survey responses to shed light on the importance of performance testing and the reasons e-retailers neglect to run such tests, despite the fact that increased latency almost always leads to a decrease in sales.
TierPoint white paper: How to Position Cloud ROI 2015 (sllongo3)
Traditional ROI calculators do an ineffective job of measuring the value of cloud services. This white paper serves as a guide to calculating cloud ROI using seven metrics you may not have considered.
Cyber Security Through the Eyes of the C-Suite (Infographic) (Radware)
C-level executives are grappling with a new breed of cyber-attacks. How are they responding to ransom-based threats? Why are they turning to ex-hackers for help? Radware interviewed 200 IT executives in the U.S. and U.K. to find out.
What’s the Cost of a Cyber Attack (Infographic) (Radware)
How much does a cyber-attack actually cost an organization in hard dollars? What are the potential business impacts? This infographic answers these questions and more via two surveys Radware recently conducted of IT professionals.
The enterprise perimeter is disappearing. Migration to the cloud means a more distributed network infrastructure. The transition of web-based applications to the cloud renders on-premise mitigation tools ineffective against web attacks and requires organizations to protect applications both on-premise and in the cloud.
Introducing Radware's Hybrid Cloud WAF Service - a fully managed, always-on service that integrates cloud-based with on-premise protection against a broad range of attack vectors.
Visit here http://www.radware.com/social/hybridcloudwaf/ to read "The Dawn of Hybrid Cloud WAF" and to learn how the industry's first hybrid cloud-based WAF service addresses today's most challenging web-based cyber-attacks.
The Expanding Role and Importance of Application Delivery Controllers [Research] (Radware)
When it Comes to ADCs, Perception is Not Reality.
The Enterprise Strategy Group and Radware recently conducted a collaborative research project about the current use and future strategies of application delivery controllers (ADCs).
Based on a survey of 243 IT professionals, the research reveals that the role of ADCs has expanded well beyond the historical perception of hardware-based load balancers.
What’s most interesting is that ADCs are becoming a critical component of a defense-in-depth security strategy as enterprises fine-tune security policy and enforcement to align with their sensitive business applications. Organizations are also deploying ADCs as virtual appliances at an increasing rate and taking advantage of ADC functionality from the network through the application layer.
There is a lesson to be learned here: enterprise organizations can get creative with ADC deployments for performance tuning, application-specific services, and critical system protection. Read this research http://www.radware.com/social/esg-adc-research/ to understand the benefits of applying ADCs in this fashion.
The Art of Cyber War [From Black Hat Brazil 2014] (Radware)
With cyber-attacks becoming a growing concern for organizations, availability-based attacks, also known as Denial of Service or Distributed Denial of Service attacks, have long moved from a form of cyber protest to a destructive weapon that is used by cyber criminals, hacktivists and even governments.
In 2013 we saw a growing use of a new type of attack where attackers used legitimate transactions to saturate application servers’ resources. In this presentation, Security Expert Werner Thalmeier demonstrates how such an advanced attack can be created from a laptop running in an anonymous public WiFi network. He also evaluates the attack landscape and its impact on organizations as well as shares the best practices to protect against such cyber-attacks.
Understand the current availability-based threat landscape and learn about new types of cyber-attacks that are being used to saturate resources. For more information on the state of Application and Network Security, please visit: http://www.radware.com/ert-report-2013/
The Cyber Attack landscape is evolving with new attack vectors and dangerous trends that can affect the security of your business. Some attacks can take only minutes to complete, yet months to be discovered.
Determine your attack risk and learn what to look for in a quality cyber attack defense.
Please visit here: http://www.radware.com/social/amn/ for information on Radware's AMN (Attack Mitigation Network).
An Important Notice About Shellshock Bash Protection
Since the news about “Shellshock Bash” vulnerabilities came out, we have been working around the clock to ensure our customers and partners are getting the best solutions from us. We have published this Shellshock Security Advisory, which will help protect your business with:
• Two IPS signatures that can be used by DefensePro to block the vulnerability
• Recommendations provided by Radware’s Emergency Response Team (ERT) that can be applied immediately
• Recommended reference sources and vendor information
Radware's team of cyber-security experts is available to our customers, 24/7. Contact us if you require immediate support for this vulnerability. We assure you that we continue to closely monitor the situation in order to ensure we provide you with the best cyber-attack protection mechanisms.
The Art of Cyber War: Cyber Security Strategies in a Rapidly Evolving Theatre (Radware)
Is the world in the midst of a cyber-war? If so, what are the implications?
In this presentation Carl Herberger, Radware's VP of Security Solutions, explores some of the most notable recent cyber-attacks and how many of the findings correlate with the tenets of warfare as defined in The Art of War by Sun Tzu, the ancient military general, strategist and tactician.
How should organizations be preparing for an information security landscape that is shaped by ideologically motivated cyber warfare rather than just opportunistic cyber-crime? Learn the techniques being employed to safeguard IT operations in a theatre that is witnessing ever more sophisticated attacks.
For more on how to help detect, mitigate and win this cyber war battle, visit here: http://www.radware.com/ert-report-2013/ to download the 2013 Global Application and Network Security Report.
This is your brain.
This is your brain on a mobile site with throughput throttled just enough to frustrate the heck out of you.
This is your brain thinking about all the tests you could run if you had your own lightweight, wireless EEG braincap to directly but passively monitor brain activity in your customers as they interact with your digital assets.
From the eMetrics Conference in Chicago, Radware Evangelist Tammy Everts describes a mobile web stress test conducted to gauge the impact of network speed on emotional engagement and brand perception. Neural marketing has escaped the lab and has found its way into practical applications. For even more on the web stress tests, please visit: http://www.radware.com/mobile-eeg2013/
InfoSecurity Europe 2014: The Art Of Cyber War (Radware)
With cyber-attacks becoming a growing concern for organizations, availability-based attacks, also known as Denial of Service or Distributed Denial of Service attacks, have long moved from a form of cyber protest to a destructive weapon that is used by cyber criminals, hacktivists and even governments.
In 2013 we saw a growing use of a new type of attack where attackers used legitimate transactions to saturate application servers’ resources. In this presentation, Security Expert Werner Thalmeier demonstrates how such an advanced attack can be created from a laptop running in an anonymous public WiFi network. He also evaluates the attack landscape and its impact on organizations as well as shares the best practices to protect against such cyber-attacks.
Understand the current availability-based threat landscape and learn about new types of cyber-attacks that are being used to saturate resources. For more information on the state of Application and Network Security, please visit: http://www.radware.com/ert-report-2013/
OpenStack Networking: Developing and Delivering a Commercial Solution for Load Balancing (Radware)
Why would you want to have an open source driver?
Samuel Bercovici, Radware's Director of Automation & Cloud Integration, answers this and offers an introduction to Drivers in Havana in this presentation from his recent appearance at OpenStack Israel.
Read more in our Press Release: http://www.radware.com/NewsEvents/PressReleases/Radware-Alteon-Provides-Load-Balancing-for-OpenStack-Cloud-Applications/
SecureWorld St. Louis: Survival in an Evolving Threat Landscape (Radware)
David Hobbs’ presentation from SecureWorld Expo - St. Louis discusses availability-based threats; attacks on U.S. banks and other popular attack patterns & trends.
In the Line of Fire - The Morphology of Cyber-Attacks (Radware)
Presentation from Dennis Usle during TakeDownCon in Huntsville, AL that discusses availability-based threats, attacks on U.S. banks, and other popular attack patterns & trends.
From his series of presentations during SecureWorld and also the iTech 2013 Conference, Radware Attack Mitigation Specialist David Hobbs presents “Survival in an Evolving Threat Landscape.” The discussion covers availability-based threats, attacks on U.S. banks, and other popular patterns & trends.
In the Line of Fire - The Morphology of Cyber Attacks (Radware)
Dennis Ulse's Presentation from SecureWorld Expo Atlanta that discusses Availability-based threats; Attacks on U.S. banks and other popular attack patterns and trends.
In the Line of Fire - The Morphology of Cyber Attacks (Radware)
David Hobbs’ presentation from his series of talks during SecureWorld that discusses availability-based threats, attacks on U.S. banks, and other popular attack patterns & trends.
Radware DefenseFlow - The SDN Application That Programs Networks for DoS Security (Radware)
http://www.radware.com/Products/DefenseFlow/
Learn about the industry's first SDN application that enables network operators to program the network to provide DDoS protection as a native network service.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis from the DASA Connect conference on 30 May 2024. We discuss what testing is, what agile testing is, and finally what testing in DevOps looks like. We closed with a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Solutions Apricot) (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation, however, takes real work: it takes vision, leadership and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Accelerate your Kubernetes clusters with Varnish Caching (Thijs Feryn)
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Smart TV Buyer Insights Survey 2024 by 91mobiles (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
7. “Downtime is better for a B2C web service than slowness. Slowness makes you hate using the service; with downtime, you just try again later.”
- Lenny Rachitsky
Slide 7
9. Performance affects many business KPIs every day
Revenue
Conversions/downloads
User satisfaction
User retention
Time on site
Page views
Bounce rate
Organic search traffic
Brand perception
Slide 9
26. Average revenue loss per hour of downtime: $21,000
Average revenue loss due to one hour of performance slowdown (slower than 4.4s): $4,100
Slide 26
TRAC Research/AlertSite: Online Performance Is Business Performance
However…
27. …website slowdowns occur 10 times more frequently than outages.
Slide 27
TRAC Research/AlertSite: Online Performance Is Business Performance
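Taken together, the per-hour figures on slide 26 and the 10x frequency finding on slide 27 imply that slowdowns can cost more per year than outages, even though each individual slowdown is cheaper. A minimal sketch of that comparison, using the TRAC Research/AlertSite figures; the incident counts are made-up illustration values, not from the study:

```python
# Per-hour revenue loss figures from TRAC Research/AlertSite (slides 26-27).
outage_cost_per_hour = 21_000    # avg revenue loss per hour of downtime
slowdown_cost_per_hour = 4_100   # avg revenue loss per hour slower than 4.4s

# Hypothetical incident counts: slowdowns occur ~10x more often than outages.
outages_per_year = 5             # assumed for illustration
slowdowns_per_year = outages_per_year * 10

outage_loss = outage_cost_per_hour * outages_per_year        # $105,000
slowdown_loss = slowdown_cost_per_hour * slowdowns_per_year  # $205,000

print(f"Annual outage loss:   ${outage_loss:,}")
print(f"Annual slowdown loss: ${slowdown_loss:,}")
```

Under these assumptions, the annual cost of slowdowns is roughly double the cost of outages, despite the much scarier per-incident outage number.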
29. Whether it’s a public website or an internal web-based application, most of us believe that a successful DoS/DDoS attack results in a service outage.
However, our Security Industry Survey (conducted with 198 respondents within a wide variety of global companies, most of which were not Radware customers) uncovered that the biggest impact of DoS/DDoS attacks in 2013 was service level degradation, which in most cases is felt as service slowness.
Slide 29
Radware 2013 Global Application and Network Security Report
33. Customer lifetime value (CLV)
Total dollars flowing from a customer over the entire relationship with that customer
CLV is one of the biggest predictors of retail success
Slide 33
34. Slide 34
Time on site (minutes): new visitors 2:31 vs. return visitors 5:31
Page views/visit: new visitors 3.88 vs. return visitors 5.55
Purchase intent: a return visitor is 9X more likely to make a purchase than a first-time visitor.
35. How to measure the short-term and long-term impact of slow time
42. Challenges of measuring impact of slow performance
Need actionable performance data
Need visibility into performance of third-party scripts
Need visibility into quality of experience (QoE) for users (availability PLUS speed)
Need to be able to measure the business impact of performance issues
Slide 42
44. How to calculate short-term losses due to sub-optimal performance
Step 1: Identify your cut-off performance threshold
4.4 seconds = average delay in response time when business performance starts to decline (TRAC)
Slide 44
46. Step 2: Measure TTI / AFT / Speed Index for pages in flows for typical use cases
Slide 46
47. Slide 47
Step 3: Calculate difference between threshold and actual measurement
6.6 (measured)
-4.4 (threshold)
2.2 seconds
48. Slide 48
Step 4: Pick a KPI
1-second delay =
2.1% decrease in cart size
3.5 - 7% decrease in conversions
9 - 11% decrease in page views
8% increase in bounce rate
16% decrease in customer satisfaction
49. Slide 49
Step 5: Calculate losses
3.5% decrease in conversion rate per 1-second delay
x 2.2-second slowdown______________
7.7% decrease in conversion rate
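The five steps above can be sketched end to end in a few lines. This assumes a measured TTI of 6.6 s so the gap matches the deck's 2.2-second example, and adds a hypothetical baseline revenue figure (not in the deck) so the loss lands in dollars:

```python
# Short-term loss calculation, following the deck's five steps (slides 44-49).
threshold = 4.4        # s: cut-off where business performance declines (TRAC)
measured_tti = 6.6     # s: measured Time to Interactive for the flow (assumed)

# Step 3: difference between threshold and actual measurement.
slowdown = measured_tti - threshold            # ~2.2 s

# Step 4: pick a KPI, e.g. 3.5% conversion drop per 1-second delay.
conversion_drop_per_second = 3.5
conversion_drop = conversion_drop_per_second * slowdown  # ~7.7%

# Step 5: apply to a hypothetical baseline to estimate dollars lost.
baseline_revenue = 1_000_000                   # assumed annual revenue
short_term_loss = baseline_revenue * conversion_drop / 100

print(f"{slowdown:.1f}s slowdown -> {conversion_drop:.1f}% conversion drop "
      f"-> ${short_term_loss:,.0f} lost")
```

Swap in your own measured TTI, chosen KPI, and revenue baseline to get a figure for your site.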
50. How to calculate long-term losses due to slow performance
1. Identify the percentage of converting traffic that experiences speeds slower than the 8-second poverty-line threshold.
2. Identify the current CLV for those customers (individual or aggregated).
3. Using the stat that 28% of those customers will permanently abandon pages that are unacceptably slow, identify the lost CLV.
For example…
Slide 50
51. CLV loss sample scenario
If median total value of customers over the past 3 years is $1000, then predicted future value for the next three years is $1000.
(CLV is $2000.)
Current converting user base of 10,000.
10% of those customers (1000) experience TTI of 8+ seconds.
28% of those customers (280) will not return.
CLV loss = $280,000
Slide 51
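The slide 51 scenario can be reproduced directly with the numbers given. A minimal sketch; note that only the future half of CLV is counted as lost, since past revenue has already been earned:

```python
# Long-term CLV loss scenario from slide 51.
past_value = 1_000       # median value per customer over the past 3 years ($)
future_value = 1_000     # predicted value over the next 3 years ($)
clv = past_value + future_value              # $2,000 total CLV

converting_users = 10_000
share_too_slow = 0.10    # fraction experiencing TTI of 8+ seconds
abandon_rate = 0.28      # share who permanently abandon unacceptably slow pages

slow_customers = converting_users * share_too_slow   # 1,000
lost_customers = slow_customers * abandon_rate       # 280

# Only future value is lost when a customer abandons the site.
clv_loss = lost_customers * future_value             # $280,000
print(f"Lost customers: {lost_customers:.0f}, CLV loss: ${clv_loss:,.0f}")
```

Plugging in your own customer base, slow-traffic share, and CLV gives a long-term counterpart to the short-term loss estimate.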