View full webinar on demand at http://bit.ly/nginxbenchmarking
Whether you’re doing performance testing or planning for infrastructure needs, benchmarking can be a big deal. Join us for this webinar where we cover NGINX benchmarking best practices, including:
- the test environment
- configuring NGINX
- using benchmarking tools
- and more!
You’ll learn how to approach benchmarking so that you obtain results that are more accurate, easier to interpret, and better matched to the needs of your project.
2. About this webinar
When you want to know how many resources to allocate for your NGINX servers, or what the capacity of your current NGINX servers is, you need to be able to perform proper benchmark testing, but this can be complicated. In this webinar, you'll learn the considerations that need to go into planning, configuring, and running benchmarks.
3. Agenda
• Introduction
• Common Pitfalls
• Tips and Techniques
• Demonstration
• Questions
5. What is NGINX?
• Proxy – caching, load balancing of HTTP traffic
• Web server – serves content from disk
• Application server – FastCGI, uWSGI, Passenger…
• Application acceleration – SSL and SPDY termination
• Performance monitoring
• High availability
• Advanced features – bandwidth management, content-based routing, request manipulation, response rewriting, authentication, video delivery, mail proxy, geolocation
6. Why do Benchmarking?
• Stress test
• Capacity planning
• Comparison testing (bake off)
7. Benchmarking Considerations
• It’s complicated
• What is the goal?
• What kind of test environment do you have?
• What testing tools do you have?
• How well can you simulate production traffic?
8. What areas are you testing?
• Web server
• Application Server
• Reverse Proxy
• All of the above
9. What are you testing?
• Can you simulate production traffic?
• If not, what are you concerned about:
– Connections
– Request rate
– Bandwidth
– SSL
– All of the Above
10. What are you testing?
• Can you do a full production test?
• If not, do a smaller scale test and extrapolate
• Know your traffic
– GET vs. POST, request/response sizes, etc.
• What you need to test vs. what you are actually testing
11. What are you testing?
• You may want to test a single variable
– Connections
– Request rate
– Bandwidth
– New SSL handshakes
12. Test Environment
• You are always testing the whole environment
– Testing Tools
• Load generators
• Web Servers
– Systems under Test (SUT)
• Reverse proxies
• Web servers
13. Test Environment
• Good rule of thumb for cores needed:
– Load Generators: 2N
– Reverse Proxies: N
– Web Servers: 2N
15. Not Knowing What You Are Testing
• Know your testing tools
• What question does a test answer?
– For example:
• How many requests per second can a SUT handle?
• Can a SUT handle a certain number of requests per second?
16. Real Clients versus Synthetic Clients
                        Real Clients    Synthetic Clients
Latency                 Low-High        Low
Packet Loss             Low-High        Low
Bandwidth               Low-High        High
Time between requests   Long            Short
Idle connections        Yes             No
17. Unrealistic Synthetic Clients
• Misleading results
– System looks good during benchmark
– System has problems with real clients
• Why is this?
– Synthetic clients are ideal for the server
• Low latency, low packet loss, busy connections
– Real clients are not
• High latency, packet loss, idle connections
19. Misconfiguration
• Many configuration settings can impact tests
– Some Linux kernel settings may be too low for heavy loads
– Keepalives
– SSL key sizes make a big difference
– Compression
– These settings matter on both the benchmark clients and the servers under test
20. Tips and Techniques
• Use multiple approaches
– Real world simulations for real world results
– Simple tests for baselines and debugging
21. Tips and Techniques
• If you have found the real limit of a SUT then:
– At least one system resource should be exhausted
• CPU
• Memory
• Bandwidth
• Disk I/O
– If not, then a bottleneck exists elsewhere (see the sketch below for a quick way to check)
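As a rough illustration, standard Linux tools can confirm whether a resource on the SUT is actually saturated during a run; the intervals and tool choices below are only examples (iostat and sar come from the sysstat package):

  top -b -n 1 | head -n 20     # snapshot of CPU and memory usage
  vmstat 5                     # run queue, memory, and CPU every 5 seconds
  iostat -x 5                  # per-device disk utilization and wait times
  sar -n DEV 5                 # per-interface network throughput
  ss -s                        # socket counts, useful for connection-heavy tests

If none of these show a saturated resource, look for the bottleneck in the load generators, the web servers, or the network path.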
22. Tips and Techniques
• Start with the NGINX defaults
• NGINX directives can impact performance
– accept_mutex, worker_processes, worker_connections, keepalive_timeout, lingering_close, sendfile, keepalive, aio, open_file_cache (a sample configuration is sketched below)
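A minimal sketch of how the directives listed above might appear in nginx.conf; the values are illustrative starting points only, not recommendations, and should be changed one at a time against a baseline:

  worker_processes auto;                      # typically one worker per CPU core
  events {
      worker_connections 4096;                # connections per worker
      accept_mutex off;                       # measure both settings under high connection rates
  }
  http {
      sendfile on;                            # kernel-side transmission of static files
      keepalive_timeout 65;                   # client-side keepalive
      open_file_cache max=10000 inactive=30s; # cache descriptors and metadata for hot files
      # aio on;                               # asynchronous disk I/O, mainly for large files
      # lingering_close on;                   # default; controls how client connections are closed
  }

The keepalive directive belongs in an upstream block and controls idle connections kept open to backend servers.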
23. Tips and Techniques
• Some 10G drivers don’t use cores effectively
– Some cores at 100%, others have low usage
– Solution: driver dependent
• Scripting
• New driver version
• New Card
24. Tips and Techniques
• Monitor error logs and errors reported by the testing tools (see the sketch after this list)
– Error responses can return faster than real ones and inflate the results
– You may be hitting system limits or errors
• Don’t run load generation on a SUT
• Double check the result figures
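For example, on an NGINX SUT you might watch the error log and the distribution of response codes while a test runs; a sketch assuming the default log locations and the combined log format (adjust paths and field positions to your setup):

  tail -f /var/log/nginx/error.log            # watch for upstream, limit, and resource errors live
  awk '{print $9}' /var/log/nginx/access.log | sort | uniq -c | sort -rn   # count responses by status code

A burst of 4xx/5xx responses usually means the reported requests-per-second figure is not measuring what you intended.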
25. Tips and Techniques
• Run all tests multiple times
• If using virtualization, be aware of other loads on the host
• If you don’t understand the results - simplify
28. Closing thoughts
• 38% of the busiest websites use NGINX
• Check out the previous webinar on tuning at nginx.com
• Future webinars: nginx.com/webinars
Try NGINX F/OSS (nginx.org) or NGINX Plus (nginx.com)
Editor's Notes
Thank you, Owen. Benchmarking can be a very important component of any project. You may be trying to determine how many servers and what size of servers you will need, or you may want to discover the limit of a particular system. Benchmarking can be complicated and confusing, and to be successful you need to do proper planning and you need to understand the environment, the tools, and the tests you will be using. In this webinar we will discuss many of the things you need to consider when doing benchmarking.
We will start with an introduction and go over some of the general things to consider when doing benchmarking, then we will move on to some of the common pitfalls to watch out for during your testing, then we will cover some tips and techniques, and we will finish with a demonstration.
So let’s get to it.
This webinar focuses on benchmarking in general, but NGINX can often be the answer when your benchmark tests show that you need to increase the performance of your system. NGINX is a high-performance web server for static content, including video, and also a full reverse proxy or application delivery controller with many advanced features. For dynamic content it can connect to application servers over HTTP, FastCGI, uWSGI, Passenger, and other methods.
There are many reasons you might be doing benchmarking. You may want to do a stress test to find the limit of a particular system or systems. Or you may want to do capacity planning, where you have a particular performance level in mind and you want to find the environment that will satisfy that need. These two goals can be related, in that to do capacity planning you may stress test a particular configuration and then extrapolate the results of that test to figure out what system would be required to meet your capacity goal. You may also be testing to do a comparison between different solutions, for example different reverse proxy servers. And a key question is: are you trying to simulate real-world traffic from real clients? When testing a real application it is normally desirable to find out how it will perform in the real world, but there can also be value in doing more controlled testing. With your goals in mind you can design your tests to meet them.
When it comes to benchmarking, I’m not going to sugarcoat it: doing proper benchmarking is complicated. It takes planning and you need to know what you are doing. There are many things that are important to consider.
What is the goal of the testing? It is not good enough to say that you want to test the performance of your system. Are you trying to find the limit of a particular configuration? Are you trying to find a configuration that will meet a certain level of performance? Are you interested in the number of simultaneous users the system can support, or the amount of bandwidth it can send to clients, etc.?
What test environment do you have? How close is it to production?
What tools do you have for doing the testing? The tools used can make a huge difference.
Can you simulate production traffic? The closer you can get to testing with production-like traffic, the closer your results will be to matching the real world.
What areas of your system do you want to test? For this discussion we will limit ourselves to web servers, application servers, and reverse proxies. The more layers of your system you want to test, the more complicated the testing will be. If you are having issues with testing multiple layers, you can test each layer individually.
What are you actually going to test? You aren’t going to want to test on an actual production system, so what can you test? Are you able to simulate production traffic? If so, that is great, because then your test will be more real-world, but I have dealt with many organizations that are unable to simulate production traffic. In that case it is often useful to be able to pinpoint what particular area of performance you are concerned about based on your application. For example, are concurrent connections what you are worried about, or is it the request rate, or the amount of bandwidth the system can handle, or a combination of factors?
Do you have an environment that can match production? If you are trying to find out what size and quantity of systems you will need for production, can you scale your tests until you find the answer? Often this is not possible, so you can do smaller-scale testing and then extrapolate the results. One of the nice things about NGINX is that it scales in a linear fashion, allowing you to extrapolate test results. If you don’t have the ability to capture live traffic and use it as input to your benchmarking tests and are going to try to simulate this input, then it is important to understand the nature of your live traffic. For example, what is the percentage of GETs to PUTs? What are the minimum, maximum, and average request and response sizes? Are you using SSL? And be sure that you are testing what you think you are testing. If tests are not designed correctly or understood, then you may find that the test isn’t doing what you thought it was.
If you are unable to simulate production traffic, or if you want to get a basic set of metrics, you can do simplified testing where you are looking for individual limits. For example, if you want to know how many connections per second a system can handle, in isolation from other factors, you can do a test where you request a zero-byte file with keepalives disabled. That way all the system is doing is connection handling. For the request rate, you can request files of different sizes and see what the request rate is for each file size. For bandwidth, you can request a large file (1 MB is usually enough).
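A sketch of what those single-variable tests might look like with ApacheBench (ab) and wrk; the host name and file names are placeholders, and the request counts, concurrency, and durations are only examples:

  # Connections per second: tiny file, keepalives off (ab's default)
  ab -n 100000 -c 200 http://sut.example.com/zero.bin

  # Request rate: repeat with different file sizes, keepalives on (-k)
  ab -n 100000 -c 200 -k http://sut.example.com/10kb.bin

  # Bandwidth: large file over a sustained period
  wrk -t 8 -c 200 -d 60s http://sut.example.com/1mb.bin

Run each test several times and watch the SUT’s CPU, memory, and network while it runs so you know which resource is limiting the result.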
Earlier I said that you need to understand what areas of your system you are going to test, but in reality you are always testing the entire environment. So in a test where you have load generators to a reverse proxy to web servers, even if the system you are really looking to test is the reverse proxy, you are also testing the load generators and the web servers, as well as switches and anything else in the network path. So this means that you must know which systems are the focus of your testing but you must also realize that the entire test environment is part of the test. If we limit ourselves to load generators, reverse proxies and web servers, then we can always look at the load generators as testing tools, and if we are testing a reverse proxy, then the web servers are also testing tools in this case.
When testing a reverse proxy, a good rule of thumb is that you will need twice as much computing power for the load generators and web servers as you need for the reverse proxy. This means that benchmarking can be resource intensive, but if you don’t allocate enough resources for the load generators or web servers, then they will become the bottleneck and you will not be able to adequately stress the reverse proxy.
Now let’s start to talk about some common issues you may run into while doing your testing and how to avoid them.
It is very important that you know how your testing tools work. The more full-featured a tool is and the better it can simulate real clients, the more complex it is. Some of the commercial products are very sophisticated but also hard to learn and understand. I have dealt with customers using these tools who did not really understand how the tests worked and so didn’t really understand what they were testing. For example, I worked with a customer who was trying to find the maximum number of requests per second a system could handle, but they set up the test so that the tool had a goal of a certain number of requests per second. When they set this goal higher than the maximum the SUT could handle, the SUT would start to slow down as it reached its maximum, but since the tool was told to drive a certain number of requests per second, it sent even more requests as the SUT slowed down, overloading it further, and in the end the SUT showed a requests-per-second rate far lower than its actual maximum. This is just one example, but the bottom line is that if you don’t really understand how a test is being run, you can’t interpret the results.
One of the biggest issues when testing a reverse proxy or a web server is the type of clients used in the test. By their nature, during a benchmarking test the clients will be synthetic, in that the client load is generated by a tool rather than a real user. Synthetic clients, in their simplest and least sophisticated form, are very unlike real clients, and this table highlights some of these differences. By a simple synthetic client I am referring to one that is co-located with the SUT and doesn’t do anything to simulate real client behavior, so it sends requests as fast as it can. Real clients, if they are remote, will have higher latency, increased packet loss, and lower bandwidth compared to a simple synthetic client. And real clients can have long gaps between requests. Think of a user on a browser who downloads a web page, which causes a number of requests, but then stops to read the page for several seconds before clicking a link that causes a new set of requests. A simple synthetic client will send requests continuously. This means that the connections from real clients will often be idle, while the connections from simple synthetic clients will never be idle.
These simple synthetic clients are unrealistic if you are trying to see how your system will perform against real-world traffic. What you often see with such tools is that the system shows one level of performance during the benchmark, but when it is rolled into production and faced with real clients, performance is much lower. The reason is that these synthetic clients behave in a way that is ideal for servers. From the server’s perspective they are very well behaved: they have very low latency and packet loss and they don’t leave connections idle, the opposite of real clients. That is not to say that simple tools don’t have value. The good thing about more realistic testing is that it gives you results that should better match what you will see in production, but by its nature it is more complex and adds more variables. So it is often valuable to use both types of tools. The simple tools are good for getting initial baselines and for debugging your tests when you get results you don’t understand. They are also good if your goal is not to simulate real traffic, but rather to find a set of limits for a system in isolation. This is common for vendor testing, where you want to find individual limits, such as how many connections or requests, or how much bandwidth, a system can handle. This is the single-variable testing I discussed earlier.
There are many load generation tools available, both open source and commercial. Here are just a few examples.
One of the most well-known simple testing tools is Apache Bench (ab). Some open source tools that do offer some level of real client simulation are Siege and wrk. There are many commercial products available, such as Spirent, Ixia, and Cloudtest, and there are other cloud-based load testing services.
Another thing you can do to help simulate real clients is to add latency and packet loss to your tests. Latency can be added simply using the Linux tc command, and both latency and packet loss can be added using a WAN simulator.
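For example, something along these lines adds artificial delay and loss on a Linux load generator; the interface name and the values are just placeholders to adjust for your environment:
    # Add 20ms of delay and 0.5% packet loss on eth0
    tc qdisc add dev eth0 root netem delay 20ms loss 0.5%
    # Remove the rule when the test is done
    tc qdisc del dev eth0 root netem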
It is important to note that properly simulating real clients is essential for real-world results, but when doing the simplified testing I mentioned earlier, this is not important, since for those tests we are trying to remove variables, such as the complexity of real clients.
Another thing that can lead to inaccurate results is the misconfiguration of one or more of the machines involved in the testing.
For example, you may need to do some Linux kernel tuning on the load generators and the SUT to avoid hitting an OS limit. Examples of settings that may need to be increased are the number of file descriptors, the ephemeral port range, the number of TIME_WAIT buckets, and the size of the input queue.
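As a rough sketch, adjustments along these lines are often made for load testing; the values shown are placeholders, not recommendations, and should be sized for your environment:
    # Raise the open file descriptor limit for the shell running the test
    ulimit -n 65536
    # Widen the ephemeral port range
    sysctl -w net.ipv4.ip_local_port_range="1024 65000"
    # Raise the number of TIME_WAIT buckets
    sysctl -w net.ipv4.tcp_max_tw_buckets=2000000
    # Increase the network input queue
    sysctl -w net.core.netdev_max_backlog=65536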
Keepalive connections can have a large impact on performance. Using keepalives removes processing workload from both clients and servers, so it is important to know where they can be used. If you are testing a reverse proxy, then it has both client-side and server-side connections, and keepalives can be configured independently on each side.
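With NGINX as the reverse proxy, for instance, the two sides are controlled separately; here is a minimal sketch (inside the http context), where the upstream name, address, and values are just examples:
    upstream backend {
        server 10.0.0.10:80;
        keepalive 32;                 # keep up to 32 idle connections to the web servers
    }
    server {
        listen 80;
        keepalive_timeout 65s;        # client-side keepalive
        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;   # needed for upstream keepalives
            proxy_set_header Connection "";
        }
    }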
SSL key sizes have a large impact on performance. 2048 bit keys require five times the processing power of 1024 bit keys when handling new SSL handshakes. If you are doing real world testing, make sure to test with the size of key you will use in production. If you are trying to maximize the results, use a 1024 bit key. Also, to get real world results you need to have an idea of how many SSL connections will require new handshakes.
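If you want to see the difference on your own hardware, OpenSSL’s built-in benchmark gives a quick, rough comparison of the private-key operations involved in new handshakes (not a full TLS handshake test):
    openssl speed rsa1024 rsa2048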
Compression also impacts performance. Turning compression on increases the demand on the CPU to do the compression and decompression but reduces the bandwidth used, so it is important to test with the compression settings you will use in production.
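In NGINX, for example, compression is controlled with the gzip directives; a minimal sketch, where the level and MIME types are just examples:
    gzip on;
    gzip_comp_level 5;    # higher levels cost more CPU for smaller responses
    gzip_types text/css application/javascript application/json;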
It is vital that the machines used to generate the load and, in the case where you are testing a reverse proxy, to serve the content are configured correctly and have enough resources to generate and serve that load. If not, your testing tools will be the bottleneck and you will not be able to find the limit of the SUT.
You may think from my discussion of realistic versus unrealistic synthetic clients that I am recommending the more complex, real-world style of testing except where you don’t care about real-world results. In fact, even if in the end you want your tests to mimic a real-world use case, you will usually also want to do some simpler testing. This is because the more complex tests add a lot of variables, which can make the results harder to interpret. It can be helpful to begin with some simple testing to make sure you don’t have any obvious bottlenecks, because those bottlenecks may not be at all obvious during more complex testing. Simple testing lets you verify that you can drive the CPU to 100%, saturate the available bandwidth, and so on. For example, I was involved in a test on a machine with 2x20G NICs where a simple bandwidth test could barely reach 5G. Because the test was so simple, we were able to pinpoint the problem as a setting on a switch, and once that was fixed the same test saturated both NICs. Once these initial simple tests show that the system is working as expected, you can move on to more complex, real-world testing. You may then find it useful to return to the simple tests to help troubleshoot the more complex ones. For the simple tests you can use simple tools like Apache Bench, or you can use the more complex tools.
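As an example of this kind of simple baseline, a raw bandwidth check between two of the machines can be run with a tool like iperf3; the hostname, stream count, and duration here are just placeholders:
    # On the web server
    iperf3 -s
    # On the load generator: 4 parallel streams for 30 seconds
    iperf3 -c webserver -P 4 -t 30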
Let’s talk a bit about interpreting results. When you run a test to try to find the limit of a SUT, the fact that the test seems to have found some sort of maximum value doesn’t necessarily mean you have found the limit of the SUT. If you have truly found the limit of the SUT, then you should have maxed out a system resource, such as CPU, memory, bandwidth, or disk I/O. If not, then you have hit a bottleneck somewhere else. It could be in the software you are testing, or in a driver, or in the load generators or the web servers (if you are testing a reverse proxy), or somewhere else in the infrastructure. Using the simple testing I talked about on the previous slide can help to troubleshoot these issues.
The NGINX default settings have been designed to be optimal for most environments, so don’t change them if you don’t have to.
If you do need to adjust the NGINX configuration, then certain directives can impact performance. Some of the directives to look into are accept_mutex, worker_processes, worker_connections, keepalive_timeout, lingering_close, sendfile, keepalive, aio and open_file_cache. Please refer to the NGINX documentation for details.
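As a rough illustration only, some of these directives appear in the main, events, and http contexts like this; the values are placeholders, not recommendations:
    worker_processes auto;            # one worker per CPU core
    events {
        worker_connections 10240;
        accept_mutex off;
    }
    http {
        sendfile on;
        keepalive_timeout 65s;
        lingering_close on;
        open_file_cache max=10000 inactive=30s;
    }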
I have found that some 10G NIC drivers do not properly distribute interrupts across all the cores of a machine. If this happens, you will see some of the cores at 100% utilization while other cores show very low utilization, so you are effectively only getting the processing power of the number of cores the driver is spreading interrupts across. The solution to this problem depends on the driver. You may be able to work around it with some scripting, or you may need a new driver, or in the worst case you may want to get a card from a different vendor.
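A quick, rough way to spot this on Linux is to compare per-core utilization with the per-queue interrupt counts; mpstat comes from the sysstat package and the interface name is just an example:
    # Per-core CPU utilization; look for a few cores pinned at 100% while others sit idle
    mpstat -P ALL 2
    # Interrupt counts per NIC queue per core
    grep eth0 /proc/interrupts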
It is very important that you monitor error logs. This includes error output from the testing tools, the SUT, and the OS. Errors can return faster than actual requests, so if you are unwittingly getting errors from a web server, you may see a deceptively high request rate. You may also be hitting system limits that are impacting the tests.
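In practice this can be as simple as keeping a few tails open during the run; the log paths here are typical Ubuntu locations and will vary by distribution and tool:
    tail -f /var/log/nginx/error.log /var/log/apache2/error.log
    tail -f /var/log/syslog    # watch for OS-level messages such as hitting limits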
You should never run a load generator on the same machine as the SUT if you want meaningful performance results, because they will be competing for resources.
You should double-check the numbers you get from the tests to be sure they make sense. For example, in the demonstration I will show you shortly, I am requesting a 10K file. After a test runs, the amount of data received by the client should match the number of requests times 10K. If instead of getting the 10K file I get a 404, I will not only see a deceptively high request rate, but if I check the bandwidth I will see that it is less than it should be.
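To take the numbers from the upcoming demonstration as an illustration: at roughly 2500 requests per second for a 10K file, the payload alone works out to about 25 MB/s, or around 200 Mbit/s, so the total transferred reported by the tool should be in that neighborhood.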
All tests should be run multiple times. Depending on the amount of variation in the results, you may want to average all the results or you may want to discard outliers.
If you are running in a virtualized environment, be aware of what else is running on the host, if you can.
Related to the previous point, if you are testing in the cloud, then you will have little control over or visibility into the infrastructure used for your testing. You may therefore see very inconsistent results, depending on what else is going on in that infrastructure. This makes it even more important to run tests multiple times, ideally at different times of day.
If you are getting results that you don’t understand, then simplify your test to remove variables, as I discussed earlier with regard to testing for single system limits.
Now I will do some demonstrations to show a few of the things I’ve been talking about. For these tests I have a very simple setup with three Ubuntu instances in a cloud. Two of the instances are being used as load generators and they each have two load generation tools installed, apache bench and siege. The third instance has Apache and NGINX installed. Apache has a default configuration using the worker multi-processing module, and NGINX has a default configuration. In addition, I’ve added 2ms of delay to each machine to better simulate a LAN connection versus a host-to-host virtual network.
We will start with a very simple test, using apache bench in the upper left window to generate traffic directly to Apache.
[Show: ab -t 10 -s 10 -c 5 http://webserver/10k.html]
Apache Bench has a number of options, but for this test we will only use the -t option to specify that the test will run for 10 seconds, -s to specify that we want Apache Bench to time out after 10 seconds rather than the default of 30 seconds (this will come in handy in a later test), and the -c option to tell Apache Bench to open 5 concurrent connections to Apache. By default Apache Bench does not use keepalive connections, so it will open and close a connection for each request and response. This should give us a conservative requests-per-second number because, as I mentioned earlier, using keepalive connections reduces the amount of CPU needed by the client and server, since the machines don’t have the overhead of setting up and tearing down a connection for each request and response. So by not using keepalives we are making the machines do some extra work. We will address keepalives again in another test. The requests will be for a 10K file. While I run this test in the upper left window, we will also monitor the CPU utilization of the web server in the bottom window.
[Run: vmstat 3]
We will be looking at the idle CPU %, which is the third column from the right.
[Run: ab]
While the test is running we see that the CPU idle % drops a bit, but the CPU is still mostly idle. And we see that we get around 550 requests per second. Since the CPU was still mostly idle, this tells us that we haven’t hit the maximum for this test (with plain HTTP requests for a file this size we won’t be stressing memory, bandwidth, or disk I/O).
Now let’s run the same test, but this time with keepalives enabled. Now Apache Bench will open 5 connections but keep them open for the duration of the test.
[Run: ab -t 10 -s 10 -c 5 -k http://webserver/10k.html]
And we see that the request rate roughly doubled. Keep this in mind as we do some additional tests.
We will return to testing without keepalives, but now we want to find out how many connections Apache can handle before the CPU becomes 100% busy. This should give us an idea of the maximum request rate for this test scenario. We don’t have time to run a whole series of tests to see what number of concurrent connections drives the CPU to 100% busy, but from my previous testing I know that this occurs at around 40 concurrent connections.
[Run: ab -t 10 -s 10 -c 40 http://webserver/10k.html]
If we run a test with -c 40, we see that the idle % goes to zero and the request rate increases to around 2500 per second. We can increase this to 400 concurrent connections.
[Run: ab -t 10 -s 10 -c 400 http://webserver/10k.html]
And we see that the request rate stays about the same. We could keep increasing this value, but for our tests we will stop here. Let’s say that these results are within our performance requirements, so based on them we are confident the system will perform well in production. One note: you may remember that previously I talked about sometimes having to increase system limits for your testing. In this case I had to increase the number of open files the OS would allow, otherwise Apache Bench errored when trying to open 400 connections.
As I said earlier, Apache Bench does not do a good job of simulating actual clients, so let’s run a test with another tool, Siege, in the upper right window. It does a better job of simulating actual clients, although it is by no means complete in this area.
[Run: siege -t 20s -c 150 -d 10 http://webserver/10k.html]
Two of the options for Siege, -t and -c, are the same as with Apache Bench, except that this time we will let the test run longer and we will try 150 concurrent connections. The key differences in this test are that we specify the -d option, which tells Siege to add a delay between requests (a random number of seconds between zero and the number we specify, in this case 10 seconds), and that Siege uses keepalive connections, as real clients usually do. This means Siege will open 150 connections and keep them open, unlike the Apache Bench tests I have been running, where Apache Bench was opening and closing a connection for each request/response. So since Siege is using keepalives and adding a delay between requests, it puts less load on Apache, both by removing some connection handling overhead and by making fewer requests. This test takes about 15 seconds to warm up, so I’ve let it run for 20 seconds. During this test we see that the CPU remains mostly idle and we get a requests-per-second rate of only about a hundredth of what we saw with our Apache Bench test. This makes sense, because Siege is making far fewer requests, but it doesn’t appear to tell us anything about the limits of this machine. Now let’s run the Siege test and the Apache Bench test, with just 5 concurrent connections, at the same time.
[Run: siege -t 60s -c 150 -d 10 http://webserver/10k.html]
[Show: ab -t 10 -s 10 -c 5 http://webserver/10k.html]
Remember that for the last Apache Bench test we were opening 400 concurrent connections, and now we have Siege opening 150 and Apache Bench opening 5, so let’s see what happens. We will wait about 15 seconds for Siege to warm up, then kick off the Apache Bench test, and we will again watch the CPU utilization.
[Run: ab]
We see that we get an error from Apache Bench. This error means that it was unable to get a response from Apache, even though the CPU remained mostly idle. This is why I said earlier that if you have truly reached the maximum of a machine, you should see that the system has exhausted some resource, in this case CPU, or else you have hit a bottleneck. Here the bottleneck is Apache’s connection handling. By having Siege open 150 keepalive connections and send a request only occasionally, it takes up all the connections Apache can handle, so when the Apache Bench test runs, it is unable to get any requests processed. You would think that using keepalive connections, which reduce the processing needed, would give better performance, but here we see worse performance. This is because of how Apache handles connections, and it is a case where a more real-world test shows a far lower level of performance than a simpler test.
So now a plug for NGINX. On the machine running Apache, we also have NGINX running, set up to listen on port 8080 and proxy requests to Apache. So let’s run the same test again, but this time both Siege and Apache Bench will send traffic to NGINX instead of Apache, and NGINX will do nothing but proxy all the requests to Apache, taking over the job of handling the client connections.
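A minimal sketch of this kind of proxy configuration, assuming Apache is listening on port 80 on the same machine, looks roughly like this:
    server {
        listen 8080;
        location / {
            proxy_pass http://127.0.0.1:80;   # Apache on the same machine
        }
    }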
[Run: siege -t 60s -c 150 -d 10 http://webserver:8080/10k.html]
[Show: ab -t 10 -s 10 -c 5 http://webserver:8080/10k.html]
We will again give Siege about 15 seconds to warm up.
[Run: ab]
This time when apache bench is run we see the CPU become busier and we get a request rate similar to what we saw in our original apache bench test, without siege running. So NGINX allows you to get around this bottleneck because of its better connection handling and better overall efficiency.
You might find it interesting to know how NGINX would perform in these tests if it were serving the file rather than Apache. I have configured NGINX to also listen on port 8000 and serve the same 10K file. Remember that when we tested directly against Apache, we saw the CPU go to 100% busy at around 40 concurrent connections. Let’s run the same test against NGINX.
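Again as a minimal sketch, with the document root shown here only as an example path:
    server {
        listen 8000;
        root /var/www/html;   # directory containing 10k.html
    }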
[Run: ab -t 10 -s 10 -c 40 http://webserver:8000/10k.html]
Here we see that the CPU is not 100% busy, but the request rate has gone from around 2500 to around 3800.
[Run: ab -t 10 -s 10 -c 80 http://webserver:8000/10k.html]
In my testing in this environment I see that it takes about 80 connections to consistently get the CPU to 100% busy.
Finally, let me demonstrate something else I mentioned earlier: errors can return more quickly than successful requests. If we rerun the previous test
[Run: ab -t 10 -s 10 -c 80 http://webserver:8000/11k.html]
but request a file that doesn’t exist, so that we get a 404 Not Found error, we see that our request rate has gone from just under 4000 to just under 5000.
Now we will open the floor for questions.
Thank you for taking the time to attend this session. I hope it was useful to you and that you will attend some of our future webinars. You will find recordings of previous webinars at nginx.com, including one on tuning. And it is easy to try out either the open source version of NGINX or NGINX Plus.