5 things you didn't know nginx could do (sarahnovotny)
NGINX is a well-kept secret of high-performance web service. Many people know NGINX as an open source web server that delivers static content blazingly fast, but it has many more features to help accelerate delivery of bits to your end users, even in more complicated application environments. In this talk we'll cover several things that most developers or administrators could implement to further delight their end users.
Learn how to load balance your applications following best practices with NGINX and NGINX Plus.
Join this webinar to learn:
- How to configure basic HTTP load balancing features
- The essential elements of load balancing: session persistence, health checks, and SSL termination
- How to load balance MySQL, DNS, and other common TCP/UDP applications
- How to have NGINX Plus automatically discover new service instances in an auto-scaling or microservices environment
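To make the bullet points above concrete, here is a minimal sketch of HTTP load balancing with SSL termination and IP-hash session persistence. The backend hostnames and certificate paths are placeholders, not part of the webinar itself:

```nginx
http {
    upstream backend {
        ip_hash;                          # session persistence by client IP
        server app1.example.com:8080;     # placeholder backends
        server app2.example.com:8080;
    }

    server {
        listen 443 ssl;                   # SSL is terminated at the load balancer
        ssl_certificate     /etc/nginx/cert.pem;
        ssl_certificate_key /etc/nginx/cert.key;

        location / {
            proxy_pass http://backend;    # requests are distributed across the upstream group
        }
    }
}
```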
NGINX: Basics & Best Practices – EMEA Broadcast (NGINX, Inc.)
On-demand recording: nginx.com/resources/webinars/nginx-basics-best-practices-live-emea
You have heard of NGINX and the benefits it can provide to your web application, but maybe you are not sure how to get started. There are a lot of tutorials online, but they can be outdated and contradict each other – making things more challenging.
This webinar will teach you how to:
* Install NGINX and verify it’s properly running
* Create NGINX configurations for reverse proxy, load balancing, and more
* Improve performance using keepalives and other NGINX directives
* Debug and troubleshoot using NGINX logs
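As a sketch of the reverse-proxy and keepalive points above (the upstream address is a placeholder): keepalive connections to the upstream require HTTP/1.1 and an empty Connection header, as shown here:

```nginx
upstream app {
    server 127.0.0.1:8080;            # placeholder backend
    keepalive 32;                     # pool of idle connections kept open to the upstream
}

server {
    listen 80;

    location / {
        proxy_pass http://app;
        proxy_http_version 1.1;           # keepalive requires HTTP/1.1...
        proxy_set_header Connection "";   # ...and no "Connection: close" header
    }
}
```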
Rick Nelson, Technical Solutions Architect at NGINX, joins Server Density to take you through the do's and don'ts of monitoring NGINX: critical and non-critical metrics to monitor, important alerts to configure, and the best monitoring tools available.
Basic concepts of nginx: Apache vs. nginx, nginx as a load balancer, nginx as a reverse proxy, and configuration of nginx as a load balancer and reverse proxy.
5 things you didn't know nginx could do – Velocity (sarahnovotny)
NGINX is a well-kept secret of high-performance web service. Many people know NGINX as an open source web server that delivers static content blazingly fast, but it has many more features to help accelerate delivery of bits to your end users, even in more complicated application environments. In this talk we'll cover several things that most developers or administrators could implement to further delight their end users.
On-demand recording: nginx.com/resources/webinars/whats-new-nginx-plus-r12
NGINX Plus Release 12 (R12) is a significant release of the high-performance software application delivery platform, including award-winning customer support, a load balancer, content cache, and web server.
R12 adds improved configuration sharing, additional monitoring statistics, enhanced caching, improved health checks, and the general availability (GA) release of nginScript, which increases dynamic configuration capabilities for NGINX and NGINX Plus.
Join Liam Crilly, Director of Product Management for NGINX and NGINX Plus, to learn:
* How to use a new and improved method for synchronizing configuration across a cluster of servers
* What new features have been added to nginScript, the unique JavaScript implementation for NGINX and NGINX Plus
* Which new statistics have been added to NGINX Plus monitoring, such as response time for upstream servers, response codes for TCP/UDP upstreams, and upstream hostnames
* How improved health checks can help you maximize server uptime
Content caching is one of the most effective ways to dramatically improve the performance of a web site. In this webinar, we’ll deep-dive into NGINX’s caching abilities and investigate the architecture used, debugging techniques and advanced configuration. By the end of the webinar, you’ll be well equipped to configure NGINX to cache content exactly as you need.
View full webinar on demand at http://nginx.com/resources/webinars/content-caching-nginx/
High Availability Content Caching with NGINX (NGINX, Inc.)
On-Demand Recording:
https://www.nginx.com/resources/webinars/high-availability-content-caching-nginx/
You trust NGINX to be your web server, but did you know it’s also a high-performance content cache? In fact, the world’s most popular CDNs – CloudFlare, MaxCDN, and Level 3 among them – are built on top of the open source NGINX software.
NGINX content caching can drastically improve the performance of your applications. We’ll start with basic configuration, then move on to advanced concepts and best practices for architecting high availability and capacity in your application infrastructure.
Join this webinar to:
* Enable content caching with the key configuration directives
* Use micro caching with NGINX Plus to cache dynamic content while maintaining low CPU utilization
* Partition your cache across multiple servers for high availability and increased capacity
* Log transactions and troubleshoot your NGINX content cache
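The micro-caching idea above can be sketched in a few directives (paths and the backend address are illustrative): dynamic responses are cached for just one second, which collapses bursts of identical requests while keeping content fresh:

```nginx
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=micro:10m;

server {
    listen 80;

    location / {
        proxy_cache micro;
        proxy_cache_valid 200 1s;        # micro caching: keep 200 responses for one second
        proxy_cache_use_stale updating;  # serve stale content while a fresh copy is fetched
        proxy_cache_lock on;             # only one request at a time populates a cache entry
        proxy_pass http://127.0.0.1:8080;
    }
}
```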
NGINX is used by more than 130 million websites as a lightweight way to serve web content. Use it to decrease costs, improve performance and open up bottlenecks in web and application server environments without a major architectural overhaul. In this talk, we'll cover the three most basic use cases of static content delivery, application load balancing, and web proxying with caching; and touch on the NGINX maintained Docker container.
Watch the webinar on demand: https://www.nginx.com/resources/webinars/maximize-php-performance-with-nginx
Is your PHP app slowing to a crawl? PHP is a powerful programming language that powers roughly 80% of the internet, but it’s unfortunately subject to performance problems – as we all know. Luckily, for thousands of PHP-based applications, some relatively simple changes can lead to noticeable improvements in performance.
NGINX has greatly improved application performance for more than 150 million sites in production today. Using NGINX greatly improves the performance of PHP apps with features such as caching, load balancing, HTTP/2 support, and more, included in open source NGINX software and in our commercial-grade application delivery platform NGINX Plus.
Rate Limiting with NGINX and NGINX Plus (NGINX, Inc.)
On-demand recording: https://www.nginx.com/resources/webinars/rate-limiting-nginx/
Learn how to mitigate DDoS and password-guessing attacks by limiting the number of HTTP requests a user can make in a given period of time.
In this webinar you will learn:
* How to protect application servers from being overwhelmed, using request limits
* How the burst and nodelay features minimize delay while handling large bursts of user requests
* How to use the map and geo blocks to impose different rate limits on different HTTP requests
* How to use the limit_req_log_level directive to set logging levels for rate-limiting events
About the webinar
A delay of even a few seconds for a screen to render is interpreted by many users as a breakdown in the experience. There are many reasons for these breakdowns, one of which is a DDoS attack tying up your system's resources.
Rate limiting is a powerful feature of NGINX that can mitigate DDoS attacks, which would otherwise overload your servers and hinder application performance. In this webinar, we’ll cover basic concepts as well as advanced configuration. We will finish with a live demo that shows NGINX rate limiting in action.
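A minimal rate-limiting configuration tying together the directives mentioned above (the /login/ location and backend address are illustrative):

```nginx
# Track clients by IP in a 10 MB shared zone, allowing 10 requests per second each
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 80;

    location /login/ {
        limit_req zone=perip burst=20 nodelay;  # absorb short bursts without queueing delay
        limit_req_log_level warn;               # log rejected requests at "warn" level
        proxy_pass http://127.0.0.1:8080;
    }
}
```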
under the covers – chef in 20 minutes or less (sarahnovotny)
Learn how to automate your infrastructure to make more time for fun things. In this rapid-fire intro to Chef, an open source provisioning and automation platform, we'll touch on the strengths of its flexible architecture and show some concrete, simple starting points on your path to becoming an executive chef.
Delivering High Performance Websites with NGINX (NGINX, Inc.)
NGINX Plus is an easy-to-install, proven software solution to deliver your sites and applications through state-of-the-art intelligent load balancing and high performance acceleration. Improve your servers’ performance, scalability, and reliability with application delivery from NGINX Plus.
NGINX Plus significantly increases application performance during periods of high load with its caching, HTTP connection processing, and efficient offloading of traffic from slow networks. NGINX Plus offers enterprise application load balancing, sophisticated health checks, and more, to balance workloads and avoid user-visible errors.
Check out this webinar to:
* Learn why web performance matters more than ever, in the face of growing application complexity and traffic volumes
* Get the lowdown on the performance challenges of HTTP, and why the real world is so different from a development environment
* Understand why NGINX and NGINX Plus are such popular solutions for mitigating these problems and restoring peak performance
* Look at some real-world deployment examples of accelerating traffic in complex scenarios
Nginx, pronounced "Engine X", is an open source, high-performance web and reverse proxy server that supports protocols such as HTTP, HTTPS, SMTP, and IMAP. It can also be used for load balancing and HTTP caching.
Key external invitees will each give a 10-minute lightning talk about their company, their interest in ARM servers, and any requirements to port their software solutions to ARM 64-bit platforms.
Video: https://www.youtube.com/watch?v=XWxrVM1i7gA&list=UUIVqQKxCyQLJS6xvSmfndLA
2. What is nginx?
•The second most used HTTP server (Apache is first)
•Asynchronous, event-driven approach to request handling
•Written in C
•Readable source code!
3. What kind of modules do we have?
•handler modules - they process a request and produce output
•filter modules - they manipulate the output produced by a handler
•load-balancers - they choose a backend server to send a request to, when more than one backend server is eligible
4. What will you learn through the course of this presentation?
•Step 0: Building Nginx (with a module)
•Step 1: Create a stub http content module, aka "Hello, World"
•Step 2: Update the module to use an external library for content creation
•Step 3: Give your module its own configuration directives
•Step 4: Fetching arguments from the URL
6. Download & build
# wget http://nginx.org/download/nginx-1.0.11.tar.gz
# tar zxvf nginx-1.0.11.tar.gz
# cd nginx-1.0.11
# ./configure
# make
Straightforward. No surprises there.
7. Building a third party module
Modules are statically linked, so adding a new module requires rebuilding nginx.
# ./configure --add-module=$HOME/git/some_cool_module_I_found_somewhere_on_github
# make
Easy. (Want more modules? --add-module can be specified several times.)
8. This is boring, show me the code!
Step 1: Create a stub http content module, aka "Hello, World"
9. Creating your own module
A minimal HTTP module - let's call it "fun" - consists of a
directory with two files:
config
and a source file
ngx_http_fun_module.c
A significant amount of the source-file will be stub-code.
10. The config file
It is sourced as a shell script, and should set a handful of
variables that tell the build system what to do.
A minimal config-file for a module without any external
dependencies could look like this:
ngx_addon_name=ngx_http_fun_module
HTTP_MODULES="$HTTP_MODULES ngx_http_fun_module"
NGX_ADDON_SRCS="$NGX_ADDON_SRCS $ngx_addon_dir/ngx_http_fun_module.c"
11. The source file (1)
Include your headers.
#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>
Declare a handler.
static char * ngx_http_fun(ngx_conf_t * cf, ngx_command_t * cmd, void * conf);
Create our HTTP body.
static u_char ngx_fun_string[] = "This is fun!";
12. How does nginx know about my configuration
directives?
Quoting the nginx source-code:
struct ngx_command_s {
ngx_str_t name;
ngx_uint_t type;
char * (* set)(ngx_conf_t * cf,
ngx_command_t * cmd, void * conf);
ngx_uint_t conf;
ngx_uint_t offset;
void * post;
};
#define ngx_null_command { ngx_null_string, 0, NULL, 0, 0, NULL }
typedef struct ngx_command_s ngx_command_t;
An array of ngx_command_t's, terminated by ngx_null_command,
should be defined in your module.
13. Declaring the configuration parameters
available by your module
Create a static array of the ngx_command_t-type, terminate it
with ngx_null_command. Populate it with the commands
available for your module.
static ngx_command_t ngx_http_fun_commands[] = {
{ // Our command is named "fun":
ngx_string("fun"),
// The directive may be specified in the location-level of your nginx-config.
// The directive does not take any arguments (NGX_CONF_NOARGS)
NGX_HTTP_LOC_CONF|NGX_CONF_NOARGS,
// A pointer to our handler-function.
ngx_http_fun,
// We're not using these two, they're related to the configuration structures.
0, 0,
// A pointer to a post-processing handler. We're not using any here.
NULL },
ngx_null_command
};
14. Declaring the module context
Create a static ngx_http_module_t, populate all 8 elements with
NULL.
static ngx_http_module_t ngx_http_fun_module_ctx = {
NULL, // preconfiguration
NULL, // postconfiguration
NULL, // create main configuration
NULL, // init main configuration
NULL, // create server configuration
NULL, // merge server configuration
NULL, // create location configuration
NULL // merge location configuration
};
These hooks create and merge configuration structures at each level
(main, server, location), and give you control over which directives
win when the levels conflict.
Our module does not use any configuration parameters other
than "fun;", so we don't need any handlers.
15. Declaring the module description structure
Create a (not static) structure of type ngx_module_t, reference
the structures we've made like this:
ngx_module_t ngx_http_fun_module = {
NGX_MODULE_V1,
&ngx_http_fun_module_ctx, // module context
ngx_http_fun_commands, // module directives
NGX_HTTP_MODULE, // module type
NULL, // init master
NULL, // init module
NULL, // init process
NULL, // init thread
NULL, // exit thread
NULL, // exit process
NULL, // exit master
NGX_MODULE_V1_PADDING
};
16. Creating the actual handler (1)
A static function that returns an ngx_int_t, and receives a
pointer to a ngx_http_request_t.
static ngx_int_t
ngx_http_fun_handler(ngx_http_request_t * r)
{
ngx_int_t rc;
ngx_buf_t * b;
ngx_chain_t out;
We use rc to store the return value of certain function calls.
We use * b to store our buffer pointer.
out is the buffer chain.
17. Creating the actual handler (2)
A simple test of the kind of request we're receiving.
// we respond to 'GET' and 'HEAD' requests only
if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD))) {
return NGX_HTTP_NOT_ALLOWED;
}
18. Creating the actual handler (3)
The request-body is useful if we're dealing with POST. We're
not.
// discard request body, since we don't need it here
rc = ngx_http_discard_request_body(r);
if (rc != NGX_OK) {
return rc;
}
This attempts to detach the request body from the request. It's
an optimization.
19. Creating the actual handler (4)
Setting the content-type is important --
// set the 'Content-type' header
r->headers_out.content_type_len = sizeof("text/html") - 1;
r->headers_out.content_type.len = sizeof("text/html") - 1;
r->headers_out.content_type.data = (u_char * ) "text/html";
20. Creating the actual handler (5)
// send the header only, if the request type is http 'HEAD'
if (r->method == NGX_HTTP_HEAD) {
r->headers_out.status = NGX_HTTP_OK;
r->headers_out.content_length_n = sizeof(ngx_fun_string) - 1;
return ngx_http_send_header(r);
}
21. Creating the actual handler (6)
// allocate a buffer for your response body
b = ngx_pcalloc(r->pool, sizeof(ngx_buf_t));
if (b == NULL) {
return NGX_HTTP_INTERNAL_SERVER_ERROR;
}
22. Creating the actual handler (7)
// attach this buffer to the buffer chain
out.buf = b;
out.next = NULL;
23. Creating the actual handler (8)
// adjust the pointers of the buffer
b->pos = ngx_fun_string;
b->last = ngx_fun_string + sizeof(ngx_fun_string) - 1;
b->memory = 1; // This buffer is in read-only memory
// This means that filters should copy it, and not try to rewrite in place.
b->last_buf = 1; // this is the last buffer in the buffer chain
24. Creating the actual handler (9)
// set the status line
r->headers_out.status = NGX_HTTP_OK;
r->headers_out.content_length_n = sizeof(ngx_fun_string) - 1;
25. Creating the actual handler (10)
// send the headers of your response
rc = ngx_http_send_header(r);
if (rc == NGX_ERROR || rc > NGX_OK || r->header_only) {
return rc;
}
// send the buffer chain of your response
return ngx_http_output_filter(r, &out);
}
... almost done!
26. Attaching the handler
static char *
ngx_http_fun(ngx_conf_t * cf, ngx_command_t * cmd, void * conf)
{
ngx_http_core_loc_conf_t * clcf;
clcf = ngx_http_conf_get_module_loc_conf(cf, ngx_http_core_module);
clcf->handler = ngx_http_fun_handler; // handler to process the 'fun' directive
return NGX_CONF_OK;
}
In this function, we can validate on the incoming
ngx_command_t, and attach different handlers.
27. Does this really work?
Trygve will attempt to demonstrate.
# make
# sudo make install
Update the nginx-configuration with a location handler,
something like this:
location /fun {
    fun;
}
Start nginx, and access http://localhost/fun/
28. Of course it worked! But static text isn't any fun
Step 2: Update the module to use an external library for
content creation
29. What are we going to do? Tell me now!
•We're going to replace the static text with an image,
dynamically created with libcairo.
•We're going to learn about buffers and chains
30. The config file - dependency handling
Testing for dependencies is done with the feature tests
provided by nginx
ngx_feature="cairo"
ngx_feature_name=
ngx_feature_run=no
ngx_feature_incs="#include <cairo.h>"
ngx_feature_path="/usr/include/cairo"
ngx_feature_libs=-lcairo
ngx_feature_test="cairo_version()"
. auto/feature
if [ $ngx_found = yes ]; then
ngx_addon_name=ngx_http_fun_module
HTTP_MODULES="$HTTP_MODULES ngx_http_fun_module"
NGX_ADDON_SRCS="$NGX_ADDON_SRCS $ngx_addon_dir/ngx_http_fun_module.c"
CORE_LIBS="$CORE_LIBS `pkg-config cairo cairo-png --libs`"
CFLAGS="$CFLAGS `pkg-config cairo cairo-png --cflags`"
else
cat << END
$0: error: the fun module requires the cairo library.
END
exit 1
fi
31. The Cairo part (1)
Include the header, and define M_PI
#include <cairo.h>
#define M_PI 3.14159265
struct closure {
ngx_http_request_t * r;
ngx_chain_t * chain;
uint32_t length;
};
The struct will be used by a callback function we're going to
create later.
Throw away ngx_fun_string while you're at it, we're not going to
be using that.
32. The Cairo part (2)
In ngx_http_fun_handler(), remove the ngx_buf_t (we'll do these
things in a callback function later), and add our struct.
static ngx_int_t
ngx_http_fun_handler(ngx_http_request_t * r)
{
ngx_int_t rc;
ngx_chain_t out;
struct closure c = { r, &out, 0 };
33. The Cairo part (3)
Remove our header-handling from before - we won't be able to
calculate content-length before we've created the png.
// set the 'Content-type' header
r->headers_out.content_type_len = sizeof("text/html") - 1;
r->headers_out.content_type.len = sizeof("text/html") - 1;
r->headers_out.content_type.data = (u_char * ) "text/html";
// send the header only, if the request type is http 'HEAD'
if (r->method == NGX_HTTP_HEAD) {
r->headers_out.status = NGX_HTTP_OK;
r->headers_out.content_length_n = sizeof(ngx_fun_string) - 1;
return ngx_http_send_header(r);
}
// allocate a buffer for your response body
b = ngx_pcalloc(r->pool, sizeof(ngx_buf_t));
if (b == NULL) {
return NGX_HTTP_INTERNAL_SERVER_ERROR;
}
34. The Cairo part (4)
This is adapted from one of the many examples on the Cairo website.
cairo_surface_t * surface;
cairo_t * cr;
surface = cairo_image_surface_create (CAIRO_FORMAT_ARGB32, 256, 256);
cr = cairo_create (surface);
double xc = 128.0;
double yc = 128.0;
double radius = 100.0;
double angle1 = 270.0 * (M_PI/180.0); // angles are specified
double angle2 = 180.0 * (M_PI/180.0); // in radians
36. The Cairo part (6)
Make sure that our buffer chain is NULL'ed. Remove any
references to the ngx_buf_t from earlier.
out.buf = NULL;
out.next = NULL;
// Copy the png image to our buffer chain (we provide our own callback-function)
rc = cairo_surface_write_to_png_stream(surface, copy_png_to_chain, &c);
// Free cairo stuff.
cairo_destroy(cr);
cairo_surface_destroy(surface);
// If for some reason we didn't manage to copy the png to our buffer, return 500.
if ( rc != CAIRO_STATUS_SUCCESS )
{
return NGX_HTTP_INTERNAL_SERVER_ERROR;
}
The cairo_surface_write_to_png_stream uses a callback-function
to copy the png-data to a buffer, in the way we want.
Remember to free resources, and check that everything went
well.
37. The Cairo part (7)
We're going to set our headers now. New content-type - the
length has been calculated by our callback function and is
stored in our struct.
// set the 'Content-type' header
r->headers_out.content_type_len = sizeof("image/png") - 1;
r->headers_out.content_type.len = sizeof("image/png") - 1;
r->headers_out.content_type.data = (u_char * ) "image/png";
// set the status line
r->headers_out.status = NGX_HTTP_OK;
r->headers_out.content_length_n = c.length;
// send the headers of your response
rc = ngx_http_send_header(r);
// We've added the NGX_HTTP_HEAD check here, because we're unable to set content length before
// we've actually calculated it (which is done by generating the image).
// This is a waste of resources, and is why caching solutions exist.
if (rc == NGX_ERROR || rc > NGX_OK || r->header_only || r->method == NGX_HTTP_HEAD) {
return rc;
}
38. Allocating memory in an nginx-module
•Nginx has its own memory management system.
•ngx_pcalloc(pool, amount) will give you a pointer to some
memory you can do what you want with.
•'pool' attaches the memory to an owner. Some memory will
be allocated for the request (ngx_http_request_t->pool), and
some will be allocated for the module configuration
(ngx_conf_t->pool).
•Nginx deals with freeing/reusing this memory when its owner
doesn't need it anymore.
39. How do buffers and chains work?
•ngx_chain_t is a linked list of ngx_buf_t's.
•You provide a pointer to the first ngx_chain_t in the chain to
ngx_http_output_filter()
•It will walk through the chain, and pass all the buffers to the
output filter
•You need to get comfortable with this, or attaching bits and
pieces together will be tricky.
40. Forgot what a linked list is?
struct element {
int value;
struct element * next;
};
A linked list is typically a struct, referencing an element of its
own type. The last element is often set to NULL, but they can
also be circular, or double (having an additional reference for
the previous item in the list), etc.
41. Creating our callback function (1)
Cairo provides an interface for streaming out PNG images
however you'd like. What we want to do is attach the data to
our chain, so we implement our own callback function. Note
that Cairo may call this function any number of times while
writing the image.
static cairo_status_t
copy_png_to_chain(void * closure, const unsigned char * data, unsigned int length)
{
// closure is a 'struct closure'
struct closure * c = closure;
// Just a helper pointer, to help us traverse the linked list.
ngx_chain_t * ch = c->chain;
// We track the size of the png-file in our closure struct.
c->length += length;
42. Creating our callback function (2)
// The allocated memory belongs to the request-pool.
ngx_buf_t * b = ngx_pcalloc(c->r->pool, sizeof(ngx_buf_t));
unsigned char * d = ngx_pcalloc(c->r->pool, length);
// We make sure to fail if we're unable to allocate memory.
if (b == NULL || d == NULL) {
return CAIRO_STATUS_NO_MEMORY;
}
// Copy data to our new buffer, and set the pointers.
ngx_memcpy(d, data, length);
b->pos = d;
b->last = d + length;
b->memory = 1;
b->last_buf = 1;
43. Creating our callback function (3)
If the first link's buffer hasn't been filled yet, we fill it and can quit early.
// Handle the first element in our linked list.
if ( c->chain->buf == NULL )
{
c->chain->buf = b;
return CAIRO_STATUS_SUCCESS;
}
44. Creating our callback function (4)
// Skip to the end of the linked list.
while ( ch->next )
{
ch = ch->next;
}
45. Creating our callback function (5)
// The buffer that used to be the tail is no longer last; clear its flag.
ch->buf->last_buf = 0;
// Allocate a new link in our chain.
ch->next = ngx_pcalloc(c->r->pool, sizeof(ngx_chain_t));
if ( ch->next == NULL )
{
return CAIRO_STATUS_NO_MEMORY;
}
// Attach our buffer at the end.
ch->next->buf = b;
ch->next->next = NULL;
return CAIRO_STATUS_SUCCESS;
}
Presto! That was easy :)
47. Yeah, but what if I want to be able to configure
my module?
•Step 3: Give your module its own configuration directives
48. Create a datatype for config-storage
typedef struct {
ngx_uint_t radius;
} ngx_http_fun_loc_conf_t;
49. Extend the ngx_http_fun_commands[]-array
{ // New parameter: "fun_radius":
ngx_string("fun_radius"),
// Can be specified on the main level of the config,
// can be specified in the server level of the config,
// can be specified in the location level of the config,
// the directive takes 1 argument (NGX_CONF_TAKE1)
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
// A builtin function for setting numeric variables
ngx_conf_set_num_slot,
// We tell nginx how we're storing the config.
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_fun_loc_conf_t, radius),
NULL
},
50. Create a function for dealing with the config
creation
static void *
ngx_http_fun_create_loc_conf(ngx_conf_t * cf)
{
ngx_http_fun_loc_conf_t * conf;
conf = ngx_pcalloc(cf->pool, sizeof(ngx_http_fun_loc_conf_t));
if (conf == NULL) {
return NULL; // create-handlers signal failure with NULL
}
conf->radius = NGX_CONF_UNSET_UINT;
return conf;
}
51. Create a function for merging config
static char *
ngx_http_fun_merge_loc_conf(ngx_conf_t * cf, void * parent, void * child)
{
ngx_http_fun_loc_conf_t * prev = parent;
ngx_http_fun_loc_conf_t * conf = child;
ngx_conf_merge_uint_value(conf->radius, prev->radius, 100);
if (conf->radius < 1) {
ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "radius must be at least 1");
return NGX_CONF_ERROR;
}
if (conf->radius > 1000 ) {
ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "radius must be at most 1000");
return NGX_CONF_ERROR;
}
return NGX_CONF_OK;
}
52. Add pointers to our new functions in the
ngx_http_fun_module_ctx
ngx_http_fun_create_loc_conf, // create location configuration
ngx_http_fun_merge_loc_conf // merge location configuration
53. Give our handler access to the configuration
data
ngx_http_fun_loc_conf_t * cglcf;
cglcf = ngx_http_get_module_loc_conf(r, ngx_http_fun_module);
54. Using the config-data in our module
We override the dimensions of our image, and we override the
center-position.
surface = cairo_image_surface_create (CAIRO_FORMAT_ARGB32, cglcf->radius*2 + 64, cglcf->radius*2 + 64);
cr = cairo_create (surface);
double xc = cglcf->radius + 32;
double yc = cglcf->radius + 32;
double radius = cglcf->radius;
55. It can't possibly be that easy! Show me!
Update the configuration to something like this:
location /funbig {
    fun;
    fun_radius 500;
}
location /funsmall {
    fun;
    fun_radius 50;
}
The different URLs result in different sized images.
56. I want to deal with user-input
•Step 4: Fetching arguments from the URL
57. This is as easy as parsing strings in C. cough
The request has a uri element, with members .data and .len.
Adding something like this to your handler will give you the
last 3 characters of the URI as an integer:
char * uri;
int angle = 0;
if ( r->uri.len > 3 )
{
uri = (char * )r->uri.data + r->uri.len - 3;
angle = strtol(uri, NULL, 10);
}
58. Which you then can use ...
double angle1 = 0.0; // angles are specified
double angle2 = angle * (M_PI/180.0); // in radians