This document provides an overview of NGINX, HAProxy, and DNS stack technologies presented at a WordCamp conference. It discusses how NGINX uses an asynchronous, event-driven architecture to handle high loads more efficiently than threaded architectures. It then demonstrates configuring NGINX as a reverse proxy, including logging, caching, and gzip compression. Finally, it briefly introduces HAProxy as an open-source load balancer and discusses installing and configuring it on Ubuntu.
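As a rough illustration of the reverse-proxy setup described above, a minimal NGINX configuration sketch might look like this (the backend address, paths, and sizes are assumptions, not taken from the talk):

```nginx
# Minimal reverse-proxy sketch; backend address, paths, and sizes are assumptions.
http {
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m max_size=1g;

    gzip on;
    gzip_types text/css application/javascript application/json;

    server {
        listen 80;
        access_log /var/log/nginx/proxy_access.log;

        location / {
            proxy_pass http://127.0.0.1:8080;  # assumed backend
            proxy_cache app_cache;
            proxy_cache_valid 200 10m;
        }
    }
}
```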
Where is my cache? Architectural patterns for caching microservices by example (Rafał Leszko)
The document discusses various architectural patterns for caching microservices, including embedded caching, embedded distributed caching, client-server caching, cloud caching, sidecar caching, reverse proxy caching, and reverse proxy sidecar caching. It provides examples and pros and cons of each pattern. The presentation concludes with a summary of when each pattern may be best suited based on factors like whether the application is aware of the cache, if it uses containers, the data volume, security restrictions, language agnostic needs, and cloud usage.
This document discusses using a copy-on-write cache accelerator pattern and Varnish cache to optimize a microservice that allows customers to specify a favorite delivery location. Implementing a persistent cache with Varnish Cache Plus and transaction logging would make the solution robust and scalable by handling cache updates asynchronously and repopulating the cache if needed. This caching approach is well-suited for services with an expected low cache hit rate and improves performance by consuming fewer resources than directly calling the microservice.
Architectural caching patterns for Kubernetes (Rafał Leszko)
The document discusses various architectural caching patterns for Kubernetes, including embedded, embedded distributed, client-server, cloud, sidecar, reverse proxy, and reverse proxy sidecar caching. It provides examples of implementing each pattern using Hazelcast and discusses the pros and cons of each approach.
Skype uses PostgreSQL for its databases and has over 100 database servers handling over 10,000 transactions per second across 200 databases storing billions of records. The databases are split both vertically and horizontally to scale with load. All database access is through stored procedures for security and flexibility. Replication and remote procedure calls connect the databases. Tools like pgBouncer, PL/Proxy, PgQ, and Londiste help partition, connect, and load balance the databases to form a large and complex but manageable distributed database architecture.
Ceph Day London 2014 - The current state of CephFS development (Ceph Community)
The document discusses recent developments in CephFS. It provides an overview of CephFS architecture including components like clients, servers, storage and data placement. The focus is on improving resilience and making CephFS production-ready with features like online filesystem checking, journal resilience tools, client management and online diagnostics. The goal is to handle failures and diagnose problems in a distributed filesystem environment.
Smart contracts and NFTs call for a revised approach to storing data. These slides present three options for distributed, fault-tolerant data storage:
IPFS
Filecoin
Arweave
A hands-on tutorial on installing an IPFS node and creating smart contracts that use IPFS for data storage. As an example of IPFS usage in smart contracts, we create an ERC-721 NFT that references a file in IPFS.
Tools and technologies used in this tutorial:
GCP https://console.cloud.google.com/home
ApiDapp https://apidapp.com/
Etherscan https://kovan.etherscan.io/
Solidity https://solidity.readthedocs.io/en/v0.6.1/
Open Zeppelin https://openzeppelin.com/contracts/
SPDY is an experimental protocol developed by Google that aims to reduce web page load latency and improve security. It achieves this through compression, multiplexing requests over a single connection, and prioritizing content. SPDY modifies how HTTP requests and responses are transmitted but does not replace HTTP. The IETF is considering SPDY as a starting point for the development of HTTP 2.0.
The Web is broken. HTTP is inefficient and expensive, especially for large files. Webpages are being deleted constantly, with the average lifespan of a web page being 100 days. The Web's centralization limits opportunity and innovation, and it causes problems in the developing world during natural disasters or with faulty connections. We can do better. In this talk, I'll explain IPFS, a project intended to replace HTTP and build a better web. IPFS is a peer-to-peer hypermedia protocol to make the web faster, safer, and more open. In addition, IPFS will use Filecoin as a reward mechanism. Filecoin aims to provide a decentralized network for digital storage through which users can effectively rent out their spare capacity, receiving filecoins as payment. Filecoin raised $200M last month, breaking all records in blockchain ICOs to date.
High Availability Content Caching with NGINX (Kevin Jones)
This document discusses caching content with NGINX to improve performance and reduce load on origin servers. It provides an overview of NGINX caching functionality and how to configure basic caching using directives like proxy_cache_path, proxy_cache_key, proxy_cache, and proxy_cache_valid. It also covers more advanced caching techniques like micro-caching, which caches dynamic content for short periods, and configuring NGINX for high availability.
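The directives named above fit together roughly as follows; this is a hypothetical micro-caching sketch (the origin address and cache path are assumptions):

```nginx
# Hypothetical micro-caching sketch: cache dynamic 200 responses for one second.
proxy_cache_path /var/cache/nginx/micro keys_zone=micro:10m;

server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8080;          # assumed origin
        proxy_cache micro;
        proxy_cache_key $scheme$host$request_uri;
        proxy_cache_valid 200 1s;                  # the micro-caching window
        proxy_cache_use_stale updating;            # serve stale while revalidating
    }
}
```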
The document discusses high performance networking in Chrome. It describes how Chrome's architecture has evolved from a single process model to a multi-process model with isolated processes and memory for each tab. This makes the browser more resilient and prevents crashes in one tab from affecting others. It also notes that browser performance involves efficiently fetching resources, laying out pages, and executing JavaScript.
IPFS is a distribution protocol that enables the creation of completely distributed applications through content addressing. A very ambitious open source project in Go, IPFS adopts a peer-to-peer hypermedia protocol to protect against a single point of failure. This presentation aims to highlight the design and ideas of IPFS and also touches upon a real world use case.
The Constrained Application Protocol (CoAP), Part 2 (Hamdamboy, 함담보이)
This document discusses the Constrained Application Protocol (CoAP), which is a web transfer protocol for resource-constrained devices. It describes CoAP methods like GET, POST, PUT, DELETE and their usage. It also explains CoAP message format, options, caching model, and how CoAP can be used for machine-to-machine applications and can be proxied to HTTP.
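The message format mentioned above starts with a fixed four-byte header (2-bit version, 2-bit type, 4-bit token length, 8-bit code, 16-bit message ID, per RFC 7252), which can be packed with a few shifts; a small Go sketch:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// packCoAPHeader builds the fixed 4-byte CoAP header from RFC 7252:
// 2-bit version, 2-bit type, 4-bit token length, 8-bit code, 16-bit message ID.
func packCoAPHeader(version, msgType, tokenLen, code uint8, messageID uint16) [4]byte {
	var h [4]byte
	h[0] = version<<6 | msgType<<4 | tokenLen&0x0F
	h[1] = code
	binary.BigEndian.PutUint16(h[2:], messageID)
	return h
}

func main() {
	// Confirmable (type 0) GET (code 0.01 = 1) with no token and message ID 0x1234.
	h := packCoAPHeader(1, 0, 0, 1, 0x1234)
	fmt.Printf("% x\n", h) // 40 01 12 34
}
```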
The document provides an overview of the InterPlanetary File System (IPFS) and its key components. IPFS aims to create a distributed file system that addresses issues with the existing internet such as bandwidth, latency, offline support, and data security. It utilizes various technologies including distributed hash tables (DHTs), BitTorrent exchanges, and a Merkle directed acyclic graph (DAG) to store and retrieve versioned files in a decentralized manner. The document discusses IPFS concepts like content identifiers (CIDs), IPNS for mutable links, pinning for long-term data retention, and UnixFS for file representation. It also outlines several potential use cases for IPFS and challenges around automatic data replication.
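Content addressing, the core idea behind CIDs, can be illustrated in a few lines of Go. This sketch uses a bare SHA-256 hex digest; real CIDs wrap the hash in multihash and multibase encoding:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// contentAddress derives an identifier from the data itself, so identical
// content always maps to the same address and any change produces a new one.
func contentAddress(data []byte) string {
	sum := sha256.Sum256(data)
	return hex.EncodeToString(sum[:])
}

func main() {
	a := contentAddress([]byte("hello ipfs"))
	b := contentAddress([]byte("hello ipfs"))
	c := contentAddress([]byte("hello ipfs!"))
	fmt.Println(a == b) // identical content, identical address
	fmt.Println(a == c) // any change yields a different address
}
```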
NATS is a high performance messaging server and also one of the latest additions to the CNCF. In this talk, we will make a deep dive to the internals of the project covering its design, protocol, clustering implementation, security and authorization features that make it an attractive solution for microservices and low latency applications.
gRPC in Golang presentation
In this talk, I introduced gRPC, Protocol Buffers, and how to use them with Go.
Source code used in the presentation: http://github.com/AlmogBaku/grpc-in-go
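For readers unfamiliar with Protocol Buffers, a gRPC service is defined in a .proto file and compiled into Go stubs; a hypothetical minimal definition (the names here are illustrative, not from the talk's repository):

```protobuf
// Hypothetical service definition; names are placeholders.
syntax = "proto3";

package demo;

message GreetRequest { string name = 1; }
message GreetReply   { string message = 1; }

service Greeter {
  rpc Greet (GreetRequest) returns (GreetReply);
}
```

Running `protoc` with the Go and gRPC plugins over such a file generates the client and server stubs used in the presentation's style of code.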
This document discusses using GlusterFS for Hadoop. GlusterFS is an open source distributed file system that aggregates storage and provides a unified global namespace. It can be used with Hadoop as the underlying storage system instead of HDFS. Using GlusterFS offers advantages like no need for a metadata server and ability to use the same storage for both MapReduce jobs and application data. It also supports features like geo-replication and erasure coding that are useful for big data workloads.
What’s New in NGINX Ingress Controller for Kubernetes Release 1.5.0 (NGINX, Inc.)
On-Demand Recording:
https://www.nginx.com/resources/webinars/whats-new-nginx-ingress-controller-kubernetes-version-150/
Kubernetes is the leading orchestration platform for deploying, scaling, and managing containerized applications. Infrastructure operators constantly impose new application delivery requirements as they adopt Kubernetes for production workloads. The NGINX Ingress controller is the most popular ingress load balancer for Kubernetes, providing a complete and supported solution for delivering your containerized applications to clients.
Attend this webinar to learn about the latest developments in NGINX Ingress Controller for Kubernetes Release 1.5.0.
When people hear the word NGINX, they usually associate the open source platform with its popular role as an HTTP web server or load balancer. What many people don't know is the vast set of powerful features the platform contains for building an HTTP caching layer, and why NGINX is often used as a framework to build powerful, scalable, and highly available content delivery networks. In this talk we will dive into each unique NGINX directive and the configuration options that are available. We will show different architectural approaches that can be used to build a highly available HTTP content cache layer, along with various other NGINX configurations that can be critical to your deployment. Walking away from this presentation, attendees will have the knowledge required to configure basic and advanced caching on their NGINX servers.
There is a renaissance underway in the messaging space. Due to the demands of IoT networks, cloud native apps, and microservices, developers are looking for simple, fast messaging systems. This is a sharp contrast to how traditional messaging was done.
This webinar will cover:
- The basics of messaging patterns
- What makes NATS unique
- Using a demo inspired by Pokemon Go as an example
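The publish-subscribe pattern at the heart of NATS can be sketched in-process with Go channels. This illustrates only the pattern, not the NATS wire protocol or client API (the subject name is a playful assumption inspired by the demo):

```go
package main

import (
	"fmt"
	"sync"
)

// Bus is a minimal in-process publish-subscribe sketch: subscribers register
// interest in a subject, publishers fire-and-forget to all current subscribers.
type Bus struct {
	mu   sync.Mutex
	subs map[string][]chan string
}

func NewBus() *Bus { return &Bus{subs: make(map[string][]chan string)} }

func (b *Bus) Subscribe(subject string) <-chan string {
	b.mu.Lock()
	defer b.mu.Unlock()
	ch := make(chan string, 1)
	b.subs[subject] = append(b.subs[subject], ch)
	return ch
}

func (b *Bus) Publish(subject, msg string) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for _, ch := range b.subs[subject] {
		ch <- msg
	}
}

func main() {
	bus := NewBus()
	sub := bus.Subscribe("pokemon.sighting")
	bus.Publish("pokemon.sighting", "pikachu at pier 39")
	fmt.Println(<-sub)
}
```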
NGINX Plus is often deployed in a cluster, and the new features in R16 help our customers working in a clustered environment. New features include global rate limiting, a cluster-aware key-value store, Random with Two Choices load-balancing algorithm, and more.
Join this webinar to learn:
- About the new cluster-aware features in NGINX Plus R16: global rate limiting, key-value store, and Random with Two Choices load balancing
- How to use key-value stores in use cases such as DDoS mitigation and dynamic bandwidth limiting
- About enhanced UDP load balancing, AWS PrivateLink support, and additional new features
- How the NGINX Plus R16 features behave in action, in a live demo
https://www.nginx.com/resources/webinars/whats-new-nginx-plus-r16-emea/
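As a hedged sketch of how the cluster-aware features described above are wired together (NGINX Plus only; hostnames, ports, and zone names are placeholders):

```nginx
# Sketch of R16 cluster features (NGINX Plus only); names are placeholders.

stream {
    # State sharing between cluster members via the zone_sync module.
    server {
        listen 9000;
        zone_sync;
        zone_sync_server nginx-node1.example.com:9000;
        zone_sync_server nginx-node2.example.com:9000;
    }
}

http {
    # "sync" makes the rate-limit and key-value state cluster-aware.
    limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s sync;
    keyval_zone zone=denylist:1m sync;
}
```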
NGINX Plus is often deployed in a cluster, and the new features in R16 help our customers working in a clustered environment. New features include global rate limiting, a cluster-aware key-value store, Random with Two Choices load-balancing algorithm, and more.
Join this webinar to learn:
- About the new cluster-aware features in NGINX Plus R16: global rate limiting, key-value store, and Random with Two Choices load balancing
- How to use key-value stores in use cases such as DDoS mitigation and dynamic bandwidth limiting
- About enhanced UDP load balancing, AWS PrivateLink support, and additional new features
- How the NGINX Plus R16 features behave in action, in a live demo
https://www.nginx.com/resources/webinars/whats-new-nginx-plus-r16/
This document discusses speeding up the ZingMe-NTVV2 application by writing a PHP extension module. It introduces NTVV2, which has high traffic volumes. Writing a PHP extension can make complicated business functions run faster and use less memory than pure PHP. The document explains what a PHP extension is, its lifecycle, and how to set up the build environment. It recommends using SWIG, an interface compiler, to more easily connect C/C++ programs to PHP. SWIG allows defining types, wrapping classes/functions, and exposing functions to PHP. The document provides the steps for using SWIG, including defining the module, generating code, creating a project, and compiling, and closes with a note on caching data in the PHP module.
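A SWIG interface file is the center of the workflow described above; a hypothetical minimal example (the header and function names are made up for illustration):

```swig
/* example.i - hypothetical SWIG interface exposing a C function to PHP */
%module ntvv2_ext

%{
#include "business.h"   /* assumed C header with the fast implementation */
%}

/* Declarations listed here are wrapped and exposed to PHP. */
int compute_score(int user_id);
```

Running SWIG with its PHP target (e.g. `swig -php7 example.i`) generates the wrapper sources, which are then compiled into the loadable extension.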
Automatically partitioning packet processing applications for pipelined archi... (Ashley Carter)
This document describes a technique for automatically partitioning sequential packet processing applications into coordinated parallel subtasks that can be efficiently mapped to pipelined network processor architectures. The technique balances work among pipeline stages and minimizes data transmission between stages. It was implemented in an auto-partitioning C compiler for Intel network processors. Experimental results showed over 4x speedups for IPv4 and IP forwarding benchmarks on a 9-stage pipeline compared to non-partitioned code.
About the webinar
The use of an API gateway and the move to microservices are two of the most important trends in application development. But are they similar, or different; complementary, or contradictory? In this webinar, we discuss the advantages of an API gateway, the advantages of microservices development, and how and when they can work together.
The NGINX Microservices Reference Architecture (MRA) uses three different network architectures, with service mesh as a fourth. We describe how an API gateway relates to each of these network architectures and how to reduce rework if your application needs to evolve from one architecture to another.
Speakers:
Charles Pretzer, Technical Architect, NGINX, Inc.
Floyd Smith, Director of Content Marketing, NGINX, Inc.
Cloud native IPC for Microservices Workshop @ Containerdays 2022 (QAware GmbH)
This document provides an agenda and overview for a workshop on migrating from REST to gRPC. The agenda covers exercises on using Protocol Buffers with Quarkus and JAX-RS, building a gRPC API with Quarkus, implementing a gRPC REST gateway, and using a gRPC web client with Envoy. Additional sections provide background on tools like Protocol Buffers, gRPC, and the gRPC ecosystem. The document is intended to guide participants through hands-on exercises demonstrating techniques for migrating a REST API to a gRPC API.
Introducing the Microservices Reference Architecture Version 1.2 (NGINX, Inc.)
About the webinar
Application development using microservices is changing very quickly, even as many organizations are gearing up to produce their first full-fledged microservices apps or expand microservices development. Among these changes are the emergence of Kubernetes as the most widely used approach to container management and the arrival of service mesh architectures. The Istio service mesh architecture has reached version 1.0.
There is also an increasing recognition of the need for security in service-to-service communications. In the upcoming Version 1.2 of the Microservices Reference Architecture, NGINX will offer an update to its robust and flexible array of models for microservices development, giving developers much more choice and the opportunity to “right-size” the microservices model they choose to the task at hand, while preserving the opportunity for future growth.
Implementing data and databases on K8s within the Dutch government (DoKC)
A small walkthrough of projects within the Dutch government running Data(bases) on OpenShift. This talk shares success stories, provides a proven recipe to `get it done`, and debunks some of the FUD.
About Sebastiaan:
I have always been a weird DBA, trying to combine Databases with out-of-the-box thinking and a DevOps mindset. Around 2016 I fell in love with both Postgres and Kubernetes, and I then committed my life to enabling Dutch organisations with running their Database workloads CloudNative.
Over the last few years I worked as a private contractor for two large government agencies doing exactly that, and I want to share my own and others' success stories, hoping to enable and inspire Data on Kubernetes adoption.
Delivering High Performance Websites with NGINX (NGINX, Inc.)
NGINX Plus is an easy-to-install, proven software solution to deliver your sites and applications through state-of-the-art intelligent load balancing and high performance acceleration. Improve your servers’ performance, scalability, and reliability with application delivery from NGINX Plus.
NGINX Plus significantly increases application performance during periods of high load with its caching, HTTP connection processing, and efficient offloading of traffic from slow networks. NGINX Plus offers enterprise application load balancing, sophisticated health checks, and more, to balance workloads and avoid user-visible errors.
Check out this webinar to:
* Learn why web performance matters more than ever, in the face of growing application complexity and traffic volumes
* Get the lowdown on the performance challenges of HTTP, and why the real world is so different to a development environment
* Understand why NGINX and NGINX Plus are such popular solutions for mitigating these problems and restoring peak performance
* Look at some real-world deployment examples of accelerating traffic in complex scenarios
OSDC 2017 - Casey Callendrello -The evolution of the Container Network InterfaceNETWAYS
The Container Network Interface (CNI) is a simple specification for connecting containers to an arbitrary network. It promises interoperability between diverse networking technologies and container orchestration engines. Since its release two years ago, the CNI standard has grown in adoption. It is now a cross-industry effort, with contributors from CoreOS, RedHat, Google, Microsoft, and WeaveWorks, for example. CNI is used by the Kubernetes, CloudFoundry, and Mesos container orchestration engines. After a brief overview of the project, this talk will cover recent and coming developments in the CNI. As a specification, the CNI must balance the desire for new features with that of stability. I’ll cover the implications of that need for balance, design considerations, changes in the CNI spec, and the new use cases made possible.
OSDC 2017 | The evolution of the Container Network Interface by Casey Callend...NETWAYS
The Container Network Interface (CNI) is a simple specification for connecting containers to an arbitrary network. It promises interoperability between diverse networking technologies and container orchestration engines. Since its release two years ago, the CNI standard has grown in adoption. It is now a cross-industry effort, with contributors from CoreOS, RedHat, Google, Microsoft, and WeaveWorks, for example. CNI is used by the Kubernetes, CloudFoundry, and Mesos container orchestration engines. After a brief overview of the project, this talk will cover recent and coming developments in the CNI. As a specification, the CNI must balance the desire for new features with that of stability. I’ll cover the implications of that need for balance, design considerations, changes in the CNI spec, and the new use cases made possible.
In this session we will talk about the history of NGINX and NGINX Plus and the role it has played in the development of the internet.
We will discuss some of the most recent changes and additions to the popular software project and touch on some planned feature enhancements coming in the next few months.
The document discusses developing content caching of IP traffic for 3G, 4G and next generation networks. It describes how caching popular content at different points in the network can significantly reduce traffic load and transmission costs. Caching is proposed at the packet core, radio access network, and aggregation points. Intelligently caching video, images, and social media content can enhance quality of experience for users and energy efficiency for network operators.
Architecting Analytic Pipelines on GCP - Chicago Cloud Conference 2020Mariano Gonzalez
Modernizing analytics data pipelines to gain the most of your data while optimizing costs can be challenging. However, today cloud providers offer a good set of services that can help with this endeavor. We will do a tour across some GCP services during this hands-on session, using DataFlow (apache beam) as the backbone to architect a modern analytics pipeline to wire them all together.
Reduce IT Spend with Software Load BalancingNGINX, Inc.
Learn how you can replace your hardware load balancers with NGINX Plus, a complete software application delivery platform for the modern web. Moving to NGINX Plus not only saves you money, but provides the flexibility, performance, and scalability that only software can provide.
Netronome's Nick Tausanovitch, VP of Solutions Architecture and Silicon Product Management, Linley Data Center Conference in Santa Clara, CA on February 9, 2016.
NGINX powers over half of the world’s busiest sites and applications. Attend this NGINX Basics webinar to hear answers to questions about NGINX and NGINX Plus. https://www.nginx.com/resources/webinars/nginx-basics-ask-anything-emea/
Watch this webinar to learn:
- The answers to your questions on NGINX
- How others use NGINX and NGINX Plus
- Common application delivery design patterns
- Key insights from the presenter's more than 20 years of industry experience
Apache and Nginx are the two most popular open source web servers. While they share many qualities, they have key differences that make each better suited for certain situations. Apache excels at running PHP applications without external software. It also works well in shared hosting environments. However, Nginx is more efficient at serving static content and scaling to handle high concurrency loads. Many choose to run Nginx as a reverse proxy in front of Apache to take advantage of both servers' strengths.
Node.js and the MEAN Stack Building Full-Stack Web Applications.pdflubnayasminsebl
Welcome To
Node.js and the MEAN Stack: Building Full-Stack Web Applications
Nowadays, picking the best web app development technology is difficult. Because there are so many programming languages, frameworks, and technologies available right now, it can be challenging for business owners and entrepreneurs to choose the best development tool. Maintaining project efficiency has now become crucial in the era of web app development. Your firm will incur more expenses as you delay doing the assignment. Node.js is a ground-breaking technology with distinctive characteristics for web development. It is regarded by developers as one of the most successful cross-platform JavaScript environments for building reliable and powerful REST APIs, mobile applications, and online applications.
Describe Node.js
Node.js is a standalone runtime environment, not just a library or framework. It is dependent on Chrome's V8, a JavaScript engine capable of running application code independently of the operating system or type of browser. Node.js is regarded as a standalone application on any machine because of its independence.
Frameworks for web applications
Any Node.js web application will require the web application framework as one of its most crucial requirements. Although the HTTP module allows you to construct your own, it is strongly advised that you build on the shoulders of others who came before you and utilize their work. If you haven't already decided which is your favorite, there are several to choose from. Express has a higher developer share than all other frameworks combined, according to a report by Eran Hammer. Second place went to Hammer's own Hapi.js, while many other frameworks followed with smaller market shares. In this situation, Express is not only the most widely used but also provides you with the best possibility of being able to pick up most new codebases rapidly.
Security
Although web security has always been important, recent breaches and problems have made it absolutely essential. Learn about the OWASP Top 10, a list of the most significant internet security issues that is periodically updated. You can use this list to find potential security gaps in your application and conduct an audit there. Find out how to give your web application secure authentication. Popular middleware called Passport is used to authenticate users using many types of schemes. Learn effective Node.js encryption techniques. The hashing method known as Bcrypt is also the name of a popular npm package for encryption. Despite the probability that your code is secure, there is always a chance that one of your dependencies.
The front end
Although writing Node.js code for the back end of a website makes up a big portion of the job description for a Node.js Web Developer, you will probably also need to work on the front end occasionally to design the user interface. The occasional mo
Skype uses PostgreSQL databases that are split both vertically and horizontally to handle their large load. They connect databases using stored procedures and PL/Proxy for remote calls. Key components include pgBouncer for connection pooling, PgQ for queueing between databases, and SkyTools which contains many reusable database scripts and tools. While complex, this architecture has allowed Skype to scale their database infrastructure significantly while remaining manageable.
Netflix Open Source Meetup Season 4 Episode 2aspyker
In this episode, we will take a close look at 2 different approaches to high-throughput/low-latency data stores, developed by Netflix.
The first, EVCache, is a battle-tested distributed memcached-backed data store, optimized for the cloud. You will also hear about the road ahead for EVCache as it evolves into an L1/L2 cache over RAM and SSDs.
The second, Dynomite, is a framework to make any non-distributed data-store, distributed. Netflix's first implementation of Dynomite is based on Redis.
Come learn about the products' features and hear from Thomson Reuters, Diego Pacheco from Ilegra, and other third-party speakers, internal and external to Netflix, on how these products fit in their stack and roadmap.
The Future of Web Application ArchitecturesLucas Carlson
This document discusses emerging trends in web application architectures and how Docker and microservices are shaping the future. It begins with an introduction to the author and then covers:
1) How architectures are shifting from monolithic backends to distributed, share-nothing systems.
2) The benefits of microservices architectures that use REST APIs and other techniques like asset hosting, session management, and asynchronous processing.
3) How Docker and containers allow applications to be packaged and distributed more easily while maintaining consistency across environments.
The document concludes that automation, containers, and microservices are dominating future web application architectures.
1. NginX, HAProxy and DNS Stack
Presentation at WordCamp Belgrade 2015.
April 19th
Authors:
Ivan Dabic, General Manager @MaxCDN - NginX
Jovan Katic, Support Engineer @MaxCDN - HAProxy
Karlo Butigan Markovic, NOC Engineer @MaxCDN - DNS
○ we don’t want to have “wild” requests cached even though they will never be
requested again
○ we assume that whatever is requested two times is a valid request, as it’s probably
going to be requested a 3rd, 4th,... time.
● proxy_cache_valid defines the status codes we treat as valid and how long we want to
cache them in the nginx cache. In this case, status code 200 will be cached for 10
seconds (NOT a good practice but, for the sake of showing the load balancing
method below, we wanted a short caching time). You’ll usually set this to at least one
week or more.
What we may want to deal with separately is the cache key. To show the purpose of it I am
setting the cache key to the following:
proxy_cache_key $request_uri$http_accept_encoding;
This will, basically, define caching parameters that distinguish the cached asset by:
1. Requested asset (uri)
2. Accept-Encoding request header
What proved to be the perfect setup is:
proxy_cache_key $scheme$request_uri$http_accept_encoding$param$args;
Above setup defines:
1. $scheme: Local nginx variable that holds the value of protocol used to access/request
cached asset (http, https,...)
2. $request_uri: Same as in default example, it’s the nginx variable holding the value
of requested asset uri
3. $http_accept_encoding: variable holding the value of the request header
“Accept-Encoding”
4. $param: Custom variable we can use to alter the cache key in certain scenarios; use
it with caution! Changing the cache key may affect cache clearing!
5. $args: Query strings in request
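Putting the directives from this section together, a minimal reverse-proxy cache sketch could look like the following ($param is omitted here; the upstream name, cache path, and zone sizes are illustrative assumptions, not part of the original setup):

```nginx
# Hypothetical sketch: cache zone plus the cache key discussed above.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m max_size=1g;

server {
    listen 80;
    location / {
        proxy_pass http://backend;          # "backend" upstream assumed to exist
        proxy_cache my_cache;
        proxy_cache_key $scheme$request_uri$http_accept_encoding$args;
        proxy_cache_valid 200 10s;          # short TTL, as in the example above
        add_header X-Cache-Status $upstream_cache_status;  # expose HIT/MISS for debugging
    }
}
```

The X-Cache-Status header is a common debugging convenience: it lets you verify from curl whether a given request was served from cache.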
So, let’s show an example of how the cache_key affects caching. We have defined the cache_key
“distinguisher” using the “$http_accept_encoding” variable. This means that any request with a
different Accept-Encoding request header value for the same file will result in a different cache
entry:
~$ curl -I http://vps2.net/index.html
HTTP/1.1 200 OK
Server: nginx/1.4.6 (Ubuntu)
Date: Sun, 26 Apr 2015 22:56:13 GMT
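To make the effect concrete, here is a small illustrative shell sketch (plain string concatenation, not the nginx internals) showing how a key built from $request_uri and $http_accept_encoding differs per Accept-Encoding value, so each value gets its own cache entry:

```shell
# Illustrative only: mimic the key $request_uri$http_accept_encoding
# for a few different Accept-Encoding request header values.
request_uri="/index.html"
for accept_encoding in "gzip" "gzip, deflate" ""; do
  printf 'cache key: %s%s\n' "$request_uri" "$accept_encoding"
done
# prints three distinct keys, e.g. "cache key: /index.htmlgzip"
```

Three requests for the same file therefore occupy three cache slots, which is exactly why the cache key must be chosen with care.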
9.
~$ service haproxy restart
* Restarting haproxy haproxy [ OK ]
~$ service haproxy reload
* Reloading haproxy haproxy [ OK ]
~$ service haproxy status
haproxy is running.
~$ service haproxy stop
* Stopping haproxy haproxy [ OK ]
~$ service haproxy status
haproxy not running.
To be honest, you won't be able to do anything with the init script before you configure the
load balancer itself. So let's check what we get “out of the box”:
~$cat /etc/haproxy/haproxy.cfg
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
maxconn 2000
user haproxy
group haproxy
daemon
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
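The defaults above only set global behaviour; no traffic is routed yet. A minimal sketch of the frontend and backend sections you might append is shown below (the listener port, backend name, server names, and IP addresses are hypothetical, not from the original deck):

```haproxy
frontend http-in
    bind *:80
    default_backend web_servers

backend web_servers
    balance roundrobin
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check
```

With this in place, `service haproxy restart` gives you a round-robin load balancer across the two servers, and `check` enables the health checks that take a dead backend out of rotation.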
14. BIND
BIND is the oldest host-to-IP translator. It uses root name servers, TLD name servers, and
authoritative name servers to translate domains into IP addresses. For a full description and
more on DNS, please look below the setup, in the DNS part of this WordCamp
presentation.
To install BIND on an Ubuntu server use the following command:
apt-get install bind9
(suggestion: apt-get install dnsutils)
To install BIND on CentOS use the following command:
yum install bind
Configuring BIND to be an authoritative DNS server:
Open the /etc/bind/named.conf.options file with the text editor that you are most comfortable
with (vi, nano, etc.) and input:
options {
directory "/var/cache/bind";
recursion no;
allow-transfer {none;};
dnssec-validation auto;
auth-nxdomain yes; # conform to RFC1035
listen-on-v6 { any; };
};
This tells BIND where the directory for caching is. It also tells it not to be in recursion
mode, which is important for security reasons. allow-transfer can be set to none or to a
slave or master/slave server IP address (or multiple addresses). The dnssec-validation option tells
the server whether domains should be signed and validated using DNSSEC. auth-nxdomain tells
the server whether to answer authoritatively (whether the AA bit is set). listen-on-v6 sets the
IPv6 addresses on which the server should listen.
Save the file and then open /etc/bind/named.conf.local. In our case we have set the
zone name to maxcdn.com, set the type as master (as in master DNS server), the location of
the zone file itself, and allow-transfer, which applies per zone even when allow-transfer is set
to none in the options file.
zone "maxcdn.com" in {
type master;
file "/etc/bind/zones/maxcdn.com";
allow-transfer {none;};
};
15.
Save that file and create a directory called zones in /etc/bind using mkdir /etc/bind/zones, then
go to that directory using cd /etc/bind/zones and create a new file called
maxcdn.com using your favorite text editor like so:
vi maxcdn.com
and input the following:
$TTL 86400 ; 24 hours could have been written as 24h or 1D
maxcdn.com. IN SOA @ root (
2002022401 ; serial
3H ; refresh
15 ; retry
1w ; expire
3h ; minimum
)
IN NS localhost.
IN A 178.62.160.79
www IN A 178.62.160.79
The above file tells BIND that the time to live for this zone is 24 hours and that this is the Start of
Authority record. Incrementing the serial number tells a slave server with the same zone to
update the zone record. IN NS tells which is/are the default name server(s) of the zone. IN A
gives the translation of the domain/host to an IP.
Once everything is configured, restart the BIND service so that it can accept all of the new
settings.
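Before (or right after) restarting, the configuration and zone file can be sanity-checked with the checkers that ship with BIND; the transcript below is a sketch of the typical success case for the files created above:

```
~$ named-checkconf /etc/bind/named.conf
~$ named-checkzone maxcdn.com /etc/bind/zones/maxcdn.com
zone maxcdn.com/IN: loaded serial 2002022401
OK
```

named-checkconf prints nothing when the configuration is syntactically valid, and named-checkzone reports the loaded serial, which is a quick way to confirm a serial bump was picked up.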
When testing the zone from the local BIND server using the dig command, you would get an
answer like so:
dig @localhost maxcdn.com
; <<>> DiG 9.9.5-3ubuntu0.2-Ubuntu <<>> @localhost maxcdn.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44418
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 3
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
18. DNS - bind
1) DNS brief background
Paul Mockapetris designed the Domain Name System in 1983 at the University of Southern California's Information Sciences Institute
Jon Postel was the person who actually asked Paul to write the first implementation for DNS
The Stanford Research Institute was the one who held the largest HOSTS.TXT file at that
time and that file was taken
UC Berkeley students Douglas Terry, Mark Painter, David Riggle and Songnian Zhou are
the first people to write the code for Unix DNS implementation and they called it BIND
(Berkeley Internet Name Domain) (1984)
Kevin Dunlap of DEC substantially revised the DNS implementation in 1985
Mike Karels, Phil Almquist, and Paul Vixie have maintained BIND since then.
In 1987, RFC 882 and RFC 883 were superseded by RFC 1034, RFC 1035 and a few more. (For
more details look in the links section)
In the days before DNS either you remembered all of the IPs that you needed to visit, you had
your own hosts file written or you downloaded the hosts.txt file from Stanford Research
Institute.
(Basically you had to ask the person for their IP address so that you can visit their website)
Host file locations that can still be used and are used in some cases:
/etc/hosts - Unix-based systems
%WinDir%\HOSTS - Win 3.1
%WinDir%\hosts - Win 95, 98, ME
%SystemRoot%\System32\drivers\etc\hosts - all versions above, and including, Win NT.
Short version:
People around the world who had access to the Internet were using a hosts file to store all of
the host -> IP translations
You would have to call the person and ask them for their IP address to get to their website
The US government's advanced research projects agency (ARPA) decided to invest in the DNS project
1983 first implementation was written
BIND came to life in 1984
2) What is BIND/PDNS (PowerDNS) and differences between the two
BIND/PDNS is Domain Name System software that communicates with the root servers to
get a translation of a hostname (to an IP address), or acts as an authoritative master/slave
server depending on the configuration.
The difference between BIND and PowerDNS:
BIND is the first and the most widely used Domain Name System (DNS) software on the
Internet
Uses flat files (only)
20. You can see all of the current root servers and news regarding them on
http://www.root-servers.org/
Ex.:
Generic: .com, .net ...
Country-code: .rs (RNIDS)
Sponsored: .mil, .gov, .xxx (must be eligible to get it)
Infrastructure: .arpa (used for instance in reverse lookup of IPv4 and IPv6)
"The Internet Assigned Numbers Authority (IANA) is responsible for the global coordination of
the DNS Root, IP addressing, and other Internet protocol resources." https://www.iana.org/
5) What are authoritative servers master/slave
An authoritative server can be master or slave, and it holds the authority over a domain name.
When registering for a domain name, the person registering the domain is asked to insert at least
two name servers
Usually named ns1 and ns2, ns1 being the master and ns2 the slave
Those two servers usually have either a similar IP address with a different third octet or
completely different IPs
In a redundant network the two servers would be at separate locations/ISPs, or just on
separate ISPs
"A second name server splits the load with the first server or handles the whole load if the
first
server is down." O'Reilly, DNS and BIND (Fourth Edition)
6) Brief bind config file explanation for authoritative servers
Master
options {
directory "/var/cache/bind";
# do NOT want your authoritative server to be recursive as well because of
# security and performance reasons
recursion no;
allow-transfer { none; }; # or put the IP of the slave server or slave/master
dnssec-validation auto;
auth-nxdomain no; # conform to RFC1035
listen-on-v6 { any; };
};
zone "site.edu" in {
type master;
file "/path/to/file/movie.edu";
# IP of the slave or slave/master that are allowed to receive the specific zone file
allow-transfer { xxx.xxx.xxx.xxx; };
};
22. mail will go to mail.example.org. If both values are equal (e.g. MX 10 and MX 10) then it will
load balance between the two, in a way that SMTP hosts would then round robin between the
two hosts.
Round Robin: http://en.wikipedia.org/wiki/Round-robin_DNS
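The MX behaviour described above can be sketched as a zone-file fragment (the second mail host and the addresses are made up for illustration):

```
example.org.        IN MX 10 mail.example.org.
example.org.        IN MX 10 mail2.example.org.  ; equal preference -> round robin
mail.example.org.   IN A  192.0.2.10
mail2.example.org.  IN A  192.0.2.11
```

With unequal preferences (say MX 10 and MX 20), sending hosts try the lower value first and only fall back to the higher one if it is unreachable.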
8) DNS uses TCP and UDP port 53
DNS uses TCP and UDP port 53 for queries. TCP port 53 is used for the transfer of zones over
the external network and is usually blocked for protection purposes, which will change in the
future. As Scott Hogg, CTO for Global Technology Resources, Inc. (GTRI), nicely said:
"The reality is that DNS queries can also use TCP port 53 if UDP port 53 is not accepted." "the
practice of denying TCP port 53 to and from DNS servers is starting to cause some problems.
There are two good reasons that we would want to allow both TCP and UDP port 53
connections to our DNS servers. One is DNSSEC and the second is IPv6."
9) The path of DNS resolution of a host name
23. 10) When a page is asked for from a site that uses CDN: What a browser gets and from
where.
Explanation: When your browser requests example.com it has to get the IP address from your
ISP's DNS server (this process is explained in “The path of DNS resolution of a host name”).
After the browser gets the IP address, it opens a connection to it and gets the page. That page
consists of:
1. Dynamic content: Which is loaded from the Origin Server
2. Third Party content: Which is loaded from the Third Party Server (which can be Google ads,
images from Pinterest, Facebook images, YouTube videos, etc.)
3. Static content: Which is loaded from the MaxCDN edge/flex boxes nearest to the client,
using Anycast or GeoDNS.
Explain anycast:
Anycast is a networking technique where the same IP prefix is advertised from multiple
locations. It then uses one of two methods to determine where to route. The first method
is determining the routing protocol costs and also the status of the server (response time,
number of requests, etc.). The other method is the upstream provider, partially, manually
setting the shortest path to the IP. As soon as a BGP announcement drops in one part of the
network, traffic will be rerouted to the other nearest advertised location with the same ASN.
24.
Explain GeoDNS:
GeoDNS is basically routing to unicast IPs, usually with the same service and the same
content, depending on which part of the world the request came from. GeoDNS uses the DNS
server and a plugin with a list of GeoIPs, which can be found for free or received monthly
from a commercial service. What the plugin does is basically create ACLs (access control
lists) and connect those ACLs to BIND views. BIND views have the ability to serve a different
zone file for the same domain/zone, thus controlling which server is hit (reached) when
requesting a resource.
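The ACL-plus-views mechanism just described might be sketched in named.conf like this (the network range, view names, and file paths are hypothetical):

```
acl "europe" { 192.0.2.0/24; };

view "eu" {
    match-clients { europe; };
    zone "example.com" {
        type master;
        file "/etc/bind/zones/example.com.eu";
    };
};

view "other" {
    match-clients { any; };
    zone "example.com" {
        type master;
        file "/etc/bind/zones/example.com.default";
    };
};
```

Clients matching the "europe" ACL get the zone file with European server IPs; everyone else falls through to the default view. Note that once views are used, every zone must live inside a view.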
Example links:
History of DNS:
http://cyber.law.harvard.edu/icann/pressingissues2000/briefingbook/dnshistory.html
http://en.wikipedia.org/wiki/Domain_Name_System#History
http://www.cybertelecom.org/dns/history.htm
http://tools.ietf.org/html/rfc882
http://tools.ietf.org/html/rfc883
http://tools.ietf.org/html/rfc1034
http://tools.ietf.org/html/rfc1035
ARPANET
http://en.wikipedia.org/wiki/ARPANET
List of root servers and their IPs
https://www.iana.org/domains/root/servers
http://www.internic.net/domain/named.root
Updated list of TLDs
https://data.iana.org/TLD/tlds-alpha-by-domain.txt
https://www.iana.org/domains/root/db
PDNS or BIND:
http://www.quora.com/Domain-Name-System-%28DNS%29/Which-is-better-Bind-or-PowerDNS
Config file example for recursive DNS
https://www.digitalocean.com/community/tutorials/how-to-configure-bind-as-a-caching-or-forwarding-dns-server-on-ubuntu-14-04
Root servers and news
http://www.root-servers.org/
https://www.iana.org/ Internet Assigned Numbers Authority
Configuring authoritative only DNS server
https://www.digitalocean.com/community/tutorials/how-to-configure-bind-as-an-authoritative-only-dns-server-on-ubuntu-14-04
Example zone
https://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-bind-zone.html