Redis provides several tools to achieve atomicity of operations. Single commands are atomic by default. Pipelining ensures commands are executed in order but is not fully atomic. Transactions using MULTI and EXEC execute the queued commands as one isolated block, but one command's result cannot be used by the next. Lua scripting allows complex multi-step operations to run atomically, with command results passed between steps.
(Stephane Maarek, DataCumulus) Kafka Summit SF 2018
Security in Kafka is a cornerstone of true enterprise production-ready deployment: It enables companies to control access to the cluster and limit risks in data corruption and unwanted operations. Understanding how to use security in Kafka and exploiting its capabilities can be complex, especially as the documentation that is available is aimed at people with substantial existing knowledge on the matter.
This talk will be delivered in a “hero journey” fashion, tracing the experience of an engineer with basic understanding of Kafka who is tasked with securing a Kafka cluster. Along the way, I will illustrate the benefits and implications of various mechanisms and provide some real-world tips on how users can simplify security management.
Attendees of this talk will learn about aspects of security in Kafka, including:
-Encryption: What is SSL, what problems it solves and how Kafka leverages it. We’ll discuss encryption in flight vs. encryption at rest.
-Authentication: Without authentication, anyone would be able to write to any topic in a Kafka cluster, do anything and remain anonymous. We’ll explore the available authentication mechanisms and their suitability for different types of deployment, including mutual SSL authentication, SASL/GSSAPI, SASL/SCRAM and SASL/PLAIN.
-Authorization: How ACLs work in Kafka, ZooKeeper security (risks and mitigations) and how to manage ACLs at scale
Introduction to Rust: a low-level language with high-level abstractions — yann_s
The document discusses the Rust programming language. It notes that Rust is a low-level language that provides high-level abstractions. It allows for high performance due to being compiled without a garbage collector or virtual machine, but also provides high-level features like types, type inference, pattern matching and functional programming. Rust aims to combine the performance of low-level languages with the safety and abstraction of high-level languages.
1. gRPC is an open source RPC framework developed at Google in 2015 that provides high performance communication between services. It uses protocol buffers for serialization and HTTP/2 for transport.
2. gRPC supports .NET Core since version 3.0 and can replace WCF in the .NET ecosystem. It uses protocol buffers definitions to define service contracts and message types in a language-neutral way.
3. gRPC is well suited for microservices, real-time communication, multi-language environments, and network-constrained scenarios due to its high performance, interoperability and streaming capabilities. It has become a popular RPC framework for building distributed systems.
After ten years, C++ advanced under the name "modern C++" with the release of C++11/14, adding a large number of new features. And in 2017, C++ is preparing another step forward as C++17. To keep up with modern C++, which now evolves on a fast three-year cycle, this talk surveys the major features slated for C++17.
This talk is an updated version of a previous presentation, with some additional examples and the latest information.
My slides for understanding Pentesting for GraphQL Applications. I presented this content at c0c0n and bSides Delhi 2018. Also contains details of my Burp Extension for GraphQL parsing and scanning located here https://github.com/br3akp0int/GQLParser
Windows IOCP vs Linux EPOLL Performance Comparison — Seungmo Koo
1. The document compares the performance of IOCP and EPOLL for network I/O handling on Windows and Linux servers.
2. Testing showed that throughput was similar between IOCP and EPOLL, but IOCP had lower overall CPU usage without RSS/multi-queue enabled.
3. With RSS/multi-queue enabled on the NIC, CPU usage was nearly identical between IOCP and EPOLL.
The document compares REST and gRPC approaches to building APIs. It notes that while REST uses JSON over HTTP, gRPC uses protocol buffers over HTTP/2, allowing for benefits like binary encoding, multiplexing, and simpler parsing. An example shows a character request is 205 characters in JSON but only 44 bytes in protocol buffers. gRPC is also said to allow lower latency, better CPU utilization, and server push capabilities compared to REST for real-time use cases.
The document discusses PostgreSQL's roadmap for supporting JSON data. It describes how PostgreSQL introduced JSONB in 2014 to allow binary storage and indexing of JSON data, providing better performance than the text-based JSON type. The document outlines how PostgreSQL has implemented features from the SQL/JSON standard over time, including JSON path support. It proposes a new Generic JSON API (GSON) that would provide a unified way to work with JSON and JSONB data types, removing duplicated code and simplifying the addition of new features like partial decompression or different storage formats like BSON. GSON would help PostgreSQL work towards a single unified JSON data type as specified in SQL standards.
Redis allows running Lua scripts via its embedded Lua engine. Lua scripts have full access to Redis data and commands. Scripts run atomically and block the server during execution. Redis caches compiled scripts to avoid recompilation. Scripts should be parameterized to avoid cache explosions. Lua provides powerful data types like tables and control structures that can be used to build complex logic in scripts.
Developing RESTful Web APIs with Python, Flask and MongoDB — Nicola Iarocci
Presented at EuroPython 2012. The abstract: "In the last year we have been working on a full featured, Python powered, RESTful Web API. We learned quite a few things on REST best patterns, and we got a chance to put Python’s renowned web capabilities under review, even releasing a couple Open Source projects in the process. In my talk I will share what we learned. We will consider ‘pure’ REST API design and its many hurdles. We will look at what Python has to offer in this field and finally, we will dig further down by looking at some of the code we developed. Some of the technologies/stacks I’ll cover are (in no particular order): Flask, PyMongo, MongoDB, REST, JSON, XML, Heroku. Did you know? Like it or not, there is going to be a REST API in your future."
Reinventing the Transaction Script (NDC London 2020) — Scott Wlaschin
The Transaction Script pattern organizes business logic as a single procedure. It has always been considered less sophisticated and flexible than a layered architecture with a rich domain model. But is that really true?
In this talk, we'll reinvent the Transaction Script using functional programming principles. We'll see that we can still do domain-driven design, and still have code which is decoupled and reusable, all while preserving the simplicity and productivity of the original one-script-per-workflow approach.
Learn how to load balance your applications following best practices with NGINX and NGINX Plus.
Join this webinar to learn:
- How to configure basic HTTP load balancing features
- The essential elements of load balancing: session persistence, health checks, and SSL termination
- How to load balance MySQL, DNS, and other common TCP/UDP applications
- How to have NGINX Plus automatically discover new service instances in an auto-scaling or microservices environment
The document discusses using RxSwift to add numbers from multiple text fields and display the result. It shows how to:
1. Combine the latest values from the text fields using Observable.combineLatest.
2. Call analytics methods on text field value changes and result calculation.
3. Map the combined values to the result string and bind it to the result label.
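The combineLatest pattern the slides rely on is not RxSwift-specific: whenever either source emits, the operator combines the most recent value from each source, once both have emitted at least once. Here is a minimal, hypothetical Java sketch of those semantics (the `Source` class and its `emit`/`subscribe` methods are stand-ins invented for this example, not an RxSwift or RxJava API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;
import java.util.function.Consumer;

// A tiny stand-in for an observable stream of values.
class Source<T> {
    private final List<Consumer<T>> subscribers = new ArrayList<>();
    void subscribe(Consumer<T> s) { subscribers.add(s); }
    void emit(T value) { subscribers.forEach(s -> s.accept(value)); }
}

public class CombineLatest {
    // Emits combiner(a, b) whenever either source emits, once both have emitted at least once.
    @SuppressWarnings("unchecked")
    static <A, B, R> Source<R> combineLatest(Source<A> sa, Source<B> sb, BiFunction<A, B, R> combiner) {
        Source<R> out = new Source<>();
        Object[] latest = new Object[2];
        boolean[] seen = new boolean[2];
        sa.subscribe(a -> { latest[0] = a; seen[0] = true; if (seen[1]) out.emit(combiner.apply(a, (B) latest[1])); });
        sb.subscribe(b -> { latest[1] = b; seen[1] = true; if (seen[0]) out.emit(combiner.apply((A) latest[0], b)); });
        return out;
    }

    public static void main(String[] args) {
        Source<Integer> left = new Source<>();
        Source<Integer> right = new Source<>();
        List<String> results = new ArrayList<>();
        combineLatest(left, right, (a, b) -> a + b).subscribe(sum -> results.add("sum: " + sum));
        left.emit(1);  // no output yet: right has not emitted
        right.emit(2); // -> "sum: 3"
        left.emit(10); // -> "sum: 12"
        System.out.println(results); // [sum: 3, sum: 12]
    }
}
```

Binding the combined value to a label, as in the slides, is then just another subscriber on the combined source.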
Data-oriented design (DOD) focuses on how data is accessed and transformed, rather than how code is organized. This improves performance by minimizing cache misses and allowing better utilization of parallelism. The document provides an example comparing an object-oriented design (OOD) approach that stores related data together in objects, resulting in scattered memory access and many cache misses, versus a DOD approach that groups together data that is accessed together, resulting in fewer cache misses and faster performance.
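The layout difference described above can be made concrete. A minimal sketch (in Java, the only language with code elsewhere on this page; the `Particle` class and field names are invented for illustration) contrasting an array-of-structures traversal with a structure-of-arrays one:

```java
// AoS vs SoA: the same computation over two different memory layouts.
class Particle { double x, y, z, mass; }

public class DodDemo {
    static final int N = 100_000;

    // Object-oriented layout: a Particle[] is an array of references, so summing
    // only `x` chases a pointer per particle and drags unused fields through the cache.
    static double sumXAos(Particle[] ps) {
        double s = 0;
        for (Particle p : ps) s += p.x;
        return s;
    }

    // Data-oriented layout: all x values are contiguous, so the traversal
    // touches only the bytes it needs (cache-friendly, easy to vectorize).
    static double sumXSoa(double[] xs) {
        double s = 0;
        for (double x : xs) s += x;
        return s;
    }

    public static void main(String[] args) {
        Particle[] aos = new Particle[N];
        double[] soaX = new double[N];
        for (int i = 0; i < N; i++) {
            Particle p = new Particle();
            p.x = i; p.y = -i; p.z = 0.5 * i; p.mass = 1.0;
            aos[i] = p;
            soaX[i] = i;
        }
        // Same values added in the same order: identical result, different layout.
        System.out.println(sumXAos(aos) == sumXSoa(soaX)); // true
    }
}
```

Any measured speedup depends on hardware and JIT behavior; the point of the sketch is only the layout contrast the document describes.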
Comparison of features between ShEx (Shape Expressions) and SHACL (Shapes Constraint Language)
Changelog:
11/06/17
- Removed slides about compositionality
31/May/2017
- Added slide 30 about validation report
- Added slide 32 about stems
- Changed slides 7 and 8 adapting compact syntax to new operator .
23/05/2017:
Slide 14: Repaired typos in sh:entailment, rdfs:range
21/05/2017:
- Slide 8. Changed the example to be an IRI and a datatype
- Added "typically" in slide 9
- Slide 10: Removed the phrase: "Target declarations can be problematic when reusing/importing shapes"
and created slide 27 to talk about reusability
- Added slide 11 to talk about the differences in triggering validation
- Created slide 14 to talk about inference
- Renamed slide 15 as "Inference and triggering mechanism"
- Added slides 27 and 28 to talk about reusability
- Added slide 29 to talk about annotations
18/05/2017
- Slide 9 now includes an example using the ShEx RDF vocabulary
- Slide 10 now says that target declarations are optional
- Slide 13 now says that some RDF Schema terms have special treatment in SHACL
- Example in slide 18 now uses sh:or instead of sh:and
- Added slides 22, 23 and 24 which show some features supported by SHACL but not supported by ShEx (property pair constraints, uniqueLang and owl:imports)
Let's pause and use this talk to take a step back and survey the state of the tech industry around building persistence (CRUD) APIs.
Where do we come from, and where are we going? Why the choice between RPC, SOAP, REST and GraphQL may be only a surface-level question hiding a much deeper problem…
Youtube: https://www.youtube.com/watch?v=IskE3m3VjRY
Training material for backend developers who attended a hackathon.
I tried to make it easy, but it still seems to have been quite difficult.
I was probably too ambitious. Next time I'll make it even simpler!
- Audience: people aiming to become backend developers (job seekers, career changers), up to five years of experience
- Main content: what happens when doing backend development (the work of a development team)
- Quotation for non-commercial purposes is allowed (attribution required)
Spring Data provides a unified model for data access and management across different data access technologies such as relational, non-relational and cloud data stores. It includes utilities such as repository support, object mapping and templating to simplify data access layers. Spring Data MongoDB provides specific support for MongoDB including configuration, mapping, querying and integration with Spring MVC. It simplifies MongoDB access through MongoTemplate and provides a repository abstraction layer.
The document introduces new features of JBoss EAP 7 and JBoss Data Grid 7. JBoss EAP 7 includes support for Java EE 7, Java 8, improved clustering and web server Undertow. Undertow can be used as a reverse proxy and load balancer with mod_cluster. JBoss Data Grid 7 provides distributed caching and integrates with Apache Spark, allowing cached data to be accessed from Spark jobs and Spark data to be cached.
Jenkins X Hands-on - automated CI/CD solution for cloud native applications o... — Ted Won
This document provides an overview of hands-on training for using Jenkins X (JX), an automated CI/CD solution for building and deploying modern cloud applications on Kubernetes. It outlines prerequisites, and steps to install JX, create a Kubernetes cluster on GKE, and build a sample Spring Boot application with CI/CD pipelines and GitOps promotion between environments. It also discusses using Minikube for local development and provides additional references on JX and related tools.
Jenkins X - automated CI/CD solution for cloud native applications on Kubernetes — Ted Won
Let's look at CI/CD best practices that help developers on Kubernetes, the cloud platform that is becoming an industry standard, as we move into the coming era of cloud native application development.
Hawkular is an open source monitoring project that is the successor to JBoss ON (RHQ). It provides REST services for collecting and storing metrics and for alerting. Hawkular started in 2014 and provides solutions for monitoring containers, applications, middleware, and IoT devices. It includes projects for services and alerts, metrics storage, and formerly application performance monitoring (which is now handled by Jaeger). Hawkular integrates with ManageIQ and is used to provide middleware management within CloudForms.
This document discusses Complex Event Processing (CEP) using Esper. It defines CEP as detecting patterns among events. Esper is an open source CEP engine that provides an SQL-like Event Processing Language (EPL) to define queries over event streams. The document outlines Esper's architecture, features like filtering, windows, aggregation, and joins. It provides examples of EPL queries for topics detection, continuous queries, and pattern matching.
This document discusses the Infinispan Spark connector, which provides integration between JBoss Data Grid 7 (JDG 7) and Apache Spark. It introduces JDG 7 and Apache Spark and their features. The Infinispan Spark connector allows users to create Spark RDDs and DStreams from JDG cache data, write RDDs and DStreams to JDG caches, and perform real-time stream processing with JDG as the data source for Spark. The connector supports various configurations and provides seamless functional programming with Spark. A demo of examples is referenced.
JBoss Community's Application Monitoring Platform — Ted Won
This document introduces two open source projects, RHQ and Byteman, that can help software engineers broaden the scope of their development activities. RHQ is a platform for monitoring JBoss applications, while Byteman allows testing and debugging of applications. The presentation aims to share stories about these tools in order to help developers expand their work.
2. "default task-1" #97 prio=5 os_prio=0 tid=0x000000000401cfe0 nid=0x44d5 runnable [0x00007fd1f6dd7000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0x00000000e2fa5718> (a sun.nio.ch.Util$3)
- locked <0x00000000e2fa5708> (a java.util.Collections$UnmodifiableSet)
- locked <0x00000000e2fa55f0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
at org.xnio.nio.SelectorUtils.await(SelectorUtils.java:51)
at org.xnio.nio.NioSocketConduit.awaitReadable(NioSocketConduit.java:358)
Issue
● Intermittent long-running request threads
○ over 5 seconds, normally under 1 second
○ causing an accumulation of running threads!!
● Performance issue!!!
● Why does this occur?
● Needs investigation
3. "default task-1" #97 prio=5 os_prio=0 tid=0x000000000401cfe0 nid=0x44d5 runnable [0x00007fd1f6dd7000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0x00000000e2fa5718> (a sun.nio.ch.Util$3)
- locked <0x00000000e2fa5708> (a java.util.Collections$UnmodifiableSet)
- locked <0x00000000e2fa55f0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
at org.xnio.nio.SelectorUtils.await(SelectorUtils.java:51)
at org.xnio.nio.NioSocketConduit.awaitReadable(NioSocketConduit.java:358)
at org.xnio.conduits.AbstractSourceConduit.awaitReadable(AbstractSourceConduit.java:66)
at org.xnio.conduits.AbstractSourceConduit.awaitReadable(AbstractSourceConduit.java:66)
at io.undertow.conduits.ReadDataStreamSourceConduit.awaitReadable(ReadDataStreamSourceConduit.java:101)
at io.undertow.conduits.FixedLengthStreamSourceConduit.awaitReadable(FixedLengthStreamSourceConduit.java:285)
at org.xnio.conduits.ConduitStreamSourceChannel.awaitReadable(ConduitStreamSourceChannel.java:151)
at io.undertow.channels.DetachableStreamSourceChannel.awaitReadable(DetachableStreamSourceChannel.java:77)
at io.undertow.server.HttpServerExchange$ReadDispatchChannel.awaitReadable(HttpServerExchange.java:2161)
at org.xnio.channels.Channels.readBlocking(Channels.java:295)
at io.undertow.servlet.spec.ServletInputStreamImpl.readIntoBuffer(ServletInputStreamImpl.java:184)
at io.undertow.servlet.spec.ServletInputStreamImpl.read(ServletInputStreamImpl.java:160)
at com.fasterxml.jackson.core.json.ByteSourceJsonBootstrapper.ensureLoaded(ByteSourceJsonBootstrapper.java:522)
...
at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:272)
at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81)
at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:104)
at io.undertow.server.Connectors.executeRootHandler(Connectors.java:326)
at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:812)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
4. ● Web Browser => Apache => JBoss EAP
● Where does the delay come from?
○ EAP?
○ Apache?
● Access logs
● Capturing tcpdumps
Investigation
5. ● Web Browser => Apache => JBoss EAP
● Per the tcpdump analysis,
● there is a time gap between the request header and body packets,
● originating on the client side.
Investigation - conclusion
6. Undertow version
● EAP 7.1 uses Undertow 1.4.18.Final
○ https://access.redhat.com/articles/112673#EAP_7
○ https://github.com/undertow-io/undertow/tree/1.4.18.Final
8. RequestBufferingHandler overview
● One of the HTTP handlers (filters) provided by Undertow
● Function:
○ Dispatches a request to a worker thread only once all of the request's packet data has arrived.
● Effect:
○ With this filter added, when request packets arrive late (e.g. due to intermittent network delays), worker threads avoid being blocked in a data-read wait state (epollWait).
● Impact:
○ Extra objects for the GC: for each delayed request, a RequestBufferingHandler event listener object (40 bytes) is created and becomes eligible for GC once the request completes.
● Source code:
○ https://github.com/undertow-io/undertow/blob/1.4.18.Final/core/src/main/java/io/undertow/server/handlers/RequestBufferingHandler.java
10. ● Undertow's thread execution is organized into IO threads and worker threads
RequestBufferingHandler execution logic overview
11. ● An IO thread, as the name suggests, handles only I/O
● A request's business logic runs on a worker thread
● IO threads always operate in a non-blocking fashion
○ When an I/O request arrives, the IO thread immediately handles only the I/O part, hands the rest of the processing to a worker thread, and immediately moves on to handle or wait for the next I/O
● A worker thread, by contrast, blocks while fully processing a single request
● Undertow carries all request/response data in an HttpServerExchange object
RequestBufferingHandler execution logic overview
12. ● When a request header arrives, Undertow creates an HttpServerExchange object and calls RequestBufferingHandler.handleRequest().
● handleRequest() attempts to read all of the request's data.
● If it must wait for data (e.g. delayed packet delivery), it creates a ChannelListener object and registers it on the Connection object as an event listener.
● handleRequest() then returns.
● When the remaining packet data arrives, the Connection object notifies the registered listener: the ChannelListener's handleEvent() method is called back and reading resumes.
● Once all data has been read, it is placed into the buffered-data array (bufferedData[]), attached to the HttpServerExchange object, and the call is passed to the next handler.
RequestBufferingHandler execution logic overview
14. ● When a new request arrives, the handler first checks whether all of the request's data has already been read, and at the same time whether the header contains "Expect: 100-continue".
○ if(!exchange.isRequestComplete() &&
!HttpContinue.requiresContinueResponse(exchange.getRequestHeaders()))
● If all of the request's data has already been read, the call is passed straight to the next handler without running RequestBufferingHandler's buffering logic (=> the next filter runs, or the request is dispatched to a worker thread).
● If the request's data has not yet all been read and no "Expect: 100-continue" header is present, the buffering logic below runs.
RequestBufferingHandler execution logic in detail
15. RequestBufferingHandler execution logic in detail
● A loop reads data from the Connection channel and fills buffer (b).
● Once all data is read, buffer (b) is placed into bufferedData[], attached to the exchange object, the call is passed to the next handler, and the loop ends.
● If, while reading, data has not yet arrived (e.g. due to network delay) and a wait would be required, the handler does not wait (non-blocking): it immediately creates a ChannelListener object and registers it on the Connection object as an event listener. The request can then wait without occupying a thread (non-blocking).
● When the remaining packet data arrives, the IO thread notifies the registered event listener through the Connection object; handleEvent() is called back and reading resumes.
● The same "read from the Connection channel into buffer (b)" loop then runs again: once all data is read, buffer (b) is placed into bufferedData[], attached to the exchange object, the call is passed to the next handler, and the loop ends.
16. public void handleRequest(final HttpServerExchange exchange) throws Exception {
if (!exchange.isRequestComplete() && !HttpContinue.requiresContinueResponse(exchange.getRequestHeaders())) {
do {
// Loop: read data from the channel into buffer (b)
ByteBuffer b = buffer.getBuffer();
int r = channel.read(b);
if (r == -1) {
// All data read: attach it to the exchange object and return
} else if (r == 0) {
// No data available: wait for more
channel.getReadSetter().set(new ChannelListener<StreamSourceChannel>() {
public void handleEvent(StreamSourceChannel channel) {
do {
// Loop: read data from the channel into buffer (b)
if (r == -1) {
// All data read: attach it to the exchange object and return
} else if (r == 0) {
// No data available: wait for more
} else if (!b.hasRemaining()) {
// Buffer (b) is full => the data is larger than the buffer
// If buffers > 1, e.g. expression="buffer-request(buffers=2)",
// back buffer (b) up into bufferedData[], allocate a fresh buffer (b),
// and keep running the "read data from the channel into buffer (b)" loop for the rest of the data
}
// Buffer (b) still has free space
} while (true);
}
});
} else if (!b.hasRemaining()) {
// Buffer (b) is full => the data is larger than the buffer
}
// Buffer (b) still has free space
} while (true);
}
next.handleRequest(exchange);
}
17. ● Data is read from the Connection channel in units of the buffer (b) size.
○ The buffer (b) size is set by the IO subsystem's buffer-size value.
○ <buffer-pool name="default" buffer-size="<integer>"/>
○ Default: 16 KB
● Up to buffer-size * (buffers array size) bytes of request data can be buffered.
○ buffers is the array object that holds the buffers (b)
○ <expression-filter name="buf" expression="buffer-request(buffers=<integer>)"/>
● If buffer-size is set larger than the default and buffers is set greater than 1, a
"java.lang.OutOfMemoryError: Direct buffer memory" error can occur.
● Even when RequestBufferingHandler's buffering capacity is insufficient,
FixedLengthStreamSourceConduit.read() reads all of the data, so the request is still processed normally.
● Recommendation: use the defaults, buffer-size = 16 KB and buffers = 1.
Additional details
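A minimal sketch of the capacity rule above, assuming the documented 16 KB default buffer-size; the buffers=4 value is a hypothetical override for illustration:

```java
// Sketch: the maximum request data RequestBufferingHandler can buffer is
// buffer-size * buffers (bytes).
public class BufferCapacity {
    static long maxBufferedBytes(int bufferSize, int buffers) {
        return (long) bufferSize * buffers;
    }

    public static void main(String[] args) {
        System.out.println(maxBufferedBytes(16 * 1024, 1)); // defaults -> 16384 bytes
        System.out.println(maxBufferedBytes(16 * 1024, 4)); // hypothetical buffers=4 -> 65536 bytes
    }
}
```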
18. Additional details
[1] If buffer-size is set larger than the default and buffers is set greater than 1, a "java.lang.OutOfMemoryError: Direct buffer memory"
error can occur.
2018-02-17 11:25:30,259 ERROR [org.xnio.listener] (default I/O-1) org.xnio.ChannelListeners:94 - XNIO001007: A channel event listener threw an
exception: java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:693)
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
at org.xnio.BufferAllocator$2.allocate(BufferAllocator.java:57)
at org.xnio.BufferAllocator$2.allocate(BufferAllocator.java:55)
at org.xnio.ByteBufferSlicePool.allocate(ByteBufferSlicePool.java:147)
at io.undertow.server.XnioByteBufferPool.allocate(XnioByteBufferPool.java:53)
at io.undertow.server.handlers.RequestBufferingHandler$1.handleEvent(RequestBufferingHandler.java:177)
at io.undertow.server.handlers.RequestBufferingHandler$1.handleEvent(RequestBufferingHandler.java:97)
at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92)
at io.undertow.channels.DetachableStreamSourceChannel$SetterDelegatingListener.handleEvent(DetachableStreamSourceChannel.java:231)
at io.undertow.channels.DetachableStreamSourceChannel$SetterDelegatingListener.handleEvent(DetachableStreamSourceChannel.java:218)
at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92)
at org.xnio.conduits.ReadReadyHandler$ChannelListenerHandler.readReady(ReadReadyHandler.java:66)
at org.xnio.nio.NioSocketConduit.handleReady(NioSocketConduit.java:89)
at org.xnio.nio.WorkerThread.run(WorkerThread.java:591)
19. [2] If, while reading, a condition arises where data has not yet arrived (e.g. due to network delay) and must be waited for,
a ChannelListener object is created and registered on the Connection object as an event listener.
https://github.com/undertow-io/undertow/blob/1.4.18.Final/core/src/main/java/io/undertow/server/handlers/RequestBufferingHandler.java#L124-L125
RETURN *** Called io.undertow.server.handlers.RequestBufferingHandler.handleRequest() in thread default I/O-1
io.undertow.server.handlers.RequestBufferingHandler.handleRequest(RequestBufferingHandler.java:125)
io.undertow.predicate.PredicatesHandler.handleRequest(PredicatesHandler.java:93)
io.undertow.server.handlers.SetHeaderHandler.handleRequest(SetHeaderHandler.java:90)
io.undertow.server.handlers.accesslog.AccessLogHandler.handleRequest(AccessLogHandler.java:138)
org.wildfly.extension.undertow.Host$HostRootHandler.handleRequest(Host.java:345)
io.undertow.server.handlers.NameVirtualHostHandler.handleRequest(NameVirtualHostHandler.java:54)
io.undertow.server.handlers.error.SimpleErrorPageHandler.handleRequest(SimpleErrorPageHandler.java:78)
io.undertow.server.handlers.CanonicalPathHandler.handleRequest(CanonicalPathHandler.java:49)
org.wildfly.extension.undertow.Server$DefaultHostHandler.handleRequest(Server.java:189)
io.undertow.server.handlers.ChannelUpgradeHandler.handleRequest(ChannelUpgradeHandler.java:211)
io.undertow.server.protocol.http2.Http2UpgradeHandler.handleRequest(Http2UpgradeHandler.java:129)
io.undertow.server.handlers.DisallowedMethodsHandler.handleRequest(DisallowedMethodsHandler.java:61)
io.undertow.server.Connectors.executeRootHandler(Connectors.java:326)
io.undertow.server.protocol.http.HttpReadListener.handleEventWithNoRunningRequest(HttpReadListener.java:254)
io.undertow.server.protocol.http.HttpReadListener.handleEvent(HttpReadListener.java:136)
io.undertow.server.protocol.http.HttpReadListener.handleEvent(HttpReadListener.java:59)
org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92)
org.xnio.conduits.ReadReadyHandler$ChannelListenerHandler.readReady(ReadReadyHandler.java:66)
org.xnio.nio.NioSocketConduit.handleReady(NioSocketConduit.java:89)
org.xnio.nio.WorkerThread.run(WorkerThread.java:591)
20. [3] When the (remaining packet) data arrives, the IO Thread notifies the event listener registered on the Connection object of the
data event; the handleEvent() method is called back and reads the data.
https://github.com/undertow-io/undertow/blob/1.4.18.Final/core/src/main/java/io/undertow/server/handlers/RequestBufferingHandler.java#L77-L99
RETURN *** Called io.undertow.server.handlers.RequestBufferingHandler$1.handleEvent() in thread default I/O-1
io.undertow.server.handlers.RequestBufferingHandler$1.handleEvent(RequestBufferingHandler.java:99)
io.undertow.server.handlers.RequestBufferingHandler$1.handleEvent(RequestBufferingHandler.java:77)
org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92)
io.undertow.channels.DetachableStreamSourceChannel$SetterDelegatingListener.handleEvent(DetachableStreamSourceChannel.java:231)
io.undertow.channels.DetachableStreamSourceChannel$SetterDelegatingListener.handleEvent(DetachableStreamSourceChannel.java:218)
org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92)
org.xnio.conduits.ReadReadyHandler$ChannelListenerHandler.readReady(ReadReadyHandler.java:66)
org.xnio.nio.NioSocketConduit.handleReady(NioSocketConduit.java:89)
org.xnio.nio.WorkerThread.run(WorkerThread.java:591)