This document provides an overview of HornetQ, an open source messaging system. It describes key features of HornetQ including its core architecture, modes of operation in both standalone and JBoss EAP environments, transport options, persistence, flow control, clustering, high availability, and support for large messages. Performance benchmarks are cited showing HornetQ can process over 8 million messages per second, significantly outperforming other messaging systems.
The document discusses implementing enterprise integration patterns through Apache Camel. It provides an overview of enterprise integration patterns, describes what Apache Camel is and how it is based on these patterns, and gives examples of implementing the Message Filter pattern in XML, Java, Scala and Spring configurations. It also discusses using beans with Camel for message translation and binding beans to endpoints.
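As a sketch of the XML variant (the endpoint URIs and XPath expression here are illustrative, not taken from the deck), a Message Filter route in Camel's Spring XML DSL looks like this:

```xml
<!-- Message Filter: only widget orders are forwarded; everything else is dropped -->
<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="jms:queue:orders"/>
    <filter>
      <xpath>/order[@type='widget']</xpath>
      <to uri="jms:queue:widgetOrders"/>
    </filter>
  </route>
</camelContext>
```

The Java, Scala, and bean-based variants the document mentions express the same filter predicate in their respective DSLs.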
Apache Camel is a message routing engine that allows for integration between different components by exchanging messages. It provides a simple way to define routing and mediation rules in a declarative manner. For example, a sample Camel route could read files from a directory and send those files to a JMS queue. Camel is highly flexible and can run in many environments including standalone, OSGi, Blueprint, Spring, and application servers. It has been used by customers for use cases such as building a Netty HTTP gateway with over 100 dynamic routes and load balancing incoming HTTP requests over JMS.
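The file-to-JMS route described above can be sketched in Camel's Spring XML DSL; the directory and queue names are illustrative:

```xml
<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route>
    <!-- Poll a directory for files and hand each one to a JMS queue -->
    <from uri="file:data/inbox"/>
    <to uri="jms:queue:incomingOrders"/>
  </route>
</camelContext>
```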
GOTO 2013: Why Zalando trusts in PostgreSQL - Henning Jacobs
NoSQL is on the rise, but sadly when people compare the usual NoSQL candidates (Redis, MongoDB, Riak, Cassandra, HBase, ...) to relational databases, they often only mention MySQL. In our presentation we tried to explain the power of the world's most advanced open-source database: PostgreSQL. In our session we showed various examples of why we at Zalando trust PostgreSQL to reliably handle all our data. We use it in scenarios ranging from simple CRUD applications on a single node to highly critical, more complex setups involving customer and order data with strong constraints on performance and availability, sharded across multiple nodes. We believe that PostgreSQL is massively underrated and that you should have very good reasons to ignore its great features.
This document provides a tutorial on using ParenScript, a tool for embedding Lisp code into JavaScript and HTML documents. It demonstrates several examples of using ParenScript, including embedding a simple onclick handler, generating a JavaScript file from ParenScript code, and rewriting an existing slideshow script in ParenScript. The slideshow example shows how to integrate data from Common Lisp into the generated JavaScript code to customize the behavior. Overall, the tutorial provides a good introduction to basic ParenScript usage and concepts through examples.
The document discusses using plProxy and pgBouncer to split a PostgreSQL database horizontally and vertically to improve scalability. It describes how plProxy allows functions to make remote calls to other databases and how pgBouncer can be used for connection pooling. It also summarizes plProxy's RUN ON clause, which lets a query execute on all partitions or on a specific one.
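As a minimal sketch (the cluster name, function, and hash choice are made up for illustration), a plProxy function using RUN ON to route a call to one partition looks like:

```sql
CREATE FUNCTION get_user(i_username text)
RETURNS SETOF users AS $$
    CLUSTER 'usercluster';
    -- Run on the single partition selected by hashing the key;
    -- RUN ON ALL would instead execute the query on every partition.
    RUN ON hashtext(i_username);
$$ LANGUAGE plproxy;
```

Callers invoke `get_user('alice')` against the proxy database, and plProxy forwards the call to the matching shard.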
The document provides an overview of the nginx web server, describing its core features and modules for handling HTTP requests in an event-driven and non-blocking manner. It also outlines the process for creating custom modules, walking through the steps to create a simple "hello world" module that sets a request handler. The document encourages debugging and testing any new modules that are created.
Scalable Architecture Design
Slides from the talk "Distributed Architecture Implementation Techniques Using Open Source", presented at DEVIEW 2013.
It introduces a few of the many implementation techniques needed for scalable architecture design.
If you have questions about this material, please get in touch by email.
The Parenscript Common Lisp to JavaScript compiler - Vladimir Sedach
The document discusses the Parenscript Common Lisp to JavaScript compiler. It describes what Parenscript is and is not, how it works, and its history and uses. Key points include that Parenscript compiles Common Lisp to readable JavaScript code without runtime dependencies or new types, and is used in web frameworks, libraries, and commercial projects.
The document discusses developing with vert.x. It provides steps for creating a URL shortener: developing modules with a static web server and MongoDB, testing modules individually, creating an API server using the EventBus, deploying modules with scripts, and testing easily with auto-deploy. It also briefly explains key vert.x features like asynchronous programming, modularity, and polyglot programming.
The document provides an overview and progress report on Apache Tomcat NEXT. It discusses new features required by specifications like Java EE 8 and Servlet 4.0. Key changes include full support for HTTP/2, TLS improvements like SNI and multiple certificates, and removal of outdated features. Internal changes improved connectors and refactored WebSocket handling. The rationale for Apache Tomcat 8.5 was to provide new features sooner than waiting for Java EE 8's delayed release. HTTP/2, OpenSSL encryption, and TLS virtual hosting are highlighted.
Thread dumps provide snapshots of a Java application's threads and their states. When a slowdown occurs, get multiple thread dumps over time to analyze thread activity and identify potential issues like:
1) Lock contention between threads waiting to enter synchronized methods or blocks.
2) Deadlocks from circular wait conditions that can hang applications.
3) Threads waiting for I/O responses from databases or networks.
4) High CPU usage by specific threads as shown through monitoring tools.
Analyzing thread dumps helps locate performance bottlenecks and fix synchronization, resource contention, or inefficient code issues degrading application speed.
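The same information a textual thread dump contains can also be pulled programmatically through the JDK's management API. This sketch (plain JDK, no external tools; the class name is ours) prints every live thread with its state and runs the built-in deadlock check:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumpSketch {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();

        // Equivalent of a thread dump: every live thread, with lock information
        for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
            System.out.printf("\"%s\" id=%d state=%s%n",
                    info.getThreadName(), info.getThreadId(), info.getThreadState());
        }

        // Deadlock detection: returns null when no threads are deadlocked
        long[] deadlocked = mx.findDeadlockedThreads();
        System.out.println("deadlocked=" + (deadlocked == null ? 0 : deadlocked.length));
    }
}
```

Capturing this output repeatedly during a slowdown, as the document suggests, lets you compare thread states over time instead of eyeballing a single snapshot.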
The document provides instructions for installing Apache Tomcat 8 application server on CentOS. It describes downloading and installing Java 8, downloading and extracting the Tomcat archive, configuring environment variables and ports, starting Tomcat, creating user accounts, deploying WAR files, and customizing the Java virtual machine settings. It also discusses using Nginx as a reverse proxy to route port 80 traffic to Tomcat running on port 8080.
Tomcat New Evolution discusses the new features introduced in Tomcat 6 and 7. Some key highlights include:
- Tomcat 6 introduced features like memory leak prevention, CSRF protection, session fixation protection, NIO connector, Comet support, logging improvements, web services support, and clustering.
- Tomcat 7 features included externalizing static resources, WebSocket support, easier embedded usage, and asynchronous logging.
- Both versions aimed to improve performance, security, and scalability through these new capabilities. Tomcat continues evolving to support newer standards and address common issues.
Apache Commons Pool and DBCP - Version 2 Update - Phil Steitz
This document provides an overview of updates to the Commons Pool and DBCP projects in versions 2.0 and 2.2. It discusses new features in the object pooling framework like improved performance, metrics collection, and flexible configuration options. It also outlines changes in the DBCP connection pooling implementation, including better monitoring, validation, and security integration.
This document provides details on the PHP configuration including the version of PHP and extensions installed, the Apache and PHP API versions, and key configuration settings. PHP 5.3.8 is installed and configured to run as an Apache module with various extensions enabled like BC Math, BZip2, COM support, and calendar functions. The configuration reveals PHP is running on Windows 7 with the Apache server and various PHP settings at their default values.
Node.js is a server-side JavaScript platform for building scalable network applications. It allows writing code using JavaScript for non-browser environments like servers. Node.js uses an event-driven, asynchronous I/O model that makes it lightweight and efficient. A simple web server can be written in just a few lines of Node.js code. Node.js has a thriving ecosystem of external modules that help build full-stack JavaScript applications.
Toster - Understanding the Rails Web Model and Scalability Options - Fabio Akita
On my first visit to Russia, I presented on the Reactor pattern, EventMachine, WebSocket, and the Pusher service as options for when Rails alone is not enough.
This document provides an overview of Apache Tomcat, a free and open-source web server and servlet container developed by the Apache Software Foundation (ASF) that implements the Java Servlet and JavaServer Pages (JSP) technologies. It discusses what Tomcat is, its role as a web application container, how to install and configure it, enable features like CGI and SSI, and addresses some common issues. The advantages of using Tomcat include that it is open source, lightweight, easily configured, stable, well documented, and free.
/* pOrt80BKK */ - PHP Day - PHP Performance with APC + Memcached for Windows - Ford AntiTrust
This document discusses using APC and Memcached to improve PHP performance on Windows. It provides an overview of how APC works as an opcode cache to improve performance by caching compiled PHP scripts. Memcached is described as an in-memory database for additional caching of data like sessions. The document outlines how to install and configure both APC and Memcached on Windows and provides benchmark results showing significant performance improvements from using the caches.
Tomcat is an open-source Java Servlet container developed by the Apache Software Foundation that implements the Java Servlet and JavaServer Pages specifications from Sun Microsystems. It is written in Java, so it is platform independent. Tomcat requires setting the JAVA_HOME and CATALINA_HOME environment variables and extracting the source files to a directory before starting the server on port 8080 and accessing the welcome page. The server.xml file can be configured to serve files from a custom webapps directory.
APC (Alternative PHP Cache) is a PHP extension that caches and optimizes the opcodes generated by the Zend Engine. It stores precompiled PHP scripts in shared memory, improving performance by avoiding the need to parse and compile scripts on each request. APC offers opcode caching, content caching using functions like apc_add(), and upload progress reporting. Benchmark tests showed that using APC can nearly double the requests per second and reduce the average request time by over half compared to not using an opcode cache.
Tomcat clustering allows multiple Tomcat application servers to work together as a single unit to provide scalability and high availability. There are two types of clustering: vertical scaling uses multiple servers on a single machine, while horizontal scaling uses independent servers across multiple machines for better performance. A typical Tomcat cluster uses a load balancer like Apache mod_jk for request distribution and a session replication method for shared state. Configuring a cluster involves setting up multiple Tomcat instances, configuring the load balancer and workers, and enabling session sharing if needed.
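As a minimal configuration sketch, the simplest session-replication setup adds the default cluster element to each instance's server.xml (inside <Engine> or <Host>) and marks the web application as distributable in its web.xml:

```xml
<!-- server.xml: all-to-all in-memory session replication with default settings -->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>

<!-- web.xml: sessions of this application may be replicated across the cluster -->
<distributable/>
```

Production setups typically go further (explicit membership, channel, and valve configuration), but this is enough to see sessions shared between two local instances behind mod_jk.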
The document discusses installing and configuring the Tomcat web server, including downloading and extracting Tomcat, configuring ports for multiple instances, directory structure, creating web applications, and basic server configuration using files like server.xml.
Fluentd is a data collector that can unify logging and metrics formats and enable real-time extraction, transformation, and storage of data. It will be used at 10xLab to collect logging data from their Co-Work app and infrastructure components and enable real-time analysis and long-term storage. Fluentd makes it easy to set up log collection pipelines and extend functionality through plugins. 10xLab plans to use Fluentd with Resque to reliably queue and process job data, store logs in S3, analyze logs in Treasure Data, and monitor systems. Fluentd will be installed via AWS cloud-init and managed using Chef.
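A pipeline of that shape boils down to a pair of directives in the Fluentd configuration; the paths, tag, and bucket below are illustrative, not 10xLab's actual settings:

```
# Tail the application log and tag each event
<source>
  @type tail
  path /var/log/cowork/app.log
  pos_file /var/log/fluentd/cowork.pos
  tag cowork.app
  format none
</source>

# Ship everything under the cowork.* tag to S3 for long-term storage
<match cowork.**>
  @type s3
  s3_bucket cowork-logs
  path logs/
</match>
```

Additional outputs (Treasure Data, monitoring) are added the same way, as extra match sections or via the copy plugin.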
Rapid Java backend and API development for mobile devices - ciklum_ods
This document discusses best practices for developing RESTful APIs and backend services for mobile applications. It recommends using Java, Maven, Spring, Jersey, and Protocol Buffers. Protocol Buffers provide a compact data interchange format that is faster than JSON and more widely supported than other protocols. The document provides an example of implementing authentication, API throttling, caching, testing, and error handling in a RESTful service using these technologies.
Peeking into the Black Hole Called PL/PGSQL - the New PL Profiler / Jan Wieck - Ontico
The new PL profiler lets you get through the dark barrier that PL/pgSQL puts between tools like pgbadger and the queries you are looking for.
Query and schema tuning is tough enough by itself. But queries buried many call levels deep in PL/pgSQL functions make it torture. The reason is that default monitoring tools like logs, pg_stat_activity, and pg_stat_statements cannot see into PL/pgSQL. All they report is that your query calling function X is slow. That is useful if function X has 20 lines of simple code. It is not so useful if X calls other functions and the actual problem query sits many call levels down in a dungeon of 100,000 lines of PL code.
Learn from the original author of PL/pgSQL, and current maintainer of the plprofiler extension, how you can easily analyze what is going on inside your PL code.
The document discusses the future of server-side JavaScript. It covers various Node.js frameworks and libraries that support both synchronous and asynchronous programming styles. CommonJS aims to provide interoperability across platforms by implementing synchronous proposals using fibers. Examples demonstrate how CommonJS allows for synchronous-like code while maintaining asynchronous behavior under the hood. Benchmarks show it has comparable performance to Node.js. The author advocates for toolkits over frameworks and continuing development of common standards and packages.
The document discusses Java EE 7 and its new features. It provides an overview of APIs added in Java EE 7 like JMS 2, batch processing, bean validation 1.1, JAX-RS 2, JSON processing, and concurrency utilities. The document also mentions some planned features for Java EE 8 like JSON-B, JCache, CDI 2.0 and highlights resources for learning more about Java EE.
JBoss Application Server is an open source application server. It supports J2EE 1.3 technologies including EJB 2.0, JMS, JDBC, and more. JBoss installs easily and can be configured for clustering, web services, and CORBA integration. It uses Apache Tomcat as its web server and integrates the open source JBossMQ for JMS. Default topics, queues, and a Hypersonic database are provided for testing and development.
Node has captured the attention of early adopters by clearly differentiating itself as asynchronous from the ground up while remaining accessible. Now that server-side JavaScript is at the cutting edge of the asynchronous, real-time web, it is in a much better position to establish itself as the go-to language for building conventional, synchronous CRUD web apps as well, and to gain a stronger foothold on the server.
This talk covers the current state of server side JavaScript beyond Node. It introduces Common Node, a synchronous CommonJS compatibility layer using node-fibers which bridges the gap between the different platforms. We look into Common Node's internals, compare its performance to that of other implementations such as RingoJS and go through some ideal use cases.
RESTEasy is a framework for building RESTful web services in Java. It allows developers to write JAX-RS annotated Java classes to define resources and their representations. Resources are addressable via URIs and support standard HTTP methods like GET, PUT, POST, and DELETE. Resources return representations in formats like JSON, XML, and HTML. Communication is stateless and driven by hypermedia links between resources. RESTEasy supports features like interceptors, asynchronous jobs, caching, GZIP compression, and integration with Spring and other frameworks.
The document discusses new features in Java EE 7 including support for WebSockets, JSON processing, RESTful web services, batch applications, and concurrency utilities. Key enhancements include simplified APIs, support for asynchronous programming, and improved developer productivity and integration capabilities. The specifications covered include JSRs 356, 353, 339, 344, 236, and 352.
This document provides an overview of REST (Representational State Transfer), including the key aspects of RESTful architectures such as:
- Resources are addressed through URIs
- Standard HTTP methods like GET, PUT, POST, DELETE are used to manipulate resources
- Data is represented in various formats like JSON, XML, HTML
- Communication is stateless between client and server
It then discusses how these REST principles are implemented in RESTEasy, the JBoss RESTful Web Services framework, through annotations and APIs. Features like content negotiation, interceptors, asynchronous calls and caching are also covered.
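Independent of RESTEasy itself, the principles in that list (URI-addressed resources, standard methods, representations, statelessness) can be sketched with nothing but the JDK's built-in HTTP server; the resource path, JSON payload, and class name below are invented for illustration:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class RestStatelessSketch {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);

        // The resource is addressed by its URI; GET returns a JSON representation
        server.createContext("/orders/42", exchange -> {
            byte[] body = "{\"id\":42,\"status\":\"shipped\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();

        // Stateless interaction: the request carries all context, no server-side session
        int port = server.getAddress().getPort();
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:" + port + "/orders/42").openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            System.out.println(conn.getResponseCode() + " " + in.readLine());
        }
        server.stop(0);
    }
}
```

Frameworks like RESTEasy layer the JAX-RS annotation model (@GET, @Path, @Produces) on top of exactly this request/response cycle.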
JBoss AS7 is a major re-write of the JBoss application server with a modular architecture and improved performance. Key features include HornetQ as the default JMS provider, the JBoss module system for classloading, and support for CDI, JSF, RESTEasy and other Java EE 6 specifications. Testing can be simplified using Arquillian which integrates tests directly with the application server container. Migrating from earlier versions of JBoss AS requires changes to configuration, dependencies and tooling.
Service Oriented Integration With ServiceMixBruce Snyder
This document summarizes a presentation about Service Oriented Integration with Apache ServiceMix. The presentation introduces Enterprise Service Buses and their purpose in facilitating integration. It then discusses key aspects of Apache ServiceMix, an open source ESB, including its support for various protocols and engines. The presentation provides examples of how ServiceMix can be used to configure routing and mediation using tools like Apache Camel and content-based routing. It concludes by discussing newer developments in ServiceMix 4 that utilize OSGi and build upon integration patterns.
Rapid Network Application Development with Apache MINAtrustinlee
The document is a presentation about the Apache MINA framework. It summarizes that Apache MINA is a Java open-source network application framework that allows developers to easily build scalable, stable, and manageable network applications using any protocol. It provides core components like IoSession for connections, IoBuffer for message handling, IoHandlers for business logic, and IoFilters for cross-cutting concerns. It also supports integration with JMX for runtime management and has future plans to improve performance and expand protocol support.
Taking Jenkins Pipeline to the Extremeyinonavraham
Slide deck from Jenkins User Conference Tel Aviv 2018.
Talking about suggested (best?) practices, tips and tricks, using Jenkins pipeline scripts with shared libraries, managing shared libraries, using docker compose, and more.
This document discusses using WebSockets for bidirectional communication between a GWT client and server. It provides an overview of setting up WebSocket connections on both the client and server sides in GWT, including writing a JSNI wrapper to initialize the WebSocket on the client. It also discusses using GWT's existing RPC serialization mechanism to serialize and deserialize Java objects sent over the WebSocket connection, avoiding the need for additional serialization libraries. Code examples are provided for initializing the WebSocket and handling messages on both the client and server sides, as well as using GWT's serialization streams to serialize and deserialize objects between the client and server.
Service-Oriented Integration With Apache ServiceMixBruce Snyder
This document provides an overview of Service Oriented Integration with Apache ServiceMix. It discusses what an Enterprise Service Bus (ESB) is, introduces Java Business Integration (JBI) and its normalized message format. It then describes Apache ServiceMix, an open source ESB and JBI container, covering its architecture, features, and how it supports common integration patterns like content-based routing through the use of Apache Camel. Configuration and tooling options for ServiceMix are also reviewed.
OTN Tour 2013: What's new in java EE 7Bruno Borges
The document discusses the new features in Java EE 7, including WebSocket client/server endpoints, batch applications, JSON processing, concurrency utilities, simplified JMS API, transactional scopes, JAX-RS client API, and more annotated POJOs with less boilerplate code. The Java EE 7 release aims to provide more productivity, support for HTML5, and address enterprise demands.
JCConf 2022 - New Features in Java 18 & 19Joseph Kuo
This document summarizes Joseph Kuo's presentation on new features in Java 18 and 19. It discusses survey results on the state of the Java ecosystem from TIOBE Index, GitHub Octoverse, and Stack Overflow. It then covers new language features including simple web server, UTF-8 default encoding, code snippets in JavaDoc, pattern matching for switch/instanceof, record patterns, vector API, virtual threads, and preview features.
This document provides an introduction and overview of a Node.js tutorial presented by Tom Hughes-Croucher. The tutorial covers topics such as building scalable server-side code with JavaScript using Node.js, debugging Node.js applications, using frameworks like Express.js, and best practices for deploying Node.js applications in production environments. The tutorial includes exercises for hands-on learning and demonstrates tools and techniques like Socket.io, clustering, error handling and using Redis with Node.js applications.
Boost Development With Java EE7 On EAP7 (Demitris Andreadis)Red Hat Developers
JBoss EAP7 brings support for the most recent industry standards and technologies, including Java EE7, the latest edition of the premier enterprise development standard. This session will provide an overview of the major additions to Java EE7, and how your team can use these capabilities on the advanced EAP7 runtime to produce better applications with less code.
Vert.x is a tool for building reactive applications on the JVM. It is polyglot, allowing applications to be written in Java, Groovy, JavaScript and other languages. It uses an asynchronous and non-blocking model with shared-nothing communication between components. Modules communicate through publish/subscribe messaging on an event bus or directly through request-response patterns. Vert.x provides horizontal scaling and allows efficient use of server resources. It can also integrate with SockJS to provide WebSocket-like capabilities in browsers that do not support WebSockets.
This document discusses the use of Chef, an open source configuration management tool, for server management. It notes that Chef allows for repeatable system provisioning and ease of scaling servers without vendor lock-in. Chef manages over 120 servers across 10 environments for the company discussed. Chef uses Ruby code and resources like packages, templates and services to configure and maintain server configurations. It works both on single servers via chef-solo and with a centralized chef-server for cluster management. Common resources, attributes, definitions and recipes are discussed as the basic building blocks for automation with Chef. Gotchas around idempotency, package sources and attribute abuse are also covered.
OB1K is a new RPC container. it belongs to a new breed of frameworks that tries to improve on the classic JEE model by embedding the server and reducing redundant bloatware.
OB1K supports two modes of operations: sync and async, the async mode aims for maximum performance by adopting reactive principals like using non-blocking code and functional composition using futures.
Ob1k also aims to be ops/devops friendly by being self contained and easily configured.
Faster & Greater Messaging System HornetQ zzz
1. Faster & Greater Messaging System
HornetQ zzz
Giovanni Marigi
gmarigi at redhat.com
Middleware Consultant
JBoss, a Division of Red Hat
2. Agenda
o Intro
o Core
o EAP and standalone
o Transport
o Persistence & large messages
o Flow control
o Clustering & High Availability
o Other features
3. Some stats
HornetQ sets a record-breaking score in the SPECjms2007 industry-standard
benchmark for JMS messaging system performance.
HornetQ 2.0.GA obtained scores up to 307% higher than previously published
SPECjms2007 benchmark results, on the same server hardware and operating
system set-up.
The peer-reviewed results are available on the spec.org web site:
www.spec.org/jms2007/results/jms2007.html
8.2 million messages per second with SPECjms:
http://planet.jboss.org/post/8_2_million_messages_second_with_specjms
The results were obtained by Kai Sachs and Stefan Appel from an
independent research group at TU Darmstadt, Germany.
Their release announcement can be found here:
www.dvs.tu-darmstadt.de/news/specjms2007Results_HornetQ.html
4. HornetQ core
HornetQ core is designed simply as a set of POJOs.
It has also been designed to have as few dependencies on external
jars as possible.
As a result, HornetQ core has only one jar dependency beyond the
standard JDK classes: netty.jar, whose buffer classes are used
internally.
Each HornetQ server has its own ultra-high-performance persistent
journal, which it uses for message and other persistence.
Using a high-performance journal allows outstanding persistent
message performance, something not achievable when using a
relational database for persistence.
5. HornetQ modes
HornetQ currently provides two APIs for messaging at the client
side:
Core client API
a simple, intuitive Java API that exposes the full set of
messaging functionality without some of the complexities of JMS.
JMS client API
the standard JMS API.
JMS semantics are implemented by a thin JMS facade layer on the
client side.
The HornetQ server itself does not speak JMS and in fact knows
nothing about JMS. It is a protocol-agnostic messaging server
designed to be used with multiple different protocols.
When the JMS API is used on the client side, all JMS interactions
are translated into operations on the HornetQ core client API
before being transferred over the wire using the HornetQ wire
format.
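The layering the slide describes — a thin JMS-flavoured facade translating calls into a protocol-agnostic core API — can be sketched in plain Java. These interfaces and names (CoreSession, JmsStyleProducer, the "jms.queue." prefix) are invented for illustration; they are not HornetQ's real client API.

```java
// Invented-for-illustration sketch of the slide's layering: a thin
// JMS-style facade that translates calls into a protocol-agnostic
// "core" client API. These types are NOT HornetQ's real API.
interface CoreSession {                       // core API: no JMS concepts
    void send(String address, byte[] payload);
}

class JmsStyleProducer {                      // JMS-flavoured facade on top
    private final CoreSession core;
    private final String queueAddress;

    JmsStyleProducer(CoreSession core, String queueName) {
        this.core = core;
        // the facade maps the JMS queue name onto a plain core address
        this.queueAddress = "jms.queue." + queueName;
    }

    void send(String textMessage) {
        // JMS semantics live entirely on the client side; the core
        // session only ever sees an address and raw bytes
        core.send(queueAddress, textMessage.getBytes());
    }
}

public class FacadeDemo {
    public static void main(String[] args) {
        CoreSession core = (address, payload) ->
            System.out.println(address + " <- " + new String(payload));
        new JmsStyleProducer(core, "orders").send("hello");
        // prints: jms.queue.orders <- hello
    }
}
```

The point of the sketch is that the server-side contract (CoreSession) never mentions JMS: all JMS-specific translation happens before anything crosses the wire.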
13. HornetQ transport
HornetQ has a fully pluggable and highly flexible transport layer. The
transport layer defines its own Service Provider Interface (SPI) to
simplify plugging in a new transport provider.
Netty TCP
Netty SSL
Netty HTTP
Netty Servlet
acceptors are used on the server to define how connections are
accepted
hornetq-configuration.xml
<acceptor name="netty">
  <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
  <param key="host" value="${jboss.bind.address:localhost}"/>
  <param key="port" value="${hornetq.remoting.netty.port:5445}"/>
</acceptor>
14. HornetQ transport
connectors are used by a client to define how it connects to a server
hornetq-configuration.xml
<connector name="netty">
  <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
  <param key="host" value="${jboss.bind.address:localhost}"/>
  <param key="port" value="${hornetq.remoting.netty.port:5445}"/>
</connector>
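The `${property:default}` placeholders in the acceptor and connector snippets resolve to a system property if one is set, otherwise to the default after the colon. A minimal stand-alone resolver (an illustrative helper, not HornetQ's own code) looks like this:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of the ${property:default} substitution used in
// hornetq-configuration.xml values such as
// ${hornetq.remoting.netty.port:5445}: take the system property if it
// is set, otherwise fall back to the default after the colon.
// (Illustrative helper, not HornetQ's own resolver.)
public class PlaceholderResolver {
    private static final Pattern P =
        Pattern.compile("\\$\\{([^:}]+)(?::([^}]*))?\\}");

    public static String resolve(String value) {
        Matcher m = P.matcher(value);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            String prop = System.getProperty(m.group(1));
            String replacement = prop != null ? prop
                               : (m.group(2) != null ? m.group(2) : "");
            m.appendReplacement(out, Matcher.quoteReplacement(replacement));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        // With neither system property set, the defaults win:
        System.out.println(resolve("${jboss.bind.address:localhost}"));
        System.out.println(resolve("${hornetq.remoting.netty.port:5445}"));
    }
}
```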
15. HornetQ persistence
HornetQ handles persistence with a high-performance journal,
which is optimized for messaging-specific use cases.
The HornetQ journal is append-only with a configurable file size,
which improves performance by enabling single write operations.
It consists of a set of files on disk, which are initially pre-created to a
fixed size and filled with padding.
As server operations (add message, delete message, update
message, etc.) are performed, records of the operations are
appended to the journal until the journal file is full, at which point the
next journal file is used.
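The file lifecycle described above — pre-create at a fixed size, fill with padding, append records, roll to the next file when full — can be sketched with plain java.nio. This JournalSketch class is a toy model for illustration, not HornetQ's journal implementation.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Toy model of the append-only journal described on the slide: files are
// pre-created at a fixed size, filled with padding, and records are
// appended until the file is full, at which point the next file is used.
// (Illustrative sketch, not HornetQ's implementation.)
public class JournalSketch {
    private final Path dir;
    private final int fileSize;      // analogous to journal-file-size
    private int fileIndex = 0;
    private int position = 0;
    private FileChannel channel;

    public JournalSketch(Path dir, int fileSize) throws IOException {
        this.dir = dir;
        this.fileSize = fileSize;
        roll();
    }

    // Pre-create the next journal file at full size, padded with zeros.
    private void roll() throws IOException {
        if (channel != null) channel.close();
        Path file = dir.resolve(String.format("journal-%03d.hq", fileIndex++));
        channel = FileChannel.open(file, StandardOpenOption.CREATE,
                                   StandardOpenOption.WRITE);
        channel.write(ByteBuffer.allocate(fileSize));   // padding
        channel.position(0);
        position = 0;
    }

    // Append one record; start a new file when the current one is full.
    public synchronized void append(byte[] record, boolean sync) throws IOException {
        if (position + record.length > fileSize) roll();
        channel.write(ByteBuffer.wrap(record));
        if (sync) channel.force(false);   // flush to disk, journal-sync-* style
        position += record.length;
    }

    public int currentFileIndex() { return fileIndex - 1; }

    // Pure helper: how many files do N fixed-size records need?
    public static int filesNeeded(int records, int recordSize, int fileSize) {
        int perFile = fileSize / recordSize;
        return (records + perFile - 1) / perFile;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("journal");
        JournalSketch j = new JournalSketch(dir, 64);
        for (int i = 0; i < 10; i++) j.append(new byte[20], true);
        System.out.println("files used: " + (j.currentFileIndex() + 1)); // 4
    }
}
```

Because every operation is an append at the current position, the disk head never seeks backwards during normal writes; this sequential-write pattern is what the slide credits for the journal's performance.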
16. HornetQ persistence
configuration in hornetq-configuration.xml
journal-directory
location of the message journal. default value is data/journal.
journal-type
valid values are NIO or ASYNCIO.
If NIO, the Java NIO journal is used.
If ASYNCIO, Linux asynchronous IO is used. If ASYNCIO is set on
a non-Linux or non-libaio system, HornetQ detects this and falls
back to NIO.
journal-sync-transactional
If true, HornetQ ensures all transaction data is flushed to disk on
transaction boundaries (commit, prepare, and rollback).
default is true.
17. HornetQ persistence
journal-file-size
size of each journal file in bytes.
default value is 10485760 bytes (10 megabytes).
journal-min-files
minimum number of files the journal maintains.
journal-max-io
maximum number of write requests to hold in the IO queue.
Write requests are queued here before being submitted to the
system for execution. If the queue fills, writes are blocked until
space becomes available in the queue.
journal-compact-min-files
minimum number of files before the journal will be compacted.
default value is 10.
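Putting the settings above together, a sketch of the journal section of hornetq-configuration.xml might look like this (the values are illustrative, not recommendations):

```xml
<!-- journal settings in hornetq-configuration.xml (illustrative values) -->
<journal-directory>data/journal</journal-directory>
<journal-type>ASYNCIO</journal-type>
<journal-sync-transactional>true</journal-sync-transactional>
<journal-file-size>10485760</journal-file-size>
<journal-min-files>2</journal-min-files>
<journal-max-io>500</journal-max-io>
<journal-compact-min-files>10</journal-compact-min-files>
```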
18. HornetQ flow control
Flow control is used to limit the flow of data between a client and
server, or a server and another server. It does this in order to
prevent the client or server being overwhelmed with data.
Consumer flow control
HornetQ consumers improve performance by buffering a certain
number of messages in a client-side buffer before passing them to
be consumed.
By default, the consumer-window-size is set to 1 MiB
The value can be:
• -1 for an unbounded buffer
• 0 to not buffer any messages.
• >0 for a buffer with the given maximum size in bytes.
19. HornetQ flow control
configuration in hornetq-jms.xml
<connection-factory name="ConnectionFactory">
<connectors>
<connector-ref connector-name="netty-connector"/>
</connectors>
<entries>
<entry name="ConnectionFactory"/>
</entries>
<consumer-window-size>0</consumer-window-size>
</connection-factory>
20. HornetQ flow control
It is also possible to control the rate at which a consumer can
consume messages.
This can be used to make sure that a consumer never consumes
messages at a rate faster than the rate specified.
<connection-factory name="ConnectionFactory">
<connectors>
<connector-ref connector-name="netty-connector"/>
</connectors>
<entries>
<entry name="ConnectionFactory"/>
</entries>
<consumer-max-rate>10</consumer-max-rate>
</connection-factory>
21. HornetQ flow control
It is possible to manage the flow control even for producers!
HornetQ also can limit the amount of data sent from a client to a
server to prevent the server being overwhelmed.
<connection-factory name="NettyConnectionFactory">
<connectors>
<connector-ref connector-name="netty-connector"/>
</connectors>
<entries>
<entry name="/ConnectionFactory"/>
</entries>
<producer-window-size>10</producer-window-size>
</connection-factory>
22. HornetQ message redelivery
An undelivered message returns to the queue ready to be
redelivered.
There are two options for these undelivered messages:
Delayed Redelivery
Message delivery can be delayed to allow the client time to
recover from transient failures and not overload its network or CPU
resources.
Dead Letter Address
Configure a dead letter address, to which messages are sent after
being determined undeliverable.
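Both options are set per address in hornetq-configuration.xml; a sketch, assuming an example queue and dead letter queue name:

```xml
<address-settings>
   <address-setting match="jms.queue.exampleQueue">
      <!-- wait 5 seconds before redelivering an unacknowledged message -->
      <redelivery-delay>5000</redelivery-delay>
      <!-- after 3 failed delivery attempts, send the message to the dead letter address -->
      <max-delivery-attempts>3</max-delivery-attempts>
      <dead-letter-address>jms.queue.deadLetterQueue</dead-letter-address>
   </address-setting>
</address-settings>
```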
24. HornetQ large messages
HornetQ supports sending and receiving of large messages,
even when the client and server are running with limited memory.
As the InputStream is read, the data is sent to the server
as a stream of fragments. The server persists these fragments to
disk as it receives them. When the time comes to deliver them to a
consumer they are read back off the disk, also in fragments, and
sent down the wire.
When the consumer receives a large message it initially receives
just the message with an empty body. It can then set an
OutputStream on the message to stream the large message body to
a file on disk or elsewhere.
At no time is the entire message body stored fully in memory, either
on the client or the server.
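The threshold above which a message is treated as large is configurable on the connection factory in hornetq-jms.xml; a sketch with an illustrative 100 KiB limit:

```xml
<connection-factory name="ConnectionFactory">
   <connectors>
      <connector-ref connector-name="netty-connector"/>
   </connectors>
   <entries>
      <entry name="ConnectionFactory"/>
   </entries>
   <!-- messages bigger than 100 KiB are sent as large messages -->
   <min-large-message-size>102400</min-large-message-size>
</connection-factory>
```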
27. HornetQ paging
HornetQ transparently supports huge queues containing millions of
messages while the server is running with limited memory.
In such a situation it's not possible to store all of the queues in
memory at one time, so HornetQ transparently pages messages
in and out of memory as they are needed. This allows massive
queues with a low memory footprint.
HornetQ will start paging messages to disk when the size of all
messages in memory for an address exceeds a configured
maximum size.
By default, HornetQ does not page messages; this must be explicitly
configured to activate it.
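A sketch of enabling paging in hornetq-configuration.xml (sizes illustrative):

```xml
<!-- where page files are stored -->
<paging-directory>data/paging</paging-directory>

<address-settings>
   <address-setting match="jms.queue.exampleQueue">
      <!-- start paging once in-memory messages for this address exceed 10 MiB -->
      <max-size-bytes>10485760</max-size-bytes>
      <page-size-bytes>1048576</page-size-bytes>
      <address-full-policy>PAGE</address-full-policy>
   </address-setting>
</address-settings>
```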
29. HornetQ high availability
HornetQ allows pairs of servers to be linked together as
live - backup pairs.
A backup server is owned by only one live server.
Backup servers are not operational until failover occurs.
When a live server crashes or is brought down in the correct mode,
the backup server currently in passive mode will become live and
another backup server will become passive. If a live server restarts
after a failover, it will have priority and be the next server to
become live when the current live server goes down. If the current
live server is configured to allow automatic failback, it will
detect the original live server coming back up and automatically stop.
30. HornetQ high availability
To configure the live and backup servers to share their store, set
the following in both servers' hornetq-configuration.xml:
<shared-store>true</shared-store>
Additionally, the backup server must be flagged explicitly as a
backup:
<backup>true</backup>
In order for live - backup pairs to operate properly with a shared
store, both servers must configure their journal directory to point
to the same shared location.
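A minimal sketch of the backup server's hornetq-configuration.xml, assuming a shared directory mounted at /mnt/shared (the path is an example):

```xml
<backup>true</backup>
<shared-store>true</shared-store>
<!-- must point to the same shared location as the live server -->
<journal-directory>/mnt/shared/data/journal</journal-directory>
```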
31. HornetQ clustering
HornetQ clusters allow groups of HornetQ servers to be grouped
together in order to share message processing load. Each active
node in the cluster is an active HornetQ server which manages its
own messages and handles its own connections.
hornetq-configuration.xml
for each node set the parameter clustered to true
Server discovery is a mechanism by which servers can propagate
their connection details to:
Messaging clients. A messaging client wants to be able to connect to
the servers of the cluster without having specific knowledge of which
servers in the cluster are up at any one time.
Other servers. Servers in a cluster want to be able to create cluster
connections to each other without having prior knowledge of all the
other servers in the cluster.
Server discovery uses User Datagram Protocol (UDP) multicast to
broadcast server connection settings.
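A sketch of the matching broadcast and discovery group settings in hornetq-configuration.xml, using the same multicast address as the client example in this deck (exact element forms may vary between HornetQ versions):

```xml
<broadcast-groups>
   <broadcast-group name="my-broadcast-group">
      <group-address>231.7.7.7</group-address>
      <group-port>9876</group-port>
      <!-- re-broadcast connection settings every 2 seconds -->
      <broadcast-period>2000</broadcast-period>
      <connector-ref connector-name="netty"/>
   </broadcast-group>
</broadcast-groups>

<discovery-groups>
   <discovery-group name="my-discovery-group">
      <group-address>231.7.7.7</group-address>
      <group-port>9876</group-port>
      <refresh-timeout>10000</refresh-timeout>
   </discovery-group>
</discovery-groups>
```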
33. HornetQ clustering
If the connection factory is not looked up via JNDI, it can be
created directly using UDP discovery:
final String groupAddress = "231.7.7.7";
final int groupPort = 9876;
ConnectionFactory jmsConnectionFactory =
HornetQJMSClient.createConnectionFactory(groupAddress, groupPort);
Connection jmsConnection1 = jmsConnectionFactory.createConnection();
Connection jmsConnection2 = jmsConnectionFactory.createConnection();
34. HornetQ clustering
Server Side load balancing
hornetq-configuration.xml
<cluster-connections>
<cluster-connection name="my-cluster">
<address>jms</address>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<forward-when-no-consumers>false</forward-when-no-consumers>
<max-hops>1</max-hops>
<discovery-group-ref discovery-group-name="my-discovery-group"/>
</cluster-connection>
</cluster-connections>
36. HornetQ other features
Routing messages with wildcards
e.g. a queue created with an address of queue.news.# will receive messages sent to
queue.news.europe, queue.news.usa, or queue.news.usa.sport
Message expiry
HornetQ will not deliver a message to a consumer after its
time-to-live has been exceeded.
If the message hasn't been delivered before the time-to-live is
reached, the server can discard it.
// message will expire in 5000ms from now
message.setExpiration(System.currentTimeMillis() + 5000);
Expiry-address
<!-- expired messages in exampleQueue will be sent to the expiry
address expiryQueue -->
<address-setting match="jms.queue.exampleQueue">
<expiry-address>jms.queue.expiryQueue</expiry-address>
</address-setting>
37. HornetQ other features
Scheduled messages
TextMessage message = session.createTextMessage("MSG");
message.setLongProperty("_HQ_SCHED_DELIVERY", System.currentTimeMillis() + 5000);
producer.send(message);
...
// message will not be received immediately but 5 seconds later
TextMessage messageReceived = (TextMessage) consumer.receive();
Message group
Message groups are sets of messages that have the following characteristics:
• Messages in a message group share the same group id; that is, they have the same group
identifier property (JMSXGroupID for JMS, _HQ_GROUP_ID for HornetQ Core API).
• Messages in a message group are always consumed by the same consumer, even if there
are many consumers on a queue. They pin all messages with the same group id to the same
consumer.
If that consumer closes, another consumer is chosen and will receive all messages with the
same group id.
38. HornetQ other features
Based on message
Message message = ...
message.setStringProperty("JMSXGroupID", "Group-0");
producer.send(message);
message = ...
message.setStringProperty("JMSXGroupID", "Group-0");
producer.send(message);
Based on connection factory...
hornetq-jms.xml
<connection-factory name="ConnectionFactory">
<connectors>
<connector-ref connector-name="netty-connector"/>
</connectors>
<entries>
<entry name="ConnectionFactory"/>
</entries>
<group-id>Group-0</group-id>
</connection-factory>
39. HornetQ other features
Diverts
Diverts are objects that transparently divert messages routed to one address to
some other address, without making any changes to any client application logic.
Diverts can also be configured to apply a Transformer.
An exclusive divert diverts all matching messages that are routed to the old
address to the new address.
Matching messages do not get routed to the old address.
hornetq-configuration.xml
<divert name="prices-divert">
<address>jms.topic.priceUpdates</address>
<forwarding-address>jms.queue.priceForwarding</forwarding-address>
<filter string="office='New York'"/>
<transformer-class-name>
org.hornetq.jms.example.AddForwardingTimeTransformer
</transformer-class-name>
<exclusive>true</exclusive>
</divert>
40. HornetQ other features
Diverts
Non-exclusive diverts forward a copy of a message to a new address, allowing the original
message to continue to the previous address.
hornetq-configuration.xml
<divert name="order-divert">
<address>jms.queue.orders</address>
<forwarding-address>jms.topic.spyTopic</forwarding-address>
<exclusive>false</exclusive>
</divert>