Overview of MuleSoft's Quartz connector with sample scenarios; worked examples are included in the slides.
Quartz inbound and outbound endpoints are explained.
Basic example using the Quartz component in Anypoint Studio - prudhvivreddy
The document discusses using the Quartz transport in Anypoint Studio to schedule and trigger events. It describes how an inbound Quartz endpoint can be used to trigger recurring events using a cron expression, and an outbound endpoint can schedule existing events. An example configuration is provided to trigger a flow with a logger every 30 seconds for 10 repetitions using Quartz inbound endpoint.
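The trigger described above (fire every 30 seconds for 10 repetitions) boils down to simple fire-time arithmetic. A minimal Python sketch, which does not use the Quartz library itself, of the schedule such a trigger would produce:

```python
from datetime import datetime, timedelta

def fire_times(start, interval_seconds, repetitions):
    """Compute the schedule a fixed-interval trigger would produce:
    one fire at `start`, then every `interval_seconds`, for
    `repetitions` fires in total."""
    return [start + timedelta(seconds=interval_seconds * i)
            for i in range(repetitions)]

# The abstract's example: fire every 30 seconds, 10 repetitions.
schedule = fire_times(datetime(2024, 1, 1, 0, 0, 0), 30, 10)
```

The real Quartz endpoint expresses the same schedule declaratively via its repeat interval and repeat count attributes.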
This document discusses scaling natural language processing (NLP) tasks by distributing work across multiple processors and machines. It describes running UIMA pipelines on a local cluster managed by Sun Grid Engine (SGE) to parallelize processing of independent documents. The local cluster, called Colfax, has 6 machines with 48 CPU cores and 96GB RAM that can be utilized through SGE job scripts to split work into arrays processed in parallel.
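The array-job idea above, splitting independent documents across parallel tasks, can be sketched in Python. `SGE_TASK_ID` is the environment variable SGE sets for array tasks; `task_slice` and the document names are hypothetical:

```python
import os

def task_slice(documents, task_id, task_count):
    """Return the share of `documents` that array task `task_id`
    (1-based, as in SGE's $SGE_TASK_ID) should process; strided
    slicing gives each task a disjoint subset covering the whole."""
    return documents[task_id - 1::task_count]

# Hypothetical: an SGE array job of 4 tasks over 10 documents.
docs = [f"doc{i}.txt" for i in range(10)]
mine = task_slice(docs, int(os.environ.get("SGE_TASK_ID", "1")), 4)
```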
Torque is a resource manager that helps schedule jobs across a cluster to maximize machine utilization. It handles job submission, monitoring, and accounting. Maui is a scheduling manager that works with Torque to determine which jobs should run and when based on priority, resources, and dependencies to optimize cluster usage. Together they provide a way to automatically run jobs on any node in a distributed computing environment while monitoring status and resource usage.
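Maui's real policy engine considers fairshare, reservations, and backfill; as a rough illustration only, this Python toy shows the core priority-plus-resource-fit decision a scheduler makes when choosing which queued jobs to start:

```python
def pick_jobs(free_cores, queue):
    """Greedy sketch of a priority scheduler: walk the queue in
    descending priority and start every job whose core request
    still fits in the remaining free cores."""
    started = []
    for job in sorted(queue, key=lambda j: j["priority"], reverse=True):
        if job["cores"] <= free_cores:
            started.append(job["name"])
            free_cores -= job["cores"]
    return started

queue = [
    {"name": "small", "cores": 2, "priority": 1},
    {"name": "big", "cores": 8, "priority": 5},
    {"name": "medium", "cores": 4, "priority": 3},
]
started = pick_jobs(10, queue)  # "big" first, then "small" backfills
```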
The document discusses the evolution of Ceilometer, an OpenStack project that collects measurements from deployed clouds and persists the data for later retrieval and analysis. It describes how Ceilometer has scaled out its data collection capabilities over time by adding agents, partitioning workloads, and integrating with Gnocchi to provide more efficient time-series storage. The document also provides best practices for Ceilometer deployment and configuration to optimize data collection, storage and querying.
NYAN Conference: Debugging asynchronous scenarios in .NET - Alexandra Hayere
Times have changed. Multi-core CPUs have become the norm and multi-threading has been replaced by asynchronous programming. You think you know everything about async/await... until something goes wrong. While debugging synchronous code can be straightforward, investigating an asynchronous deadlock or race condition proves to be surprisingly tricky.
In this talk, follow us through real-life examples and investigations covering the main asynchronous code patterns that can go wrong. You will stumble into deadlocks and understand the reasons behind ThreadPool thread starvation.
In addition to WinDbg magic for following async/await chains, Visual Studio goodies won't be forgotten, helping you quickly analyze hundreds of call stacks or task statuses.
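The ThreadPool starvation this talk describes is not .NET-specific. The same trap can be reproduced in Python with a one-worker pool, where an outer task blocks waiting on an inner task that can never be scheduled; the timeout here stands in for what would otherwise be a true deadlock:

```python
import concurrent.futures

pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)

def outer():
    # The outer task occupies the pool's only worker, then blocks
    # waiting on an inner task queued behind it in the same pool.
    inner = pool.submit(lambda: "done")
    try:
        return inner.result(timeout=0.5)  # would deadlock without a timeout
    except concurrent.futures.TimeoutError:
        return "starved"

result = pool.submit(outer).result()
pool.shutdown(wait=False)
```

With enough such outer tasks, even a large pool starves the same way, which is one reason blocking on async work from pool threads is discouraged.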
1. The document describes load testing done on an Odoo implementation to determine system limits and size requirements. Tests were run using Locust and varied the number of requests per second, users, and sessions to identify bottlenecks.
2. Testing showed the initial limit was around 500 requests/second. Further tests found that sessions stored in NFS caused issues, so they were changed to store in PostgreSQL.
3. Load balanced and monolithic architectures were compared, finding the load balanced setup using 4 servers performed similarly to a single server with 40 cores for the monolithic setup.
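A Locust run is essentially many concurrent workers hammering a target while counting results. As a rough in-process sketch (the lambda stands in for a real HTTP request, and Locust itself does far more, such as ramp-up and live statistics):

```python
import threading
import time

def load_test(target, workers, requests_per_worker):
    """Hammer `target` from several threads and report aggregate
    request counts and wall-clock duration (a toy Locust analogue)."""
    lock = threading.Lock()
    stats = {"ok": 0, "errors": 0}

    def worker():
        for _ in range(requests_per_worker):
            try:
                target()
                with lock:
                    stats["ok"] += 1
            except Exception:
                with lock:
                    stats["errors"] += 1

    start = time.perf_counter()
    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    stats["seconds"] = time.perf_counter() - start
    return stats

# Hypothetical target: an in-process stand-in for an HTTP request.
stats = load_test(lambda: None, workers=4, requests_per_worker=50)
```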
This document outlines an agenda for a Docker workshop on orchestrating Docker containers. It discusses various tools in the container ecosystem including Rancher, Mesos, Marathon, service discovery with Consul and Registrator, monitoring with Prometheus and Grafana, and continuous integration/deployment with Jenkins. Mesos is presented as a way to share resources across frameworks like Marathon. Consul and Registrator enable service discovery so applications can be accessed regardless of where they run. Prometheus and Grafana provide monitoring. The workshop emphasizes combining these tools - Docker, Mesos, Marathon, Rancher, Consul, Registrator, Consul Template, cAdvisor, Prometheus, Grafana, and Jenkins - for a complete container management solution.
This document discusses changes to the organization of projects in Openshift and the implementation of Jenkins pipelines for continuous integration and delivery.
For Openshift projects, a new product-based approach is proposed with fewer total projects that are named based on the product, environment, and attributes. This is compared to the old component-based approach with more projects.
For Jenkins pipelines, a new approach is proposed with a base pipeline to build Docker images and a product pipeline with build and test stages. This is compared to the old approach with multiple separate jobs instead of a unified pipeline. Examples of the new base and product pipelines are provided.
Structured concurrency with Kotlin Coroutines
1. Theory
- Coroutines
- Suspending functions
- Asynchronous Flows
- Channels
2. Practice
- Business lookup feature implementation in TransferWise app for Android
The document discusses options for testing software including manual testing with Vagrant and Robot Framework as well as automated testing. It recommends using OpenShift for automated testing as it provides a reliable solution with Kubernetes that is scalable and controllable while Docker provides speed and high density. Open vSwitch enables network connectivity in OpenShift. Workflow and reference details are also included.
Odoo Online platform: architecture and challenges - Odoo
A short introduction to the technical architecture of the Odoo Online platform, including the advanced integrated features (instant DNS, email gateways, etc.) and the technical aspects of the SLA.
By Olivier Dony - Lead Developer & Community Manager, OpenERP
RxJava is a library for composing asynchronous and event-based programs using observable sequences for the Java Virtual Machine. It implements Reactive Extensions Observables from Microsoft to provide an API for asynchronous programming with observable streams. RxJava supports Java, Groovy, Clojure, and Scala and is used by Netflix to build reactive applications by merging and transforming streams of data from various sources.
This document discusses various tools used in software development, including Trac, SVN, Jenkins, Maven, and JUnit. It describes how these tools can be combined in an integrated development environment for version control, continuous integration, unit testing, and builds.
Anton Povarov's talk "Go in Badoo" from the Golang Meetup - Badoo Development
This document summarizes notes from a Go meetup at Badoo in April 2015. It discusses Badoo's use of Go in their backend systems, including replacing 25 C/C++ daemons with Go services. It provides examples of memory profiling Go code to reduce garbage collection pauses. It also discusses using protocol buffers with Go and strategies for reducing allocations when marshaling data.
Continuous Integration for Fun and Profit - inovex GmbH
Agile continuous integration promises to significantly improve the development and delivery of software with the help of pipelines. The road to the final implementation, however, can be paved with unforeseen effort. We will therefore look at some helpful methods and tools for building such pipelines, with a focus on continuous integration, to round off our agile development process and free up time for the things that matter day to day.
Prerequisites: a basic understanding of software development.
Learning goals: We will discuss the background of continuous integration/delivery and look at a real-world example that highlights the advantages of CI/CD in particular.
Event: enterJS, 16.06.2016, Darmstadt
Speaker: Arnold Bechtoldt
More tech talks: https://www.inovex.de/de/content-pool/vortraege/
This document outlines labs for learning Node.js and Express.js. Lab 01 focuses on Node.js, with exercises on installing Node.js, writing basic console, HTTP, TCP, and UDP applications. Lab 02 covers Express.js, including installing and configuring Express.js, generating an Express application, and using MySQL with Express. Each exercise provides code snippets and links to video tutorials.
Matthew Treinish, HP - subunit2sql: Tracking 1 Test Result in Millions, OpenS... - Cloud Native Day Tel Aviv
This document discusses subunit2sql, a Python library and utilities that aggregate and store subunit streams from OpenStack CI jobs in a SQL database. It allows tracking individual test results over time, including success and failure rates as well as run times for each test. Graphs can be generated from the SQL database to detect performance regressions in specific tests or track when failures are fixed. The subunit2sql tools include subunit2sql to load subunit streams into the database, sql2subunit to output streams from the database, and subunit2sql-graph for analysis.
This document summarizes different machine learning models for Android malware detection. It introduces Asaf Shabtai as the academic instructor and discusses past problems with malware and solutions. It then presents the prototype detectors for today which include a Byte3g detector using a decision tree model on dex file features, an Anatasia detector using a random forest model on intents, cmd calls, and api calls, a KNN detector using 3NN on permission features, and an SVM detector using an SVM model on api calls and permissions. It concludes by thanking the audience.
This document discusses how to create threads in Java. It shows that there are two main ways to create a thread: by implementing the Runnable interface and overriding the run() method, or by extending the Thread class and overriding the run() method. It also provides an example of how to create a multi-threaded server that uses a ServerSocket to listen for client connections on a fixed port, launches a new Service thread for each connection, and loops to continue accepting more connections.
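The "one service thread per connection" server described here translates directly to Python's standard library, where `ThreadingTCPServer` plays the role of the accept loop that spawns a thread per client. This echo handler is a minimal stand-in for the Service class in the Java example:

```python
import socket
import socketserver
import threading

class EchoHandler(socketserver.StreamRequestHandler):
    # Each accepted connection is handled on its own thread,
    # mirroring the "one service thread per connection" pattern.
    def handle(self):
        line = self.rfile.readline()
        self.wfile.write(line)

# Port 0 asks the OS for any free port; the real server address
# (host, port) is then available as server.server_address.
server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with socket.create_connection(server.server_address) as conn:
    conn.sendall(b"hello\n")
    reply = conn.makefile().readline()

server.shutdown()
server.server_close()
```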
Common Workflow Language (CWL) - George Carvalho
The document introduces the Common Workflow Language (CWL), which allows users to describe command line tools and connect them together into workflows. CWL aims to make workflows isolated, have explicit input/output, and be repeatable, modular, and scalable. It was created in 2015 by the community to enable collaborations and reproducible publications. The document provides an example of using CWL to annotate a VCF file with SnpEff, including installing CWL, cloning a demo repository, describing the workflow and inputs/outputs, and running it with a Docker container of SnpEff.
RxJava is an open source library for reactive programming that allows processing asynchronous streams of data. It provides operators to filter, transform, and combine Observables in a lazy manner. Observables represent asynchronous data streams that can be subscribed to receive push-based event notifications. Services return Observables to make their APIs asynchronous and reactive.
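The lazy operator-chaining this summary describes can be sketched in a few lines of Python. This toy `Observable` is illustrative only and is not RxJava's API; note that no work happens until `subscribe()` attaches an observer:

```python
class Observable:
    """Bare-bones push-based stream: operators are lazy wrappers and
    only run once subscribe() attaches an observer."""

    def __init__(self, source):
        self._source = source  # callable taking an on_next callback

    @staticmethod
    def from_iterable(items):
        return Observable(lambda on_next: [on_next(x) for x in items])

    def map(self, fn):
        return Observable(
            lambda on_next: self._source(lambda x: on_next(fn(x))))

    def filter(self, pred):
        return Observable(
            lambda on_next: self._source(
                lambda x: on_next(x) if pred(x) else None))

    def subscribe(self, on_next):
        self._source(on_next)

received = []
(Observable.from_iterable(range(6))
    .filter(lambda x: x % 2 == 0)   # keep even values
    .map(lambda x: x * 10)          # transform them
    .subscribe(received.append))    # only now does data flow
```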
The document discusses how Docker containers currently reuse existing operating system distributions, which is problematic. It suggests moving away from relying on full operating systems inside containers towards a more minimal model. This would allow for more consistency between what a host system runs versus inside containers. The document outlines several recommendations, including starting containers from scratch rather than existing distributions and being more specific about versions for packages installed.
OB1K is a new RPC container. It belongs to a new breed of frameworks that try to improve on the classic JEE model by embedding the server and reducing redundant bloatware.
OB1K supports two modes of operation: sync and async. The async mode aims for maximum performance by adopting reactive principles such as non-blocking code and functional composition using futures.
OB1K also aims to be ops/devops friendly by being self-contained and easily configured.
This presentation covers how to set up an Airflow instance as a cluster spanning multiple machines instead of the traditional single-machine deployment. In addition, it covers an extra step you can take to ensure high availability in that cluster.
This document discusses asynchronous processing in Spring, including:
1. The concept of thread pools for managing asynchronous task execution and avoiding overhead of creating new threads.
2. Configuring asynchronous support in Servlet 3 and Spring MVC, including setting thread pool properties and handling exceptions.
3. Annotation-based approaches for executing methods asynchronously (@Async) and scheduling periodic tasks (@Scheduled) using Spring's TaskExecutor abstraction.
4. Asynchronous request processing in Servlet 3 where the request processing is decoupled from the servlet container thread to improve scalability.
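Spring's `@Async` pattern, where a method call returns immediately while the work runs on a managed thread pool, can be approximated in Python. `async_method` and `send_report` are hypothetical names for illustration, not Spring APIs:

```python
import functools
from concurrent.futures import ThreadPoolExecutor

# Shared pool playing the role of Spring's TaskExecutor abstraction.
_executor = ThreadPoolExecutor(max_workers=4)

def async_method(fn):
    """Rough Python analogue of @Async: calling the decorated
    function submits it to the shared executor and immediately
    returns a Future instead of blocking the caller."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        return _executor.submit(fn, *args, **kwargs)
    return wrapper

@async_method
def send_report(name):
    return f"report sent to {name}"

future = send_report("ops")   # returns at once, like @Async
result = future.result()      # block only when the value is needed
```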
Quartz.NET is an open-source job scheduling library for .NET that allows scheduling jobs and tasks to run at specific times or intervals. The documentation provides information on configuring Quartz.NET to use an ADO job store with a database for storing job details and history. Several links discuss using Quartz.NET to run background tasks in ASP.NET applications, including how to write background tasks and use cron expressions to schedule them. The library website provides more information on the Quartz Enterprise Scheduler for .NET.
The Mule Quartz component allows scheduling tasks to run at predefined dates and times. Cron expressions define schedules using six or seven space-separated fields. Jobs perform the scheduled actions, such as generating events. An example uses a Quartz inbound endpoint configured with a cron expression to log a "hello world" message every 10 seconds. Quartz can run embedded, on an application server, or standalone to schedule jobs across a cluster.
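The six-or-seven-field structure of a Quartz cron expression can be checked mechanically. A small Python sketch, with field names following the Quartz convention (`split_cron` is a hypothetical helper, not a Quartz API):

```python
def split_cron(expression):
    """Split a Quartz-style cron expression into named fields.
    Quartz uses six mandatory fields plus an optional seventh (year)."""
    names = ["seconds", "minutes", "hours", "day_of_month",
             "month", "day_of_week", "year"]
    fields = expression.split()
    if len(fields) not in (6, 7):
        raise ValueError("expected 6 or 7 space-separated fields")
    return dict(zip(names, fields))

# The abstract's example schedule: fire every 10 seconds.
parsed = split_cron("0/10 * * * * ?")
```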
Quartz.NET - Enterprise Job Scheduler for .NET Platform - Guo Albert
Quartz.NET is a port of the popular open source Java job scheduling framework Quartz. It can schedule tens, hundreds, or even thousands of jobs to run on defined schedules, and the scheduler includes enterprise features such as JTA transactions and clustering. To use Quartz.NET, add the Quartz and Common.Logging DLLs to a project and create a job class, with the schedule defined in Global.asax.cs to execute the job and write a log entry to a file every 5 seconds.
Using Spring Scheduler Mule allows scheduling tasks in Mule applications using a Java class and Spring task scheduler. This provides an alternative to using Quartz scheduler or Poll components. The document demonstrates creating a Java class that prints a message every 10 seconds, and configuring the Spring task scheduler in Mule to trigger that class repeatedly to continuously monitor an application.
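The pattern described here, a scheduler re-triggering a task at a fixed interval, can be sketched with Python's `threading.Timer`, re-armed after each run. `schedule_repeating` is a hypothetical helper for illustration, not the Spring API:

```python
import threading

def schedule_repeating(task, interval, repetitions):
    """Run `task` every `interval` seconds for `repetitions` runs,
    re-arming a Timer after each run; returns an Event that is set
    once the final run completes."""
    done = threading.Event()

    def run(remaining):
        task()
        if remaining > 1:
            timer = threading.Timer(interval, run, args=[remaining - 1])
            timer.daemon = True
            timer.start()
        else:
            done.set()

    first = threading.Timer(interval, run, args=[repetitions])
    first.daemon = True
    first.start()
    return done

ticks = []
finished = schedule_repeating(lambda: ticks.append("tick"), 0.01, 3)
finished.wait(timeout=2)
```

A production scheduler would add error handling and drift correction; this sketch only shows the re-arming structure.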
Spark Streaming provides an easier API for streaming data than Storm, replacing Storm's spouts and bolts with Akka actors. It integrates better with Hadoop and makes time a core part of its API. This document provides instructions for setting up Spark Streaming projects using sbt or Maven and includes a demo reading from Kafka and processing a Twitter stream.
GDG Jakarta Meetup - Streaming Analytics With Apache Beam - Imre Nagi
A Google Slides version of this deck can be accessed at: https://docs.google.com/presentation/d/1Ws73JxlVH39HiKiYuF3vW903j8wFzxPQihXz4CQ_HZM/edit?usp=sharing
Amazon has been using and building workflow services for years. It uses Simple Workflow (SWF) internally to lay down the OS and all required software onto a new Amazon server before it joins the Amazon fleet; every Amazon server being put into service is provisioned through an SWF workflow.
During this brown-bag session you will be taken through an example of a real application that uses SWF.
During this talk we'll cover the theory and practical implementation behind the most common patterns in modern multi-threaded programming: how our everyday libraries and frameworks optimize use of operating system resources for maximum efficiency. We'll also try to understand the differences between the various approaches and what tradeoffs they imply. Finally, we'll look at how they are supported by various compilers and runtimes.
The document discusses modern concurrency primitives like threads, thread pools, coroutines, and schedulers. It covers why asynchronous programming with async/await is preferred over traditional threading. It also discusses challenges like sharing data across threads and blocking on I/O calls. Some solutions covered include using thread pools with dedicated I/O threads, work stealing, and introducing interruption points in long-running tasks.
Presto is an open source distributed SQL query engine for running interactive analytic queries against data sources of all sizes ranging from gigabytes to petabytes. It is written in Java and uses a pluggable backend. Presto is fast due to code generation and runtime compilation techniques. It provides a library and framework for building distributed services and fast Java collections. Plugins allow Presto to connect to different data sources like Hive, Cassandra, MongoDB and more.
Krzysztof Sobkowiak presented on serverless Java on Kubernetes. Serverless computing refers to building applications without server management by deploying functions that automatically scale in response to demand. Function as a Service (FaaS) platforms like Apache OpenWhisk allow running Java code as stateless functions on Kubernetes. OpenWhisk supports Java actions and integrates with services through triggers and rules to enable event-driven architectures. Spring Cloud Functions provides a framework for building serverless applications using Spring Boot and Java.
The document provides an overview of asynchronous programming in Python. It discusses how asynchronous programming can improve performance over traditional synchronous and threaded models by keeping resources utilized continuously. It introduces key concepts like callbacks, coroutines, tasks and the event loop. It also covers popular asynchronous frameworks and modules in Python like Twisted, Tornado, gevent and asyncio. Examples are provided to demonstrate asynchronous HTTP requests and concurrent factorial tasks using the asyncio module. Overall, the document serves as an introduction to asynchronous programming in Python.
Sharding and Load Balancing in Scala - Twitter's FinagleGeoff Ballinger
My presentation at Mostly Functional (http://mostlyfunctional.com), part of this year's Turing Festival Fringe (http://turingfestival.com) in Edinburgh. The example source code is up on Github at https://github.com/geoffballinger/simple-sharder
The objective of this tutorial is to demonstrate the steps required to execute an Oracle Stored Procedure with a Nested Table as a parameter from Mule Flow.
SOAP Web Services have a well established role in the enterprise, but aside from the many benefits of the WS-* standards, SOAP and XML also carry additional baggage for developers. Consequently, REST Web Services are gaining tremendous popularity within the developer community. This session will begin by comparing and contrasting the basic concepts of both SOAP and REST Web Services. Building on that foundation, Sam Brannen will show attendees how to implement SOAP-based applications using Spring-WS 2.0. He will then demonstrate how to build a similar REST-ful application using Spring MVC 3.0. The session will conclude with an in-depth look at both server-side and client-side development as well as efficient integration testing of Web Services using the Spring Framework.
This document discusses asynchronous I/O in Java and Scala using the Play Framework. It describes how LinkedIn uses a service-oriented architecture with hundreds of services making requests to each other. It then covers how Play supports non-blocking I/O using asynchronous code, promises, and futures to allow parallel requests without blocking threads. Key points covered include using map and flatMap to transform promises and futures, handling errors and timeouts, and the benefits of non-blocking I/O for scalability.
Aplicações assíncronas no Android com Coroutines & JetpackNelson Glauber Leal
The document discusses asynchronous applications in Android using Coroutines & Jetpack. It introduces coroutines as lightweight threads that make asynchronous programming easier by replacing callbacks with suspending functions. It covers key coroutine concepts like Job, Context, Scope and Dispatcher. It also discusses how coroutines can be used with Lifecycle, ViewModel and WorkManager components in Jetpack to build asynchronous Android applications.
Rhebok, High Performance Rack Handler / Rubykaigi 2015Masahiro Nagano
This document discusses Rhebok, a high performance Rack handler written in Ruby. Rhebok uses a prefork architecture for concurrency and achieves 1.5-2x better performance than Unicorn. It implements efficient network I/O using techniques like IO timeouts, TCP_NODELAY, and writev(). Rhebok also uses the ultra-fast PicoHTTPParser for HTTP request parsing. The document provides an overview of Rhebok, benchmarks showing its performance, and details on its internals and architecture.
The document discusses the future of server-side JavaScript. It covers various Node.js frameworks and libraries that support both synchronous and asynchronous programming styles. CommonJS aims to provide interoperability across platforms by implementing synchronous proposals using fibers. Examples demonstrate how CommonJS allows for synchronous-like code while maintaining asynchronous behavior under the hood. Benchmarks show it has comparable performance to Node.js. The author advocates for toolkits over frameworks and continuing development of common standards and packages.
The document discusses load testing and why it often fails. It recommends using a tool with the lowest barrier to entry, such as Blitz, which allows load testing on AWS with a web form or API. Blitz produces results but no reports, so the document shows how to modify Blitz's code to output JSON results for reporting purposes. It encourages integrating load testing into existing development workflows rather than treating it separately after deployment.
How Sparkling Water brings Fast Scalable Machine learning via H2O to Apache Spark.
By Michal Malohlava and H2O.ai
Our 100th Meetup at 0xdata, September 30, 2014
Open Source meets Out Door.
- Powered by the open source machine learning software H2O.ai. Contributors welcome at: https://github.com/h2oai
- To view videos on H2O open source machine learning software, go to: https://www.youtube.com/user/0xdata
The Combine Collections Transformer takes a payload that is a collection of collections and combines them into a single list. For example, if the payload contains one collection with elements A and B and another with C and D, it will output a single collection with elements A, B, C, D. It is often used after the Scatter-Gather component to flatten the collection of collections returned into a single collection. The transformer converts a MuleMessageCollection into a MuleMessage.
Global functions can be used anywhere in a Mule project and allow reusable code without special syntax. To create a global function, define it within <global-functions> tags in the project XML file before any flows. Global functions can call existing Java functions or define new ones, and can then be used in Mule expressions or DataWeave components throughout the project.
This document discusses how to create a custom object store in Mule. It explains that an object store persists data between messages. To create a custom store, you implement the ObjectStore interface and can override interfaces like ListableObjectStore. The document provides a code snippet of a custom MyObjectStore class that uses a Hashtable to store objects, implementing the required ObjectStore methods. It notes that the custom store can then be configured in components that require object stores.
The document discusses using a parse template component in Mule to load an external file as a template, insert data from Mule message variables into the template, and set the output as the message payload. Specifically, it provides an example of using a parse template to load an HTML template, insert employee data from a database query into the template using Mule expressions, and return the dynamic HTML as an HTTP response. The parse template allows pulling external files into flows, populating them with message data, and setting the results on the message.
The Groovy component allows developers to add Groovy scripts to applications without reengineering legacy code. It can intercept messages and alter property values as messages flow through. The component supports directly entering script text, referencing an external script file, or using a Java bean. Examples demonstrate string replacement using parameters, throwing exceptions, and making a thread sleep.
The Expression Component evaluates a Mule Expression and unlike the Expression Transformer, it does not replace the message payload with the results. It can be configured with a display name and expression to evaluate directly or by referencing an expression in a file. Examples show assigning a session variable to the payload.
The document discusses how to create a custom transformer in Mule by implementing the AbstractTransformer interface and overriding its methods. It provides an example of a KeyTransformer class that takes the source object, gets its string value, and prepends "Hello " to a substring. The document explains that this custom transformer can then be added via the Custom Transformer component in Mule to apply this transformation.
This document discusses creating a custom aggregation strategy for the Scatter-Gather routing component in Mule. It describes implementing the AggregationStrategy interface and overriding the default CollectAllAggregationStrategy. The snippet provided shows an example aggregation strategy class that collects successful responses even if errors occur, logging any exceptions. To use it, the custom strategy class would be provided in the Scatter-Gather component configuration.
This document provides instructions for creating a custom message aggregator in Mulesoft. It explains that a custom aggregator can be implemented by extending the AbstractAggregator interface and overriding its standard implementations. It includes a code snippet for a custom TestAggregator class that overrides the getCorrelatorCallback method to aggregate event payloads by appending them to a string buffer. The custom aggregator can then be added to Mule flows via the Custom Aggregator component.
This document describes a Byte Array to Hex String Transformer that can convert between byte arrays and hexadecimal strings. It requires knowledge of hexadecimal numbers and involves placing the transformer where transformations are needed between byte arrays and hex strings.
This document discusses how to create a custom filter in Mule by implementing the org.mule.api.routing.filter interface and overriding existing filter implementations. It provides an example of a custom InputStreamFilter that checks if the message payload is a java.io.InputStream. The filter can then be configured in a Mule flow using the <custom-filter> element.
This document describes a transformer that converts hexadecimal strings to byte arrays. It requires knowledge of hexadecimal numbers and involves placing the transformer where transformations between hex strings and byte arrays are needed. The transformer allows converting between those two data types.
The XML to DOM Transformer transforms an XML message payload into an org.w3c.dom.Document object. To use it, place the XML to DOM Transformer where needed and specify the returnClass as org.w3c.dom.Document or any supported org.w3c.dom.Document implementation. The transformer then converts the XML payload into a DOM document for further processing.
This document describes a DOM to XML Transformer that converts an XML payload such as a Document Object Model (DOM), XML stream, or Source object into a serialized String representation. It accepts common XML data types like DOM documents, XML streams, and Sources. To use it, place the transformer where transformation of the XML payload is needed.
The Object to Input Stream Transformer converts serializable objects to an input stream by taking objects and converting them to byte arrays using the String.getBytes() method. It should be placed where transformations are needed and converts objects to streams for further processing.
The Byte Array to Object Transformer converts a byte array to an object by either de-serializing the array or converting it to a string. It is used by placing the transformer where transformation is needed and specifying the return class to validate the correct object type is returned, such as an array using a class name postfixed with '[]'.
This transformer converts a byte array to a string. It allows specifying the MIME type like text/plain or application/json for the output. To use it, place the Byte Array to String transformer where transformation is needed from a byte array to string.
This transformer converts objects to human-readable strings, useful for debugging. It can transform input streams to strings. To use it, place the object to string transformer where object transformation is needed.
Mule supports converting CSV data to JSON format using the Dataweave component. The document describes setting the output to application/json and designating the payload as the output, which converts CSV data containing billing addresses for three companies into a JSON array with each company's data in an object.
Flutter is a popular open source, cross-platform framework developed by Google. In this webinar we'll explore Flutter and its architecture, delve into the Flutter Embedder and Flutter’s Dart language, discover how to leverage Flutter for embedded device development, learn about Automotive Grade Linux (AGL) and its consortium and understand the rationale behind AGL's choice of Flutter for next-gen IVI systems. Don’t miss this opportunity to discover whether Flutter is right for your project.
What is Augmented Reality Image Trackingpavan998932
Augmented Reality (AR) Image Tracking is a technology that enables AR applications to recognize and track images in the real world, overlaying digital content onto them. This enhances the user's interaction with their environment by providing additional information and interactive elements directly tied to physical images.
Zoom is a comprehensive platform designed to connect individuals and teams efficiently. With its user-friendly interface and powerful features, Zoom has become a go-to solution for virtual communication and collaboration. It offers a range of tools, including virtual meetings, team chat, VoIP phone systems, online whiteboards, and AI companions, to streamline workflows and enhance productivity.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Utilocate offers a comprehensive solution for locate ticket management by automating and streamlining the entire process. By integrating with Geospatial Information Systems (GIS), it provides accurate mapping and visualization of utility locations, enhancing decision-making and reducing the risk of errors. The system's advanced data analytics tools help identify trends, predict potential issues, and optimize resource allocation, making the locate ticket management process smarter and more efficient. Additionally, automated ticket management ensures consistency and reduces human error, while real-time notifications keep all relevant personnel informed and ready to respond promptly.
The system's ability to streamline workflows and automate ticket routing significantly reduces the time taken to process each ticket, making the process faster and more efficient. Mobile access allows field technicians to update ticket information on the go, ensuring that the latest information is always available and accelerating the locate process. Overall, Utilocate not only enhances the efficiency and accuracy of locate ticket management but also improves safety by minimizing the risk of utility damage through precise and timely locates.
DDS Security Version 1.2 was adopted in 2024. This revision strengthens support for long runnings systems adding new cryptographic algorithms, certificate revocation, and hardness against DoS attacks.
UI5con 2024 - Boost Your Development Experience with UI5 Tooling ExtensionsPeter Muessig
The UI5 tooling is the development and build tooling of UI5. It is built in a modular and extensible way so that it can be easily extended by your needs. This session will showcase various tooling extensions which can boost your development experience by far so that you can really work offline, transpile your code in your project to use even newer versions of EcmaScript (than 2022 which is supported right now by the UI5 tooling), consume any npm package of your choice in your project, using different kind of proxies, and even stitching UI5 projects during development together to mimic your target environment.
Why Mobile App Regression Testing is Critical for Sustained Success_ A Detail...kalichargn70th171
A dynamic process unfolds in the intricate realm of software development, dedicated to crafting and sustaining products that effortlessly address user needs. Amidst vital stages like market analysis and requirement assessments, the heart of software development lies in the meticulous creation and upkeep of source code. Code alterations are inherent, challenging code quality, particularly under stringent deadlines.
Introducing Crescat - Event Management Software for Venues, Festivals and Eve...Crescat
Crescat is industry-trusted event management software, built by event professionals for event professionals. Founded in 2017, we have three key products tailored for the live event industry.
Crescat Event for concert promoters and event agencies. Crescat Venue for music venues, conference centers, wedding venues, concert halls and more. And Crescat Festival for festivals, conferences and complex events.
With a wide range of popular features such as event scheduling, shift management, volunteer and crew coordination, artist booking and much more, Crescat is designed for customisation and ease-of-use.
Over 125,000 events have been planned in Crescat and with hundreds of customers of all shapes and sizes, from boutique event agencies through to international concert promoters, Crescat is rigged for success. What's more, we highly value feedback from our users and we are constantly improving our software with updates, new features and improvements.
If you plan events, run a venue or produce festivals and you're looking for ways to make your life easier, then we have a solution for you. Try our software for free or schedule a no-obligation demo with one of our product specialists today at crescat.io
SOCRadar's Aviation Industry Q1 Incident Report is out now!
The aviation industry has always been a prime target for cybercriminals due to its critical infrastructure and high stakes. In the first quarter of 2024, the sector faced an alarming surge in cybersecurity threats, revealing its vulnerabilities and the relentless sophistication of cyber attackers.
SOCRadar’s Aviation Industry, Quarterly Incident Report, provides an in-depth analysis of these threats, detected and examined through our extensive monitoring of hacker forums, Telegram channels, and dark web platforms.
Neo4j - Product Vision and Knowledge Graphs - GraphSummit ParisNeo4j
Dr. Jesús Barrasa, Head of Solutions Architecture for EMEA, Neo4j
Découvrez les dernières innovations de Neo4j, et notamment les dernières intégrations cloud et les améliorations produits qui font de Neo4j un choix essentiel pour les développeurs qui créent des applications avec des données interconnectées et de l’IA générative.
Top Features to Include in Your Winzo Clone App for Business Growth (4).pptxrickgrimesss22
Discover the essential features to incorporate in your Winzo clone app to boost business growth, enhance user engagement, and drive revenue. Learn how to create a compelling gaming experience that stands out in the competitive market.
2. Prerequisites
Understanding of basic endpoints such as HTTP and File, e.g.:
http://localhost:8081/app/b
file:///D/invoice/input
Understanding of cron expressions. A good place to learn them is:
http://www.quartz-scheduler.org/documentation/quartz-2.1.x/tutorials/crontrigger.html
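Quartz cron expressions have seven space-separated fields: seconds, minutes, hours, day-of-month, month, day-of-week, and an optional year. As a quick orientation, the expression used in the file-polling example later in this deck breaks down as follows:

```
0/10 15 12 * * ?
 |   |  |  | | +-- day-of-week : any (?)
 |   |  |  | +---- month       : every month
 |   |  |  +------ day-of-month: every day
 |   |  +--------- hours       : 12 (the noon hour)
 |   +------------ minutes     : 15
 +---------------- seconds     : 0/10 = every 10 seconds, starting at second 0
```

So this expression fires at 12:15:00, 12:15:10, ... through 12:15:50 each day.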
3. Quartz Connector
The Quartz Connector supports the scheduling of programmatic events, both inside and outside your Mule
flow. Through a quartz endpoint, you can trigger flows that don’t depend on receiving any external input to
execute at scheduled times.
For instance, an inbound Quartz endpoint can trigger inbound events, such as temperature reports from a
remote location, at regular intervals.
Outbound Quartz endpoints can delay otherwise imminent events. For example, you can prevent outgoing
email from being sent as soon as it has completed processing in your Mule flow. Instead, you can use
Quartz to delay sending it until the top of the next hour.
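As a sketch of that email-delay idea (hypothetical: the `Email-Out` SMTP endpoint is assumed and not defined in this deck), an outbound Quartz endpoint firing at the top of every hour could look like this:

```xml
<!-- cron "0 0 * * * ?" fires at second 0, minute 0 of every hour: the "top of the next hour" -->
<quartz:outbound-endpoint jobName="DelayedEmailJob" cronExpression="0 0 * * * ?"
    connector-ref="Quartz" doc:name="Quartz">
  <quartz:scheduled-dispatch-job>
    <!-- "Email-Out" is a placeholder for an existing SMTP outbound endpoint -->
    <quartz:job-endpoint ref="Email-Out"/>
  </quartz:scheduled-dispatch-job>
</quartz:outbound-endpoint>
```

The message that reaches the outbound endpoint is held and dispatched to the referenced endpoint at the next cron firing time, rather than immediately.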
4. Inbound Quartz Endpoint
A Quartz inbound endpoint can be used to generate events. It is most useful when you want to trigger a
flow at a given interval (or cron expression) rather than have an external event trigger the flow.
Polling
Creating custom events
5. Polling with Quartz
The Quartz endpoint below polls for files using the "File" endpoint. It polls every 10 seconds
from 12:15:00 PM until 12:15:50 PM every day.
<file:endpoint name="File" path="./src/main/resources/input" responseTimeout="10000" doc:name="File"/>
<flow name="quartz-polling-with-incomingfile-in-folder">
  <quartz:inbound-endpoint jobName="FilePollingJob" cronExpression="0/10 15 12 * * ?"
      responseTimeout="10000" doc:name="Quartz" connector-ref="Quartz" repeatInterval="0">
    <quartz:endpoint-polling-job>
      <quartz:job-endpoint ref="File"/>
    </quartz:endpoint-polling-job>
  </quartz:inbound-endpoint>
</flow>
6. Example 2
The Quartz endpoint below issues an HTTP GET request to http://localhost:8081/app/b every 30
seconds, firing at 11:05:00 AM and 11:05:30 AM.
<flow name="quartz-polling-with-http-get-invoke">
  <quartz:inbound-endpoint jobName="HTTPPollingJob" cronExpression="0/30 5 11 * * ?"
      connector-ref="Quartz" responseTimeout="10000" doc:name="Quartz" repeatInterval="0">
    <quartz:endpoint-polling-job>
      <quartz:job-endpoint address="http://localhost:8081/app/b"/>
    </quartz:endpoint-polling-job>
  </quartz:inbound-endpoint>
</flow>
7. Generate Events with Quartz
The Quartz endpoint below generates an event with the payload "Event from Quartz !!" every 10
seconds from 1:36:00 PM until 1:36:50 PM.
<flow name="app7-3Flow1">
  <quartz:inbound-endpoint jobName="SimpleEventCreateJob" cronExpression="0/10 36 13 * * ?"
      repeatInterval="0" responseTimeout="10000" doc:name="Quartz">
    <quartz:event-generator-job>
      <quartz:payload>Event from Quartz !!</quartz:payload>
    </quartz:event-generator-job>
  </quartz:inbound-endpoint>
</flow>
8. Outbound Quartz Endpoint
An outbound Quartz endpoint allows existing events to be stored and fired at a later time/date.
Dispatching events
Dispatching custom events
9. Dispatching events with Quartz
Example 1
The Quartz endpoint below dispatches the payload to the "File-Outbound-Endpoint" job endpoint,
writing a file every 10 seconds from 1:43:00 PM until 1:43:50 PM, as defined by the cron expression.
<file:endpoint path="./src/main/resources/output" name="File-Outbound-Endpoint"
    responseTimeout="10000" doc:name="File"/>
<flow name="quartz-schedule-dispatch-file">
  <http:listener config-ref="HTTP_Listener_Configuration" path="/app/c" doc:name="HTTP"/>
  <set-payload value="#['Payload for Dispatch !']" doc:name="Set Payload"/>
  <quartz:outbound-endpoint jobName="FileDispatchJob" cronExpression="0/10 43 13 * * ?"
      connector-ref="Quartz" responseTimeout="10000" doc:name="Quartz">
    <quartz:scheduled-dispatch-job>
      <quartz:job-endpoint ref="File-Outbound-Endpoint"/>
    </quartz:scheduled-dispatch-job>
  </quartz:outbound-endpoint>
</flow>
10. Custom Quartz Job
We can write our own Quartz job by implementing the org.quartz.Job interface. This lets us use
plain Java to dispatch an event at the scheduled time.
Override public void execute(JobExecutionContext jobExecutionContext) throws
JobExecutionException to define what should happen when the job fires.
Supply the job instance through the source referenced by the evaluator, and the fully qualified
name of the Java class in the expression field. Mule then invokes that class's execute() method
at the time specified by the cron expression.
11. Example of Custom Quartz Job:
package org.rahul.quartz.job;

import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;

import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

public class CustomQuartzJob implements org.quartz.Job {

    private String data;

    public String getData() {
        return data;
    }

    public void setData(String data) {
        this.data = data;
    }

    @Override
    public void execute(JobExecutionContext jobExecutionContext) throws JobExecutionException {
        try {
            // Write the injected payload to a file each time the job fires
            FileOutputStream fos = new FileOutputStream(
                    "D:\\AnypointStudio-5.4.3\\Workspace\\app7\\src\\main\\resources\\output\\output.txt");
            fos.write(this.data.getBytes());
            fos.close();
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
12. We pass an instance of the custom Quartz job through the source referenced by the evaluator,
and supply the fully qualified name of the job class in the expression field. In this example the
job instance is built in a DataWeave transformer and read from the payload.
Example snippet below:
<dw:transform-message doc:name="Transform Message">
  <dw:set-payload><![CDATA[%dw 1.0
%output application/java
---
{
  data: "Hello from Quartz!!"
} as :object {
  class : "org.rahul.quartz.job.CustomQuartzJob"
}]]></dw:set-payload>
</dw:transform-message>
<quartz:outbound-endpoint jobName="customJob1" responseTimeout="10000" doc:name="Quartz"
    cronExpression="0/10 29 12 * * ?">
  <quartz:custom-job-from-message evaluator="payload" expression="org.rahul.quartz.job.CustomQuartzJob" />
</quartz:outbound-endpoint>