After explaining what problem Reactive Programming solves, I will give an introduction to one implementation: RxJava. I show how to compose Observables first without concurrency and then with Schedulers. I finish the talk by showing examples of flow control and drawbacks.
Inspired by https://www.infoq.com/presentations/rxjava-reactor and https://www.infoq.com/presentations/rx-service-architecture
Code: https://github.com/toff63/Sandbox/tree/master/java/rsjug-rx/rsjug-rx/src/main/java/rs/jug/rx
This document summarizes lessons learned from deploying Puppet code globally at high speed. The key changes were moving from SVN to Git for version control, parallelizing deployments using MCollective instead of SSH loops, using MCollective policies instead of sudo, and switching to a pull model over push. These changes allowed deployments to be reduced from 4 minutes to 4 seconds. Environments were used to separate code for different teams and stages. A custom MCollective agent was created to deploy Git branches as Puppet environments. Cron jobs were used to pull updates to environments. Overall this approach improved the speed, consistency, and security of global Puppet deployments.
My talk from the Bay Area PuppetCamp about deploying Puppet code to a global network of Puppet masters as quickly as possible.
Covers the design and implementation of the TIM Group (and now Yelp) puppetupdate MCollective agent: https://github.com/Yelp/puppetupdate/
This document discusses Apache Pulsar, which provides a unified solution for messaging, storage, and stream processing. It can handle real-time data processing by unifying messages, computing, and storage. Key features include guaranteed ordering, high throughput, durability, geo-replication, and delivery guarantees. Pulsar uses Apache BookKeeper for storage and supports streaming, queuing, and functions to enable stream processing.
Yuta Iwama completed an internship where they added several new features and enhancements to Fluentd. Some of the key additions included implementing a counter API, adding data compression to buffers and forwards, creating a new simpler output plugin for secondary sections, developing a CLI tool to read dumped log data, and optimizing multiple filter calls. The internship provided valuable experience contributing to an open source middleware project and learning about aspects of design, development, and code quality.
This document summarizes the technology stack and use of websockets at oneplaylist.fm. The key aspects are:
- The stack includes Ruby on Rails, Redis, EventMachine, HAProxy, Resque, MongoDB, CoffeeScript, and Elasticsearch.
- HAProxy is used for TCP load balancing and handles HTTP as well, distributing traffic across multiple Rails app servers, Elasticsearch instances, and the EventMachine websocket server.
- Websockets are handled via a TCP connection to the EventMachine server through a separate subdomain, keeping HTTP requests on the main app domain.
- Redis is used for centralized communication and state management via Pub/Sub, with tokens mapping users to channels and event data pushed
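The token-to-channel design in the last bullet can be sketched as a toy, in-memory stand-in. The real system uses Redis Pub/Sub from Ruby; the sketch below is language-neutral Java, and every class and name in it (TokenPubSub, register, the channel strings) is illustrative, not from the talk.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Toy in-memory stand-in for the Redis Pub/Sub design described above:
// a token maps each user to a channel, and event data published to a
// channel is pushed to every subscriber of that channel.
public class TokenPubSub {
    private final Map<String, String> tokenToChannel = new ConcurrentHashMap<>();
    private final Map<String, List<Consumer<String>>> subscribers = new ConcurrentHashMap<>();

    // Associate a user's connection token with a channel.
    public void register(String token, String channel) {
        tokenToChannel.put(token, channel);
    }

    // A websocket connection presents its token and starts receiving events.
    public void subscribe(String token, Consumer<String> push) {
        String channel = tokenToChannel.get(token);
        if (channel == null) throw new IllegalArgumentException("unknown token");
        subscribers.computeIfAbsent(channel, c -> new CopyOnWriteArrayList<>()).add(push);
    }

    // Publish event data to everyone subscribed to the channel.
    public void publish(String channel, String eventData) {
        subscribers.getOrDefault(channel, List.of()).forEach(push -> push.accept(eventData));
    }

    public static void main(String[] args) {
        TokenPubSub bus = new TokenPubSub();
        bus.register("tok-1", "room-42");
        bus.subscribe("tok-1", data -> System.out.println("push: " + data));
        bus.publish("room-42", "track-changed");
    }
}
```

In the real architecture Redis plays the role of the two maps, so any app server or the EventMachine process can publish and every subscribed websocket connection receives the event.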
Internship final report @ Treasure Data Inc. (Ryuichi Ito)
Ryuichi Ito completed an internship at Treasure Data Inc. where he worked on the open source machine learning library Hivemall. He conducted benchmarks of Hivemall's performance on logistic regression and random forest algorithms. He also added new features to Hivemall including a system testing framework, feature binning, feature selection, and Spark integrations. The benchmarks showed that Hivemall was relatively slow for logistic regression compared to other tools but had good scalability. For random forests, Hivemall performed well on small to medium datasets but struggled on very large datasets.
ESUG 2014, Cambridge
Wed, August 20, 11:00am – 11:45am
Video:
Part1: https://www.youtube.com/watch?v=_Mv7SX-8Vlk
Part2: https://www.youtube.com/watch?v=qdZq2IZBm4k
Description
Abstract: In this talk we will present the advances and new features in Pharo 3.0. We will present the current work on Pharo 4.0 and beyond.
Infrastructure & System Monitoring using Prometheus (Marco Pas)
The document introduces infrastructure and system monitoring using Prometheus. It discusses the importance of monitoring, common things to monitor like services, applications, and OS metrics. It provides an overview of Prometheus including its main components and data format. The document demonstrates setting up Prometheus, adding host metrics using Node Exporter, configuring Grafana, monitoring Docker containers using cAdvisor, configuring alerting in Prometheus and Alertmanager, instrumenting application code, and integrating Consul for service discovery. Live code demos are provided for key concepts.
Spark Streaming provides an easier API for streaming data than Storm, replacing Storm's spouts and bolts with Akka actors. It integrates better with Hadoop and makes time a core part of its API. This document provides instructions for setting up Spark Streaming projects using sbt or Maven and includes a demo reading from Kafka and processing a Twitter stream.
Kafka Summit NYC 2017 - Running Hundreds of Kafka Clusters with 5 People (Confluent)
Tom Crayford discusses his experience running hundreds of Apache Kafka clusters on Heroku with a small team. Some key points discussed include:
- Using automation to manage clusters and reduce manual work required
- Common issues encountered like disk growth from log compaction bugs and addressing them by scanning clusters for anomalies
- Kafka's built-in high availability and how it helped during an AWS EBS failure event
- Novel failure cases encountered like a JVM memory leak from gzip usage and working to fix it
- Importance of taking breaks and not wasting time when operating clusters at scale.
Orchestrated Functional Testing with Puppet-spec and Mspectator - PuppetConf ... (Puppet)
This document discusses using Puppet-spec and Mspectator to orchestrate functional testing of Puppet configurations. Puppet-spec allows running unit and integration tests as part of Puppet runs, while Mspectator provides RSpec matchers to run functional tests across nodes using MCollective. The tests validate resources, packages, files and more, failing runs when tests don't pass to ensure configurations meet standards.
This document provides an overview of Socorro, Mozilla's system for processing Firefox crash reports with Python. It describes the basic architecture, how a crash report moves through the system from collection to processing to storage in databases. It also discusses the scale of Socorro, currently processing over 2.5 million crash reports per day and storing over 110 terabytes of crash data. The document outlines Socorro's implementation including the various components, tools, and techniques used to manage complexity at this large scale.
This document discusses several minor technical issues and proposed solutions in ATS:
1. Thread initialization is done unsafely by starting threads and later updating data structures, which is risky. The proposal is to use continuations to initialize threads safely during startup.
2. Continuation tracking is added to identify the origin of continuations for debugging. A "plugin context" tracks the originating plugin to tag continuations.
3. std::chrono is proposed to replace custom time handling in ATS. It provides type-safe time durations and timepoints without loss of precision during conversions.
4. Other active projects include partial object caching, event loop improvements, plugin priorities, making assertions no-ops in
This document summarizes a talk about reflection and abstract syntax trees (AST) in Pharo5. It discusses how the AST is now more integrated and accessible via methods like #ast. Annotations called MetaLinks can be added to AST nodes to modify behavior. A reflective compiler called Basis uses AST annotations and metaobjects to implement features like live recompilation without restarting the image. Questions from the audience are then invited.
This document provides instructions for using the Vampir toolchain at Indiana University (IU) on the Quarry and BigRed clusters. It describes how to run Vampir, VampirServer, and VampirTrace on these clusters, including which software modules to load, how to run jobs in PBS, and where to find trace files. The document also lists the software versions of Vampir, VampirServer, VampirTrace, OpenMPI, and compilers available on each cluster.
This document discusses Java 8 concurrency abstractions including asynchronous result processing using CompletableFuture and optimistic locking using StampedLock. It provides an overview and comparison to previous concurrency APIs. The agenda includes exploring CompletableFuture features like asynchronous execution, chaining reactions, and exception handling. It also covers using StampedLock for optimistic reads, comparing it to the previous ReentrantReadWriteLock approach. Examples are shown for common use cases of these new concurrency APIs.
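Both APIs summarized above are in the JDK, so the talk's two themes can be sketched directly: chaining reactions on a CompletableFuture with an in-line recovery path, and an optimistic read on a StampedLock that falls back to a pessimistic read lock if a writer intervened. The Point class and all values are illustrative, not taken from the talk's own examples.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.locks.StampedLock;

public class Java8ConcurrencyDemo {
    // A mutable point guarded by a StampedLock and read optimistically.
    static class Point {
        private final StampedLock lock = new StampedLock();
        private double x, y;

        void move(double dx, double dy) {
            long stamp = lock.writeLock();          // exclusive write
            try { x += dx; y += dy; }
            finally { lock.unlockWrite(stamp); }
        }

        double distanceFromOrigin() {
            long stamp = lock.tryOptimisticRead();  // no blocking, just a stamp
            double cx = x, cy = y;
            if (!lock.validate(stamp)) {            // a writer intervened: fall back
                stamp = lock.readLock();
                try { cx = x; cy = y; }
                finally { lock.unlockRead(stamp); }
            }
            return Math.hypot(cx, cy);
        }
    }

    public static void main(String[] args) {
        // CompletableFuture: chain reactions asynchronously, recover from errors in-line.
        String result = CompletableFuture
                .supplyAsync(() -> 21)              // runs on ForkJoinPool.commonPool()
                .thenApply(n -> n * 2)              // continuation on the result
                .exceptionally(ex -> -1)            // recovery path, not hit here
                .thenApply(n -> "answer=" + n)
                .join();
        System.out.println(result);                 // answer=42

        Point p = new Point();
        p.move(3, 4);
        System.out.println(p.distanceFromOrigin()); // 5.0
    }
}
```

The optimistic read costs nothing when no writer is active, which is the case StampedLock optimizes for compared to ReentrantReadWriteLock's always-acquired read lock.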
Apache Samza is a distributed stream processing framework that uses Kafka for messaging and YARN to provide fault tolerance, processor isolation, security, and resource management.
Introduction to Systems Management with SaltStack (Craig Sebenik)
This document provides an introduction and overview of SaltStack, an open source system and configuration management tool. It discusses SaltStack's architecture including the master and minion components, execution modules, states, grains and pillars for managing data. It also covers extending SaltStack through templates, custom modules, and the Python API. The document demonstrates SaltStack's capabilities through examples and concludes with a summary of key features and references for further information.
Ractor is a new experimental feature in Ruby 3.0 that allows Ruby code to run in parallel on CPUs. It manages objects per Ractor and can move objects between Ractors, making moved objects invisible to the original Ractor. It can share certain "shareable" objects like modules, classes, and frozen objects between Ractors. For web applications to fully utilize Ractors, an experimental application server called Right Speed was created that uses Rack and runs processing workers on Ractors. However, there are still problems to address like exceptions when closing connections and accessing non-shareable constants and instance variables across Ractors before Ractors can be ready for production use in web applications.
This document summarizes a presentation about Alpakka, a Reactive Enterprise Integration library for Java and Scala based on Reactive Streams and Akka Streams. Alpakka provides connectors to various data sources and messaging systems that allow them to be accessed and processed using Akka Streams. Examples of connectors discussed include Kafka, MQTT, JMS, Elasticsearch and various cloud platforms. The document also provides an overview of Akka Streams and how they allow building responsive, asynchronous and resilient data processing pipelines.
In this slidecast, Jeff Squyres from Cisco Systems presents: How to make MPI Awesome - MPI Sessions. As a proposal for future versions of the MPI Standard, MPI Sessions could become a powerful tool to improve system resiliency as we move towards exascale.
Watch the video presentation: http://wp.me/p3RLHQ-f4U
Learn more: http://blogs.cisco.com/performance/mpi-sessions-a-proposal-for-the-mpi-forum
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
The document discusses the reactor pattern and event-driven programming. It explains that the reactor pattern uses an event loop to process I/O asynchronously without blocking. EventMachine is given as an example of a reactor implementation in Ruby. It also explains how the thin web server uses EventMachine and the reactor pattern to handle requests asynchronously by delegating I/O to EventMachine and processing requests with thin handlers.
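The core of the pattern above, one loop demultiplexing ready events and dispatching each to a registered handler, can be shown as a toy. This is a deliberately minimal sketch of the pattern itself, not EventMachine's or thin's API; the MiniReactor class and its method names are invented for illustration, and a real reactor would demultiplex I/O readiness (e.g. via select/epoll) rather than an in-memory queue.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;
import java.util.function.Consumer;

// A toy reactor: a single thread takes the next ready event and dispatches
// it to the handler registered for its type -- handlers must never block,
// or the whole loop stalls.
public class MiniReactor {
    static final class Event {
        final String type;
        final Object data;
        Event(String type, Object data) { this.type = type; this.data = data; }
    }

    private final Queue<Event> queue = new ArrayDeque<>();
    private final Map<String, Consumer<Object>> handlers = new HashMap<>();

    void on(String type, Consumer<Object> handler) { handlers.put(type, handler); }
    void emit(String type, Object data) { queue.add(new Event(type, data)); }

    // The event loop: demultiplex, dispatch, repeat until no events remain.
    void run() {
        while (!queue.isEmpty()) {
            Event e = queue.poll();
            Consumer<Object> h = handlers.get(e.type);
            if (h != null) h.accept(e.data);
        }
    }

    public static void main(String[] args) {
        MiniReactor reactor = new MiniReactor();
        reactor.on("data", d -> System.out.println("got:" + d));
        reactor.emit("data", "req1");
        reactor.emit("data", "req2");
        reactor.run();
    }
}
```

In EventMachine the loop is hidden inside `EM.run`, and the handlers are the callbacks you attach to connections; thin plugs its request handlers into that loop the same way.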
With more businesses moving to cloud-based solutions everyday, we must re-think the strategies used to deploy Perl applications and related libraries, given the volatile aspects of the cloud and its constraints.
In this talk I go over the challenges posed by virtualised environments, and consider several solutions to them. The use cases are all related to Amazon's EC2, but will easily be adapted for GoGrid, Mosso, and others.
Server push is a feature of HTTP/2 that allows a server to preemptively send resources to a client before they are requested. This can reduce the number of round trips needed and decrease load times. Apache Traffic Server (TS) has introduced experimental support for server push via the TSHttpTxnServerPush API since version 7.0. However, correctly implementing server push is challenging, as servers do not know which resources to push and pushes could delay responses or send unneeded data. Future work aims to add server push metrics and improve the API based on feedback.
The document discusses async-await in C# and how it works under the hood. It begins with questions about what happens when await is used. It then explains that async-await is used to compose asynchronous code using task continuations generated by the compiler. It discusses how await marks a continuation and depends on the context to determine threading. Windows I/O is used as an example of asynchronous operations without threads. The document concludes with recommendations to avoid deadlocks by not blocking on async code and to only use async-void for event handlers.
This document summarizes the weekly food spending and eating habits of families from different countries. It shows that spending varies widely, from €0.89 in Chad to €534 in Norway, depending on purchasing power. Each country and family tends to consume different foods according to its culture and available resources. Many countries waste large amounts of food without being aware of the harm this causes to the environment.
The document summarizes key concepts of consumer behavior such as demographic and social stratification, types of learning, subcultures, reference groups, and motivation. It defines stratification as a hierarchical order based on wealth or reputation. It explains different types of learning, such as receptive, discovery, rote, and meaningful learning. It defines subcultures as groups of young people with shared preferences. It describes reference groups as people or groups used for comparison or as guides for behavior. Finally, it dist
The document compares user visits and pants sales for three popular brands: Levi's, Tommy, and Paco. According to the report, Levi's has more visits and sales than the other two brands, as it is consumers' preferred brand for pants. Tommy has the second-highest amount, while Paco has the lowest.
E:\Mis Documentos\Santiago\Cuarto Ciclo\Comunicación Eficaz En El Medio Laboral (Tecnológico Sudamericano)
The document discusses the importance of effective communication in the workplace. It points out that improving a company's productivity requires focusing on communication with employees and treating them with respect. It also mentions that managers should consider workers' well-being, not just the company's goals, and that trust is necessary to achieve good communication. In conclusion, it emphasizes that to reach an organization's goals, human talent must be valued and a good relationship maintained with
The Volkswagen company focuses on the processes of human resources integration and staff training development. The document lists Estefanía Albarracín, Liliana Auquilla, and Claudia Cruz as team members and mentions that Volkswagen engages in all human resources management processes but places the most emphasis on integrating human talent and developing staff training.
The document describes the basic guidelines for teaching at the Universidad Iberoamericana Puebla, focusing on teaching competencies. It explains that teaching competencies make it possible to foster generic competencies in students, such as communication, intellectual leadership, innovation, organization, a global humanist perspective, and self-management. It also defines the professor's generic competencies as reflection on and renewal of practice, communication, and intellectual leadership.
The document describes different types of digital media such as RSS, podcasts, and videocasts. RSS is an XML format for distributing updated information to subscribers. Podcasts are downloadable audio files distributed via RSS so that users can subscribe and listen whenever they want. Videocasts are similar but in video format.
Current and future challenges of the poultry industry (mithu mehr)
The document discusses current and future challenges facing the poultry industry. It identifies strong global competition, changes in social perceptions around food safety and animal welfare, and emerging diseases as major challenges. Controlling foodborne pathogens like Salmonella and Campylobacter in poultry products will be an ongoing public health issue. Developing antibiotic resistance in bacteria is another concern, as is ensuring high animal welfare standards as consumer expectations increase. Overall, the poultry industry will need to address these complex challenges through cooperation across the production chain.
This document describes a GeoGebra workshop on using the input bar to create the diagonals of a variable polygon. Two initial points A and B are created, along with a slider n that varies the number of sides of the polygon from 1 to 20. Using the input bar, lists of points and segments are generated to draw the polygon and its diagonals, whose colors and styles can be modified. The document includes steps for hiding points, changing object properties, and saving the file.
The food pyramid describes the different food groups people should consume to maintain a balanced diet. Balanced eating means taking in all the necessary nutrients in equilibrium, through adequate portions of the various food groups such as fruits, vegetables, whole grains, proteins, and dairy. A balanced diet is fundamental to maintaining the body's health and nutrition.
Companies often unwittingly provoke spite and a desire for revenge in their customers. Angry and motivated, customers write bad reviews, create Facebook pages, or find other ways to get back at a business that wronged them, even when doing so costs time and energy. Understanding the evolutionary psychology of spite is the first step in designing experiences and products that minimize the chance of provoking it.
The document discusses electronic commerce (e-commerce), defined as the buying and selling of products or services through electronic means such as the Internet. It explains that e-commerce has grown thanks to the spread of the Internet and has stimulated innovations such as electronic payments. It also describes some advantages of e-commerce, such as improvements in distribution and electronic commercial communications, as well as operational benefits for companies. Finally, it mentions some disadvantages, such as the lack of closeness between buyer and seller
The Earth is not a perfect sphere; it is flattened at the poles and bulges at the equator. It rotates on its axis every 23 hours, 56 minutes, and 4.1 seconds and orbits the Sun along a path of approximately 938.9 million km at a speed of 106,000 km/h. It is composed of the atmosphere, crust, asthenosphere, and core. Earthquakes and volcanic eruptions originate from movements in its interior.
This document provides guidance on asking questions during a job interview. It outlines 7 activities to help learners practice: 1) Reviewing vocabulary related to interviews; 2) Discussing experiences and examples of questions; 3) Listening to a sample interview dialogue; 4) Understanding why asking questions makes a good impression; 5) Analyzing case studies of good and bad questions; 6) Practicing dictation; and 7) Role playing asking questions. The key lessons are that interviewees should have 2-3 questions prepared and asking shows interest in the role, but questions should not be too personal or about non-work topics like holidays.
Universidad Técnica de Ambato
This document presents the strategic objectives of the Faculty of Accounting and Auditing of the Universidad Técnica de Ambato, Ecuador. The three main objectives are: 1) Training professionals with leadership and social responsibility to understand the socioeconomic reality and contribute to the region's development, 2) Conducting scientific, technological, and social research that generates innovation and helps overcome development problems, and 3) Linking university work with
This document defines and explains marginal costing. It states that marginal cost is the same as variable cost, which is the increase in costs from producing one additional unit within existing capacity. Marginal cost is calculated as direct materials, labor, expenses and variable overheads. It also explains the differences between absorption costing and marginal costing and how marginal costing is useful for decision making.
Large-scaled Deploy Over 100 Servers in 3 Minutes (Hiroshi Shibata)
The next-generation deployment strategy builds OS images with Packer and cloud-init so that over 100 servers can be deployed within 3 minutes through automation. The strategy uses Puppet for configuration management and builds minimal and application-specific images to reduce bootstrap time. All deployment operations are implemented through a CLI tool for rapid and automated scaling.
How to build continuously processing for 24/7 real-time data streaming platform? (GetInData)
You can read our blog post about it here: https://getindata.com/blog/how-to-build-continuously-processing-for-24-7-real-time-data-streaming-platform/
From Ansible's website: "Ansible is a radically simple IT automation engine that automates cloud provisioning, configuration management, application deployment, intra-service orchestration, and many other IT needs."
This introduction is based on the official Ansible docs, capturing the most important information to make Ansible's main concepts easy to understand.
This document discusses using message queues and AMQP for real-time system performance monitoring and job queuing. It describes how message queues allow producers and consumers to communicate asynchronously. The author details their experience building a job server using RabbitMQ, Catalyst, and Web::Hippie to queue and monitor long-running jobs. While the system is still a work in progress, all the major components are there and it is in use in production.
This document discusses using AMQP and RabbitMQ for real-time system performance monitoring and job queueing. It describes key AMQP concepts like exchanges, queues, routing keys, and delivery modes. It then discusses how the speaker built a job queueing system called CatalystX::JobServer that uses these concepts and RabbitMQ. Demo examples are shown of using this system to queue and monitor the status of jobs. Potential next steps and other solutions like Gearman are also mentioned.
Webinar: Queues with RabbitMQ - Lorna Mitchell (Codemotion)
Queues are a great addition to any application that has some tasks that need processing asynchronously. This could be sending a confirmation email, resizing an avatar, or recalculating a running total of some kind; in all those cases it would be cool to send the response back to the user and then sort out that task later. This session looks at how to use a RabbitMQ job queue in your application. It also looks at how to design elegant and robust long-running workers that will consume the jobs from the queue and process them. This session is ideal for technical leads, developers and architects alike.
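The queue-plus-worker pattern described above can be sketched without RabbitMQ at all. This toy Ruby illustration uses the standard library's thread-safe Queue as a stand-in for the broker (a real deployment would talk to RabbitMQ through a client gem; everything here is invented for illustration):

```ruby
# A producer enqueues jobs and returns immediately; a worker thread
# consumes them asynchronously, exactly the division of labor a
# RabbitMQ queue gives you across processes.
jobs = Queue.new       # stand-in for the broker queue
results = Queue.new    # where the worker reports finished work

worker = Thread.new do
  # pop blocks until a job arrives; :shutdown is our stop signal
  while (job = jobs.pop) != :shutdown
    results << "processed #{job}"   # long-running work goes here
  end
end

jobs << "resize avatar"
jobs << "send confirmation email"
jobs << :shutdown
worker.join
# results now holds one entry per job, in order
```

The key property is that the producer never waits on the work itself, which is why the user-facing response can be sent before the task runs.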
Introduction to Laravel Framework (5.2) (Viral Solani)
This document provides an overview of the Laravel PHP framework, including why it was created, its main features and components. Some key points:
- Laravel was created to guide developers to best practices and utilizes modern PHP features. It has an active community and good documentation.
- Its major components include routing, controllers, blade templating, Eloquent ORM, authentication, queues and more. It also uses Composer for dependency management.
- Other tools in the Laravel ecosystem help with deployment (Homestead, Forge), billing (Cashier), APIs (Lumen) and more. The framework is fully-featured but aims to be easy to learn and use.
- The document discusses using Fabric and Boto for automating tasks in cloud computing environments. Fabric allows running Python scripts and commands over SSH, while Boto is the Python API for interacting with AWS services like EC2.
- Examples are provided of writing basic Fabric files with tasks to run commands on remote servers. Key features covered include defining host groups with roles, enabling parallel execution of certain tasks, and setting failure handling modes.
- Automating tasks with Fabric and Boto can improve efficiency, consistency, and manageability of cloud infrastructure and deployments.
This document provides tips for writing LotusScript code for large systems with a focus on logging, performance, code reuse, and handling weird situations. Some key points include:
- Logging is important for stability and managing large systems. Recommends using OpenLog or creating and emailing log documents to avoid performance impacts.
- Views with click-sorted columns and unnecessary views hurt performance. Recommends minimizing views and avoiding click-sort.
- Agents need to be well-behaved to avoid overloading servers. Suggests profiling agents, breaking large tasks into multiple runs, and not relying on Agent Manager to kill misbehaving agents.
- Code reuse is important for maintenance. Recommends creating
Puppet Camp NYC 2014: Build a Modern Infrastructure in 45 min! (Puppet)
The document describes how to build a modern infrastructure using Puppet modules. It discusses setting up MCollective for orchestration, Sensu for monitoring, Logstash for logging, and Jenkins for continuous integration. A Puppet module called moderninfra is demonstrated that defines the architecture and installs/configures all of the required components including RabbitMQ, Elasticsearch, and Kibana. The full infrastructure can then be built out across multiple nodes by writing Hiera data and node definitions.
This document discusses daemons in Linux operating systems. It defines daemons as background processes that perform tasks like responding to network requests and hardware activity. Some key daemons mentioned include init, cron, xinetd, inetd, sshd, and atd. Details are provided on what each daemon does and how they are configured through files like cron.allow, cron.deny, xinetd.conf, and sshd_config. The document also explains how services are enabled and disabled for different daemons using commands like insserv and insserv -r.
Deployment of WebObjects applications on CentOS Linux (WO Community)
With the rise of cloud computing and the death of the Xserve, learn how you can deploy your WebObjects applications on a CentOS server. You will also get tips on how to secure your server so that you don't get hacked.
Common Pitfalls of Functional Programming and How to Avoid Them: A Mobile Gam... (gree_tech)
This material was presented at CUFP 2013.
Functional programming is already an established technology in many areas. However, the lack of skilled developers has been a challenging hurdle in the adoption of such languages. It is easy for an inexperienced programmer to fall into the many traps of functional programming, resulting in a loss of productivity and bad software quality. Resource leaks caused by Haskell's lazy evaluation, for instance, are only the tip of the iceberg. Knowledge sharing and a mature tool-assisted development process are ways to avoid such pitfalls. At GREE, one of the largest mobile gaming companies, we use Haskell and Scala to develop major components of our platform, such as a distributed NoSQL solution or an image storage infrastructure. However, only 11 programmers use functional programming in their daily tasks. In this talk, we will describe some unexpected functional programming issues we ran into, how we solved them, and how we hope to avoid them in the future. We have developed a system testing framework to enhance regression testing, spent lots of time documenting pitfalls, and introduced technical reviews. Recently, we even started holding lunchtime presentations about functional programming in order to attract beginners and prevent them from falling into the same traps.
Unmanned Aerial Vehicles can be automated using Metasploit to fingerprint clients, scan for servers, and exploit vulnerabilities. Metasploit provides built-in modules to automate scanning networks using tools like Nmap and Nexpose. Exploits and payloads can then be automatically run on vulnerable servers and clients. Post-exploitation activities can also be automated using Meterpreter scripts and plugins to perform tasks like privilege escalation, packet capture, and maintaining persistence.
Distributed app development with nodejs and zeromq (Ruben Tan)
This document discusses using Node.js and ZeroMQ for distributed application development. It defines distributed applications as apps distributed across multiple cloud locations that communicate via a standardized protocol. ZeroMQ is introduced as a socket library that can be used for inter-app communication, with common patterns being push-pull for sending data and req-rep for request-response. Scaling is discussed as adding more app instances for push-pull and adding more rep apps for req-rep. Sample ZeroMQ code in Node.js is also provided.
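The push-pull pattern mentioned above can be illustrated without ZeroMQ: a PUSH socket delivers messages to connected PULL workers in fair round-robin order. The Ruby sketch below models that distribution with plain arrays (worker names are invented for illustration):

```ruby
# Model three PULL workers as inboxes keyed by name.
workers = { "worker-1" => [], "worker-2" => [], "worker-3" => [] }
names = workers.keys

# A PUSH socket hands each message to the next worker in turn;
# we reproduce that with modular indexing.
jobs = (1..7).to_a
jobs.each_with_index do |job, i|
  workers[names[i % names.size]] << job   # round-robin delivery
end

workers["worker-1"]  # => [1, 4, 7]
```

This is also why scaling push-pull means simply connecting more PULL instances: each new worker joins the rotation and the load spreads automatically.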
The document discusses infrastructure as code (IAC) and its principles and categories. Some key points:
- IAC treats infrastructure like code by writing code to define, deploy, and update infrastructure. This allows infrastructure to be managed programmatically.
- Common categories of IAC include ad hoc scripts, configuration management tools like Ansible and Puppet, server templating tools like Packer, and server provisioning tools like Terraform.
- Benefits of IAC include automation, consistency, repeatability, versioning, validation, reuse, and allowing engineers to focus on code instead of manual tasks.
- AWS offers CloudFormation for provisioning AWS resources through templates. Other tools integrate with Cloud
This document discusses building hermetic systems without Docker. It defines hermetic systems as airtight and pure, with well-defined inputs and outputs. It discusses sources of non-determinism like external libraries and services that can introduce "leaks". It proposes using Clojure components and embedding services like Elasticsearch to build deterministic, reproducible systems. Components are reusable units with well-defined dependencies and lifecycles. Embedding services isolates the system from external changes. Randomness and time can also introduce non-determinism but may be modeled as reproducible services. The goal is to evaluate systems, identify leaks, and design trade-offs to build robust, hermetic systems.
Writing Asynchronous Programs with Scala & Akka (Yardena Meymann)
The document provides an overview of Yardena Meymann's background and experience working with asynchronous programming in Scala. It discusses some of the common tools and approaches for writing asynchronous programs in Scala, including Futures, Actors, Streams, HTTP clients/servers, and integration with Kafka. It highlights some of the challenges of asynchronous programming and how different tools address issues like error handling, retries, and backpressure.
Streamlining your puppet development workflow (Tomas Doran)
The document discusses ways to streamline a Puppet development workflow including using revision control, running Puppet in noop or automatic mode, moving changes slowly through testing and using branches, reporting on changes, and implementing testing strategies like unit testing with rspec-puppet and integration testing with serverspec. It also recommends tools like Foreman, Norman, Puppetfile, and Jenkins to improve testing and deployment.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features provide convenience and capability while sacrificing security. This best practices guide outlines steps users can take to better protect personal devices and information.
Generating privacy-protected synthetic data using Secludy and Milvus (Zilliz)
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Removing Uninteresting Bytes in Software Fuzzing (Aftab Hussain)
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are the slides of a talk given at the IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), 2022.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Fueling AI with Great Data with Airbyte Webinar (Zilliz)
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
HCL Notes and Domino license cost reduction in the world of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX models have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefits it brings you. Above all, you surely want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some practices that can lead to unnecessary expenses, e.g. using a person document instead of a mail-in database for shared mailboxes. We show you such cases and their solutions. And of course we explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and know-how to keep track of everything. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
These topics are covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Practical examples and best practices you can apply right away
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
Van Emden shows how Nx can simplify the developer's life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Taking AI to the Next Level in Manufacturing
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Climate Impact of Software Testing at Nordic Testing Days (Kari Kakkonen)
My slides at Nordic Testing Days 6.6.2024
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability, which can then be measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Unlock the Future of Search with MongoDB Atlas: Vector Search Unleashed (Malak Abu Hammad)
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
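As a rough illustration of what vector search computes under the hood, the sketch below ranks toy two-dimensional embeddings by cosine similarity. Production systems like MongoDB Atlas use approximate nearest-neighbor indexes over much higher-dimensional vectors; the vectors and labels here are invented:

```ruby
# Cosine similarity: dot product of the vectors divided by the
# product of their magnitudes. Closer to 1.0 means more similar.
def cosine(a, b)
  dot = a.zip(b).sum { |x, y| x * y }
  mag = ->(v) { Math.sqrt(v.sum { |x| x * x }) }
  dot / (mag.call(a) * mag.call(b))
end

query = [1.0, 0.0]                          # embedding of the query
docs = { "cat" => [0.9, 0.1],               # toy document embeddings
         "car" => [0.1, 0.9] }

# "Vector search" = return the document whose embedding is most
# similar to the query embedding.
best = docs.max_by { |_, vec| cosine(query, vec) }.first
# best == "cat"
```

The point of a vector index is simply to find that `max_by` winner without scanning every document.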
4. History
• Previously we used the daemons gem plus the daemon generator plugin.
• Headaches:
• Each daemon is a separate process.
• Not very DRY:
5-6. Not DRY
• Each daemon has two files:
• The actual code that runs as a daemon (e.g. network_point_update_daemon.rb)
• And a daemon control script (e.g. network_point_update_daemon_ctl)
7. Not DRY
• The only bit of code that actually does our work: AppUser.daily_amount_grant
• The Rails environment is loaded within each daemon process! Not good for memory.
• No centralized logging/error handling; poignantly recognized when we wanted exceptions emailed to us.
• Management of each daemon process required another entry into monit's config file (eww!):
11. Rooster
What does it do? Scratches the itches of our previous configuration.
12. Rooster
It's DRY
• One daemon, monitored by monit
• One Rails environment, loaded once
• Easier management: TCP server (easily accessed by telnet), rake tasks, more to come...
• Centralized and configurable error handling and logging:
Rooster::Runner.error_handler = lambda { |e| HoptoadNotifier.notify(e) }
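The configurable error handler shown above is easy to model in plain Ruby. This hypothetical TinyRunner (invented for illustration, not Rooster's actual implementation) shows the idea: every task runs inside one wrapper that rescues exceptions and hands them to a single, swappable lambda:

```ruby
# One swappable handler catches exceptions from every task, so
# notification (email, Hoptoad, logging) is configured in exactly
# one place instead of per daemon.
class TinyRunner
  class << self
    attr_accessor :error_handler
  end
  self.error_handler = ->(e) { warn "unhandled: #{e.message}" }

  def self.run_task
    yield
  rescue => e
    error_handler.call(e)   # centralized dispatch point
  end
end

captured = []
TinyRunner.error_handler = ->(e) { captured << e.message }
TinyRunner.run_task { raise "kitten escaped" }
# captured now holds ["kitten escaped"]
```

Swapping the lambda is all it takes to route errors somewhere new, which is exactly the property the `HoptoadNotifier.notify` one-liner above relies on.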
13-17. How?
• Rooster leverages 3 (optionally 4) excellent open source bits of software:
• EventMachine: provides event-driven I/O using the Reactor pattern. It lets you write network clients and servers without handling sockets.
• Rufus Scheduler: a Ruby gem for scheduling pieces of code (can leverage EventMachine if available).
• daemons: a Ruby gem that provides an easy way to wrap existing Ruby code to be run as a daemon, and to be controlled by simple start/stop/restart commands. (I know we moved away from our previous daemons-based solution, but not all daemons are bad.)
• Chronic (optional): a handy gem for natural language date/time parsing.
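To make the scheduler's role concrete, here is a toy stand-in for the kind of interval API Rufus Scheduler provides. This ToyScheduler is invented for illustration and drives time manually instead of using EventMachine timers:

```ruby
# Register blocks against an interval, then "tick" a fake clock to
# fire whatever is due -- the essence of `scheduler.every`.
class ToyScheduler
  def initialize
    @jobs = []
  end

  # Mimics the spirit of Rufus' every('10s') { ... }
  def every(seconds, &block)
    @jobs << { interval: seconds, last: 0, block: block }
  end

  # Advance the clock and run every job whose interval has elapsed.
  def tick(now)
    @jobs.each do |job|
      if now - job[:last] >= job[:interval]
        job[:last] = now
        job[:block].call
      end
    end
  end
end

fired = 0
s = ToyScheduler.new
s.every(10) { fired += 1 }
s.tick(10)   # due: fires
s.tick(15)   # only 5s elapsed: skipped
s.tick(20)   # due again: fires
# fired == 2
```

In the real gem the reactor loop supplies the ticks, which is why Rufus can lean on EventMachine when it is available.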
19-22. How?
• These components fit together thusly:
• The rooster daemon is started, and then kicks off the Rooster::Runner.
• Rooster::Runner runs the main EventMachine reactor loop, loads a Rufus::Scheduler, loads (and optionally schedules) each rooster task, and starts the ControlServer.
• Rooster::ControlServer is a TCP-based server that listens for Rooster control commands (e.g. schedule, unschedule, exit, etc.).
• Rooster::ControlClient issues commands to the ControlServer; used mainly as a rake helper.
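The ControlServer idea can be sketched with Ruby's standard library alone. The following uses stdlib TCPServer in place of EventMachine; the method names and one-command-per-connection protocol here are assumptions for illustration, not Rooster's actual wire protocol:

```ruby
require "socket"

# Listen for one-line commands (schedule, unschedule, exit, ...),
# record them for dispatch, and acknowledge each client.
def start_control_server(commands)
  server = TCPServer.new("127.0.0.1", 0)   # port 0 = pick a free port
  thread = Thread.new do
    loop do
      client = server.accept
      cmd = client.gets.to_s.strip
      commands << cmd                      # dispatch point
      client.puts "ok: #{cmd}"
      client.close
      break if cmd == "exit"
    end
    server.close
  end
  [server.addr[1], thread]
end

commands = []
port, thread = start_control_server(commands)

# Act as a ControlClient would: telnet-style, one command per connection.
sock = TCPSocket.new("127.0.0.1", port)
sock.puts "schedule KittenKillerTask"
reply = sock.gets
sock.close

quit = TCPSocket.new("127.0.0.1", port)
quit.puts "exit"
quit.gets
quit.close
thread.join
# reply == "ok: schedule KittenKillerTask\n"
```

Because the protocol is plain text over TCP, any client works: telnet, a rake task wrapping a socket, or the ControlClient helper the slides mention.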
24. Example
• I want a task that kills all kittens at 4:20pm every day
• > script/generate rooster_task KittenKiller
• Generates a new templated task in: RAILS_ROOT/lib/rooster/tasks/kitten_killer_task.rb
• rake rooster:launch (and then maybe `rake rooster:start TASK=KittenKillerTask`)
25. Commands
• Tag-based commands are handy for controlling only a subset of available tasks.
• For example, we have separate rooster tasks running on app1 and app3, and they are controlled with those server-specific tags.
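Tag-based selection like this boils down to filtering the loaded task list by tag. A minimal sketch (task and tag names invented; not Rooster's internals):

```ruby
# Each task carries tags; a command like `start TAG=app1` only
# touches the matching subset.
Task = Struct.new(:name, :tags)

tasks = [
  Task.new("KittenKillerTask", ["app1"]),
  Task.new("PointUpdateTask",  ["app3"]),
  Task.new("GrantTask",        ["app1", "app3"]),
]

def select_by_tag(tasks, tag)
  tasks.select { |t| t.tags.include?(tag) }
end

select_by_tag(tasks, "app1").map(&:name)
# => ["KittenKillerTask", "GrantTask"]
```

A task may carry several tags, so the same task can be addressed by more than one server-specific command.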
26. Future Goals
• Make rooster task scheduling blocks DRYer, especially by abstracting away the ActiveRecord connection pool cleanup.
• Refactor Rooster::Runner (prettier code).
• Add scripts (e.g. script/rooster daemon:start).
• On daemon launch, autostart tasks having a certain tag (or accept a lambda, e.g. launch_if => lambda { |task| task.tags.include?("app1") }).