The document discusses using RabbitMQ for message queueing. It describes how RabbitMQ uses exchanges, queues, and bindings to route messages from producers to consumers. It provides examples of publishing and consuming messages using Pika and Kombu in Python. It also discusses issues with latency and packet loss and describes plugins like Shovel, STOMP, and Celery that extend RabbitMQ's capabilities.
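The exchange/binding routing described above can be sketched in plain Python, with no broker required. This is an illustrative model of a direct exchange only; the class and variable names are invented for the sketch and are not part of the RabbitMQ, Pika, or Kombu APIs:

```python
from collections import defaultdict

class DirectExchange:
    """Illustrative model of a RabbitMQ direct exchange:
    a message is routed to every queue whose binding key
    exactly matches the message's routing key."""
    def __init__(self):
        self.bindings = defaultdict(list)  # routing key -> bound queues

    def bind(self, queue, routing_key):
        self.bindings[routing_key].append(queue)

    def publish(self, routing_key, message):
        # Producers publish to the exchange, never to a queue directly
        for queue in self.bindings[routing_key]:
            queue.append(message)

error_queue, all_queue = [], []
exchange = DirectExchange()
exchange.bind(error_queue, "error")
exchange.bind(all_queue, "error")
exchange.bind(all_queue, "info")

exchange.publish("error", "disk full")
exchange.publish("info", "started")
print(error_queue)  # ['disk full']
print(all_queue)    # ['disk full', 'started']
```

The same decoupling holds in the real broker: the producer only knows the exchange and routing key, while consumers decide which queues to bind.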
This presentation was given at PyCon AU 2012 but not recorded. It was written as I learned about modern message queueing methods (in particular RabbitMQ).
An introduction to message queues with PHP. We'll focus on RabbitMQ and how to leverage queuing scenarios in your applications. The talk will cover the main concepts of RabbitMQ server and AMQP protocol and show how to use it in PHP. The RabbitMqBundle for Symfony2 will be presented and we'll see how easy you can start to use message queuing in minutes.
Presented at Symfony User Group Belgium: http://www.meetup.com/Symfony-User-Group-Belgium/events/169953362/
Puppet is an open source configuration management tool that can be used to automate the configuration and management of infrastructure and applications. It uses a client-server architecture and a declarative language to define and enforce the desired state of systems. HashiCorp tools like Packer, Terraform, Vault and Nomad can integrate with Puppet for tasks like infrastructure provisioning, secrets management and workload orchestration. Bolt is a task orchestration tool from Puppet that can be used to automate operational tasks across infrastructure defined by tools like Terraform. Consul provides service discovery and configuration for the Puppet infrastructure.
During our most recent workshops, we presented the most popular libraries of Node.js, and their installation on Docker. Thanks to our presentation, you’ll be able to program applications which use Node.js and ES6.
Using Node.js to Build Great Streaming Services - HTML5 Dev Conf (Tom Croucher)
The document discusses using Node.js to build streaming services. It describes how Node.js allows for scalable server-side code using JavaScript and mentions libraries like JSONStream that can be used to parse JSON streams. The document also discusses different types of streaming like simplex, throughput, and duplex streaming and how to manage backpressure in streams.
Thrift and PasteScript are frameworks for building distributed applications and services. Thrift allows defining data types and interfaces using a simple definition language that can generate code in multiple languages. It uses a compact binary protocol for efficient RPC-style communication between clients and servers. PasteScript builds on WSGI and provides tools like paster for deploying and managing Python web applications, along with reloading and logging capabilities. It integrates with Thrift via server runners and application factories.
Integrating Icinga 2 and the HashiCorp suite (Bram Vogelaar)
We all love infrastructure as code; we automate everything™. But how many of us can really say we could destroy and recreate our core infrastructure without human intervention? Can you be sure there isn't a DNS problem, or that all the things™ are done in the right order? This talk walks the audience through a greenfield exercise that sets up service discovery using Consul and infrastructure as code using Terraform, with images built with Packer and configured using Puppet.
How we use Varnish at Opera Software, from the beginning (2009) to now.
Presentation held at the 5th Varnish Users Group meeting (VUG5) in Paris on March 22nd 2012.
Python has been adding more and more async features to the language and the standard library. Starting with asyncio in Python 3.4 and continuing with the new async/await keywords in Python 3.5, it's difficult to understand how all these pieces fit together. More importantly, it's hard to envision how to use these new language features in a real-world application. In this talk we're going to move beyond the basic examples of TCP echo servers and example servers that can add numbers together. Instead I'll show you a realistic asyncio application: a port of Redis, a popular data structure server, written in Python using asyncio. In addition to basic topics such as handling simple Redis commands (GET, SET, RPUSH, etc.), we'll look at notifications using pub/sub, and how to implement blocking queues.
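The command-handling core of such a port can be sketched with a tiny in-memory dispatcher. This is a minimal sketch of the idea, not the talk's actual code; the networking layer is omitted and `handle_command` is an invented name:

```python
import asyncio

# In-memory dict standing in for the Redis keyspace
STORE = {}

async def handle_command(parts):
    """Dispatch an already-parsed command, the way a toy
    asyncio Redis port might (GET/SET/RPUSH only)."""
    cmd = parts[0].upper()
    if cmd == "SET":
        STORE[parts[1]] = parts[2]
        return "+OK"
    if cmd == "GET":
        return STORE.get(parts[1], "(nil)")
    if cmd == "RPUSH":
        STORE.setdefault(parts[1], []).extend(parts[2:])
        return str(len(STORE[parts[1]]))
    return "-ERR unknown command"

async def main():
    print(await handle_command(["SET", "greeting", "hello"]))  # +OK
    print(await handle_command(["GET", "greeting"]))           # hello
    print(await handle_command(["RPUSH", "jobs", "a", "b"]))   # 2

asyncio.run(main())
```

In the full application each connected client would get its own coroutine reading commands off the socket and awaiting this dispatcher, which is what makes blocking commands like BLPOP natural to express with asyncio primitives.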
Building a distributed data collection system with RabbitMQ - Alvaro Videla (Ontico)
This document discusses building a distributed data ingestion system using RabbitMQ. It introduces RabbitMQ as a multi-protocol, polyglot messaging broker. The document then outlines some issues with a naïve ad-hoc solution to distributing data and proposes using RabbitMQ federation to address these issues. It provides an overview of how RabbitMQ federation works and how to configure it. Finally, it discusses additional RabbitMQ features like sharded queues and federated queues that can help scale the system.
Toster - Understanding the Rails Web Model and Scalability Options (Fabio Akita)
On my first visit to Russia, I presented on the Reactor pattern, EventMachine, WebSocket and the Pusher service as options for when Rails alone is not enough.
The document describes the author's experience deploying and configuring Varnish caching at Opera over many years. Some key points discussed include:
- Initial deployment in 2009 caching static assets for My Opera, which grew to serve 15% of requests
- Troubleshooting issues like session mixing and unauthorized access
- Implementing caching for dynamic pages like the front page while respecting cookies and languages
- Decentralizing caching to multiple data centers for lower latency globally
- Generating and caching thumbnails on-the-fly to handle frequent design changes
- Developing a more generic "shields-up" configuration to cache unpopular content securely
- Ongoing work caching APIs and content on other
Server-Side Push: Comet, Web Sockets come of age (OSCON 2013) - Brian Sam-Bodden
Server-side browser push technologies have been around for a while in one way or another, ranging from crude browser polling to Flash-enabled frameworks. In this session you'll get a code-driven walk-through of the evolution and mechanics of server-push technologies, including:
Server streaming
Polling and long Polling
Comet
Web Sockets
Scaling Ruby with Evented I/O - Ruby Underground (Omer Gazit)
Ruby is considered by many to be slow and unscalable. In this talk we'll try to disprove this premise by introducing EventMachine. We will cover the basic concepts of evented I/O programming and the Reactor pattern, talk about best practices and useful libraries for EventMachine, and see how to test your event-driven code.
Code examples from the presentation can be found at: https://github.com/omerisimo/em_underground
Alfresco 2019 DevCon lightning talk (Alan Davis)
The document discusses porting a custom transformer to Alfresco's new Transform Service. It describes the architecture changes between ACS 6.0 and 6.1 that introduced the Transform Service, including the addition of a Proxy, Router, and asynchronous Rendition Service 2. It provides details on configuring and registering a custom transformer with the Transform Service Registry, and how the service is triggered by the Rendition Service. Finally, it outlines potential future developments, such as public access to transformer code and new transform types.
In the world of social gaming, the classic 2-tier web application architecture does not cut it anymore. We need new and better solutions.
Follow along the evolution of game servers at Wooga and get an in-depth look into the next-generation backend putting the combined forces of Erlang and Ruby to work. Learn how scalability, reliability, concurrency control and beautiful code do not need to be mutually exclusive.
A gentle introduction to Observability and how to set up a highly available monitoring platform across multiple datacenters.
During this talk we will investigate how we can set up and monitor a monitoring platform across two DCs using Prometheus, Loki, Tempo, Alertmanager and Grafana, monitoring some services with lessons learned along the way.
Video presentation: https://www.youtube.com/watch?v=jLAFXQ1Av50
Most applications written in Ruby are great, but evil code applying WOP techniques also exists. There are workarounds in many programming languages, but in Ruby, when it happens, the proportion is bigger. It's very easy to write Ruby code with collateral damage.
You will see a collection of bad Ruby code, with a description of how it negatively affected its applications and the solutions to fix and avoid it. Long classes, coupling, misapplication of OO, illegible code, tangled flows, naming issues and anything else you can imagine are examples of what you'll get.
PuppetDB: Sneaking Clojure into Operations (grim_radical)
The document provides an overview of PuppetDB, which is a system for storing and querying data about infrastructure as code and system configurations. Some key points:
- PuppetDB stores immutable data about systems and allows querying of this data to enable higher-level infrastructure operations.
- It uses techniques like Command Query Responsibility Segregation (CQRS) to separate write and read pipelines for better performance and reliability.
- The data is stored in a relational database for efficient querying, and queries are expressed in an abstract syntax tree (AST)-based language.
- The system is designed for speed, reliability, and ease of deployment in operations. It leverages techniques from Clojure and the JVM.
Bootstrap your Cloud Infrastructure using Puppet and the HashiCorp stack (Bram Vogelaar)
The document discusses using Packer, Puppet, Vagrant, Terraform, and Consul to automate infrastructure provisioning in the cloud. Packer is used to build machine images with Puppet provisioning. Vagrant then uses these images to bootstrap VMs. Terraform models infrastructure in code and provisions resources like virtual machines. Consul provides service discovery and coordination.
ZeroMQ is a communication library that provides sockets for building scalable distributed applications. It supports common messaging patterns like request-reply, publish-subscribe, and push-pull. An example chat application is presented that uses a pub-sub pattern with a publisher server sending messages to subscribed clients. The server receives messages on a pull socket and forwards them to connected clients via a publish socket. Clients subscribe to the publish socket and can send messages via a push socket. Code samples in Ruby demonstrate setting up the sockets and connections for the server and client components.
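The pub-sub half of that chat design can be modeled in-process. This is a hedged sketch of ZeroMQ's prefix-based topic filtering in plain Python (no sockets; the `Publisher` class is invented for illustration and is not the pyzmq API):

```python
class Publisher:
    """In-process stand-in for a ZeroMQ PUB socket: every
    subscriber whose topic prefix matches receives the message."""
    def __init__(self):
        self.subscribers = []  # (topic_prefix, inbox) pairs

    def subscribe(self, topic_prefix):
        inbox = []
        self.subscribers.append((topic_prefix, inbox))
        return inbox

    def send(self, topic, body):
        # ZeroMQ SUB sockets filter by topic *prefix*, modeled here
        for prefix, inbox in self.subscribers:
            if topic.startswith(prefix):
                inbox.append((topic, body))

pub = Publisher()
room_a = pub.subscribe("chat.a")
everything = pub.subscribe("")   # empty prefix = all topics

pub.send("chat.a", "hi from a")
pub.send("chat.b", "hi from b")
print(len(room_a))      # 1
print(len(everything))  # 2
```

In the real application the server's PULL socket would feed `send`, and each chat client would hold one subscription inbox.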
Unix message queues allow processes to communicate asynchronously through kernel-managed buffers. A message queue is identified by a message queue identifier and contains a linked list of messages. Each message contains a message type, byte count, and pointer to the actual message data. Processes use message queue APIs like msgget(), msgsnd(), msgrcv(), and msgctl() to open, send, receive, and manage messages on queues.
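The System V API itself is C, but the type-selection behavior of msgrcv() can be modeled in Python. This is an illustrative sketch of the selection rules only (`msgrcv_model` is an invented name; a real call would block or fail with ENOMSG instead of returning None):

```python
def msgrcv_model(queue, msgtyp=0):
    """Model of msgrcv() type selection on a System V message
    queue, where queue is a list of (mtype, data) pairs:
      msgtyp == 0 -> first message on the queue
      msgtyp  > 0 -> first message with exactly that type
      msgtyp  < 0 -> lowest-type message with mtype <= |msgtyp|
    """
    if msgtyp == 0 and queue:
        return queue.pop(0)
    if msgtyp > 0:
        for i, (mtype, _) in enumerate(queue):
            if mtype == msgtyp:
                return queue.pop(i)
    if msgtyp < 0:
        eligible = [m for m, _ in queue if m <= -msgtyp]
        if eligible:
            target = min(eligible)
            for i, (mtype, _) in enumerate(queue):
                if mtype == target:
                    return queue.pop(i)
    return None  # real msgrcv() would block or set errno to ENOMSG

q = [(2, b"low"), (1, b"urgent"), (3, b"bulk")]
print(msgrcv_model(list(q), 0))    # (2, b'low')
print(msgrcv_model(list(q), 1))    # (1, b'urgent')
print(msgrcv_model(list(q), -2))   # (1, b'urgent')
```

The negative-msgtyp case is what lets a receiver drain a queue in priority order while ignoring message types above a threshold.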
Introducing message queue systems and explaining how a message queue can be used for queuing tasks. This is especially useful for web applications that need to perform tasks asynchronously.
Redis is being used as a message queue to asynchronously process image uploads on a website for gaming screenshots. When a user uploads images, the application server adds a message to the Redis queue containing metadata about the upload. A separate process polls the queue and processes each upload by resizing images, creating database entries, and more. This allows upload processing to happen in the background without blocking the user.
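The producer/worker split described above can be sketched with a deque standing in for the Redis list. This is an assumption-laden toy (in production the web tier would LPUSH and the worker would BRPOP against a real Redis server; `enqueue_upload` and `process_one` are invented names):

```python
import json
from collections import deque

# deque standing in for a Redis list used as the queue
upload_queue = deque()

def enqueue_upload(user_id, filename):
    """Web tier: record the upload metadata and return immediately,
    so the user never waits on image processing."""
    upload_queue.append(json.dumps({"user": user_id, "file": filename}))

def process_one():
    """Worker: pull one job and 'process' it
    (resize images, create database entries, ...)."""
    if not upload_queue:
        return None
    job = json.loads(upload_queue.popleft())
    return f"resized {job['file']} for user {job['user']}"

enqueue_upload(42, "screenshot.png")
print(process_one())  # resized screenshot.png for user 42
```

Serializing jobs as JSON mirrors the real setup, where the message body must survive the trip through Redis between two separate processes.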
The document summarizes POSIX Inter-Process Communication (IPC) mechanisms, including POSIX message queues, semaphores, and shared memory. These IPC techniques use common functions and properties to access and identify objects. Message queues allow processes to exchange prioritized messages, semaphores coordinate access to shared resources, and shared memory allows direct read/write access to the same memory by multiple processes. The document describes the key functions and usage for each POSIX IPC method.
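Of the three mechanisms, shared memory is the one Python's standard library exposes directly. A minimal sketch using `multiprocessing.shared_memory` (Python 3.8+), which follows the same create/attach-by-name/unlink lifecycle as POSIX `shm_open`/`shm_unlink`:

```python
from multiprocessing import shared_memory

# Create a named shared-memory block; another process could
# attach to the same block via SharedMemory(name=shm.name).
shm = shared_memory.SharedMemory(create=True, size=16)
try:
    shm.buf[:5] = b"hello"            # "writer" side

    # "reader" side: attach by name -- same memory, no copy
    reader = shared_memory.SharedMemory(name=shm.name)
    print(bytes(reader.buf[:5]))      # b'hello'
    reader.close()
finally:
    shm.close()
    shm.unlink()  # remove the object once all processes are done
```

Note that unlike message queues, shared memory provides no synchronization of its own; real code would pair it with a semaphore or lock to coordinate readers and writers.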
Easy enterprise application integration with RabbitMQ and AMQP (RabbitMQ)
VMware vFabric RabbitMQ Technical Webinar December 2010 by VMware engineer Emile Joubert. Covers common integration patterns, and how RabbitMQ makes these easily implemented, using AMQP as a communications mechanism.
You can view a recording of this presentation on YouTube: http://www.youtube.com/user/SpringSourceDev#p/c/5956C6D9EC319817/0/ABGMjX4K0D8
Celery is an open source asynchronous task queue/job queue based on distributed message passing. It allows tasks to be executed concurrently, in the background across multiple servers. Common use cases include running long tasks like API calls or image processing without blocking the main process, load balancing tasks across servers, and concurrent execution of batch jobs. Celery uses message brokers like RabbitMQ to asynchronously queue and schedule tasks. Tasks are defined as Python functions which get executed by worker processes. The workflow involves defining tasks, adding tasks to the queue from views or management commands, and having workers process the tasks.
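The enqueue-then-work pattern Celery implements can be sketched in-process with a stdlib queue and a worker thread. This is a hedged illustration of the workflow only, not the Celery API (a real setup would use `@app.task` decorators, a broker like RabbitMQ, and separate worker processes):

```python
import queue
import threading

task_queue = queue.Queue()  # stand-in for the message broker
results = []

def task_add(x, y):
    # Stand-in for a function a real setup would decorate with @app.task
    results.append(x + y)

def worker():
    # Stand-in for a Celery worker consuming tasks from the broker
    while True:
        func, args = task_queue.get()
        if func is None:  # sentinel: shut the worker down
            break
        func(*args)
        task_queue.task_done()

t = threading.Thread(target=worker)
t.start()

# The caller enqueues work (Celery's .delay()) instead of calling it,
# so the "web request" returns without waiting for the result.
task_queue.put((task_add, (2, 3)))
task_queue.put((task_add, (10, 20)))
task_queue.put((None, ()))
t.join()
print(results)  # [5, 30]
```

The key property carries over: the producer and the worker only share the queue, so workers can be added on other machines without changing the calling code.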
Analyzed sales data using market analysis, SWOT analysis, GAP analysis and implemented matrices
Represented graphical charts for sales prediction (current and future) using Tableau and identified growth with future prediction
Designed data models, logical models, data marts, data warehouses, relational databases, star schemas, extended star schemas, tables, columns, attributes, relationships (primary, foreign, composite keys), and sorting
1. Order management comprises a series of business processes including order transactions, invoice transactions, and accumulating snapshots to track orders through the fulfillment pipeline.
2. Key dimensions discussed include product, customer ship-to, deal, and date dimensions while order, invoice and accumulating snapshot fact tables were examined.
3. Implementation techniques included fact normalization, dimension role-playing, handling multiple currencies, and designing real-time partitions to extend the data warehouse in real time.
Life in a Queue - Using Message Queues with Django (Tareque Hossain)
Brief introduction on message queue and how its relevant in web applications
How to tell if your web application could benefit from message queue
Common example of tasks that could benefit from message queues
Choosing a broker/protocol
What broker/protocol PBS Education chose and why
Message queue solution architecture
Brief introduction on celery/carrot
Writing a message queue task using celery
How to invoke a message queue task
What happens when you invoke a task (walk through architecture)
How to write tasks efficiently
What are the things that are good to know when writing tasks (things we experienced at PBS Education)
The document discusses message queues and their uses. Message queues allow for asynchronous communication between applications and components. They decouple systems, allow for background processing, and improve scalability. Common use cases for message queues include processing email notifications, auto-scaling cloud applications, handling image/video processing, and interacting with services like Apple Push Notifications.
Message queues provide a way for applications and systems to communicate asynchronously by passing messages. They allow for decoupling of components and offloading of work. Some common uses of message queues include asynchronous processing, communication between applications/systems, auto-scaling, and handling legacy applications. Popular message queue servers include RabbitMQ, ActiveMQ, and Beanstalkd. Code examples are provided for publishing and receiving messages with each server.
This document discusses various inter-process communication (IPC) mechanisms in Linux, including pipes, FIFOs, and message queues. Pipes allow one-way communication between related processes, while FIFOs (named pipes) allow communication between unrelated processes through named pipes that persist unlike anonymous pipes. Message queues provide more robust messaging between unrelated processes by allowing messages to be queued until received and optionally retrieved out-of-order or by message type. The document covers the key functions and system calls for creating and using each IPC mechanism in both shell and C programming.
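The pipe mechanism in particular is easy to demonstrate from Python's stdlib. A minimal sketch of an anonymous pipe (normally the two file descriptors would be split between a parent and a forked child; here both ends live in one process for brevity):

```python
import os

# os.pipe() returns (read_fd, write_fd): a one-way, kernel-buffered
# channel, the same primitive the shell uses for `cmd1 | cmd2`.
read_fd, write_fd = os.pipe()

os.write(write_fd, b"ping")
os.close(write_fd)          # closing the write end signals EOF

data = os.read(read_fd, 1024)
os.close(read_fd)
print(data)  # b'ping'
```

A FIFO differs only in how the endpoints are obtained: unrelated processes open a filesystem path created with `os.mkfifo()` instead of inheriting descriptors.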
Sprayer is a low-latency messaging system supporting the delivery of messages to millions of users. In this talk I explain Sprayer's architecture and how we use RabbitMQ as our backbone transport technology.
Scaling Symfony2 apps with RabbitMQ - Symfony UK Meetup (Kacper Gunia)
Slides from my talk at Symfony UK Meetup. London, 20 Aug 2014. http://twitter.com/cakper
Video: https://www.youtube.com/watch?v=cha92Og9M5A
More Domain-Driven Design related content at: https://domaincentric.net/
MCollective is a framework for system management and orchestration that allows users to execute tasks across many servers simultaneously. It uses message queuing as middleware to facilitate communication between a client and servers. Users can write custom agents in Ruby to perform actions on servers in response to messages. MCollective provides features for system discovery, inventory collection, task execution, and configuration management across thousands of nodes.
Servers with EventMachine - David Troy - RailsConf 2011
EventMachine allows Ruby programs to handle many concurrent connections without blocking or threads. It uses an asynchronous model where callbacks are invoked in response to I/O events like incoming connections or data. EventMachine provides a high performance non-blocking TCP server that handles all I/O with callbacks rather than blocking. It can be used to build scalable I/O intensive applications like email servers that handle thousands of concurrent connections in a single Ruby process. The key is to never block the EventMachine reactor loop.
The document discusses using Python for ethical hacking and penetration testing. It provides reasons for using Python such as its ease of use, readable syntax, rich libraries, and existing tools. It then covers various Python libraries and frameworks used for tasks like reconnaissance, scanning, exploitation, and packet manipulation. Specific topics covered include file I/O, requests, sockets, scapy, and more.
ZeroMQ provides a simple way to build distributed systems without relying on heavyweight message brokers. It uses a messaging model where messages are atomic and can be routed between applications. ZeroMQ avoids many of the complexities of traditional message queues and makes it easy to implement common distributed patterns like request-reply, publish-subscribe, and routing in a variety of programming languages using simple socket-like APIs.
Scaling applications with RabbitMQ at SunshinePHP (Alvaro Videla)
Do you need to process thousands of images in the background for your web app?
Do you need to share data across multiple applications, probably written in different languages and sitting at different servers?
Is your real-time data feed becoming slow because you are constantly polling the database for new updates?
Do you need to scale information processing during peak times?
What about deploying new features with zero downtime? If any of these problems sound familiar then you probably need to use messaging in your application.
In this talk I will introduce RabbitMQ, a messaging and queue server that can help us tackle those problems. We will learn the benefits of a Queue Server and see how to integrate messaging into our applications. With this talk we hope that the term 'decoupling' gets a new, broader, meaning.
Web3j is a Java library that provides complete Ethereum JSON-RPC implementation for interacting with Ethereum client APIs like Geth and Parity. It supports smart contract wrappers, wallet management, synchronous and asynchronous API as well as RxJava Observables. Web3j allows deploying, calling functions on and getting events from smart contracts.
MessagePack is a binary serialization format that is compact and fast. It works with many programming languages and is used for communication between processes and data storage. It enables building high performance applications including real-time messaging systems. MessagePack implementations support asynchronous RPC where clients can make multiple concurrent calls to servers using shared event loops to improve efficiency.
Google's Go is a relatively new systems programming language that has recently gained a lot of traction with developers. It brings together the ease and efficiency of development in modern interpreted languages like Python, Perl, and Ruby with the efficiency and safety of a statically typed, compiled language like C/C++ and Java.
On top of that, Go is a language built for modern hardware and problems. With built-in support for concurrency, programmers can easily build software to scale up to today's many-core beasts. Programming in Go is really nice, and in this tutorial, you will learn why.
We will cover an introduction to the Go programming language, and together we will build a multi-user network service demonstrating all of the major principles of programming in Go.
Service discovery like a pro (presented at reversimX) - Eran Harel
So you want to auto scale your services, and use service oriented architecture, eh?
Want to reduce the cost of managing your clusters, and discover them dynamically?
In this talk we shall see how consul helps you do that very efficiently, explain how it works, demonstrate spinning up several interconnected services, and show how we can achieve seamless discovery, HA, and fault tolerance.
Service Delivery Assembly Line with Vagrant, Packer, and Ansible - Isaac Christoffersen
Leverage Packer, Vagrant, and Ansible as part of a service delivery pipeline. Streamline your continuous delivery process while also targeting multiple cloud providers.
This document provides an overview of Couchbase Server and how to use it with Ruby. Couchbase Server is a NoSQL database that supports automatic key sharding and replication. It is used by companies like Heroku and Zynga. The document outlines how to install the Couchbase Ruby gem, perform basic CRUD operations, use optimistic locking, expiration, map/reduce, and integrate Couchbase with Rails and other Ruby frameworks.
Rhebok, High Performance Rack Handler / Rubykaigi 2015 - Masahiro Nagano
This document discusses Rhebok, a high performance Rack handler written in Ruby. Rhebok uses a prefork architecture for concurrency and achieves 1.5-2x better performance than Unicorn. It implements efficient network I/O using techniques like IO timeouts, TCP_NODELAY, and writev(). Rhebok also uses the ultra-fast PicoHTTPParser for HTTP request parsing. The document provides an overview of Rhebok, benchmarks showing its performance, and details on its internals and architecture.
This document provides an overview of Socket.IO, a JavaScript library for real-time web applications. It discusses what Socket.IO is, how it provides persistent connections and real-time functionality across browsers including older versions of Internet Explorer. It also summarizes how to install and use Socket.IO on both the client-side and server-side as well as how to send and receive events. Key features like broadcasting messages, acknowledgements, and configurations are also highlighted. Upcoming releases like optimizations, gzip support, and integration with Redis for scaling are mentioned as well.
A fun filled tour through distributed programming with the Ruby standard library.
Presented on February 2nd, 2012 at RubyFuza in Cape Town, South Africa.
TorqueBox allows mixing Java and Ruby by running Ruby code on the Java Virtual Machine (JVM). It provides Ruby applications access to enterprise Java features like JNDI, JMS, Quartz, and more. TorqueBox applications can be deployed to JBoss Application Server with these Java services and capabilities.
This document proposes an architecture for distributed indexing, storage, and real-time analysis of logs. It discusses challenges of scaling log collection and analysis across hundreds of servers generating terabytes of data daily. The proposed architecture uses multicast messaging and sharding to distribute indexing and querying across clusters of servers for scalability. It emphasizes low overhead indexing and real-time aggregation of results.
This document guides the reader through building a system where a user connects to a server over a secure connection and receives a sequence of JSON-encoded objects. It begins by introducing the ServerCore component and shows how to fill its protocol handler factory hole. It then demonstrates creating a stackedjson protocol handler using a pipeline of components like PeriodicWakeup, Chooser, and MarshallJSON. This protocol securely transmits JSON data chunks to clients like a ConsoleEchoer. It discusses how the client-side mirrors the server components to receive and display the messages.
Angboard is a pure Javascript horizon replacement that uses current generation Javascript tooling to connect directly to OpenStack APIs, avoiding middleware for a more responsive user experience (UX) with easier UX changes. It is a browser-based tool that uses Bootstrap, AngularJS, and an HTTP(S) server with an API proxy to access OpenStack services like Nova, Swift, and Keystone.
In which Richard will tell you about some things you should never (probably ever) do to or in Python. Warranties may be voided. The recording of this talk is online at http://www.youtube.com/watch?v=H2yfXnUb1S4
This document provides an introduction and overview to game programming using Python and Pygame. It begins with introductions from the presenter and discusses initial considerations for constructing a game such as genre, setting, and theme. It then covers the basic elements of game programming like displaying graphics, opening a window, handling user input, and animation. Code examples are provided to demonstrate opening a window, adding a main loop, loading and drawing images, handling keyboard input, and limiting the frame rate for less CPU usage. The document provides a high-level tour of the key concepts for building simple games with Pygame.
Introduction to Game Programming Tutorial - Richard Jones
The slides to accompany the Introduction to Game Programming tutorial I ran at LCA 2010. The tutorial ran over 90 minutes with the participants following along.
Presentation derived from the "What's new in Python 2.6" document on http://www.python.org/ including much reformatting for presenting and presenter notes.
Please download the Keynote original - that way the presentation notes aren't burned into the slides.
Presentation derived from the "What's new in Python 2.5" document on http://www.python.org/ including much reformatting for presenting and presenter notes.
Please download the Keynote original - that way the presentation notes aren't burned into the slides.
Don't be fooled by the thumbnail - the first couple of slides are a silly joke I forgot to remove before uploading.
Presentation derived from the "What's new in Python 2.4" document on http://www.python.org/ including much reformatting for presenting and presenter notes.
Please download the Keynote original - that way the presentation notes aren't burned into the slides.
This document discusses Tkinter, a GUI toolkit for Python. It provides examples of basic Tkinter code for common widgets like buttons, labels, entries and more. It also covers Tkinter concepts like packing, grids, styling with themes, and events. The document seeks to demonstrate that Tkinter is simple to use yet robust, with a rich set of widgets and capabilities.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
A tale of scale & speed: How the US Navy is enabling software delivery from l... - sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Communications Mining Series - Zero to Hero - Session 1 - DianaGray10
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... - SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Full-RAG: A modern architecture for hyper-personalization - Zilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
GridMate - End to end testing is a critical piece to ensure quality and avoid... - ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features trade security for convenience and capability. This best practices guide outlines steps users can take to better protect personal devices and information.
Essentials of Automations: The Art of Triggers and Actions in FME - Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs - Alex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 - Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Climate Impact of Software Testing at Nordic Testing Days - Kari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint - a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at a smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf - Paige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring & observability to the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on.
2. Preamble
• I work for a global telco
• We push telco events and messages around
• All should be processed quickly and reliably
• Some generate a response
3. Scenario
• We send messages around in small text files, sometimes with RFC 2822 headers
• Use the UNIX sendf/recfiled mechanism
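As a side note, the RFC 2822 headers mentioned above can be parsed with Python's standard-library email module. A minimal Python 3 sketch (the deck's own code is Python 2 era); the file contents and header names here are invented for illustration:

```python
from email.parser import Parser

# A made-up example of one of the small text files described above:
# RFC 2822-style headers, a blank line, then a body.
raw = """\
Subject: billing-event
X-Event-Type: call-record

duration=120;caller=unknown
"""

msg = Parser().parsestr(raw)
print(msg['Subject'])          # header lookup is case-insensitive
print(msg['x-event-type'])
print(msg.get_payload().strip())
```

The same parser handles real mail messages, so files "sometimes with RFC 2822 headers" can be routed on header values without a hand-rolled parser.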
10. pymq
• very hazy on the details
• depends on MySQL? and Django?!?
• ignored while looking at kombu / rabbitmq
11. ZeroMQ aka 0MQ
"ZeroMQ is a message orientated IPC Library."
- commenter on stackoverflow
12. AMQP
• Is a standard for message queueing
• Producers submit messages to brokers
• Brokers consist of exchanges and queues
• Exchanges route the messages
• Queues store messages
• Consumers pull messages out
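The moving parts in the list above can be sketched as a toy in-memory model - not a real broker, and the class and queue names are invented - just to show how exchanges, bindings and queues relate:

```python
from collections import defaultdict, deque

class ToyBroker:
    """Toy in-memory model of AMQP routing: an exchange routes
    messages into queues via bindings keyed on a routing key."""
    def __init__(self):
        self.queues = defaultdict(deque)    # queue name -> pending messages
        self.bindings = defaultdict(list)   # (exchange, key) -> queue names

    def bind(self, exchange, routing_key, queue):
        self.bindings[(exchange, routing_key)].append(queue)

    def publish(self, exchange, routing_key, body):
        # Direct-exchange behaviour: copy the message into every queue
        # whose binding matches the routing key exactly.
        for queue in self.bindings[(exchange, routing_key)]:
            self.queues[queue].append(body)

    def consume(self, queue):
        return self.queues[queue].popleft()

broker = ToyBroker()
# Two bindings on the same routing key: the exchange duplicates the message.
broker.bind('events', 'audit', 'log-forever')
broker.bind('events', 'audit', 'alert-the-big-dude')
broker.publish('events', 'audit', 'disk full')
print(broker.consume('log-forever'))         # 'disk full'
print(broker.consume('alert-the-big-dude'))  # 'disk full'
```

Producers only ever talk to the exchange; consumers only ever talk to a queue. That separation is what the "decoupling" sales pitch is about.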
46. Issues
• pika's BlockingConnection (my use case is simple enough) hard-codes the socket timeout and fails to cope with latency >1s
• Fails to cope at all with packet loss
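The author's notes later suggest a try/except might handle the packet loss. A generic retry helper sketch along those lines (my own illustration, not pika API - in a real consumer you would re-open the BlockingConnection between attempts):

```python
import time

def with_retries(operation, attempts=3, delay=1.0, exceptions=(OSError,)):
    """Run operation(), retrying on transient network errors.
    Raises the last error if all attempts fail."""
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except exceptions:
            if attempt == attempts:
                raise
            time.sleep(delay)

# Toy demonstration with a flaky operation that fails twice then succeeds.
calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise OSError("simulated packet loss")
    return "delivered"

print(with_retries(flaky, attempts=5, delay=0))  # 'delivered'
```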
47. Deleting a Queue
import pika
connection = pika.BlockingConnection(pika.ConnectionParameters('server'))
channel = connection.channel()
channel.queue_delete(queue='hello')
channel.close()
connection.close()
49. Publisher
from kombu import BrokerConnection
with BrokerConnection('amqp://localhost') as conn:
with conn.SimpleQueue('hello') as queue:
queue.put('Hello World!')
print " [x] Sent 'Hello World!'"
51. Publisher
from kombu import BrokerConnection
with BrokerConnection('amqp://localhost//') as conn:
with conn.SimpleQueue('hello', queue_opts=dict(durable=False)) as queue:
queue.put('Hello World!')
print " [x] Sent 'Hello World!'"
59. stompclient
from kombu import BrokerConnection
with BrokerConnection('amqp://localhost') as conn:
with conn.SimpleQueue('hello') as queue:
queue.put('Hello World!')
print " [x] Sent 'Hello World!'"
from stompclient import PublishClient
client = PublishClient('localhost', 61613)
client.connect()
client.send('/queue/hello', 'Hello, world!')
client.disconnect()
60. stompclient
from stompclient import PublishClient
client = PublishClient('localhost', 61613)
client.connect()
client.send('/queue/hello', 'Hello, world!')
client.disconnect()
from stompclient import PublishClient
with PublishClient('localhost', 61613) as client:
client.send('/queue/hello', 'Hello, world!')
61. Celery
"a synchronous or asynchronous task queue/job
queue based on distributed message passing"
62. TcpCatcher
• TCP, SOCKS, HTTP(S) proxy & monitor
• Can introduce latency and transmission errors
• Understands HTTP and images
• Can debug/interfere/log SSL traffic
• Free: www.tcpcatcher.fr
Editor's Notes
Network latency currently means processing slows down.
One "facility" is the zipping up of multiple events into a batch and sending the zip across the WAN in one go. This obviously complicates things further and introduces additional latency onto the message transmission.
... and I never actually came back to evaluate it
Is message queueing without a broker. Kind of like a glorified socket. Messages are routed in common MQ patterns right down at the network level. If you want store and forward, you're on your own for the persistence part.
AMQP in a nutshell. I might add that amqp.org was no help whatsoever in figuring this out.
There's a bunch of AMQP implementations out there, but in the interests of keeping my own sanity I only looked at the one most popular free implementation. Since they all end up implementing the same thing anyway.
Queues are where your messages end up in the broker. They sit there until a client (a.k.a. consumer) connects to the queue and siphons them off. Queues may be configured so messages are discarded if there isn't a consumer ready to accept them. Multiple consumers may connect to a queue - the messages will be passed to each consumer in turn.
Exchanges are routers with routing tables that sit in front of queues. They're declared by consumers, just like queues, except that there's a default "just pass the message to the queue" exchange for the simple case. Every message has what's known as a "routing key", which is simply a string. The exchange has a list of bindings (routes) that say, for example, messages with routing key "X" go to queue "spam".
Messages come in to the broker, are routed by exchanges and stored in queues until slurped off by consumers. Within a broker you may have multiple logical systems called virtual hosts. I'm not sure why. There's a default one and it's probably all I'll ever need. Queues and exchanges are created programmatically by your producers or consumers - not via a configuration file or command line program - your MQ configuration is in-line with your app code.
An interesting aside for performance - exchanges all run in their own processes so adding more exchanges is a way to spread load and increase throughput.
"Routing rules" (or bindings) link an exchange to a queue based on a routing key. It is possible for two binding rules to use the same routing key. For example, maybe messages with the routing key "audit" need to go both to the "log-forever" queue and the "alert-the-big-dude" queue. To accomplish this, just create two binding rules (each one linking the exchange to one of the queues) that both trigger on routing key "audit". In this case, the exchange duplicates the message and sends it to both queues.
There are multiple types of exchanges. They all do routing, but they accept different styles of binding "rules".
A "direct" exchange matches only on whether the routing key is exactly "dogs" or not. The default exchange is a Direct exchange.
A "topic" exchange tries to match a message's routing key against a wildcard pattern like "dogs.*".
A "fanout" exchange ignores the routing key and distributes the messages to all queues.
There are two levels of persistence in play in RabbitMQ: the structure of the broker and the messages in the broker's queues.
You may mark your queues and exchanges as "durable" so the queue or exchange will be re-created automatically on reboot. It does not mean the messages in the queues will survive the reboot. They won't.
When you publish your message to an exchange, you may set a flag called "Delivery Mode" to the value 2, which means "persistent". "Delivery Mode" usually (depending on your AMQP library) defaults to a value of 1, which means "non-persistent".
So the steps for persistent messaging are... If you bind a durable queue to a durable exchange, RabbitMQ will automatically preserve the binding. Similarly, if you delete any exchange/queue (durable or not) any bindings that depend on it get deleted automatically.
RabbitMQ will not allow you to bind a non-durable exchange to a durable queue, or vice-versa. Both the exchange and the queue must be durable for the binding operation to succeed. You cannot change the creation flags on a queue or exchange after you've created it. For example, if you create a queue as "non-durable", and want to change it to "durable", the only way to do this is to destroy the queue and re-create it. It's a good reason to double check your declarations.
This is my typical setup - I have hosts separated by network (WAN) out of my control. I need all the events generated on the remote hosts to be processed in a timely, reliable manner by the processing server.
One solution is to write a forwarding consumer on the remote host - pretty simple code but not very elegant.
An alternative setup we could use has the processing server consumer pulling from all the remote host queues. This makes for more configuration work when adding new remote hosts though.
Then I discovered the Shovel plugin. I've ignored exchanges up until now, but using Shovel allows an exchange on one host to pull messages from a queue and fire them at another queue. It basically runs an Erlang client in the remote broker to forward the messages. In theory. In practice...
Look hard - it says "Fun" in there. This is NOT FUN. This was bloody hard to figure out. The main problem is that once you decide you're going to use Shovel, you're now programming Erlang. See previous statement about fun.
Not only am I now learning Erlang but the errors you get back for malformed configuration files (Erlang programs) are really unhelpful. This error basically says "there's an error somewhere in the 37 lines of your program."
This is my minimum Shovel configuration, eventually discovered after much trial and error. I shall not share with you the entire volume of bizarre and obscure errors and warnings I waded through to get to this point, nor the number of dead-end alleyways the poor documentation led me down. Unfortunately when you start up the server with this configuration the log file will FILL with warnings about the queue not existing until a client connects and creates the queue.
This would probably be simpler if I was familiar with Erlang, but I found the documentation to be basically impenetrable. Also, for some reason I couldn't declare the queue without declaring an exchange. Which isn't needed if I don't declare the queue. There was an awful lot of stumbling around in the dark to get this working, but in the end it does.
Default login is guest/guest...
It tries to stay fairly independent of the underlying network support library. It uses amqplib underneath by default, which is what most of the libraries do.
This code connects to our RabbitMQ server using a blocking connection (send and wait for successful delivery at the server before continuing); it declares a durable "hello" queue and publishes a simple message to the queue. The queue and undelivered messages will be persistent across restarts of RabbitMQ. It's a slightly bizarrely wordy API. The exchange argument is required, but we use the "no-op" or "default" exchange here. More on them later.
This code connects to our RabbitMQ server using a blocking connection (listen and wait for messages); it also declares a "hello" queue (just in case no publisher is connected) and consumes messages from the queue. You can see that a bunch of the code is the same - connections, channels and queues.
Here we see the publisher running on my laptop (local) and the consumer running on the server where the RabbitMQ server is also running. You can see that we can publish into the queue without anything consuming the messages, and we can consume published events immediately. We can also listen as a consumer with no publisher publishing. It's all quite disconnected.
By making one small change to our consumer, we can consume remote queues.
Here we set up a second consumer that will consume the remote queue on the server from the local host. Messages published to the queue will be passed to each consumer in turn round-robin style. When a consumer stops consuming, the queue will transparently feed all messages to the remaining consumer. And of course this extends to having multiple publishers as well.
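The round-robin dispatch described above can be sketched with a toy queue object - purely illustrative, with invented names, not how the broker is implemented:

```python
class RoundRobinQueue:
    """Toy sketch of a queue handing messages to consumers in turn;
    when a consumer leaves, the remaining ones get everything."""
    def __init__(self):
        self.consumers = []
        self._next = 0

    def add_consumer(self, callback):
        self.consumers.append(callback)

    def remove_consumer(self, callback):
        self.consumers.remove(callback)
        self._next = 0

    def publish(self, body):
        # Deliver to the next consumer in rotation.
        callback = self.consumers[self._next % len(self.consumers)]
        self._next += 1
        callback(body)

seen = {'local': [], 'remote': []}
q = RoundRobinQueue()
q.add_consumer(lambda m: seen['local'].append(m))
q.add_consumer(lambda m: seen['remote'].append(m))
for i in range(4):
    q.publish(i)
print(seen)  # {'local': [0, 2], 'remote': [1, 3]}
```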
Can fix the hard-coded socket timeout issue, and a try/except might be able to handle the packet loss.\n
If you set up a queue with the wrong parameters you'll need to delete it. For some unknown reason the rabbitmq control program doesn't provide the ability to delete queues, so you need to do it from a client. This code does that.\n
Kombu is a messaging framework for Python. It replaces Carrot.\nThe aim of Kombu is to make messaging in Python as easy as possible by providing an idiomatic high-level interface for the AMQ protocol, and also provide proven and tested solutions to common messaging problems. Its "transports" include AMQP variations and non-AMQP "virtual transports" such as Redis, MongoDB, CouchDB and Beanstalk or "database transports" such as SQLAlchemy and Django ORM.\n
This took me some time to write, as the documentation for Kombu is quite limited. Attempting to set things up using channels led nowhere; I couldn't find clear documentation on how to create a channel. In the end I found SimpleQueue, which worked after some effort, but I'm still not clear on the details. Then I discovered that the default queue parameters were different (see next slide).
When I was testing early on I wasn't using durable queues. Kombu's SimpleQueue sets durable to True by default, which caused the above bizarro error (which basically says I'm trying to use a queue with different parameters from those it was created with). This error is not specific to Kombu, but it was unexpected and inexplicable when I first encountered it, until I guessed at the durable parameter setting.
It took me a while to figure out how to disable durability.

But again, there's no channel - it's really just API noise for simple code like this.
The puka module implements a client for the AMQP 0-9-1 protocol. It's tuned to work with the RabbitMQ broker, but should work fine with other message brokers that support the protocol. It tries to offer a nicer API than pika, which is honestly quite appalling.
Everything in puka works off the Client, which is quite different but pretty convenient. The API is basically the same as pika at the business end. The big difference is in the promises and the ability to wait on the promised action completing successfully. Puka wants to be asynchronous - the wait() calls effectively force it to be synchronous for simple code like this.
I needed to patch puka version 0.0.5 as mentioned in issue #15 on their tracker to fix a connection-time issue. Unfortunately, even with that fix this was still fragile and inexplicably stopped working at one point. Puka also does away with channels, like kombu's simple case.
STOMP is the Simple (or Streaming) Text Oriented Messaging Protocol.
"STOMP is a very simple and easy to implement protocol, coming from the HTTP school of design; the server side may be hard to implement well, but it is very easy to write a client to get yourself connected. For example you can use Telnet to login to any STOMP broker and interact with it!"
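To illustrate that claim, a hand-typed session looks something like this (port 61613 is the STOMP default, the guest credentials are an assumption, and each frame ends with a NUL byte, shown here as ^@):

```
$ telnet localhost 61613
CONNECT
login:guest
passcode:guest

^@
CONNECTED
version:1.0

SEND
destination:/queue/hello

Hello World!^@
```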
The STOMP website kinda mirrors the AMQP website: there's a specification there but little else. Not so simple. Fortunately there's a bunch of Python STOMP client libraries.
Enabling STOMP was easy enough in the RabbitMQ configuration file - the docs were much clearer and the configuration much simpler than Shovel.
So here's one Python library, stompclient, publishing to my "hello" queue.
So... not a whole lot simpler...
I made a small change to the stompclient library to allow this usage, which I think is pretty darned simple - I believe it'll be in the next release. I'm not completely sold on STOMP yet though. I'm not sure it improves things over AMQP enough to switch from just using Kombu.
Celery is a distributed task queue / RPC engine. It is focused on real-time operation, but supports scheduling as well. The execution units, called tasks, are executed concurrently on one or more worker servers. It's built on top of kombu and a variety of non-AMQP backends. The RPC nature and backend generality limit its MQ abilities somewhat, though it's nice if you're just after worker management (which I'm not).
I found this worked sometimes, but not at other times. RabbitMQ did NOT like me putting this in the middle of the Shovel setup, but the Python libraries I used were happy to use it as a proxy.