
Handling Asynchronous Workloads With OpenShift and Iron.io


Slides for a live webinar on OpenShift Commons.


  1. Ivan Dwyer | Business Development | Iron.io | @fortyfivan. Handling Asynchronous Workloads in OpenShift with Iron.io: Event-Driven Computing for the Modern Cloud Era
  2. Agenda ➔ The Modern Cloud ➔ Event-Driven Computing ➔ Where Iron.io Fits ➔ Live Demo ➔ Iron.io and OpenShift
  3. The Modern Cloud: empowering developers to do what they do best
  4. Evolution. Unit of scale: Server → VM → Container. Application architecture: Monolith → N-Tiered → Microservices. Deployment model: Major Release → Software Updates → Continuous Delivery. Async workloads: DIY → Software Defined → API-Driven.
  5. The Modern Cloud Stack. IaaS: on-demand compute, storage, and networking resources. PaaS: application configuration, management, and deployment. SaaS: APIs and services to build and extend applications.
  6. Developer Empowerment ➔ Developers want to innovate ➔ Developers want abstraction ➔ Developers want self-service ➔ Developers want freedom ➔ Developers want consistency ➔ Developers want to write code!
  7. The modern cloud provides developers with everything needed to build, deploy, and scale applications. But what about the workloads that happen in the background? “GitHub is 50% background work”
  8. Event-Driven Computing: reacting to changes in the world
  9. Making a Distinction. Applications are hosted, load balanced, elastic, orchestrated, and realtime. Tasks are ephemeral, queued, concurrent, choreographed, and asynchronous.
  10. Identify the Right Pieces ➔ Outside the user response loop ➔ 3rd party service API calls ➔ Long-running processes ➔ Transaction processing ➔ Scale-out / burst processing ➔ Scheduled jobs. Good candidates are independent, single-responsibility, stateless, interchangeable, loosely coupled, and asynchronous.
  11. Common Tasks: email & notifications, multimedia encoding, transactions, web crawling, data transfer, data crunching, 3rd party API calls, scheduled jobs.
  12. Event-Driven Workflows. An event trigger (webhook, callback, API call, stream, transaction, or schedule) places a message on a queue; a task executes; and results are delivered to a queue, database, analytics system, API, app UI, notification, or log.
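The trigger → task execution → results-delivered flow on this slide can be sketched with an in-memory queue. This is a generic Python illustration only: the queue and result list are stand-ins for a managed message queue (such as IronMQ) and a downstream store, and all names here are hypothetical.

```python
import json
import queue
import threading

# Stand-ins for the managed queue and the result destination.
task_queue = queue.Queue()
results = []

def on_webhook(payload):
    """Event trigger: the webhook enqueues a task instead of doing work inline."""
    task_queue.put(json.dumps(payload))

def worker():
    """Task execution: a worker pulls messages off the queue asynchronously."""
    while True:
        msg = task_queue.get()
        if msg is None:  # sentinel to stop the worker
            break
        data = json.loads(msg)
        results.append({"processed": data["event"]})  # results delivered
        task_queue.task_done()

t = threading.Thread(target=worker)
t.start()
on_webhook({"event": "story.published"})
on_webhook({"event": "user.signup"})
task_queue.put(None)
t.join()
print(results)  # [{'processed': 'story.published'}, {'processed': 'user.signup'}]
```

The point is the decoupling: the trigger returns immediately, and the work happens outside the user response loop.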
  13. New Goals ➔ Build highly scalable and reactive backend systems ➔ Respond to events and changing environments automatically ➔ Run processes at scale without managing infrastructure ➔ Distribute workloads without configuration management ➔ Collect, deliver, and transform data seamlessly ➔ Integrate components into a unified platform
  14. The Challenge ➔ Building functionality for async concurrency is extremely complex ➔ More moving parts means more components to keep track of and configure properly ➔ Loosely coupled services mean steps must be taken to keep data consistent ➔ Distributed services create more API traffic and communication layers ➔ Keeping applications and task workloads in sync is challenging
  15. Building and maintaining a reliable environment for handling asynchronous workloads within distributed applications is extremely challenging. There is a need for a task-centric platform to handle the inner workings of these workloads while remaining tightly integrated with the app-centric platform. This is what Iron.io aims to solve.
  16. Where Iron.io Fits: Event-Driven Computing for the Modern Cloud Era
  17. What We Do. We build technology to power asynchronous workloads at scale for distributed applications of all kinds. Decouple components: treat your applications as a collection of microservices that scale up and down independently. Respond to events: trigger workloads on-demand based on events that happen in the lifecycle of your systems and applications. Choreograph workflows: chain together previously complex process flows with ease by setting endpoints and triggers. Core pieces: message queue, job scheduler, task environment.
  18. How It Works. Build: ➔ Build lightweight tasks ➔ Use any language ➔ Containerize with Docker. Upload: ➔ Commit to a repo ➔ Package as a container ➔ Upload to Iron.io. Run: ➔ Set event triggers ➔ Create schedules ➔ Queue tasks on-demand. Scale: ➔ Set concurrency levels ➔ Scales automatically ➔ No provisioning needed.
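A worker built along these lines is just a small, stateless script packaged in a container. The sketch below is illustrative only: the convention of passing a JSON payload file path in a PAYLOAD_FILE environment variable, and all names in it, are assumptions for this example, not Iron.io's documented contract.

```python
import json
import os

def run_task(payload):
    """The task body: small, stateless, single-responsibility."""
    # Placeholder workload; a real worker might encode video, send email, etc.
    return {"status": "encoded", "video_id": payload["video_id"]}

def main():
    # Assumed convention: the platform hands the worker a JSON payload file
    # whose path is in the PAYLOAD_FILE environment variable.
    path = os.environ.get("PAYLOAD_FILE")
    if path:
        with open(path) as f:
            print(json.dumps(run_task(json.load(f))))

main()
```

Because the task reads its input from a payload and writes a result, it can be containerized with Docker and run anywhere the runtime schedules it.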
  19. Concepts ➔ Workers: the task code and our unit of containerized compute. ➔ Runners: the runtime agent that spins up containers for workload processing. ➔ Stacks: Docker images that provide basic language and library dependencies. ➔ Queues: method of dispatching workloads through a persistent message queue. ➔ Schedules: regularly occurring tasks, much like cron jobs, but managed. ➔ Concurrency: number of tasks run at the same time and our unit of scale. ➔ Clusters: location and environment for runner deployment and workload processing.
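The idea of concurrency as the unit of scale can be shown with a bounded worker pool: capacity is raised or lowered by changing how many tasks run at once, not by provisioning servers. The limit and the workload below are arbitrary stand-ins for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

# "Concurrency" as the unit of scale: cap how many tasks run simultaneously.
# The value 4 is arbitrary for this sketch.
CONCURRENCY = 4

def task(n):
    # Placeholder workload standing in for one containerized task.
    return n * n

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(task, range(8)))
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Raising CONCURRENCY is the scaling knob; the tasks themselves stay unchanged because they are stateless and interchangeable.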
  20. Under the Hood: Features. Management: API, code history, dashboard, monitoring. Choreography: task queue, priorities, schedules, auto retry. Security: auth, encryption, audit trail.
  21. Under the Hood: Components. API, priority manager, task scheduler, task queues, customer code, and Docker images, deployable to the public cloud or on-premises.
  22. When To Use. Microservices: decouple application components as independently developed and deployed services that are choreographed by Iron.io. Internet of Things: choreograph machine-generated workloads asynchronously with Iron.io's reliable data transport and task-centric runtime. Mobile Compute: run a “serverless” backend that doesn't interfere with the user experience by triggering workers to run in the background. Hybrid Cloud: offload individual workloads to Iron.io while maintaining secure in-house systems using the same APIs across all environments.
  23. Why Choose Iron.io. “Serverless” environment: power large-scale workloads without the need to provision and manage infrastructure. No ops needed: create complex workflows without configuration scripts or complex async/concurrent code. Workload scalability: scale effectively and efficiently at the task level through lightweight and loosely coupled containers. Developer friendly: cloud-native, API-driven feature set with client libraries across all major languages. Speed to market: comprehensive environment that gets up and running in minutes with seamless platform integrations. Hybrid capable: deploy the service and distribute workloads to any cloud environment, public or private.
  24. Case Study: Bleacher Report. 1. A sports story breaks. 2. An event trigger spins up thousands of tasks in IronWorker. 3. Each concurrent task sends thousands of push notifications. Result: Bleacher Report can send millions of push notifications in under a minute.
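The fan-out pattern behind this case study can be sketched generically: split the audience into batches and process the batches concurrently. The batch size, concurrency level, and function names below are illustrative assumptions, not Bleacher Report's actual code.

```python
from concurrent.futures import ThreadPoolExecutor

def send_batch(user_ids):
    # Stand-in for one task that pushes notifications to a batch of devices.
    # Here it just reports how many it "sent".
    return len(user_ids)

def fan_out(all_users, batch_size=1000, concurrency=50):
    """Split the audience into batches and dispatch them concurrently."""
    batches = [all_users[i:i + batch_size]
               for i in range(0, len(all_users), batch_size)]
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return sum(pool.map(send_batch, batches))

sent = fan_out(list(range(5000)))
print(sent)  # 5000
```

The throughput claim in the case study comes from multiplying out this structure: many concurrent tasks, each handling thousands of recipients.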
  25. Case Study: Hotel Tonight. 1. A scheduled IronWorker pulls data from a variety of sources. 2. Data is pipelined into IronWorker for transformation. 3. Data is pipelined to the destination data warehouse. Result: Hotel Tonight has dozens of sources syncing 24/7.
  26. Case Study: Untappd. 1. A mobile user “checks in” a beer. 2. Background tasks are kicked off to run concurrently. 3. The app is refreshed with data results. Result: Untappd cut its event response time from 7 seconds to 500 milliseconds.
  27. “Speed of delivery is a constant focus for us. No longer worrying about infrastructure allows us to focus on delivering new features and optimizing existing ones.” “IronWorker's modularity allows for persistent points along the lifetime of the pipeline. Each worker in the pipeline is responsible for its own unit of work and has the ability to kick off the next task in the pipeline.” “I like that I don't have to worry about whether to scale more servers. It's done automatically by Iron.io, which is key for us and obviously why we love the platform.”
  28. Live Demo: Hello OpenShift
  29. Iron.io and OpenShift
  30. Deployment Models. Public Cloud: elastic scalability, no maintenance, rich feature set. Dedicated: secure gateway, managed service, high performance. On-Premises: multi-site deployment, flexible configuration, safe and secure.
  31. OpenShift Online Integration
  32. OpenShift Enterprise Integration ➔ Docker service packaging ◆ Both IronMQ and IronWorker are packaged via container ◆ IronMQ passed certification; IronWorker is up next ➔ Kubernetes HA deployment ◆ Each service can be deployed as pods ◆ The task runtime can be deployed as pods ➔ Scale via replication controller ◆ Simply add nodes for more service instances ◆ Simply add nodes for more workload capacity ➔ Service broker API ◆ SSO and service binding to applications ◆ Supports multitenancy
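As a rough sketch of the deployment model above, runner pods might be declared behind a Kubernetes replication controller (the scaling mechanism the slide names). Everything specific here is an assumption for illustration: the image name, labels, replica count, and environment variable are hypothetical, not Iron.io's actual packaging.

```yaml
# Illustrative manifest only; names and values are assumptions.
apiVersion: v1
kind: ReplicationController
metadata:
  name: ironworker-runner
spec:
  replicas: 3                # raise for more workload capacity
  selector:
    app: ironworker-runner
  template:
    metadata:
      labels:
        app: ironworker-runner
    spec:
      containers:
      - name: runner
        image: iron/runner:latest   # hypothetical image tag
        env:
        - name: CONCURRENCY         # hypothetical knob: tasks per runner
          value: "10"
```

Scaling is then a matter of changing `replicas`, which matches the slide's "add nodes for more workload capacity" point.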
  33. “Vendors that embrace the concept of public and private PaaS are also in favor of hybrid PaaS models where workloads can be directed to either public or private instances depending on how an enterprise sets application policy. Hybrid models provide the most flexibility where the private and public PaaS components are the same or have been specifically designed to work together.”
  34. Pair Programming: get a hands-on walkthrough of our platform. Architecture Review: let us share some best practices and advice. Start a Free Trial: start building with Iron.io in minutes.