
Using IO Visor to Secure Microservices Running on CloudFoundry [OpenStack Summit Austin | April 2016]

As microservices proliferate, traditional firewall rules based on network ACLs no longer scale and fall short of fine-grained enforcement. Group Based Policy (GBP) is a flexible policy language that lets users specify policy enforcement based on intent, independent of network infrastructure and IP addressing. Using micro-segmented virtual domains, administrators can define policies at a central location and use IO Visor technology for distributed enforcement. This provides infrastructure-independent rules, template-based policy definitions, and scale-out policy enforcement: a solution that secures and scales with microservices. This session, presented by members of the IO Visor community, covers how IO Visor technology can be used to define and enforce GBP, including using GBP for Cloud Foundry application spaces where microservices are deployed and need scalable, efficient security policies.



  1. Securing Microservices in CloudFoundry. Brenden Blanco and Deepa Kalani, Architects, CTO Office, PLUMgrid.
  2. Need for Micro-Segmentation
     • Movement toward cloud-native applications.
     • The elastic nature of applications requires a more agile way of configuring policies.
     • Operators want an intuitive way of defining policies based on application roles, not IP addresses.
     • Relying on traditional firewall rules quickly becomes unmanageable as applications move around.
     • Move toward a whitelist model of policy definition: define the acceptable information flows, and everything else is blocked.
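The whitelist model above can be sketched in a few lines of Python. Roles replace IP addresses, and any flow not explicitly listed is denied; the role names ("web", "app", "db") are hypothetical examples, not from the talk.

```python
# Whitelist (default-deny) policy keyed on application roles, not IPs.
# Role names ("web", "app", "db") are hypothetical examples.
ALLOWED_FLOWS = {
    ("web", "app"),  # the web tier may talk to the app tier
    ("app", "db"),   # the app tier may talk to the database
}

def is_allowed(src_role, dst_role):
    """Permit a flow only if it is explicitly whitelisted."""
    return (src_role, dst_role) in ALLOWED_FLOWS

print(is_allowed("web", "app"))  # True
print(is_allowed("db", "web"))   # False: unlisted, so blocked by default
```

Because the default is deny, a new application instance inherits its role's policy the moment it is labeled; no per-instance rules are written.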
  3. iptables to Define Endpoint Policy: State Explosion
     A full mesh within each group needs one rule per directed endpoint pair:
     IP1->IP3, IP1->IP5, IP1->IP7, IP1->IP8, IP3->IP1, IP3->IP5, IP3->IP7, IP3->IP8,
     IP5->IP1, IP5->IP3, IP5->IP7, IP5->IP8, IP7->IP1, IP7->IP3, IP7->IP5, IP7->IP8,
     IP8->IP1, IP8->IP3, IP8->IP5, IP8->IP7,
     IP2->IP4, IP2->IP6, IP2->IP9, IP2->IP10, IP4->IP2, IP4->IP6, IP4->IP9, IP4->IP10,
     IP9->IP2, IP9->IP4, IP9->IP6, IP9->IP10, IP10->IP2, IP10->IP4, IP10->IP6, IP10->IP9
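The explosion is easy to quantify: a full mesh among n endpoints needs n*(n-1) directed rules, while the group-based view on the next slide needs one rule per allowed group pair. A small Python illustration using the slide's two five-endpoint groups:

```python
from itertools import permutations

def pairwise_rules(endpoints):
    """Directed allow rules for a full mesh among the given endpoints."""
    return [f"{a}->{b}" for a, b in permutations(endpoints, 2)]

green = ["IP1", "IP3", "IP5", "IP7", "IP8"]
red = ["IP2", "IP4", "IP6", "IP9", "IP10"]

# Per-endpoint rules grow as n*(n-1) per group: 20 + 20 = 40 ...
print(len(pairwise_rules(green)) + len(pairwise_rules(red)))  # 40

# ... while group-based policy needs one rule per allowed group pair:
group_rules = ["Green->Green", "Red->Red"]
print(len(group_rules))  # 2
```

Adding an eleventh endpoint to a group adds 20 new iptables rules, but zero new group rules.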
  4. Group Based Policy: Secure, Scalable, Intent-Based
     Endpoint Groups: Green = {IP1, IP3, IP5, IP7, IP8}; Red = {IP2, IP4, IP6, IP9, IP10}
     Policies (enforced at every node): Green->Green, Red->Red
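A minimal Python sketch of this lookup, using the endpoint-to-group mapping from the slide: enforcement consults the group pair, so moving an endpoint only updates the EPG map, never the policies.

```python
# Intent-based enforcement: resolve each endpoint to its group (EPG),
# then evaluate policy on the group pair instead of the IP pair.
EPG = {
    "IP1": "Green", "IP3": "Green", "IP5": "Green", "IP7": "Green",
    "IP8": "Green",
    "IP2": "Red", "IP4": "Red", "IP6": "Red", "IP9": "Red", "IP10": "Red",
}
POLICIES = {("Green", "Green"), ("Red", "Red")}  # whitelisted group pairs

def allowed(src_ip, dst_ip):
    """Look up the groups of both endpoints and check the group pair."""
    return (EPG[src_ip], EPG[dst_ip]) in POLICIES

print(allowed("IP1", "IP3"))  # True: Green->Green is whitelisted
print(allowed("IP1", "IP2"))  # False: Green->Red is not listed
```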
  5. Policy Specification for Cloud Foundry Applications
     • Define endpoints and EPGs (applications are represented by groups of endpoints).
     • Policy definition follows the nature of the applications, e.g. A_APP->A_DB 80 allow, B_APP->A_APP allow.
     • Envision policy as a graph of application connectivity (A_APP, B_APP, C_APP, A_DB, DB_Ext).
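The slide's two example rules can be sketched as a connectivity graph in Python; the edge set and the any-port convention below are illustrative, not the GBP syntax itself:

```python
# The slide's rules as a graph: nodes are applications, edges carry the
# allowed ports (None means any port). Unlisted edges are denied.
POLICY_GRAPH = {
    ("A_APP", "A_DB"): {80},   # A_APP->A_DB 80 allow
    ("B_APP", "A_APP"): None,  # B_APP->A_APP allow (any port)
}

def flow_allowed(src, dst, port):
    """Default-deny check of one flow against the connectivity graph."""
    if (src, dst) not in POLICY_GRAPH:
        return False           # no edge in the graph: deny
    ports = POLICY_GRAPH[(src, dst)]
    return True if ports is None else port in ports

print(flow_allowed("A_APP", "A_DB", 80))   # True
print(flow_allowed("A_APP", "A_DB", 443))  # False: only port 80 allowed
print(flow_allowed("C_APP", "A_DB", 80))   # False: no such edge
```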
  6. IO Module: the User's Perspective
     • Management interface: REST API, CLI / config file.
     • Interfaces: by interface type (Net, Tracing, Storage, ...).
     • Something runs in the kernel, something runs in user space; controllers live above.
     • Somewhere in the cloud there is a catalog of public IO Modules; users can search for and download IO Modules.
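The user workflow on this slide (search the catalog, download a module) might look roughly like the following Python sketch. The in-memory CATALOG and its fields are hypothetical stand-ins for the cloud-hosted repository, not a real IO Visor API:

```python
# Hypothetical catalog of public IO Modules, keyed by module name.
# Field names ("type", "version") are illustrative assumptions.
CATALOG = {
    "bridge": {"type": "Net", "version": "0.1"},
    "tracer": {"type": "Tracing", "version": "0.2"},
}

def search(interface_type):
    """Return names of catalog modules exposing the given interface type."""
    return sorted(name for name, meta in CATALOG.items()
                  if meta["type"] == interface_type)

def download(name):
    """Fetch a module's metadata (stand-in for a REST GET on the catalog)."""
    return CATALOG[name]

print(search("Net"))       # ['bridge']
print(download("bridge"))  # {'type': 'Net', 'version': '0.1'}
```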
  7. IO Module: the Developer's Perspective
     • An IO Module developer builds modules with the IO Visor SDK: the data plane (kernel) in Clang / P4, the control plane (user space) in Python, C, C++, Go, JS, ...
     • Users interact with the module through a user-space helper: a management interface (REST API, CLI / config file) and typed interfaces (Net, Tracing, Storage, ...).
     • New modules are published to the catalog of public IO Modules, somewhere in the cloud.
  8. IO Module: Graph Composition
     • The IO Visor Manager pulls kernel code from an open repo of "IO Modules" and attaches it at kernel attachment points, extending Linux kernel capabilities.
     • APIs and metadata are exposed to controllers in user space.
  9. Composing IO Modules
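One way to picture composing IO Modules into a graph, sketched in plain Python: each module is a stage over a packet, and composition wires stages together so that a drop in an early stage short-circuits the rest. The module names and the dict-based packet are illustrative only; real IO Modules run as eBPF programs in the kernel.

```python
# Each "module" is a stage that takes a packet (a dict here) and returns
# it, possibly transformed, or None to drop it.
def firewall(pkt):
    """Drop packets flagged for dropping; pass everything else."""
    return None if pkt.get("drop") else pkt

stats = {"packets": 0}

def counter(pkt):
    """Count packets that reach this stage."""
    stats["packets"] += 1
    return pkt

pipeline = [firewall, counter]  # composition: firewall feeds the counter

def run(pkt):
    for module in pipeline:
        pkt = module(pkt)
        if pkt is None:  # an earlier module dropped the packet
            return None
    return pkt

run({"src": "IP1"})
run({"src": "IP2", "drop": True})
print(stats["packets"])  # 1: the dropped packet never reached the counter
```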
  10. Policy Plugin with IO Visor
      • Each host (Garden/0, Garden/1) runs containers attached to a Linux bridge with a VXLAN device.
      • The hosts are connected by a VXLAN overlay; the policy boundary is enforced at each host.
  11. Thank You!
  12. Backup Slides
  13. Introducing the IO Visor Project
     • The future of Linux kernel IO for software-defined services.
     • Led by initial contributions from PLUMgrid (upstreamed since kernel 3.16).
     • An evolution of kernel BPF and eBPF (the Berkeley Packet Filter).
     • "IO Visor will work closely with the Linux kernel community to advance universal IO extensibility for Linux. This collaboration is critically important as virtualization is putting more demands on flexibility, performance and security. Open source software and collaborative development are the ingredients for addressing massive change in any industry. IO Visor will provide the essential framework for this work on Linux virtualization and networking." (Jim Zemlin, Executive Director, The Linux Foundation)
  14. IO Visor Project: What?
     1. Programmable data plane: a programmable data plane and development tools to simplify the creation of new infrastructure ideas.
     2. Open source and community: an open source project and a community of developers, enabling a new way to innovate, develop, and share IO and networking functions.
     3. Repository of "IO Modules": a place to share and standardize new ideas in the form of "IO Modules".
  15. IO Visor Project Use Cases. Example: Networking
     • IO Visor is used to build a fully distributed virtual network across multiple compute nodes.
     • All data plane components are inserted dynamically into the kernel.
     • No virtual or physical appliances are needed.
     • Example: master/examples/distributed_bridge (virtual network topology in kernel space).