The past decade saw a massive move toward virtualization, with monolithic applications running as VMs in private and public clouds.
The next decade will be marked by newer application architectures that include not only monolithic apps but also distributed architectures led by containerization and serverless. This movement has a significant impact on how apps are architected, networked, and secured.
2010-2020 was about migration from centralized data centers to hybrid clouds, with adoption of public cloud providers like AWS. The driver for this change was operational simplification and reduced time to market.
2020+: this trend continues with digital transformation, driven by the adoption of multiple cloud providers for reasons such as performance, risk reduction, and acquisitions. In addition, there is a newer emerging trend of running applications at the edge.
Traditionally, security policies and connectivity were managed at the network level and the HTTP level. There was growing adoption of microsegmentation, where policies were written and enforced at the network level.
None of this works anymore: with microservices and serverless, all traffic between apps consists of REST/gRPC API calls multiplexed on the same network port (e.g., HTTPS on TCP/443), so network-level security and connectivity controls are no longer very useful.
What if We Have Legacy Infrastructure in Place?
To recap: apps are changing, driven by critical business needs.
These changes, as noted above, have a significant impact on enterprise architectures: WHERE apps run and HOW those apps and data are connected, secured, and operated. The requirements of distributed apps and data are driving the following shifts:
Transformation across locations, application types, and connectivity
WHERE — Multiple Clouds and Edge: data gravity for performance reasons, risk reduction, edge AI use cases, etc.
HOW — Hybrid Applications: containerization of apps and serverless environments, as well as the need to connect/discover legacy apps, are driving new architectural, network, and security challenges
HOW — Layer 3 (Network) → Layer 7 (App/Proxy/API): connectivity is changing from traditional network-level (IP) access to app-to-app communication using REST/gRPC APIs, which are usually delivered over HTTPS (TCP/443)
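To make the Layer 3 → Layer 7 shift concrete, here is a minimal, hypothetical sketch (the rules and addresses are invented for illustration, not any vendor's API). Once every service talks over TCP/443, an IP/port ACL sees identical traffic for every API, while only a Layer 7 rule can tell two APIs apart.

```python
# Illustrative sketch (hypothetical rules, not a product API): two API
# calls hit the same IP:port, so an L3 ACL cannot distinguish them,
# while an L7 policy matches on the API identity (method + path).

L3_ACL = {("10.0.1.5", 443)}  # allow by (dest IP, dest port) only

L7_POLICY = [  # allow by API operation: method + path prefix
    ("POST", "/billing/v1/invoices"),
    ("GET", "/inventory/v1/items"),
]

def l3_allows(dst_ip: str, dst_port: int) -> bool:
    return (dst_ip, dst_port) in L3_ACL

def l7_allows(method: str, path: str) -> bool:
    return any(method == m and path.startswith(p) for m, p in L7_POLICY)

# Both a legitimate invoice call and a rogue admin call target
# 10.0.1.5:443, so the L3 ACL allows both equally...
assert l3_allows("10.0.1.5", 443)
# ...but the L7 policy permits the invoice API and blocks the admin API.
assert l7_allows("POST", "/billing/v1/invoices")
assert not l7_allows("DELETE", "/admin/v1/users")
```

The point is not the toy data structures but the match key: the L3 rule keys on where the packet goes, the L7 rule keys on what the request does.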
Apps Have Changed — So Have the Required Capabilities for Networking + Securing Them
App-to-App Networking — microservices and containers communicate with each other in addition to the end user, and across more locations, making reliability/performance an even greater consideration
Higher-layer Security — app-to-app and microservices architectures require zero trust at the API layer, because that's how they communicate
API-first — apps are written to be API producers and consumers from Day 1. This has fundamental implications for how those APIs are delivered, connected, and secured
To summarize — several technical and architectural trends (driven by business needs) are converging around the types of applications, their locality, and how they are delivered and accessed. Together, they create major challenges for traditional networking, security, and app-services infrastructures.
CDN/Edge 1.0
Assumed a limited number of origin sites
Designed to deal with “dumb” clients with bad connectivity options
Requires massive number of PoPs and immense storage
Cloud/Edge 1.5
Assumes multiple origin sites, manually interconnected
Still presumes clients might have bad connectivity
More storage-efficient but still requires massive number of PoPs
Distributed Cloud/Edge 2.0
Creates mesh of all origin sites
Assumes clients are modern and well connected
Does not require a high number of PoPs; supplements them with client assist, app distribution, and excellent peering
Distributed applications and data are what we call a distributed cloud. In this environment you can take advantage of compute/network/storage wherever it exists to offer the applications and services you need. We will go deeper into this in a moment, but it begs the question: what has changed that requires a distributed cloud? Wouldn't it make the operational challenges of multi-cloud worse? Those thoughts aren't wrong, but let me start with WHY we think this is happening. We believe a distributed cloud architecture is required to address the demands of modern apps.
Why today's infrastructure cannot support modern apps
Legacy vendors, limited approaches + mixed results
- Multiple components (software and services) that are disparate, not connected, and don't work well together (very complex to manage and maintain)
- Varying operations teams and models – teams are siloed, working on their own software/apps specific to their roles, mandates, and scope of work
- Varying configs and intent – different systems with different code bases and interfaces; hard to maintain unified policy and controls across disparate software/systems
- Siloed monitoring and visibility – nearly impossible to get end-to-end/layer-to-layer visibility across users, performance, security, etc.
When it comes to distributed apps and the need to process distributed data, traditional networking and security tools (and their operations) are unsustainable.
The result of all these trends and changing models is that organizations struggle to deliver, scale, and secure their applications, potentially leading to diminished business success and damaged customer relationships. You quickly start to see why delivering and securing extraordinary end-user experiences has become infinitely more complex. Companies are at an inflection point where this is no longer sustainable. This fragmented approach to application security and delivery is fundamentally flawed. And it all manifests as OPERATIONAL COMPLEXITY, caused by:
#1 - Inconsistency of technologies across environments is creating unsustainable technical and operational debt
Developers must get their apps to market fast – and they've been empowered to acquire and deploy the tools they see fit. However, the traditional multi-vendor (5+ products/services) approach to app networking and security for each target deployment location (on-prem, cloud, edge) is unsustainable. (NOTE: These 5+ services cover things like physical connectivity, network routing, application load balancing, API gateway, and security at both the network and application levels.)
There’s a separate cloud or management solution that each team needs to configure, operate & automate to make the whole delivery chain work.
#2 - Manual stitching together is not fast or scalable, and it leaves you vulnerable. DevOps teams are forced to hard-code the automation. And without visibility across these environments, when something goes wrong and you find out about it via an angry customer on Twitter, the mean time to resolution can be days or weeks, resulting in lost customers.
#3 - The attack surface and the sophistication of attacks have increased. Attacks have evolved in sophistication, often faster than security teams can keep up. The days of simply blocking an attacker's IP address are gone. There are brute-force attacks from multiple sources (classic DDoS) in the cloud and on-prem, plus more elaborate attacks targeted at specific system exploits (like feed streaming or content delivery at the edge). Bad actors have access to millions of usernames and passwords, and they are stealing from you and your customers.
#4 - Rich telemetry is trapped in silos, limiting insights into app performance and the end-user's digital experience. Left-to-right insights – gleaning telemetry across all the touch points from the application logic through to the end-user's digital experience, the way an Amazon does – are not possible in this current state. Telemetry and data are trapped in silos.
Since the systems are disjointed (even when two or more services come from the same vendor, for example, VMware), their policy management and visibility systems are all very different, and it's even harder to apply policy and observability across systems. This is a big and consistent struggle for the teams.
The other issue is that, given how these systems are deployed and managed, they lead to operational silos where delivering shared services becomes a challenge. Each cluster may require SecOps teams to provide shared policy rules, etc. This becomes hard to deliver without more automation, tooling, and processes.
This combination delivers a new solution for adaptive applications, helping you:
1) Reduce Operational Complexity: making it easier to deploy, scale, and maintain critical app network components and services
2) Enhance End-User Performance and Experience
3) Improve Time to Value/Service: less effort, less wasted time and resources, and more results
Detailed outcomes F5 Distributed Cloud can deliver:
Consolidated Services – lower TCO and reduced complexity; simplified, consistent operations/management with consistent tooling from on-prem to cloud to edge
SaaS-Based Operations – greater agility and scale; lower OPEX and reduced time to market
Policy and Observability – centralized and unified, capturing network, app and user telemetry across deployments
Multi-tenant platform – shared with separation of duties, checks and balances
Security – robust multi-layered security services (L3-L7), including access controls across multiple environments
Multi-Cloud Networking: App-level networking across clouds with common services, integrated security and end-to-end visibility
1) Connecting Workloads – Location-to-Location
Universal "build once, deploy globally" network
Rapid deployment, easy operation
Uniform multi- and hybrid- cloud connectivity
Built-in multi-layer security, no integration needed
Common configurability and visibility
2) Connecting Workloads – Resource-to-Resource
Easily link workloads across and within clouds
SaaS based Kubernetes ingress-egress controller
Integrated load balancing, API gateway
Integrated multi-layer app and API security
Common control plane, policies, and observability
Application Delivery: Cloud and Edge – automated deployment and a cloud-native environment for Kubernetes workloads on the network edge
1) Running Workloads – Moving Resources Closer
Run microservice-based apps wherever you want
Distributed execution on cloud, DC, or edge
Load-balances workload location, not just traffic
Secure e2e multi-layer policy and secret sharing
Looks like Kubernetes, runs like it's globally local
Distributed Cloud Console – SaaS-based centralized controller that manages the lifecycle of service components and provides a common point of control for all applications and services, including analytics
Distributed Cloud Mesh – routing and services engine: a data plane that runs anywhere and enables network stitching and other services, including comprehensive app security
Distributed Cloud Stack – platform service for distributed Kubernetes clusters
Delivers:
Full app delivery stack and services
End-to-end traceability and observability
Full data path programmability
Portable between edge, private and public clouds with global service mesh
Distributed control plane for vK8s (virtual K8s)
Infrastructure as code
Secrets management providing security controls for App2App
GitOps for fleets between edge, private and public clouds
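The GitOps-for-fleets item above boils down to a reconcile loop: desired state is declared once (as it would live in a Git repo) and continuously diffed against each site's actual state, with only the differences applied. A minimal sketch, assuming an invented state shape and site names (nothing here is the product's actual API):

```python
# Hypothetical GitOps-style reconcile sketch: a single desired state
# (as it would come from Git) is diffed against each site in a fleet,
# and only the deltas are applied. Sites and fields are illustrative.

desired = {"app": "checkout", "replicas": 3, "version": "v1.2"}

fleet = {
    "edge-site-1": {"app": "checkout", "replicas": 3, "version": "v1.1"},
    "aws-vpc-1":   {"app": "checkout", "replicas": 2, "version": "v1.2"},
}

def diff(actual: dict, target: dict) -> dict:
    """Return only the keys that must change to reach the target state."""
    return {k: v for k, v in target.items() if actual.get(k) != v}

def reconcile(fleet: dict, target: dict) -> dict:
    """Apply the diff to every site; return what changed per site."""
    changes = {}
    for site, actual in fleet.items():
        delta = diff(actual, target)
        if delta:
            actual.update(delta)  # stand-in for a real deploy/apply step
            changes[site] = delta
    return changes

changes = reconcile(fleet, desired)
# edge-site-1 only needed a version bump; aws-vpc-1 only a scale-up.
assert changes["edge-site-1"] == {"version": "v1.2"}
assert changes["aws-vpc-1"] == {"replicas": 3}
# After reconciliation, every site converges to the desired state.
assert all(state == desired for state in fleet.values())
```

The design point is that each site receives only its own delta, which is what makes one declaration in Git scale across edge, private, and public cloud fleets.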
F5 Distributed Cloud allows you to manage all of your sites as a “logical cloud”
Portable platform that spans multiple sites/clouds
Private backbone connects all sites
Those sites are connected through nodes (Distributed Cloud Mesh and Distributed Cloud App Stack)
Nodes can be virtual machines, run on hardware within customer data centers and sites, or be cloud instances (e.g., EC2)
Nodes provide vK8s (virtual K8s), network, and security services
Services are managed through F5 Distributed Cloud's SaaS-based console
Reduce Operational Complexity & Improve visibility:
Simplified network + security vendor stack
Multitenancy with self-service improves productivity & collaboration
Centralized observability across your entire environment
Improve Time to Service
Consolidated service with common API and networking + security capabilities
Improved developer experience
Enhanced End User Experience
App workloads offloaded anywhere, closer to the interaction
Reduced latency for apps and APIs
Avoid unwanted interruptions with built-in security and intelligent traffic routing
Reduce Operational Complexity:
Increased Productivity and Cost Optimization ---- NetOps can speed up migration to infra-as-code (SaaS-based operations) with built-in automation assistance, lifecycle management, and end-to-end visibility.
Potential OPEX Reductions --- via a consolidated, simplified vendor stack (network + security) and secure global connectivity (cutting transit/network costs)
SaaS-based operation --- with a single pane-of-glass for policy, lifecycle management and end-to-end observability
Multi-tenancy and Self-Service --- self-service with separation of duties allows developers, DevOps, NetOps, and SecOps to openly collaborate (e.g., NetOps can deploy VoltMesh in their services VPC and configure networking + security while DevOps configures DNS, load balancing, and API gateway on the same deployment, with their global, private, service-rich network available within minutes)
Simplify Infrastructure and Operations, Lower complexity --- SaaS-managed VoltMesh nodes in your cloud VPC with the option to use our global network helps you deliver a secure multi-cloud network without worrying about complex network ops.
Consistent platform with SaaS based operations across heterogeneous infrastructure reduces complexity and cost
Seamless Scalability --- globally distributed control plane with resource orchestration across distributed clouds/clusters enables massive scale
Faster Deployment & Simplified Ops --- DevOps and developers can significantly simplify deployment operations of one or more Kubernetes/K8s ingress and egress controllers with our SaaS-based lifecycle management and multi-cluster control plane.
Simplified Infrastructure Ops --- Deploying directly to Volterra’s global network allows you to focus on your apps, while we manage the K8s control plane, worker nodes, security, DNS and load balancing
Improve Time to Service:
Significantly faster deployments ---- Accelerate cloud migration or adoption of a new cloud provider using a consolidated service that exposes the same API and networking + security capabilities across any cloud provider.
Rapid Service Delivery --- SaaS-based deployments and lifecycles management across clouds/clusters with centralized intent and policy increases agility
Improved Developer Experience --- increase productivity by delivering APIs without VPNs or complex firewall configs, giving simple and secure access to backend services to accelerate testing, plus the ability to expose services for inbound testing
Leverage Automation, including native support for developer tools --- support the automation needs of app teams with Volterra public APIs, Terraform providers, and vesctl, including identity and access management and multi-tenancy that give app teams self-service capability. Users can keep their existing CI/CD tools like CircleCI, Spinnaker, and GitLab.
Enhance End User Experience:
Dramatically Faster Apps --- offloading cloud workloads to our global network of edge PoPs and/or remote-site or customer-edge locations can help you reduce app latency — resulting in a more powerful user experience
Maximum Reliability & Performance --- Your apps can be automatically deployed across our global network, leveraging built-in app security and intelligent traffic routing around failures to be delivered with maximum uptime and resiliency
Increased Uptime and Reliability --- deliver highly available services across clusters or clouds. Use our global network to connect across clusters and expose services to the Internet - with built-in L3-L7 DDoS mitigation, WAF, DNS, and TLS certificate management, plus end-to-end encryption for compliance.
Maximum Security with Zero Trust --- Implement multi-layer security in and across clusters, including ingress + egress, WAF and DDoS mitigation. Automate zero-trust at the API-level with API Discovery and policy-based control.
Reduced Risk --- uniform identity, zero trust security and centralized observability with continuous verification removes blind spots and reduces risk
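The zero-trust-at-the-API-level idea above means every request is verified on its own merits: the caller's identity is checked first, and then whether that identity is allowed to invoke that specific API operation. A minimal sketch, with tokens, identities, and grants invented purely for illustration (not the product's actual policy model):

```python
# Hypothetical per-request zero-trust check: a call must present a
# verified identity AND be authorized for that specific API operation.
# Tokens, identities, and grants below are invented for illustration.

VALID_IDENTITIES = {"token-cart": "svc-cart", "token-batch": "svc-batch"}

# Which service identity may call which (method, path prefix).
API_GRANTS = {
    "svc-cart":  [("POST", "/payments/v1/charge")],
    "svc-batch": [("GET", "/reports/v1/")],
}

def authorize(token: str, method: str, path: str) -> bool:
    identity = VALID_IDENTITIES.get(token)  # verify identity first
    if identity is None:
        return False                         # unknown caller: deny
    return any(method == m and path.startswith(p)
               for m, p in API_GRANTS.get(identity, []))

assert authorize("token-cart", "POST", "/payments/v1/charge")
assert not authorize("token-cart", "GET", "/reports/v1/daily")    # wrong API
assert not authorize("bad-token", "POST", "/payments/v1/charge")  # no identity
```

Note the default-deny posture: being on the network (or even holding a valid identity) grants nothing by itself; each API operation must be explicitly permitted.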
Customer overview:
Industry: B2C – online gambling and poker website
Tech sophistication: Private data center and no cloud environments
Buyers: Private data center manager; other stakeholders included public cloud manager, DevOps, and CTO
Pain points: Hacker attack at private data center
Project-at-a-glance:
Initial engagement was in response to a DDoS attack on their private data center; we helped them develop an immediate DDoS mitigation approach via VoltMesh (infrastructure play)
After the initial engagement built trust, the customer expanded how VoltMesh was used: over time it replaced their private DC, internet service provider, and WAF, and supported a transition to a public cloud environment along with back-up/duplication of key business functions to reduce risk
Primary use case: multi-cloud networking and security
Results – Critical outcomes in BOLD:
“To provide the best playing experience for our players and the most secure environment we use Volterra's VoltMesh service and global private backbone” --- Head of Operations
Increased collaboration across siloed technical functions
Ability to scale out / flex capacity to respond to business needs
End-to-end security -- Reduced risk across multiple environments
Vendor consolidation (replaced 3-5 vendors)
Shift from Capex to Opex
Customer overview:
Industry: Information technology and electronics
Tech sophistication: Edge, private data center/cloud and AWS public cloud environment
Buyers: Cloud engineering team (VP/GM and Director)
Pain points: Operational bottlenecks / agility of complex workloads in a distributed environment (15K - 54K edge devices); large engineering team facing challenges with a homegrown solution
Project-at-a-glance:
Primary use case (Modern Apps in Distributed Cloud) focused on application delivery and lifecycle management, and security for a large distributed customer edge environment
Initial engagement was set to develop customer edge solution for digital signage and public surveillance; distributed app delivery using VoltMesh and VoltStack
Results – Critical outcomes in BOLD:
Simplified operations across large distributed edge environment
Ability to scale out at the edge based on business needs
End-to-end security and visibility (to the edge)
Reduced risk across complicated edge environments
Vendor consolidation via integrated stack (driving simplified operation and lower TCO)
Agility/Decreased time to service