Where many business segments quickly succumb to consolidation, the technologies that comprise the Cloud are instead organizing to interoperate.
In this session we’re going to look at ways to orchestrate complex collaborative environments, focusing on operating multi-server / multi-Cloud infrastructures.
Agenda:
Innovation and consolidation
Innovation in the Cloud industry
Microservices
The flipside of microservices
Orchestrating for the microservices ecosystem
Orchestrating for reliability
Disclaimer: I do not own the rights to images/graphs used in this presentation. Graphs on slides 11&12 from @berndruecker in https://www.slideshare.net/BerndRuecker/wjax-2017-microservice-collaboration.
7. The tech industry has a long history of market concentration.
► IBM in mainframe computers
► Microsoft in PC operating systems
► SAP & Oracle in enterprise applications
Is the Cloud Industry already consolidated?
8. Cloud vendors seem to be dedicated to playing together nicely – for now
► Collaboration makes sense to keep R&D costs low.
► Cloud Platforms are becoming powerful integrated systems.
► All major providers support nearly all dev environments (AWS has a growing Microsoft business, 40% of Azure runs on Linux, etc.).
11. Basic idea behind microservices:
Microservices break down software into functional components that interoperate / communicate to create an overall application.
12. The complexity lies in orchestrating microservices:
Microservices do not live in isolation; their complexity lies in the large-scale environment or ecosystem they live in.
Building, standardizing, and maintaining this infrastructure in a stable, scalable, fault-tolerant, and reliable way is essential for successful microservice operation.
14. Microservices ecosystem – four layers:
► Microservices – the individual services themselves
► Application Platform – DevOps teams, self-service dev tools, etc.
► Communication – networks, DNS, RPCs, API endpoints, service discovery / registry, load balancing, etc.
► Hardware – actual machines, servers, physical computers (Amazon EC2, Google Cloud Platform, Microsoft Azure, etc., or a private DC)
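The service discovery / registry piece of the communication layer can be sketched minimally: services register their endpoints under a name, and clients look them up instead of hard-coding addresses. This is an illustrative toy, not any particular registry (Consul, etcd, etc.); all names and addresses are made up.

```python
# A toy service registry: services register their endpoints,
# clients discover them by name. Names/addresses are illustrative.

class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def register(self, name, endpoint):
        # A service announces one of its instances under a logical name.
        self._services.setdefault(name, []).append(endpoint)

    def discover(self, name):
        # A client asks for all live instances of a named service.
        endpoints = self._services.get(name)
        if not endpoints:
            raise LookupError(f"no instances of {name!r} registered")
        return endpoints

registry = ServiceRegistry()
registry.register("payments", "10.1.0.4:8080")
registry.register("payments", "10.1.0.5:8080")
print(registry.discover("payments"))  # ['10.1.0.4:8080', '10.1.0.5:8080']
```

A real registry would add health-aware eviction and TTLs; this only shows the lookup contract that frees services from hard-coded peers.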
15. Other risk factors in the microservice ecosystem:
► Network failures (EC2 outages, etc.)
► Security breaches – the fewer the providers, the higher the risk
► Vendor lock-in:
- Lock-in at the service layer (AWS Lambda, IBM Watson)
- Cloud vendors will stop competing on price once they’ve reached critical mass
► Cutbacks on innovation
17. 1/ AVAILABILITY
► Services need to be available locally
► Services need to be available globally / externally
On a public internet that is not smart enough to find the best available path.
On a public internet full of bots (consuming real user traffic) and DDoS attacks.
19. live.cedexis.com
+15 billion measurements per day
+1 billion daily end-user sessions from +50,000 networks around the world
Throughput ranging from 1 to 10 on a single provider along the day
1,000 outages per CDN per day
10:21 PM – CDN1 167 ms
10:22 PM – CDN2 94 ms
10:22 PM – Cloud 1 128 ms
10:23 PM – Cloud 2 230 ms
10:26 PM – DC 1 153 ms
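Measurements like these are only useful if routing decisions follow from them. As an illustrative sketch (provider names and latency samples are made up, echoing the list above), here is the core selection step: pick the provider with the lowest median latency from recent samples.

```python
# Pick the best provider from recent latency samples (all values
# illustrative, mirroring the timestamped measurements above).

def best_provider(samples):
    """Return the provider whose recent latencies (ms) have the lowest median."""
    def median(values):
        s = sorted(values)
        mid = len(s) // 2
        return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2
    return min(samples, key=lambda p: median(samples[p]))

samples = {
    "CDN1":    [167, 171, 160],
    "CDN2":    [94, 101, 90],
    "Cloud 1": [128, 133, 125],
    "Cloud 2": [230, 210, 240],
    "DC 1":    [153, 149, 158],
}
print(best_provider(samples))  # CDN2 – lowest median latency
```

Using a median over several samples, rather than the last reading, keeps a single micro-outage spike from flapping traffic back and forth.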
21. 2/ SPEED
► Services need to be fast locally
► Services need to be fast globally / externally
On a public internet that is not built to find the fastest path.
22. 3/ WORK ALL THE TIME, UNDER ANY CONDITION
► Services need to work all the time, everywhere – and every service has to be designed to always work
► Services need to work under heavy pressure
When a lot of traffic starts flowing in (need to scale up)
Under attack (DDoS, etc.)
When services depend on each other (none of them should be a SPOF), etc.
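One common way to keep a dependency from becoming a SPOF is to wrap every remote call in retries and a fallback. A minimal sketch – the flaky dependency is simulated here, and in real code you would catch specific timeout / connection errors rather than every exception:

```python
# Wrap a remote call in retries and a fallback so a slow or failing
# dependency never becomes a single point of failure. `fetch` stands in
# for any remote call; the flaky dependency below is simulated.

def call_with_fallback(fetch, retries=2, fallback=None):
    for attempt in range(retries + 1):
        try:
            return fetch()
        except Exception:  # in real code: catch timeout / connection errors
            pass
    return fallback  # degrade gracefully instead of propagating the outage

# Simulate a dependency that times out twice, then recovers.
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise TimeoutError("dependency slow")
    return "fresh data"

print(call_with_fallback(flaky))                                  # fresh data
print(call_with_fallback(lambda: 1 / 0, fallback="cached data"))  # cached data
```

Serving stale cached data is usually better than surfacing the outage to the end user; production systems typically add a circuit breaker on top so a dead dependency isn't hammered with retries.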
24. 1/ LOCAL RELIABILITY
► Provide a local fallback / alternative when the main endpoint is slow / unavailable / a source of errors.
Multiple endpoints for each critical microservice
Local load balancing
Local health-check monitoring
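Put together, local reliability boils down to rotating requests across healthy endpoints and falling back when none remain. A toy sketch – endpoint addresses are illustrative, and real health checks would run as periodic probes rather than manual mark_down/mark_up calls:

```python
import itertools

# Round-robin over locally healthy endpoints; fall back to a designated
# backup when every primary fails its health check. Addresses illustrative.

class LocalBalancer:
    def __init__(self, endpoints, fallback):
        self.endpoints = endpoints
        self.fallback = fallback
        self.healthy = set(endpoints)          # health-check state
        self._rr = itertools.cycle(endpoints)  # round-robin cursor

    def mark_down(self, endpoint):
        self.healthy.discard(endpoint)

    def mark_up(self, endpoint):
        self.healthy.add(endpoint)

    def pick(self):
        # Try each endpoint once in round-robin order, skipping unhealthy ones.
        for _ in range(len(self.endpoints)):
            candidate = next(self._rr)
            if candidate in self.healthy:
                return candidate
        return self.fallback  # local fallback when no primary is healthy

lb = LocalBalancer(["10.0.0.1", "10.0.0.2"], fallback="10.0.0.9")
lb.mark_down("10.0.0.1")
print(lb.pick())  # 10.0.0.2 – the unhealthy endpoint is skipped
lb.mark_down("10.0.0.2")
print(lb.pick())  # 10.0.0.9 – all primaries down, local fallback used
```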
25. 2/ GLOBAL RELIABILITY
► Orchestrate a multi-homed infrastructure at the global level too
► Use a global load balancer to route traffic away from bottlenecks and outages based on:
Global (external) monitoring (health checks)
Real end-user monitoring
Load / error feedback (directly from the server / PoP / region)
► Automate your traffic management with a software-defined solution
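The global load-balancing decision can be sketched in miniature: keep only the regions that pass their health checks, then route each client network to the region with the best real-user latency. All region names, networks, and numbers below are illustrative, not real Cedexis data.

```python
# A global load-balancing decision in miniature: filter regions by
# health-check status, then route on real-user latency. Illustrative data.

def route(client_network, region_healthy, rum_latency):
    healthy = [r for r, ok in region_healthy.items() if ok]
    if not healthy:
        raise RuntimeError("no healthy region available")
    # Among healthy regions, pick the one fastest for this client network.
    return min(healthy, key=lambda r: rum_latency[client_network][r])

region_healthy = {"us-west": False, "us-east": True, "eu-west": True}
rum_latency = {  # median RUM latency (ms) per client network, per region
    "ISP-A": {"us-west": 40, "us-east": 85, "eu-west": 150},
    "ISP-B": {"us-west": 160, "us-east": 90, "eu-west": 35},
}
print(route("ISP-A", region_healthy, rum_latency))  # us-east (us-west is down)
print(route("ISP-B", region_healthy, rum_latency))  # eu-west
```

Note that ISP-A's fastest region is unhealthy, so traffic is steered to its next-best choice – the health-check filter always wins over raw latency.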
26. 2/ GLOBAL RELIABILITY
Multi-server / multi-region / multi-Cloud / multi-CDN or hybrid architectures:
• Local load balancing to select the optimum server / instance
• Continuously updated RUM & APM monitoring
• Global health checks / monitoring
• Software-defined, automated global load balancing
[Diagram: a continuous, self-correcting loop – real user monitoring (RUM cloud scoring of availability & latency), app performance monitoring (CPU & I/O), local health checks (monitor data center & application health), local load balancing (select optimum server), global real-time load balancing (select optimal cloud / cloud region)]
27. Make sure your services are multi-homed & orchestrated so that they can collaborate to provide a fast, reliable service – all the time.
Good morning or good afternoon, depending on where you are in the world! I’m Aude, Cloud Evangelist at Cedexis. In this webinar today we’re going to look at ways to orchestrate complex collaborative environments, focusing on operating multi-server / multi-Cloud infrastructures.
Historically, industries have tended towards consolidation: because innovation was heavily technological, because entering an established market and expanding beyond a few percentage points of market share was too costly, because economies of scale made more sense… Just look at the airline or car manufacturing industries – there are only a handful of actors left!
But if we look beyond technology-heavy industries, there are even worse examples. Have you looked at how many companies are behind your favorite morning cereals or ice cream? I’ll give you a hint – I can bet they are owned by one of these 10 corporations.
However, in our technology-obsessed era, the usual innovation cycle has been disrupted (we’ve come full circle – technology disrupting itself!). Apple disrupted the music industry with the iPod, and so many other industries afterwards. Tesla might upend car manufacturers and even battery makers. Uber, Airbnb… there are many examples of innovators disrupting established industries today – and by disruption we really mean they’re killing off the established actors. They grow fast, they raise a ton of money, and they become too big to buy. What’s really changed is the current investment race in Silicon Valley – there’s so much money flowing in that innovators become too big to acquire, and consolidation becomes nearly impossible. Did you see how much Salesforce just spent on MuleSoft? $6.5B!
So where does that leave us in the Cloud industry?
If you look at the market share of the top three vendors, it looks like the Cloud industry is already consolidated. AWS, Google and Azure collectively own more than 75% of the Cloud platform market. The same goes for Microsoft Dynamics, Oracle and Salesforce in the customer service and sales automation market. Cloud vendors should be competing heavily against each other, given there’s little room to gain market share other than taking it from the other top-three actors.
But that’s not what we see happening. On the contrary, cloud vendors seem to be dedicated to playing together nicely – at least for now. Just look at the video streaming ecosystem – you’re practically going to use a different vendor for encoding, packaging, CRM, player, analytics, traffic steering… Collaboration makes sense – each actor focuses on one core technology. If each had to develop every brick separately, it would be much too costly in R&D.
Another good example is marketplaces. If you look at AWS’s or Azure’s marketplaces you’ll see how much collaboration there is – there are even products that compete with the cloud platforms’ own offerings. These major vendors have become integrated systems.
Cloud platforms are simply adapting to the way users are consuming IT resources. It’s not that they don’t want to compete against each other, it’s that applications are now developed as microservices, a collection of technologies each developed and maintained by a multitude of actors.
Microservices have gained traction because they allow developers to make use of code and technologies that have been perfected externally. They aren’t plagued by the same scalability challenges posed by monolithic apps - they are optimized for scalability, efficiency and for developer velocity.
I’m sure every one of you here knows this, but I’ll say it nonetheless – the basic idea behind microservices is to break down software into functional components that can be scaled up or down to accommodate user needs. This of course fits nicely with the Cloud industry’s capabilities – it would be much more difficult to do on-prem.
The complexity doesn’t reside in moving monolithic apps to microservices, nor even in building these microservices. I’m not saying it’s easy, of course! But the real complexity lies in building a successful collaborative environment and infrastructure to run these microservices on.
The infrastructure has to sustain the microservice ecosystem. The goal of all infrastructure engineers and architects must be to remove the low-level operational concerns from microservice development and build a stable infrastructure that can scale, one that developers can easily build and run microservices on top of. And of course that’s easier said than done!
We can look at the microservice ecosystem as four different layers, where the lower three are the infrastructure: the hardware layer, the communication layer and the application platform. The top layer is where individual microservices live. A microservice will send some data in a standardized format over the network to another service (or perhaps to a message broker or another microservice’s API endpoint). The interoperability of these various layers, and of the actors composing each layer, is where most difficulties happen.
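That standardized exchange between two microservices can be sketched as follows. The transport (HTTP, a message broker) is stubbed out as a plain function, and the service names and payload shape are purely illustrative – the point is only the contract: one service serializes a standardized JSON payload, the other parses it and responds in kind.

```python
import json

# Two microservices exchanging a standardized JSON payload over the
# communication layer. Transport is stubbed; names are illustrative.

def pricing_service(raw_request: bytes) -> bytes:
    """Callee: parse the standardized payload, compute, respond."""
    req = json.loads(raw_request)
    total = sum(item["qty"] * item["unit_price"] for item in req["items"])
    return json.dumps({"order_id": req["order_id"], "total": total}).encode()

def checkout_service(order_id, items, transport):
    """Caller: serialize the request and send it over whatever transport
    the communication layer provides (HTTP, broker, ...)."""
    payload = json.dumps({"order_id": order_id, "items": items}).encode()
    return json.loads(transport(payload))

resp = checkout_service("o-42",
                        [{"qty": 2, "unit_price": 9.5},
                         {"qty": 1, "unit_price": 5.0}],
                        transport=pricing_service)
print(resp)  # {'order_id': 'o-42', 'total': 24.0}
```

Because each side only depends on the payload schema, either service can be re-implemented, scaled, or moved to another provider without touching the other – which is exactly the interoperability the layers must sustain.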
Even if you manage to solve interoperability between the different layers, your infrastructure is still at risk. Networks fail, vendors get DDoSed. And if you decide to pick one big vendor to run your services on, you’ll be vendor-locked. What happens when that vendor stops competing on price? Or stops investing in innovation in favor of opening new PoPs in regions of the world you have no interest in?
So, what should you look out for when orchestrating your microservices?
First and foremost, you need vendors with good network availability. Your services need to be available locally as well as globally. The internet is not built to help your content find the fastest path; bots and DDoS attacks consume bandwidth, and outages happen all the time.
You may have heard about AWS’ S3 outage last year, or EC2’s major failure a few months ago. Worldwide outages now make headlines because so many companies rely on cloud services to operate. But what newspapers don’t report on are the ‘regular’ outages, occurring at the infrastructure or network level, that take down access to an instance or to entire regions.
At Cedexis we have a real-user monitoring tool called Radar. We have JS tags deployed on thousands of websites, testing Cloud services and network performance directly from end-users. We take on average 15 billion measurements per day – allowing us to see the micro-outages happening all over the world, in real time.
And I can tell you there are many outages happening every day!
As an example, a couple of weeks ago we saw that one of Azure’s US west regions went down. Did you hear about it? For end-users, that meant at best a degraded user experience and at worst a complete service interruption.
But even under “normal” conditions, response times to cloud providers fluctuate constantly.
Once you’ve looked at your vendors’ availability, you also want to make sure they are fast.
Fast locally: when your microservices are deployed within a controlled local environment.
Fast globally: when multiple microservices are delivered over clouds or SaaS solutions. There are huge performance differences between Cloud platforms, depending on where you’re connecting from. Even within one cloud platform, the very same AWS EU West region performs very differently depending on whether you’re connecting from BT, Sky, or TalkTalk.
In the DevOps world, the concept of “site under maintenance” is long gone. Your services simply cannot be unavailable anymore. They need to work all the time, everywhere – and ideally load in under 3 seconds anywhere in the world. They also need to work under pressure, whether that’s a DDoS attack, heavy traffic, or anything else.
So how do you actually orchestrate for high availability, speed, and resiliency?
You need visibility into the condition of your infrastructure to make sure you are sending users to an endpoint that is available – and to an endpoint that can handle the load. Make sure you have multiple endpoints for the microservices to rely on: different servers, different physical locations, and/or different ISPs/connections to the internet.
Sometimes apps or microservices look available from the outside but are down, or close to overloaded, on the inside. Local health monitoring provides critical safeguards such as high-frequency checks, load feedback from the servers, circuit breakers, and local retries.
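To make the circuit-breaker idea concrete, here is a minimal Python sketch (the class, thresholds, and names are illustrative, not any specific library): after repeated failures it stops calling a backend, then allows a trial call once a cooldown expires.

```python
import time

class CircuitBreaker:
    """Stop calling a failing backend; retry after a cooldown."""

    def __init__(self, failure_threshold=3, reset_timeout=10.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed (healthy)

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                # Fail fast instead of piling load onto a sick backend.
                raise RuntimeError("circuit open: backend considered down")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success resets the failure count
        return result
```

A local retry policy would sit in front of this: retry transient errors a bounded number of times, and let the breaker open when failures persist.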
To orchestrate these multiple endpoints we advocate using a local load-balancer (such as NetScaler, NGINX, HAProxy, Varnish, etc.) to route traffic effectively across these servers or local instances. This load-balancer should take into account the data flowing from your monitoring tools in order to make intelligent traffic-management decisions.
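The core decision such a load-balancer makes can be sketched in a few lines of Python (the data shape is a made-up example; real products like NGINX or HAProxy implement this natively): pick only healthy servers, and weight traffic away from the loaded ones.

```python
import random

def pick_server(servers):
    """Route to a healthy server, favouring the least-loaded ones.

    `servers` maps name -> {"healthy": bool, "load": float in [0, 1]},
    as reported by local health monitoring.
    """
    healthy = {name: s for name, s in servers.items() if s["healthy"]}
    if not healthy:
        raise RuntimeError("no healthy servers available")
    names = list(healthy)
    # Weight inversely to reported load so busy servers get less traffic;
    # the small constant keeps a fully loaded server barely eligible.
    weights = [1.0 - healthy[n]["load"] + 0.01 for n in names]
    return random.choices(names, weights=weights)[0]
```

The same shape scales up: swap "server" for "region" and the local decision becomes a global one.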
Similarly, real-user monitoring and network information is key to global reliability.
We are strong advocates of multi-homed infrastructures – not just at the server / instance level, but using multiple datacenters, multiple cloud regions, multiple clouds or CDNs in order to help make your service 100% available for end-users.
We also recommend external health checks (as frequent as once per second for critical services requiring high availability) as well as real-user monitoring.
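An external availability probe can be as simple as the following Python sketch (a hypothetical stand-alone checker, not our actual tooling): hit the endpoint from outside, record success and response time, and repeat on a short interval.

```python
import time
import urllib.request

def check_endpoint(url, timeout=1.0):
    """One external probe: returns (available, response_seconds)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = 200 <= resp.status < 400
    except Exception:
        ok = False  # timeout, DNS failure, connection refused, 5xx, ...
    return ok, time.monotonic() - start

def monitor(url, interval=1.0, probes=5):
    """Probe `url` every `interval` seconds, collecting a short history."""
    history = []
    for _ in range(probes):
        history.append(check_endpoint(url))
        time.sleep(interval)
    return history
```

In practice you would run probes like this from several vantage points, since an endpoint can be reachable from one network and dark from another.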
Why RUM? Because it provides real network information and lets you keep an eye on the outages and peering issues mentioned earlier. It is particularly useful for fully dynamic, synchronous transactions (recommendation tools, booking engines, and the like) as well as for cached/CDN-based content served in multiple countries and locations.
Load/error feedback allows your global load-balancer to automatically remove a PoP when it is overused, close to unavailability, or generating too many errors.
The advantage of combining the external (real-user based) and internal (server-side) views is that you can pretty much let your infrastructure manage itself – you get to sleep at night again!
First, you of course check that your datacenters or instances are up and running; then you automatically layer on network data about how fast and available they are from the outside. Here we have three cloud regions that all look green from an external network perspective.
Now, this is where internal data comes in – load metrics tell you which regions are over-utilized. In our example, region C, using NetScaler, looks to be the best choice. So the global load-balancer, taking all of this information into account in real time, should send traffic over to that region – reducing in turn the load on the other regions. Over time, this enables a continuous, self-correcting cycle across your different endpoints.
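The global decision just described can be sketched as a small Python function (region names, fields, and the load threshold are illustrative): require external reachability, discard overloaded regions, then prefer low load and low round-trip time.

```python
def choose_region(regions, load_limit=0.8):
    """Pick the region that is reachable externally and least loaded internally.

    `regions` maps name -> {"reachable": bool, "rtt_ms": float, "load": float}.
    External RUM data supplies `reachable` and `rtt_ms`; internal feedback
    supplies `load` (fraction of capacity in use).
    """
    candidates = [
        (name, r) for name, r in regions.items()
        if r["reachable"] and r["load"] < load_limit
    ]
    if not candidates:
        raise RuntimeError("no eligible region: fail over or shed load")
    # Prefer low load first, then low round-trip time seen by real users.
    name, _ = min(candidates, key=lambda kv: (kv[1]["load"], kv[1]["rtt_ms"]))
    return name
```

Re-running this decision as fresh measurements arrive is what produces the self-correcting cycle: traffic drains from a loaded region, its load falls, and it becomes eligible again.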
To conclude: make sure your services are multi-homed, so that you can select the server, region, or cloud that is most available and performs best. Use internal and external monitoring data to feed network health information to your local and global load-balancers. And sleep at night again, knowing that your infrastructure is a self-healing, reliable machine.
If you’d like more information on multi-Cloud or hybrid-Cloud architectures, please reach out at aude@cedexis.com or sales@cedexis.com.