This talk was given in Feb 2020. Sergey and I co-presented at CTO Forum on Microservices and Service Mesh: how they relate, requirements, goals, best practices, and how DevOps and Agile have converged on a common set of features for service meshes and gateways around observability, feature flags, etc.
Lessons learned from over 25 Data Virtualization implementations | Denodo
Watch full webinar here: https://bit.ly/327fzQ0
If you have been part of the Denodo community for a while, you likely have heard of SimplicityBI (now part of the BDO family). This expert firm located in Canada has done over 25 implementations of Denodo across North America in over 10 different industries.
Watch full webinar here: https://buff.ly/2MwDyhq
The use of Data Virtualization as a global delivery layer means that Denodo is a critical component of the data architecture. It cannot fail; it needs to be fault tolerant and perform as designed. In this context, enterprise-level monitoring is key to making sure the virtual layer is in good health and to proactively detecting potential issues. Fortunately, Denodo provides a full suite of monitoring capabilities and integrates with leading monitoring tools like Splunk, Elastic, and CloudWatch.
Attend this session to learn:
- How to configure the key global parameters of the Denodo server
- How to integrate Denodo with enterprise monitoring solutions like Splunk and CloudWatch
- Key metrics to monitor
This presentation explains the Integrator's Dilemma and how the SnapLogic Integration Cloud can help.
To learn more, visit: http://www.snaplogic.com/.
Apache Mesos, Apache Hadoop, Apache Spark + Custom Enterprise Applications: Combined, this stack is greater than the sum of its parts. Mesos can manage resources across an entire data center, Hadoop provides a distributed data store and scalable data processing, and Spark delivers great in-memory and disk-based data processing performance as well as streaming capabilities. Couple all of that with custom enterprise applications, and the data center turns into a well-oiled machine. Together, this software stack delivers unlimited flexibility for the entire data center.
Jim Scott, Director of Architecture and Enterprise Strategy | Strata + Hadoop World | Barcelona, Spain, November 2014
Organizations looking to the cloud now have more vendor offerings and architecture choices available to them than ever before. To correctly select and implement the most appropriate cloud-based DBMS architecture for their shops, technology pros must create and execute a well-thought-out, detailed analysis of the competing offerings.
In addition, they must consider the impact cloud-based DBMS systems, like any new architecture, will have on their support environment. Changes to policies and procedures, security controls, staff roles and responsibilities, change management processes, and support documentation must be evaluated.
Webinar: DataStax Enterprise 6: 10 Ways to Multiply the Power of Apache Cassa... | DataStax
Today’s customers want experiences that are contextual, always on, and above all — delightful. To be able to provide this, enterprises need a distributed, hybrid cloud-ready database that can easily crunch massive volumes of data from disparate sources while offering data autonomy and operational simplicity. Don’t miss this webinar, where you’ll learn how DataStax Enterprise 6 maintains hybrid cloud flexibility with all the benefits of a distributed cloud database, delivers all the advantages of Apache Cassandra with none of the complexities, doubles performance, and provides additional capabilities around robust transactional analytics, graph, search, and more.
View recording: https://youtu.be/tuiWAt2jwBw
Explore all DataStax webinars: https://www.datastax.com/resources/webinars
Transform Your Mainframe Data for the Cloud with Precisely and Apache Kafka | Precisely
Your mainframe does hard work for your business, supporting essential computing transactions every day. However, mainframe data does not easily integrate with the cloud platforms driving data-driven, real-time, analytics-focused business processes. Integrating data from this critical technology often results in high costs and downtime. So, what can you do?
View this on-demand webinar to learn how Precisely Connect can help use the power of Apache Kafka to eliminate data silos and make cloud-based, event-driven data architectures a reality. Start your cloud transformation journey today, knowing you don’t need to leave essential transaction data behind!
During this webinar, you will learn more about:
· Where to begin your cloud transformation journey using mainframe data and Apache Kafka
· What you need to move mainframe data to the cloud while reducing costs, modernizing architectures, and using the staff you have today
· How Precisely Connect customers are using change data capture and Apache Kafka to deliver real-time insights to the cloud
An RDX Insights Series Presentation that analyzes the most significant areas of database vendor competition. Competitive evaluations include public vs private cloud, the three leading public cloud offerings, NoSQL vs relational, open source vs commercial and the traditional DBMS vendors vs all competitors.
Choosing technologies for a big data solution in the cloud | James Serra
Has your company been building data warehouses for years using SQL Server? And are you now tasked with creating or moving your data warehouse to the cloud and modernizing it to support “Big Data”? What technologies and tools should you use? That is what this presentation will help you answer. First we will cover what questions to ask concerning data (type, size, frequency), reporting, performance needs, on-prem vs cloud, staff technology skills, OSS requirements, cost, and MDM needs. Then we will show you common big data architecture solutions and help you to answer questions such as: Where do I store the data? Should I use a data lake? Do I still need a cube? What about Hadoop/NoSQL? Do I need the power of MPP? Should I build a "logical data warehouse"? What is this lambda architecture? Can I use Hadoop for my DW? Finally, we’ll show some architectures of real-world customer big data solutions. Come to this session to get started down the path to making the proper technology choices in moving to the cloud.
Modern Data Warehousing with the Microsoft Analytics Platform System | James Serra
The traditional data warehouse has served us well for many years, but new trends are causing it to break in four different ways: data growth, fast query expectations from users, non-relational/unstructured data, and cloud-born data. How can you prevent this from happening? Enter the modern data warehouse, which is able to handle and excel with these new trends. It handles all types of data (Hadoop), provides a way to easily interface with all these types of data (PolyBase), and can handle “big data” and provide fast queries. Is there one appliance that can support this modern data warehouse? Yes! It is the Analytics Platform System (APS) from Microsoft (formerly called Parallel Data Warehouse, or PDW), which is a Massively Parallel Processing (MPP) appliance that has been recently updated (v2 AU1). In this session I will dig into the details of the modern data warehouse and APS. I will give an overview of the APS hardware and software architecture, identify what makes APS different, and demonstrate the increased performance. In addition I will discuss how Hadoop, HDInsight, and PolyBase fit into this new modern data warehouse.
Cloud's Hidden Impact on IT Support Organizations | Christopher Foot
The rapid growth of cloud offerings is providing organizations with cost-effective alternatives to on-premises systems. When calculating TCO and return on their cloud investment, savvy decision makers must also factor in costs that include staff training, new organizational roles and responsibilities, policy and procedure changes, modifications to application design, build and change management processes, as well as the impact cloud applications will have on existing support toolsets.
The last slide includes a link to the YouTube Webinar of this presentation.
Securing the Data Hub--Protecting your Customer IP (Technical Workshop) | Cloudera, Inc.
Your data is your IP and its security is paramount. The last thing you want is for your data to become a target for threats. This workshop will focus on the realities of protecting your customer’s IP from external and internal threats with battle-hardened technologies and methodologies. Another key concept that will be examined is the connection of people, processes and technology. In addition, the session will take a look at authentication and authorisation, auditing and data lineage, as well as the different groups required to play a part in the modern data hub. We will also look at how to produce high-impact operation reports from Cloudera’s RecordService, a new core security layer that centrally enforces fine-grained access control policy, which helps close the feedback loop to ensure awareness of security as a living entity within your organisation.
Seamless, Real-Time Data Integration with Connect | Precisely
As many of our customers have come to learn, integrating legacy data into a modern data architecture is easier said than done! View this on-demand webinar to learn all about Precisely's seamless data integration solutions and how they have helped thousands of customers like you trust their data.
Learn about the two flavors of Precisely's Connect:
• Collect, prepare, transform and load your data to various targets using Connect ETL, with the flexibility of using clusters and running in many different environments. With our 'design once, deploy anywhere' feature, what is built on-prem today can run on a cloud platform tomorrow, with no development or mainframe expertise required.
• Capture data changes in real time with no coding, tuning, or performance impact using Connect CDC. It replicates exactly WHAT you need and HOW you need it, with over 80 built-in data transformation methods.
Data-Centric and Message-Centric System Architecture | Rick Warren
Presentation from April 2010 summarizing the principles of data-centric design and how they apply to DDS technology. Message-centric design is presented by way of contrast.
The Ultimate Guide to Cloud Migration - A Whitepaper by RapidValue | RapidValue
Digital transformation based on a cloud-first strategy is a marathon. Any transformation journey that is disruptive and requires changing the core foundation of the organization can be very challenging. It is bound to fail unless the journey is planned with specific goals in mind, the right roles and resources are allocated to it, the 'as-is' to 'to-be' state is mapped, and the implementation engine is fine-tuned.
Based on the experience of implementing numerous transformation projects for our global clients, RapidValue has formulated a BRAVE framework for cloud-first digital transformation.
Webinar | How to Understand Apache Cassandra™ Performance Through Read/Writ... | DataStax
In this webinar, you will leverage free and open source tools as well as enterprise-grade utilities developed by DataStax to get a solid grasp on the performance of a masterless distributed database like Cassandra. You’ll also get the opportunity to walk through DataStax Enterprise Insights dashboards and see exactly how to identify performance bottlenecks.
View Recording: https://youtu.be/McZg_MMzVjI
Cloud Innovation Day - Commonwealth of PA v11.3 | Eric Rice
Enhance and accelerate your path to digital innovation and transformation with IBM Cloud. Develop a roadmap to get started with cloud and incorporate best practices from other organizations just like yours.
Early Draft: A service mesh lets developers focus on business logic while the crosscutting network data-layer code is handled by the mesh. This is a boon because that code can be tricky to implement and hard to test across all of the edge cases. A service mesh takes this a few steps further than AOP, Servlet Filters, or custom language-specific frameworks because it works regardless of the underlying programming language, which is great for polyglot development shops. It thus standardizes how these layers work while allowing teams to pick the best tools or languages for the job at hand. Kubernetes and the Istio service mesh automate best practices for DevSecOps needs like failover, scale-out, scalability, health checks, circuit breakers, rate limiters, metrics, observability, avoiding cascading failure, disaster recovery, and traffic routing, supporting CI/CD and microservices architecture.
Istio’s ability to automate and maintain zero-trust networks is its most important feature. In the age of high-profile data breaches, security is paramount. Companies want to avoid major brand issues that hurt the bottom line and can shrink market capitalization in an instant. Istio provides a standard way to do mTLS and automatic certificate rotation, which helps prevent a breach and limits the blast radius if one occurs. Istio also takes the concern of mTLS out of microservice deployments and makes it easy to use, taking the burden off of application developers.
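To make the circuit-breaker idea concrete, here is a minimal sketch in plain Java of the pattern a mesh automates outside application code. This is an illustration only, not Istio's implementation; the class name and threshold are hypothetical.

```java
// Minimal circuit-breaker sketch: illustrates the failure-handling pattern
// that a service mesh automates at the network layer, so application code
// does not have to. All names and thresholds are hypothetical.
import java.util.function.Supplier;

public class CircuitBreaker {
    private final int failureThreshold;   // consecutive failures before the circuit opens
    private int consecutiveFailures = 0;
    private boolean open = false;

    public CircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    // Run the remote call if the circuit is closed; fail fast if it is open.
    public <T> T call(Supplier<T> remoteCall, T fallback) {
        if (open) {
            return fallback;              // fail fast, avoiding a doomed network call
        }
        try {
            T result = remoteCall.get();
            consecutiveFailures = 0;      // a success resets the failure count
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= failureThreshold) {
                open = true;              // trip the breaker to stop cascading failure
            }
            return fallback;
        }
    }

    public boolean isOpen() { return open; }
}
```

The value of the mesh is that this logic (plus retries, timeouts, and recovery half-open states) is configured once, per service, instead of being reimplemented in every language in a polyglot shop.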
The Reality of Managing Microservices in Your CD Pipeline | DevOps.com
As we shift from monolithic software development practices to microservices, our well-designed CD pipelines will need to change. Microservices are small functions, deployed independently and linked via APIs at run-time. While these differences seem minor, they actually have a large impact on your overall CD structure. Think hundreds of workflows, small (if any) builds, and the loss of a monolithic 'application.'
Join Tracy Ragan, CEO of DeployHub and Brendan O'Leary, Developer Evangelist at GitLab, to learn more.
It's never too early to start the conversation.
Microservices and DevOps form a powerful alliance for modern software development. They are not just technology choices; they represent a fundamental shift in the way we build and deliver software. By embracing these approaches, organizations can accelerate innovation and achieve long-term success in today's fast-changing digital landscape.
Speaker:
Owen Garrett
Sr. Director, Product Management
NGINX, Inc.
On-Demand Link: https://www.nginx.com/resources/webinars/need-service-mesh/
About the webinar:
Service mesh is one of the hottest emerging technologies. Even though it’s a nascent technology, many vendors have already released their implementations. But do you really need a service mesh?
Attend this webinar to learn about the levels of maturity on the journey to modernizing your apps using microservices, and the traffic management approaches best suited to each level. We’ll help you figure out if you really need a service mesh.
For enterprises trying to stay ahead of the game, having a robust and fast application development program can make or break their market presence. The challenge for developers, however, is to build responsive, device-agnostic applications in days, not months.
Fundamentals and practice.
Explains microservices characteristics and patterns, and how to build microservices well. Also covers the Scale Cube and the CAP theorem.
Vikash Pandey delivered a session on "Microservices – Explored" at ATAGTR2020
ATAGTR2020 was the 5th Edition of Global Testing Retreat.
Vikash is an empathetic leader who has worked with people and technology in the areas of Product Development, Consulting, Support, and Operations for 20+ years.
The video recording of the session is now available on the following link: https://youtu.be/dF5wx4w66s8
To know more about #ATAGTR2020, please visit: https://gtr.agiletestingalliance.org/
Introduction to Microservices Architecture - SECCOMP 2020 | Rodrigo Antonialli
This presentation gives a high-level overview of what a Microservices Architecture is, summarizing well-known sources on its characteristics, advantages, and challenges, along with some enabling technologies.
These are my summarized notes from all the microservices sessions I attended at QCon 2015. These sessions offered tons of learning around how to scale microservices and avoid common pitfalls.
Just a JSON parser plus a small subset of JSONPath.
Small (currently 4200 lines of code)
Very fast, uses an index overlay from the ground up.
Does not do JavaBean serialization but can serialize into basic Java types and can map to Java classes and Java records.
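The "index overlay" approach mentioned above can be sketched as follows. This is an illustration of the technique, not the library's actual parser: instead of eagerly building a tree of objects, the parser records the character offsets of each value in the original buffer and materializes values only on demand. The sketch handles only a flat JSON object with string values.

```java
// Sketch of index-overlay parsing: record [start, end) offsets of each
// value in the original buffer instead of copying values out eagerly.
// Hypothetical class; handles only a flat JSON object of string values.
import java.util.HashMap;
import java.util.Map;

public class IndexOverlay {
    // Maps field name -> int[]{start, end} offsets of its string value.
    public static Map<String, int[]> index(String json) {
        Map<String, int[]> overlay = new HashMap<>();
        int i = 0;
        while ((i = json.indexOf('"', i)) != -1) {
            int keyEnd = json.indexOf('"', i + 1);          // closing quote of the key
            String key = json.substring(i + 1, keyEnd);
            int colon = json.indexOf(':', keyEnd);
            int valStart = json.indexOf('"', colon) + 1;    // opening quote of the value
            int valEnd = json.indexOf('"', valStart);       // closing quote of the value
            overlay.put(key, new int[]{valStart, valEnd});
            i = valEnd + 1;
        }
        return overlay;
    }

    // Lazily materialize a single value from the original buffer.
    public static String get(String json, Map<String, int[]> overlay, String name) {
        int[] span = overlay.get(name);
        return json.substring(span[0], span[1]);
    }
}
```

The speed win comes from skipping allocation for fields the caller never reads; only the requested substrings are ever copied.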
You can’t afford not to transform. Digital transformation requires a deep understanding of practices. Having a team called DevOps is not doing DevOps per se. Teams must adopt the culture of DevOps, Agility, Lean, MVP, etc., as it is a clear win. The book Accelerate presents studies and guides that show the business value and ROI of adopting these practices.
There are guides, books, practices, and additional information cited. Takes ideas from the book Accelerate by Nicole Forsgren, PhD, and Jez Humble (IT Revolution Press), personal experience, and Pluralsight courses on CI/CD, DevOps adoption, etc.
Covers how we built a set of high-speed reactive microservices and optimized cloud/hardware costs while meeting objectives in resilience and scalability. Talks about Akka, Kafka, QBit, and in-memory computing from a practitioner's point of view. Based on the talks delivered by Geoff Chandler, Jason Daniel, and Rick Hightower at JavaOne 2016 and SF Fintech at Scale 2017, but updated.
Reactive Java: Promises and Streams with Reakt (JavaOne Talk 2016) | Rick Hightower
see labs at https://github.com/advantageous/j1-talks-2016
Import based on PPT, so there are more notes. This is from our JavaOne Talk 2016 on Reakt, reactive Java programming with promises, circuit breakers, and streams. Reakt is a reactive Java lib that provides promises, streams, and a reactor to handle asynchronous call coordination. It was influenced by the design of promises in ES6. You want to async-call serviceA and then serviceB, take the results of serviceA and serviceB, and then call serviceC. Then, based on the results of call C, call D or E and then return the results to the original caller. Calls to A, B, C, D, and E are all async calls, and none should take longer than 10 seconds. If they do, then return a timeout to the original caller. The whole async call sequence should time out in 20 seconds if it does not complete and should also check for circuit breakers and provide back pressure feedback so the system does not have cascading failures. Learn more in this session.
Reactive Java: Promises and Streams with Reakt (JavaOne talk 2016) | Rick Hightower
see labs at https://github.com/advantageous/j1-talks-2016
Import based on PDF. This is from our JavaOne Talk 2016 on Reakt, reactive Java programming with promises, circuit breakers, and streams. Reakt is a reactive Java lib that provides promises, streams, and a reactor to handle asynchronous call coordination. It was influenced by the design of promises in ES6. You want to async-call serviceA and then serviceB, take the results of serviceA and serviceB, and then call serviceC. Then, based on the results of call C, call D or E and then return the results to the original caller. Calls to A, B, C, D, and E are all async calls, and none should take longer than 10 seconds. If they do, then return a timeout to the original caller. The whole async call sequence should time out in 20 seconds if it does not complete and should also check for circuit breakers and provide back pressure feedback so the system does not have cascading failures. Learn more in this session.
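The call-coordination flow described above (A and B async, combine, then C, then branch to D or E, with an overall timeout) can be sketched with the JDK's CompletableFuture as a stand-in for Reakt's promise API. The service stubs below are hypothetical; Reakt expresses the same shape with promises and a reactor that also handles circuit breakers and back pressure.

```java
// Sketch of the async coordination described in the abstract, using
// java.util.concurrent.CompletableFuture as a stand-in for Reakt promises.
// The services are hypothetical stubs that complete immediately.
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class CallCoordination {
    static CompletableFuture<String> serviceA() { return CompletableFuture.completedFuture("a"); }
    static CompletableFuture<String> serviceB() { return CompletableFuture.completedFuture("b"); }
    static CompletableFuture<String> serviceC(String ab) { return CompletableFuture.completedFuture(ab + "c"); }
    static CompletableFuture<String> serviceD(String c) { return CompletableFuture.completedFuture(c + "d"); }
    static CompletableFuture<String> serviceE(String c) { return CompletableFuture.completedFuture(c + "e"); }

    public static String coordinate() {
        CompletableFuture<String> result =
            serviceA()
                .thenCombine(serviceB(), (a, b) -> a + b)   // A and B async, combine results
                .thenCompose(CallCoordination::serviceC)    // then call C with the combined result
                .thenCompose(c -> c.contains("a")           // branch on C's result: D or E
                        ? serviceD(c)
                        : serviceE(c));
        // The whole sequence must finish within 20 seconds, as in the abstract.
        return result.orTimeout(20, TimeUnit.SECONDS).join();
    }
}
```

Per-call 10-second timeouts would be added the same way, with `orTimeout` on each individual stage.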
High-Speed Reactive Microservices - trials and tribulations - Rick Hightower
Covers how we built a set of high-speed reactive microservices and minimized cloud/hardware costs while meeting objectives in resilience and scalability. This entry has more notes attached, as it is based on the PPT, not the PDF.
This session endeavors to explain high-speed reactive microservice architecture, a set of patterns for building services that can readily back mobile and web applications at scale. It uses a scale-up-and-out model, versus scale-out alone, to do more with less hardware. A scale-up model uses in-memory operational data, efficient queue handoff, and microbatch streaming, plus async calls, to handle more calls on a single node. High-speed microservice architecture endeavors to get back to OOP roots, where data and logic live together in a cohesive, understandable representation of the problem domain, and away from separation of data and logic, because data lives with the service logic that operates on it.
Netty Notes Part 3 - Channel Pipeline and EventLoops - Rick Hightower
Learning more about Netty helps me understand Vert.x better. Netty in Action is a great book. The threading model of Netty is very important to understanding event loops and reactive programming.
Netty Notes Part 2 - Transports and Buffers - Rick Hightower
Continues from Part 1 of the Netty Notes, which covered an overview of Netty concepts. Dives into transports and buffer usage, and why Netty matters for performance.
WebSocket MicroService vs. REST Microservice - Rick Hightower
Comparing the speed of RPC calls over WebSocket microservices versus REST-based microservices. Using wrk, QBit, and examples in Java, we show how much faster WebSocket is for doing RPC service calls.
Consul: Microservice Enabling Microservices and Reactive Programming - Rick Hightower
Consul is a service discovery system that provides a microservice-style interface to services, service topology, and service health.
With service discovery you can look up services, which are organized by the topology of your datacenters. Consul uses client agents and Raft to provide a consistent view of services. Consul also provides a consistent view of configuration, again using Raft. Consul provides a microservice interface to a replicated view of your service topology and its configuration, and it can monitor and change service topology based on the health of individual nodes.
Consul provides scalable distributed health checks. Consul only does minimal datacenter to datacenter communication so each datacenter has its own Consul cluster. Consul provides a domain model for managing topology of datacenters, server nodes, and services running on server nodes along with their configuration and current health status.
Consul is like combining the features of a DNS server, a consistent key/value store like etcd, ZooKeeper-style service discovery, and Nagios-style health monitoring, all rolled up into one consistent system. Essentially, Consul is all the bits you need for a coherent domain service model providing service discovery, health checks, replicated config, service topology, and health status. Consul also provides a nice REST interface and a Web UI to see your service topology and distributed service config.
Consul organizes your services in a Catalog called the Service Catalog and then provides a DNS and REST/HTTP/JSON interface to it.
To use Consul you start up an agent process. The Consul agent is a long-running daemon on every member of the Consul cluster, and it can run in server mode or client mode. A client agent runs on every physical server or virtual machine, i.e., on each host that runs services. Clients use gossip and RPC calls to stay in sync with Consul.
A client (a Consul agent running in client mode) forwards requests to a server (an agent running in server mode). Clients are mostly stateless and use LAN gossip to communicate changes to the server nodes.
A server (a Consul agent running in server mode) is like a client agent but with more tasks. The Consul servers use the Raft quorum mechanism to elect a leader and maintain cluster state like the Service Catalog. The leader manages a consistent view of config key/value pairs, and of service health and topology. Consul servers also handle WAN gossip to other datacenters, forward queries to the leader, and forward queries to other datacenters.
A datacenter is fairly obvious. It is anything that allows for fast communication between nodes, with as few or no hops, little or no routing, and in short: high-speed communication. This could be an Amazon EC2 availability zone, a networking environment like a subnet, or any private, low-latency, high-bandwidth network.
The Java microservice lib. QBit is a reactive programming lib for building microservices: JSON, HTTP, WebSocket, and REST. QBit uses reactive programming to build elastic, cloud-friendly web services based on REST and WebSocket: SOA evolved for mobile and cloud. QBit is a Java-first programming model; it uses common Java idioms to do reactive programming.
It focuses on Java 8, and is one of the few in a crowded field of reactive programming libs/frameworks to do so. It is not a lib written in XYZ that has a few Java examples to tick a checkbox. It is written in Java and focuses on Java reactive programming using an active object architecture: OOP reactive programming with lambdas, not a pure functional play. It is a Java 8 take on reactive programming.
Services can be stateful, which fits the microservice architecture well. Services will typically own or lease their data instead of using a cache.
CPU-sharded services: each service does a portion of the workload in its own thread to maximize core utilization.
The idea here is that you have a large mass of data that you need to do calculations on. You can keep the data in memory (fault it in, or just keep the largest part of the histogram in memory, not the long tail). You shard on an argument to the service methods. (This was how I wrote a personalization engine in the recent past.)
Worker pool services are for IO, where you have to talk to an IO service that is not async (usually a database or legacy integration), or where you simply have to do a lot of IO. These services are semi-stateless: they may manage conversational state for many requests, but it is transient.
ServiceQueue wraps a Java object and forces method calls, responses, and events to go through high-speed, batching queues.
ServiceBundle uses a collection of ServiceQueues.
ServiceServer uses a ServiceBundle and exposes it to REST/JSON and WebSocket/JSON.
Events are integrated into the system. You can register for an event using the @EventChannel annotation, or you can implement the event channel interface. The event bus can be replicated, and event buses can be clustered (optional library). There is not one event bus; you can create as many as you like. Currently the event bus works over WebSocket/JSON, so you could receive events from non-Java applications.
Find out more at: https://github.com/advantageous/qbit
Groovy JSON support and the Boon JSON parser are up to 3x to 5x faster than Jackson at parsing JSON from String and char[], and 2x to 4x faster at parsing byte[].
Groovy JSON support and Boon JSON support are also faster than Jackson at encoding JSON strings. Boon is faster than Jackson at serializing/deserializing Java instances to/from JSON. The core of the Boon JSON parser has been forked into Groovy 2.3 (now in beta); in the process, Boon JSON support was improved and further enhanced. Groovy and Boon JSON parser speeds are equivalent. Groovy now has the fastest JSON parser on the JVM.
MongoDB quickstart for Java, PHP, and Python developers - Rick Hightower
Quick introduction to MongoDB.
Covers major features, CRUD, DB operations, comparison to SQL, basic console, etc.
Covers architecture of Replica Sets, Autosharding, MapReduce, etc.
Examples in JavaScript, Java, PHP and Python.
GridMate - End to end testing is a critical piece to ensure quality and avoid... - ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... - Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf - 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Communications Mining Series - Zero to Hero - Session 1 - DianaGray10
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... - James Anderson
A tale of scale & speed: How the US Navy is enabling software delivery from l... - sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATOs (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Elevating Tactical DDD Patterns Through Object Calisthenics - Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities, spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
1. CTO Forum
Service Mesh
Draft 2
Microservice Journey
Service Mesh
Architecture Service Mesh
Service Mesh Concerns
Service Mesh Security
Service Mesh Evolution
2.
Author of best-selling agile development book
Early adopter of Microservices, TDD, DevOps, Agile, Container Orchestration, 12-factor deployments, KPIs/metrics, health checks, tracing, etc.
Successfully ran development organizations
Developed open source software used by millions
• Java Champion 2018
Early adopter and advocate of microservices
• Worked on Vert.x, QBit, Reakt, Groovy, Boon, etc.
• Speaker on microservices at JavaOne
• Designed/implemented microservices-based
systems that scale to 100M users
Wrote App Gateway for streaming music service
Worked with Service Meshes as early as 2015
Worked with Container Orchestration as early as 2016
Senior Director at fortune 100, managing group using
Kubernetes and implementing stream processing
RICK HIGHTOWER
Sergey Sundukovskiy, Ph.D. has over 20 years of
experience serving in capacities of Chief Technology
Officer, Chief Information Officer and Chief Product
Officer. Sergey specializes in implementation of
subscription based high volume SaaS platforms, with
strong emphasis on early stage product development
and market deployment. Specific areas of expertise
include A/B Testing, Big Data, Video Management,
eCommerce, RTB platforms and Cloud Computing.
Sergey often mentors first-time founders and advises
early stage Startups with emphasis on Product
Development, Product Market Testing, Public Relations,
Product Marketing, Team Building, Customer Success
and Organizational Management.
4.
Microservices
Without Service Mesh
Difficulty Is Not In Breaking Down the
Monolith
Easy Problems
Service Granularity
Service Boundaries
Service Communication
Service Contract
Service Roles and Responsibilities
5. Distributed System Problems
❖ Unreliable Networks - Nothing Works As Expected
❖ Lack of High Availability - Everything Eventually Fails
❖ Communication Latency - Everything Slows Down
❖ Limited Bandwidth - It Is Never Enough
❖ Zero Trust Environment - It Is Never Safe
❖ Changing Service Topology - Everybody Gets Lost
6. Microservice Components - Service Config
The interesting part is that each of these microservices can have its own
configuration.
Such configurations include details like:
❖ Application configuration.
❖ Database configuration.
❖ Communication Channel Configuration - queues and other
infrastructure.
❖ URLs of other microservices to talk to.
Ex. Git, Vault, File System
7. Microservice Components - Service Discovery
Service discovery involves 3 parties: service provider, service consumer and service
registry.
❖ the service provider registers itself with the service registry when it enters the
system and deregisters itself when it leaves
❖ the service consumer gets the location of a provider from the registry, and then talks to
the provider
❖ the service registry maintains the latest location of each provider
Ex. ZooKeeper, Consul, Etcd
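The three parties can be illustrated with a toy in-memory registry (a hypothetical sketch; real registries like Consul, ZooKeeper, and etcd replicate this state across nodes and add health checking):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Toy service registry illustrating provider, consumer, and registry roles.
class ServiceRegistry {
    private final Map<String, Set<String>> locations = new ConcurrentHashMap<>();

    // Provider registers its address when it enters the system...
    void register(String service, String address) {
        locations.computeIfAbsent(service, k -> ConcurrentHashMap.newKeySet()).add(address);
    }

    // ...and deregisters when it leaves.
    void deregister(String service, String address) {
        locations.getOrDefault(service, Collections.emptySet()).remove(address);
    }

    // Consumer looks up the current providers, then talks to one directly.
    List<String> lookup(String service) {
        return new ArrayList<>(locations.getOrDefault(service, Collections.emptySet()));
    }

    public static void main(String[] args) {
        ServiceRegistry registry = new ServiceRegistry();
        registry.register("orders", "10.0.0.5:8080");
        registry.register("orders", "10.0.0.6:8080");
        registry.deregister("orders", "10.0.0.5:8080");
        System.out.println(registry.lookup("orders")); // only the live provider remains
    }
}
```

The real systems solve what this sketch ignores: what happens when a provider crashes without deregistering (health checks, TTLs) and how the registry itself stays available (Raft, ZAB).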
8. Microservice Components - Service Routing
Service Routing's primary responsibilities are API routing, composition, and edge functions:
❖ authentication – verifying the identity of the client making the request
❖ authorization – verifying that the client is authorized to perform that particular operation
❖ rate limiting – limiting how many requests per second are allowed from a specific client
and/or from all clients
❖ caching – cache responses to reduce the number of requests made to the services
❖ metrics collection – collect metrics on API usage for billing analytics purposes
Ex. Zuul, NGINX, Spring Cloud Gateway
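The edge functions above can be sketched as a tiny gateway pipeline (a hypothetical sketch; real gateways like Zuul and Spring Cloud Gateway model these as filter chains, and the token set and limit here are made up):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;

// Toy edge gateway: each stage can short-circuit the request before routing.
class Gateway {
    private final Set<String> validTokens = Set.of("secret-token"); // stand-in auth store
    private final Map<String, Integer> counts = new HashMap<>();    // per-client request counts
    private final int limitPerClient = 2;

    // Returns an HTTP-style status line plus body.
    String handle(String clientId, String token, String path, Function<String, String> backend) {
        if (!validTokens.contains(token)) return "401 unauthenticated";     // authentication
        if (counts.merge(clientId, 1, Integer::sum) > limitPerClient)
            return "429 rate limit exceeded";                               // rate limiting
        return "200 " + backend.apply(path);                                // route to service
    }

    public static void main(String[] args) {
        Gateway gw = new Gateway();
        Function<String, String> orders = path -> "orders service handled " + path;
        System.out.println(gw.handle("c1", "secret-token", "/orders/42", orders)); // 200 ...
        System.out.println(gw.handle("c1", "bad-token", "/orders/42", orders));    // 401 ...
        gw.handle("c1", "secret-token", "/orders/43", orders);
        System.out.println(gw.handle("c1", "secret-token", "/orders/44", orders)); // 429 ...
    }
}
```

A production gateway would also add caching and metrics stages; the point is the ordering: reject cheaply at the edge before work reaches the services.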
9. Microservice Observability
Observability is not monitoring
❖ Health Checking
❖ Metrics
❖ Audit Logging
❖ Distributed Tracing
❖ Exception Logging
❖ Service Logging
Ex. Prometheus, Grafana, Jaeger
11. Microservice Patterns - Circuit Breaker
The circuit breaker concept is straightforward. It wraps a function with a
monitor that tracks failures. The circuit breaker has 3 distinct states, Closed,
Open, and Half-Open:
❖ Closed – When everything is normal, the circuit breaker remains in the
closed state and all calls pass through to the services.
❖ Open – The circuit breaker returns an error for calls without executing the
function.
❖ Half-Open – After a timeout period, the circuit switches to a half-open
state to test if the underlying problem still exists.
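The three states can be sketched in a few lines of Java (a minimal, single-class sketch; production code would use a library such as Resilience4j or Hystrix):

```java
import java.util.function.Supplier;

class CircuitBreaker {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private State state = State.CLOSED;
    private int failures = 0;
    private final int failureThreshold;
    private final long resetTimeoutMs;
    private long openedAt = 0;

    CircuitBreaker(int failureThreshold, long resetTimeoutMs) {
        this.failureThreshold = failureThreshold;
        this.resetTimeoutMs = resetTimeoutMs;
    }

    synchronized <T> T call(Supplier<T> fn) {
        if (state == State.OPEN) {
            if (System.currentTimeMillis() - openedAt >= resetTimeoutMs) {
                state = State.HALF_OPEN;          // timeout elapsed: probe the service
            } else {
                throw new IllegalStateException("circuit open"); // fail fast, no call made
            }
        }
        try {
            T result = fn.get();
            failures = 0;                         // success closes the circuit again
            state = State.CLOSED;
            return result;
        } catch (RuntimeException e) {
            failures++;
            if (state == State.HALF_OPEN || failures >= failureThreshold) {
                state = State.OPEN;               // trip: reject calls until the timeout
                openedAt = System.currentTimeMillis();
            }
            throw e;
        }
    }

    public static void main(String[] args) {
        CircuitBreaker cb = new CircuitBreaker(2, 60_000);
        for (int i = 0; i < 2; i++) {
            try { cb.call(() -> { throw new RuntimeException("boom"); }); }
            catch (RuntimeException ignored) { }
        }
        // The third call is rejected without invoking the supplier: the circuit is open.
        try {
            cb.call(() -> "ok");
        } catch (IllegalStateException e) {
            System.out.println("circuit open: fail fast");
        }
    }
}
```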
12. Microservice Patterns - Rate Limiter
The Rate Limiting pattern ensures that a service accepts only a defined
maximum number of requests during a window. This ensures that
underlying resources are used within their limits and are not exhausted.
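A minimal fixed-window sketch of the idea in Java (hypothetical; production systems often use a token bucket instead, e.g. Guava's RateLimiter or Bucket4j):

```java
class RateLimiter {
    private final int maxRequests;   // requests allowed per window
    private final long windowMs;     // window length in milliseconds
    private long windowStart = System.currentTimeMillis();
    private int count = 0;

    RateLimiter(int maxRequests, long windowMs) {
        this.maxRequests = maxRequests;
        this.windowMs = windowMs;
    }

    synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        if (now - windowStart >= windowMs) { // new window: reset the counter
            windowStart = now;
            count = 0;
        }
        if (count < maxRequests) {
            count++;
            return true;
        }
        return false;                        // over the limit: reject this request
    }

    public static void main(String[] args) {
        RateLimiter limiter = new RateLimiter(3, 60_000); // 3 requests per minute
        for (int i = 1; i <= 5; i++) {
            System.out.println("request " + i + " allowed: " + limiter.tryAcquire());
        }
        // requests 1-3 are allowed; 4 and 5 are rejected within the same window
    }
}
```

A token bucket smooths out the burst at each window boundary that this fixed-window version allows.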
13. Microservice Patterns - Retry
The Retry pattern enables an application to handle transient failures when
calling external services. It retries operations on external resources a set
number of times. If the operation doesn't succeed after all the retry
attempts, it should fail, and the response should be handled gracefully by the
application.
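A sketch of the pattern in plain Java (a hypothetical helper; libraries like Resilience4j provide this with exponential backoff and jitter):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

class Retry {
    // Retries fn up to maxAttempts times with linear backoff between attempts.
    static <T> T withRetry(Supplier<T> fn, int maxAttempts, long backoffMs) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return fn.get();
            } catch (RuntimeException e) {
                last = e;                                  // remember the failure
                try { Thread.sleep(backoffMs * attempt); } // back off before retrying
                catch (InterruptedException ie) { Thread.currentThread().interrupt(); }
            }
        }
        throw last; // all attempts failed: the caller handles this gracefully
    }

    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();
        // Simulate a transient failure that clears on the third attempt.
        String result = withRetry(() -> {
            if (calls.incrementAndGet() < 3) throw new RuntimeException("transient");
            return "ok";
        }, 5, 10);
        System.out.println(result + " after " + calls.get() + " attempts"); // ok after 3 attempts
    }
}
```

Retry pairs naturally with the circuit breaker: retry covers transient blips, while the breaker stops retry storms against a service that is truly down.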
14. Microservice Patterns - Bulkhead
Bulkhead ensures that a failure in one part of the system doesn't bring the
whole system down. It controls the number of concurrent calls a
component can take; this way, the number of resources waiting for a
response from that component is limited. There are two types of bulkhead
implementation:
❖ The semaphore isolation approach limits the number of concurrent
requests to the service. It rejects requests immediately once the limit is
hit.
❖ The thread pool isolation approach uses a thread pool to separate the
service from the caller and contain it to a subset of system resources.
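The semaphore isolation approach can be sketched with java.util.concurrent.Semaphore (a hypothetical minimal sketch; Hystrix and Resilience4j offer both isolation styles):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

class Bulkhead {
    private final Semaphore permits;

    Bulkhead(int maxConcurrent) { permits = new Semaphore(maxConcurrent); }

    // Semaphore isolation: reject immediately once the concurrency limit is hit.
    <T> T call(Supplier<T> fn) {
        if (!permits.tryAcquire()) {
            throw new IllegalStateException("bulkhead full");
        }
        try { return fn.get(); }
        finally { permits.release(); }
    }

    public static void main(String[] args) throws InterruptedException {
        Bulkhead bulkhead = new Bulkhead(1);         // one concurrent call allowed
        CountDownLatch started = new CountDownLatch(1);
        CountDownLatch release = new CountDownLatch(1);
        Thread slow = new Thread(() -> bulkhead.call(() -> {
            started.countDown();                     // signal: the permit is now held
            try { release.await(); } catch (InterruptedException ignored) { }
            return null;
        }));
        slow.start();
        started.await();                             // wait until the slow call holds the permit
        try {
            bulkhead.call(() -> "fast");
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage()); // prints: rejected: bulkhead full
        }
        release.countDown();
        slow.join();
    }
}
```

The thread pool variant swaps the semaphore for a bounded executor, which additionally isolates the caller's thread from a hung dependency.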
16. How we got here
❖ Web pages that were brochures
❖ eCommerce
❖ Legacy integration
❖ Rush to ‘webify’ businesses
❖ SOA: wrap legacy systems as services to use from the web
❖ Virtualization, Virtualization 2.0, Cloud, Containers, and now
Container orchestration
❖ We want faster feedback and leaner, more agile delivery
17. Continuous delivery
❖ The ability to deliver
❖ Build quality in
❖ Work in small batches
❖ Automate repetitive tasks including
❖ testing & deployments
❖ Pursue continuous improvement
❖ Ownership
❖ Comprehensive configuration management
❖ Continuous integration
❖ Continuous testing
You can’t skip steps. There is investment up front. Today’s speed-up can be tomorrow’s having painted yourself into a corner.
18. Why DevOps, CI/CD and Microservices?
❖ High performers are 2x as likely to exceed organizational performance goals as
low performers:
❖ 2x profitability
❖ 2x productivity
❖ 2x market share
❖ 2x number of customers
❖ High performers are twice as likely to exceed non-commercial performance goals as
low performers
❖ 2x better quantity of products and services
❖ 2x operating efficiency
❖ 2x customer satisfaction
❖ 2x quality of products/services
❖ 2x achieving organizational/mission goals
❖ 50% increase in market capitalization compared to low performers!
20. Convergence
DevOps
Automation is better
CI/CD
Fast Feedback is better
Lean/Agile
Simpler is better
Microservices
Small is better
12 Factor Deploys
KPIs and Health
Service Mesh
• Observability
• Logging
• Tracing
• KPIs
• Dashboards
• Canary Deployments
• Fractional
• Version Labels
• Supports small CI/CD
with Microservice
• Traffic Management
21. Microservices: INCEPTION and Natural Evolution
❖ Now you can run a Java Virtual Machine in a Docker
image
❖ Which is just a process pretending to be an OS
❖ Which is running in an OS that is running in the cloud
❖ Which is running inside of a virtual machine
❖ Which is running in Linux server that you don’t own
that you share with people whom you don’t know
❖ Servers are not giant refrigerator boxes that you order
from Sun and wait three months for (circa 2000)… the goal
was to run a lot of things on the same server
❖ Did you develop code in the 90s with punch cards?
❖ Microservices recognize this trend
22.
‣ Philosophy behind microservices mirrors Unix
‣ Unix’s inventor, Ken Thompson, defined its philosophy:
• One tool, one job.
‣ Emphasizes building short, simple, clear, modular, and extendable code
• Easily maintained and repurposed by other developers
MICROSERVICES: UNIX PHILOSOPHY
23. Microservices
❖ Focus is building small, reusable, scalable services
❖ Adopt the Unix single-purpose utility approach to service development
❖ Small and malleable so they can be released more often
❖ Easier to write
❖ Easier to change
❖ Go hand in hand with continuous integration and continuous delivery
❖ Heavily REST-based and message oriented
❖ Focus on business capability
❖ Refocus on object oriented programming roots
❖ Organize code around business domains.
❖ Data and business rules colocated in the same process or set of processes.
What is microservice architecture?
24. Microservices: Key ingredients
❖ Independently deployable, small, domain-driven services
❖ Own their data (no shared databases)
❖ Communication through a well-defined wire protocol
usually JSON over HTTP (curl-able interfaces)
❖ Well defined interfaces and minimal functionality
❖ Avoiding cascading failures and synchronous calls -
reactive design for failure
❖ Shortly after MicroServices: Containers came out
26. MicroServices: Achieving Resilience
❖ Avoid synchronous calls to avoid cascading failures
❖ Circuit breaker frameworks, retries, resiliency, network layer libs
❖ Instead embrace:
❖ Streams, queues,
❖ Actor systems
❖ Event loops
❖ Other async calls.
❖ Spend more time with distributed logging/log aggregation w/MDC
❖ Distributed tracing: A calls B who calls D or E or F who calls X or Y or Z
27. MicroServices: Monitoring and KPIs
❖ Customer/User experience KPIs
❖ Debugging (requests per second, # threads, #
connections, failed auth, expired tokens, etc.)
❖ Circuit breaker (monitor health, restarts, act/react based
on KPIs)
❖ Cloud orchestration (monitor load, spin up instances)
❖ Health checks and observable KPIs
28. MicroServices: Continuous Deployment
❖ Microservices are continuously deployable services
❖ Focus of microservices is on breaking applications into small (micro),
reusable services that might be useful to other services or other
applications.
❖ ‘micro’ part of microservices comes to denote small
❖ Services can be deployed independently.
❖ Can be tweaked and then redeployed independently.
❖ Microservice vs monolith when deploying
30. –Rick Hightower
“Service Mesh like Istio does the things that the
very best InfoSec, Dev teams, SREs and DevOps
teams would do: mTLS zero trust networking,
automate observability and dashboard creation,
automate tracing, and automate logging
aggregation while enabling continuous
deployment via traffic management and canary
deployments. It takes what we’ve learned in the
DevSecOps community and makes it the default,
out of the box.”
31. –Rick Hightower (Why you might need a Service Mesh like Istio?)
“To maximize shareholder value, companies are
embracing CI/CD and Microservices architecture.
This allows product teams to deliver faster, get
feedback more often and evolve quickly.
This Digital Transformation strategy allows
companies to address nimble upstarts as well as
provide our customers with an intelligent, rich
experience.”
32. CTO Forum
What is Service
Mesh?
Observability and Telemetry
Service discovery
Traffic management
Security
Supports CI/CD and Microservices
33.
34. What is a Service Mesh?
❖ Service mesh is a network of microservices and
interactions between microservices
❖ Service mesh tools scale to help manage size and
complexity of large Service Meshes
❖ Modern service mesh aids understanding and
managing
❖ Helps organizations migrate from monolithic
applications to microservice architecture
35. –Rick Hightower (Why you might need a Service Mesh like Istio?)
“Using a Service Mesh facilitates CI/CD and
Microservices architecture. Service Mesh
automates best practices for DevSecOps needs like
failover, scale-out, scalability, zero trust networking,
health checks, circuit breakers, rate limiters, KPI
collection, dashboard creation, observability,
avoiding cascading failure, disaster recovery, and
traffic routing”
36. Decorate Network Data Layer
❖ Service Mesh decorates the network layer to implement
cross-cutting concerns, which are usually NFRs
❖ Service Mesh is to MicroServices as AOP is to DDD
and OOP
❖ Service Mesh is to MicroServices as Servlet Filters
are to Servlets.
37. Service Mesh Features
❖ Networking: Discovery, load balancing, failure recovery (circuit
breaking), rate limiting, etc.
❖ Observability: time series KPIs, log aggregation, alerting and
monitoring, USE and RED Dashboards
❖ CI/CD and frequent releases: canary rollouts, green/blue deploys,
new version rollouts, traffic management
❖ And to gradually release a Microservice and select which
downstream and upstream Microservices can talk to it
❖ Security: access control, end-to-end authentication (RBAC), service
identity, zero trust networking (mTLS), etc.
38. Simplifies hard programming
❖ Service Mesh performs many low-level L3/L4 networking tasks
❖ Previously left to application developers to implement, or to
many libs across many platforms/languages
❖ Low level network code is hard to write and maintain
❖ filled with edge cases.
❖ The Service Mesh is completely abstracted from the microservice's
business logic
❖ This consistency provides additional operational
predictability for polyglot programming environments
41. Service Meshes at a Glance
❖ Istio
❖ Backed by IBM, Red Hat, Google, and Lyft
❖ Uses Envoy
❖ Supports more than Kubernetes
❖ Linkerd
❖ CNCF
❖ V1: Finagle, Scala, Twitter stack
❖ V2: merged with Conduit; now Rust- and Go-based
❖ Consul
❖ Hashicorp
❖ Uses Envoy
❖ Supports more than Kubernetes
❖ Nice comparison of Consul, Linkerd and Istio
42. Observability and Telemetry
❖ Automates many aspects of observability
❖ Log aggregation, telemetry of services, collecting KPIs
and generating dashboards
❖ Automates creating USE and RED Dashboards
❖ See service performance trends and dashboards
❖ how long did a service request take?
❖ how often is the service being called?
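The two questions above (how long, how often) are exactly what RED (Rate, Errors, Duration) dashboards answer. A toy sketch of the kind of per-service collector a mesh sidecar populates automatically; the class and service names are made up for illustration.

```python
from collections import defaultdict

class RedMetrics:
    """Minimal RED (Rate, Errors, Duration) collector per service --
    a toy version of what a mesh sidecar exports automatically."""
    def __init__(self):
        self.requests = defaultdict(int)     # how often is it called?
        self.errors = defaultdict(int)
        self.durations = defaultdict(list)   # how long did it take?

    def record(self, service, duration_s, ok=True):
        self.requests[service] += 1
        if not ok:
            self.errors[service] += 1
        self.durations[service].append(duration_s)

    def summary(self, service):
        n = self.requests[service]
        return {
            "requests": n,
            "error_rate": self.errors[service] / n if n else 0.0,
            "avg_duration_s": sum(self.durations[service]) / n if n else 0.0,
        }

red = RedMetrics()
red.record("checkout", 0.120, ok=True)
red.record("checkout", 0.300, ok=False)
```

The point of the mesh is that no service has to write this code: the sidecar records it for every request on the wire.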
43. Service Discovery
❖ Service inventory: understand how services communicate
(tracing the call graph, calls per span, etc.)
❖ Allows services to find other dependent services
❖ Helps keep track of services running in the infrastructure
❖ Manage and visualize services and their dependencies
❖ Essential for microservices architecture
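The registry role described above can be sketched in a few lines: services register their instances and look each other up by name, which is what a mesh control plane (or a backing registry like Consul or Kubernetes) does for real. Names and addresses here are illustrative.

```python
class ServiceRegistry:
    """Toy service registry: services register instances and look
    each other up by name -- the discovery role the mesh control
    plane plays for real (with health checks, TTLs, watches, etc.)."""
    def __init__(self):
        self._services = {}

    def register(self, name, address):
        self._services.setdefault(name, set()).add(address)

    def deregister(self, name, address):
        self._services.get(name, set()).discard(address)

    def lookup(self, name):
        # A real registry also filters out unhealthy instances.
        return sorted(self._services.get(name, set()))

registry = ServiceRegistry()
registry.register("orders", "10.0.0.5:8080")
registry.register("orders", "10.0.0.6:8080")
registry.register("billing", "10.0.1.2:9090")
```

With the mesh, this lookup happens transparently in the sidecar; application code just calls the service by name.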
44. Traffic Management
❖ Segment features through feature flags; limit consumption
of new services to clients that can handle changes to APIs
or wire protocols, via gradual rollouts
❖ Gradual and continuous release instead of a big bang
rollout
❖ Fine grain deployments
❖ Essential for microservices architecture and CI/CD
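A gradual rollout is usually implemented as weighted routing: a fixed percentage of traffic goes to the new version, and the weight is raised over time. A deterministic sketch (hash the request or user id into a 0-99 bucket); the version labels are illustrative, and real meshes apply the weight in the sidecar, not in application code.

```python
import hashlib

def route_version(request_id, canary_percent):
    """Deterministically send canary_percent of traffic to the new
    version: hash the id into a 0-99 bucket and compare against the
    rollout weight -- a sketch of mesh-style weighted routing."""
    bucket = int(hashlib.sha256(request_id.encode()).hexdigest(), 16) % 100
    return "v2-canary" if bucket < canary_percent else "v1-stable"

# Gradual, continuous release: start small and raise the weight.
decisions = [route_version(f"req-{i}", 10) for i in range(1000)]
canary_share = decisions.count("v2-canary") / len(decisions)
```

Hashing on a stable id (e.g. user id) also gives sticky routing: the same caller always sees the same version during the rollout.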
45. Traffic Mgmt Interoperability
❖ A big Kubernetes cloud-interoperability issue has been ingress and egress
❖ Service Mesh makes great strides to solve interoperability
❖ Standardize ingress/egress and many other networking concerns so routing
rules, RBAC and TLS termination don’t vary with each vendor or cloud provider
❖ Interoperability suffers w/ Kubernetes federation and hybrid clouds
❖ Service Mesh, plus GitOps tools (Flux, Argo CD, Anthos Config Manager),
❖ keep copies of Kubernetes objects in sync between clusters
❖ Using Service Meshes to span clouds and clusters
❖ Now possible to create service meshes that span clusters and clouds
❖ standard service registry plugins (consul/kubernetes), Istio gateways, ad
hoc services and networks defined with CIDR addresses.
46. –Rick Hightower (Why you might need a Service Mesh like Istio?)
“Service Mesh aids in avoiding data breaches as
well as limiting their blast radius. Data breaches
can have dire business value consequences.”
47. Security
❖ Identity, Security, RBAC, 0 trust networking
❖ Secure service-to-service communications via 0 trust networking
❖ Key is service identity
❖ Service identity enables automatic mTLS (mutual TLS) for service-to-service communications
❖ Microservices enhanced to automatically communicate securely via mTLS without code
change
❖ Plug in an existing CA certificate
❖ Enforce service-level authentication using TLS SNI, JSON Web Tokens (JWT),
headers, or network origination
❖ Enables fine-grained traffic governance
❖ Allows configuring role-based access control (RBAC) per service, limiting which
other services have access to key services
❖ Can be configured to block access based on headers, specific URLs, or sub-URIs and paths
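The RBAC behavior above amounts to a default-deny policy table keyed on caller identity, target service, and path. A toy sketch of what the sidecar enforces on every call; the identities, services, and paths are made up, and real policies (e.g. Istio `AuthorizationPolicy`) are declarative config, not application code.

```python
# Toy service-level RBAC table: which caller identity may reach
# which target service, and under which path prefixes.
POLICY = {
    ("frontend", "orders"): ["/api/"],
    ("orders", "billing"): ["/charge"],
}

def is_allowed(source_identity, target_service, path):
    """Zero trust means default deny: the call is permitted only if
    an explicit rule exists for (caller, target) and the path falls
    under an allowed prefix -- what the sidecar checks per request."""
    prefixes = POLICY.get((source_identity, target_service), [])
    return any(path.startswith(p) for p in prefixes)
```

Because the sidecar authenticates the caller via mTLS first, the identity fed into this check is cryptographically verified, not a spoofable header.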
48. –Rick Hightower (Why you might need a Service Mesh like Istio?)
“(A Service Mesh’s) ability to automate and maintain
zero trust networks is its most important feature. In the
age of high-profile data breaches, security is paramount.
…avoid major brand issues … (that can) shrink market
capitalization in an instant. (Service Mesh) helps prevent
a breach and limits the blast radius …”
49. Traffic Management Features
❖ Rate limits based on identity or headers or policies
❖ Fail-over rules (via circuit breakers)
❖ Fine-grained traffic management policies, with no changes to
application code
❖ Extend policies to connected service meshes
❖ Route rules can be based on locality of the service
❖ prefer local data center,
❖ or local proximity networks over remotes.
❖ Failover rules are location-aware
❖ Routing can take into account the health of services (active and passive)
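The fail-over rules above hinge on the circuit breaker pattern: after enough consecutive failures the circuit "opens" and calls fail fast instead of piling onto a sick backend, which is how cascading failure is avoided. A minimal sketch; thresholds and names are illustrative, and in a mesh this lives in the sidecar, configured declaratively.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after max_failures consecutive
    failures the circuit opens and calls fail fast until
    reset_after seconds pass, then one trial call is allowed."""
    def __init__(self, max_failures=3, reset_after=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()
            raise
        self.failures = 0  # success closes the circuit again
        return result
```

Failing fast is what protects upstream callers: they get an immediate error (and can fall back) rather than queueing behind timeouts.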