Is your system healthy? Are SLOs being met? What are the top performance constraints? What are the high-priority implementation concerns? Is the architecture a right fit? Are the teams leveraging the capabilities of the platform? What are the pain points with platform services? It can be challenging to find the root cause among problem symptoms in distributed systems. Just as in real life, it's important for microservices to undergo regular health checks.
In this talk, we'll provide a systems-based approach to execute an app health check along 10 different dimensions: monitoring and metrics, failure mode analysis, technical debt, emergency response, performance optimization, change management, microservices rationalization, platform as a product, balanced team, and path to production. We'll explain how to address issues uncovered during a health check and provide recommendations on how to build a sustainable Day 2 app-ops reliability engineering practice.
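The ten dimensions above can be scored like a checklist. As a hedged sketch (the dimension names come from the abstract; the 1-5 scale, the weakness threshold, and the scoring function are illustrative assumptions, not from the talk):

```python
# Hypothetical sketch of an app health check across the ten dimensions
# named in the abstract. The 1-5 scoring scale and the "score <= 2 is a
# priority" threshold are illustrative assumptions.

DIMENSIONS = [
    "monitoring and metrics", "failure mode analysis", "technical debt",
    "emergency response", "performance optimization", "change management",
    "microservices rationalization", "platform as a product",
    "balanced team", "path to production",
]

def assess(scores: dict) -> dict:
    """Summarize per-dimension scores (1 = poor .. 5 = excellent)."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"unscored dimensions: {missing}")
    # Dimensions scoring 2 or below become the remediation priorities.
    weak = sorted(d for d, s in scores.items() if s <= 2)
    avg = sum(scores.values()) / len(scores)
    return {"average": round(avg, 2), "priorities": weak}

# Example: everything healthy except technical debt.
report = assess({**{d: 4 for d in DIMENSIONS}, "technical debt": 2})
```

A real assessment would attach evidence and remediation owners to each weak dimension; the point of the sketch is only that the check is systematic, not ad hoc.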
How do we add some sanity to the process of constructing microservices, and what guidelines and design heuristics can we offer for restructuring them? In this talk we will look at life after running microservices architectures in production and learn from the mistakes committed over the past five years. We will analyze real-life systems against criteria for consolidating microservices into monoliths or moduliths, based on the technical and business heuristics illustrated in [4]. The techniques, a combination of mapping microservices to core technical attributes [2] reduced by affinity mapping and business domain context distillation [3], have emerged from working with a number of customers where the value of microservices has not been realized despite leveraging Domain-Driven Design.
1. Essay on this topic: https://hackmd.io/10j-7DfqSIu1C8GQjHa1Bw?view
2. https://content.pivotal.io/blog/should-that-be-a-microservice-keep-these-six-factors-in-mind
3. https://medium.com/nick-tune-tech-strategy-blog/core-domain-patterns-941f89446af5
4. https://twitter.com/RKela/status/1227188151887843329/photo/1
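The affinity-mapping step the abstract describes can be sketched in a few lines: group microservices whose core technical attributes (scaling profile, data ownership, change cadence) are identical, and treat each group as a consolidation candidate. The attribute names and the example services below are hypothetical illustrations, not taken from the talk or the references.

```python
# Illustrative sketch: cluster microservices by identical
# technical-attribute signatures; a group of 2+ services with the same
# signature is a candidate for consolidation into a single modulith.
# Attribute keys and service names are hypothetical examples.
from collections import defaultdict

def affinity_groups(services: dict) -> list:
    """Group service names by a frozen signature of their attributes."""
    buckets = defaultdict(list)
    for name, attrs in services.items():
        buckets[frozenset(attrs.items())].append(name)
    # Only multi-service groups are consolidation candidates.
    return sorted(group for group in buckets.values() if len(group) > 1)

candidates = affinity_groups({
    "cart":    {"scaling": "low",  "data": "orders",  "cadence": "weekly"},
    "pricing": {"scaling": "low",  "data": "orders",  "cadence": "weekly"},
    "search":  {"scaling": "high", "data": "catalog", "cadence": "daily"},
})
```

In practice the business domain context distillation pass [3] would then veto groupings that cross core-domain boundaries; the mechanical grouping is only the first filter.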
SpringOne Platform 2018 Recap in 5 Minutes (Rohit Kelapure)
The document provides a recap of announcements and news from Spring One Platform 2018. Key points include:
- Buildpacks were contributed to CNCF and the buildpack ecosystem was unified with a well-defined platform to buildpack contract.
- PKS 1.2 reached general availability and can now deploy to AWS alongside other new features.
- The Pivotal Function Service was introduced, building on Knative to allow running functions anywhere and triggering via HTTP/message brokers.
- Several partners received awards for their work in areas like systems integration, independent software, and managed services on the Pivotal platform.
- New developments were announced for projects like Spring Framework, PAS, Concourse, and Kubernetes integration.
API First or Events First: Is it a Binary Choice? (Rohit Kelapure)
This document discusses the differences between APIs and events as architectural approaches and when each is best suited. It provides examples of when to use APIs versus events, including for business purposes like monetization, modernizing applications, and enabling new paradigms. The document also covers challenges of both approaches and shows how architectures can evolve from monolithic to microservices using a combination of APIs and events with loose coupling.
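The loose coupling the document attributes to events can be shown in a few lines. This is a minimal sketch under assumed names (the in-process "bus", the order/billing example, and all identifiers are illustrative, not from the talk): an API caller depends directly on its callee, while an event publisher knows nothing about its consumers.

```python
# Minimal contrast of the two styles. The list-based event bus and all
# names here are hypothetical stand-ins for real infrastructure
# (an HTTP client on the API side, a message broker on the event side).

subscribers = []  # event-bus stand-in

def subscribe(handler):
    subscribers.append(handler)

def publish(event):
    # Loose coupling: the publisher iterates handlers it never named.
    for handler in subscribers:
        handler(event)

def place_order_api(order, billing_service):
    # API style: the caller depends directly on the billing service
    # and blocks on its answer.
    return billing_service(order)

audit_log = []
subscribe(lambda e: audit_log.append(e))    # consumers attach independently
publish({"type": "OrderPlaced", "id": 42})  # event style: fire and forget

invoice = place_order_api({"id": 42}, lambda o: f"invoice-{o['id']}")
```

Note the asymmetry: adding a second event consumer changes nothing in the publisher, whereas adding a second synchronous dependency changes the caller.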
This document discusses strategies for transitioning from monolithic applications to microservices. It covers topics like domain-driven design, event storming, identifying core domains, and technical and business heuristics for determining when to use microservices. It also addresses challenges that can arise with too many microservices and discusses alternative approaches like modular monoliths. The implementation section covers sociotechnical architecture approaches and building cloud-native applications.
Tools and Recipes to Replatform Monolithic Apps to Modern Cloud Environments (VMware Tanzu)
Digital transformation includes replatforming applications to streamline release cycles, improve availability, and manage apps and services at scale. But many enterprises are afraid to take the first step because they don’t know where to start. In this webinar, Rohit will provide a step-by-step guide that covers:
● How to find high-value modernization projects within your application portfolio
● Easy tools and techniques to minimally change applications in preparation for replatforming
● How to choose the platform with the right level of abstraction for your app
● Examples that show how Java EE WebSphere applications can be deployed to Pivotal Cloud Foundry
Speaker: Rohit Kelapure, Pivotal Consulting Practice Lead
Pivotal Platform: A First Look at the October Release (VMware Tanzu)
Join Dan Baskette and Jared Ruckle for a first look at the latest Pivotal Platform capabilities with demos and expert Q&A. Attend this session and learn how you can put these new updates to work for your enterprise.
Build apps atop Kubernetes with:
● Azure Spring Cloud, a complete runtime for Spring apps atop Azure Kubernetes Service
● Pivotal Build Service, an automated workflow for code-to-container builds
● Container Services Manager for Pivotal Platform, a bridge between Pivotal Application Service and PKS
Build apps atop a self-managed platform with:
● Pivotal Application Service 2.7, and its additional app deployment capabilities
● Pivotal Service Instance Manager, a new tool to help you manage backing services at scale
Get your apps to production with CI/CD tools like:
● Pivotal Continuous Delivery with Spinnaker
● Pivotal Concourse 5.5
We’ll also review Pivotal Spring Cloud Gateway and Pivotal Cloud Cache 1.9!
Presenters: Dan Baskette, Director, Technical Marketing, and Jared Ruckle, Director, Product Marketing
Your opportunity to see how you can address your application development and delivery challenges with Pivotal Cloud Foundry.
Speaker: Vijay Rajagopal, Advisory Platform Architect, Pivotal
SpringOne Platform 2017
Jason Michener, Comcast; Vipul Savjani, Accenture
Comcast has been on a Cloud-Native Transformation Journey with Pivotal Cloud Foundry for the past 3 years. Recently, Comcast Customer Experience and Engineering Teams were given a seemingly impossible task: Replace a 3rd party AI/ML Customer Service tool by building our own in 8 weeks. Come learn how we leveraged our Pivotal Cloud Foundry service platforms in a hybrid public/private cloud with our best customer experience professionals to fundamentally change how we are engaging with our customers.
This document discusses developer ready infrastructure and the evolution of cloud platforms. It argues that platforms need to support developers through automation and by handling operational concerns so developers can focus on building applications. It outlines different platform layers from infrastructure as a service (IaaS) to fully managed application platforms and serverless functions. Pivotal's approach leverages Kubernetes, BOSH, and Cloud Foundry to provide a fully automated and production-ready container platform that can run on any cloud and handle all operational tasks.
DevOps Automation for Container-based App Delivery (WaveMaker, Inc.)
Modernization of IT and Container revolution
DevOps automation using containers
Lift and shift Apps into containers automagically.
Unified App delivery to Hybrid & Multi-clouds
Case study and Demo
Leveraging Standard Buildpacks to Migrate Not-So-Standard Apps (VMware Tanzu)
SpringOne 2021
Session Title: Leveraging Standard Buildpacks to Migrate Not-So-Standard Apps
Speakers: Brandon Blincoe, App Modernization Strategist at VMware; Matthew Campbell, Solutions Architect at VMware
DevOps KPIs as a Service: Daimler’s Solution (VMware Tanzu)
1. Daimler developed a DevOps KPI-as-a-Service solution to provide transparency into key performance indicators for its Cloud Foundry-based platforms.
2. The solution collects and stores platform data daily and generates reports in Excel format on demand to analyze metrics like usage, capacity, and adoption over time.
3. Initial goals were to leverage existing platform data with little effort using a "learning by doing" approach; the team now aims to improve integration, documentation, automation, and marketing of the KPI tool within Daimler.
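The collect-daily, report-on-demand pattern described above can be sketched simply. This is a hedged illustration under assumed names: the metric fields and CSV output are stand-ins (the Daimler tool reports in Excel format and tracks its own KPI set).

```python
# Illustrative sketch of the described pattern: append one platform
# snapshot per day, then render an on-demand report over time.
# Field names (apps, orgs) and CSV output are assumptions for the demo.
import csv
import io
from datetime import date

def record_snapshot(store: list, day: date, apps: int, orgs: int):
    """Store one day's platform usage snapshot."""
    store.append({"day": day.isoformat(), "apps": apps, "orgs": orgs})

def report(store: list) -> str:
    """Render stored snapshots as CSV, oldest first."""
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["day", "apps", "orgs"])
    writer.writeheader()
    writer.writerows(sorted(store, key=lambda r: r["day"]))
    return out.getvalue()

store = []
record_snapshot(store, date(2019, 1, 2), apps=120, orgs=8)
record_snapshot(store, date(2019, 1, 1), apps=118, orgs=8)
```

The "learning by doing" point in the summary is that even this minimal capture, run daily, yields adoption trends with little effort; richer integration comes later.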
Deep Dive into Pivotal Cloud Foundry 2.0 (VMware Tanzu)
SpringOne Platform 2017
Jeffrey Hammond, Forrester; Richard Seroter, Pivotal
Pivotal Cloud Foundry (PCF) is the enterprise platform of choice for cloud-native apps. With the release of PCF 2.0, the platform undergoes its biggest change ever. In this session, learn all about the latest release of PCF and the major new capabilities that power your transformation. This is the place to learn about Pivotal's vision for the future of the platform.
Cloud-Native Patterns and the Benefits of MySQL as a Platform Managed Service (VMware Tanzu)
You can’t have cloud-native applications without a modern approach to databases and backing services. Data professionals are looking for ways to transform how databases are provisioned and managed.
In this webinar, we’ll cover practical strategies you can employ to deliver improved business agility at the data layer. We’ll discuss the impact that microservices are having in the enterprise, and what this means for MySQL and other popular databases. Join us and learn the answers to these common questions:
● How can you meet the operational challenge of scaling the number of MySQL database instances and managing the fleet?
● Adding to this scale challenge, how can your MySQL instances maintain availability in a world where the underlying IT infrastructure is ephemeral?
● How can you secure data in motion?
● How can you enable self-service while maintaining control and governance?
We’ll cover these topics and share how enterprises like yours are delivering greater outcomes with our Pivotal Platform managed MySQL.
Now you can scale without fear of failure.
Presenters:
Judy Wang, Product Management
Jagdish Mirani, Product Marketing
The document discusses how VMware products like NSX, vRealize Operations, and vRealize Log Insight can provide monitoring, logging, and security capabilities for Pivotal Cloud Foundry environments. It highlights how NSX delivers inherently secure infrastructure, high performance distributed networking, and availability for PaaS. The document also notes how NSX can help organizations run things cheaper and be more efficient through improved data center operations and reduced CapEx.
Unlock your VMware Investment with Pivotal Cloud Foundry (VMworld 2014, VMware Tanzu)
Presented by Cornelia Davis - Platform Engineer, Cloud Foundry, Pivotal
You might have heard that software is eating the world; in every industry enterprises are being challenged to bring software to their consumers faster, more frequently and with insanely great user experiences. Pivotal Cloud Foundry, the leading enterprise Platform as a Service (PaaS) that is powered by Cloud Foundry, is designed to remove friction from the traditional application lifecycle, from dev all the way through production. At the core it exposes application and services “dial tone”, rather than infrastructure “dial tone”, scoping a broad set of capabilities such as autoscaling, dynamic routing, logging, monitoring, health management, and more, around the application. Pivotal Cloud Foundry itself depends on the infrastructure “dial tone” that is brilliantly provided by vSphere or vCHS.
In this session we’ll start with the industry drivers for PaaS, explain how it leverages your existing vSphere or vCHS investment, and then dive into the details of what Pivotal Cloud Foundry brings to the enterprise developer and operator. Light on slides and heavy on demo, you’ll come away with a solid understanding of how Pivotal CF can revolutionize the way your enterprise develops, delivers, and manages software.
Here we go! Our Experts take on Legacy Application Modernization with Microsoft Azure.
With Microsoft Azure gaining ground in the cloud infrastructure race, this article discusses the cutting-edge features and advantages of legacy app modernization using Microsoft Azure and the key things to consider when your application takes on the Azure outfit. The article below is derived from the white paper presented by our MS Azure team. Read on to explore the top ways application modernization using Microsoft Azure helps you gain a competitive edge.
Read more, please visit here: https://www.optisolbusiness.com/insight/legacy-application-modernization-with-microsoft-azure
The document discusses the benefits and features provided by Pivotal Cloud Foundry (PCF), including multi-cloud support, scalability, logging, metrics, containerization, orchestration, security, high availability, and support for multiple programming languages and frameworks. It also describes what developers and operators get with PCF, such as a polyglot environment, CI/CD, autoscaling, routing, compliance with industry standards, and more. The document explains that with PCF, applications are packaged using buildpacks along with their dependencies into containers, which run on top of stemcells that provide a preconfigured operating system image.
VMware Tanzu Application Service as an Integration Platform (VMware Tanzu)
SpringOne 2021
Session Title: VMware Tanzu Application Service as an Integration Platform
Speakers: Manoj Thekumpurath, Sr. Manager at Deloitte; Siddharth Mehrotra, Senior Manager at Deloitte
Packaging and Distributing Applications for Kubernetes (VMware Tanzu)
SpringOne 2021
Session Title: Packaging and Distributing Applications for Kubernetes
Speakers: Ian Zink, Staff Software Engineer at VMware; Nitasha Verma, Solutions Engineer at VMware
John Hancock’s Journey from Service-Oriented to Microservices Architecture on... (VMware Tanzu)
SpringOne Platform 2019
Title: John Hancock’s Journey from Service-Oriented to Microservices Architecture on Pivotal Platform
Speakers: Sunil Saxena, John Hancock Financial Services; Ankit Sharma, John Hancock Financial Services
Youtube: https://youtu.be/VaunNuDfd3E
James Watters Kafka Summit NYC 2019 Keynote (James Watters)
The document discusses how Spring Boot and Kafka can form the basis of a new enterprise application platform that enables continuous delivery and efficient scaling through microservices and event-driven architecture. It provides examples of companies like Netflix and T-Mobile that have successfully adopted this approach. The document advocates an "event-first" design and argues this platform approach allows for arbitrary scaling, multi-cloud deployment, and increased developer autonomy and agility.
“Sh*^%# on Fire, Yo!”: A True Story Inspired by Real Events (VMware Tanzu)
SpringOne 2020
“Sh*^%# on Fire, Yo!”: A True Story Inspired by Real Events
James Webb, MTS at T-Mobile
Brendan Aye, Technical Director, Platform Architecture at T-Mobile
A proper microservice is designed for fast failure.
Like other architectural styles, microservices bring costs and benefits. Some development teams have found the microservices architectural style to be a superior approach to a monolithic architecture; other teams have found it to be a productivity-sapping burden.
This material starts with the basics of what microservices are and why to use them, follows with the Felix example, and ends with successful strategies for developing microservice applications.
Cross-Platform Observability for Cloud Foundry (VMware Tanzu)
This document discusses cross-platform observability for Cloud Foundry. It highlights the need for observability of both platforms and applications to achieve stability, scalability, security and speed. It discusses challenges of monitoring microservices that generate large amounts of metrics data. The document promotes an observability-as-a-service approach for any application and cloud. It demonstrates metrics, traces and histograms as pillars of observability and service level objectives. Distributed tracing is presented as a way to troubleshoot microservices faster. The document concludes with a demo and best practices from an organization that uses observability to deliver high quality code.
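One of the pillars mentioned above, histograms tied to service-level objectives, can be made concrete with a small check: given a latency histogram, verify that the required fraction of requests landed under the SLO threshold. The bucket bounds and the 95%-under-500ms target below are illustrative assumptions, not figures from the talk.

```python
# Hypothetical SLO check over a latency histogram. The histogram maps a
# bucket's upper bound in milliseconds to the number of requests that
# fell in or below that bucket's range; bounds and target are examples.

def slo_met(histogram: dict, threshold_ms: int, target: float) -> bool:
    """True if at least `target` fraction of requests were <= threshold_ms."""
    total = sum(histogram.values())
    under = sum(count for bound, count in histogram.items()
                if bound <= threshold_ms)
    return total > 0 and under / total >= target

# 1000 requests: 700 under 100ms, 200 under 250ms, 80 under 500ms, 20 slower.
latency = {100: 700, 250: 200, 500: 80, 1000: 20}
```

Real systems (e.g. Prometheus-style histograms) use cumulative buckets and evaluate this over a rolling window, but the arithmetic is the same.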
Platform Health Assessment at Department of Homeland Security Citizenship and... (VMware Tanzu)
SpringOne Platform 2019
Session Title: Platform Health Assessment at Department of Homeland Security Citizenship and Immigration Services
Speakers: Chris Saunders, Platform Architect Manager, Pivotal and Kelly Walsh, Engagement Director, Pivotal and Paul Beccio, Developer, DHS USCIS
Youtube: https://youtu.be/LZsqqSH9VbI
APM members were guests of Lockheed Martin for this interactive presentation which outlined Lockheed Martin’s experience in implementing Enterprise Agile across the corporation. This presentation focuses on management practices and lessons learned.
SpringOne Platform 2017
Jason Michener, Comcast; Vipul Savjani, Accenture
Comcast has been on a Cloud-Native Transformation Journey with Pivotal Cloud Foundry for the past 3 years. Recently, Comcast Customer Experience and Engineering Teams were given a seemingly impossible task: Replace a 3rd party AI/ML Customer Service tool by building our own in 8 weeks. Come learn how we leveraged our Pivotal Cloud Foundry service platforms in a hybrid public/private cloud with our best customer experience professionals to fundamentally change how we are engaging with our customers.
This document discusses developer ready infrastructure and the evolution of cloud platforms. It argues that platforms need to support developers through automation and by handling operational concerns so developers can focus on building applications. It outlines different platform layers from infrastructure as a service (IaaS) to fully managed application platforms and serverless functions. Pivotal's approach leverages Kubernetes, BOSH, and Cloud Foundry to provide a fully automated and production-ready container platform that can run on any cloud and handle all operational tasks.
DevOps automation for Container based App DeliveryWaveMaker, Inc.
Modernization of IT and Container revolution
DevOps automation using containers
Lift and shift Apps into containers automagically.
Unified App delivery to Hybrid & Multi-clouds
Case study and Demo
Leveraging Standard Buildpacks to Migrate Not-So-Standard AppsVMware Tanzu
SpringOne 2021
Session Title: Leveraging Standard Buildpacks to Migrate Not-So-Standard Apps
Speakers: Brandon Blincoe, App Modernization Strategist at VMware; Matthew Campbell, Solutions Architect at VMware
DevOps KPIs as a Service: Daimler’s SolutionVMware Tanzu
1. Daimler developed a DevOps KPI-as-a-Service solution to provide transparency into key performance indicators for its Cloud Foundry-based platforms.
2. The solution collects and stores platform data daily and generates reports in Excel format on demand to analyze metrics like usage, capacity, and adoption over time.
3. Initial goals were to leverage existing platform data with little effort using a "learning by doing" approach; the team now aims to improve integration, documentation, automation, and marketing of the KPI tool within Daimler.
Deep Dive into Pivotal Cloud Foundry 2.0VMware Tanzu
SpringOne Platform 2017
Jeffrey Hammond, Forrester; Richard Seroter, Pivotal
Pivotal Cloud Foundry (PCF) is the enterprise platform of choice for cloud-native apps. With the release of PCF 2.0, the platform undergoes its biggest change ever. In this session, learn all about the latest release of PCF and all the major new capabilities that power your transformation. This is the place to learn all about Pivotal vision for the future of the platform.
Cloud-Native Patterns and the Benefits of MySQL as a Platform Managed ServiceVMware Tanzu
You can’t have cloud-native applications without a modern approach to databases and backing services. Data professionals are looking for ways to transform how databases are provisioned and managed.
In this webinar, we’ll cover practical strategies you can employ to deliver improved business agility at the data layer. We’ll discuss the impact that microservices are having in the enterprise, and what this means for MySQL and other popular databases. Join us and learn the answers to these common questions:
● How can you meet the operational challenge of scaling the number of MySQL database instances and managing the fleet?
● Adding to this scale challenge, how can your MySQL instances maintain availability in a world where the underlying IT infrastructure is ephemeral?
● How can you secure data in motion?
● How can you enable self-service while maintaining control and governance?
We’ll cover these topics and share how enterprises like yours are delivering greater outcomes with our Pivotal Platform managed MySQL.
Now you can scale without fear of failure.
Presenters:
Judy Wang, Product Management
Jagdish Mirani, Product Marketing
The document discusses how VMware products like NSX, vRealize Operations, and vRealize Log Insight can provide monitoring, logging, and security capabilities for Pivotal Cloud Foundry environments. It highlights how NSX delivers inherently secure infrastructure, high performance distributed networking, and availability for PaaS. The document also notes how NSX can help organizations run things cheaper and be more efficient through improved data center operations and reduced CapEx.
Unlock your VMWare Investment with Pivotal Cloud Foundry (VMworld 2014)VMware Tanzu
Presented by Cornelia Davis - Platform Engineer, Cloud Foundry, Pivotal
You might have heard that software is eating the world; in every industry enterprises are being challenged to bring software to their consumers faster, more frequently and with insanely great user experiences. Pivotal Cloud Foundry, the leading enterprise Platform as a Service (PaaS) that is powered by Cloud Foundry, is designed to remove friction from the traditional application lifecycle, from dev all the way through production. At the core it exposes application and services “dial tone”, rather than infrastructure “dial tone”, scoping a broad set of capabilities such as autoscaling, dynamic routing, logging, monitoring, health management, and more, around the application. Pivotal Cloud Foundry itself depends on the infrastructure “dial tone” that is brilliantly provided by vSphere or vCHS.
In this session we’ll start with the industry drivers for PaaS, explain how it leverages your existing vSphere or vCHS investment, and then dive into the details of what Pivotal Cloud Foundry brings to the enterprise developer and operator. Light on slides and heavy on demo, you’ll come away with a solid understanding of how Pivotal CF can revolutionize they way your enterprise develops, delivers and manages software.
Here we go! Our Experts take on Legacy Application Modernization with Microsoft Azure.
With Microsoft Azure gaining ground in the Cloud infrastructure race, this article aims to discuss the cutting-edge features and advantages of Legacy App Modernization using Microsoft Azure and the Key things to consider when your application takes on the Azure outfit. Article below derived from the White Paper presented by our MS Azure team. Read on to explore the top ways how Application Modernization using Microsoft Azure helps you gain the competitive edge.
Read more, please visit here: https://www.optisolbusiness.com/insight/legacy-application-modernization-with-microsoft-azure
The document discusses the benefits and features provided by Pivotal Cloud Foundry (PCF), including multi-cloud support, scalability, logging, metrics, containerization, orchestration, security, high availability, and support for multiple programming languages and frameworks. It also describes what developers and operators get with PCF, such as a polyglot environment, CI/CD, autoscaling, routing, compliance with industry standards, and more. The document explains that with PCF, applications are packaged using buildpacks along with their dependencies into containers, which run on top of stemcells that provide a preconfigured operating system image.
VMware Tanzu Application Service as an Integration PlatformVMware Tanzu
SpringOne 2021
Session Title: VMware Tanzu Application Service as an Integration Platform
Speakers: Manoj Thekumpurath, Sr. Manager at Deloitte; Siddharth Mehrotra, Senior Manager at Deloitte
Packaging and Distributing Applications for KubernetesVMware Tanzu
SpringOne 2021
Session Title: Packaging and Distributing Applications for Kubernetes
Speakers: Ian Zink, Staff Software Engineer at VMware; Nitasha Verma, Solutions Engineer at VMware
John Hancock’s Journey from Service-Oriented to Microservices Architecture on...VMware Tanzu
SpringOne Platform 2019
Title: John Hancock’s Journey from Service-Oriented to Microservices Architecture on Pivotal Platform
Speakers: Sunil Saxena, John Hancock Financial Services; Ankit Sharma, John Hancock Financial Services
Youtube: https://youtu.be/VaunNuDfd3E
James Watters Kafka Summit NYC 2019 KeynoteJames Watters
The document discusses how Spring Boot and Kafka can form the basis of a new enterprise application platform that enables continuous delivery and efficient scaling through microservices and event-driven architecture. It provides examples of companies like Netflix and T-Mobile that have successfully adopted this approach. The document advocates an "event-first" design and argues this platform approach allows for arbitrary scaling, multi-cloud deployment, and increased developer autonomy and agility.
“Sh*^%# on Fire, Yo!”: A True Story Inspired by Real EventsVMware Tanzu
SpringOne 2020
“Sh*^%# on Fire, Yo!”: A True Story Inspired by Real Events
James Webb, MTS at T-Mobile
Brendan Aye, Technical Director, Platform Architecture at T-Mobile
A proper Microservice is designed for fast failure.
Like other architectural style, microservices bring costs and benefits. Some development teams have found microservices architectural style to be a superior approach to a monolithic architecture. Other teams have found them to be a productivity-sapping burden.
This material start with the basic what and why microservice, follow with the Felix example and the the successful strategies to develop microservice application.
Cross-Platform Observability for Cloud FoundryVMware Tanzu
This document discusses cross-platform observability for Cloud Foundry. It highlights the need for observability of both platforms and applications to achieve stability, scalability, security and speed. It discusses challenges of monitoring microservices that generate large amounts of metrics data. The document promotes an observability-as-a-service approach for any application and cloud. It demonstrates metrics, traces and histograms as pillars of observability and service level objectives. Distributed tracing is presented as a way to troubleshoot microservices faster. The document concludes with a demo and best practices from an organization that uses observability to deliver high quality code.
Platform Health Assessment at Department of Homeland Security Citizenship and... (VMware Tanzu)
SpringOne Platform 2019
Session Title: Platform Health Assessment at Department of Homeland Security Citizenship and Immigration Services
Speakers: Chris Saunders, Platform Architect Manager, Pivotal and Kelly Walsh, Engagement Director, Pivotal and Paul Beccio, Developer, DHS USCIS
Youtube: https://youtu.be/LZsqqSH9VbI
APM members were guests of Lockheed Martin for this interactive presentation which outlined Lockheed Martin’s experience in implementing Enterprise Agile across the corporation. This presentation focuses on management practices and lessons learned.
The Ultimate Guide to Performance Testing in Low-Code, No-Code Environments.pdf (kalichargn70th171)
The emergence of Low-Code and No-Code platforms has reshaped the realm of software development. These platforms offer a transformative solution, empowering individuals with varying coding proficiencies to craft functional and efficient applications. Through intuitive visual development tools and pre-built components, Low-Code/No-Code platforms facilitate problem-solving and value creation, liberating users from the complexities of traditional coding.
https://www.learntek.org/blog/sdlc-phases/
https://www.learntek.org/
Learntek is a global online training provider for Big Data Analytics, Hadoop, Machine Learning, Deep Learning, IoT, AI, Cloud Technology, DevOps, Digital Marketing, and other IT and Management courses.
This document contains a summary of Venkata L Gunda's work experience and qualifications. He has over 8 years of experience in IT testing, including testing web and product applications. He has expertise in test planning, execution, quality assurance, and documentation. He is proficient in testing tools like QlikView, QC/ALM, QTP, and others. He has experience leading testing teams and projects in various domains including insurance, finance, and rail transportation. He holds certifications in ITIL, ISTQB, QC, and others.
The document discusses IBM's MobileFirst DevOps approach for continuously delivering high quality mobile apps and rapidly responding to feedback. It promotes leveraging collaborative development, continuous integration, release and deployment, and testing practices. Example case studies are provided that demonstrate how these practices can increase customer renewal rates, reduce release times, and decrease problems. The IBM toolset for supporting these DevOps capabilities is also outlined.
The document discusses IBM's MobileFirst DevOps approach for continuously delivering high quality mobile apps and rapidly responding to feedback. It promotes automating continuous development, testing, deployment, and monitoring processes to balance speed and quality. Key capabilities highlighted include collaborative development using Rational tools, continuous integration, testing, release, and monitoring across mobile, backend systems and cloud.
Effective performance engineering is a critical factor in delivering meaningful results. The implementation must be built into every aspect of the business, from IT and business management to internal and external customers and all other stakeholders. Convetit brought together ten experts in the field of performance engineering to delve into the trends and drivers that are defining the space. This Foresights discussion will directly influence Business and Technology Leaders that are looking to stay ahead of the challenges they face with delivering high performing systems to their end users, today and in the next 2-5 years.
Tufts Health: Creating a World Class Future User Experience Platform (Prolifics)
Speakers:
William Pappalardo, Tufts Health Plan
Tim Reilly, Prolifics
Abstract: In this session, you will learn why Tufts Health Plan chose IBM's Customer Experience Suite and Employee Experience Suite to replace their existing portal portfolio. Tufts Health Plan wanted to ensure they had a world class future looking user experience platform in place before modernizing and investing in new capabilities for their users. The session will detail how they subsequently planned and delivered an effective online experience using portal, web content management, forms, social solutions and more. The team will discuss their business priorities, technology selection, lessons learned, and what's up next in their roadmap.
MCA with 3+ years of experience as an IT Consultant/Implementation Engineer, including business trips.
My name is Abdul Wahab and I reside in Bangalore, India. I am looking for a change. Please refer me if there are any openings in your company or others.
Please find the attachment below; I am a postgraduate with 3.8 years of experience in .NET and Java web applications as an IT Consultant/Implementation Engineer.
Experienced with business trips from our company to the client site to handle the business workflow.
My resume gives a detailed structure of my work.
This document contains the resume of Satya Haritha summarizing their experience and qualifications. They have over 3 years of experience using Java/J2EE technologies including JDBC, JSP, Servlets, and Struts to develop and maintain web applications. Their most recent roles include working as a software engineer for BIRLASOFT Ltd developing an insurance management system and previously working for CVK Infratech Pvt Ltd developing a health assistance application. They have a BTech degree from JNTU University and are proficient in technologies like Java, SQL, HTML, XML, Eclipse and more.
The document provides a profile summary for Ayyappa Kumar including his contact information, industry preference, and over 6 years of experience in manual and automation testing using tools like HP UFT and QC. He is currently a consultant at Virtusa where his responsibilities include developing test cases, automation scripts, defect tracking, and ensuring quality testing. He has experience testing applications in domains such as ERP, BFSI, and various web and mainframe projects.
Today’s highly competitive and customer-centric market conditions have pushed software and solution delivery organizations beyond the traditionally accepted limits of software development and delivery capabilities. Lean methodologies such as Lean Six Sigma and DevOps can help improve operational solution delivery capacities through:
- Streamlining of the solution delivery process
- Improved software quality
- Automation of system operations
- Self-administration of system operations by development teams
Agile methodologies augment such operational improvements with their own enablement of faster time to market (TTM) by transforming the Lean concept of value-added activities into value-added product features. Agile software architecture augments solution delivery organizations’ Agile software development life cycle (SDLC) capabilities with flexible architectures that facilitate future product development.
This report from DCG Software Value discusses whether or not function points are still relevant in the IT world, given all the innovative changes and processes that have occurred.
Download this report here: http://ow.ly/108Vrw
Ravi Nelluri has over 11 years of experience in information technology with a focus on quality assurance. This includes both manual and automation testing as well as client interfacing and delivery roles in test management. He has expertise in test planning, design, execution, and defect tracking. He is proficient in automation tools like Selenium and has experience implementing test automation frameworks. He has worked on various projects across different domains testing applications, websites, and mainframe systems.
This presentation includes:
- Why performance matters for digital businesses?
- Use Cases for performance / load testing
- Load Test Design Considerations
- Tools and Technologies
- Methodology and Approach
- Activities and Deliverables
- Load Testing Success Stories
Similar to Travelers 360 degree health assessment of microservices on the pivotal platform (20)
Migrate Heroku & OpenShift Applications to IBM BlueMix (Rohit Kelapure)
This slide deck describes some of the architectural principles behind the Heroku, OpenShift, Cloud Foundry and BlueMix enterprise PaaS. The commonalities and differences in designing and porting apps across these platforms to Cloud Foundy/BlueMix are explored.
Liberty Buildpack: Designed for Extension - Integrating your services in Blue... (Rohit Kelapure)
The Liberty Buildpack aims to remove the hassle of running Java applications on Cloud Foundry whether it is the simplified setup, auto-configuration of Liberty and Java EE references to cloud resources, reduced droplet size through selective provisioning of the runtime, or the zero-touch configuration and usage of services. There are times, however, when an application needs a feature that the buildpack does not yet provide. This talk will start by showing how to use and configure the Java buildpack and finish by showing how to extend the buildpack to ensure that IBM BlueMix Cloud Foundry is the best place to run your application. To build services and integrate them with BlueMix, you must implement the Service Broker API of Cloud Foundry for your services. This talk will explain how to write plugins to the Liberty Buildpack that will auto wire services your organization developed and integrated into CF making it easier for your apps to use the services in Cloud Foundry.
A Deep Dive into the Liberty Buildpack on IBM BlueMix (Rohit Kelapure)
This talk goes into the details and mechanics of how the Liberty buildpack deploys an application into the IBM BlueMix Cloud Foundry. It also explores how the Cloud Foundry runtime drives the Liberty buildpack code and what the Liberty buildpack code in Cloud Foundry does to run an application in the cloud environment. This talk touches on the restrictions that Cloud Foundry and the Liberty runtime imposes on applications running in Cloud Foundry. Developers attending this talk get deep insight into the why, what, how, and when of the Liberty buildpack ruby code, enabling them to write applications faster and optimized for the Liberty runtime in IBM BlueMix.
Liberty provides dynamic caching capabilities through configuration in server.xml and properties files. Caches can store entries in memory or disk and work across multiple tiers like servlets and Java objects. The Cache Monitor displays metrics and configurations for all cache instances. There are some restrictions around features like replication and caching for certain workloads.
The ICAP Integrated Development Environment (IDE) provides a number of standard development tools to ease the design of modern applications.
Mobile (Worklight)
Includes IBM's industry leading mobile development platform
Java (WebSphere Liberty Profile)
Rapidly build next-generation, engaging applications for the WebSphere Application Server Liberty Profile.
JavaScript (Node.js)
Easily build applications with the most popular JavaScript runtime for event-driven server side development .
Cloud Explorer
Quickly discover shared services to enhance applications. Develop custom services to share with others.
This document summarizes a session from the IBM Exceptional Web Experience Conference 2012 in Austin, Texas. The session discussed how IBM WebSphere Portal and Web Content Manager can leverage IBM WebSphere eXtreme Scale and IBM WebSphere DataPower XC10 Appliance to greatly increase cache capacity and improve performance. Offloading the dynamic cache to these elastic caching solutions can reduce response times, increase throughput, and enable faster startup of new servers.
Classloader leak detection in WebSphere Application Server (Rohit Kelapure)
The document discusses IBM WebSphere Application Server V8.5 features for classloader memory leak prevention, detection, and remediation. It introduces that customers discovered classloader and ThreadLocal memory leaks in WebSphere Application Server and their own applications. The new features in V8.5 include prevention of common leak patterns, detection of application-triggered leaks, and automated fixing of leaks by leveraging JDK APIs. The summary is configured through JVM properties and administrators can view leak detection messages and run operations to find and fix leaks through dynamic MBeans.
This document discusses patterns and best practices for dependency injection using the Contexts and Dependency Injection (CDI) specification. It provides an overview of CDI concepts like scopes, qualifiers, producers, events, decorators, and alternatives. It also discusses how CDI is implemented in WebSphere Application Server, including bean failover support.
This document provides a summary and comparison of the Java EE and Spring frameworks. It outlines the evolution of both technologies and highlights key features from Java EE 6 and Spring 3.0/3.1. The document also discusses how Spring and Java EE can coexist, approaches to migrating from Spring to Java EE, and concludes with references for further information.
WebSphere Application Server performance tuning workshop (Rohit Kelapure)
The document describes a workshop on using the WebSphere Application Server Performance Tuning Toolkit (PTT) to analyze performance issues in the Daytrader application. It includes instructions on setting up the environment, installing Daytrader, and walking through 5 scenarios that simulate different types of performance problems: synchronization blocking, deadlock, high CPU usage, connection leak, and memory leak. For each scenario, it describes how to trigger the problem, monitor it using the PTT, and analyze the issue using thread dumps and the ISA V5 tool. The goal is to help users understand common performance issues and how to diagnose them.
The WebSphere Application Server Performance Tuning Toolkit provides a three-pronged approach to performance monitoring and tuning: it monitors servers for errors and potential problems, accelerates performance tuning by centralizing monitoring and tuning scripts, and facilitates problem determination through features like thread dumps, heap dumps, and runtime tracing. The toolkit offers a friendly UI and requires no additional installation or configuration. It provides reports to analyze monitoring data both online and offline.
The IBM Java Health Center provides profiling, garbage collection, I/O, lock analysis, threads, and native memory information for the IBM JVM. It is fully supported through PMRs by the IBM Java Tools team. The Health Center Agent runs inside the JVM and collects data, which can be analyzed using the Health Center Client Eclipse perspective. The agent can run in socket, headless, or late attach mode. Headless mode writes data files without opening a socket, while socket mode opens a port. The client can load data files to analyze profiling, memory, and other information.
This session compares the Spring and Java EE stacks in terms of Web frameworks. It re-examines the motivations behind the Spring framework and explores the emergence of the Java EE programming model to meet the challenges posed. The presentation provides insight into when Spring and/or Java EE is appropriate for a building Web applications and if they can coexist.
Rohit Kelapure is an IBM Advisory Software Engineer responsible for the resiliency of WebSphere Application Server. He is a team lead and architect of caching and data replication features in WebSphere. The presentation discusses server resiliency fundamentals, common JVM problems like thread hangs, memory leaks, and tooling for debugging such as Eclipse Memory Analyzer and Thread Dump Analyzer.
The document discusses and compares caching technologies for Java applications, specifically Memcached and DistributedMap. It provides an overview of general object cache characteristics and how cache instances work. Memcached is described as an open-source, high-performance distributed memory caching system where each cluster of servers is a single cache instance. DistributedMap is a built-in component of WebSphere Application Server that allows for multiple cache instances within a single JVM. The document outlines some advantages of each system and poses some open questions for further performance comparisons.
SIBus Tuning for production WebSphere Application Server (Rohit Kelapure)
The document discusses various topologies for SIBus messaging in WebSphere Application Server, including single server, single messaging engine; application server cluster with single messaging engine; multiple application server clusters with single messaging engine; and multiple bus member topologies. It examines the pros and cons of each approach in terms of scalability, performance, availability, and manageability.
First Failure Data Capture for your enterprise application with WebSphere App... (Rohit Kelapure)
First Failure Data Capture (FFDC) is a tool to capture diagnostic data when problems occur in code. It includes extensible frameworks for data collectors, formatters, and incident forwarders to dynamically capture more contextual data. FFDC creates a unique incident file for each failure, updates a summary report, and allows dynamically adding extensions without changing core logging calls.
Dynacache and Memcached are caching technologies for Java applications. Dynacache is built into WebSphere Application Server and caches content in JVM memory, allowing cache operations through POJO calls. Memcached is an open-source, high-performance distributed memory caching system where keys and values are transmitted over TCP/IP and cached content is not stored in JVM memory. Each technology has advantages such as Dynacache supporting memory and disk storage with size restrictions and faster operations through POJO calls without serialization, while Memcached does not consume memory in the Java heap.
This document compares the caching technologies Memcached and DistributedMap, which is part of Dynacache. Memcached is an open-source key-value store, where each key is cached on one server and clients must transmit keys and values over TCP/IP. DistributedMap is a caching component of WebSphere Application Server that caches content in JVM memory, allowing cache operations through POJO calls. The document outlines advantages of each technology, such as Memcached's ability to be used across different platforms and DistributedMap's support within WebSphere and integration with other IBM products.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 (Neo4j)
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf (Malak Abu Hammad)
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
How to Get CNIC Information System with Paksim Ga.pptx (danishmna97)
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Removing Uninteresting Bytes in Software Fuzzing (Aftab Hussain)
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... (SOFTTECHHUB)
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
During our two-week engagement with Pivotal, we started with the Discovery and Framing phase. In this phase we:
1. Created story backlogs prioritized by the goals and activities (shown in the top-right corner)
2. Performed an architecture retrospective and assessed the application architecture holistically
3. Performed fishbone analysis to identify failure modes between microservices and queues, and failure points for different applications (shown in the bottom-right corner)
4. Performed platform health check activities, including a review of the application's current RabbitMQ implementation against best practices, a review of the autoscaling policies, and a review of environment-specific differences
5. Performed an application source code review and evaluated the upgrade impact of Spring Boot 2.0, Java 11, and dependencies
6. Defined a performance plan and isolation segment setup to address:
- Inconsistent response times for the apps
- The root cause of performance drift
- Differing CPU utilization across PCF instances
7. Talked about how we can leverage the Pivotal Platform Metrics Dashboard for application monitoring
A 360-degree health assessment of our application revealed many interesting observations about our applications and platform, identified several key risks, and produced applicable recommendations from Pivotal Solution Architects. Instead of going over every Application Health Check dimension, I will focus on a few dimensions in the interest of time.
First, I would like to talk about the Failure Mode Analysis dimension.
In this dimension, we performed failure-mode testing in which we were able to reproduce the issue we were facing in production: thread pool exhaustion caused by resource contention, leading to high CPU.
The risks identified for this dimension were that we would have to perform chaos testing with high load to reach the break point, that the app cannot tolerate loss of RabbitMQ and run in degraded mode for an extended period of time, and that latency-based autoscaling in PCF does not work for this application.
The recommendations from Pivotal were to:
● Tune the size of the ForkJoin thread pool to 10
● Cap the ForkJoin queue depth at 10
● Set the HTTP thread pool size to 100
● Configure autoscaling to be CPU-based with thresholds [80, 160] and min/max instances set to [1, 10]
● Use CallerRunsPolicy for the ForkJoin pool
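The pool settings above can be sketched in plain Java. Note that ForkJoinPool itself does not accept a rejection handler, so a common substitute for "CallerRunsPolicy for the ForkJoin pool" is a fixed ThreadPoolExecutor with a bounded queue; the sizes here mirror the recommendation (10 threads, queue depth 10) but this is an illustrative sketch, not the team's actual configuration.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolConfig {
    // Sketch of the recommended settings: a fixed pool of 10 threads,
    // queue depth capped at 10, and CallerRunsPolicy so overflow tasks
    // run on the submitting thread, shedding load back to producers
    // instead of exhausting memory or threads.
    static ExecutorService boundedPool() {
        return new ThreadPoolExecutor(
                10, 10,                        // fixed pool of 10 threads
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(10),  // max queue depth = 10
                new ThreadPoolExecutor.CallerRunsPolicy());
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = boundedPool();
        for (int i = 0; i < 100; i++) {
            final int n = i;
            pool.submit(() -> System.out.println("task " + n));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

Because rejected tasks run on the caller's thread, a saturated pool naturally slows producers down rather than dropping work or spinning up unmanaged threads.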
The next dimension I would like to talk about is Technical Debt & Code Hygiene.
In this dimension, we discovered that the application has a bloated classpath: microservices are as big as ~250 MB and embed five app servers (Netty, Jetty, Jersey, Spark server, and Tomcat).
The risk is that if time and resources are not spent reducing the number and scope of dependencies, the apps will take longer to start and eventually autoscaling will not work. We also need to speed up the inner loop of development.
The recommendations were to:
● Eliminate the shared service library
● Eliminate and prune external dependencies
● Migrate to Spring Boot 2.x and Java 11
● Run and profile apps in a local sandbox with all service dependencies
● Set a threshold on the size of app jars in the CI pipeline to stop third-party library proliferation
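The jar-size threshold can be enforced with a small gate in the CI pipeline. This is a hypothetical sketch; the class name and the 100 MB budget are assumptions, not figures from the assessment.

```java
import java.io.File;

public class JarSizeGate {
    // Hypothetical CI gate: fail the build when the app jar exceeds a
    // size budget, to stop third-party library proliferation.
    static final long MAX_BYTES = 100L * 1024 * 1024; // assumed 100 MB budget

    static boolean withinBudget(File jar) {
        return jar.length() <= MAX_BYTES;
    }

    public static void main(String[] args) {
        File jar = new File(args[0]);
        if (!withinBudget(jar)) {
            System.err.println("Jar exceeds size budget: " + jar.length() + " bytes");
            System.exit(1); // non-zero exit fails the CI stage
        }
    }
}
```

Run as a build step after packaging; a non-zero exit code stops the pipeline before the bloated artifact is promoted.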
Monitoring and Metrics
Establishing desired service behavior, measuring how the service is actually behaving, and correcting discrepancies.
Our assessment of Monitoring & Metrics revealed that we were using too many tools to monitor our applications, which caused confusion when identifying the root cause of issues. The recommendation was to reduce the number of monitoring tools and use PCF Metrics along with Dynatrace or a similar Application Performance Monitoring tool for root cause analysis.
Failure Mode Analysis
Understand the impact of failure of critical external dependencies on the core service. Play out scenarios where there is partial or complete loss of business functionality and plan for appropriate countermeasures.
Our assessment of Failure Mode Analysis revealed that we should perform chaos testing under high load to identify the failure impact, such as the application being unable to tolerate the loss of RabbitMQ ….
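A minimal sketch of the kind of countermeasure this analysis leads to, assuming nothing about the team's actual code: wrap a broker-backed call with a fallback supplier so the app serves a degraded (e.g., cached) response instead of failing outright when RabbitMQ is unreachable.

```java
import java.util.function.Supplier;

public class DegradedMode {
    // Illustrative fallback wrapper (not the team's actual code):
    // try the primary, broker-backed operation; on failure, return
    // a degraded result so partial business functionality survives.
    static <T> T withFallback(Supplier<T> primary, Supplier<T> fallback) {
        try {
            return primary.get();
        } catch (RuntimeException brokerDown) {
            return fallback.get(); // degraded mode: serve stale/cached data
        }
    }

    public static void main(String[] args) {
        String result = withFallback(
                () -> { throw new IllegalStateException("RMQ unreachable"); },
                () -> "cached-response");
        System.out.println(result);
    }
}
```

In practice a library such as Resilience4j provides this pattern with circuit breaking and metrics; the sketch only shows the shape of the countermeasure.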
Technical Debt
Dependency Management and Library updates within the project. Is there a substantial bloat of libraries and third party dependencies in the project? Where is the technical debt accumulated in the components?
Our assessment of Technical Debt revealed that the application has become bloated due to the inclusion of various dependency jars that are not being leveraged.
Emergency Response
Are run books in place to capture the right set of logs when a failure occurs? Does the development team follow a prescribed set of steps to triage and debug a problem in production? Are circuit breakers and other fallbacks in place to revert to a degraded functionality during failure?
Our assessment of Emergency Response revealed that we need an automated way of collecting CPU thread and heap dumps when the CPU is experiencing high utilization.
Performance Optimization
Are the applications starting slowly? Are applications not meeting their expected SLAs? Analysis of performance issues ranging from high memory allocation to increased latency and high CPU. Evaluation of the performance test plan.
Our assessment of Performance Optimization revealed that the application was CPU-constrained due to unmanaged threads. Properly sizing the thread pools is necessary to drive performance, along with using the correct garbage collector. Local profiling of the application is very important for understanding thread utilization, which can be done with tools such as VisualVM and JMeter.
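As a rough illustration of thread-pool sizing (the wait-to-compute heuristic below is a general rule of thumb, not a figure from the assessment):

```java
public class PoolSizing {
    // Common sizing heuristic: threads ≈ cores * (1 + waitTime/computeTime).
    // CPU-bound work (ratio ~0) gets roughly one thread per core;
    // I/O-heavy work tolerates more threads per core.
    static int sizeFor(int cores, double waitToComputeRatio) {
        return (int) Math.ceil(cores * (1 + waitToComputeRatio));
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("suggested pool size: " + sizeFor(cores, 0.5));
    }
}
```

Local profiling (e.g., with VisualVM under a JMeter load) is what tells you the actual wait-to-compute ratio to plug in.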
The next dimension I would like to focus on is Architecture.
This dimension revealed that our microservices are at the right level of granularity; however, there is tight coupling, and unnecessary big data dependencies are present in the code.
The risk identified is that there is considerable sharing of service libraries between microservices, leading to tight coupling. The shared service library is a monolith that is dragged into each service, and the big data dependencies are pushing services toward monoliths. The standalone model-execution jar should run locally, on Spark, and on Cloud Foundry.
The recommendations were to:
● Eliminate core service library sharing between microservices
● Decouple model execution in the app from Hadoop and Spark to decompose dark-mode functionality
● Reduce exceptions and errors at startup; bring startup time under 30 s
● Use BOSH DNS to remove SCS overhead
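The decoupling recommendation can be pictured as a small port/adapter sketch. The interface and class names here are hypothetical: model execution sits behind an interface so the same jar can run locally, on Spark, or on Cloud Foundry without dragging big data dependencies into every service.

```java
// Hypothetical port/adapter sketch (names are illustrative).
interface ModelExecutor {
    double score(double[] features);
}

// Local adapter: no Hadoop/Spark classes on the classpath.
class LocalModelExecutor implements ModelExecutor {
    @Override
    public double score(double[] features) {
        double sum = 0;
        for (double f : features) {
            sum += f; // stand-in for real model logic
        }
        return sum;
    }
}
// A Spark-backed adapter would live in a separate module, so its
// dependencies never leak into the core service jar.
```

The core service depends only on the interface; the runtime environment decides which adapter (and which dependency set) gets deployed with it.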
Architecture
Is the architecture tightly coupled? Are the microservices too fine-grained? Is the architecture adding technical debt? Is the architecture trending in the right direction? Can it be extended easily?
Our assessment of Architecture uncovered that our microservices were at the right level of granularity; however, there was tight coupling between them due to sharing of the core framework library, which made our application large through the incorporation of unnecessary dependency components ….
Change Management
How does feature development work? What changes need to be made to the architecture and code for sustainability and evolution along the right dimensions? What are the top three things to bring the code and design into alignment with design principles?
Assessment of Change Management,
Platform as a Product
The platform’s capabilities change in response to the needs of its users. It is treated as a product that is inclusive of not only Pivotal Platform but all the services and integrations that make it a viable environment for applications to run.
Assessment of Platform as Product, reveals that
Balanced Team
The platform team consists of a product manager and at least two platform engineers with a combination of infrastructure and software engineering skills. Does the team have all the tools and workstation infrastructure it needs to perform at a high velocity?
Process and Path to Production
Developers are able to take full advantage of the platform via modern and optimized tools and processes. Do DevOps and CI/CD follow the right set of processes? How is code promoted across environments?
We made great progress in achieving the objectives that we set at the beginning of the two-week engagement. I would like to highlight some of the achievements from the 360 Degree Health Assessment:
(2) Our team has started doing local profiling of the application from a startup, CPU, and latency perspective, using VisualVM and JMeter, before deploying to the cloud for performance testing
(3 and 4) We resolved the performance mystery from the production outage by implementing a managed-thread strategy and right-sizing our thread pool settings
(5) Got consistent performance test results by running in an isolation segment setup
(6) Demonstrated that the app can scale under sustained load while keeping response times under the SLO
(7) Reduced the overall application size and improved the startup time by 50% by reducing classpath bloat, removing unnecessary exceptions and errors, and pruning pom.xml
Understanding what users want from your service helps inform your SLIs.
Be careful not to select too many, so that you can stay focused on what users really care about.
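For example, a latency SLI of the kind users directly experience can be computed as the fraction of requests served under a threshold; this is a generic sketch, and the 300 ms threshold is illustrative, not a figure from the talk.

```java
public class LatencySli {
    // SLI = good events / total events over a measurement window.
    static double sli(long[] latenciesMs, long thresholdMs) {
        if (latenciesMs.length == 0) {
            return 1.0; // empty window: vacuously meeting the target
        }
        long good = 0;
        for (long l : latenciesMs) {
            if (l <= thresholdMs) good++;
        }
        return (double) good / latenciesMs.length;
    }

    public static void main(String[] args) {
        long[] window = {120, 250, 90, 400, 310};
        System.out.println("SLI @300ms: " + sli(window, 300));
    }
}
```

Comparing this measured SLI against the SLO target over time is what tells you whether spending on reliability, or on new features, is the better trade.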
The cost of increasing reliability is two-fold:
● Cost of extra hardware, software, and licenses (for redundancy)
● Opportunity cost of not working on new features