Vikash Pandey delivered a session on "Microservices – Explored" at ATAGTR2020, the 5th Edition of the Global Testing Retreat.
Vikash is an empathetic leader who has worked with people and technology in the areas of Product Development, Consulting, Support, and Operations for 20+ years.
The video recording of the session is now available on the following link: https://youtu.be/dF5wx4w66s8
To know more about #ATAGTR2020, please visit: https://gtr.agiletestingalliance.org/
Adopting Cloud Testing for Continuous Delivery, with the premier global provi..., by SOASTA
IDC, the premier global provider of IT market research, and SOASTA, an IDC industry leader in cloud testing, know that maintaining leadership means moving quickly to outpace the competition. Both IDC and SOASTA work with clients to realize the benefits that cloud computing brings to delivering high-quality, rapidly deployable web and mobile applications.
Join them in this webinar where you will hear:
IDC speak on:
Perspectives on the state of cloud computing for agile web and mobile development
Market dynamics and maturity around the cloud and cloud testing
Recommendations for getting started with cloud testing
SOASTA speak on:
The business drivers for cloud and virtualization
Customer goals of using and implementing cloud testing
The road to implementing cloud testing in a continuous integration model
Case studies of customer cloud testing success
SOASTA’s services and technology will be highlighted and demonstrated as a solution for continuous web and mobile testing as utilized by the Paychex team.
Who Should Attend?
Senior IT Management
Development and QA Executives and Directors
Performance team leads and engineers
Test Automation leads and engineers
Mobile Development and Testing team leads and engineers
The document summarizes key topics in cloud computing including definitions of cloud types (private, public, hybrid, community), characteristics of cloud services (on-demand self-service, broad network access, etc.), cloud service models (SaaS, PaaS, IaaS), benefits and risks of cloud adoption, security considerations, and predictions for cloud computing in 2012.
OSLC provides a simple solution for integrating tools across the software lifecycle by defining standard interfaces that let tools share information using linked data principles. This facilitates increased automation, traceability, and reuse while reducing maintenance costs, since users can work seamlessly across their tools without complex synchronization schemes. The OSLC community is working to further develop and promote open specifications through an independent standards organization to improve DevOps and application lifecycle management.
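The linked-data idea behind OSLC can be illustrated with a small sketch: tools expose resources that reference each other by URI instead of keeping synchronized copies. The URIs and property names below are purely illustrative, not real OSLC vocabulary.

```python
# Illustrative sketch (hypothetical URIs and fields): two tools expose
# resources that link to each other rather than duplicating records.

defect = {
    "uri": "https://tracker.example.com/defects/42",
    "title": "Login fails on retry",
}
test_case = {
    "uri": "https://qm.example.com/tests/7",
    "validates": defect["uri"],   # a link, not a duplicated copy of the defect
}

# Traceability falls out of following links rather than syncing databases.
print(test_case["validates"] == defect["uri"])  # True
```

Because each tool only stores links, there is no synchronization job to maintain when the defect tracker changes its internal schema.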
DevOps transformation in the Rational Collaborative Lifecycle Organization, by Robbie Minshall
A set of slides summarizing the DevOps transformation efforts within the Rational Collaborative Lifecycle Management organization, discussing the use of IBM UrbanCode Deploy, IBM PureApplication System, and the adoption of DevOps methodologies.
The document discusses IBM Rational Solution for Systems and Software Engineering. It aims to help organizations reduce time, cost and risk of developing profitable products and systems by providing an integrated set of tools, practices and services for specifying, designing, implementing and validating complex products and the software powering them. Key capabilities highlighted include requirements engineering, systems modeling and analysis, quality management, embedded software development, change and configuration management, and collaboration throughout development.
Are your cloud applications performing? How Application Performance Managemen..., by DevOps.com
This document discusses application modernization and why application performance monitoring (APM) is important during the modernization process. It provides an overview of common business reasons for modernizing applications, such as increasing flexibility, availability, scalability and portability. The document then discusses common challenges of modernization and provides examples of how companies approach modernizing applications. It emphasizes the importance of APM throughout the modernization lifecycle to deliver applications with speed, quality and control. The document concludes with examples of client experiences modernizing applications and lessons learned regarding monitoring tools in containerized/cloud environments.
Presentation used for IBM Systems Magazine Webcast: Mobile DevOps: Build and Connect on July 17, 2014
To see the recorded webcast - http://www-01.ibm.com/software/os/systemz/webcast/devops/series/
Mobile to mainframe - Enterprise DevOps - MoDevEast Slides, by Sanjeev Sharma
This document discusses adopting DevOps practices in the enterprise. It begins with an agenda that covers an overview of DevOps, Lean principles, applying DevOps in the enterprise including for mobile apps and mainframes, and adopting DevOps through people, processes, and technology. The document then covers definitions of DevOps, Lean principles like the Deming cycle, and challenges of applying DevOps across heterogeneous environments, mobile apps, and mainframes. It emphasizes coordinating across teams and tiers to accelerate delivery while ensuring quality.
This document discusses how Zurich Insurance was able to deliver DevOps style production values and double performance of their Risk Management Platform using PureApplication and UrbanCode Deploy. PureApplication allowed them to create reusable patterns for deploying the solution components. UrbanCode Deploy provided automated deployment of the patterns and management of the environments. Together, PureApplication and UrbanCode Deploy provided a synergetic solution that rapidly and consistently deployed the overall Risk Management Platform, reducing downtime and speeding up computation times.
- IT is at an inflection point due to pressures from both internal and external factors such as legacy architectures, outdated applications, rigid processes, and high operating costs. 70% of CXOs expect their IT department to undergo significant changes in the next three years.
- The future of IT service delivery is a brokerage model enabled by cloud computing. In this model, the CIO becomes a Chief Innovation Officer and cloud computing allows for the transformation of IT departments.
- For higher education institutions, adopting a cloud brokerage model allows the IT department to help transform the institution's business while also transforming itself. This helps meet the expectations of students and the broader campus community for more flexible, on-demand, and cost-effective services.
Continuous Delivery for cloud - scenarios and scope, by Sanjeev Sharma
Cloud is both a catalyst and an enabler for DevOps. The flexibility, services, and capabilities provided by the cloud lower the barrier to adoption for organizations looking to adopt DevOps, allowing them to achieve the business goals of speed, business agility, and innovation.
This webinar will explore the impact of DevOps on using the Cloud as a Platform as a Service and vice versa. It will explore the different use cases of DevOps that are enabled or enhanced by the Cloud platform, and the different 'scopes' of adoption by organizations adopting Cloud and DevOps in an iterative manner.
Testing the Brave New World of SaaS Applications (QUEST 2018, v1), by Gerie Owen
Testing SaaS applications presents unique challenges compared to traditional on-premise software. Key aspects to test include business processes, configurations, customizations, data migration and integrations. Non-functional testing of performance, availability, security and other qualities is also important. An effective test approach includes standard test scenarios addressing these areas, with separate test environments and tracks coordinated through Scrum of Scrums meetings. Specialized test skills are required, and planning for vendor upgrades is crucial.
Microservices: A Step Towards Modernizing Healthcare Applications, by CitiusTech
This white paper discusses the importance of microservices and the role they play in today's ever-changing healthcare IT landscape.
The document aims to share a perspective on areas to consider while adopting microservices architecture for modernizing healthcare applications.
DevOps defines a set of roles and responsibilities focused on reducing risk in IT deployments and projects. By connecting development and operations, enterprise IT departments can begin to break down silos in order to:
- maximize automation;
- eliminate or significantly reduce human error;
- increase consistency; and
- reduce the time spent on outages, error detection, and prevention caused by unstable environments.
Gunnar Menzel, President of ODCA, Chief Architect of Capgemini Infra, outlines the ODCA perspective on the DevOps concept, focusing on key challenges it can help resolve and the benefits it can provide.
Download the white paper today http://opendatacenteralliance.org/article/devops-magnifying-business-value/
Cloud Application Rationalization - The Cloud, the Enterprise, and Making the ..., by Chad Lawler
“Cloud Application Rationalization - The Cloud, the Enterprise and Making the Right Decisions for your Business”, Gartner Symposium ITXPO, October 24, 2011, Author Chad M. Lawler, Ph.D., Director, Consulting Services, Cloud Computing, U.S. Strategic Technology Solutions, Hitachi Consulting
A new approach to delivering applications with speed, quality, and scale to accelerate business success
Experience the next generation of Application Lifecycle Management – with support for waterfall projects, agile, and everything in between.
2014-10-23 Twin Cities User Group Presentation, by Roger Snook
This document discusses how to continuously deliver high quality mobile apps and rapidly respond to feedback using DevOps practices. It recommends taking a DevOps approach to mobile development that emphasizes collaborative development, continuous testing, continuous release and deployment, and continuous customer feedback. This allows organizations to accelerate software delivery, balance speed and quality, and reduce time to customer feedback. The document provides an overview of IBM's DevOps for Mobile offerings to help achieve these goals across the mobile development lifecycle.
The top concerns we hear from customers are "How can we release on time?" and "How can we have a stable release?" We answer them with a simple one-liner: "Embrace DevOps."
Neev capabilities in building video and live streaming apps, by Neev Technologies
Neev is an IT services and product development company that has expertise in video/live streaming applications, web technologies, and mobile development. It has worked with over 15 clients in media and entertainment, education, and other industries to design, build, deploy, and maintain streaming applications. Neev leverages technologies like Java, Ruby on Rails, and cloud platforms from AWS and Google. The document provides details on Neev's capabilities and case studies of projects involving video streaming portals, a video editing SaaS, and a conferencing application.
The cloud is here to stay, and companies are looking to their internal applications teams to provide strategic guidance on how best to take advantage of the cloud. Find out how your peers are using business applications in the cloud to their advantage.
Key take-aways:
- Ways cloud computing can drive process transformation for your organization;
- How companies are using business applications in the cloud for competitive advantage;
- How virtual private cloud computing eliminates risks commonly associated with public cloud environments.
The document discusses cloud analytics, cloud testing, and virtual desktop infrastructure (VDI).
Cloud analytics allows organizations to implement analytics capabilities in the cloud to scale easily as the company grows and removes the burden of on-premise management. Cloud testing verifies cloud functions like redundancy and performance scalability. VDI creates a virtualized desktop environment on remote servers that users can access from any device, bringing benefits like access, security, cost reduction, and device portability.
This document provides information about the IT consulting firm Prolifics. It discusses their approach of using standardized patterns to customize IT solutions for clients. This allows them to deliver solutions faster, cheaper, and with better results. Prolifics utilizes expertise in various technologies and industries to implement patterns that simplify infrastructure, reduce costs, and improve agility. They also offer additional services around software licensing optimization, cloud strategies, and managed IT services.
Modernising the Enterprise: An Evening with the AWS Enterprise User Group, by Harley Young
The French aristocrat, writer, and aviator Antoine de Saint-Exupéry is most famous for writing The Little Prince. He's also credited with the following quote: "If you want to build a ship, don't drum up the men and women to gather wood, divide the work, and give orders. Instead, teach them to yearn for the vast and endless sea." Cloud modernisation is a little bit like building that ship. You can't merely command an organisation to pick up its applications and move to the cloud. Instead, you must teach them to yearn for the vast and endless potential the cloud provides to build whatever they can imagine for their business.
In the presentation, I explain a 5-step approach that has been used to transform and modernise some of the world's most successful businesses in nearly all imaginable industry verticals as they contemplated a move to the AWS cloud:
1. Know your business
2. Understand your environment
3. Prepare the organisation
4. Move the first workloads to the cloud
5. Get help when you need it
PaaS POV: To PaaS or Not - There Really Is No Question, by Rene Claudio
Enterprise IT needs to achieve a much higher degree of agility by increasing delivery velocity from requirements to releases. PaaS is a foundational enabler of IT agility by allowing developers to focus on coding while automating operational activities like provisioning and deploying environments. PaaS provides application runtimes and services, enables microservices architectures, and automates operations tasks like infrastructure management, deployments, and scaling. Achieving IT agility starts with a PaaS proof-of-concept to identify workloads that would benefit and determine a roadmap for adoption.
Pure App + Patterns + Prolifics = Feeding Change, by Prolifics
This document provides information on Prolifics, an IT services company that utilizes patterns and expertise to help clients. It discusses Prolifics' technical excellence, industry focus, global delivery advantage, and core values. The document then outlines various IT services Prolifics can provide, including application development and testing, business analytics, managed services, and more. It emphasizes that Prolifics utilizes patterns and expertise to help clients adapt faster, transform applications, improve security and more.
The document discusses microservices architecture and monolithic architecture. It defines microservices as an architectural style where applications are composed of small, independent services that communicate over well-defined APIs. This allows for independent deployability and scalability of individual services. The document contrasts this with monolithic architecture, which packages an entire application into a single deployable unit with tight coupling between components.
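The contrast the document draws can be sketched in a few lines of code. This is a toy illustration with hypothetical service names: each "service" owns its own data and is reachable only through a narrow, well-defined API, which is what makes independent deployment and scaling possible.

```python
# Minimal sketch (hypothetical services) of the microservices idea:
# small components that interact only through well-defined APIs.

class OrderService:
    """Owns order data; exposes only a narrow, well-defined API."""
    def __init__(self):
        self._orders = {}

    def create_order(self, order_id, item):
        self._orders[order_id] = {"item": item, "status": "created"}
        return self._orders[order_id]

class ShippingService:
    """A separate service: depends on OrderService only via its API.
    In a real deployment this dependency would be an HTTP/RPC client,
    so each service could be deployed and scaled on its own."""
    def __init__(self, order_api):
        self._order_api = order_api

    def ship(self, order_id, item):
        order = self._order_api.create_order(order_id, item)
        order["status"] = "shipped"
        return order

orders = OrderService()
shipping = ShippingService(orders)
print(shipping.ship("o-1", "widget"))  # {'item': 'widget', 'status': 'shipped'}
```

The monolithic equivalent would bundle both behaviours into one deployable unit with direct access to each other's data, which is exactly the tight coupling the document contrasts against.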
The Reality of Managing Microservices in Your CD Pipeline, by DevOps.com
As we shift from monolithic software development practices to microservices, our well-designed CD pipeline will need to change. Microservices are small functions, deployed independently and linked via APIs at run-time. While these differences seem minor, they actually have a large impact on your overall CD structure. Think hundreds of workflows, small (if any) builds, and the loss of a monolithic 'application.'
Join Tracy Ragan, CEO of DeployHub and Brendan O'Leary, Developer Evangelist at GitLab, to learn more.
It's never too early to start the conversation.
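The "hundreds of workflows" point above can be made concrete with a small sketch: instead of one pipeline for the application, you generate one small workflow per service. The service names and stage list here are hypothetical.

```python
# Hedged sketch: with microservices, one monolithic pipeline becomes many
# small, per-service workflows. Names below are illustrative only.

def pipeline_for(service):
    """Build a minimal CD workflow definition for one microservice."""
    return {
        "service": service,
        "stages": ["build", "test", "deploy"],  # each service deploys on its own
        "artifact": f"{service}:latest",
    }

services = ["orders", "shipping", "billing"]    # real systems may have hundreds
workflows = [pipeline_for(s) for s in services]

print(len(workflows))  # one workflow per service, not one per application
```

In practice the generated definitions would be rendered into your CI system's configuration format, but the structural shift is the same: the unit of delivery becomes the service, not the application.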
How to add security in dataops and devopsUlf Mattsson
The emerging DataOps is not just DevOps for data. According to Gartner, DataOps is a collaborative data management practice focused on improving the communication, integration and automation of data flows between data managers and consumers across an organization.
The goal of DataOps is to create predictable delivery and change management of data, data models and related artifacts. DataOps uses technology to automate data delivery with the appropriate levels of security, quality and metadata to improve the use and value of data in a dynamic environment.
This session will discuss how to add Security in DataOps and DevOps.
MuCon 2015 - Microservices in Integration ArchitectureKim Clark
The document discusses integration architecture in a microservices world. It begins by defining integration architecture as how data and functions are shared between applications. It then discusses challenges with large enterprise landscapes that have undergone mergers and acquisitions. The document outlines different types of integration architectures like external, enterprise, batch-based, and event-based integration. It also discusses common misconceptions around microservices, such as thinking microservices refer to exposed APIs rather than application components. The summary concludes by noting debates around the differences between microservices and service-oriented architecture (SOA).
This document discusses when a service mesh may be needed and provides an overview of the current service mesh landscape. It begins with why microservices are adopted and the challenges of operating distributed applications. It then describes a maturity journey where a service mesh is not initially needed but may become useful for applications that become more complex, distributed, and interdependent. The document outlines some current major service mesh implementations and notes that the technology is still new and changing rapidly. It recommends investigating service meshes through proof of concepts but cautions that production usage requires significant resources. It profiles F5 Aspen Mesh and NGINX solutions for service meshes and microservices.
A Guide on What Are Microservices: Pros, Cons, Use Cases, and MoreSimform
IT organizations can benefit from a microservices approach to application development, gaining agility and accelerated time to market. There is a catch, however: the application must first be broken into appropriately fine-grained services.
This talk was given in Feb 2020. Sergey and I co-presented at CTO Forum on Microservices and Service Mesh: how they relate, their requirements, goals and best practices, and how DevOps and Agile have converged on the set of features for service meshes and gateways around observability, feature flags, etc.
Presentation of the talk given by Carmine Spagnuolo (Postdoctoral Research Fellow, Università degli Studi di Salerno / ACT OR) titled "Technology insights: Decision Science Platform" at the Decision Science Forum 2019, the most important Italian event on Decision Science.
Alex Thissen (Xpirit) - Een verschuiving in architectuur: op weg naar microse...AFAS Software
This document discusses microservices architecture as a modern approach to application development. It begins by outlining some of the challenges with monolithic architectures and how microservices address needs for scalability, agility, availability and efficiency. Key characteristics of microservices are that they are independently deployable, use lightweight protocols for communication, and are organized around business capabilities rather than technical boundaries. The document provides examples of how to decompose a monolithic application into microservices and discusses considerations for designing services, service communication, and hosting microservices using containers and orchestration platforms.
Application Modernization With Cloud Native Approach_ An in-depth Guide.pdfbasilmph
Application modernization means taking outdated applications and upgrading their platform infrastructure, internal systems, and usage patterns. Its advantages can be summarized as increasing the speed with which new features are delivered, exposing the functionality of existing applications for consumption via APIs by other services, and re-platforming applications from on-premises infrastructure to cloud-native platforms.
The document discusses adopting microservices and DevOps approaches to improve agility. It defines microservices as independent processes communicating via APIs. Microservices allow modularizing applications into business-aligned components. DevOps emphasizes collaboration, automation, and continuous delivery to reduce disruption. The document recommends a three step approach: 1) Modularize monolithic applications using domain-driven design, 2) Adopt platforms for continuous integration, and 3) Use containers for continuous deployment and monitoring of microservices at scale.
Automating Applications with Habitat - Sydney Cloud Native MeetupMatt Ray
Habitat is an open source tool for automating the build, deployment, and management of applications. It defines a standard lifecycle for applications that includes building, deploying, running, and managing applications and their dependencies. Habitat packages applications and dependencies together, and uses supervisors to manage applications in production. It aims to simplify and standardize the delivery of developer services by automating common tasks like configuration, service discovery, and clustering across different runtime environments.
Cloud native is a new paradigm for developing, deploying, and running applications using containers, microservices, and container orchestration. The Cloud Native Computing Foundation (CNCF) drives adoption of this paradigm through open source projects like Kubernetes, Prometheus, and Envoy. Cloud native applications are packaged as lightweight containers, developed as loosely coupled microservices, and deployed on elastic cloud infrastructure to optimize resource utilization. CNCF seeks to make these innovations accessible to everyone.
Architecting for speed: how agile innovators accelerate growth through micros...Jesper Nordström
The document discusses the benefits of adopting a microservices architecture compared to traditional monolithic applications. It states that microservices allow for easier deployment, superior scalability, and improved productivity. Specifically, microservices enable faster development cycles since individual services can be independently developed and deployed. They also allow for more efficient scaling since individual services can be scaled up or down independently without affecting other services. This results in more efficient use of code and infrastructure.
Architecting for speed: how agile innovators accelerate growth through micros...3gamma
In a world where software has become the key differentiator, enterprises are forced to transform the way they build, ship and run software in order to stay in the game. Adopting a microservices architecture enables organisations to not only become more agile but also to cut costs and increase stability.
Culture Is More Important Than Competence In IT.pptxmushrunayasmin
The DevOps implementation will simplify the current support structure inside operations by automating environment build and application release management tasks.
This would ensure quicker delivery of higher-quality software, increasing client satisfaction.
Learn More: https://bjitgroup.com/agile-software-company
This document discusses moving from a traditional SOA architecture to a microservices architecture using DevOps principles. It describes microservices as independent processes communicating via APIs. It outlines 3 steps to transition: 1) apply domain-driven design to modularize services, 2) use a PaaS for continuous integration, and 3) use containers for deployment. Continuous integration, deployment, and monitoring are discussed as key DevOps practices to manage the technical debt of microservices and provide agility. Examples of AWS Lambda, Spring Cloud, and Azure Service Fabric are given as platforms that provide the tools to build microservices applications.
Understanding The Cloud For Enterprise Businesses. Triaxil
Cloud is getting lots of attention these days. Cloud is a transformational platform that can support the opportunities of today’s digital business being shaped and driven by mobile, social, IoT (Internet of Things), Big Data and other forces. Cloud Computing not only is a powerful agent of change, but it also can accelerate transformation.
The benefits are big. “Cloud computing is a disruptive phenomenon, with the potential to make IT organizations more responsive than ever,” says research firm Gartner. “Cloud computing promises economic advantages, speed, agility, flexibility, infinite elasticity and innovation.” As a result, more and more enterprises are moving to the cloud. According to Gartner, 78 percent of enterprises are planning to increase their investment in cloud through 2017.
Similar to #ATAGTR2020 Presentation - Microservices – Explored (20)
#Interactive Session by Anindita Rath and Mahathee Dandibhotla, "From Good to Great: Enhancing Testability in Software Testing" at #ATAGTR2023.
#ATAGTR2023 was the 8th Edition of Global Testing Retreat.
To know more about #ATAGTR2023, please visit: https://gtr.agiletestingalliance.org/
#Interactive Session by Ajay Balamurugadas, "Where Are The Real Testers In The Age of AI?" at #ATAGTR2023.
#Interactive Session by Jishnu Nambiar and Mayur Ovhal, "Monitoring Web Performance: Leveraging Grafana and Selenium for Real-Time Issue Alerts" at #ATAGTR2023.
#Interactive Session by Pradipta Biswas and Sucheta Saurabh Chitale, "Navigating the IoT Performance Testing Landscape" at #ATAGTR2023.
#Interactive Session by Apoorva Ram, "The Art of Storytelling for Testers" at #ATAGTR2023.
#Interactive Session by Nikhil Jain, "Catch All Mail With Graph" at #ATAGTR2023.
#Interactive Session by Ashok Kumar S, "Test Data the key to robust test coverage" at #ATAGTR2023.
#Interactive Session by Seema Kohli, "Test Leadership in the Era of Artificial Intelligence" at #ATAGTR2023.
#Interactive Session by Ashwini Lalit, "RRR of Test Automation Maintenance" at #ATAGTR2023.
#Interactive Session by Srithanga Aishvarya T, "Machine Learning Model to automate performance test script development using Jmeter" at #ATAGTR2023.
#Interactive Session by Kirti Ranjan Satapathy and Nandini K, "Elements of Quality Engineering in Remote IoT System" at #ATAGTR2023.
#Interactive Session by Sudhir Upadhyay and Ashish Kumar, "Strengthening Testing Oversight Using Environment Automation" at #ATAGTR2023.
#Interactive Session by Sayan Deb Kundu, "Testing Gen AI Applications" at #ATAGTR2023.
#Interactive Session by Dinesh Boravke, "Zero Defects – Myth or Reality" at #ATAGTR2023.
#Interactive Session by Saby Saurabh Bhardwaj, "Redefine Quality Assurance – Journey from Centralized to Decentralized, Distributed Blockchain/Web3 testing" at #ATAGTR2023.
#Keynote Session by Sanjay Kumar, "Innovation Inspired Testing!!" at #ATAGTR2023.
#Keynote Session by Schalk Cronje, "Don’t Containerize me" at #ATAGTR2023.
#Interactive Session by Chidambaram Vetrivel and Venkatesh Belde, "Revolutionizing Security Testing with AI" at #ATAGTR2023.
#Interactive Session by Aniket Diwakar Kadukar and Padimiti Vaidik Eswar Datta, "A Holistic Testing Methodology for Immersive Experience in AR, VR, and the Metaverse" at #ATAGTR2023.
#Interactive Session by Vivek Patle and Jahnavi Umarji, "Empowering Functional Testing with Support Vector Machines: An Experimental Journey" at #ATAGTR2023.
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol, built on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor IvaniukFwdays
In this talk we will discuss DDoS protection tools and best practices, discuss network architectures, and look at what AWS has to offer. We will also examine one of the largest DDoS attacks on Ukrainian infrastructure, which happened in February 2022. We'll see what techniques helped keep web resources available to Ukrainians and how AWS improved DDoS protection for all customers based on the Ukraine experience.
Digital Banking in the Cloud: How Citizens Bank Unlocked Their MainframePrecisely
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
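To give a flavor of what such notebook snippets look like, here is a minimal, self-contained anomaly detector using a z-score threshold. This is an illustrative sketch only; the threshold value and sensor readings are assumptions, and the tutorial's actual edge-deployed models are not reproduced here.

```python
# Minimal z-score anomaly detector: flag readings that deviate from the
# mean by more than `threshold` standard deviations.

def zscore_anomalies(values, threshold=2.0):
    """Return indices of values more than `threshold` population
    standard deviations from the mean (values must be non-empty)."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n
    std = variance ** 0.5
    if std == 0:
        return []  # all readings identical: nothing to flag
    return [i for i, v in enumerate(values) if abs(v - mean) / std > threshold]

# Hypothetical sensor readings with one obvious spike at index 5.
readings = [10.1, 9.9, 10.0, 10.2, 9.8, 42.0, 10.1]
print(zscore_anomalies(readings))  # -> [5]
```

In a real edge pipeline, the same check would run over a sliding window of Kafka messages rather than a fixed list.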
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way that breaks data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is repaid by taking even bigger "loans", resulting in ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
Dandelion Hashtable: beyond billion requests per second on a commodity serverAntonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables that go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state-of-the-art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open-addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line-chaining. This design (1) offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. On a commodity server with a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
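For orientation, here is the textbook closed-addressing (chained) baseline that designs like DLHT start from. This sketch deliberately omits everything that makes DLHT interesting: bounded cache-line chaining, lock-free operations, software prefetching and non-blocking parallel resizes.

```python
# Plain closed-addressing hashtable: each bucket holds a chain of
# (key, value) pairs; deletes free their slot immediately.

class ChainedHashTable:
    def __init__(self, capacity=8):
        self.buckets = [[] for _ in range(capacity)]

    def _bucket(self, key):
        # Hash the key to one bucket; collisions share the chain.
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite existing key
                return
        bucket.append((key, value))

    def get(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return None

    def delete(self, key):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                del bucket[i]  # unlike open addressing, the slot is freed instantly
                return True
        return False
```

The "deletes free slots instantly" property the deck highlights falls out naturally here, which is exactly why DLHT revisits closed addressing despite the folklore preference for open addressing.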
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-EfficiencyScyllaDB
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an...Jason Yip
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX models have been a hot topic in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new type of licensing works and what benefits it offers you. Above all, you surely want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts in order to save money. There are also practices that can lead to unnecessary expenses, for example using a person document instead of a mail-in database for shared mailboxes. We show you such cases and their solutions. And of course we explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It will give you the tools and know-how to keep an overview. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
Topics covered
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes and functional/test users
- Real-world examples and best practices you can apply immediately
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip , presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
2. Agenda
What are microservices
Why should microservices matter to us
Trends supporting Microservices
Advantages, concerns and mitigations
Services and Libraries
Important aspects of microservices:
Service discovery and registration
Deployment
Data Handling
Migration strategies of monolith to microservices
Let’s share varying perspectives
3. Who am I?
• 43. Married, with a son (12). Pune, India; Indian citizen.
• Strengths: Positivity, Maximizer, Developer, Arranger, Responsibility
• 20+ years in industry, 13+ of them with FIS
• Consulting, Delivery, Support
• Result oriented, focused on solving problems
• Technology enthusiast with a strong interest in people and their success
• Before FIS: Fidelity Investments, Iflex Solutions, One Off Software Development, e-Enable Technologies
• A blogger and consistent learner; I build and share my views
• Advocate of DevOps, Cloud and Microservices
• An Inbox thinker
4. What are microservices
Martin Fowler: an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource application programming interface (API).
IBM describes it as a cloud-native approach to building applications from "loosely coupled and independently deployable smaller components, or services" that:
• Have their own stack;
• Use REST APIs and other forms of communication to connect to other services; and
• Are sorted by business capability and separated into "need-to-know" chunks via bounded context.
We can summarize it as: microservices are an application design architecture with a philosophy based on building independent components that all connect via APIs (HTTP, Thrift or REST) to reduce complexity, increase scalability, and allow applications to be distributed with more ease than a traditional monolithic-style program architecture.
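Both definitions reduce to the same mechanics: a small process that owns one capability and is reachable only through a lightweight API. A minimal sketch in Python stdlib terms (the service name, endpoint path and JSON shape are illustrative assumptions, not from the slides):

```python
# A deliberately tiny "service": it runs on its own and exposes its one
# capability only through an HTTP API, which is what a consumer service
# would call.

import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class GreetingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The service's entire contract is this JSON-over-HTTP response.
        body = json.dumps({"service": "greeting", "status": "up"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), GreetingHandler)  # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A consumer knows nothing about the service's internals, only its API:
url = f"http://127.0.0.1:{server.server_port}/health"
reply = json.loads(urlopen(url).read())
server.shutdown()
print(reply["status"])  # -> up
```

Independent deployability follows from this shape: as long as the API contract holds, the service behind it can be rewritten, rescaled or redeployed without touching its consumers.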
5. Why should microservices matter to us
Few notable concerns
Vanson Bourne on behalf of API management platform provider Kong, polled 200 senior IT
leaders of organizations with more than 1,000 employees. • The survey also finds significant microservices challenges
that remain are:
• ensuring security (36%),
• integration with legacy applications (32%),
• the complexity of management (31%) and
• updating API documentation (31%).
• Despite these concerns, however, more than 80% of
survey respondents who have adopted microservices
report that their organization performs well against
metrics for development efficiency, the ability to use
new platforms, collaboration across teams and sharing
of services across applications.
• 89% of technology leaders agree that companies that are not able to effectively support
microservices will be less able to compete in the future.
• The primary reasons cited for adopting microservices are
• improvements to security (56%),
• increased development speed (55%),
• increased speed of integrating new technologies (53%),
• improved infrastructure flexibility (53%) and
• improved collaboration across teams (46%)
• Availability (75%), security (74%), performance (65%) and scalability (64%) are given the highest importance.
• Faster development speed (95%), increased collaboration (94%) and reduced deployment risks (93%) are cited as key desired outcomes of adopting any new technology.
• The survey also notes 83% of organizations are relying on open-source software to become
more agile. The most commonly used open-source technologies are databases (64%),
containers (48%), API gateways (41%), infrastructure automation (40%), container
orchestration (37%) and continuous integration/continuous delivery (CI/CD) tools (36%).
The foundation for fully utilizing the capabilities of microservices is a strong DevOps culture embedded in the team.
6. Trends supporting Microservices
• In 2019, trends that were prominent and pervasive:
• Test automation, driven by test-driven development, which requires developers to run tests throughout the Continuous Integration (CI) pipeline.
• Incident response: the rise of Site Reliability Engineers was a response to the resiliency challenges of these systems. They were, and are, in charge of efficiency, performance, latency, availability, capacity planning, and emergency response.
• Continuous deployment: more developers built tooling around the deployment of microservices, which cut down the cost associated with complexity, so organizations could continue migrating to microservice architectures.
• In 2020, trends that are prominent and pervasive:
• High microservices market growth: 22.5% in the USA and 27.4% overall.
• Cloud adoption: organizations and their software delivery staff are moving away from locally hosted applications and shifting to the cloud.
• Observability tools: companies recognize how important observability is for distributed, microservices-based architectures.
• Web services frameworks: as microservices continue to evolve, developers can use web service frameworks that offer out-of-the-box development capabilities and implement service design patterns and code automatically.
• Increased demand for frequent application updates: in response to user demands for interactive, rich and dynamic experiences across platforms, microservices can support this update frequency. It also fits the need for scalability and agility in applications that must be highly available, scalable, and easy to run on cloud platforms.
7. Advantages, Concerns and Mitigations
A few advantages:
• Because the microservices function independently, each component can be built in whatever language the developer of that component prefers.
• Individual components can be updated on their own, since their internal operation is inconsequential to the app as a whole. As long as they report back the correct data, the app keeps working.
• Scaling can be applied to each individual component, saving the time and computing resources spent replicating entire applications to account for load on a single component.
• A component that no longer meets the needs of its application can be replaced without affecting the app as a whole.
• Applications can be decentralized across multiple cloud providers, servers, and other services, both local and remote.
Challenges to look out for:
• In the initial stages, it's the complexity: the number of services and their deployment complexity. Until the delivery process is sorted out, go slow.
• Distribution cost: increased total latency across the systems performing the business function. Mitigate it by carefully considering the number of microservices.
• Reduced reliability: with more moving parts in the system, reliability suffers. Concepts like service mesh and observability, and the use of their tools, can help.
• Any service can call any service; there is no restriction at all. Hence a solid versioning strategy for service APIs is a recommended mitigation, or consider a Change Strategy.
"Versioning an API is like having your age in your name. Yes, I'm talking to you John32 and Emmanuel46. And then comes the fallacy of 'nested' resources: if the resource John32 (/john32) has nested resource child (/john32/child), is the child resource of version 32 as well? Or is it a child when John was at version 32? What if the child is changing versions regardless of John32?" - Zdenek Nemec
In one survey it was found that point-to-point versioning, the strategy employed by most web API developers today, is 45% more costly with 4 different API versions than a Compatible Versioning strategy. While Compatible Versioning has a higher initial cost, over time it provides huge cost savings. Change Strategy embraces the Compatible Versioning strategy.
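One way Compatible Versioning plays out in practice is additive-only changes on the provider side consumed by a "tolerant reader" on the client side, instead of minting /v1, /v2 point-to-point versions. A minimal Python sketch (the field names and payloads are hypothetical, chosen only to illustrate the idea):

```python
import json

def parse_account(payload: str) -> dict:
    """Tolerant reader: pick out only the fields this client needs and
    ignore anything the provider adds in later, compatible revisions."""
    data = json.loads(payload)
    return {
        "id": data["id"],
        "name": data["name"],
        # A newer optional field is read with a default, so older payloads
        # from before the field existed still parse correctly.
        "status": data.get("status", "active"),
    }

# Original payload, and a later payload where the provider only *added* fields.
v1 = '{"id": 7, "name": "John"}'
v2 = '{"id": 7, "name": "John", "status": "frozen", "branch": "Pune"}'

assert parse_account(v1) == {"id": 7, "name": "John", "status": "active"}
assert parse_account(v2)["status"] == "frozen"
```

Because the reader never breaks on unknown fields, the provider can evolve the resource without forking its URL space.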
8. Services and Libraries
Services
Loosely Coupled
Easily Upgradeable
Libraries
Tightly Coupled
Complicated JARs
Componentization via services: teams must prefer distributing components as services rather than libraries.
9. Microservices important aspects - Service discovery
• When using client-side discovery, the client is responsible for determining the network locations of available service instances and load balancing requests across them.
• Netflix Eureka is a service registry. It provides a REST API for managing service-instance registration and for querying available instances. Netflix Ribbon is an IPC client that works
with Eureka to load balance requests across the available service instances.
• This approach is relatively straightforward and, except for the service registry, there are no other moving parts.
• Since the client knows about the available service instances, it can make intelligent, application-specific load-balancing decisions such as consistent hashing.
• A significant drawback of this pattern is that it couples the client to the service registry. One must implement client-side service discovery logic for each programming language and framework used by the service clients.
• When using server-side discovery, the client makes a request to a service via a load balancer. The load balancer queries the service registry and routes each request to an available service instance.
• The AWS Elastic Load Balancer (ELB) is an example of a server-side discovery router.
• Some deployment environments such as Kubernetes and Marathon run a proxy on each host in the cluster. The proxy plays the role of a server-side discovery load
balancer.
• The service registry is a key part of service discovery. It is a database containing the network locations of service instances. A few examples are etcd, Consul, and Apache ZooKeeper.
…
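Client-side discovery with round-robin load balancing can be sketched with a toy in-memory registry standing in for Eureka or Consul (service names and addresses here are purely illustrative):

```python
class ServiceRegistry:
    """Toy in-memory service registry; a stand-in for Eureka, Consul or etcd."""
    def __init__(self):
        self._instances = {}  # service name -> list of "host:port" locations

    def register(self, service, location):
        self._instances.setdefault(service, []).append(location)

    def lookup(self, service):
        return list(self._instances.get(service, []))


class ClientSideDiscovery:
    """Client-side discovery: the client queries the registry itself and
    load-balances requests across the instances (round-robin here)."""
    def __init__(self, registry):
        self._registry = registry
        self._next = {}  # service name -> next round-robin index

    def next_instance(self, service):
        instances = self._registry.lookup(service)
        if not instances:
            raise LookupError(f"no instances of {service}")
        i = self._next.get(service, 0) % len(instances)
        self._next[service] = i + 1
        return instances[i]


registry = ServiceRegistry()
registry.register("payments", "10.0.0.1:8080")
registry.register("payments", "10.0.0.2:8080")

client = ClientSideDiscovery(registry)
assert client.next_instance("payments") == "10.0.0.1:8080"
assert client.next_instance("payments") == "10.0.0.2:8080"
assert client.next_instance("payments") == "10.0.0.1:8080"  # wraps around
```

In server-side discovery the same lookup-and-balance logic moves out of the client and into a router such as an AWS ELB, which is exactly the coupling trade-off the bullets above describe.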
10. Microservices important aspects - Service registration
• When using the self-registration pattern, a service instance is responsible for registering and deregistering itself with the service registry. Netflix OSS Eureka client is an example
of this pattern.
• One benefit is that it is relatively simple and doesn’t require any other system components.
• However, a major drawback is that it couples the service instances to the service registry. You must implement the registration code in each programming language and
framework used by your services.
• When using the third-party registration pattern, a system component known as the service registrar handles registration. The service registrar tracks changes to the set of running instances by either polling the deployment environment or subscribing to events. The open-source Registrator project and Netflix OSS Prana are examples of service registrars.
• A major benefit is that services are decoupled from the service registry. You don’t need to implement service-registration logic for each programming language and
framework used by your developers.
• One drawback of this pattern is that unless it’s built into the deployment environment, it is yet another highly available system component that you need to set up and
manage.
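The self-registration pattern hinges on a heartbeat, or lease, so the registry can expire instances that die without deregistering. A minimal sketch (the TTL, service name and address are illustrative; a real registry such as Eureka exposes this over a REST API):

```python
import time

class LeaseRegistry:
    """Registry that expires instances whose heartbeat lease has lapsed."""
    def __init__(self, ttl_seconds=30):
        self._ttl = ttl_seconds
        self._leases = {}  # (service, location) -> time of last heartbeat

    def register(self, service, location, now=None):
        self._leases[(service, location)] = time.time() if now is None else now

    # A heartbeat simply renews the lease.
    heartbeat = register

    def deregister(self, service, location):
        self._leases.pop((service, location), None)

    def lookup(self, service, now=None):
        now = time.time() if now is None else now
        return [loc for (svc, loc), t in self._leases.items()
                if svc == service and now - t <= self._ttl]


registry = LeaseRegistry(ttl_seconds=30)
# Self-registration: the instance registers itself at startup ...
registry.register("orders", "10.0.0.5:8080", now=0)
# ... and keeps sending heartbeats while it is alive.
registry.heartbeat("orders", "10.0.0.5:8080", now=25)
assert registry.lookup("orders", now=40) == ["10.0.0.5:8080"]
# Once heartbeats stop, the lease expires and the instance drops out.
assert registry.lookup("orders", now=60) == []
```

With third-party registration, a separate registrar process would call register/deregister on behalf of the instances instead.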
11. Microservices important aspects - Deployment
• A microservices application consists of tens or even hundreds of services. Services are written in a variety of languages and frameworks. Each one is a mini-application with its
own specific deployment, resource, scaling, and monitoring requirements.
• Multiple service instances per host - we provision one or more physical or virtual hosts and run multiple service instances on each one. Each service instance runs at a well-known port on one or more hosts. These hosts require pet-like treatment.
• Each service instance can be a process or a process group, and/or multiple service instances can run in the same process or process group.
• Resource usage is relatively efficient, deploying a service instance is relatively fast, and starting a service is usually very fast.
• There is little or no isolation of the service instances unless each service instance is a separate process. While we can accurately monitor each service instance’s resource utilization, we cannot limit the resources each instance uses. Operations also depends more strongly on development, since technology-specific deployment instructions must be passed from development to operations, increasing the risk of deployment errors.
• Service instance per host - we run each service instance in isolation on its own host. There are two flavors of this pattern: service instance per virtual machine and service instance per container. In the VM flavor, we package each service as a virtual machine (VM) image such as an Amazon EC2 AMI.
• Aminator and Packer are technologies that help build VM images. Boxfuse builds secure and lightweight VM images that are fast to build, boot quickly, and expose a limited attack surface. CloudNative has the Bakery, a SaaS offering for creating EC2 AMIs. We can configure our CI server to invoke the Bakery after the tests for a microservice pass.
• Each service instance runs in complete isolation, can leverage mature cloud infrastructure with load balancing and autoscaling coming along by default, encapsulates our
service’s implementation technology.
• Resource utilization is less efficient: each service instance has the overhead of an entire VM, including the operating system; deploying a new version of a service is usually slow; and VM images are typically slow to build due to their size.
…
12. Microservices important aspects - Deployment
• Service instance per container - each service instance runs in its own container. The processes running in a container have their own port namespace and root filesystem. We can limit a container’s memory and CPU resources. Examples of container technologies include Docker and Solaris Zones.
• Our services need to be packaged as container images: a filesystem image consisting of the application and its dependent libraries.
• Containers isolate our service instances from each other. We can easily monitor the resources consumed by each container. Also, like VMs, containers encapsulate the technology used to implement our services. The container management API also serves as the API for managing our services. Containers are lighter weight than VMs.
• Container infrastructure is not as mature as the infrastructure for VMs, containers are not as secure as VMs since containers share the kernel of the host OS with one another, and we are responsible for the undifferentiated heavy lifting of administering container images unless we use a managed service such as GCE or ECS.
• The advancements are aiming to blur the distinction between containers and VMs. Boxfuse VMs are fast to build and start. The Clear Containers project aims to create lightweight
VMs.
• Serverless - AWS Lambda, an example of serverless, natively supports Java, Go, PowerShell, Node.js, C#, Python, and Ruby code, and provides a Runtime API which allows you to
use any additional programming languages to author your functions.
• The request-based pricing means that we only pay for the work our services actually perform. Also, because we are not responsible for the IT infrastructure, we can focus on developing the application.
• There are a few significant limitations: it is not intended for deploying long-running services, requests must complete within 300 seconds, functions must be written in one of the supported languages, services must be stateless, and services must also start quickly.
13. Microservices important aspects - Data Handling
• Data access becomes much more complex when we move to a microservices architecture. The data owned by each microservice is private to that microservice and can only be
accessed via its API.
• Different microservices often use different kinds of databases, SQL, NoSQL, Graph.
• A partitioned, polyglot-persistent architecture for data storage has many benefits, including loosely coupled services and better performance and scalability. However, it does
introduce some distributed data management challenges.
• Two-Phase Commit is usually not a viable option in modern applications. The CAP theorem requires us to choose between availability and ACID-style consistency, and availability
is usually the better choice. Moreover, many modern technologies, such as most NoSQL databases, do not support 2PC.
• Another challenge is how to implement queries that retrieve data from multiple services. We may retrieve data using an application-side join, but that is not the optimal solution in many situations.
• The solution is to use an event-driven architecture. In this architecture, a microservice publishes an event when something notable happens, such as when it updates a business
entity. Other microservices subscribe to those events. When a microservice receives an event, it can update its own business entities, which might lead to more events being
published.
• It is important to note that transactions across microservices are not ACID transactions. They offer much weaker guarantees such as eventual consistency. This transaction model
has been referred to as the BASE model (trading some consistency for availability for dramatic improvements in scalability).
• It enables the implementation of transactions that span multiple services and provide eventual consistency. Another benefit is that it also enables an application to maintain
materialized views.
• The programming model is more complex than when using ACID transactions. Often we must implement compensating transactions to recover from application-level failures. Applications must deal with inconsistent data, and subscribers must detect and ignore duplicate events.
…
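The event-driven approach above, including the duplicate-event detection the last bullet calls for, can be sketched in-process (the event shapes, service names and amounts are hypothetical; a real system would use a message broker between services):

```python
class EventBus:
    """Minimal in-process message broker: publish fans out to subscribers."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, handler):
        self._subscribers.append(handler)

    def publish(self, event):
        for handler in self._subscribers:
            handler(event)


class CustomerView:
    """Subscriber that maintains its own copy of the data, with idempotent
    handling so redelivered (duplicate) events are ignored."""
    def __init__(self, bus):
        self.totals = {}
        self._seen = set()
        bus.subscribe(self.on_event)

    def on_event(self, event):
        if event["id"] in self._seen:   # detect and ignore duplicate events
            return
        self._seen.add(event["id"])
        if event["type"] == "OrderCreated":
            c = event["customer"]
            self.totals[c] = self.totals.get(c, 0) + event["amount"]


bus = EventBus()
view = CustomerView(bus)
bus.publish({"id": 1, "type": "OrderCreated", "customer": "acme", "amount": 100})
bus.publish({"id": 1, "type": "OrderCreated", "customer": "acme", "amount": 100})  # redelivery
bus.publish({"id": 2, "type": "OrderCreated", "customer": "acme", "amount": 50})
assert view.totals == {"acme": 150}
```

The subscriber's state is only eventually consistent with the publisher's, which is exactly the BASE trade-off described above.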
14. Microservices important aspects - Data Handling
• There are a few ways to achieve atomicity with an event-driven architecture as well:
• The database as a message queue - publish events using a multi-step process involving only local transactions. This approach eliminates the need for 2PC by having the application use local
transactions to update state and publish events.
• Transaction log mining - events are published by a thread or process that mines the database’s transaction or commit log; the transaction log miner reads the log and publishes the events to the message broker.
• Event sourcing, rather than storing the current state of an entity, the application stores a sequence of state-changing events. The application reconstructs an entity’s current state by
replaying the events. Whenever the state of a business entity changes, a new event is appended to the list of events. Since saving an event is a single operation, it is inherently atomic.
• It solves one of the key problems in implementing an event-driven architecture and makes it possible to reliably publish events whenever state changes. As a result, it solves data
consistency issues in a microservices architecture.
• Because it persists events rather than domain objects, it mostly avoids the object-relational impedance mismatch problem.
• It also provides a 100% reliable audit log of the changes made to a business entity and makes it possible to implement temporal queries that determine the state of an entity at any point in time.
• Our business logic consists of loosely coupled business entities that exchange events. This makes it a lot easier to migrate from a monolithic application to a microservices architecture.
• We can use events to maintain materialized views that pre-join data owned by multiple microservices. The service that maintains the view subscribes to the relevant events and updates
the view. This approach solves the challenge of how to implement queries that retrieve data from multiple services.
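Event sourcing can be sketched minimally: state changes append events to a log, current state is rebuilt by replaying them, and temporal queries replay a prefix of the log (the entity, event names and amounts are illustrative only):

```python
class Account:
    """Event-sourced entity: state is rebuilt by replaying stored events."""
    def __init__(self, events=()):
        self.balance = 0
        for event in events:
            self.apply(event)

    def apply(self, event):
        if event["type"] == "Deposited":
            self.balance += event["amount"]
        elif event["type"] == "Withdrawn":
            self.balance -= event["amount"]


event_store = []  # append-only log; appending one event is a single atomic write

def record(event):
    event_store.append(event)  # persist the event ...
    # ... and publish it here to subscribers (broker, materialized views)

record({"type": "Deposited", "amount": 200})
record({"type": "Withdrawn", "amount": 50})
record({"type": "Deposited", "amount": 25})

# Current state = replay of all events
assert Account(event_store).balance == 175
# Temporal query: state as it was after the first two events
assert Account(event_store[:2]).balance == 150
```

Because the append is a single operation, publishing the event and updating state cannot diverge, which is the atomicity property the slide highlights.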
15. Migration strategies of monolith to microservices
• We should resist the temptation to rewrite a monolith as microservices in a BIG BANG way. Martin Fowler reportedly commented that “the only thing a Big Bang rewrite guarantees is a Big Bang!”.
• One application modernization strategy is the Strangler Application: one gradually builds a new application consisting of microservices and runs it in conjunction with the monolithic application. Over time, the amount of functionality implemented by the monolithic application shrinks until either it disappears entirely or it becomes just another microservice.
• Here are a few strategies for doing this:
• We should stop digging when we are in a hole. That means when we are implementing new functionality, we should not add more code to the monolith. Instead, the big idea with this strategy is
to put that new code in a standalone microservice.
• There are two additional components to this strategy: a router, which receives requests and redirects legacy requests to the monolith and new-functionality requests to the newly added microservices, and glue code, which integrates the service with the monolith.
• A microservice can access the monolith’s data through one of the strategies listed below:
• Invoke a remote API provided by the monolith
• Access the monolith’s database directly
• Maintain its own copy of the data, which is synchronized with the monolith’s database
• The biggest benefit of the stop-digging strategy is that it prevents the monolith from becoming even more unmanageable. The service can be developed, deployed, and scaled independently of the monolith. However, this approach does nothing to address the problems within the monolith.
• To fix a monolith, the solution is to break it: break it into layers, such as front end, back end, and DB access. Another solution is to start extracting services out of the monolith. A good approach is to start with a few modules that are easy to extract; this gives us experience with microservices in general and the extraction process in particular. After that, we should extract the modules that will give us the greatest benefit.
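The router at the heart of the strangler strategy, sending already-migrated routes to the new microservices and everything else to the legacy monolith, can be sketched as follows (the path prefixes are hypothetical):

```python
# Prefixes that have already been extracted into standalone microservices.
MIGRATED_PREFIXES = ("/billing", "/notifications")

def route(path: str) -> str:
    """Strangler facade: migrated paths go to the new microservices,
    everything else falls through to the legacy monolith."""
    if path.startswith(MIGRATED_PREFIXES):  # str.startswith accepts a tuple
        return "microservices"
    return "monolith"

assert route("/billing/invoice/42") == "microservices"
assert route("/accounts/profile") == "monolith"
```

As more functionality is extracted, prefixes move into the migrated set until the monolith branch handles nothing at all.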